
Accelerating AI with open source machine learning infrastructure


The landscape of artificial intelligence is rapidly evolving, demanding robust and scalable infrastructure. To meet these challenges, we’ve developed a comprehensive reference architecture (RA) that leverages the power of open-source tools and cutting-edge hardware. This architecture, built on Canonical’s MicroK8s and Charmed Kubeflow, running on Dell PowerEdge R7525 servers, and accelerated by NVIDIA NIM microservices, provides a streamlined path for deploying and managing machine learning workloads.

Empowering data scientists and engineers

This solution is designed to empower data scientists and machine learning engineers, enabling them to iterate faster, scale seamlessly, and maintain robust security. For infrastructure builders, solution architects, DevOps engineers, and CTOs, this RA offers a clear path to advance AI initiatives while addressing the complexities of large-scale deployments.

At the heart of this architecture lies the synergy between Canonical and NVIDIA. Our collaboration ensures that the software stack, from Ubuntu Server and Ubuntu Pro to Charmed Kubeflow, is optimized for NVIDIA-Certified Systems. This integration delivers exceptional performance and reliability, allowing organizations to maximize their AI efficiency.

Dell PowerEdge R7525: the foundation for high-performance AI

The Dell PowerEdge R7525 server plays a crucial role in this architecture, providing the robust hardware foundation needed for demanding AI workloads. As a 2U rack server, it’s engineered for high-performance computing, virtualization, and data-intensive tasks. 

Featuring dual-socket AMD EPYC processors, the R7525 delivers exceptional scalability, advanced memory capabilities, and flexible storage options. This makes it ideal for AI and machine learning environments, where processing large datasets and complex models is essential. The R7525’s design ensures that organizations can virtualize traditional IT applications alongside transformative AI systems, providing a unified platform for diverse workloads.

Leveraging NVIDIA NIM and A100 GPUs

The architecture leverages NVIDIA NIM microservices, included with the NVIDIA AI Enterprise software platform, for secure and reliable AI model inferencing. This, combined with the power of NVIDIA A100 GPUs, provides the necessary computational muscle for demanding AI workloads. By deploying an LLM with NVIDIA NIM on Charmed Kubeflow, organizations can seamlessly transition from model development to production.
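Once a NIM microservice is running, applications talk to it over an OpenAI-compatible HTTP API. As a minimal sketch of what that client side looks like, the snippet below builds a chat-completion payload for a NIM endpoint; the endpoint URL and model identifier are assumptions for illustration and depend entirely on your deployment.

```python
import json

# Hypothetical values: NIM exposes an OpenAI-compatible API, but the
# host, port, and model id are specific to your deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama3-8b-instruct"

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# The resulting JSON body would be POSTed to NIM_URL with any HTTP client.
payload = build_chat_request("Summarize the benefits of MLOps in one sentence.")
print(json.dumps(payload, indent=2))
```

Because the interface is OpenAI-compatible, existing client libraries and tooling can typically be pointed at the NIM endpoint with only a base-URL change.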

Canonical’s open-source components

Canonical’s MicroK8s, a CNCF-certified Kubernetes distribution, provides a lightweight and efficient container orchestration platform. Charmed Kubeflow simplifies the deployment and management of AI workflows, offering an extensive ecosystem of tools and frameworks. This combination ensures a smooth and efficient machine learning lifecycle.
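As a rough sketch of how these pieces come together, the commands below follow Canonical's documented flow for installing MicroK8s and deploying Charmed Kubeflow with Juju; the MetalLB address range is an example and must match your network, and channel versions are deliberately left to defaults.

```shell
# Install MicroK8s (CNCF-certified Kubernetes) as a snap
sudo snap install microk8s --classic

# Enable the add-ons Charmed Kubeflow relies on; the MetalLB range
# below is only an example and must fit your local network
sudo microk8s enable dns hostpath-storage ingress
sudo microk8s enable metallb:10.64.140.43-10.64.140.49

# Bootstrap Juju on MicroK8s and deploy the Charmed Kubeflow bundle
sudo snap install juju --classic
juju bootstrap microk8s
juju add-model kubeflow
juju deploy kubeflow --trust

# Watch the deployment converge
juju status --watch 5s
```

The RA document itself specifies the exact hardware, software versions, and configuration to use; treat the above as an orientation, not a substitute for it.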

Key benefits of the architecture

The benefits of this architecture are numerous. Faster iteration, enhanced scalability, and robust security are just a few. The deep integrations between NVIDIA and Canonical ensure that the solution works seamlessly out of the box, with expedited bug fixes and prompt security updates. Moreover, the foundation of Ubuntu provides a secure and stable operating environment.

This reference architecture is more than just a blueprint: it’s a practical guide. The document includes hardware specifications, software versions, and a step-by-step tutorial for deploying an LLM with NIM. It also addresses cluster monitoring and management, providing a holistic view of the system.

Unlocking new opportunities

By leveraging the combined expertise of Canonical, Dell, and NVIDIA, organizations can unlock new opportunities in their respective domains. This solution enhances data analytics, optimizes decision-making processes, and revolutionizes customer experiences.

Get started today

This RA is a solid foundation for deploying AI workloads. With the combined expertise of Canonical, Dell, and NVIDIA behind it, organizations can confidently adopt this solution to drive innovation and accelerate AI adoption.

Ready to elevate your AI initiatives?

Download it now
