
Accelerating AI with open source machine learning infrastructure


The landscape of artificial intelligence is rapidly evolving, demanding robust and scalable infrastructure. To meet these challenges, we’ve developed a comprehensive reference architecture (RA) that leverages the power of open-source tools and cutting-edge hardware. This architecture, built on Canonical’s MicroK8s and Charmed Kubeflow, running on Dell PowerEdge R7525 servers, and accelerated by NVIDIA NIM microservices, provides a streamlined path for deploying and managing machine learning workloads.

Empowering data scientists and engineers

This solution is designed to empower data scientists and machine learning engineers, enabling them to iterate faster, scale seamlessly, and maintain robust security. For infrastructure builders, solution architects, DevOps engineers, and CTOs, this RA offers a clear path to advance AI initiatives while addressing the complexities of large-scale deployments.

At the heart of this architecture lies the synergy between Canonical and NVIDIA. Our collaboration ensures that the software stack, from Ubuntu Server and Ubuntu Pro to Charmed Kubeflow, is optimized for NVIDIA-Certified Systems. This integration delivers exceptional performance and reliability, allowing organizations to maximize their AI efficiency.

Dell PowerEdge R7525: the foundation for high-performance AI

The Dell PowerEdge R7525 server plays a crucial role in this architecture, providing the robust hardware foundation needed for demanding AI workloads. As a 2U rack server, it’s engineered for high-performance computing, virtualization, and data-intensive tasks. 

Featuring dual-socket AMD EPYC processors, the R7525 delivers exceptional scalability, advanced memory capabilities, and flexible storage options. This makes it ideal for AI and machine learning environments, where processing large datasets and complex models is essential. The R7525’s design ensures that organizations can virtualize traditional IT applications alongside transformative AI systems, providing a unified platform for diverse workloads.

Leveraging NVIDIA NIM and A100 GPUs

The architecture leverages NVIDIA NIM microservices, included with the NVIDIA AI Enterprise software platform, for secure and reliable AI model inferencing. This, combined with the power of NVIDIA A100 GPUs, provides the necessary computational muscle for demanding AI workloads. By deploying an LLM with NVIDIA NIM on Charmed Kubeflow, organizations can seamlessly transition from model development to production.
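As a minimal sketch of what serving an LLM with NIM looks like (the container image name, tag, and port here are illustrative; the RA's step-by-step tutorial is the authoritative reference, and an NGC API key from an NVIDIA AI Enterprise entitlement is required):

```shell
# Authenticate to the NVIDIA NGC container registry
export NGC_API_KEY="<your-ngc-api-key>"
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin

# Run an example LLM NIM container on a GPU node (image name is illustrative)
docker run -d --rm --gpus all \
  -e NGC_API_KEY \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama3-8b-instruct:latest

# NIM exposes an OpenAI-compatible API; query it once the model has loaded
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta/llama3-8b-instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the endpoint is OpenAI-compatible, existing client libraries and Kubeflow pipeline steps can target it without code changes once the service is exposed inside the cluster.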

Canonical’s open-source components

Canonical’s MicroK8s, a CNCF-certified Kubernetes distribution, provides a lightweight and efficient container orchestration platform. Charmed Kubeflow simplifies the deployment and management of AI workflows, offering an extensive ecosystem of tools and frameworks. This combination ensures a smooth and efficient machine learning lifecycle.
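A hedged sketch of the bootstrap path, based on Canonical's published installation flow (the snap channels and MetalLB address range below are illustrative; the RA documents the validated versions for this hardware):

```shell
# Install MicroK8s and enable the add-ons Charmed Kubeflow expects
sudo snap install microk8s --classic --channel=1.29/stable
sudo microk8s enable dns hostpath-storage metallb:10.64.140.43-10.64.140.49

# Install Juju and bootstrap a controller on the MicroK8s cloud
sudo snap install juju --classic
juju bootstrap microk8s
juju add-model kubeflow   # the model must be named "kubeflow"

# Deploy the Charmed Kubeflow bundle
juju deploy kubeflow --trust --channel=1.8/stable
```

Juju then reconciles the full Kubeflow bundle as a set of charms, so upgrades and configuration changes are handled declaratively rather than by hand-editing manifests.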


Key benefits of the architecture

The benefits of this architecture are numerous. Faster iteration, enhanced scalability, and robust security are just a few. The deep integrations between NVIDIA and Canonical ensure that the solution works seamlessly out of the box, with expedited bug fixes and prompt security updates. Moreover, the foundation of Ubuntu provides a secure and stable operating environment.

This reference architecture is more than just a blueprint: it’s a practical guide. The document includes hardware specifications, software versions, and a step-by-step tutorial for deploying an LLM with NIM. It also addresses cluster monitoring and management, providing a holistic view of the system.

Unlocking new opportunities

By leveraging the combined expertise of Canonical, Dell, and NVIDIA, organizations can unlock new opportunities in their respective domains. This solution enhances data analytics, optimizes decision-making processes, and revolutionizes customer experiences.

Get started today

This RA is a solid foundation for deploying AI workloads. With the combined expertise of Canonical, Dell, and NVIDIA behind it, organizations can confidently embrace this solution to drive innovation and accelerate AI adoption.

Ready to elevate your AI initiatives?

Download it now
