
Charmed Kubeflow 1.7 is now available

Run serverless ML workloads. Optimise models for deep learning. Expand your data science tooling. 

Canonical, the publisher of Ubuntu, today announced the general availability of Charmed Kubeflow 1.7. Charmed Kubeflow is an open-source, end-to-end MLOps platform that can run on any cloud, including hybrid-cloud and multi-cloud scenarios. The latest release adds the ability to run serverless machine learning workloads and to serve models regardless of the framework professionals use. By reducing routine tasks and freeing developers from having to describe the underlying infrastructure explicitly, these capabilities increase developer productivity and help organisations lower their operational costs.

Based on a poll run by Canonical, open source and ease of use are the most important factors professionals consider when selecting AI/ML tooling. Charmed Kubeflow 1.7 expands its spectrum of open-source frameworks and libraries and makes the model development and deployment process easier with a new set of capabilities.

Serverless workloads and new model serving capabilities

In a recent MLOps report by Deloitte AI Institute, 74% of respondents indicated that they plan to integrate artificial intelligence (AI) into all enterprise applications within three years. To achieve this, companies need to find ways to scale their AI projects in a reproducible, portable and reliable manner. Charmed Kubeflow 1.7 brings new capabilities for enterprise AI:

  • The introduction of Knative in the Kubeflow bundle allows organisations to run serverless machine learning workloads.
  • The addition of KServe enables users to perform model serving, regardless of the framework.
  • Support for new model-serving runtimes, such as NVIDIA Triton.
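To give a flavour of framework-agnostic serving, a KServe model is declared as a small Kubernetes manifest. The sketch below builds one as a plain Python dict; the model name, namespace and storage URI are hypothetical examples, and applying the manifest to a cluster (e.g. with `kubectl apply`) is left out.

```python
import json

# Minimal sketch of a KServe InferenceService manifest, built as a plain
# Python dict. The name, namespace and storageUri are hypothetical.
inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "sklearn-iris", "namespace": "kubeflow-user"},
    "spec": {
        "predictor": {
            # KServe selects a serving runtime from the declared model
            # format, so the same manifest shape works across frameworks.
            "model": {
                "modelFormat": {"name": "sklearn"},
                "storageUri": "gs://my-bucket/sklearn-iris",
            }
        }
    },
}

print(json.dumps(inference_service, indent=2))
```

Swapping `modelFormat` (e.g. to `tensorflow` or `pytorch`) and the storage URI is all that changes between frameworks, which is what makes the serving layer framework-agnostic.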

While observability features have been available in the product since last year, Charmed Kubeflow 1.7 comes with new dashboards for an improved user experience and easier infrastructure monitoring. More information about these capabilities can be found in Canonical’s recently published guide: Integrate with Observability Stack using COS.

More development tooling

Charmed Kubeflow 1.7 supports PaddlePaddle, an industrial platform with a rich set of features that help data scientists develop deep learning models. Deep learning is a subset of machine learning that uses neural networks modelled on the human brain. It requires a tremendous amount of computing power and very large volumes of data. PaddlePaddle addresses this challenge by enabling parallel, distributed deep learning.

Deep learning is gaining popularity, and PaddlePaddle itself has more than 1.9 million users. With the introduction of PaddlePaddle, Charmed Kubeflow expands its library of open-source frameworks and gives professionals the flexibility to choose what suits them best.

Improved model optimisation features

Data scientists spend a lot of time optimising their models and need to stay up to date with the latest AI advancements, frameworks and libraries. Katib addresses this by simplifying hyperparameter tuning and log access. Charmed Kubeflow’s Katib component has a new user interface (UI) that reduces the number of low-level commands needed to find appropriate correlations between logs. Furthermore, Katib includes new features such as the Tune API, which makes tuning experiments easy to build and simplifies how users access trial metrics from the Katib database.
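A Katib tuning experiment is defined declaratively: an objective, a search algorithm and a search space. The sketch below builds that portion of an Experiment spec as a plain Python dict; the metric name, parameter names and ranges are hypothetical examples.

```python
import json

# Minimal sketch of a Katib Experiment's objective and search space,
# built as a plain Python dict. Metric, parameters and ranges are
# hypothetical examples.
experiment_spec = {
    "objective": {
        "type": "maximize",
        "goal": 0.95,
        "objectiveMetricName": "validation-accuracy",
    },
    "algorithm": {"algorithmName": "random"},
    "maxTrialCount": 12,       # total trials to run
    "parallelTrialCount": 3,   # trials running at once
    "parameters": [
        {
            "name": "learning_rate",
            "parameterType": "double",
            "feasibleSpace": {"min": "0.001", "max": "0.1"},
        },
        {
            "name": "batch_size",
            "parameterType": "int",
            "feasibleSpace": {"min": "16", "max": "128"},
        },
    ],
}

print(json.dumps(experiment_spec, indent=2))
```

Katib then launches trials across the search space and records each trial's metrics, which is what the new UI and Tune API make easier to build and inspect.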

With these Katib enhancements, data scientists can reach better performance metrics, reduce time spent on optimisation and experiment quickly. This results in faster project delivery, shorter machine learning lifecycles and a smoother path to optimised decision-making with AI projects.

Other highlights in Charmed Kubeflow 1.7

Charmed Kubeflow 1.7 also supports statistical analysis to address a new category of professionals working with statistics. It can analyse both structured and unstructured data, providing access to packages such as R Shiny or libraries such as Plotly.   

Charmed Kubeflow also recently became NVIDIA DGX-software certified, accelerating at-scale deployments of AI and data science projects on the highest-performing hardware.
