Timely patching with Ubuntu Pro for fully secured MLOps
Generative AI projects like ChatGPT have motivated enterprises to rethink their AI strategy and make it a priority. In a report published by PwC, 72% of respondents said they were confident in the ROI of artificial intelligence. More than half of respondents also said that their AI projects are compliant with applicable regulations (57%) and protect systems from cyber attacks, threats or manipulation (55%).
Delivering production-grade AI is no easy task. Organisations need to go through several stages to prepare data, develop a model and deploy it, and reproducibility and portability are essential throughout. This is where machine learning operations (MLOps) can help.
What is MLOps?
Machine learning operations (MLOps) is a set of practices that aims to simplify workflows and automate machine learning and deep learning deployments. It enables teams to deploy and maintain models in production reliably, efficiently and at scale.
MLOps is slowly evolving into an independent approach to the machine learning lifecycle that covers every step, from data gathering to governance and monitoring. It will become a standard as artificial intelligence moves from an innovative activity to a part of everyday business.
MLOps plays a crucial role in aligning business demands and regulatory requirements. Its benefits include:
- Increased productivity
- Cost reduction
In a fast-changing landscape, many tools are available on the market to enable MLOps adoption, some of which are open source. Kubeflow, MLflow and Seldon are among the most popular options. Charmed Kubeflow is a production-grade, end-to-end MLOps platform that translates steps in the data science workflow into Kubernetes jobs. It is one of the official distributions of the upstream Kubeflow project. Using it, data scientists and machine learning engineers benefit from ML deployments that are simple, portable and scalable. Charmed Kubeflow covers a wide range of tasks, from experimentation in notebooks, to training with Kubeflow Pipelines, to hyperparameter tuning with Katib.
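For readers who want to see what "production-grade, end-to-end" looks like in practice, a minimal sketch of standing up Charmed Kubeflow on a single machine follows. The add-on list and address range are illustrative assumptions; check the current Charmed Kubeflow documentation for the exact, supported steps.

```shell
# Install MicroK8s (a small Kubernetes) and Juju, Canonical's operator tool
sudo snap install microk8s --classic
sudo snap install juju --classic

# Enable the MicroK8s add-ons Kubeflow typically needs
# (illustrative list and MetalLB address range)
sudo microk8s enable dns hostpath-storage ingress metallb:10.64.140.43-10.64.140.49

# Bootstrap a Juju controller on MicroK8s, then deploy the Kubeflow bundle
juju bootstrap microk8s
juju add-model kubeflow
juju deploy kubeflow --trust

# Watch the deployment until all charms report "active"
juju status --watch 5s
```

Once the bundle settles, the Kubeflow dashboard is reachable through the ingress address configured above, and the notebook, pipeline and Katib components described in this section are available from it.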
Charmed Kubeflow is a great companion for teams adopting the MLOps approach. MLOps brings together best practices to productise machine learning initiatives, with clear principles covering the data being used, the ML model and the code. As the market evolves, the need for stable, secure MLOps tooling becomes more evident. Charmed Kubeflow addresses this challenge and allows data scientists to focus on modelling.
However, improving productivity alone is not enough. Security is also high on the agenda, which is why at Canonical we have put careful thought into offering MLOps tooling that helps protect professionals from malicious attacks.
Are you looking to learn more about MLOps? Canonical’s guide is a great start.
Securing your MLOps platform
At the beginning of the year, PyTorch reported a security breach that affected PyTorch nightly, a version of the AI tool that contains new features still under development. While this particular tool had a limited user base due to its novelty, both attacks and vulnerabilities are becoming more common in the AI landscape. At the same time, over 25,000 CVEs were published in 2022 (source), a 20% increase from the previous year. This trend is becoming a real burden for enterprises: tracking vulnerabilities is challenging, but more importantly, patching them while ensuring that updated dependencies do not break systems is time-consuming.
At the same time, AI/ML initiatives usually have access to large volumes of highly sensitive data. Professionals need to ensure that both the environment and its artifacts are secure. They need to secure the data used within projects, the models, and the platform and the layers underneath it, at every stage. This is true for all MLOps tooling, including open-source solutions.
Managing open-source software and all of its dependencies securely is crucial for any enterprise. Organisations look for secure open-source MLOps platforms to both develop and deploy machine learning models, without compromising on any of their standards or industry requirements. As more organisations are both reconsidering their AI/ML strategies and adopting more open-source solutions, it is crucial that open-source libraries and AI/ML toolchains also come from a trusted source with assured long-term security maintenance and platform stability.
For organisations that want to run AI/ML at scale, Canonical offers Charmed Kubeflow and Ubuntu Pro.
What is Ubuntu Pro?
Ubuntu Pro, Canonical’s comprehensive subscription for secure open source and compliance, helps teams get timely CVE patches, harden their systems at scale and remain compliant with regimes such as FedRAMP, HIPAA and PCI-DSS.
The subscription expands Canonical’s ten-year security coverage and optional technical support to an additional 23,000 packages beyond the main Ubuntu operating system. It is ideal for organisations looking to improve their security posture, not just for the Main repository of Ubuntu, but for thousands of open-source packages and toolchains.
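In practice, enabling this expanded coverage on an Ubuntu machine is a matter of attaching a subscription token and switching on the relevant services. A hedged sketch follows; the token is a placeholder you obtain from your Ubuntu Pro account, and service availability varies by Ubuntu release.

```shell
# Attach this machine to your Ubuntu Pro subscription
# (<YOUR_PRO_TOKEN> is a placeholder from your Ubuntu Pro account)
sudo pro attach <YOUR_PRO_TOKEN>

# Enable Expanded Security Maintenance for packages beyond the Main repository
sudo pro enable esm-apps

# Review which services are active and whether any packages await patches
pro status
pro security-status
```

After `esm-apps` is enabled, routine `apt upgrade` runs pull in the expanded set of CVE patches alongside standard updates, so no separate patching workflow is needed.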
Securing ML workloads with Ubuntu Pro
Ubuntu Pro includes CVE patches for a wide range of images, including those specific to the Charmed Kubeflow bundle. It helps secure all the components of the MLOps platform, enabling professionals to focus on machine learning development and deployment.
By using one of the official distributions of the upstream project, data scientists and machine learning engineers benefit both from automated machine learning workflows and from security at every layer. With an aim to grow the MLOps ecosystem, Charmed Kubeflow integrates with various other AI- and data-specific platforms, including Kafka, Spark and MLflow. Ubuntu Pro covers the full stack, from infrastructure to the operating system and the application layer.
Ubuntu Pro is ideal for organisations that want to focus on innovation while remaining confident in ongoing security maintenance and dependency tracking. Canonical backports security fixes from newer versions of applications, giving data scientists, ML engineers and operational teams a path to long-term security with no forced upgrades. The result is a decade of open-source platform stability, code reproducibility, and peace of mind for those looking to adopt MLOps securely.