Operationalize the complete life cycle of modern AI applications at scale by using Red Hat OpenShift AI.
Developing and Deploying AI/ML Applications on Red Hat OpenShift AI (AI267) gives students the foundational knowledge to manage the complete life cycle of modern AI applications. The course builds core skills in using Red Hat OpenShift AI to efficiently train, test, deploy, and monitor both predictive and generative AI models at scale.
This course is based on Red Hat OpenShift® 4.18 and Red Hat OpenShift AI 2.25.
Introduction to Red Hat OpenShift AI
Describe how Red Hat OpenShift AI provides a complete MLOps and GenAIOps platform, and configure data science projects for team collaboration.
Using Workbenches for AI/ML Development
Use workbench environments for AI/ML development and connect them to data sources and stores.
Fundamentals of Model Serving
Prepare, deploy, and serve models by using OpenShift AI model serving capabilities.
Serving Generative and Predictive AI Models
Deploy and serve AI models with specific runtimes, including OpenVINO for predictive models and vLLM for large language models.
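OpenShift AI's single-model serving is built on KServe, so a deployment typically reduces to an InferenceService resource that names a runtime and a model location. The following is a minimal illustrative sketch, not a resource from the course: the metadata name, runtime name, and storage URI are placeholders you would replace with values from your own cluster.

```yaml
# Hypothetical example: resource name, runtime, and storageUri are placeholders.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-llm
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM          # illustrative model format for an LLM runtime
      runtime: example-vllm-runtime
      storageUri: s3://example-bucket/models/example-llm
      resources:
        limits:
          nvidia.com/gpu: "1"   # LLM runtimes such as vLLM generally need a GPU
```

A predictive model served with OpenVINO would follow the same shape with a different `modelFormat` and runtime.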
Monitoring AI Models
Monitor deployed models for bias, data drift, and performance degradation by using TrustyAI and observability tools to ensure reliable and ethical AI behavior in production.
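To make the drift idea concrete, here is a generic sketch (not the TrustyAI API) of one common approach: compare a live sample of a feature against its training baseline and flag the model when the live mean shifts too far, measured in baseline standard deviations. The function names and the threshold are illustrative.

```python
# Illustrative data-drift check (not TrustyAI's API): flag drift when the
# live mean of a feature moves too far from the training baseline.
from statistics import mean, stdev

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Absolute shift of the live mean, in baseline standard deviations."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

def has_drifted(baseline: list[float], live: list[float],
                threshold: float = 3.0) -> bool:
    """Return True when the shift exceeds the chosen threshold."""
    return drift_score(baseline, live) > threshold
```

Real monitoring stacks use richer statistics (for example, per-feature distribution tests) and alerting, but the pattern is the same: a fixed baseline, a sliding window of production data, and a threshold.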
Introduction to Data Science Pipelines
Create and manage basic data science pipelines by using Elyra and the Kubeflow Pipelines SDK to automate fundamental AI/ML workflows.
Advanced Kubeflow Pipelines Development and Experiments
Implement advanced pipeline features, including container components, artifact management, Kubernetes configuration, and systematic experimentation, for production MLOps workflows.
GenAI Model Selection, Optimization, and Evaluation
Systematically select, optimize, and evaluate large language models by using the Red Hat OpenShift AI model catalog, compression techniques, and evaluation frameworks.
Building GenAI Applications
Build production-ready GenAI applications by using industry patterns including RAG, agentic workflows, and trustworthy AI practices, moving beyond basic model serving to deliver complete intelligent solutions.
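The core of the RAG pattern named above can be sketched in a few lines: retrieve the documents most relevant to a question, then ground the model's prompt in that context. This toy version ranks documents by keyword overlap; a real application on OpenShift AI would use vector embeddings and a served LLM, and every name here is illustrative.

```python
# Toy RAG sketch: keyword-overlap retrieval plus a grounded prompt.
# Real systems use vector embeddings and an LLM endpoint instead.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many query words they share."""
    q = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The resulting prompt string is what would be sent to a model-serving endpoint; agentic workflows extend this loop with tool calls between retrieval and generation.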