Course description
An introduction to operationalizing AI/ML with Red Hat AI
This free technical overview surveys the complex AI landscape and the significant enterprise challenges of bringing AI applications into production. It details how the Red Hat AI portfolio provides a unified, consistent, enterprise-ready AI platform, built on OpenShift and deployable anywhere across the hybrid cloud. The platform accelerates model development and delivery with high-performance inference through vLLM and llm-d, model customization via retrieval-augmented generation (RAG) and fine-tuning, and pipelines and observability to govern and automate the entire model lifecycle.
Course summary
- Challenges of operationalizing AI
- Red Hat AI portfolio value
- Inference
- Connecting models to data
- Agentic AI
- Scaling AI in the enterprise
- Demos of each major feature area
Outline for this course
- The challenges of operationalizing AI
- The value of Red Hat AI
- Demo: Introduction to Red Hat AI platform
- Fast, flexible and scalable inference
- Benchmarking and evaluating models
- Distributed inference and models-as-a-service
- Demo: Inference
- Connecting models to data
- Model customization tools
- Distributed training
- AI pipelines
- Demo: Model customization
- Agentic AI
- Demo: Agentic AI
- Scaling AI across the hybrid cloud
- Managing resources
- Safety, monitoring and observability
- Demo: Guardrails and observability
Audience for this course
- Data scientists and AI practitioners who want to build and train ML models
- Developers who want to build and integrate agentic AI applications
- Platform engineers responsible for installing, configuring, deploying, and monitoring AI applications at scale
Recommended training
Technology considerations