Thursday, 23rd January 2025
13:15 – 14:00
Mélony Qin
Microsoft
Deploying AI-enabled applications typically involves an application or microservice interacting with an LLM inference endpoint. A microservices architecture and a cloud-native approach are ideal for hosting your intelligent apps. This session demonstrates how you can use Kubernetes and cloud-native tools to reduce the operational overhead of building and running intelligent apps.
Mélony is a technology specialist at Microsoft, a CNCF Ambassador, and a member of the current Kubernetes release team (SIG Release). She is the author of four books, including Azure Integration Guide for Business, The Kubernetes Workshop, Certified Kubernetes Administrator (CKA) Exam Guide, and Microsoft Azure Infrastructure, all published by Packt Publishing.
Mélony is passionate about cloud-native and AI technologies. She runs the Cloud Native Innovators newsletter, blogs at cvisiona.com, and publishes videos on her YouTube channel, CVisiona. She is committed to helping customers and partners worldwide succeed in the cloud-native and AI space.
Register now!