Speaker: Dr. Adam Grzywaczewski, Senior Deep Learning Data Scientist at NVIDIA
Learn how to use GPUs to deploy machine learning models at production scale with the Triton Inference Server. At scale, machine learning models can interact with millions of users a day. As usage grows, costs in both money and engineering time can prevent models from reaching their full potential. Challenges like these inspired the creation of Machine Learning Operations (MLOps).

Practice Machine Learning Operations by:
- Deploying neural networks from a variety of frameworks onto a live Triton Server
- Measuring GPU usage and other metrics with Prometheus
- Sending asynchronous requests to maximize throughput

Upon completion, learners will be able to deploy their own machine learning models on a GPU server.
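To give a flavor of the asynchronous-requests idea above, here is a minimal Python sketch: instead of waiting for each inference to finish before sending the next, many requests are kept in flight at once so their latencies overlap. The `fake_infer` function is a hypothetical stand-in for a real client call (Triton's Python clients expose asynchronous inference methods); no live Triton server is assumed here.

```python
# Sketch of overlapping in-flight inference requests to raise throughput.
# `fake_infer` is a hypothetical stand-in for a real inference client call;
# it only simulates request latency.
from concurrent.futures import ThreadPoolExecutor
import time


def fake_infer(request_id: int) -> str:
    time.sleep(0.05)  # simulate network + GPU latency
    return f"result-{request_id}"


def run_async(n_requests: int = 8) -> list:
    # Dispatch all requests at once; their latencies overlap, so total
    # wall time approaches the latency of one request, not the sum.
    with ThreadPoolExecutor(max_workers=n_requests) as pool:
        futures = [pool.submit(fake_infer, i) for i in range(n_requests)]
        return [f.result() for f in futures]


if __name__ == "__main__":
    start = time.perf_counter()
    results = run_async()
    print(f"{len(results)} responses in {time.perf_counter() - start:.2f}s")
```

With serial calls, 8 requests at 50 ms each would take roughly 0.4 s; dispatched concurrently they complete in close to 0.05 s, which is the throughput gain the workshop topic refers to.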
Registration: inferencetransform.splashthat.com
The workshop takes place as part of DevRain Transform 2022, a free online conference.