Tutorial by Sergiy Matusevych: “Model Compression for Deep Learning” (archived event)

Date: 31 August (Wednesday)
Time: 18:00
Place: online
Price: all donations go to the Come Back Alive fund

Sergiy Matusevych is a Principal Data and Applied Scientist on the Microsoft Azure research team, working on new scalable distributed algorithms for ML. His role is typically a combination of machine learning researcher, polyglot developer, data scientist, and mentor. Sergiy has a long track record of building high-performance, scalable systems for big data and implementing distributed machine learning algorithms.

During this session, we will discuss why and when to compress ML models, survey the major model compression techniques and best practices, and review state-of-the-art approaches. We will focus on pruning and quantization, but will also cover other techniques such as knowledge distillation, deep mutual learning, and architecture search.
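To give a flavour of the two techniques the session focuses on, here is a minimal NumPy sketch (not from the tutorial itself) of unstructured magnitude pruning and symmetric int8 post-training quantization applied to a weight matrix. The function names and the example weights are illustrative assumptions:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to drop
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def quantize_int8(weights):
    """Symmetric linear quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float32 values."""
    return q.astype(np.float32) * scale

# Toy weight matrix (illustrative values only)
w = np.array([[0.9, -0.05, 0.3],
              [-0.7, 0.02, -0.4]], dtype=np.float32)

pruned = magnitude_prune(w, sparsity=0.5)   # half the weights zeroed
q, scale = quantize_int8(pruned)            # 4x smaller storage per weight
restored = dequantize(q, scale)             # approximation used at inference
```

In practice pruning is followed by fine-tuning to recover accuracy, and quantization is usually done per-channel with a calibration set, but the core idea is the same: trade a small amount of accuracy for a much smaller and faster model.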

This is an introductory-level tutorial for all ML practitioners interested in optimizing their models in production.

Join the lecture by registering and making a donation of any size here: aiforukraine.aihouse.club

