Assistive music AI (archived event)

Took place
31 August (Wednesday)

Developers Shore is announcing a webinar: Assistive music AI

Systems for automatic music transcription are an established topic in music informatics because they can be of help in many situations. A typical use case is when a musician performs a piece, and a computing system recognizes the played notes and writes them down in musical notation and/or responds with feedback on the performance.
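As a rough illustration of the transcription step described above (a toy sketch, not the speaker's patented system), a monophonic pitch detector can be built from a simple autocorrelation followed by a nearest-note lookup. All function names and parameters below are assumptions for illustration only:

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def detect_pitch(signal, sr, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency by picking the autocorrelation peak
    within a plausible lag range (fmax down to fmin)."""
    sig = signal - np.mean(signal)
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sr / fmax)
    lag_max = int(sr / fmin)
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sr / lag

def freq_to_note(freq):
    """Map a frequency to the nearest equal-tempered note name (A4 = 440 Hz)."""
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# Synthesize 100 ms of a 440 Hz tone and transcribe it to a note name.
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
f0 = detect_pitch(tone, sr)
print(freq_to_note(f0))
```

A real transcription system would add onset detection, polyphony handling, and noise robustness; this only shows the core audio-to-note mapping.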

There are also use cases where the recognized notes are used to generate new musical elements, such as a countermelody or a chord progression. This is of particular interest in human-machine interactive live composition, which is the focus of my company, Algoriffix AB.
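For instance (a toy illustration, not Algoriffix's approach), recognized melody notes can be harmonized by stacking diatonic thirds within a scale. The C-major scale and the sample melody below are assumptions for the example:

```python
SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # C-major scale degrees

def diatonic_triad(note):
    """Build a triad on a scale degree by stacking two diatonic thirds."""
    d = SCALE.index(note)
    return [SCALE[(d + 2 * k) % 7] for k in range(3)]

# Hypothetical output of a transcription stage, harmonized note by note.
melody = ["C", "E", "G", "E"]
chords = [diatonic_triad(n) for n in melody]
print(chords)
```

A live-composition system would of course weigh harmonic context and timing rather than harmonize each note in isolation.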

In this webinar, I will present a method and a system for recognizing patterns of basic elements in sound, tonal sounds in particular, such as music or chant. In the case of music, these basic elements are simply notes, which may sound simultaneously. The system has a low algorithmic delay, is lightweight, and can be tuned to various settings without the need for data. Owing to its novelty, the method was recently granted a patent.

🎙️Speaker: Dr. Stanislaw Gorlow (Founder and interim CEO/CTO at Algoriffix AB)
📣 Language — English
⏰ August 31, 13.00 (Kyiv time)
👉 Registration


Recording of the Assistive music AI webinar:
