Related work: Lui, S., Horner, A., and So, C. 2010. “Re-targeting expressive musical style from classical music recordings using a support vector machine”, Journal of the Audio Engineering Society (JAES), Vol. 58, No. 12, pp. 1032-1044.
We propose a method for re-targeting musical style from audio recordings to MIDI. First, the audio file is segmented into phrases according to cadence, pitch pattern, and local energy. These phrases are then used to train a support vector machine (SVM) that learns style parameters including dynamics, tempo, and articulation. The extracted performance style is applied to a raw MIDI note list to make it expressive. Experiments show that our method reproduces a performer's style with a high correlation to real performances.
In this work, we propose to extract the expressive performance style of a famous violinist from a solo CD recording, and then to retarget that style onto a plain MIDI file to produce an expressive performance.
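The SVM training step above maps per-phrase features to style classes. As a hedged illustration only (the paper does not publish its feature set or kernel), the sketch below trains a minimal linear soft-margin SVM with Pegasos-style sub-gradient updates on invented two-dimensional phrase features (pitch slope, energy slope), classifying phrases into hypothetical dynamic-style classes such as crescendo (+1) vs. decrescendo (-1). All names and data here are assumptions, not the authors' implementation.

```python
import random

def train_svm(samples, labels, lam=0.01, epochs=200, seed=0):
    """Train a linear soft-margin SVM via Pegasos sub-gradient descent.

    samples: list of feature tuples (one per phrase; features are invented
             stand-ins for the paper's cadence/pitch/energy descriptors).
    labels:  +1 / -1 dynamic-style class per phrase (hypothetical).
    Returns a weight vector w and bias b.
    """
    rng = random.Random(seed)
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(samples)), len(samples)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying Pegasos step size
            x, y = samples[i], labels[i]
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            # L2-regularizer shrinkage on every step
            w = [wj * (1.0 - eta * lam) for wj in w]
            # hinge-loss sub-gradient only when the margin is violated
            if margin < 1:
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    """Classify one phrase feature vector into a style class."""
    score = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Toy phrase features (pitch slope, energy slope); +1 = crescendo style.
X = [(0.8, 0.9), (0.7, 0.6), (-0.6, -0.8), (-0.9, -0.5)]
y = [1, 1, -1, -1]
w, b = train_svm(X, y)
print([predict(w, b, xi) for xi in X])
```

In a full pipeline, the predicted style class (or regressed style parameters) for each phrase would then be written back onto the corresponding MIDI notes as velocity and timing adjustments.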
Figure 1. Expressive style cluster of a performer.
Figure 2. Dynamics (top) and pitch (bottom). Certain pitch patterns reliably produce certain dynamic patterns (we call this the dynamic style).