Arts - September 21, 2022

Artificial Intelligence and Music: Can Machines Produce Music?

Artificial intelligence is a field that has continued to develop since its earliest days and has entered almost every area of our lives. By learning from data and improving itself, it imitates human intelligence and helps us in many areas, such as smart assistants, industrial robots, and healthcare. Alan Turing’s question, “Can machines think?”, has carried through to the present day. Can it be extended to more abstract fields such as art, just as it has been applied in so many other fields? Can an artificial intelligence produce music just as a human does, the kind of music that fills our daily lives? In this article, we will discuss the potential of artificial intelligence in music production.

First of all, we can reframe our main question from “Can machines think?” to “Can machines be creative?”, because we want to bring the creativity characteristic of humans into machines. However, we can teach this creativity to machines only through the data we give them. So how can we transform the music produced by humans into data and teach it to the machine? Converting music, which is even older than today’s languages, into digital data has been a process that progressed over time. Music first emerged from nature, and musical instruments followed. In an increasingly digital world, musical instruments and vocal sounds have also been digitized, and these digitized sounds provide us with numerical data. Thanks to this numerical data, music can be represented mathematically and treated as an algorithm. The fact that mathematics and music are closely related has lit the way for producing music with machines. The harmony that emerges when the digits of the number pi are matched with piano notes and played is an example of this (Song from π!).
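The pi-to-piano idea above can be sketched in a few lines. This is a minimal illustration, not the exact mapping used in the linked video: here each decimal digit is assumed to index a note of the C major scale.

```python
# A minimal sketch of mapping the digits of pi to piano notes.
# The digit-to-note table is an assumption for illustration only.
PI_DIGITS = "31415926535897932384"  # first 20 decimal digits of pi

# Digits 0-9 mapped onto C major scale notes (C4 up to E5).
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B", "C5", "D5", "E5"]

def digits_to_notes(digits):
    """Map each decimal digit to a scale note (digit 0 -> C, digit 9 -> E5)."""
    return [C_MAJOR[int(d)] for d in digits]

melody = digits_to_notes(PI_DIGITS)
print(melody[:5])  # ['F', 'D', 'G', 'D', 'A']
```

Playing the resulting note list on a piano (or rendering it to MIDI) reproduces the kind of "pi melody" the video demonstrates.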

In the field of music generation, early studies were built on the Markov chain, a model that is still used in many fields today. A Markov chain is a stochastic model in which the next state of a sequence depends only on the current state. However, since the notes of music produced with this model depend directly on the notes of the given sample, it is difficult to produce genuinely new music this way. With the introduction of deep learning into our lives, it became possible to create new music: instead of probabilistic predictions tied only to the previous step, as in a Markov chain, the model learns an abstract representation of each note transition from all of the given data. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks, which emerged after plain neural networks, give the model a structure that remembers its earlier steps, adding a memory feature. Using these networks for music generation therefore increases the chance of better results. With TensorFlow, one of Google’s popular and important deep learning libraries, a piece of music can be given to the machine as a MIDI file, and its numerical information used to train a model that predicts the notes likely to follow. With such a model, the music can be extended with newly generated notes in a way that does not disturb its harmony (TensorFlow Music Generation – Demo).
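The Markov-chain approach described above can be sketched concretely: count which note follows which in a training melody, then sample a new melody where each note depends only on the previous one. The note names and training melody below are made-up illustrations, and the limitation mentioned in the text is visible in the code: the model can only ever emit transitions it has already seen.

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Count note-to-note transitions observed in a training melody."""
    transitions = defaultdict(list)
    for cur, nxt in zip(notes, notes[1:]):
        transitions[cur].append(nxt)
    return transitions

def generate(transitions, start, length, seed=None):
    """Sample a melody where each note depends only on the previous note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:  # dead end: this note was never followed by anything
            break
        melody.append(rng.choice(choices))
    return melody

# Toy training melody (an assumption for illustration).
training = ["C", "E", "G", "E", "C", "E", "G", "C"]
model = train_markov(training)
print(generate(model, "C", 8, seed=0))
```

Because `generate` only looks one note back, every output is a reshuffling of transitions from the training sample, which is exactly why deeper models such as LSTMs, with memory of the longer context, produce more novel results.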

In conclusion, music has been produced by humans throughout history, and with developing artificial intelligence technology it has become possible for machines to produce it as well. However, one shortcoming of this method, which is based on numerical data rather than on human emotion, is that the generated music lacks the feelings its maker would want it to carry. Whether it will become possible to add emotion to generated music is something we will all witness in the coming days. But one thing is certain: artificial intelligence is only at the beginning of its journey in music production, and this field may reach very different levels in the future.

Evren Çetinkaya


I am a senior-year student at the ITU Computer Engineering Department. I work on Machine Learning, Deep Learning, and Computer Vision. I am interested in Aerospace, the Metaverse, and Virtual Reality. I like following up-to-date scientific and technological research.
