Human and AI-Generated Music: Indiscernible or Uncanny Valley?

Artificial Intelligence (AI) technology has advanced to the point where AI systems can create new musical compositions and songs. As AI becomes more prevalent in the music-making process, both artists and listeners must address the looming question: Do humans or machines make better music?

Before we dig into this question, we must first acknowledge that musical taste, like any art, is subjective. There's certainly no way to reach a consensus on which human artists create the best music, so it's to be expected that there will be equally diverse viewpoints in comparing AI- and human-generated sounds. However, there are some important aspects of AI-based music to consider when framing this question, and the question itself could have significant implications for all aspects of the music industry.

AI Music: The Learning Process

As of this writing, few AI systems designed for music production can truly create a new song out of thin air. Many instead take user input to adjust existing pitches or beats, which can result in music that sounds similar to songs produced by users of other systems. Two systems that can create genuinely new music are Google's MusicLM and Jukebox, from OpenAI, the creator of ChatGPT. Neither had been released to the public as of May 2023.

Apart from thorny copyright questions, the main reason these tools are not widely available is the quality of the music they currently produce. In songs created on these and similar platforms, listeners have noted strange sounds, uneven mixes, muddled audio, and odd hybrids of different genres and styles.

This is to be expected with AI-generated music. Many of these systems use machine or deep learning to generate new music based on detailed analysis of countless pre-existing musical examples. As an AI system generates new songs, they are evaluated against the rules and patterns the system has learned, and its output is expected to improve over time. One reason AI-based music sounds inferior to human-made music, then, is simply that these systems haven't yet had time to figure out how to make compelling songs.

Research Suggests AI-Based Music Isn't There Yet

A 2023 study by researchers at the University of York attempted to determine whether deep or shallow learning methods could be used to generate better-rated music compared to human-composed music. The researchers recruited 50 participants with "relatively high musical literacy," who each rated computer- and human-generated music on dimensions including stylistic success, aesthetic pleasure, repetition/self-reference, melody, harmony, and rhythm. The works were all in the classical style and included string quartets and piano improvisations.

The results of the study show not only that human-composed music was rated more favorably than AI-generated music, but also that the strongest deep learning method performed no better than shallower methods. This latter point suggests that deep learning may not be the key to ultimate success with AI-generated music.

Can Listeners Tell the Difference?

Another critical question in the debate over whether AI- or human-generated music is better is how well listeners can tell the difference between the two. In the aforementioned study, listeners were often able to distinguish the two based on the parameters used in the evaluation. Other listeners point to a subtle flatness and lack of nuance in AI-generated music.

Still, some studies show that in certain cases it may be difficult to distinguish between the two. One study, conducted by the team behind AI music generator Amper and audio research firm Veritonic, asked participants to tell the difference between AI-generated music, human-generated music, and stock music. On average, participants could not reliably tell the difference. Customer service platform Tidio conducted a 2022 survey on AI- versus human-created art and found that respondents considered music one of the most difficult categories in which to differentiate between machines and humans. Participants tended to attribute songs to the AI that they felt were "too good" or "too complicated," suggesting that they doubted the abilities of human musicians.

Key Takeaways

As machine-generated music becomes more and more common, listeners and artists must consider the question of who (or what) creates the best music.
As of mid-2023, most artificial intelligence (AI) systems designed to make music cannot yet create new songs from scratch. Instead, many take input from users or from databases of music samples to create new works.
Two AI tools that can create new songs from scratch are Google's MusicLM and OpenAI's Jukebox.
Both MusicLM and Jukebox have been criticized for creating music that sounds disjointed, disorganized, and generally inferior to human-made works in some examples.
Music AI tools are expected to improve over time at creating music that listeners enjoy.
In a 2023 study by researchers at the University of York, participants generally rated human-composed classical music higher than machine-generated music on dimensions including stylistic success, aesthetic appeal, melody, and more.
Still, some studies show that many listeners can't tell the difference between computer-generated music and human-composed music.
