
[Co.Design] Can Computers Write Music That Has A Soul?


One day, you bump into your favorite author walking down the street. His works have made an indelible imprint on you, expressing sentiments and feelings about the world that you've always had but never quite known how to express. So you grab him by the crook of his elbow and make all the usual gushing and gurgling noises that fans make when they meet their favorite authors. You tell him how deeply his works speak to you, how amazing his plots are. How does he write such beautiful, haunting prose?

But your favorite author doesn't brush you aside by claiming some inner muse. Instead, he confides in you. "The truth is, I wrote a computer program that allows me to algorithmically generate entire novels in any style I want!" he excitedly explains. He tells you that your favorite novel was, in fact, written by this algorithm: He just told it to analyze his favorite authors, told his computer about the characters, setting, and themes he wanted to use, and then asked it to output a novel he could call his own.

Not only that, your favorite author says any novel could be written this way. "I could tell my program to analyze the works of Vladimir Nabokov for style, Dan Brown for plot, use the complete cast of Scooby-Doo for characters, and the themes of James Joyce's Ulysses, and my algorithm could generate a thousand different unique novels in just a few days!" he explains. "All you need to do is know how to tell my algorithm what all those things mean."

Novels, of course, are not written this way, at least not yet. If they were, you'd likely feel betrayed. But music is, and more so every day. The future of music, in fact, may largely be written by algorithm, and algorithms could even be used to engineer the next big hits. But how does that impact our understanding of music as an authentic form of art?

THE PROCESS

"Ninety-nine percent of the time, when I’m composing, I’m

actually naming things."

Slightly lupine and with the quiet nervous energy of the programmer and musical composer alike, 29-year-old Josiah Oberholtzer lives in front of a computer. Look over his slight shoulder, past the laboratory of Chemexes that sustain him, and Oberholtzer's computer screen looks like any hacker's. But to Oberholtzer, the code he writes in Vim is analogous to any other composer's score sheet, largely thanks to Abjad, a Python library Oberholtzer co-created that allows him to generate human-readable scores through the music engraving program LilyPond.

A grad student pursuing his doctorate in composition at Harvard University, Oberholtzer applies the techniques of electronic music to compose works meant to be played by human orchestras. Instead of just stringing together note after note, Oberholtzer uses a series of custom tools to translate a nebulous musical intention into a human-readable score. He does this by trying to define in words what the finished piece will sound like.

"When I set out to compose, I start out by describing what I’m

trying to do in prose." Oberholtzer tells Co.Design. "It starts out

abstract and poetic, but it gets more concrete over time. Some

things are easy to describe, like harmonies, but others are harder,

and I’ll have to come up with my own methods to describe them. So

that sound that gets made when 20 violins all sort of slap down

together? I’ll try to describe it: How long it is, how thick it is,

and so on."

MY COMPUTER ISN'T WRITING MY MUSIC FOR ME. IT'S JUST HANDLING THE VERSION CONTROL.

Over time, Oberholtzer establishes a taxonomy of musical meaning that a computer can understand. It's the DNA of his own musical spirit, written in code: what he likes, what he wants his music to sound like, what different melodies and harmonies mean to him, what he thinks people should feel when they hear a piece. And having teased out, coil by coil, the strand of his intentions, Oberholtzer can use a series of custom tools to help him actualize what he wants to do.

"When I want to capture some new music concept or idea, I’ll

usually write a tool first, then think about it a lot and work it

into a piece," says Oberholtzer. "These tools are kind of like

meta-instruments, and I can even write tools on top of tools,

giving me a wider palette."

For Oberholtzer, this seems like a perfectly natural way to write music. "All art is a kind of curatorship. You work through all these possibilities mentally, and then in the end, you try to reproduce the one you've decided upon. There's no difference for me. My computer isn't writing my music for me. It's just handling the version control."

THE PIONEER

To many, there's something affronting about composers using algorithms to write their music, but the technique is as old as the hills, a fact that David Cope is quick to remind you of.

"Mozart and Bach wrote music using algorithms," says Cope, a

73-year-old pioneer in the field of generative music composition.

"They didn’t have computers, but they used a system to randomly

generate music from a number of pre-composed sections, just by

rolling dice."

Rolling dice is an algorithm, albeit a simple one. And algorithms, to Cope, are as natural as breathing. "For most of my life, I've felt that we, as humans, are walking algorithms," explains Cope. "The way we blink, think, move, and tie our shoes... all of that's decided by algorithms in our DNA."

There was a time when Cope composed music more traditionally. But then writer's block--or in this case, composer's block--struck. "Around 1980, I had this commission for around $5,000 to write a new piece. I hadn't written a single note of music in five years, but I took it anyway, because I had four children to support. I had already spent the money. Now it was time to actually deliver the composition and I was stuck."

With the deadline looming, Cope's panicked eyes fell upon an old terminal in his garage, linked to his university's mainframe. "It wasn't even a computer," Cope laughs. "It was just a TV set and a modem without any guts in it. There was no computer in it at all." But the terminal gave Cope an idea. He was already used to using algorithms to help him compose, albeit simpler, analog ones. What if he wrote a computer program to help him get over the hump?

"I’d taught Bach in music theory courses for decades, so what

I decided to do was input everything I knew about Bach into a

database," remembers Cope. "For every note of Bach I put in, I had

to record five different parameters. Once I was done, though, I was

able to program an analysis tool that could examine the collected

work of Bach for its salient features, and then produce entirely

new music in that style."

But Cope's system didn't just work for Bach. It worked for Cope. With his tools, Cope could actually input his existing body of work into a database and come up with entirely new "Cope originals." "It ended up being quite the windfall," laughs Cope. Not only that, but he was able to generate an entire series of compositions in which a computer modeled classic composers, like Bach, Mozart, and Rachmaninov. Oftentimes, even trained ears have a hard time distinguishing these compositions from real "lost" works.

"Early on, I did a lot of blind tests of my synthesized works

versus real ones," Cope recalls. "Eventually, I stopped, because

people got so mad I feared for my safety." Once, Cope’s latest

composition was given a scaling review, in which the reviewer said

that the composition was obviously computer generated. Later, that

same reviewer came to Cope after another performance. "He said that

the two compositions were night and day, because the latter had

'soul.'"

It was the exact same piece.

THE BUSINESS

Using computer algorithms to compose music requires a way of explaining how music relates to human beings in a language a computer can understand. That isn't the future, or even the outlier of the present. We already live in an age when even the computers in our pants pockets know how to do this. Load up Spotify or Rdio on your iPhone and, behind the music, you're loading up an engine that knows how to examine the music you listen to and recommend new songs to you based upon what you like. That engine is built by the Echo Nest, a music intelligence company that uses computers to 'listen' to music, analyze it, and then contextualize what it's hearing by crawling the web for human reviews.

Every single day, the Echo Nest recommends tens of millions of songs to people based upon algorithms. In all likelihood, you have listened to a song that a computer in the server room of the company's Somerville, Massachusetts, headquarters has told you you will like. What you might not know is that the same techniques used by composers like Oberholtzer and Cope to generate entirely new musical works are also being used by your favorite streaming music app to recommend your next jam.

It's a multimillion-dollar business, and growing more lucrative every day. But back in 2005, the Echo Nest's core business was built on the back of a Ph.D. thesis by co-founder Tristan Jehan, not on how to recommend music to customers in the digital age, but on how to generate entirely new musical works just by listening.

The difference between Oberholtzer and Cope's techniques and those of the Echo Nest comes down to approach. "The way we did it was very different from the way some composers do it," Jehan tells Co.Design. "We came up with a system where a computer could just listen to existing audio and figure out all the rhythms, harmonies, tempos, and beats. We didn't examine existing works and try to tell our computer what it all meant. We taught the machine to listen to music and derive the meaning for itself."

Once Jehan's algorithm had listened to enough music by an artist, it could then synthesize new works in that artist's style. "The way it worked was by finding the closest sounds in a huge library of audio examples of an artist's works," recalls Jehan. "Let's take James Brown. We could use James Brown's existing sound library and recombine them intelligently in such a way that it would produce an entirely new work, not based on Brown's intention or what he wrote down, but based just on what James Brown's music actually sounds like."

COMPUTERS AND ALGORITHMS DON'T USURP MY AGENCY AS AN ARTIST. THEY EMPOWER IT.

Jehan eventually teamed up with fellow MIT alum Brian Whitman, who was working on the other half of the problem--teaching computers what music means to us--and the Echo Nest was born. But it almost became something more than a recommendation engine.

"Early on, when we started the Echo Nest, some people wanted

to use our technology to help them come up with algorithms that

could come up with the next big hit," says Jehan. "We could do it,

but we find that to be evil in some way. It would help publishers,

but what we really want to do is help people find the music they

like, not strip music down to a single formula of success."

Music. Composed by algorithms. Programmed for big hits. Even music publishers are interested. Could this be the future of music? And would anything be lost if it were?

THE AUTHENTIC

All of this returns us to the question of authenticity. Is music that a computer or algorithm has helped to write as "authentic" as music that has been written in a more traditional way?

"If you write music with a computer, there’s this perception

from people that somehow authenticity has been removed," says

Oberholtzer. "What people are really saying here is that computers

are magic. That’s not true. I know how computers and algorithms

work. They are my tools. When people talk about how computers

remove authenticity, or soul, or the human touch, what they are

talking about is agency: an artist’s ability as a human to make

creative decisions based upon what they want to express. But

computers and algorithms don’t usurp my agency as an artist. They

empower it."

AT LEAST IN PART, MUSIC HAS ALWAYS BEEN WRITTEN BY ALGORITHM.

Echo Nest co-founder Tristan Jehan agrees. "Electronic, rap, hip-hop. There's all kinds of music that couldn't exist if the technology to do it wasn't there. Machine listening and automatic composition are just tools for humans to use, and a human always needs to program them. The machine will never replace the artist. There's always a creative process."

None of this is new, as David Cope is eager to stress. "At least in part, music has always been written by algorithm," he says. "All computers do is allow the algorithms we write to be more exquisite and more sophisticated. I can't think of anything we can't do in the future with them."

Throughout history, every artist has ultimately faced the same task: To examine the creative murk inside themselves and come up with a process to explain the unique way they see the world to another human being. These processes have always been algorithms, and every brain on Earth is a computer of a kind. The computer algorithm is as old as art is. If that's not authentic, what is?
