
You specifically brought up AlphaGo dismissing the conventional wisdom on how Go should be played. Many of the things we thought we knew about the game turned out to be wrong, and the game as a whole was turned on its head.

None of that applies to music. Nobody who studies this stuff seriously is under any illusion that 12-TET is the "right" way to play music. I know a fair few professional musicians, and I've "talked shop" with as many of them as I could; the deficiencies of 12-TET come up again and again. There is nothing here to "dismiss".
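Those deficiencies are easy to quantify, for what it's worth. Here's a minimal Python sketch (using the standard just-intonation ratios for a few common intervals, purely for illustration) showing how far 12-TET's nearest approximations land from the just intervals, measured in cents:

```python
import math

# Just-intonation frequency ratios for a few common intervals
# (standard textbook values, shown here only for illustration).
just_ratios = {
    "perfect fifth": 3 / 2,
    "major third": 5 / 4,
    "minor third": 6 / 5,
}

def cents(ratio):
    """Size of an interval ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

for name, ratio in just_ratios.items():
    just = cents(ratio)
    # Nearest 12-TET interval: round to a whole number of semitones
    # (each 12-TET semitone is exactly 100 cents by definition).
    tet = round(just / 100) * 100
    print(f"{name}: just {just:.1f}c, 12-TET {tet}c, error {tet - just:+.1f}c")
```

The fifth comes out nearly pure (about 2 cents flat in 12-TET), while the major third is roughly 14 cents sharp; that sharp third is the complaint musicians raise most often.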

Don't get me wrong: the idea of computationally optimised tuning sounds really interesting, and the discussion of what we should be optimising for would itself be fascinating to follow. It's just that people are already doing that sort of thing manually today, so there's no big "oh no, we've been doing it wrong" dismissal of the status quo waiting at the end.



> None of that applies to music.

But how would we know that? People thought music was figured out, and then atonal music was invented/discovered/re-discovered (whichever you prefer).

We are somewhat talking about different things. You're talking about people playing instruments, and there you're sort of right: all the possibilities have been explored.

I'm talking about audio files containing songs, many of which are currently produced with software using a specific tuning (typically 12-TET). But in this world the tuning is just an artifact of the production process; it's not fundamental, the way it is in your world.

The current picture-producing AIs don't start with a blank digital canvas and drag digital brushes over it; they synthesize the image in a holistic way, and in that world the "brush" can be unique at each position.

More precisely, I'm thinking that music-producing AIs could make music where the first 5 seconds of the lead instrument use 12-TET and then switch to another tuning, the backing bass track uses a different tuning, the vocal sings to yet another one, and yet it all comes together beautifully. The tunings used could also morph over the duration of the song. In a way this means that there is no tuning at all.
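The "morphing" part is simple to sketch, at least at the level of a single pitch. This is a hypothetical illustration, not how any real system works: the A4 = 440 Hz reference and the geometric interpolation between a 12-TET pitch and a just-intonation pitch are my own assumptions.

```python
import math

A4 = 440.0  # assumed reference pitch, Hz

def tet_freq(semitones):
    """12-TET frequency, `semitones` above A4."""
    return A4 * 2 ** (semitones / 12)

def just_freq(ratio):
    """Just-intonation frequency as a ratio above A4."""
    return A4 * ratio

def morph_freq(semitones, ratio, t):
    """Glide between the two tunings; t runs from 0 (12-TET) to 1 (just).

    Geometric interpolation keeps the glide perceptually even,
    since pitch perception is logarithmic in frequency.
    """
    f0, f1 = tet_freq(semitones), just_freq(ratio)
    return f0 * (f1 / f0) ** t

# A major third above A4, gliding from 12-TET toward just intonation:
for t in (0.0, 0.5, 1.0):
    print(f"t={t}: {morph_freq(4, 5 / 4, t):.2f} Hz")
```

An AI that synthesizes audio directly could in principle apply a different `t` per track and per moment, which is exactly the "no tuning at all" scenario above.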


Again, the point isn't that there's nothing left to learn. There's plenty to learn and plenty to explore, and the whole field of applying computational methods of all sorts to music is a treasure trove waiting to be opened.

What I'm saying is that the situation with Go was completely different. The Go community was utterly convinced that the state of the art was within a couple of stones of the hand of God, and AlphaGo thoroughly disabused them of that notion. The status quo was shattered, and the community's understanding of the game as a whole was upended. It's entirely fair to describe that situation as "and then AlphaGo appeared and dismissed all this".

The situation in music is very different. Ethnomusicology has been a thing since the mid-20th century, and musicology in general has swung away from prescriptivism and towards descriptivism. There can be no earth-shattering revelations here, not because our current understanding of music is unassailable, but simply because there is no earth to shatter to begin with. AI-driven computational music might produce some innovative work around how we understand pitch and tunings, but that work won't dismiss our current understanding of those things; it'll sit alongside it.

OK, this is fairly long-winded, but the point is that I take issue with the "dismiss" part of it all, I guess.


I think the key difference is that playing Go is about winning (at least, presumably that's what the AI is optimized for). Music is not.

(I also agree with others in this thread that the popular commitment to equal temperament is exaggerated -- it's not all that uncommon to hear good musicians of various styles playing/singing/synthesizing "out of tune" music for various effects).



