Claude just analyzed the emotional content of a piece of music for me, quite accurately, just from an uploaded PDF of the score. How does that work? "It's a nonlinear system" or "it's a bunch of matrix multiplication" is in no useful sense an explanation. That's way down at the bottom of an explanatory abstraction hierarchy that we have only just begun to build tools to explore. It's like asking how humans work and getting the answer "it's just undergraduate chemistry!"
The study of LLMs is much closer to biology than engineering.
Did it show you its reasoning? Did it recognize the notes, scale, and tempo and determine their emotional effect, or did it use some other reasoning?
I don't know about the reasoning, but it gave "evidence" for its observations in much the same way a human composer might. In terms of "understanding", it seemed in the ballpark of what I get when I ask it to explain some code.
I don't want to paste in the whole giant thing, but if you're curious: [0]
Impressive: it can clearly read the score, pick out patterns, timing, and chords, and apply music theory to them. It would be interesting to give it editing and playback capabilities, e.g. by connecting it to something like Strudel (sketch below).
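A minimal sketch of that glue, assuming Strudel's browser package @strudel/web (which, per its docs, exposes initStrudel(), evaluate(), and hush(); check the current API) and a hypothetical askLLM() helper standing in for whatever chat API you'd use:

    // Hypothetical LLM-to-Strudel playback loop in the browser.
    // initStrudel/evaluate/hush come from @strudel/web per its docs;
    // askLLM() is a stand-in to wire to your LLM of choice.
    import { initStrudel, evaluate, hush } from '@strudel/web';

    async function askLLM(prompt: string): Promise<string> {
      // Placeholder: ask the model to answer with Strudel code only,
      // e.g. 'note("a3 c4 e4 a4").sound("piano").slow(2)'.
      throw new Error('wire up your LLM API here');
    }

    async function playFromPrompt(prompt: string): Promise<void> {
      await initStrudel();                   // boot the audio engine
      const pattern = await askLLM(prompt);  // model writes the pattern code
      await evaluate(pattern);               // Strudel parses and plays it
    }

    // playFromPrompt('a melancholic progression in A minor');
    // hush(); // stops playback

From there, editing is just another round trip: paste the current pattern back into the prompt and ask for a revision.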
This article describes how Belgian supermarkets are replacing the music played in their stores with AI-generated music to cut costs, but you can easily imagine the AI also generating music tuned to customers' emotions to nudge their buying behavior: https://www.nu.nl/economie/6372535/veel-belgische-supermarkt...