Show HN: Ableton Live MCP

(github.com)

61 points | by bschoepke 5 hours ago

12 comments

  • windowliker 3 hours ago
    For me, the point of making music is making it myself. If I wanted something done for me, I could just play someone else's record and pretend I made it.
    • ollysb 10 minutes ago
      Each genre has a fairly tight envelope within which to operate. Regardless, 90% of tracks never make it to the finish line because hobbyists haven't learnt them well enough to groove them out. If, with a little help, these tracks all got finished, then bedroom producers would over time learn what works and be able to explore more.
      • moritzwarhier 1 minute ago
        I think the parent comment was saying that the problem is not quantity, but quality.

        Warping my mind back into a hobby-enthusiast music producer mindset:

        an MCP that generates presets for a limited pipeline with many sweet spots sounds... interesting?

        To me, the idea of being able to have, say, a chain of a simple VA synth + delay + compressor and a very simple step sequencer, combined with prompting and a genAI model that spits out patches, sounds very endearing and interesting.

        Much more interesting than Gemini or Suno for example.

        Depends on the training and input space of course.

        I deliberately described a limited setup, the controls of which could be described in less than a kilobyte.

        Many dance music synth patterns could be described by simple means (tracker/step sequencer, looping, a few knobs).

        That's what makes a lot of music interesting.

        I can easily imagine a producer creating very individual and interesting output by unleashing the right models.

        I think, just like with human producers, constraints liberate.

        An AI controlling a very limited synthesis chain is more interesting than a very complex synthesis chain controlled by a human with no musical "vibe".

    • PaulDavisThe1st 43 minutes ago
      When we recently added MCP to Ardour (a cross-platform FLOSS DAW), the goal wasn't to get the machine to make the music for you, it was to provide alternate ways of interacting with the DAW (particularly for those with visual impairments that make voice control preferable).
    • brookst 2 hours ago
      This is the age-old music parochialism thing. "Oh, he's just in a cover band, he doesn't write anything" / "Oh, she's just a composer, she can't even play the stuff she writes" / "Oh, he writes and plays his own stuff but knows fuck all about theory so it's not real music" / etc.

      Me, I'm having a blast with claude code, MCP, and Ableton. I'm directing harmony and asking for arrangements and variations in rhythm, mixing, and production. Don't know if that counts as "making it myself", but then I was writing music before I could actually play any instrument at all, so :shrug:

    • brandonb 2 hours ago
      Previous generations might have said the same thing about Ableton itself, vs playing a physical instrument. In that regard, AI might become just another power tool for creative expression.
      • vunderba 1 hour ago
        I’ve always said that the more divergent the input is from the resulting output, then the less personal expression you have. For me, in order of moving away from meaningful control in generative models, it goes: “text → code,” “text → picture,” and, at the very bottom, “text → music.”

        For me personally, music composition begins and ends with the motif - the melody itself. It’s the part I enjoy the most, and it’s also the part I have the most individual control over since I can sing.

        Everybody makes music differently, but if you lack the ability to play an instrument and you also can’t whistle or sing, it’s hard for me to imagine how you’d have any meaningful control over the melody.

        How would a non‑musician express an actual melody that they came up with (beyond simple things like instrumentation and general “feelings”) in text? RED RED RED BLUE. (Sorry couldn't resist a Mission Hill reference here.)

        With all that out of the way, there's still lots of room for using AI in music. I’ve used it to take some of my existing songs, mostly pianistic in nature, and swap out instrumentation and arrangements just to play around with different soundscapes. It's like BIAB on steroids.

      • tkiolp4 1 hour ago
        Agree to some extent. At some point though we jump the thin line between creative expression and… magic?

        Like if at some point I can just say “Generate a song similar to Smooth Criminal, different enough to not trigger copyright claims” and it just works, and everyone loves it… well is that creative thinking?

      • cardanome 49 minutes ago
        > AI might become just another power tool for creative expression

        It is NOT a digital tool to create art. Yes, people used to be snobbish about digital art. Some still are. This doesn't say anything about generative AI because that isn't a tool.

        The closest equivalent is hiring someone on fiverr to create music for you and claiming you created the music because you wrote the "prompt".

      There is nothing creative about using generative AI. It is a form of management. The difference is that instead of extracting labor directly, you are extracting dead labor from the millions of artists whose work was stolen to train the AI.

    • windowliker 2 hours ago
      I will caveat my first comment by also noting that I am well versed in computer music history, and read many many papers in CMJ[1] and elsewhere about generative and automatic composition tools such as Emily Howell[2]. I do NOT have a problem with generative, algorithmic and automatic composition in this sense, as an extension of the creative intentions of the human composer, in the right context. See also Autechre[3] for what can be done with Markov chains and good taste. What we are discussing here is the musical equivalent of a dishwasher.

      [1] http://www.computermusicjournal.org/

      [2] https://en.wikipedia.org/wiki/David_Cope#Emily_Howell

      [3] http://autechre.ws/

      Addendum: I would highly recommend the Margaret Boden book referenced in the wiki on David Cope/Emily Howell, which is an absolutely fascinating read and was incredibly far-sighted in its enquiries on this topic.

      • PaulDavisThe1st 42 minutes ago
        > What we are discussing here is the musical equivalent of a dishwasher.

        A dishwasher that may have been taught about Markov chains ...

      • jrajav 1 hour ago
        Can I ask what the specific markers / qualifiers are for you to consider (let's call them) 'classical' generative and algorithmic techniques fair game in creative composition, but LLM agent based techniques not so?

        To me, it seems like the "do it for me" aspect is similar, just at different levels of abstraction.

        • windowliker 1 hour ago
          Firstly, they all came to the use of those techniques after having been through years of work the 'hard way', often being able to play to a conservatoire standard, and had a very extensive grounding in the tradition that came with that. Then they owned* or designed the thing they were asking to 'do it for me' and could modify it at their discretion, effectively making it an integral element of the composition. The prior training was crucial in getting anything good out of any of it IMO (high level reflection based on canon knowledge and deeply considered personal sensibility, etc.)

          * I suppose in the early days, running on a mainframe would strain the definition of ownership per se, as it required access and was limited to that specific machine/institution, but then we are talking about a time when personal computing wasn't available.

          • jrajav 16 minutes ago
            Thanks for your well considered response. I disagree with the notion that extensive classical training is required in order to make beautiful, noteworthy music. There are innumerable counterexamples of this in every era of music. I also disagree that fully and deeply owning/designing one's tools is required - though I understand that we are more specifically talking about generative tools, I personally argue there's not enough meaningful distinction. One chooses to exercise intent, whether the tool is acoustic or digital, general or hyper-focused. And fully understanding the workings of every tool is a fool's errand in this modern age.

            Whether these then extend to AI and LLMs I still can't fully say. There is, obviously, some kind of qualitative leap here. I'm not fully settled.

            But I guess I lean more towards - it is a tool, let people use it to make their own beauty.

        • semolino 1 hour ago
          The main difference is tweakability: With classical generative and algorithmic composition, the human can change parameters in real time and more closely guide the shape of the piece.
          • windowliker 57 minutes ago
            This as well. Most 'classical' algorithmic music had an element of expressiveness allowed to the composer in the moment.
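            The tweakability point can be made concrete. A purely illustrative sketch (not from any real tool): a first-order Markov melody generator, in the spirit of the thread's Autechre example, with a single `chaos` parameter the composer can nudge between runs to shape the output in the moment.

```python
import random

# Illustrative first-order Markov melody walk. Pitches are MIDI note
# numbers; the transition table and parameter names are assumptions,
# not any real tool's API.
TRANSITIONS = {
    60: [62, 64, 67],   # from C4, prefer D4 / E4 / G4
    62: [60, 64, 65],
    64: [62, 65, 67],
    65: [64, 67, 72],
    67: [64, 65, 72],
    72: [67, 72, 60],
}

def generate(start=60, length=8, chaos=0.0, rng=None):
    """Walk the transition table; `chaos` is the probability of
    ignoring the table and jumping to any known pitch."""
    rng = rng or random.Random(0)
    notes = [start]
    pitches = list(TRANSITIONS)
    for _ in range(length - 1):
        if rng.random() < chaos:
            notes.append(rng.choice(pitches))
        else:
            notes.append(rng.choice(TRANSITIONS[notes[-1]]))
    return notes

melody = generate(chaos=0.1)
```

            Re-running with a different `chaos` or a tweaked table is exactly the real-time guiding of the piece being described: the generator does the walking, the composer owns the rules.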
    • jrm4 2 hours ago
      I get why people make gut statements like this, and to me something does feel different about AI.

      But I realize I have not seen any criticisms of AI generated music that are meaningfully different from criticisms I've heard of other advances/changes in music technology, whether performance or recording.

      Sampling, scratching, drum machines, autotune, electric guitars even.

      • windowliker 12 minutes ago
        The main unconsidered criticism that used to come from old-school musos was that 'you press a button and the synthesizer/drum machine/whatever does it all for you'... Only now is that perhaps coming to be true.

        There's a difference between technology/technique that adds a new sonic palette to the canon, and one that takes away the necessity of any direct input in the process of production. I guess we'll find out which this is if a wave of novel AI-assisted genres emerges, or not, as the case may be.

      • Jtarii 1 hour ago
        Well, in "traditional" music production every individual component of a song carries the creative intent of the artist. With AI you have no idea if there is any intent or if it's just something an LLM spat out.

        If all you care about is the raw sound file created and you don't care about the connection you might feel with the artist behind it then maybe intent isn't relevant to you.

    • plastic-enjoyer 3 hours ago
      Welcome to the era of instant gratification.
  • ssalka 2 hours ago
    Things I would use AI for in music production:

    1. Generating track layouts (add tracks + empty audio/midi clips throughout)

    2. Generating MIDI sequences

    3. Generating Serum patches

    4. Extracting stems from existing audio

    5. Automating common workflows (eg sidechaining)

    6. Semantic search of sample library

    That being said, I don't think I want a full agentic workflow for vibe-producing. Point solutions seem like a better fit for me, personally.
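    A point solution like item 2 can be tiny. A purely hypothetical sketch (the function name and data shape are mine, not this project's API): expand a step-sequencer pattern string into the note tuples a clip-writing tool could consume.

```python
# Hypothetical point-solution sketch: turn a 16-step pattern into
# (pitch, start_beat, duration_beats) tuples for a MIDI clip.
def steps_to_notes(pattern, pitch=36, step=0.25):
    """'x' marks a hit, '.' a rest; 16 steps at 0.25 beats = one bar."""
    return [
        (pitch, i * step, step)
        for i, ch in enumerate(pattern)
        if ch == "x"
    ]

# Four-on-the-floor kick plus a pickup on the last sixteenth:
# hits land at beats 0.0, 1.0, 2.0, 3.0, 3.75.
kick = steps_to_notes("x...x...x...x..x")
```

    The appeal of the point-solution shape is that each helper stays inspectable: the LLM only proposes the pattern string, and everything downstream is deterministic.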

  • breakall 1 hour ago
    Yes! I want this for MainStage -- this would allow me to automate my weekly template setup for playing at my church. Each week before practicing I look up the songs in Planning Center and create a new MainStage concert file with one patch per song, then add notes to each patch screen with the song's key, etc. Automating this would save me the busy work and let me get right to practicing.
  • PaulDavisThe1st 45 minutes ago
    MCP for Ardour was added more than a month ago, thanks to contributor zabooma:

    https://github.com/Ardour/ardour/commit/d582a0b042a68ccb22c0...

  • jhurliman 2 hours ago
    Very cool! I posted my own experiments in this area a few months back, which were an iteration on an existing Ableton MCP. It’s great to see more people experimenting in the spaces of interfacing with complex applications and music production.

    https://news.ycombinator.com/item?id=46428922

  • bschoepke 5 hours ago
    Ever wanted to control Ableton with just your voice? Me too! I made this MCP server so I could just ask Codex to do anything in Ableton Live for me, while I was nap-trapped by my baby.

    The chat messages I sent to Codex to make this:

    in ableton, make a self reflective song, with audio vocals (via macos say) and chip tunes and 80's drum machines. should be a real edm banger

    i want midi for everything but vocals please, with ableton devices. not prerendered audio for instruments

    needs some fills

    and should hit way harder after "3-2-1 i become the sound"

    the vocals are squished too much (read too quickly), give them a little more length

    add some dynamics, the song is basically one volume. and some pumping side chain

    improve dynamics of the clap, seems a bit flat and indistinguished, want it harder after the 3-2-1 drop

    introduce a new element on a new track after the 3-2-1 drop, that comes in but then recedes before the final exit

    doesn't seem like the new thing has any notes

    the element is a bit muddy/indistinct. perhaps it needs simplification and more space, different instrument choice, i dunno

    • wartywhoa23 4 hours ago
      > Ever wanted to control Ableton with just your voice?

      Never.

      • jrajav 1 hour ago
        I guess the guidelines don't apply to you, as long as you disagree vehemently enough with the OP's basic intent.
        • wartywhoa23 1 hour ago
          No amount of guidelines will make me lie in my replies.
    • deng 3 hours ago
      > should be a real edm banger

      I'm afraid Codex ignored that one.

  • xrd 2 hours ago
    Does anyone know of other MCP servers for similar music creative tools? I'm interested in things like sonic-pi, strudel.cc and orcas. But very open to anything. I think there is a good opportunity for kids to learn using these tools, especially if I can wire it into my mycroft.ai/neon device.
  • ktbwrestler 1 hour ago
    this is awesome. Does anyone recommend one for Logic Pro X? I see a few in the wild but would love to help support if anyone is tinkering with one
  • robotswantdata 3 hours ago
    If you’ve gone to the trouble of setting up Ableton MCP, you’ve already worked harder than Suno requires to make a banger
    • cyclopeanutopia 2 hours ago
      XD
    • dyauspitr 2 hours ago
      Honestly, music generation is solved. You don’t have fine control at this point but people are unable to tell the difference anymore. There are tons of YouTube videos with blind tests between real artists and AI and people have no idea.
      • wartywhoa23 1 hour ago
        The Matrix wants everything solved, doesn't it?
      • siquick 1 hour ago
        Solved !== Good
  • markalby 3 hours ago
    is this using M4L or the LOM?
  • m_ramdhan 2 hours ago
    [flagged]