The Lobster Programming Language

(strlen.com)

120 points | by keyle 5 days ago

11 comments

  • BoppreH 1 day ago
    Lots of good ideas here.

    Flow-sensitive type inference with static type checks is, IMHO, a massively underrated niche. Doubly so for being in a compiled language. I find it crazy how Python managed to get so popular when even variable name typos are a runtime error, and how dreadful the performance is.
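    To make the complaint concrete: Python only raises a NameError for a misspelled variable when the offending line actually executes, so a typo in a rarely-taken branch can hide until production. A minimal sketch:

```python
def report(status):
    # The typo in the else branch is only detected when that branch runs.
    if status == "ok":
        return "all good"
    else:
        return mesage  # NameError: 'mesage' is not defined, but only at runtime

print(report("ok"))  # runs fine; the typo goes unnoticed
try:
    report("fail")
except NameError as e:
    print("caught:", e)
```

    A flow-sensitive static checker of the kind Lobster has would reject this at compile time.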

    All the anonymous blocks lend themselves to a very clean and simple syntax. The rule that 'return' refers to the closest named function is a cute solution for a problem that I've been struggling with for a long time.
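    The rule being praised (a plain `return` inside an anonymous block exits the closest *named* enclosing function) can be emulated in Python, which lacks non-local return, by raising an exception. This is a sketch of the semantics, not Lobster syntax:

```python
class _NonLocalReturn(Exception):
    """Carries a value out of a nested callback, Lobster-'return'-style."""
    def __init__(self, value):
        self.value = value

def find_first(items, predicate):
    """Return the first item matching predicate, via a callback that
    'returns' straight out of the enclosing named function."""
    def visit(item):
        if predicate(item):
            # In Lobster, a bare 'return' here would exit find_first directly.
            raise _NonLocalReturn(item)
    try:
        for item in items:
            visit(item)
    except _NonLocalReturn as ret:
        return ret.value
    return None

print(find_first([1, 4, 9], lambda x: x % 2 == 0))  # -> 4
```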

    The stdlib has several gems:

    - `compile_run_code`: "compiles and runs lobster source, sandboxed from the current program (in its own VM)."

    - `parse_data`: "parses a string containing a data structure in lobster syntax (what you get if you convert an arbitrary data structure to a string) back into a data structure."

    - All the graphics, sound, VR (!), and physics (!!) stuff.

    - imgui and Steamworks.
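    For readers who know Python better than Lobster: the `parse_data` round-trip described above (value to source-syntax string, string back to value) is roughly what `repr` plus `ast.literal_eval` gives you in Python. A hedged analogy only, since Lobster parses its own syntax, not Python's:

```python
import ast

original = {"name": "lobster", "versions": [1, 2, 3]}
as_text = repr(original)                    # serialize to source-like syntax
round_tripped = ast.literal_eval(as_text)   # parse it back safely (no eval)
assert round_tripped == original
print(as_text)
```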

    I'll definitely be watching it, and most likely stealing an idea or two.

    • benrutter 1 day ago
      > I find it crazy how Python managed to get so popular when even variable name typos are a runtime error

      Tangential point, but I think this might be one of the reasons python did catch on. Compile checks etc are great for production systems, but a minor downside is that they introduce friction for learners. You can have huge errors in python code, and it'll still happily try to run it, which is helpful if you're just starting out.

      • em-bee 23 hours ago
        i don't think typing was the issue. at the time there didn't exist any typed languages that were as easy to use as python or ruby. (ok, not quite true, there did exist at least one: pike, and LPC, which it is based on; but pike failed for other reasons.) otherwise, if you wanted types, your options were C, C++ and later java, none of which were as easy and convenient to use as python or ruby. java was in the middle, much easier than C/C++, and it did catch on.
      • dmit 1 day ago
        What about when you have a long-running program? You can't both brag about NumPy, Django, and the machine-learning library ecosystem while also promoting "It's great for when you just want to get the first 100 lines out as soon as possible!"

        I am guessing that Python, like Ruby, is dynamic enough that it's impossible to detect all typos with a trivial double-pass interpreter, but still.
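        The hedge above is right: Python names can be created dynamically, so any simple "scan the source for undefined names" pass is unsound. A small example of a binding no static scan of assignment statements would find:

```python
# This creates a global variable whose name never appears on the
# left-hand side of any assignment in the source.
name = "dyn" + "amic_value"
globals()[name] = 42

# A naive typo-checker would flag this as undefined, yet it runs fine.
print(dynamic_value)  # -> 42
```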

        Wonder if there was ever a language that made the distinction between library code (meant to be used by others; mandates type checking [or other ways of ensuring API robustness]), and executables: go nuts, you're the leaf node on this compilation/evaluation graph; the only one you can hurt is you.

      • grey-area 1 day ago
        Is it though?

        As long as warnings are clear I’d rather find out early about mistakes.

        • discreteevent 1 day ago
          People learn by example. They want to start with something concrete and specific and then move to the abstraction. There's nothing worse than a teacher who starts in the middle of the abstraction. Whereas if a teacher describes some good concrete examples the student will start to invent the abstraction themselves.
        • torginus 22 hours ago
          I think languages with strong support for IDE type hints, as well as tooling that takes advantage of them, are a fairly recent phenomenon, except maybe for Java and C#, which I think the wider hacker community regards as uncool.

          C++/C IDE support is famously horrible owing to macros/templates. I think the expectation that you could fire up VS Code and get reliable TypeScript type hints has been a thing for only a decade or so; for most of modern history, a lot of people had to make do without.

        • lock1 1 day ago
          It looks like it.

          Based on what I observe as an occasional tutor, compiler warnings & errors look scary to newcomers. Maybe it's because they share the thing that made math unpopular with most people: a cold, non-negotiable set of logical rules. In turn, some people treat warnings & errors as a "you just made a dumb mistake, you're stupid" sign rather than a helpful guide.

          Weirdly enough, runtime errors don't seem to trigger the same response in newcomers.

          • anticristi 1 day ago
            Interesting angle: compiler errors bring back math-teacher trauma. I noticed Rust tries to be a bit more helpful, explaining the error and even suggesting improvements. Perhaps "empathic errors" are the next milestone each language needs to incorporate.
            • Hemospectrum 1 day ago
              I suddenly understand part of why experienced programmers seem to find Rust so much more difficult than those who are just beginning to learn. Years of C++ trauma taught them to ignore the content of the error messages. It doesn't matter how well they're written if the programmer refuses to read.
          • grey-area 1 day ago
            Interesting. I think over the long term many people come to realise it's better to know at compile time (when they mistype something and end up with a program that runs but is incorrect it's worse than not running and just telling you your mistake). But perhaps for beginners it can be too intimidating having the compiler shout at you all the time!

            Perhaps nicer messages explaining what to do to fix things would help?

          • gdulli 1 day ago
            That's surprising because runtime debugging depends on the state of the call stack, all the variables, etc. Syntax errors happen independent of any of that state.
        • foolfoolz 1 day ago
          this is the “types make me slow” argument that everyone self debunks after they program that way for a handful of years
          • zahlman 1 day ago
            > that everyone self debunks

            Speak for yourself.

  • rf15 1 day ago
    This is made by aardappel/Wouter van Oortmerssen, semi-famous open-source game developer (Cube/Sauerbraten) and programming language designer. It's probably related to his recent game development work.
    • aquariusDue 1 day ago
      He also made Treesheets, which is where I first heard about him. I recommend that people interested in Personal Knowledge Management or related stuff check Treesheets out: it's uglier than Obsidian, but there are some really great ideas in there. I won't spoil the fun, but if you've got 15 minutes it's pretty easy to go through the tutorial.

      https://github.com/aardappel/treesheets

  • skybrian 1 day ago
    It's used for a new game called Voxile which is discussed here:

    https://news.ycombinator.com/item?id=47239042

  • Luxusio 1 day ago
    Built-in vector ops, ImGui, and WebAssembly target? This reads like it was designed by someone who actually ships games.
    • jasonjmcghee 1 day ago
      You forgot to include in your list

      > Dynamic code loading

      • em-bee 23 hours ago
        can it reload code at runtime? can i change the definition of a function or class while the program is running?
        • Aardappel 23 hours ago
          No. It is a fairly static language, with whole program compilation and optimization. The dynamic loading the parent refers to is like a new VM instance, very different from class/function (re)loading.
  • throw10920 22 hours ago
    Lobster's design where the borrow checker/lifetime analysis automatically inserts reference counters if it can't statically determine lifetime is so vastly superior to Rust's approach (force you to do it by hand like it's 1980 and you are the compiler) that it's not even funny.

    This is what good language design looks like.

  • netless 22 hours ago
    Ha, this is by the Amiga E language author! Back in the day it was quite an interesting language.
  • crabsand 20 hours ago
    I see implementation inheritance there and I don't like it. Otherwise cool language.
  • mastermage 1 day ago
    Another Crusty language
    • jgavris 1 day ago
      Almost spit out my coffee
      • mastermage 23 hours ago
        what can i say except you're welcome.
  • v3ss0n 1 day ago
    With LLM do we actually need new programming languages?
    • Aardappel 23 hours ago
      Even though this just showed up on HN (for the 10th time?) the Lobster project started in ~2010. No LLMs on the horizon back then.

      And while Lobster is a niche language LLMs don't know as well, they do surprisingly well coding in it, especially with a larger codebase as context. It occasionally slips in Python-isms, but nothing that can't be fixed easily.

      Not suitable for larger autonomous coding projects, though.

    • mpweiher 1 day ago
      Yes.

      In fact, LLMs have shown that we really, really need new programming languages.

      1. They have shown that the information density in our existing languages is extremely low: small prompts can generate very large programs.

      2. But the only way to get that high information density now (with LLMs) is to give up any hope of predictability. I want both.

      • grey-area 1 day ago
        Small prompts leading to large programs has absolutely nothing to do with programming languages and everything to do with the design of the word generators used to produce the programs — which ingest millions of programs in training and can spit out almost entire examples based on them.
      • mikkupikku 1 day ago
        APL?
      • hagbard_c 1 day ago
        > They have shown that the information density in our existing languages is extremely low: small prompts can generate very large programs.

        "Write a book about a small person who happens upon a magical ring which turns out to be the repository of an evil entity's power. The small person needs to destroy the ring somehow, probably using the same means by which it was created"

        ...wait a few minutes...

        THE LORD OF THE RINGS

        http://lotrproject.com/statistics/books/wordscount

    • nu11ptr 1 day ago
      Sadly, probably not. I fear new languages will struggle from here on out. As a language guy, very few things in this new AI world make me more sad than this.
      • brabel 23 hours ago
        I don't get the feeling this will happen. LLMs are extremely good at learning new languages because that's basically their whole point. If your new language has a standard library, and the LLM can see its source code, I am sure you can give it to any latest-generation AI and it will happily spit out perfectly correct new code in it. If you give it access to the reference docs, then it can quite reliably ensure it never generates syntactically incorrect code. As long as your error messages are enough to understand a problem's root cause, the LLM will iterate and explore until it gets it right.

        Not sure if this is a good example, but I used ChatGPT (not even Codex) to fix some Common Lisp code for me, and it absolutely nailed it. Sure, Common Lisp has been around for a long time, but there's not so much Common Lisp code around for LLMs to train on... but OTOH it has a hyperspec which defines the language and much of the standard libraries so I believe the LLM can produce perfect Common Lisp based on mostly that.

    • xscott 1 day ago
      I think it would be cool if a language designed specifically for LLMs came about. It should have something like required preconditions and postconditions so that a deterministic compiler can verify the assumptions the LLM is claiming. Something like a theorem prover, but targeted specifically at programming and efficient compilation/runtime. And it doesn't need all the niceties human programmers tend to prefer (implicit conversions come to mind).
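      A runtime-checked sketch of that idea in Python. A real LLM-targeted language would discharge these conditions statically (e.g. with an SMT solver); the `contract` decorator and function names here are invented for illustration:

```python
def contract(pre=None, post=None):
    """Attach pre/postconditions to a function. Here they are checked at
    runtime; the proposal is to have a compiler verify them statically."""
    def wrap(fn):
        def inner(*args):
            if pre is not None:
                assert pre(*args), f"precondition violated: {args}"
            result = fn(*args)
            if post is not None:
                assert post(result, *args), f"postcondition violated: {result}"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0,        # input must be non-empty
          post=lambda r, xs: r in xs)        # result must come from the input
def first_positive(xs):
    return next((x for x in xs if x > 0), xs[0])

print(first_positive([-1, 3, 5]))  # -> 3
```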
      • flir 1 day ago
        If you're that confident in the LLM's output, just train it to output some kind of intermediate language, or even machine code.

        And if you're not that confident, shouldn't you still be optimising for humans, because humans have to check the LLM's output?

        • CuriouslyC 23 hours ago
          At least in programming, humans have to check the product of the LLM's output rather than the output itself.
      • onlyrealcuzzo 1 day ago
        I'm working on this now.

        It's a Profile Guided Optimization language - with memory safety like Rust.

        It's extremely easy to optimize assuming you either 1) profile it in production (obviously has costs) or 2) can generate realistic workloads to test against.

        It's like Rust, in that it makes expressing common illegal states just outright impossible. Though it goes much further than Rust.

        And it's easier to read than Swift or Go.

        There's a lot of magic that happens with defaults that languages like Zig or Rust don't want, because they want every cost signal to be as visible as possible, so you can understand the cost of a line and a function.

        LLMs with tests can - I hope - do this without that noise.

        We shall see.

        • ModernMech 1 day ago
          Do you have a repo?
          • onlyrealcuzzo 1 day ago
            Yes.

            I'm almost ready to launch v0.1 - but the documentation is especially a mess right now, so I don't want to share yet.

            I'll update this comment in a week or so [=

    • CuriouslyC 23 hours ago
      Possibly. First, I think there's still low hanging fruit in creating a programming language designed to be as easy as possible for agents to work with that we won't try to unlock until people writing code is a curiosity. Second, agents don't care about verbosity of code, so we can do verbosity/correctness/tooling tradeoffs that wouldn't have made sense when humans were the sole consumers of the code.
    • EricRiese 1 day ago
      A more concise language is more efficient for an LLM to produce and easier for a human to verify.

      I don't think LLMs have solved the problem of wanting code that's concise and also performant.

      • CuriouslyC 23 hours ago
        But a less concise language is (theoretically, if you're doing useful stuff with the verbosity) easier for machines to verify.
    • cgio 1 day ago
      Yes, languages that talk to llms that is.
    • ModernMech 1 day ago
      Yes, very much so. Programming languages are tools for thought, if you want your LLMs to think better, then they'll need better tools for thinking than the ones we have today. That they are mostly thinking in and writing Python is incidental to when they were born and a limitation of current AI technology, not its final evolution.
    • synergy20 1 day ago
      imho no
      • Muhammad523 1 day ago
        Novel programming languages still have educational value for those building them, and, yes, we still need programming languages. I don't see any reason we would stop needing them. Even if AI is going to write the code for you, how is it going to write it with no programming language? With raw binary? Absolutely not.
        • cv5005 1 day ago
          Eventually it won't need to write any code at all. The end goal for AI is "The Final Software": no more software needs to be written; you just tell the AI what you actually want done and it does it, no need for it to generate a program.
          • ModernMech 1 day ago
            But how do you know AI can generate programs without writing code? It can't today -- in fact the best thinking models work by writing code as part of the process. Natural intelligence requires it as well, as all our jobs are about expressing problem domains formally. So why would we expect an artificial intelligence should be able to reason about the kinds of programming problems we want them to without a formal language?

            Maybe they will not be called programming languages anymore because that name was mostly incidental to the fact they were used to write programs. But formal, constructed languages are very much still needed because without them, you will struggle with abstraction, specification, setting constraints and boundaries, etc. if all you use is natural language. We invented these things for a reason!

            Also the AI will have to communicate output to you, and you'll have to be sure of what it means, which is hard to do with natural language. Thus you'll still have to know how to read some form of code -- unless you're willing to be fooled by the AI through its imprecise use of natural language.

        • synergy20 1 day ago
          how did you get that 'no programming language' conclusion? there are so many well-established languages, more than we need already; the market has picked the winners, and AI is well trained on them. these are facts. if a new language is needed down the road for AI coders, most likely it will be created using AI itself. for the moment, a human-created niche language is too late to the party. move on.