Flow-sensitive type inference with static type checks is, IMHO, a massively underrated niche. Doubly so for being in a compiled language. I find it crazy how Python managed to get so popular when even variable name typos are a runtime error, and how dreadful the performance is.
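To make the Python complaint concrete, here's a minimal sketch: the typo below survives loading and compiling the function just fine, and only blows up when the function is actually called. A statically checked language would reject it before anything ran.

```python
# A variable-name typo in Python is only caught when the line executes.
def greet(name):
    message = "Hello, " + name
    return mesage  # typo: should be `message`

# The function compiles without complaint; the NameError only
# surfaces at call time.
try:
    greet("world")
except NameError as e:
    print("caught at runtime:", e)
```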
All the anonymous blocks lend themselves to a very clean and simple syntax. The rule that 'return' refers to the closest named function is a cute solution for a problem that I've been struggling with for a long time.
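For contrast, here's a hedged Python sketch of the problem that rule solves: returning from the enclosing function from inside a passed-in block. Python has no non-local return, so the usual workaround is an exception; all the names here (`_Found`, `each`, `first_even`) are made up for illustration.

```python
# In Lobster, `return` inside an anonymous block exits the nearest
# *named* function. Python's `return` only exits the innermost
# function, so escaping from inside a callback needs an exception.
class _Found(Exception):
    def __init__(self, value):
        self.value = value

def each(xs, f):
    for x in xs:
        f(x)

def first_even(xs):
    def check(x):
        if x % 2 == 0:
            raise _Found(x)  # stands in for Lobster's non-local return
    try:
        each(xs, check)
    except _Found as e:
        return e.value
    return None

print(first_even([1, 3, 4, 5]))  # → 4
```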
The stdlib has several gems:
- `compile_run_code`: "compiles and runs lobster source, sandboxed from the current program (in its own VM)."
- `parse_data`: "parses a string containing a data structure in lobster syntax (what you get if you convert an arbitrary data structure to a string) back into a data structure."
- All the graphics, sound, VR (!), and physics (!!) stuff.
- imgui and Steamworks.
I'll definitely be watching it, and most likely stealing an idea or two.
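The `parse_data` round-trip described above has a rough Python analogue using `repr` and `ast.literal_eval`; this is just an analogy for the idea, not Lobster's actual mechanism.

```python
import ast

# Convert a data structure to its source-syntax string, then parse
# that string back into an equivalent data structure.
original = {"name": "lobster", "versions": [1, 2, 3]}
text = repr(original)              # the structure in literal syntax
restored = ast.literal_eval(text)  # safely parses literals only
assert restored == original
```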
> I find it crazy how Python managed to get so popular when even variable name typos are a runtime error
Tangential point, but I think this might be one of the reasons python did catch on. Compile checks etc are great for production systems, but a minor downside is that they introduce friction for learners. You can have huge errors in python code, and it'll still happily try to run it, which is helpful if you're just starting out.
I don't think typing was the issue. At the time there didn't exist any typed languages that were as easy to use as Python or Ruby. (OK, not true, there did exist at least one: Pike, and LPC, which it is based on. Pike failed for other reasons.) Otherwise, if you wanted types your options were C, C++, and then Java, none of which were as easy and convenient to use as Python or Ruby. Java was in the middle, much easier than C/C++, and it did catch on.
What about when you have a long-running program? You can't both brag about NumPy, Django, and the machine-learning library ecosystem and promote "it's great for when you just want to get the first 100 lines out as soon as possible!"
I am guessing that Python, like Ruby, is dynamic enough that it's impossible to detect all typos with a trivial double-pass interpreter, but still.
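That guess is right: names in Python can come into existence at runtime, so a simple two-pass scan over the source can't tell a typo apart from a name that will be defined dynamically before use. A small demonstration:

```python
# The variable `config_dir` never appears as an assignment target in
# the source, yet the program is valid: it is created at runtime
# through globals(). A static scan flagging it would be a false positive.
name = "config_" + "dir"   # computed at runtime
globals()[name] = "/tmp"   # creates the variable `config_dir`
print(config_dir)          # resolves fine, invisible to a static pass
```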
Wonder if there was ever a language that made the distinction between library code (meant to be used by others; mandates type checking [or other ways of ensuring API robustness]), and executables: go nuts, you're the leaf node on this compilation/evaluation graph; the only one you can hurt is you.
People learn by example. They want to start with something concrete and specific and then move to the abstraction. There's nothing worse than a teacher who starts in the middle of the abstraction. Whereas if a teacher describes some good concrete examples the student will start to invent the abstraction themselves.
I think languages with strong support for IDE type hints as well as tooling that takes advantage of it are a fairly recent phenomenon, except for maybe Java and C# which I think are regarded by the wider hacker community as uncool.
C++/C IDE support is famously horrible owing to macros/templates. I think the expectation that you could fire up VS Code and get reliable TypeScript type hints has been a thing for only a decade or so - for most of modern history, a lot of people had to make do without.
Based on what I observe as an occasional tutor, compiler warnings & errors are scary for newcomers. Maybe it's because they share the quality that made math unpopular for most people: a cold, non-negotiable set of logical rules. This in turn leads some people to treat warnings & errors as a "you just made a dumb mistake, you're stupid" sign rather than a helpful guide.
Weirdly enough, runtime errors don't seem to trigger the same response in newcomers.
Interesting angle: compiler errors bring back math-teacher trauma. I noticed Rust tries to be a bit more helpful, explaining the error and even suggesting improvements. Perhaps "empathic errors" is the next milestone each language needs to incorporate.
I suddenly understand part of why experienced programmers seem to find Rust so much more difficult than those who are just beginning to learn. Years of C++ trauma taught them to ignore the content of the error messages. It doesn't matter how well they're written if the programmer refuses to read.
Interesting. I think over the long term many people come to realise it's better to know at compile time (mistyping something and ending up with a program that runs but is incorrect is worse than the program not running and just telling you your mistake). But perhaps for beginners it can be too intimidating having the compiler shout at you all the time!
Perhaps nicer messages explaining what to do to fix things would help?
That's surprising because runtime debugging depends on the state of the call stack, all the variables, etc. Syntax errors happen independent of any of that state.
This is made by aardappel/Wouter van Oortmerssen, semi-famous open-source game developer (Cube/Sauerbraten) and programming language designer. It's probably related to his recent game development work.
He also made Treesheets, which is where I first heard about him. I recommend people interested in Personal Knowledge Management or related tools check Treesheets out, because while it's uglier than Obsidian, there are some really great ideas in there. I won't spoil the fun, but if you've got 15 minutes it's pretty easy to go through the tutorial.
No. It is a fairly static language, with whole program compilation and optimization. The dynamic loading the parent refers to is like a new VM instance, very different from class/function (re)loading.
Lobster's design, where the borrow checker/lifetime analysis automatically inserts reference counting if it can't statically determine a lifetime, is so vastly superior to Rust's approach (forcing you to do it by hand like it's 1980 and you are the compiler) that it's not even funny.
Even though this just showed up on HN (for the 10th time?) the Lobster project started in ~2010. No LLMs on the horizon back then.
And while Lobster is a niche language LLMs don't know as well, they do surprisingly well coding in it, especially with a larger codebase as context. They occasionally slip in Python-isms, but nothing that can't be fixed easily.
Not suitable for larger autonomous coding projects, though.
Small prompts leading to large programs has absolutely nothing to do with programming languages and everything to do with the design of the word generators used to produce the programs — which ingest millions of programs in training and can spit out almost entire examples based on them.
> They have shown that the information density in our existing languages is extremely low: small prompts can generate very large programs.
"Write a book about a small person who happens upon a magical ring which turns out to be the repository of an evil entity's power. The small person needs to destroy the ring somehow, probably by the same means it was created."
Sadly, probably not. I fear new languages will struggle from here on out. As a language guy, very few things in this new AI world make me more sad than this.
I don't get the feeling this will happen. LLMs are extremely good at learning new languages because that's basically their whole point. If your new language has a standard library, and the LLM can see its source code, I am sure you can give it to any current-generation AI and it will happily spit out perfectly correct new code in it. If you give it access to the reference docs, it can even ensure it never generates syntactically incorrect code. As long as your error messages are enough to understand what a problem's root cause is, the LLM will iterate and explore until it gets it right.
Not sure if this is a good example, but I used ChatGPT (not even Codex) to fix some Common Lisp code for me, and it absolutely nailed it. Sure, Common Lisp has been around for a long time, but there's not so much Common Lisp code around for LLMs to train on... but OTOH it has a hyperspec which defines the language and much of the standard libraries so I believe the LLM can produce perfect Common Lisp based on mostly that.
I think it would be cool if a language designed specifically for LLMs came about. It should have something like required preconditions and postconditions, so that a deterministic compiler can verify the assumptions the LLM is claiming. Something like a theorem prover, but targeted specifically at programming and efficient compilation/runtime. And it doesn't need all the niceties human programmers tend to prefer (implicit conversions come to mind).
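A minimal sketch of what runtime-checked pre/postconditions could look like, written here as a hypothetical Python decorator (everything below is made up for illustration; the language imagined above would check these statically, more like a theorem prover):

```python
def contract(pre, post):
    """Hypothetical decorator: check a precondition on the arguments
    and a postcondition on the result, at call time."""
    def wrap(fn):
        def inner(*args):
            assert pre(*args), f"precondition failed for {fn.__name__}{args}"
            result = fn(*args)
            assert post(result), f"postcondition failed: {result!r}"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
def mean_abs(xs):
    return sum(abs(x) for x in xs) / len(xs)

print(mean_abs([-2, 4]))  # → 3.0
```

The point of making such contracts mandatory is that the compiler, not the LLM, becomes the arbiter of whether the stated assumptions actually hold.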
It's a Profile Guided Optimization language - with memory safety like Rust.
It's extremely easy to optimize assuming you either 1) profile it in production (obviously has costs) or 2) can generate realistic workloads to test against.
It's like Rust, in that it makes expressing common illegal states just outright impossible. Though it goes much further than Rust.
And it's easier to read than Swift or Go.
There's a lot of magic that happens with defaults that languages like Zig or Rust don't want, because they want every cost signal to be as visible as possible, so you can understand the cost of a line and a function.
LLMs with tests can - I hope - do this without that noise.
Possibly. First, I think there's still low hanging fruit in creating a programming language designed to be as easy as possible for agents to work with that we won't try to unlock until people writing code is a curiosity. Second, agents don't care about verbosity of code, so we can do verbosity/correctness/tooling tradeoffs that wouldn't have made sense when humans were the sole consumers of the code.
Yes, very much so. Programming languages are tools for thought, if you want your LLMs to think better, then they'll need better tools for thinking than the ones we have today. That they are mostly thinking in and writing Python is incidental to when they were born and a limitation of current AI technology, not its final evolution.
Novel programming languages still have educational value for those building them, and yes, we still need programming languages. I don't see any reason we would not need them. Even if AI is going to write the code for you, how is it going to write it with no programming language? With raw binary? Absolutely not.
Eventually it won't need to write any code at all. The end goal for AI is "The Final Software": no more software needs to be written, you just tell the AI what you actually want done and it does it, no need for it to generate a program.
But how do you know AI can generate programs without writing code? It can't today -- in fact, the best thinking models work by writing code as part of the process. Natural intelligence requires it as well, since all our jobs are about expressing problem domains formally. So why would we expect an artificial intelligence to be able to reason about the kinds of programming problems we want it to without a formal language?
Maybe they will not be called programming languages anymore because that name was mostly incidental to the fact they were used to write programs. But formal, constructed languages are very much still needed because without them, you will struggle with abstraction, specification, setting constraints and boundaries, etc. if all you use is natural language. We invented these things for a reason!
Also the AI will have to communicate output to you, and you'll have to be sure of what it means, which is hard to do with natural language. Thus you'll still have to know how to read some form of code -- unless you're willing to be fooled by the AI through its imprecise use of natural language.
How did you get that "no programming language" conclusion? There are so many well-established languages that are more than we need already; the market has picked the winners, and AI is well trained on them. These are facts. If a new language is needed down the road for AI coders, most likely it will be created using AI itself. For the moment, a human-created niche language is too late to the party; move on.
As long as warnings are clear I’d rather find out early about mistakes.
Speak for yourself.
https://github.com/aardappel/treesheets
https://news.ycombinator.com/item?id=47239042
> Dynamic code loading
This is what good language design looks like.
https://m.youtube.com/watch?v=fjtVsYqAR3s
In fact, LLMs have shown that we really, really need new programming languages.
1. They have shown that the information density in our existing languages is extremely low: small prompts can generate very large programs.
2. But the only way to get that high information density now (with LLMs) is to give up any hope of predictability. I want both.
...wait a few minutes...
THE LORD OF THE RINGS
http://lotrproject.com/statistics/books/wordscount
And if you're not that confident, shouldn't you still be optimising for humans, because humans have to check the LLM's output?
We shall see.
I'm almost ready to launch v0.1 - but the documentation is especially a mess right now, so I don't want to share yet.
I'll update this comment in a week or so [=
I don't think LLMs have solved the problem of wanting code that's concise and also performant.