Clojure is a lot of fun to tinker with, but man… I love my static types. I think I’d hate to work on a large codebase in Clojure and constantly be wondering what exactly “m” is.
As a programmer with almost exclusively statically-typed compiled language experience, I used to strongly believe this too. In the last year, though, I've seriously toyed around with a number of dynamic languages, really trying to grok the dynamic mindset, and it's been very eye-opening. I never expected quite the degree of productivity boost that I have felt in these languages, and I must admit that I've also found many of them quite joyful to work with. Dynamic languages are profoundly creative, in all the senses of creative. I've found myself thinking about my programs in a totally different way, which has been really lovely (and honestly a timely reminder of what I loved about programming in the first place).
To be fair, I will readily say that the lack of static analysis really does bite when refactoring, though I think that good design principles and the overall productivity multiplier may offset that cost (also unique, descriptive, grep-able names!). I guess I've also seen enough C++ template spaghetti to know that static typing is no panacea either.
I don't know to what extent I'll use dynamic languages going forward, though for now I'm kind of in love with opening up a window into the computer and building up my digital sandcastles. Many of these languages also have a great FFI story, which is making me dream up cool bilingual projects that play on the strengths of both approaches.
All in all, no regrets about my adventures in dynamic-land.
It is my favourite language on the JVM, when not using Java.
Because it's a Lisp-based language, it brings something else to the table besides "let's replace Java", and the community is welcoming of the host environments where Clojure is a guest.
Clojure is not only on JVM. I often use babashka for shell-scripting and nbb for tinkering on node.js. There's ClojureDart if you like Flutter. For Lua, there's Fennel which is not Clojure but has similar syntax and inspiration. There's Clojerl for Erlang, and glojure, joker and let-go for Golang. There's clj-python and clojure-rs. There's jank-lang (which is not production-ready yet, but already is very promising).
> Clojure is a lot of fun to tinker with, but man… I love my static types.
Static types are great, but boy... I love my REPL. I think I'd hate to actually work while writing code. REPL-driven interactivity with Clojure allows me to treat the work like I'm playing a video game.
It's not just that. Static types do help, yet dismissing an entire language because of a single aspect of it is extremely short-sighted. It's like rejecting Russian or Turkish, only because they have no concept of definite or indefinite articles.
Sure, Clojure is dynamically typed, but it is also strongly typed. In practice that means, for example, that Clojurescript enforces those type guarantees when compiling to Javascript, sometimes emitting safer code than even statically typed Typescript does.
That is exactly my feeling, like lately everything must be "safe" and statically typed. While I do see some (big, sure) pros, I also see some cons that I feel are systematically ignored or neglected. For me it seems to be a kind of fad/hype… but maybe I'm just connected to the wrong news feeds.
> maybe I’m just connected to the wrong news feeds
I don't think so. It's just like said in another comment:
For everything, there's a trade-off. Some just accept those trade-offs, build their vision, and launch it into the world; Some waste time, lamenting that reality doesn't align with their ideals.
Static typing works - just like formal methods, just like dependent types, just like unit testing, just like generative testing, just like many other different ideas and techniques. They each have their own place and use cases, strengths and weaknesses, pros and cons. Picking one single paradigm, technique, design pattern, or methodology - no matter how amazingly powerful they are - and just dogmatizing and fetishizing it is simply immature. Reaching the point where you clearly understand that there are truly no silver bullets and everything is about making compromises is a sign of professional growth and true, genuine experience.
For us the combination of malli and clj-kondo worked really well. Also we haven't faced that problem yet, as the codebase is fairly small. But I can totally see how types become quite useful when navigating large codebases.
Having worked with large Clojure codebases, I found the Malli/Spec duct-taping of types to be a poor man's statically typed language, especially with the developer experience being quite poor. While it will validate at runtime, I still have no idea what shape anything is by just hovering over it - I have to constantly navigate to the definition files instead, and they are also more cumbersome to use and maintain.
I've come to the conclusion that it is just a better experience to use a language that already has static types for large projects than to try to make a dynamic language have similar things. Having to wrap every function in an error boundary to get somewhat of a meaningful debug experience is just... awful.
> duct-taping of types to be a poor man's statically typed language
On the other hand, they allow you to do some very interesting things, like using specs for complex validation. Once written, specs can then be used for generating data for both UI testing and property-based unit tests. We once built a set of specs to validate an entire ledger - imagine being able to generate a bunch of transactions where the numbers build intelligently on previous entries.
Other languages do have similar capabilities - type providers in F#/OCaml, zod in Typescript, QuickCheck/ScalaCheck in Haskell and Scala - but Clojure is quite unique here in combining runtime validation, generative testing, and data definition in such a cohesive way. The ability to compose specs and use them across different contexts (validation, generation, documentation) is particularly powerful.
Another impressive thing is that you can easily share the logic between different runtimes - the same specs can be used on both the JVM and Javascript, which is surprisingly difficult to achieve even when writing in Node with TS/JS - you cannot easily share the same validation logic between the backend and the browser, even for the same javascript runtime, using its native language. Clojure lets you do that with ease.
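As a minimal sketch (a hypothetical ::transaction spec, not the ledger specs described above), the very same definition drives both runtime validation and test-data generation:

  (require '[clojure.spec.alpha :as s]
           '[clojure.spec.gen.alpha :as gen])

  (s/def ::amount pos-int?)
  (s/def ::currency #{:usd :eur :gbp})
  (s/def ::transaction (s/keys :req-un [::amount ::currency]))

  ;; runtime validation
  (s/valid? ::transaction {:amount 100 :currency :usd})      ;; => true
  (s/explain-data ::transaction {:amount -5 :currency :usd}) ;; => describes the problem

  ;; the very same spec generates data for property-based tests
  (gen/sample (s/gen ::transaction) 3)
  ;; => e.g. ({:amount 2, :currency :eur} {:amount 1, :currency :usd} ...)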
For everything, there's a trade-off. Some just accept those trade-offs, build their vision, and launch it into the world; Some waste time, lamenting that reality doesn't align with their ideals.
Yes, you would know what it is at that moment. You would not, however, know whether what it is at that moment in time is actually correct, or what the expected shape is, without deconstructing the entire function that is the receiver of the data. That's where static types are useful - I can just hover my mouse over a function and it will show me the expected input and output, and I do not need to read and understand the contents of the function to know whether the data is correct, because it would throw an error if it is not - like if a string suddenly becomes a number or is missing a piece of information, etc.
Theoretically, yes. And trust me, I loved static types. I use Rust for almost everything. However, with the programming loop - the iteration loop - that you get into with Lisp, especially with things like Common Lisp, it's not that much of a concern. But I agree that with any other language which is not a Lisp, a static type system is far superior.
As nice as nrepl/cider are, doing what amounts to setting a breakpoint in the middle of a function to see what `m` looks like isn't a replacement for knowing the type without executing code. It's just something we put up with.
Yeah, as I also commented on the sibling comment, the real thing here is the way you program a Clojure application or a Common Lisp application (I used to use Steel Bank Common Lisp, so I can speak to that): you immediately go into the REPL and you jack in. It's actually a little bit more difficult to do that in Clojure than in Common Lisp.
But the mental model is fundamentally different. It's not like you write a bunch of code, set a breakpoint and see what things are. You essentially boot up a lisp image and then you make changes to it. It's more like carving out a statue from a piece of rock rather than building a statue layer by layer.
I've been using Clojure for a while and I rarely ever wonder "what 'm' is" - that almost never happens, despite the language being dynamically typed.
Data shapes in Clojure are typically explicit and consistent. The context usually makes things quite obvious. Data is self-describing - you can just look at a map and immediately see its structure and contents - the keywords serve as explicit labels and the data, well... is just fucking data. That kind of "data transparency" makes Clojure code easier to reason about.
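A tiny illustration of that data transparency (a made-up value, purely for the sake of example):

  (def user {:id 7 :name "Kim" :roles #{:admin}})

  (prn user)    ;; => {:id 7, :name "Kim", :roles #{:admin}} - the shape is right there
  (keys user)   ;; => (:id :name :roles)
  (:roles user) ;; => #{:admin} - keywords double as accessor functions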
In contrast, in many other PLs, you often need to know the class definition (or some other obscured shit) to understand what properties exist or are accessible. The object's internal state may be encapsulated/hidden, and its representation might be spread across a class hierarchy. You often can't even just print it to see what's inside in a meaningful way. And of course, that makes it nearly impossible to navigate such codebases without static types.
And of course the REPL - it simply feels extremely liberating, being able to connect to some remote service, running in a container or k8s pod and directly manipulate it. It feels like walking through walls while building a map in a video game. Almost like some magic that allows you to inspect, debug, and modify production systems in real-time, safely and interactively, without stopping or redeploying them.
Not to mention that Clojure does have very powerful type systems, although of course, skeptics would argue that Malli and Spec are not "true" types and they'd be missing the point - they are intentionally different tools solving real problems pragmatically. They can be used for runtime validation when and where you need it. They can be easily manipulated as data. They have dynamic validation mechanisms that static types just can't easily express.
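As one small sketch (a hypothetical Malli schema, not from this thread) of the kind of data-dependent check that's awkward to express in a static type system - and note that the schema itself is just data you can inspect and transform:

  (require '[malli.core :as m])

  (def Discount
    [:and
     [:map [:price pos?] [:discount pos?]]
     ;; the constraint depends on the values themselves
     [:fn (fn [{:keys [price discount]}] (<= discount price))]])

  (m/validate Discount {:price 100 :discount 10})  ;; => true
  (m/validate Discount {:price 100 :discount 150}) ;; => false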
One thing I learned after using dozens of different programming languages - you can't just simply pick one feature or aspect in any of them and say: "it's great or horrible because of one specific thing", because programming languages are complex ecosystems where features interact and complement each other in subtle ways. A language's true value emerges from how all its parts work together, e.g.,
- Clojure's dynamic nature + REPL + data orientation
- Haskell's type system + purity + lazy evaluation
What might seem like a weakness in isolation often enables strengths in combination with other features. The language's philosophy, tooling, and community also play crucial roles in the overall development experience.
If one says: "I can't use Clojure because it doesn't have static types", they probably have learned little about the trade they chose to pursue.
Correction: It's not that you "don't have to use the REPL"; you simply cannot even have it in that case. REPL-driven development is quite a powerful technique, and no, "many other languages too" don't have it. For it to be "a true REPL," it must be in the context of a homoiconic language, which Clojure is.
Sure, static typing is great, but perhaps you have no idea what it actually feels like - spinning up a Clojurescript REPL and being able to interactively "click" and "browse" through the web app programmatically, controlling its entire lifecycle directly from your editor. You can do the same thing with a remote service running in a Kubernetes pod. It's literally like playing a video game while coding. It's immensely fun and unbelievably productive.
With a REPL-connected editor (and most have a way to do this), you can simply hover over it in your editor as well. Even though most languages can have a REPL today, few integrate it in the development experience the way lisps do.
The compiler should know it for you, so you cannot get it wrong no matter what. The REPL here is a band-aid not a solution.
I mean, I love Clojure, and used it for personal and work projects for 10+ years, some of which have hundreds of stars on github. But I cannot count the time wasted to spot issues where a map was actually a list of maps. Here Elixir is doing the right thing - adding gradual typing.
> But I cannot count the time wasted to spot issues where a map was actually a list of maps.
Sorry, I'm having a hard time believing that. I don't know when you last used the language, but today there are so many different ways to easily see and analyze the data you're dealing with in Clojure - there are tons of ways in CIDER, and if you don't use Emacs, there are numerous ways of doing it in Calva (VSCode) and Cursive (IntelliJ), even Sublime. There are tools like Portal, immensely capable debuggers like Flowstorm, etc. You can visualize the data, slice it, dice it, group it and sort it - all interactively, with extreme ease.
I'm glad you've found great fondness for Elixir; it is, indeed, a great language - hugely inspired by Clojure btw.
You still don't need to bash other tools for no good reason. It really does sound fake - not a single Clojure developer, after using it for more than a decade, would call a Lisp REPL "a band-aid and not a solution". It smells more like someone with no idea of how the tool actually works.
Maybe it's so. Or maybe you run my code in your deps. As you can see, there is at least one Clojure dev who thinks so.
I found spec very useful and damn expressive (and I miss it in other languages), but again that's runtime. I know Rich says such errors are "trivial", but they waste your time (at least mine).
To each their own. Some people (not me) say that Rust's pedantic compiler feels like a bureaucratized waste of time akin to passing through medieval Turkish customs. For me personally, working with Clojure dialects feels extremely productive. Even writing in Fennel, which is not Clojure but syntactically somewhat similar, is much faster for me than dealing with Lua. Even when I have to write stuff in other PLs, I sometimes first build a prototype in Clojure and then rewrite it. Although it sounds like spending twice the effort, it really helps me not to waste time.
What do you mean? Clojure is a strongly typed language - every value always has a definite type. It's not like Javascript. Types in Clojure are fixed and consistent during runtime; they just aren't declared in advance.
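A quick illustrative REPL sketch of the difference: Clojure refuses to silently coerce between types, where Javascript would happily produce "12":

  (+ 1 "2")
  ;; => ClassCastException: class java.lang.String cannot be cast to ... java.lang.Number

  (type "2")  ;; => java.lang.String - every value carries a definite type at runtime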
Do you think there's only one path to your function? There could be thousands in a big system. The type of the value you'll get will depend on the path you call it from. Even if it's only one path, you could easily have code doing stuff like this:
  if x > 10:
      call_my_function(10)
  else:
      call_my_function("foo")
Can't you see that unless you test every path, you won't know what type you will receive??
Your contrived example is a bad smell in ANY language. No sensible coder ever writes a function that accepts both numbers and strings - handling multiple types should be done through proper polymorphic constructs, not arbitrary conditional branches.
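To make that concrete, here's a minimal Clojure sketch (hypothetical example) of such a polymorphic construct - a multimethod that dispatches on the value's type instead of branching on it by hand:

  (defmulti describe class)
  (defmethod describe Number [x] (str "a number: " x))
  (defmethod describe String [x] (str "a string: " x))

  (describe 10)    ;; => "a number: 10"
  (describe "foo") ;; => "a string: foo"

Protocols serve the same purpose when you control the types involved.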
There's a wide spectrum of correctness guarantees in programming - dynamic weak, dynamic strong, static, dependent, runtime validation & generative testing, refinement types, formal verification, etc.
Sure, if your domain needs an extreme level of correctness (like in aerospace or medical devices), you do need formal methods, and static typing just isn't enough.
Clojure is perfectly fine, and maybe even more than fine for certain domains - pragmatically it's been proven to be excellent, e.g., in fintech and data analysis.
> Can't you see that unless you test every path ...
Sure, thinking in types is crucial, no matter what PL you use.
And, technically speaking, yes, I agree, you do need to know all paths to be 100% certain about types. But that is true even with static typing - you still need to test the logical correctness of all paths. Static typing isn't some panacea - a magical cure for buggy software. There's not a single paradigm, technique, design pattern, or set of ideas that guarantees excellent results. Looking at any language from the single angle of where it stands in that spectrum of correctness guarantees is simply naive. Clojure BY DESIGN is dynamically typed; in return it gives you several other tools to help you write software.
There's an entire class of applications that requires significantly more effort and mental overhead to build using other languages. Just watch some Hyperfiddle/Electric demos and feel free to contemplate what it would take to build similar things in some other PL, statically typed or whatnot. https://www.youtube.com/watch?v=nEt06LLQaBY
> myriad reasons why Common Lisp is far superior to Clojure
Some narrow view. Have you tried thinking that maybe Clojure intentionally chose not to include type declarations because they can lead to a messy middle ground? After all, maybe not every feature from Common Lisp needs to be replicated in every Lisp dialect? Besides, Clojure's Spec and Malli can be far more powerful for validation as they can define complex data structures, you can generate test data from them, you can validate entire system states, and they can be manipulated as data themselves.
If CL is so "far superior" as you say, why then can't it be 'hosted' like Clojure? Why does Clojure have Clojurescript, ClojureCLR, ClojureDart, babashka, nbb, sci, etc.? I'm not saying that to argue your specific point. Neither of them is 'superior' to the other. They both have different purposes, philosophies, and use cases. Each has its strengths, pros, and cons. And that is actually very cool.
> not to include type declarations because they can lead to a messy middle ground?
What? Type declarations in CL (which came from prior Lisp dialects) were added, so that optimizing Lisp compilers can use those to create fast machine code on typical CPUs (various CISC and RISC processors). Several optimizing compilers have been written, taking advantage of that feature. The compiler of SBCL would be an example. SBCL (and CMUCL before that) also uses type declarations as assertions. So, both the SBCL runtime and the SBCL compiler use type declarations.
I've only played with Clojure (not used it professionally, I'm working with Scala) but Clojure interop with Java is way better than what I can see here: https://abcl.org/doc/abcl-user.html The way it's integrated with the host platform makes it better for most use cases IMHO.
> The way it's integrated with the host platform makes it better for most use cases IMHO.
That may be. ABCL is running on the host system and can reuse it, but it aims to be a full implementation of Common Lisp, not a blend of a subset of Lisp plus the host runtime. For example one would expect the full Common Lisp numerics.
One of its purposes is to be able to run portable Common Lisp code on the JVM. Like Maxima or like bootstrapping the SBCL system.
There is a bit more about the interop in the repository and in the manual:
I didn't say "type declarations can lead to a messy middle ground in Common Lisp" - obviously they exist there for a reason, but, maybe they DON'T exist in Clojure, also for good reasons, no?
ABCL does exist, sure, and there's also LCL for Lua. Yet, 8 out of 10 developers today, for whatever reasons, would probably use Fennel to write Lispy code targeting Lua, and probably more devs would choose Clojure (not ABCL) to target the JVM. That doesn't make either Fennel or Clojure "far superior" to Common Lisp, and vice versa.
> I didn't say "type declarations can lead to a messy middle ground in Common Lisp" - obviously they exist there for a reason, but, maybe they DON'T exist in Clojure, also for good reasons, no?
Until recently (2023), the type inference was very weak and did not work with higher-order functions (map, filter, reduce, etc.).
As a result, Typed Clojure was practically unusable for most applications. That has changed as of last year. For instance, the type checker can now handle the following kinds of expressions.
  (let [f (comp (fn [y] y)
                (fn [x] x))]
    (f 1))
This expression was a type error before early 2023, but now it is inferred as a value of type (Val 1).
Unfortunately, many Clojure users think types are somehow a bad thing and will usually repeat something from Rich Hickey's "Maybe Not" talk.
I've worked with Clojure professionally. The codebases I've seen work around dynamic types by aggressively spec'ing functions and enabling spec instrumentation in development builds. Of course, this instrumentation had to be disabled in production because spec validation has measurable overhead.
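For context, a minimal sketch of that pattern (hypothetical function and spec names, not from those codebases): clojure.spec's fdef attaches a spec to a function, and instrumentation turns argument checking on, typically only in development builds.

  (require '[clojure.spec.alpha :as s]
           '[clojure.spec.test.alpha :as st])

  (defn total [amounts] (reduce + amounts))
  (s/fdef total :args (s/cat :amounts (s/coll-of number?)))

  (st/instrument `total)    ;; dev build: (total "oops") now throws a spec error
  ;; (st/unstrument `total) ;; prod build: checking off, to avoid the runtime overhead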
Although Typed Clojure has made remarkable progress, most of the editor tooling I recall for Typed Clojure is an extension to CIDER that hasn't been maintained for several years. (The common excuse given in the Clojure community is that some software is "complete" and thus doesn't need updates, but I have regularly found bugs in "complete" Clojure libraries, so I don't have much confidence here.)
Overall, if one wants static typing, then Clojure will disappoint. I still use Clojure for small, personal-use tools. Having maintained large Clojure codebases, however, I no longer think the DX (and fearless refactoring) in languages like Rust and TypeScript is worth trading away.
I think the consensus is that it is not really mature enough for general adoption. Also, most people prefer to use one of the specification libraries that are available (spec, schema, malli). These allow you to do a sort of design-by-contract style of programming.
Interesting story. I am not entirely convinced that all credit should go to the programming language here, though.
My theory is that communicating abstractions is hard. If you work on your own, or in a (very) small team, you can come up with powerful abstractions that allow you to build amazing systems, quickly. However, sharing the underlying ideas and philosophy with new team members can be daunting. As systems grow, and mistakes are made, it becomes more and more likely that you run into serious problems.
This may also be why Java and similar object oriented programming languages are so successful for systems that have to be maintained for ages, by large teams of developers. There are but few abstractions and patterns, and it does not allow you to shoot yourself in the foot, nor to blow your whole leg off. Conversely, this may also be why complex frameworks, such as Spring, are not always so nice, because they introduce (too?) powerful abstractions, for example through annotations. It may also clarify why more powerful languages such as Scala, Common Lisp, Smalltalk, Haskell, etc, consistently fail to pick up steam.
Another theory is that not every developer is comfortable with abstract concepts, and that it simply takes a team of smart people to handle those.
Another theory is that C inspired languages are very mechanistic and easier to visualize. Same goes for OOP with the Animal->{Cat,Dog} explanation. But that's just surface level and once you get to the difficult part (memory management in C and software design in Java) where the ability to grasp abstractions is required, we're back to square one.
I believe once you've got to some point, dealing with abstractions is a way of life. It's either in the language, the technical requirements, or the software design.
"Objects are the way we think" is one of the largest design traps ever laid in software development. Because if you design your program like it, unless in certain special circumstances, it will be shit.
There's the way we think about the problem and its solution, and there's the way the machine can execute these solutions. More often than not there's no direct mapping other than a tower of abstractions. The issue is that the problem and our model are too fluid, and it's best not to rely that much on a certain paradigm.
> It may also clarify why more powerful languages such as Scala, Common Lisp, Smalltalk, Haskell, etc, consistently fail to pick up steam.
Languages need a window of opportunity, and many of those squandered it.
Clojure won over Scala because at the time when people were looking for an alternative JVM language, Clojure was more of a departure from Java and seemed to have better tooling (compile times and syntax support) than Scala.
Smalltalk and Common Lisp wasted their moment by not being cheap/free to people using micros in the 1980s.
Lisp, especially, very much wasted its moment with micros. The fact that no Lisper had the vision to dump a Lisp onto the bank switched micros (which makes GC really easy and useful) of the mid to late 1980s is a self-inflicted bullet wound. Lots of us hated doing assembly language programming but had no real alternative. This was a loss born of pure arrogance of Lispers who looked down on those micros as not being "real machines".
I weep for all the hours I wasted doing assembly language as a teenager that I could have been writing Lisp. How much software could have been written that would have been <100 lines of Lisp if only someone had written that tool?
...in what sense has Clojure actually won over Scala?
I see way more Scala in companies over the last ~5 years and have the impression of its ecosystem being more robust. Not uncommon for greenfields. It's been longer than that since I even encountered an active Clojure codebase. This is from a data-engineer perspective.
Clojure may be more popular for some niche of app startups perhaps? We are in different "bubbles" I suppose.
I can't really speak to modern stuff, and it is certainly possible my memory is faulty. Scala was a PITA in the early 2000s and you were generally better served with something else if you could move off the JVM. Clojure came in about mid 2000s and seemed to be what a bunch of people stuck on the JVM but doing data processing were desperate to find.
My feeling was that a lot of Clojure folks moved on as the data processing stuff moved on from Java/JVM.
My impression has been that JVM-based languages have effectively been on a steady general decline for a while now. Java has fixed a lot of its issues; Kotlin gave the Java expats somewhere to go. And Javascript/Node along with Go drained out the general masses who didn't really want to be on the JVM anyhow.
However, it is interesting that Clojure has effectively disappeared in those rankings.
Lisps really only come into their own above a certain size/amount of resources. For early Lisp the PDP-11 with 2-4 MB RAM was considered to be nice. There were some Lisp implementations for the PCs but they suffered from the need for compatibility with older hardware.
The bank switched memory architectures were basically unused in mid 80s micros (C128, CoCo3, etc.).
Lots of utility software like spell checkers and the like still existed. These would be trivial to implement in Lisp but are really annoying in assembler.
Lisp would have been really good relative to BASIC interpreters at the time--especially since you could have tokenized the atoms. It also would have freed people from line numbers. Linked lists work well on these kinds of machines. 64K is solid for a Lisp if you own the whole machine. You can run over a bank of 16K of memory for GC in about 50 milliseconds or so on those architectures.
Had one of the Lisperati evangelized Lisp on micros, the world would look very different. Alas, they were off charging a gazillion bucks to government contracts.
However, to be fair, only Hejlsberg had the correct insights from putting Pascal on the Nascom.
> Lisp would have been really good relative to BASIC interpreters at the time
I see no evidence for that. Lisp was a pain on tiny machines with bad user interface.
> 64K is solid for a Lisp if you own the whole machine.
I had a Lisp on an Apple II. It was a useless toy. I was using UCSD Pascal and Modula 2 on it. Much better.
I had Cambridge Lisp on an Atari with 68k CPU. It was next to unusable due to frequent crashes on calling FFI functions.
The first good Lisp implementation I got was MacScheme on the Mac and then the breakthrough was Macintosh Common Lisp from Coral Software.
> Had one of the Lisperati evangelized Lisp on micros
There were articles for example in the Byte magazine. Lisp simply was a bad fit to tiny machines. Lisp wasn't very efficient for small memory. Maybe with lots of work implementing a tiny Lisp in assembler. But who would have paid for it? People need to eat. The tiny Lisp for the Apple II was not usable, due to the lack of useful programming environment.
> Alas, they were off charging a gazillion bucks to government contracts.
At least there were people willing to pay for it.
> There were articles for example in the Byte magazine.
And they were stupid. Even "good" Lisp references didn't cover the important things like hashes and arrays. Everybody covered the recursive crap over and over and over ad nauseam while people who actually used Lisp almost always sidestepped those parts of the language.
> I had a Lisp on an Apple II. It was a useless toy. I was using UCSD Pascal and Modula 2 on it. Much better.
And yet UCSD Pascal was using a P-machine. So, the problem was the implementation and not the concept. Which was exactly my point.
> At least there were people willing to pay for it.
Temporarily. But then it died when the big money went away and left Lisp all but dead. All the while all the people using languages on those "toys" kept right on going.
> And yet UCSD Pascal was using a P-machine. So, the problem was the implementation and not the concept. Which was exactly my point.
My point is that implementations don't come from nothing. You can't just demand them to be there. They have to be invented/implemented/improved/... Companies at that time did not invest any money in micro implementations of Lisp. I also believe that there was a reason for that: it would have been mostly useless.
> Temporarily. But then it died when the big money went away and left Lisp all but dead. All the while all the people using languages on those "toys" kept right on going.
This has actually been my experience. When I started with Clojure I was writing it badly. I came from the NodeJS world. It even took me a week just to set up the working environment.
With time you get to understand the power of simplicity - how to break down the problem and compose the solutions to achieve your intended result.
Powerful abstractions tend to come back and bite you a few years later when the industry trends shift and everyone else starts using a different set of abstractions. Now that small team is stuck maintaining those custom abstractions forever and is unable to take advantage of new abstractions from vendors or open source projects. So their progress stagnates while competitors race ahead. I've been on the wrong side of that before.
> object oriented programming languages are so successful for systems that have to be maintained for ages,
ehmmm.... excuse me.... erghmm... what about Emacs? I'm sure it absolutely counts as a "successful system that has to be maintained for ages". For far, far longer than any Java-based project that ever existed.
Even though Elisp lacks:
- static typing
- OOP class system (until relatively recently)
- Modern package management (until ELPA/MELPA)
- Multi-threading model
- JIT compilation
Perhaps "the secret sauce" of successful software is in simplicity? Maybe some programmers just get it, and for others, it is such an obscure and mysterious entity. Some programmers write "programs to program computers", and some may have realized that they are not trying to solve purely technological problems, but they are, in fact, tackling socio-technological problems, and they write programs to communicate their ideas to fellow human beings, not machines.
I've been using emacs for over 10 years. Maybe close to 15. I can't get rid of it, because even for all its faults, I love it. I'm hopelessly stuck with it.
However, emacs is a fucking mess, and there is a reason "init.el bankruptcy" is a thing and why the most popular way to use emacs is through various frameworks such as doom or spacemacs.
In emacs, nearly everything can (and often does) mess with everything else. It is serious integration hell to actually get things to work together, and the work that goes into e.g. doom is basically all about managing that complexity through good abstractions and more rigid ways to configure and install things.
Emacs is also objectively dogshit in a lot of ways compared to most modern editors. LSP is ridiculously slow and a constant source of performance issues, many of which are probably directly related to emacs internals. Eglot seems to do better but it's a lot more limited (you can't use multiple language servers together, for example). Then there's things like the buffer being the data-structure for everything, which is sort of like modeling nearly everything as one long string. Things that would be trivial to do in most other languages or contexts are difficult and error-prone in emacs.
> Emacs is also objectively dogshit in a lot of ways compared to most modern editors
Yet not a single modern editor can even come close to it when it comes to extensibility and customization; self-documenting; complete programmability; malleability; ability to perform virtually any computing task without leaving the editor. Modern editors excel at being user-friendly out of the box. Emacs excels at becoming exactly what each user needs it to be. While you find yours to be "objectively dogshit" in comparison, I can probably easily demonstrate to you how mine eats their "modern" shit without even choking.
> LSP is ridiculously slow
Have you tried to get to the bottom of it? Sometimes it's just the lsp-server implementation that is slow. Have you tried https://github.com/blahgeek/emacs-lsp-booster? Did you build Emacs with the --with-native-comp flag? Have you tried using plists for deserialization https://emacs-lsp.github.io/lsp-mode/page/performance/#use-p...? Have you used Emacs' built-in profiler? Sometimes the issue might be somewhere else, e.g., some fancy modeline settings.
> Things that would be trivial to do in most other languages or contexts
Sure, that's why we see so many "Emacs killers" built in Java, because replicating Org-mode is so trivial in it. /s
Yes, I've wasted a considerable amount of time trying to get to the bottom of the performance problems. So have multiple other avid emacs users I know that regularly have to deal with these problems. One of them is trying to live with eglot because it feels a lot faster - now they are also trying to use a sidecar process multiplexer to support multiple language servers. This is like the emacs experience in a nutshell.
My conclusion is basically: some language servers are slow, which doesn't help, and some are also very noisy. Both scenarios are handled extremely poorly by emacs, i.e. locked ui when parsing or waiting sometimes, stuff like that.
You really have to be a special kind of oblivious to argue so vehemently while literally suggesting I run a separate program on the side to get what most would consider to be a basic, functioning editing environment.
Re the org-mode argument: I really think you mistake lack of interest for insurmountable complexity. Emacs is not a magic machine that can do things no other software can do. You can probably count the number of people who genuinely care about org-mode in the world on 10 sets of hands.
Okay, I'm just trying to help. I, for one, don't experience extremely vexing problems with performance like you're describing. But of course, I'm not going to pretend that I'm unaware that Emacs needs to improve on that front, and over the years there have been numerous improvements, so it's not a completely hopeless situation.
> basic, functioning editing environment
Different priorities. For me, "basic, functioning editing environment" means stuff like indirect buffers - which not a single other modern, popular editor offers.
> Emacs is not a magic machine that can do things no other software can do.
In some cases, it literally feels exactly like that. So while not literally magical, Emacs enables workflows and capabilities that can feel transformative and, yes, almost magical to its users. The "magic" comes from its architecture and philosophy. I can easily list dozens of use cases I've adopted in my workflow that are simply impractical to even try to replicate in other editors.
One can criticize pretty much any software product. Yet Emacs still falls in the category of "successful systems that have been maintained for ages". Anyway, we're getting sidetracked. My main point in the comment wasn't about Emacs, the main point is in the last paragraph. Maybe you just didn't even get to it.
It's embarrassing to have to even say this, but a counterexample, or even a few, does not invalidate the argument. You would need some sort of representative sample of successful projects, and then figure out which paradigm was used for each one and see if there's any statistically significant pattern. Good luck doing that reliably though.
While focusing on the first part of my comment it seems you completely ignored the [main] point in the last paragraph.
All programming languages are man-made, human constructs and not a single one is the most ideal for the task of programming - just like not a single spoken language can claim absolute superiority for communication. The nature of programming often eludes clear categorization, challenging us to define whether it belongs to the domains of art, business, or engineering. Maybe it's all of these? Maybe none of them? We programmers endlessly engage in passionate debates about language superiority, constructing complex arguments to defend our preferences. I'm not a musician, but can you imagine guitar players vehemently arguing with drummers that guitars are ultimately better instruments because, I dunno, one never can perform Für Elise using drums? And then some experienced drummer comes and performs it in a way no one ever imagined. I think the beauty of programming, like music, lies in its diversity. Instead of engaging in futile debates about superiority, we should celebrate how different languages and paradigms enrich our craft, each bringing its own unique perspective and elegance to the art of problem-solving.
One of the things I feel grateful for after learning Clojure is that many, perhaps most people in the community are genuinely experienced, seasoned software developers, driven to Clojure by curiosity after many years spent in other programming languages. Clojure allowed me to fall in love with my trade once again. The combined malleability of Lisp that can emulate (almost) any known programming paradigm and the simplicity and data-centric design in Clojure helped me (truly) understand the core principles of programming. Ironically, after using numerous different languages for many years, spending years of grokking their syntactic features and idiosyncrasies, many stackoverflow threads and tutorials later, only after switching to Clojure did I feel like I understood those languages better. One peculiar aspect about Clojurians is that they don't typically engage in general debates about programming languages without specific context, subtly breaking the famous Perlis quote about Lispers knowing the value of everything and the cost of nothing. Clojurians are known for their grounded perspective and non-dogmatic approach to problem-solving. So, they typically don't bash someone else's favorite language; instead they'd try to learn and steal some good ideas from it.
I'm curious if Elixir could provide a similar development environment?
Seems like many similar capabilities, like a focus on immutable data structures, pure functions, being able to patch and update running systems without a restart, etc.
I came to the opposite conclusion for the following reasons:
1. IEx provides a robust and interactive debugging environment that allows me to dig into whatever I want, even when running in production. I've never lost state in IEx, but that happens fairly often in CIDER and nREPL.
2. IEx uses Elixir's compilation model, which is a lot faster than CIDER and nREPL, leading to faster debugging cycles.
3. IEx is tightly integrated with Elixir whereas Clojure's tools are more fragmented.
4. IEx doesn't carry the overhead of additional middleware that CIDER and nREPL do.
I'm also not a fan of JVM deployments, so I've migrated all my code away from Clojure to Elixir during the past 10 years.
Maybe this architecture approach would be challenging in Java or Go, but the style of immutable data, don’t go crazy wrapping stuff in classes is very doable in most languages. We enforce “no mutation of data you did not just instantiate” at Notion, and use TypeScript’s powerful type system with tagged union types to ensure exhaustive handling of new variants which I really miss in languages that don’t have it (go).
I guess the major advantage for Clojure with this style is that the “persistent” data structures end up sharing some bytes behind the scenes - it’s nice the language is explicitly situated around this style, rather than TypeScript’s chaotic Wild West kitchen sink design. What I don’t understand is the advantage for “state management”. Like, you build a new state object, and then mutate some pointer from prevState to nextState… that’s what everyone else is doing too.
There are times though when it’s nice to switch gears from function-and-data to an OO approach when you need to maintain a lot of invariants, interior mutability has substantial performance advantages, or you really want to make sure callers are interpreting the data’s semantics correctly. So our style has ended up being “functional/immutable business logic and user data” w/ “zero inheritance OO for data structures”.
Whenever I read some open source TypeScript code that’s using the language like it’s Java like `class implements ISomething` ruining cmd-click go to method or an elaborate inheritance hierarchy it makes me sad.
> What I don’t understand is the advantage for “state management”. Like, you build a new state object, and then mutate some pointer from prevState to nextState… that’s what everyone else is doing too.
Because Clojure treats data as first-class citizens, we could build our own lightweight conflict resolution system using pure functions that operate on these transactions.
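A minimal sketch of the idea (hypothetical action shapes, not the article's actual system): each user action is plain data, and applying it to the previous state is a pure function, so replaying, merging, or inspecting a stream of actions is ordinary data manipulation.

  (def actions
    [{:op :set :path [:title] :value "Draft"}
     {:op :set :path [:title] :value "Final"}])

  (defn apply-action [state {:keys [path value]}]
    (assoc-in state path value))

  (reduce apply-action {} actions)
  ;; => {:title "Final"}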
What does it mean to say Clojure "treats data as a first-class citizen"? I understand FP would treat functions as first-class citizens, but the statement seems to mean something different.
OOP generally "hides" data as internal state of class instances. Everything is private unless expressed as a method on an object.
The two sentences around the one you quoted should answer the question as well:
> With Clojure, we modeled the entire collaboration system as a stream of immutable data transformations. Each user action becomes a transaction in our system.
And
> When conflicts occur, our system can merge changes intelligently because we're working with pure data structures rather than complex objects.
Whereas OOP languages combine behavior and data into a single thing (classes with methods model behavior and hide state i.e. data) functional languages separate them: functions model behavior, and data is treated more like an input and output rather than "state".
In particular with Clojure, data structures tend to be immutable and functions tend to not have side effects. This gives rise to the benefits the article talks about, though it is not without its own drawbacks.
In Clojure, treating data as a first-class citizen means that data structures (like maps, vectors, sets) can be:
1. Passed as arguments
2. Returned from functions
3. Stored in variables
4. Manipulated directly
5. Compared easily
Unlike some languages where data needs special handling or conversion, Clojure lets you work with data structures directly and consistently throughout your program.
This philosophy extends to how Clojure handles data transformations. For example, transducers are composable algorithmic transformations that can work on any data source - whether it's a collection, stream, or channel. They treat the data transformation itself as a first-class value that can be composed, stored, and reused independently of the input source.
This first-class treatment of both data and data transformations makes Clojure particularly powerful for data processing and manipulation tasks.
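For example, a small sketch (hypothetical transformation): the same transducer value applies unchanged to a vector, a lazy sequence, or a reduction, independently of the data source.

  (def xf (comp (filter even?) (map #(* % 10))))

  (into [] xf (range 10))        ;; => [0 20 40 60 80]
  (sequence xf (range 10))       ;; lazy version of the same transformation
  (transduce xf + 0 (range 10))  ;; => 200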
That's why Clojure often finds strong adoption in data analytics, fintech and similar domains. The ability to treat both data and transformations as first-class citizens makes it easier to build, for example: reliable financial systems where data integrity is crucial.
They most likely refer to homoiconicity [1], as Clojure is a dialect of Lisp. However, it's hard to say for sure, and maybe they were simply referring to the built-in syntax for maps, lists, etc.
Not only due to its homoiconic nature. All (well, technically not all, let's say most) Lisp dialects are homoiconic. Yet there are some other aspects that make Clojure specifically well-suited for data manipulation:
- immutability and persistent data structures (makes code easier to reason about [the data]; enables efficient concurrency - no locks; some algorithmic tricks make it very performant despite having to create copies of collections),
- the seq abstraction - unlike other Lisps where sequence functions are often specialized for different types, Clojure simplifies things by making this baked-in abstraction central to the language - all core functions work with seqs by default. It emphasizes lazy sequences as a unified way to process data, giving you memory efficiency, infinite sequences, etc. (see the sketch after this list)
- rich standard library of functions for data transformation
- destructuring - makes code both cleaner and more declarative
- emphasis on pure functions working on simple data structures
The combination of these features makes data processing in Clojure particularly elegant and efficient.
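A small sketch of two of those points - destructuring and the seq abstraction - over a hypothetical order map:

  (def order {:id 42 :customer {:name "Ada"} :items [{:price 10} {:price 5}]})

  (let [{:keys [id] {customer-name :name} :customer items :items} order]
    {:id       id
     :customer customer-name
     ;; map/reduce work on anything seq-able
     :total    (->> items (map :price) (reduce +))})
  ;; => {:id 42, :customer "Ada", :total 15}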
@OP "Model our domain as a graph of attributes and relationships" and "generate resolvers". I'm curious what your model looks like so that you are able to "generate resolvers"? I had looked into using Malli as the model, but curious what route you took.
Incredible story - I feel like Clojure works magic. What I like about functional programming is that it brings other perspectives on how things CAN work!!
> Today, we're building Vade Studio with just three developers – myself and two developers who joined as interns when in college. (...) Here's what we've accomplished: (...)
In how many man-hours/days? It's hard to know whether the list is long or short when all we know is that calendar time should be multiplied by three to calculate the person-time spent...
We started working on it full time around 1.5 years ago.
2 years if you count when I was exploring building it in other languages.
Alongside building Vade Studio:
I have been working as a contractor for 2 clients, developing systems for them.
The other two developers have been managing their college curriculum as well.
I am not sure how to do the math around it, but anecdotally I don't think this would be possible in any other environment.
> I don't think this would be possible in any other environment.
A few years ago, I worked in a small group (6 devs) for a retail business. We had all sorts of different third-party integrations - from payment processors to coupon management and Google Vision (so people wouldn't upload child porn or some other shit through our web and mobile apps). The requirements would constantly change - our management would be like: "Hey guys, we want this new program to launch, can we do it next week?", then a day later: "Turns out we can't do it in the state of Tennessee, let's add some exceptions, okay?", then some time later: "Folks, we really want to A/B test this but only for a specific class of customers..." Jesus, and it wasn't just an "occasional state of affairs" every once in a while; it was a constant, everyday, business-as-usual flow. We had to quickly adapt, and we had to deploy continuously. We had several services, multiple apps - internal, public, web and mobile, tons of unit and E2E tests, one legacy service in Ruby, we had Terraform, containers, API Gate, load balancers, etc.
I can't speak for myself, but a couple of my peers were super knowledgeable. They used all sorts of tools and languages before. I remember my team lead showing me some Idris features (tbh, I don't even remember anymore exactly what) and asking my opinion on whether we should find a way to implement something like that, and I couldn't hide my confusion as I didn't know anything about Idris.
Numerous times, we had discussions on improving our practices, minimizing tech debt, etc. And I remember distinctly - many times we speculated how things would've turned out if we had used some other stack, something that's not Clojure. We would explore various scenarios; we even had some prototypes built in Golang, Swift and Kotlin. And every single time, we found compelling and practical reasons why Clojure indeed was the right choice for the job.
Sure, if we had a larger team, maybe we could've done it using a different stack. But it was a startup, and we had just the six of us.
Yes, Clojure can be very terse without being extremely cryptic. Even for simple data representation, if you compare JSON and EDN, the latter can be almost twice as compact yet remain more readable than JSON. Clojure is not as terse as, e.g., Haskell, but I think it wins by being more pragmatic. Of course, some seasoned Haskellers may disagree - in some rare cases Haskell can prove to be fantastically pragmatic - but let's agree not to go down that rabbit hole of argumentation.
That's a nice point, it's not about the shortest or the most verbose code.. maybe where relevant, something one or more people can learn quickly and become productive in contributing value.
With many languages and frameworks having a decent amount of similar functionality and performance available, more and more is left to personal preference and interpretation of what to use.
Popularity might matter when trying to hire juniors. Given how many juniors seem to appreciate sincere mentorship when it's mutual, I'm not super sure on this anymore.
Popularity might not matter when trying to hire other types of developers, including seniors. It's less about what's popular, or the right badge to signal.
Of the polyglot folks I get to know who are humble about their smarts, it's interesting how many have independently ended up on Clojure, or a few others. Universally there's usually a joke about how long bash scripts can keep doing what's needed until a decision finally gets locked in.
> Of the polyglot folks I get to know who are humble about their smarts, it's interesting how many have independently ended up on Clojure
Yeah, Clojure is a weird thing. I myself, after using numerous different PLs - ranging from Basic/Pascal/Delphi to .NET languages like C#/F#, then later Python and, of course, "script" options like Javascript, Typescript, Coffeescript, and many others - still never felt like ever obtaining "the polyglot" status. I just went from one language to another, and every time it felt like burning the old clothes. I was always a "Blub programmer" at any given point of my career, every time moving from one "Blub PL" to another.
Learning Clojure somehow renewed my passion for the craft and forced me into a deeper understanding of core programming ideas. Ironically, long-abandoned old friends became more familiar than ever before. I don't know exactly how that happened. Either because of the hosted nature of Clojure that forced me to always be on at least two different platforms at the same time - JVM and Javascript - or maybe because it exposed me to a lot of smarter and more experienced people than ever before. Maybe because of "Lisp-weirdness", thinking how to program with "no syntax" or being forced to think in terms of "everything is a function" where you don't even need primitives, even numbers, since you can express just about anything with functions. It could be that Clojure is of a sufficiently higher level that it forced me to work on more complicated problems; I really can't say. One thing I know: I was a "Blub" programmer before Clojure. After a few years of Clojure, I have become an "experienced Blub programmer" in the sense that I no longer care in what programming language I need to write, debug, troubleshoot, or build things, with maybe a few exceptions like Haskell, Prolog and Forth - they'd probably require me to put on a student hat again to become productive, even though I know some bits of each.
Sorry, I didn't mean to make this "about me." What I'm trying to say is that I am sad that it took me years of wasted time. I wish someone had truly forced me into learning something like Clojure sooner, but again, maybe learning Haskell would have had the same or even better effect. I just want to encourage young programmers to try different languages instead of obsessing over their favorite ones. "Well," you may say, "aren't you right now fetishizing Clojure?" Here's the thing - I don't really consider Clojure any different from numerous other Lisp dialects I use - for me, all Lisps feel like the same language. The mental overhead of switching between Clojure/nbb/Elisp/Common Lisp/Fennel/etc. is so small, it's not even funny. Even jumping between Typescript and Javascript (same family) is more taxing than switching between different Lisps. Perhaps the true value of Lisps (including Clojure) is not so much technological - maybe it's rather cerebral and epistemic.
Is there a technical reason I can't sign into Studio with email? I'll really try to avoid signing in with other platforms, but I'll consider Github if there's some reason it has to be. I'll never sign into a service with Google.
We wanted to get users onto the Dashboard, building apps in as few clicks as possible, while keeping the accounts secure. Google being the most widely used auth provider and GitHub the developers' favorite, they have been our first choices.
Your concern makes sense though and we'll be considering it in the next feature rollout.
I think you have the right idea with getting people in as fast as possible.
It is probably safe to say the email login folks don’t mind and maybe appreciate the extra step.
In terms of starting as quickly as possible with the tool, maybe it's possible to do shadow profiles, where the app starts against a yet-unnamed account (tied to a cookie), and when they start using the wizard and playing with it, at some point they have enough to save, with a popup offering login by ID provider or email.
> When conflicts occur, our system can merge changes intelligently because we're working with pure data structures rather than complex objects. This would have been significantly more complex in an object-oriented language.
Not really, in an OO language state could have been stored in some data structure as well, with a way to serialize and deserialize. E.g. React made this very popular.
I've built similar systems using Apache Airflow and Temporal, but the complexity was overwhelming. Using simple maps with enter/leave phases for workflow steps is much cleaner than dealing with DAG frameworks.
I can't find pricing for it. Though it is no-code, there must be a way for me to work with code directly if I wish to do so. No mobile apps. It would be great if you could generate both web apps and mobile apps.
Ultimately, it all comes down to building what you're comfortable with. Additionally, when you're managing large organizations and teams, build with what you can hire for quickly and scale with easily.
Quick (and cheap?) hires are not necessarily good hires.
In my experience (and my theory) developer productivity can range from 0.5x to 5x and more, and those developers in the upper range tend to look for certain programming languages which they enjoy, like Rust, Go, Elixir, Scala and Clojure. They are hard to get if you are on a "boring" stack like Java, NodeJS, PHP.
So you might need to invest some time and money to find the right people, but in the end you make a better deal: even if the salary is twice as much, the productivity gain is even more. Additionally, fewer people means less communication overhead, which is another advantage.
I find the opposite to be true, that best and most productive developers tend to be more language agnostic than average, although I'm not saying they don't have their preferences.
Specifically, I find language evangelists particularly likely to be closer to .5x than 5x. And that's before you even account for their tendency to push for rewriting stuff that already works, because "<insert language du jour here> is the future, it's going to be great and bug free," often instead of solving the highest impact problems.
I've worked with language zealots and it's awful. Especially the ones with the hardcore purely functional obsession. But that can apply to almost anything: folks that refuse to use anything but $TECH (K8S, FreeBSD, etc). Zealots like this generally care less about delivering and more about what they get to play with.
Then you have the folks that care about delivering. They're not language agnostic, they have strong opinions. But also: they communicate and collaborate, they actually CARE: they have real empathy for their users and their co-workers, they're pragmatic. Some of these folks have a lot of experience in pushing hard to make things work, and they've learned some valuable lessons in what (not) to do again. Sometimes that can manifest as preferences for languages / frameworks / etc.
It's a messy industry, and it can be extremely hard to separate the wheat from the chaff. But a small team with a few of those can do truly game-changing work. And there are many local optima to be had. Get a highly motivated and gelled team using any of Elixir / Typescript / Zig / Rust / Ada / Ocaml / Scala / Python / etc, and you'll see magic. Yes, you don't need fancy tech to achieve that. There's more than a few of those writing C, for example, but you're unlikely to see these folks writing COBOL.
Yeah, this has been my experience too. The mentality seems similar to "productivity hackers" who spend more time figuring out the quickest, most optimal way to do a thing than people who just do the thing.
One of the things I've noticed is that people who just do the thing, take note of what's annoying, and fix the most annoying things about a process later on tend to make the most impressive dents in a system or process, especially since they spend time mulling over the idea in their head and so by the time they implement, they aren't "zero-shotting" a solution to what's generally a complex issue.
I agree with you, but also agree with the above: if you're stuck permanently in some tangled codebase with a boring language/style, the really good programmers tend to find something more fun to work on - unless they can bring their new skills/experience to bear. Personally, I'll only go back to doing boring stuff if I can't find a job doing the fun stuff.
100% agree. You have hit the nail on the head. I went from Common Lisp to Go to now Rust and find that Rust devs are the best so far on average.
There are fewer of them, they ask for more money, but they really are exceptional. Especially Rust devs right now: because there are not a lot of jobs, you only find the most passionate and most brilliant in that space. It's a short window, though, which will close as Rust gets more popular with startups - take advantage of it now.
In my case, it was definitely worth becoming uncomfortable for a bit to learn Clojure because I was very uncomfortable with the experience of many of the other languages. It’s also great to have endless backwards compatibility and little reliance on changing external libraries baked in.
> Each new layer of complexity fed my developer ego.
I'm unable to understand this mindset. All the time I read things like "Developers love complexity because it feeds their egos" but I've never encountered a situation in which added complexity made me more proud of the work. Just the opposite: being able to do more made me more proud of the work I put in, and complexity was the price I paid for that ability. The greatest hacks, the ones that etch people's names into history, are the ones -- like Unix and the Doom engine -- that achieve phenomenal feats with very little code and/or extreme parsimony of design. Nowhere is this more true than in that famous ego-stroking/dick-measuring contest of programming, the demoscene. My favorite example being the 4k demo Omniscent: https://www.youtube.com/watch?v=G1Q9LtnnE4w
Being able to stand up a 100-node K8s cluster to provide a basic web service, connected to a React SPA front end to provide all the functionality of a Delphi program from the 90s doesn't stroke the ego of any programmer I know of; but it might stroke their manager's ego because it gives them an opportunity to empire-build and requisition a larger budget next year.
Indeed, I often tell people that one of the “hardest” things to do in software development is actually managing complexity (on any significant sized code base that is, on smaller ones it’s probably not going to be an issue).
Big long lived code bases are all about this battle against complexity and the speed at which you can add new or update features largely comes down to how well you’re doing at management of complexity.
Look these folks can do whatever the heck they want, use whatever language they want.
However my criteria for selecting a language for use in a professional context:
0: fit to task - obviously the language has to be able to do the job - to take this seriously you must define the job and what its requirements are and map those against the candidate languages
1: hiring and recruiting - there must be a mainstream sized talent pool - talent shortages are not acceptable - and I don't buy the argument that "smart people are attracted to non mainstream languages which is how we find smart people", it is simply not true that "most smart people program with Scala/Haskell/Elixir/whatever" - there's smart and smarter working on the mainstream languages.
2: size of programming community, size of knowledge base, size of open source community - don't end up with a code base stuck in an obscure corner of the Internet where few people know what is going on
3: AI - how well can AI program in this language? The size of the training set counts here - all the mainstream languages have had vast amounts of knowledge ingested and thus Claude can write decent code or at least has a shot at it. And in future this will likely get better again based on volume of training data. AI counts for a huge amount - if you are using a language that the AI knows little about then there's little productivity related benefits coming to your development team.
4: tools, IDE support, linters, compilers, build tools etc. It's a real obstacle to fire up your IDE and find that the IDE knows nothing about the language you are using, or that the language plugin was written by some guy who did it for the love and it's not complete or professional or updated or something.
5: hiring and recruiting - it's the top priority and the bottom and every priority in between. If you can't find the people then you are in big trouble. I have seen this play out over and over, where the CTO's favorite non-mainstream language is used in a professional context and for years - maybe decades - afterwards the company suffers trying to find people. And decades after the CTO moved on to a new company and a new favorite language.
So what is a mainstream language? Arguable, but personally it looks like Python, Java, JavaScript/TypeScript, C#, Golang. To a lesser extent Ruby, only because Ruby developers have always been hard to find even though there is lots of community and knowledge and tools etc. Rust seems to have remained somewhat niche when its peer Golang has grown rapidly. Probably C and C++ depending on context. Maybe Kotlin? Who cares what I think anyway; it's up to you. My main point is: in a professional context the language should be chosen to serve the needs of the business. Be systematic and professional and don't bring your hobbies into it, because the business needs come first.
And for home/hobbies/fun? Do whatever the heck you like.
Small correction: finding experienced people is difficult. There's no shortage of engineers who only briefly tried Clojure and would love to use it at their full-time gig.
> talent shortages are not acceptable - and I don't buy the argument that "smart people are attracted to non mainstream languages which is how we find smart people", it is simply not true that "most smart people program with Scala/Haskell/Elixir/whatever" - there's smart and smarter working on the mainstream languages.
Smart people can be trained in any language and become effective in a reasonably short period of time. I remember one company I worked at, we hired a couple of fresh grads who'd only worked with Java at school based on how promising they seemed; they were contributing meaningfully to our C++ code base within months. If you work in Lisp or Haskell or Smalltalk or maybe even Ruby, chances are pretty good you've an interesting enough code base to attract and retain this kind of programmer. Smart people paired with the right language can be effective in far smaller numbers as well.
The major drawback, however, is that programmers who are this intelligent and this interested in the work itself (rather than the money or career advancement opportunities) are likely to be prickly individualists who have cultivated within themselves Larry Wall's three programmer virtues: Laziness, Impatience, and Hubris. So either you know how to support the needs of such a programmer, or you want to hire from a slightly less intelligent and insightful, though still capable, segment of the talent pool which means no, you're not going to be targeting those powerful languages off the beaten track. (But you are going to have to do a bit more chucklehead filtering.)
> if you are using a language that the AI knows little about then there's little productivity related benefits coming to your development team.
This is vacuously true because the consequent is always true. The wheels are kind of falling off "Dissociated Press on steroids" as a massive productivity booster over the long haul. I think that by the time you have an AI capable of making decisions and crystallizing intent the way a human programmer can, then you really have to consider whether to give that AI the kind of rights we currently only afford humans.
Not that I just "feel" it was no problem - there were no bugs found that could be traced down to that.
It was not a small codebase.
Static typing works - just like formal methods, just like dependent types, just like unit testing, just like generative testing, just like many other different ideas and techniques. They each have their own place and use cases, strengths and weaknesses, pros and cons. Picking one single paradigm, technique, design pattern, or methodology - no matter how amazingly powerful they are - and just dogmatizing and fetishizing it is simply immature. Reaching the point where you clearly understand that there are truly no silver bullets and everything is about making compromises is a sign of professional growth and true, genuine experience.
I've come to the conclusion that it is just a better experience using a language that already has static types for large projects than trying to make a dynamic language have similar things. Having to wrap every function in an error boundary to get somewhat of a meaningful debug experience is just... awful.
On the other hand, they allow you to do some very interesting things, like using specs for complex validation. Once written, specs can then be used for generating data for both UI testing and property-based unit tests. We once built a set of specs to validate an entire ledger - imagine being able to generate a bunch of transactions where the numbers intelligently build on previous entries.
Even though other languages have similar capabilities - type providers in F#/OCaml, zod in Typescript, QuickCheck/ScalaCheck in Haskell and Scala - Clojure is quite unique here in combining runtime validation, generative testing, and data definition in such a cohesive way. The ability to compose specs and use them across different contexts (validation, generation, documentation) is particularly powerful.
Another impressive thing is that you can easily share the logic between different runtimes - the same specs can be used on both the JVM and Javascript, which is surprisingly difficult to achieve even when writing in Node with TS/JS - you cannot easily share the same validation logic between the backend and the browser, even for the same Javascript runtime, using its native language. Clojure lets you do that with ease.
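To make that concrete, here is a small, hedged sketch of the pattern described above - the namespace, key names, and the ledger-flavoured fields are invented, but the clojure.spec calls are the standard ones (generation additionally needs test.check on the classpath). Because the file is .cljc, the exact same specs load on the JVM and in ClojureScript:

    ;; specs.cljc
    (ns example.specs
      (:require [clojure.spec.alpha :as s]
                [clojure.spec.gen.alpha :as gen]))

    (s/def ::account-id string?)
    (s/def ::amount     pos-int?)
    (s/def ::currency   #{:usd :eur :gbp})
    (s/def ::transaction (s/keys :req-un [::account-id ::amount ::currency]))

    ;; Runtime validation - the same call works on the backend and in the browser:
    (s/valid? ::transaction {:account-id "a-1" :amount 100 :currency :usd})   ;=> true
    (s/explain-data ::transaction {:account-id "a-1" :amount -5 :currency :usd})
    ;;=> a data structure describing exactly which key failed and why

    ;; The same spec doubles as a generator for property-based tests:
    (gen/sample (s/gen ::transaction) 3)
    ;;=> three random, spec-conforming transaction maps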
But the mental model is fundamentally different. It's not like you write a bunch of code, set a breakpoint and see what things are. You essentially boot up a lisp image and then you make changes to it. It's more like carving out a statue from a piece of rock rather than building a statue layer by layer.
Data shapes in Clojure are typically explicit and consistent. The context usually makes things quite obvious. Data is self-describing - you can just look at a map and immediately see its structure and contents - the keywords serve as explicit labels and the data, well... is just fucking data. That kind of "data transparency" makes Clojure code easier to reason about.
In contrast, in many other PLs, you often need to know the class definition (or some other obscured shit) to understand what properties exist or are accessible. The object's internal state may be encapsulated/hidden, and its representation might be spread across a class hierarchy. You often can't even just print it to see what's inside it in a meaningful way. And of course, that makes it nearly impossible to navigate such codebases without static types.
And of course the REPL - it simply feels extremely liberating, being able to connect to some remote service, running in a container or k8s pod and directly manipulate it. It feels like walking through walls while building a map in a video game. Almost like some magic that allows you to inspect, debug, and modify production systems in real-time, safely and interactively, without stopping or redeploying them.
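As a rough illustration of that workflow (the port, namespace, and pod name here are made up, and in a real deployment you would lock this down rather than expose it casually): the running service embeds an nREPL server, and the editor connects to it through a port-forward.

    (ns example.system
      (:require [nrepl.server :as nrepl]))

    ;; Started once inside the service; bound to localhost only.
    (defonce repl-server
      (nrepl/start-server :bind "127.0.0.1" :port 7888))

From your machine, something like `kubectl port-forward pod/my-service 7888:7888` then lets your editor's REPL client attach to localhost:7888 and inspect or redefine code in the live process.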
Not to mention that Clojure does have very powerful type systems, although of course, skeptics would argue that Malli and Spec are not "true" types and they'd be missing the point - they are intentionally different tools solving real problems pragmatically. They can be used for runtime validation when and where you need it. They can be easily manipulated as data. They have dynamic validation mechanisms that static types just can't easily express.
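A tiny hedged sketch of what "manipulated as data" means in practice with Malli (the schema and field names are invented; malli.core/validate is the standard entry point):

    (require '[malli.core :as m])

    ;; A Malli schema is just a vector - plain data you can store, diff, or transform.
    (def user-schema
      [:map
       [:id :int]
       [:email :string]])

    (m/validate user-schema {:id 1 :email "a@b.c"})   ;=> true

    ;; Because the schema is data, "adding a field" is ordinary collection manipulation:
    (def admin-schema (conj user-schema [:role [:enum :admin :owner]]))
    (m/validate admin-schema {:id 1 :email "a@b.c" :role :admin})   ;=> true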
One thing I learned after using dozens of different programming languages - you can't just simply pick one feature or aspect in any of them and say: "it's great or horrible because of one specific thing", because programming languages are complex ecosystems where features interact and complement each other in subtle ways. A language's true value emerges from how all its parts work together, e.g.,
- Clojure's dynamic nature + REPL + data orientation
- Haskell's type system + purity + lazy evaluation
- Erlang's processes + supervision + fault tolerance
What might seem like a weakness in isolation often enables strengths in combination with other features. The language's philosophy, tooling, and community also play crucial roles in the overall development experience.
If one says: "I can't use Clojure because it doesn't have static types", they probably have learned little about the trade they chose to pursue.
Sure, static typing is great, but perhaps you have no idea what it actually feels like - spinning up a Clojurescript REPL and being able to interactively "click" and "browse" through the web app programmatically, controlling its entire lifecycle directly from your editor. Similarly, you can do the same thing with a remote service running in a Kubernetes pod. It's literally like playing a video game while coding. It's immensely fun and unbelievably productive.
I mean, I love Clojure, and have used it for personal and work projects for 10+ years, some of which have hundreds of stars on github. But I cannot count the time wasted spotting issues where a map was actually a list of maps. Here Elixir is doing the right thing - adding gradual typing.
Sorry, I'm having a hard time believing that. I don't know when you last used the language, but today there are so many different ways to easily see and analyze the data you're dealing with in Clojure - there are tons of ways in CIDER; if you don't use Emacs, there are numerous ways of doing it in Calva (VSCode) and Cursive (IntelliJ), even Sublime. There are tools like Portal and immensely capable debuggers like Flowstorm, etc. You can visualize the data, slice it, dice it, group it and sort it - all interactively, with extreme ease.
I'm glad you've found great fondness for Elixir; it is, indeed, a great language - hugely inspired by Clojure, btw.
You still don't need to bash other tools for no good reason. It really does sound fake - not a single Clojure developer, after using it for more than a decade, would call a Lisp REPL "a band-aid and not a solution". It smells more like someone with no idea of how the tool actually works.
I found spec very useful and damn expressive (and I miss it in other languages), but again that's runtime. I know Rich says such errors are "trivial", but they waste your time (at least mine).
There's a wide spectrum of correctness guarantees in programming - dynamic weak, dynamic strong, static, dependent, runtime validation & generative testing, refinement types, formal verification, etc.
Sure, if your domain needs an extreme level of correctness (like in aerospace or medical devices), you do need formal methods, and static typing just isn't enough.
Clojure is very fine, and maybe even more than just fine for certain domains - pragmatically it's been proven to be excellent in, e.g., fintech and data analysis.
> Can't you see that unless you test every path ...
Sure, thinking in types is crucial, no matter what PL you use. And, technically speaking, yes, I agree, you do need to know all paths to be 100% certain about types. But that is true even with static typing - you still need to test the logical correctness of all paths. Static typing isn't some panacea - a magical cure for buggy software. There's not a single paradigm, technique, design pattern, or set of ideas that guarantees excellent results. Looking at any language from the single angle of where it stands in that spectrum of correctness guarantees is simply naive. Clojure BY DESIGN is dynamically typed; in return it gives you several other tools to help you write software.
There's an entire class of applications that requires significantly more effort and mental overhead to build using other languages. Just watch some Hyperfiddle/Electric demos and feel free to contemplate what it would take to build similar things in some other PL, statically typed or whatnot. https://www.youtube.com/watch?v=nEt06LLQaBY
That's a rather narrow view. Have you tried thinking that maybe Clojure intentionally chose not to include type declarations because they can lead to a messy middle ground? After all, maybe not every feature from Common Lisp needs to be replicated in every Lisp dialect? Besides, Clojure's Spec and Malli can be far more powerful for validation: they can define complex data structures, you can generate test data from them, you can validate entire system states, and they can be manipulated as data themselves.
If CL is so "far superior" as you say, why then can't it be 'hosted' like Clojure? Why does Clojure have Clojurescript, ClojureCLR, ClojureDart, babashka, nbb, sci, etc.? I'm not saying that to argue your specific point. Neither of them is 'superior' to the other. They both have different purposes, philosophies, and use cases. Each has its strengths, pros, and cons. And that is actually very cool.
What? Type declarations in CL (which came from prior Lisp dialects) were added, so that optimizing Lisp compilers can use those to create fast machine code on typical CPUs (various CISC and RISC processors). Several optimizing compilers have been written, taking advantage of that feature. The compiler of SBCL would be an example. SBCL (and CMUCL before that) also uses type declarations as assertions. So, both the SBCL runtime and the SBCL compiler use type declarations.
> why then it can't be 'hosted' like Clojure?
ABCL does not exist?
https://abcl.org
I've only played with Clojure (not used it professionally, I'm working with Scala) but Clojure interop with Java is way better than what I can see here: https://abcl.org/doc/abcl-user.html The way it's integrated with the host platform makes it better for most use cases IMHO.
That may be. ABCL is running on the host system and can reuse it, but it aims to be a full implementation of Common Lisp, not a blend of a subset of Lisp plus the host runtime. For example one would expect the full Common Lisp numerics.
One of its purposes is to be able to run portable Common Lisp code on the JVM. Like Maxima or like bootstrapping the SBCL system.
There is a bit more about the interop in the repository and in the manual:
https://abcl.org/releases/1.9.2/abcl-1.9.2.pdf
ABCL does exist, sure, and there's also LCL for Lua. Yet 8 out of 10 developers today, for whatever reasons, would probably use Fennel to write Lispy code targeting Lua, and probably more devs would choose Clojure (not ABCL) to target the JVM. That doesn't make either Fennel or Clojure "far superior" to Common Lisp, or vice-versa.
What were those reasons?
> ABCL does exist, sure,
Would that count as a hosted implementation?
As a result, Typed Clojure was practically unusable for most applications. That has changed as of last year. For instance, the type checker can now handle the following kinds of expressions.
This expression was a type error before early 2023, but now it is inferred as a value of type (Val 1).
Unfortunately, many Clojure users think types are somehow a bad thing and will usually repeat something from Rich Hickey's "Maybe Not" talk.
I've worked with Clojure professionally. The codebases I've seen work around dynamic types by aggressively spec'ing functions and enabling spec instrumentation in development builds. Of course, this instrumentation had to be disabled in production because spec validation has measurable overhead.
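For readers unfamiliar with that setup, here is a minimal sketch of the pattern being described - an fdef plus instrumentation that is only switched on in development. The function, spec, and the APP_ENV check are invented for illustration:

    (ns example.billing
      (:require [clojure.spec.alpha :as s]
                [clojure.spec.test.alpha :as st]))

    (s/def ::amount pos-int?)

    (defn apply-discount [amount pct]
      (long (* amount (- 1 (/ pct 100)))))

    ;; Aggressively spec the function's arguments...
    (s/fdef apply-discount
      :args (s/cat :amount ::amount :pct (s/int-in 0 101)))

    ;; ...then turn checking on only in development builds, since instrumented
    ;; calls validate their arguments on every invocation and add overhead.
    (when (= "dev" (System/getenv "APP_ENV"))   ; hypothetical environment flag
      (st/instrument `apply-discount))

    (comment
      ;; With instrumentation on, this throws a detailed spec error
      ;; instead of silently producing a wrong number:
      (apply-discount -10 20))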
Although Typed Clojure has made remarkable progress, the only editor tooling I recall for it is an extension to CIDER that hasn't been maintained for several years. (The common excuse given in the Clojure community is that some software is "complete" and thus doesn't need updates, but I have regularly found bugs in "complete" Clojure libraries, so I don't have much confidence here).
Overall, if one wants static typing, then Clojure will disappoint. I still use Clojure for small, personal-use tools. Having maintained large Clojure codebases, however, I no longer think the DX (and fearless refactoring) in languages like Rust and TypeScript is worth trading off.
My theory is that communicating abstractions is hard. If you work on your own, or in a (very) small team, you can come up with powerful abstractions that allow you to build amazing systems, quickly. However, sharing the underlying ideas and philosophy with new team members can be daunting. As systems grow, and mistakes are made, it becomes more and more likely that you run into serious problems.
This may also be why Java and similar object oriented programming languages are so successful for systems that have to be maintained for ages, by large teams of developers. There are but few abstractions and patterns, and it does not allow you to shoot yourself in the foot, nor to blow your whole leg off. Conversely, this may also be why complex frameworks, such as Spring, are not always so nice, because they introduce (too?) powerful abstractions, for example through annotations. It may also clarify why more powerful languages such as Scala, Common Lisp, Smalltalk, Haskell, etc, consistently fail to pick up steam.
Another theory is that not every developer is comfortable with abstract concepts, and that it simply takes a team of smart people to handle those.
I believe that once you've gotten to a certain point, dealing with abstractions is a way of life. It's either in the language, the technical requirements, or the software design.
In the end it's about designing abstractions, and the community's focus on designing simple abstractions guided me in designing the whole system.
Now that I have a working system, I am fairly sure it could be implemented in any language.
Languages need a window of opportunity, and many of those squandered it.
Clojure won over Scala because at the time when people were looking for an alternative JVM language, Clojure was more of a departure from Java and seemed to have better tooling (compile times and syntax support) than Scala.
Smalltalk and Common Lisp wasted their moment by not being cheap/free to people using micros in the 1980s.
Lisp, especially, very much wasted its moment with micros. The fact that no Lisper had the vision to dump a Lisp onto the bank switched micros (which makes GC really easy and useful) of the mid to late 1980s is a self-inflicted bullet wound. Lots of us hated doing assembly language programming but had no real alternative. This was a loss born of pure arrogance of Lispers who looked down on those micros as not being "real machines".
I weep for all the hours I wasted doing assembly language as a teenager that I could have been writing Lisp. How much software could have been written that would have been <100 lines of Lisp if only someone had written that tool?
I've seen way more Scala in companies over the last ~5 years and have the impression of its ecosystem being more robust. Not uncommon for greenfields. It's been longer than that since I even encountered an active Clojure codebase. This is from a data-engineer perspective.
Clojure may be more popular for some niche of app startups perhaps? We are in different "bubbles" I suppose.
EDIT: Data disagrees with you also.
https://www.tiobe.com/tiobe-index/
https://redmonk.com/sogrady/2024/09/12/language-rankings-6-2...
https://survey.stackoverflow.co/2024/technology#1-programmin...
My feeling was that a lot of Clojure folks moved on as the data processing stuff moved on from Java/JVM.
My impression has been that JVM-based languages have effectively been on a steady general decline for a while now. Java has fixed a lot of its issues; Kotlin gave the Java expats somewhere to go. And Javascript/Node along with Go drained out the general masses who didn't really want to be on the JVM anyhow.
However, it is interesting that Clojure has effectively disappeared in those rankings.
I think both are challenging your notion of closing opportunity windows for programming languages (:
Smalltalk and Common Lisp are not individuals.
"$99 Smalltalk Announced" 1986 InfoWorld Jun 30
https://books.google.com/books?id=Wi8EAAAAMBAJ&pg=PA11&dq=Sm...
I kind of fail to see Lisp as an alternative to assembler on mid 80s micros.
Though, there were several cheap Lisps for PCs...
Lots of utility software like spell checkers and the like still existed. These would be trivial to implement in Lisp but are really annoying in assembler.
Lisp would have been really good relative to BASIC interpreters at the time--especially since you could have tokenized the atoms. It also would have freed people from line numbers. Linked lists work well on these kinds of machines. 64K is solid for a Lisp if you own the whole machine. You can run over a bank of 16K of memory for GC in about 50 milliseconds or so on those architectures.
Had one of the Lisperati evangelized Lisp on micros, the world would look very different. Alas, they were off charging a gazillion bucks to government contracts.
However, to be fair, only Hejlsberg had the correct insights from putting Pascal on the Nascom.
I see no evidence for that. Lisp was a pain on tiny machines with bad user interface.
> 64K is solid for a Lisp if you own the whole machine.
I had a Lisp on an Apple II. It was a useless toy. I was using UCSD Pascal and Modula 2 on it. Much better.
I had Cambridge Lisp on an Atari with 68k CPU. It was next to unusable due to frequent crashes on calling FFI functions.
The first good Lisp implementation I got was MacScheme on the Mac and then the breakthrough was Macintosh Common Lisp from Coral Software.
> Had one of the Lisperati evangelized Lisp on micros
There were articles for example in the Byte magazine. Lisp simply was a bad fit to tiny machines. Lisp wasn't very efficient for small memory. Maybe with lots of work implementing a tiny Lisp in assembler. But who would have paid for it? People need to eat. The tiny Lisp for the Apple II was not usable, due to the lack of useful programming environment.
> Alas, they were off charging a gazillion bucks to government contracts.
At least there were people willing to pay for it.
And they were stupid. Even "good" Lisp references didn't cover the important things like hashes and arrays. Everybody covered the recursive crap over and over and over ad nauseam while people who actually used Lisp almost always sidestepped those parts of the language.
> I had a Lisp on an Apple II. It was a useless toy. I was using UCSD Pascal and Modula 2 on it. Much better.
And yet UCSD Pascal was using a P-machine. So, the problem was the implementation and not the concept. Which was exactly my point.
> At least there were people willing to pay for it.
Temporarily. But then it died when the big money went away and left Lisp all but dead. All the while all the people using languages on those "toys" kept right on going.
My point is that implementations don't come from nothing. You can't just demand them to be there. They have to be invented/implemented/improved/... Companies at that time did not invest any money in micro implementations of Lisp. I also believe that there was a reason for that: it would have been mostly useless.
> Temporarily. But then it died when the big money went away and left Lisp all but dead. All the while all the people using languages on those "toys" kept right on going.
Lots of toys, and languages for them, died.
Rapid application technologies, methodologies, or frameworks are not unusual.
I know some wonderfully productive polyglot developers who by their own choice end up at Clojure. It doesn't have to be for everyone.
I wouldn't rule out that Clojure deserves credit. I don't think it's a good idea to discredit Clojure without having tried it myself.
I do hope someone with extensive Clojure experience can weigh in on the advantages.
How easy something remains as a codebase grows is something to really consider.
This product regardless of how it's built is pretty impressive. I'd be open to learning advantages and comparisons without denying it.
With time you get to understand the power of simplicity - how to break down the problem and compose the solutions to achieve your intended result.
That's where the power of Clojure came in for us.
ehmmm.... excuse me.... erghmm... what about Emacs? I'm sure it absolutely can count as a "successful system that has to be maintained for ages". For far, far longer than any Java-based project that ever existed.
Even though Elisp lacks:
- static typing
- OOP class system (until relatively recently)
- Modern package management (until ELPA/MELPA)
- Multi-threading model
- JIT compilation
Perhaps "the secret sauce" of successful software is in simplicity? Maybe some programmers just get it, and for others, it is such an obscure and mysterious entity. Some programmers write "programs to program computers", and some may have realized that they are not trying to solve purely technological problems, but they are, in fact, tackling socio-technological problems, and they write programs to communicate their ideas to fellow human beings, not machines.
However, emacs is a fucking mess, and there is a reason "init.el bankruptcy" is a thing and why the most popular way to use emacs is through various frameworks such as doom or spacemacs.
In emacs, nearly everything can (and often does) mess with everything else. It is serious integration hell to actually get things to work together, and the work that goes into e.g. doom is basically all about managing that complexity through good abstractions and more rigid ways to configure and install things.
Emacs is also objectively dogshit in a lot of ways compared to most modern editors. LSP is ridiculously slow and a constant source of performance issues, many of which are probably directly related to emacs internals. Eglot seems to do better but it's a lot more limited (you can't use multiple language servers together, for example). Then there are things like the buffer being the data structure for everything, which is sort of like modeling nearly everything as one long string. Things that would be trivial to do in most other languages or contexts are difficult and error-prone in emacs.
Yet not a single modern editor can even come close to it when it comes to extensibility and customization; self-documentation; complete programmability; malleability; the ability to perform virtually any computing task without leaving the editor. Modern editors excel at being user-friendly out of the box. Emacs excels at becoming exactly what each user needs it to be. While you find yours to be "objectively dogshit" in comparison, I can probably easily demonstrate to you how mine eats their "modern" shit without even choking.
> LSP is ridiculously slow
Have you tried to get to the bottom of it? Sometimes it's just the LSP server implementation that is slow. Have you tried https://github.com/blahgeek/emacs-lsp-booster? Did you build Emacs with the --with-native-comp flag? Have you tried using plists for deserialization https://emacs-lsp.github.io/lsp-mode/page/performance/#use-p...? Have you used Emacs' built-in profiler? Sometimes the issue might be somewhere else, e.g., some fancy modeline settings.
> Things that would be trivial to do in most other languages or contexts
Sure, that's why we see so many "Emacs killers" built in Java, because replicating Org-mode is so trivial in it. /s
My conclusion is basically: some language servers are slow, which doesn't help, and some are also very noisy. Both scenarios are handled extremely poorly by emacs, i.e. locked ui when parsing or waiting sometimes, stuff like that.
You really have to be a special kind of oblivious to argue so vehemently while literally suggesting I run a separate program on the side to get what most would consider to be a basic, functioning editing environment.
Re the org-mode argument: I really think you mistake lack of interest for insurmountable complexity. Emacs is not a magic machine that can do things no other software can do. You can probably count the number of people who genuinely care about org-mode in the world on 10 sets of hands.
Okay, I'm just trying to help. I, for one, don't experience extremely vexing problems with performance like you're describing. But of course, I'm not going to pretend that I'm unaware that Emacs needs to improve on that front, and over the years there have been numerous improvements, so it's not a completely hopeless situation.
> basic, functioning editing environment
Different priorities. For me, "basic, functioning editing environment" means stuff like indirect buffers - which not a single other modern, popular editor offers.
> Emacs is not a magic machine that can do things no other software can do.
In some cases, it literally feels exactly like that. So while not literally magical, Emacs enables workflows and capabilities that can feel transformative and, yes, almost magical to its users. The "magic" comes from its architecture and philosophy. I can easily list dozens of use cases I've adopted in my workflow that are simply impractical to even try to replicate in other editors.
One can criticize pretty much any software product. Yet Emacs still falls in the category of "successful systems that have been maintained for ages". Anyway, we're getting sidetracked. My main point in the comment wasn't about Emacs, the main point is in the last paragraph. Maybe you just didn't even get to it.
All programming languages are man-made, human constructs and not a single one is the most ideal for the task of programming - just like not a single spoken language can claim absolute superiority for communication. The nature of programming often eludes clear categorization, challenging us to define whether it belongs to the domains of art, business, or engineering. Maybe it's all of these? Maybe it's none of it? We programmers endlessly engage in passionate debates about language superiority, constructing complex arguments to defend our preferences. I'm not a musician, but can you imagine guitar players vehemently arguing with drummers that guitars are ultimately better instruments because, I dunno, one never can perform Für Elise using drums. And then some experienced drummer comes and performs it in a way no one ever imagined. I think the beauty of programming, like music, lies in its diversity. Instead of engaging in futile debates about superiority, we should celebrate how different languages and paradigms enrich our craft, each bringing its own unique perspective and elegance to the art of problem-solving.
One of the things I feel grateful for after learning Clojure is that many, perhaps most people in the community are genuinely experienced, seasoned software developers, driven to Clojure by curiosity after many years spent in other programming languages. Clojure allowed me to fall in love with my trade once again. The combined malleability of Lisp that can emulate (almost) any known programming paradigm and the simplicity and data-centric design in Clojure helped me (truly) understand the core principles of programming. Ironically, after using numerous different languages for many years, spending years grokking their syntactic features and idiosyncrasies, many stackoverflow threads and tutorials later, only after switching to Clojure did I feel like I understood those languages better. One peculiar aspect about Clojurians is that they don't typically engage in general debates about programming languages without specific context, subtly breaking the famous Perlis quote about Lispers knowing the value of everything and the cost of nothing. Clojurians are known for their grounded perspective and non-dogmatic approach to problem-solving. So they typically don't bash on someone else's favorite language; instead, they'd try to learn and steal some good ideas from it.
Seems like many similar capabilities, like a focus on immutable data structures, pure functions, being able to patch and update running systems without a restart, etc.
CIDER and nREPL are better tech than IEx, though. I live in both and Clojure is much more enjoyable.
1. IEx provides a robust and interactive debugging environment that allows me to dig into whatever I want, even when running in production. I've never lost state in IEx, but that happens fairly often in CIDER and nREPL.
2. IEx uses Elixir's compilation model, which is a lot faster than CIDER and nREPL, leading to faster debugging cycles.
3. IEx is tightly integrated with Elixir whereas Clojure's tools are more fragmented.
4. IEx doesn't carry the overhead of additional middleware that CIDER and nREPL do.
I'm also not a fan of JVM deployments, so I've migrated all my code away from Clojure to Elixir during the past 10 years.
I guess the major advantage for Clojure with this style is that the "persistent" data structures end up sharing some bytes behind the scenes - it's nice the language is explicitly situated around this style, rather than TypeScript's chaotic Wild West kitchen-sink design. What I don't understand is the advantage for "state management". Like, you build a new state object, and then mutate some pointer from prevState to nextState... that's what everyone else is doing too.
There are times though when it’s nice to switch gears from function-and-data to an OO approach when you need to maintain a lot of invariants, interior mutability has substantial performance advantages, or you really want to make sure callers are interpreting the data’s semantics correctly. So our style has ended up being “functional/immutable business logic and user data” w/ “zero inheritance OO for data structures”.
Whenever I read some open source TypeScript code that’s using the language like it’s Java like `class implements ISomething` ruining cmd-click go to method or an elaborate inheritance hierarchy it makes me sad.
Clojure's real super power is its reference type(s) (in particular the atom). Rich does an excellent job explaining them in this video: https://www.youtube.com/watch?v=wASCH_gPnDw&t=2278s
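A minimal sketch of the split that talk describes - immutable values plus one managed reference - with invented state and names:

    ;; Values are immutable; an atom is a managed reference to the "current" value.
    (def app-state (atom {:users {} :orders []}))

    ;; swap! applies a pure function to the old value and atomically installs the result.
    (swap! app-state assoc-in [:users "u1"] {:name "Ada"})
    (swap! app-state update :orders conj {:id 1 :user "u1"})

    @app-state
    ;;=> {:users {"u1" {:name "Ada"}}, :orders [{:id 1, :user "u1"}]}

    ;; And because the maps are persistent data structures, "copies" share structure:
    (let [before {:a {:big "nested map"}}
          after  (assoc before :b 2)]
      (identical? (:a before) (:a after)))   ;=> true - the nested map is not copied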
The two sentences around the one you quoted should answer the question as well:
> And whereas OOP languages combine behavior and data into a single thing (classes with methods model behavior and hide state, i.e. data), functional languages separate them: functions model behavior, and data is treated more like an input and output rather than "state".

In particular with Clojure, data structures tend to be immutable and functions tend to not have side effects. This gives rise to the benefits the article talks about, though it is not without its own drawbacks. In Clojure, data is truly first-class - it can be:
1. Passed as arguments
2. Returned from functions
3. Stored in variables
4. Manipulated directly
5. Compared easily
Unlike some languages where data needs special handling or conversion, Clojure lets you work with data structures directly and consistently throughout your program.
This philosophy extends to how Clojure handles data transformations. For example, transducers are composable algorithmic transformations that can work on any data source - whether it's a collection, stream, or channel. They treat the data transformation itself as a first-class value that can be composed, stored, and reused independently of the input source.
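A tiny sketch of that idea with made-up data - the same transducer value is reused against different sources:

    ;; The transformation itself, detached from any input source.
    (def clean-up
      (comp (filter :valid?)      ; keep valid entries
            (map :amount)         ; pull out the amount
            (remove neg?)))       ; drop refunds

    ;; Applied to a vector...
    (into [] clean-up [{:valid? true :amount 10}
                       {:valid? false :amount 99}
                       {:valid? true :amount -5}])
    ;;=> [10]

    ;; ...or used to reduce eagerly; the same value could just as well be
    ;; attached to a core.async channel or a stream.
    (transduce clean-up + 0 [{:valid? true :amount 10} {:valid? true :amount 7}])
    ;;=> 17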
This first-class treatment of both data and data transformations makes Clojure particularly powerful for data processing and manipulation tasks.
That's why Clojure often finds strong adoption in data analytics, fintech and similar domains. The ability to treat both data and transformations as first-class citizens makes it easier to build, for example, reliable financial systems where data integrity is crucial.
- immutability and persistent data structures (makes code - and the data - easier to reason about; enables efficient concurrency with no locks; some algorithmic tricks make it very performant despite having to create copies of collections),
- seq abstraction - unlike other Lisps, where sequence functions are often specialized for different types, Clojure simplifies things by making this baked-in abstraction central to the language - all core functions work with seqs by default. It emphasizes lazy sequences as a unified way to process data, which gives memory efficiency, infinite sequences, etc.
- rich standard library of functions for data transformation
- destructuring - makes code both cleaner and more declarative
- emphasis on pure functions working on simple data structures
The combination of these features makes data processing in Clojure particularly elegant and efficient.
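Two of those bullets - destructuring and lazy seqs - in a small illustrative sketch (the data is invented):

    ;; Destructuring pulls data apart right in the parameter list.
    (defn describe-order
      [{:keys [id items] {:keys [city]} :address}]
      (str "order " id " with " (count items) " items, shipping to " city))

    (describe-order {:id 7 :items [:a :b] :address {:city "Oslo"}})
    ;;=> "order 7 with 2 items, shipping to Oslo"

    ;; Lazy seqs: the same core functions work on infinite sequences,
    ;; realizing only what is actually consumed.
    (->> (iterate inc 0)      ; 0, 1, 2, ... (infinite)
         (filter odd?)
         (take 5))
    ;;=> (1 3 5 7 9)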
I think having a clear example would help in understanding.
Subscribe to the newsletter to know when it's live.
https://pathom3.wsscode.com/
Incredible story - I feel like Clojure makes magic. What I like about functional programming is that it brings other perspectives on how things CAN work!!
Congratulations on the life change.
In how many person-hours/days? It's hard to know if the list is long or short knowing only the calendar time, which should be multiplied by three to calculate the people-time spent...
2 years, if you count from when I was exploring building it in other languages.
Alongside building Vade Studio, I have been working as a contractor for 2 clients, developing systems for them. The other two developers have been managing their college curriculum as well.
I am not sure how to do the math around it, but anecdotally I don't think this would be possible in any other environment.
A few years ago, I worked in a small group (6 devs) for a retail business. We had all sorts of different third-party integrations - from payment processors to coupon management and Google Vision (so people wouldn't upload child porn or some other shit through our web and mobile apps). The requirements would constantly change - our management would be like: "Hey guys, we want this new program to launch, can we do it next week?", then a day later: "Turns out we can't do it in the state of Tennessee, let's add some exceptions, okay?", then some time later: "Folks, we really want to A/B test this but only for a specific class of customers..." Jesus, and it wasn't just an "occasional state of affairs" every once in a while; it was a constant, everyday, business-as-usual flow. We had to quickly adapt, and we had to deploy continuously. We had several services, multiple apps - internal, public, web and mobile, tons of unit and E2E tests, one legacy service in Ruby, we had Terraform, containers, API Gate, load balancers, etc.
I can't speak for myself, but a couple of my peers were super knowledgeable. They used all sorts of tools and languages before. I remember my team-lead showing me some Idris features (tbh, I don't even remember anymore exactly what) and asking my opinion on whether we should find a way to implement something like that, and I couldn't hide my confusion as I didn't know anything about Idris.
Numerous times, we had discussions on improving our practices, minimizing tech debt, etc. And I remember distinctly - many times we speculated about how things would've turned out if we had used some other stack, something that's not Clojure. We would explore various scenarios; we even had some prototypes built in Golang, Swift and Kotlin. And every single time, we found compelling and practical reasons why Clojure indeed was the right choice for the job.
Sure, if we had a larger team, maybe we could've done it using a different stack. But it was a startup, and we had just the six of us.
With many languages and frameworks having a decent amount of similar functionality and performance available, more and more is left to personal preference and interpretation of what to use.
Popularity might matter when trying to hire juniors. Given how many juniors seem to appreciate sincere mentorship when it's mutual, I'm not super sure on this anymore.
Popularity might not matter when trying to hire other types of developers, including seniors. It's less about what's popular, or the right badge to signal.
Of the polyglot folks I get to know who are humble about their smarts, it's interesting how many have independently ended up on Clojure, or a few others. Universally there's usually a joke about how long bash scripts can do what's needed until a decision has to be made.
Yeah, Clojure is a weird thing. I myself, after using numerous different PLs - ranging from Basic/Pascal/Delphi to .NET languages like C#/F#, then later Python and, of course, "script" options like Javascript, Typescript, Coffeescript, and many others - still never felt like ever obtaining "the polyglot" status. I just went from one language to another, and every time it felt like burning the old clothes. I was always a "Blub programmer" at any given point of my career, every time moving from one "Blub PL" to another.
Learning Clojure somehow renewed my passion for the craft and forced me into deeper understanding of core programming ideas. Ironically, long-abandoned old friends became more familiar than ever before. I don't know exactly how that happened. Either because of the hosted nature of Clojure that forced me to always be on at least two different platforms at the same time - JVM and Javascript - or maybe because it exposed me to a lot of smarter and more experienced people than ever before. Maybe because of "Lisp-weirdness", thinking how to program with "no syntax" or being forced to think in terms of "everything is a function" where you don't even need primitives, even numbers, since you can express just about anything with functions. It could be because Clojure is of sufficiently higher level, it forced me to work on more complicated problems; I really can't say. One thing I know: I was a "Blub" programmer before Clojure. After a few years of Clojure, I have become an "experienced Blub programmer" in the sense that I no longer care in what programming language I need to write, debug, troubleshoot, or build things, with maybe a few exceptions like Haskell, Prolog and Forth - they'd probably require me to put on a student hat again to become productive, even though I know some bits of each.
I only use google for email logins for services I don't take seriously and am willing to lose.
It depends; exact meaning, application and personal preference play a big role.
Being able to use Kubernetes for infrastructure, Grafana, Prometheus, etc., Elasticsearch for search, MongoDB as the database, Redis as the caching layer.
Knowing all this tech and being able to say you know it very well used to massage my developer ego...
Now I am much more like: use one system to the best of its capability. Use Postgres. Mostly you won't need anything else.
In earlier days, I never resisted the urge to try out something new and shiny in production.
Now I mostly use boring technologies and things I am comfortable with running in production.
However my criteria for selecting a language for use in a professional context:
0: fit to task - obviously the language has to be able to do the job - to take this seriously you must define the job and what its requirements are and map those against the candidate languages
1: hiring and recruiting - there must be a mainstream sized talent pool - talent shortages are not acceptable - and I don't buy the argument that "smart people are attracted to non mainstream languages which is how we find smart people", it is simply not true that "most smart people program with Scala/Haskell/Elixir/whatever" - there's smart and smarter working on the mainstream languages.
2: size of programming community, size of knowledge base, size of open source community - don't end up with a code base stuck in an obscure corner of the Internet where few people know what is going on
3: AI - how well can AI program in this language? The size of the training set counts here - all the mainstream languages have had vast amounts of knowledge ingested, and thus Claude can write decent code, or at least has a shot at it. And in future this will likely get better again based on the volume of training data. AI counts for a huge amount - if you are using a language that the AI knows little about, then there are few productivity-related benefits coming to your development team.
4: tools, IDE support, linters, compilers, build tools, etc. It's a real obstacle to fire up your IDE and find that the IDE knows nothing about the language you are using, or that the language plugin was written by some guy who did it for the love and it's not complete or professional or updated or something.
5: hiring and recruiting - it's the top priority and the bottom and every priority in between. If you can't find the people, then you are in big trouble. I have seen this play out over and over: the CTO's favorite non-mainstream language is used in a professional context, and for years - maybe decades - afterwards the company suffers trying to find people. And decades after the CTO has moved on to a new company and a new favorite language.
So what is a mainstream language? Arguable, but personally it looks like Python, Java, JavaScript/TypeScript, C#, Golang. To a lesser extent Ruby, only because Ruby developers have always been hard to find even though there is lots of community and knowledge and tooling. Rust seems to have remained somewhat niche while its peer Golang has grown rapidly. Probably C and C++, depending on context. Maybe Kotlin? Who cares what I think anyway; it's up to you. My main point is: in a professional context the language should be chosen to serve the needs of the business. Be systematic and professional and don't bring your hobbies into it, because the business needs come first.
And for home/hobbies/fun? Do whatever the heck you like.
The number of knuckleheads that I've had to interview just to get a single coherent developer is mind-boggling (remote-first).
Rust, Common Lisp, Go, Ruby/Elixir, C++, Python, C#, TypeScript, Java, JavaScript
Clojure is my personal favorite language, and I am planning to build a very small team, so it would work for us.
Small correction: finding experienced people is difficult. There's no shortage of engineers who only briefly tried Clojure and would love to use it at their full-time gig.
Smart people can be trained in any language and become effective in a reasonably short period of time. I remember one company I worked at, we hired a couple of fresh grads who'd only worked with Java at school based on how promising they seemed; they were contributing meaningfully to our C++ code base within months. If you work in Lisp or Haskell or Smalltalk or maybe even Ruby, chances are pretty good you've an interesting enough code base to attract and retain this kind of programmer. Smart people paired with the right language can be effective in far smaller numbers as well.
The major drawback, however, is that programmers who are this intelligent and this interested in the work itself (rather than the money or career advancement opportunities) are likely to be prickly individualists who have cultivated within themselves Larry Wall's three programmer virtues: Laziness, Impatience, and Hubris. So either you know how to support the needs of such a programmer, or you want to hire from a slightly less intelligent and insightful, though still capable, segment of the talent pool which means no, you're not going to be targeting those powerful languages off the beaten track. (But you are going to have to do a bit more chucklehead filtering.)
> if you are using a language that the AI knows little about, then there are few productivity-related benefits coming to your development team.
This is trivially true, because the consequent is always true. The wheels are kind of falling off "Dissociated Press on steroids" as a massive productivity booster over the long haul. I think that by the time you have an AI capable of making decisions and crystallizing intent the way a human programmer can, you really have to consider whether to give that AI the kind of rights we currently only afford humans.
An example: when members of the existing technology team can run an interview and decide whether someone could learn what they're working with.