He has some good points. This one is from a different paper (Good Ideas, Through the Looking Glass):
Designers had ignored both the issue of efficiency and that a language serves the human reader, not just the automatic parser. If a language poses difficulties to parsers, it surely also poses difficulties for the human reader. Many languages would be clearer and cleaner had their designers been forced to use a simple parsing method.
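The "simple parsing method" Wirth had in mind can be illustrated with a hand-written recursive-descent parser. This is a sketch, not Wirth's own code, and the toy grammar (flat addition/subtraction of integers) is made up for the example:

```java
// A minimal recursive-descent parser for a toy grammar:
//   expr   := number (('+' | '-') number)*
// One method per grammar rule; no parser generator needed.
public class TinyParser {
    private final String src;
    private int pos = 0;

    TinyParser(String src) { this.src = src; }

    int parseExpr() {
        int value = parseNumber();
        while (pos < src.length()) {
            char op = src.charAt(pos++);   // '+' or '-'
            int rhs = parseNumber();
            value = (op == '+') ? value + rhs : value - rhs;
        }
        return value;
    }

    private int parseNumber() {
        int start = pos;
        while (pos < src.length() && Character.isDigit(src.charAt(pos))) pos++;
        return Integer.parseInt(src.substring(start, pos));
    }

    public static void main(String[] args) {
        System.out.println(new TinyParser("12+30-2").parseExpr()); // 40
    }
}
```

The point of the quote is that a grammar simple enough to be parsed this way also tends to read unambiguously to a human.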
Who are the Wirths, Dijkstras, Hoares, McCarthys, and Kays of today? I mean - who represents the current generation of such thinkers? Genuinely asking. Most of what I see here and in other places is blog posts, videos, and rants by contemporary "dev influencers" and bloggers (some of them very skilled and capable of course, often more than I am), but I would like to be in touch with something more thoughtful and challenging.
Very hot and edgy take: theoretical CS is vastly overrated and useless. As someone who actively studied the field, worked on contemporary CPU architectures, and still does some casual PL research - aside from VERY FEW instances from theoretical CS about graphs/algorithms, there has been little to zero impact on practical developments in the overall field since the 80s. All modern-day Dijkstras produce slop research about weaving dynamic context into Java programs, converting funding into garbage papers. Deeper CS research is totally lost in type gibberish or nonsense formalisms. IMO research and science overall are in a deep crisis, and I can see it clearly from the CS perspective.
Well, I think there is something to it. Computers were at some point newly invented, so research in algorithms suddenly became much more applicable. This opened up a gold mine of research opportunities. But like real mines, at some point they get depleted, and then the research becomes much less interesting unless you happen to be interested in niche topics. But of course the paper mill needs to keep running, and so does the production of PhDs.
I assume that you are talking about modern "theoretical CS", because among the "theoretical CS" papers from the fifties, sixties, seventies, and even some more recent ones, I have found a lot that remain very valuable. I have also seen many modern programmers who either make avoidable mistakes or implement very suboptimal solutions, just because they are no longer aware of old research results that were well known in the past.
I especially hate those who attempt to design new programming languages today but demonstrate a complete lack of awareness of the history of programming languages. They introduce into their languages many design errors that were discussed decades ago and for which good solutions were found at the time - but those solutions were implemented in languages that never reached the popularity of C and its descendants, so few people know about them today.
The key, then, lies not so much in minimising the number of basic features of a language, but rather in keeping the included facilities simple to understand in all their consequences of usage and free from unexpected interactions when they are combined. A form must be found for these facilities which is convenient to remember and intuitively clear to a programmer, and which acts as a natural guidance in the formulation of [their] ideas.
We've successfully found some strong patterns for structuring programs that transform data in various ways for the kinds of programs Wirth was imagining. The best patterns have proven themselves by being replicated across languages (for example discriminated unions and pattern matching) and the worst have died away (things like goto and classical inheritance).
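The discriminated-union-plus-pattern-matching style mentioned above can be sketched in modern Java itself (sealed interfaces and record patterns; this assumes Java 21 or later, and the `Shape` types are illustrative):

```java
// A discriminated union: Shape is exactly Circle or Rect, nothing else.
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double w, double h) implements Shape {}

public class Areas {
    // The switch is exhaustive: the compiler knows every Shape variant,
    // so no default branch is needed and adding a variant breaks the build.
    static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rect r   -> r.w() * r.h();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(2, 3))); // 6.0
    }
}
```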
There's still work to do to find better languages though. A language is good if it fits the shape of the problem and, while we've found some good patterns for some shapes of problems, there are a lot more problems without good patterns.
I had hoped there'd be more languages for everyday end-user problems by now. At the start of the SaaS era it seemed like a lot of services were specific solutions that might fit into a more general modelling language. That hasn't happened yet but maybe a programming language at just the right level of abstraction could make that possible.
> and the worst have died away (things like goto and classical inheritance)
What's so wrong with classical inheritance, and how has it died away while still being well-supported in most popular programming languages of today (Python, C++, Java, C#, TS, Swift)?
In a sense, it’s like global variables. Almost every complex program [1] has a few of them, so languages have to support them, but you shouldn’t have too many, and people tend to say “don’t use globals”.
[1] Some languages, such as classical Java, made it technically impossible to create them, but you can effectively create one with
class Foo {
    public static int bar;
}
If you’re opposed to that, you’ll end up making the field non-static and introducing a singleton instance of “Foo”, again effectively creating a global.
In some Java circles, programmers will also wrap access to that field in getters and setters, and then use annotations to generate those methods, but that doesn’t make such fields non-global.
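The singleton variant described above can be sketched like this (the names `Config`, `instance`, and `bar` are illustrative, not from any real codebase):

```java
// The field is no longer static, but because there is exactly one shared
// instance, it is still a global in effect: any code anywhere can mutate it.
class Config {
    private static final Config INSTANCE = new Config();
    private int bar; // one shared value for the whole program

    private Config() {}

    static Config instance() { return INSTANCE; }

    int getBar() { return bar; }
    void setBar(int value) { bar = value; }
}

public class GlobalDemo {
    public static void main(String[] args) {
        Config.instance().setBar(42);
        // Any other code observes the same mutable state:
        System.out.println(Config.instance().getBar()); // 42
    }
}
```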
I think the legend goes that Wirth created the Pascal language to be the most easily compilable. To show my age: I recall a class that used Modula-2 when I was in college, also from Wirth and very Pascal-like.
Yes, that was a common trend across all programming languages designed by him.
That is also how P-Code came to be. He didn't want to create a VM for Pascal; rather, the goal was to make porting easier. By requiring only a basic P-Code interpreter on the target machine, it was very easy to port Pascal, a design approach he kept for Modula-2 (M-Code) and Oberon (slim binaries).
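The porting idea can be sketched with a toy stack machine (the opcodes below are made up for the example and are not the real P-Code instruction set): the compiler targets a small fixed instruction set, so bringing the language to a new machine only requires porting this small interpreter loop.

```java
// Illustrative (not real P-Code) stack-machine interpreter.
public class MiniVM {
    // Opcodes invented for this sketch
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int run(int[] code) {
        int[] stack = new int[64];
        int sp = 0, pc = 0;
        while (true) {
            switch (code[pc++]) {
                case PUSH -> stack[sp++] = code[pc++];
                case ADD  -> { sp--; stack[sp - 1] += stack[sp]; }
                case MUL  -> { sp--; stack[sp - 1] *= stack[sp]; }
                case HALT -> { return stack[sp - 1]; }
            }
        }
    }

    public static void main(String[] args) {
        // (2 + 3) * 4
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program)); // 20
    }
}
```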
I think it was more that it would be easy to write a compiler for, which meant that CS students could write one. Don't have a source for this that I can remember, though.
I saw on page 25 (the third PDF page) a nice argument against variable shadowing. I can think of a couple of modern languages I wish had learned this ;)
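The kind of bug that argument is about looks roughly like this in Java, where a local variable may shadow a field (the `Counter` class is made up for the example):

```java
// Illustrative shadowing bug: the local `count` shadows the field,
// so the "update" is silently lost. A language that forbids shadowing
// would reject this at compile time.
public class Counter {
    private int count = 0;

    void add(int n) {
        int count = this.count + n; // shadows the field!
        // Intended: this.count = count; (never happens)
    }

    int value() { return count; }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.add(5);
        System.out.println(c.value()); // still 0: the add was lost
    }
}
```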
This I can agree with, but it is far from being the "worst pattern". Like salt, anything is fine in moderation.
https://gcc.gnu.org/onlinedocs/gcc-15.2.0/gm2
Even available on compiler explorer to play with, https://godbolt.org/z/ev9Pbxn9K