It’s unfortunate that they didn’t eval using subagents/orchestration for such a complex set of tasks (from what I can tell), e.g. analyze the program to produce an initial spec -> code -> review, and rinse & repeat, with each of those steps allocated to a separate subagent.
I would be interested to see if there’s a significant quantifiable difference.
> Models favor monolithic, single-file implementations that diverge sharply from human-written code.
Well, all of our code is monolithic, with some files close to 20K lines of code, and we do use coding agents - not for the original code, but increasingly as of late. I've always had a hunch that splitting everything into tiny files does not improve AI coding agent performance, even though that feels counterintuitive given model context constraints.
To me the important parts of a program should be clustered together so the implementation is obvious. Scattering the implementation across various files all over the source tree does not help much with building a mental model.
That also closely matches how software used to be written.
Kinda surprising to me, since I had some trouble with Cursor & Co. once a file went over ~800 lines. It repeatedly failed to edit the file until I split it up into multiple logical components. As it should have been from the beginning...
Though, it was some time ago, so things might have improved?
How long until AI is not even writing code but producing machine code?
Think about it, all these compilers, tooling, what a waste!
I imagine a future where chipset makers will provide a model you can just prompt to "act upon that chipset" and voila, "You're absolutely right! Here is your binary."
We won't be developers, we won't be devops, we'll be rollmops! /s
Coding agents can write ASM. But if you mean emitting the actual machine code, that would require a very different approach at a very different level of abstraction, one LLMs are not designed for. Keep in mind that all LLMs are trained first on text and then fine-tuned on code.
My hunch is that it would take years of hundreds of thousands of developers working with machine code, posting Stack Overflow questions about machine code, and publishing GitHub repos written in it with documentation. That's the free labor LLMs leveraged to learn high-level langs.
>We won't be developers, we won't be devops, we'll be modelops! /s
I can still see this happening with higher-level langs. The thing is, the compiler is not replaced in the training data; more likely LLMs will give rise to semideterministic layers on top of compilers.
I could see nvidia achieving this first with how nice the devex is with CUDA
We have a lint that caps source code files at 650 LOC and it works really well.
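The commenter's actual lint isn't shown; as a minimal sketch, a LOC cap like that can be a short script run in CI that fails the build when any file exceeds the limit (the `MAX_LOC` value and `check` helper here are hypothetical, not the commenter's tooling):

```python
# Hypothetical sketch of a LOC-cap lint: flag any source file over MAX_LOC lines.
import sys
from pathlib import Path

MAX_LOC = 650  # the cap mentioned in the comment

def check(paths):
    """Return a list of (path, line_count) for files exceeding MAX_LOC."""
    offenders = []
    for p in paths:
        with Path(p).open(encoding="utf-8", errors="ignore") as f:
            loc = sum(1 for _ in f)
        if loc > MAX_LOC:
            offenders.append((p, loc))
    return offenders

if __name__ == "__main__":
    bad = check(sys.argv[1:])
    for path, loc in bad:
        print(f"{path}: {loc} lines exceeds cap of {MAX_LOC}")
    sys.exit(1 if bad else 0)
```

Linters like pylint can enforce the same thing via a max-module-lines setting, but a standalone script works for any language in the tree.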