This couples intentions to the code, which in my example would be dynamic.
That’s going to be a bad time.
My point is that the conventions that have served development well for the past 50 years are likely to change as the tooling does.
Programming is effectively about managing complexity.
Yes, when humans are reading and writing the code itself, it is better for the development language, rather than comments, to be the layer at which you encode intention.
But how many projects have historically run into problems because the language they chose a decade earlier is now stagnating in tooling or integrations compared with another pick?
Imagine if the development work had been done exclusively in pseudocode and comments guiding generative AI writing in language A. How much easier might porting everything to language B end up being?
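As a toy sketch of what I mean (everything below is hypothetical; Rust just stands in for “language A”): the artifact you maintain is the comment/pseudocode spec, and the code is a generated rendering of it, so a port to language B is a regeneration rather than a rewrite.

```rust
// Maintained artifact: the intent, written as comments/pseudocode, not tied to any language.
//
//   given a list of orders
//     keep only those placed in the last 30 days
//     sum their totals per customer
//     return customers sorted by that sum, highest first
//
// Below is what a generation pass targeting "language A" (Rust, arbitrarily) might emit.
// Re-targeting "language B" would mean regenerating from the same spec, not hand-porting.

use std::collections::HashMap;

struct Order {
    customer: String,
    total_cents: u64,
    age_days: u32,
}

fn top_customers_last_30_days(orders: &[Order]) -> Vec<(String, u64)> {
    let mut sums: HashMap<String, u64> = HashMap::new();
    for order in orders.iter().filter(|o| o.age_days <= 30) {
        *sums.entry(order.customer.clone()).or_insert(0) += order.total_cents;
    }
    let mut ranked: Vec<(String, u64)> = sums.into_iter().collect();
    ranked.sort_by(|a, b| b.1.cmp(&a.1)); // highest spend first
    ranked
}

fn main() {
    let orders = vec![
        Order { customer: "alice".into(), total_cents: 1200, age_days: 5 },
        Order { customer: "bob".into(), total_cents: 800, age_days: 45 },
    ];
    println!("{:?}", top_customers_last_30_days(&orders));
}
```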
Language agnostic development may be quite viable within a year or so.
It's like writing software in binary: you could, but letting a compiler handle that and working at an abstracted layer is more valuable in time and cost.
I’m saying that the language is becoming something which software can effectively abstract, so moving the focus yet another layer up will likely be more valuable than clinging to increasingly obsolete paradigms.
> Language agnostic development may be quite viable within a year or so.
I doubt that very much. GPT-4 (to my knowledge still the best LLM) is far from being there. Now that my initial hype has worn off, I have basically stopped using it, because I have to “help” it so much (and it got noticeably worse over time…) that I spend more time getting any usable result out of it than I would just writing the goddamn code myself.
There has to be a very large step forward before this is anywhere near feasible (maybe it's already true for some “boilerplate” React UI code, though). Keep in mind that you still have to review all of the code, which takes a good chunk of the time (especially when it's full of issues, as it tends to be with LLMs). Often I go over it and think yes, this is OK, and then I look at it in more detail and find a lot of issues that cost me more time than writing the code myself in the first place would have.
I have actually fed GPT-4 a lot of natural-language instructions to write code, and it was kind of a disaster. I have to try that again with more code in the instructions, since I think it's better to just give an LLM the code directly; if it really gets smart enough, it will understand the intent of the code without comments (it has seen a lot of code, after all).
Context size is also a big issue: the LLM just doesn't have enough of an overview of the code and the relevant details. (I still need to try the 32k GPT-4 model and feed it more of the architecture's code; that may help, but it's obviously a lot of work…)
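If I do try the 32k model, gathering that context would probably look something like this rough sketch (made-up paths, and a crude characters-divided-by-four token estimate), which is exactly the kind of extra work I mean:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Very rough heuristic: ~4 characters per token. An assumption, not a measured value.
fn estimated_tokens(text: &str) -> usize {
    text.len() / 4
}

/// Walk `dir` recursively, appending .rs files to `out` until a rough token budget is hit.
fn collect_context(dir: &Path, budget_tokens: usize, out: &mut String) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            collect_context(&path, budget_tokens, out)?;
        } else if path.extension().map_or(false, |ext| ext == "rs") {
            let source = fs::read_to_string(&path)?;
            if estimated_tokens(out) + estimated_tokens(&source) > budget_tokens {
                return Ok(()); // budget exhausted, stop adding files
            }
            out.push_str(&format!("// file: {}\n{}\n", path.display(), source));
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut context = String::new();
    // Hypothetical: the project's ./src, leaving headroom below ~32k tokens for the actual question.
    collect_context(Path::new("./src"), 24_000, &mut context)?;
    println!("collected ~{} tokens of context", estimated_tokens(&context));
    Ok(())
}
```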
The same goes for humans: if your code is really too complex, you can likely simplify it so that humans can read it without comments.
If not, it falls into the first category I listed (complex math or similar), and then comments do of course make sense for a complex piece of code that needs more context.
Otherwise I would only add comments for edge cases and ideas (e.g. TODO).
For the rest, good API docs (Javadoc, rustdoc, etc.) are more than enough; if it's clear what a function should do and the function is written in a modular way, the code should be easy to read, IMHO.
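Roughly what I have in mind, as a made-up Rust sketch (rustdoc-style, since that came up): the doc comment states the contract, the body reads on its own, and inline comments are reserved for one edge case and a TODO.

```rust
/// Returns the median of `samples`, or `None` if the slice is empty.
///
/// The input does not need to be sorted; a sorted copy is made internally.
fn median(samples: &[f64]) -> Option<f64> {
    if samples.is_empty() {
        return None;
    }
    let mut sorted = samples.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let mid = sorted.len() / 2;
    // Edge case: with an even number of samples, take the mean of the two middle values.
    if sorted.len() % 2 == 0 {
        Some((sorted[mid - 1] + sorted[mid]) / 2.0)
    } else {
        Some(sorted[mid])
    }
    // TODO: decide how NaN should be handled (currently the sort panics on it).
}

fn main() {
    assert_eq!(median(&[3.0, 1.0, 2.0]), Some(2.0));
    assert_eq!(median(&[1.0, 2.0, 3.0, 4.0]), Some(2.5));
}
```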
Really, if you feel you need comments, think about the code first: is this the simplest approach? Can I make it more readable? I feel like I wrote a lot of “unreadable” (or overly complex) code in my junior years…
What otherwise makes sense for me is a high level description of the architecture.
How were you feeding it?
There’s a world of difference between using ChatGPT and something like Copilot within a mature codebase.
Once a few of the Copilot roadmap features are added, I suspect you’ll be seeing yet another leap forward.
Too many people commenting on this subject focus on where the tech is today, without properly considering the jump from where it was a year ago to where it is now, and what that implies for next year or the year after.