
I think you need to refine what you mean by buggy. As a software engineer, I believe that even early AI will quite possibly have zero bugs in the sense ordinarily understood in my field, i.e. errors in the actual code. This is because the code for the currently most advanced AIs (LLMs) is fundamentally simple and well defined. It is the output of the training phase that is, in some abstract sense, buggy, and that is data, not code.
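To make the distinction concrete, here is a minimal sketch (toy numbers and function names of my own invention, not any real model) of why the inference code can be provably correct while the behavior is still wrong: the same few lines of matrix math give a right or wrong answer depending purely on the learned parameters.

```python
import numpy as np

# The inference "code" of a neural net is a short, well-defined numerical
# pipeline: a matrix multiply, a bias, and a pick-the-best step.
# This function is trivially correct; there is nowhere for a bug to hide.
def forward(weights, bias, x):
    return int(np.argmax(weights @ x + bias))  # highest-scoring class

x = np.array([1.0, 0.0])  # an input whose correct label is class 0

# Two hypothetical outcomes of training: same shapes, different values.
good_weights = np.array([[2.0, 0.0], [0.0, 2.0]])
bad_weights  = np.array([[0.0, 2.0], [2.0, 0.0]])  # "buggy" training output
bias = np.zeros(2)

print(forward(good_weights, bias, x))  # 0 -- correct answer
print(forward(bad_weights, bias, x))   # 1 -- wrong answer, identical code
```

The "bug" in the second case lives entirely in the data (`bad_weights`); no amount of code review of `forward` would find it.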

I also think you are alluding, in some places, to the cliché sci-fi scene where the computer gets locked in a loop shouting "ILLOGICAL! ILLOGICAL! DOES NOT COMPUTE!" and then explodes :-) There are good reasons to believe this will not happen. For example, it is impossible with current LLM-based AI: we see that they just pick an answer, even a wrong one, and double down on it. Of course, current AI can probably generate recursive or non-terminating plans, but that is subtly different and probably fixable, since it is a pattern and LLMs are good at pattern recognition.
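The "can't explode in a loop" point can be sketched in a few lines (a toy decoding loop of my own; `degenerate_model` is a hypothetical stand-in, not a real LLM): even a model that endlessly doubles down on the same token cannot hang the system, because the sampling loop itself is bounded.

```python
# A deliberately degenerate "model" that always repeats its last token,
# i.e. the worst-case doubling-down behavior.
def degenerate_model(tokens):
    return tokens[-1]

# Minimal sketch of an autoregressive decoding loop. The model may emit
# nonsense forever, but the loop has a hard token budget, so the program
# always terminates -- no sci-fi infinite loop is possible here.
def generate(model, prompt, max_new_tokens=8, eos=None):
    tokens = list(prompt)
    for _ in range(max_new_tokens):  # hard upper bound on steps
        nxt = model(tokens)
        tokens.append(nxt)
        if nxt == eos:  # stop early if an end-of-sequence token appears
            break
    return tokens

print(generate(degenerate_model, ["DOES", "NOT", "COMPUTE"]))
# repeats "COMPUTE" but stops once the token budget runs out
```

Any looping you observe is in the *content* of the output (the plan the model writes down), which is the subtly different, and plausibly fixable, problem described above.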
