Are we on the verge of new disruptive AI technology?

I don’t understand why any of this is terribly impressive. At the dawn of computing, Joseph Weizenbaum’s ELIZA showed how easy it was to simulate human conversation with pattern matching and some built-in knowledge of context. People developed strong reactions to the exchange and found it hard to believe they were not talking to another person.

Weizenbaum himself was surprised and shocked that individuals, including his own secretary, attributed human-like feelings to the computer program.

The only “advance” today is that large data sets and statistics make it possible to train these pattern-matching algorithms on a much wider variety of contexts.
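For anyone who hasn’t seen how little machinery the original trick took, here is a toy sketch in the spirit of ELIZA. The three rules are invented for illustration; Weizenbaum’s actual DOCTOR script was richer, with keyword ranking and a response memory.

```python
import re

# Toy ELIZA-style rewrite rules, tried in order. Each maps a matched
# pattern to a canned response template. The rules here are invented
# for illustration only.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0} seems important to you."),
]

def respond(utterance: str) -> str:
    """Return the first matching canned response, or a generic prompt."""
    utterance = utterance.rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about my exams."))  # Why do you say you are worried about my exams?
print(respond("I feel alone."))                 # Tell me more about feeling alone.
print(respond("It rained today."))              # Please go on.
```

People projected understanding onto exactly this kind of machinery; the modern difference is scale, not kind.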

In the past month, I’ve seen two examples where it was painfully obvious that these algorithms were purely confabulating confidence and knowledge.

  1. In a retro computing group I belong to, someone asked one of these LLMs to generate Python code demonstrating “deduction” in propositional logic. It produced BS code that didn’t come close to generating a truth table, which is what would be expected in any intro logic class (a sketch of what that should look like appears after the quote below).

  2. Why are LLMs Bad at Math:

See also:

It’s not just that the performance of MathGLM steadily declines as the problems get bigger, with the discrepancy between it and a calculator steadily increasing, it’s that the LLM-based system is generalizing by similarity, doing better on cases that are in or near the training set, never, ever getting to a complete, abstract, reliable representation of what multiplication is … That, in a nutshell, is why we should never trust pure LLMs; even under carefully controlled circumstances with massive amounts of directly relevant data, they still never really get even the most basic linear functions.
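To make “generalizing by similarity” concrete, here is a toy sketch of my own (it is not MathGLM, just an illustration of the failure mode): a “model” that has memorized the products of all one- and two-digit operands and answers any query by recalling the closest memorized example.

```python
import random

random.seed(0)

# "Training set": every product of operands from 1 to 99, memorized verbatim.
train = [(a, b, a * b) for a in range(1, 100) for b in range(1, 100)]

def nn_predict(a: int, b: int) -> int:
    """Answer by recalling the product of the most similar memorized pair.
    There is no multiplication rule anywhere in this model."""
    _, _, product = min(train, key=lambda row: (row[0] - a) ** 2 + (row[1] - b) ** 2)
    return product

def relative_error(a: int, b: int) -> float:
    return abs(nn_predict(a, b) - a * b) / (a * b)

for digits in (1, 2, 3, 4):
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    errs = [relative_error(random.randint(lo, hi), random.randint(lo, hi))
            for _ in range(100)]
    print(f"{digits}-digit operands: mean relative error {sum(errs) / len(errs):.1%}")
```

Expect the 1- and 2-digit rows to show zero error, since those cases are literally in the memorized set, and the error to climb steeply for 3- and 4-digit operands. The actual multiplication rule `a * b` has no such boundary, which is the whole point of the quote above.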
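And, going back to the first example, here is roughly what “deduction” code from an intro logic class looks like: enumerate the truth table and check that every assignment satisfying the premises also satisfies the conclusion. This is my own minimal sketch; encoding formulas as Python functions over a truth assignment is just one convenient choice.

```python
from itertools import product

# Brute-force semantic entailment via truth tables. A formula is a
# function from a truth assignment (a dict of variable -> bool) to bool.

def truth_table(variables, formulas, labels):
    """Print the truth table for the given formulas."""
    print(" | ".join(variables + labels))
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        cells = list(values) + [f(env) for f in formulas]
        print(" | ".join("T" if c else "F" for c in cells))

def entails(variables, premises, conclusion):
    """True iff every assignment satisfying all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample: premises true, conclusion false
    return True

p           = lambda env: env["P"]
q           = lambda env: env["Q"]
p_implies_q = lambda env: (not env["P"]) or env["Q"]

truth_table(["P", "Q"], [p_implies_q], ["P -> Q"])
print(entails(["P", "Q"], [p, p_implies_q], q))  # True: modus ponens is a valid deduction
print(entails(["P", "Q"], [q, p_implies_q], p))  # False: affirming the consequent is not
```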

The oldest scientific society, the Royal Society, has as its motto “Nullius in verba”, meaning “on the word of no one.” That skepticism needs to be extended to programs that simulate human speech.
