I don’t know if somebody has started a topic on this already, but the thing is, I’m quite impressed by the video of the new OpenAI robot: https://www.youtube.com/watch?v=Sq1QZB5baNw
I am one of those people who believe there is a lot of bluff (and probably a market bubble) around AI. But this video totally changed my mind, and I wanted to hear the opinions of the people on this forum.
What I believe is that in many senses, especially in academia, AI has been overextended: researchers have been trying to apply machine learning and AI in realms where it is not really the optimal tool. However, I think that building new robots on top of generative AI can be a truly disruptive technology, in the sense that I believe we are going to see these new technologies all around us within the next few years.
I am impressed by AI tools like generative tools for text and pictures (if I were between 15 and 25 years old, I would probably be fascinated; being older, I am merely impressed).
We can therefore also expect impressive things in robotics: a step further toward Asimov’s world (but probably without his laws, and therefore more dystopian).
Another thought: I don’t like the adjective “disruptive”, as “disruption” used to carry a generally negative meaning. In the 20th century you could speak of progress, sometimes of revolution, never of disruption. It probably now means “breaking open a market”, which is positive for people positioning themselves in the new one.
Well, I’m concerned about how these advances will take people’s jobs. I think these technologies will make rich people richer, while working people struggle even more to survive. I see this robot and I would like to have one of my own, like a pet, but it will certainly have a serious impact on the job market.
I’m always positive toward technology that lets people work less. But the problem is that it will not take people’s jobs so much as lower the value of work without reducing its amount.
As for what concerns us: we all know that you can ask ChatGPT to write code. And we all know that the code will not necessarily be correct, because there is no “correctness check” behind it, so you have to review it line by line afterwards, even if it compiles. Will your future boss know this, though? Will your boss value your hard work as before, even though “you can write code with ChatGPT and it compiles”? Will you be able to keep your job when you know these tools are just assistants, but your boss is pretty much convinced that any high-schooler with ChatGPT can now write valuable code?
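To make that concrete, here is a toy example of my own (hypothetical, not actual ChatGPT output) of code that runs without complaint yet is simply wrong; only checking it against a known answer exposes the bug:

```python
# Looks plausible and runs without errors, but is wrong: the slice
# silently drops the last element before averaging.
def average(values):
    total = 0
    for v in values[:-1]:  # bug: excludes the final value
        total += v
    return total / len(values)

print(average([2, 4, 6]))  # prints 2.0, but the true mean is 4.0

# Only an explicit check catches it; this assertion fails.
assert average([2, 4, 6]) == 4.0, "plausible-looking code, wrong result"
```

The point is that “it compiles” (or here, “it runs”) tells you nothing about whether the code does what was asked.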
I personally don’t like AI, mostly because it is a technology heavily used in military and surveillance applications, which are against my personal ethics, so I try to stay as far away from it as I can.
I don’t understand why any of this is terribly impressive. At the dawn of computing, it was easy to simulate human speech using pattern matching and some built-in knowledge of context. People would develop strong reactions to the exchange, and find it hard to believe it was not another person.
He (Joseph Weizenbaum, the inventor of ELIZA) was surprised and shocked that individuals, including Weizenbaum’s secretary, attributed human-like feelings to the computer program.
The only “advance” today is that it is possible to use large data sets and statistics to train these pattern matching algorithms on a much larger variety of contexts.
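For the flavor of how little machinery that takes, here is a minimal ELIZA-style sketch (my own toy reconstruction, not Weizenbaum’s actual script): a handful of regex rules that reflect the user’s words back, with no understanding anywhere.

```python
import re

# ELIZA-style rules: (pattern, response template).
# \1 echoes back whatever the user said after the matched phrase.
RULES = [
    (r"i feel (.*)", r"Why do you feel \1?"),
    (r"i am (.*)", r"How long have you been \1?"),
    (r"my (.*)", r"Tell me more about your \1."),
    (r".*", "Please go on."),  # fallback when nothing else matches
]

def respond(sentence: str) -> str:
    s = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, s)
        if m:
            return m.expand(template)
    return "Please go on."

print(respond("I feel ignored by my computer."))
# -> Why do you feel ignored by my computer?
```

People in the 1960s attributed feelings to essentially this.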
In the past month, I’ve seen two examples where it was painfully obvious that these algorithms were purely confabulating confidence and knowledge.
In a retro computing group I belong to, someone asked one of these LLMs to generate Python code demonstrating “deduction” in propositional logic. It produced BS code that didn’t even come close to generating a truth table, which is what would be expected in any intro logic class.
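For contrast, here is roughly what any intro logic class would expect (my own sketch, not the code the LLM produced): a brute-force truth table that checks a classic deduction, modus ponens.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not p) or q

# Modus ponens: from (P -> Q) and P, deduce Q.
# The deduction is valid iff Q holds in every row where both premises hold.
valid = True
print(f"{'P':6} {'Q':6} P->Q")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:6} {q!s:6} {implies(p, q)!s:6}")
    if implies(p, q) and p and not q:
        valid = False

print("modus ponens valid:", valid)  # -> True
```

A dozen lines, and it actually enumerates the cases instead of confabulating them.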
Why are LLMs Bad at Math:
See also:
It’s not just that the performance on MathGLM steadily declines as the problems get bigger, with the discrepancy between it and a calculator steadily increasing, it’s that the LLM based system is generalizing by similarity, doing better on cases that are in or near the training set, never, ever getting to a complete, abstract, reliable representation of what multiplication is …That, in a nutshell, is why we should never trust pure LLMs; even under carefully controlled circumstances with massive amounts of directly relevant data, they still never really get even the most basic linear functions.
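To see what that “complete, abstract, reliable representation” amounts to, here is grade-school long multiplication in a few lines (a sketch of mine, assuming decimal strings as input). It is exact at any size, with no training set and no “near the training set” cases, because it is a rule rather than a statistical guess:

```python
def long_multiply(a: str, b: str) -> str:
    # Grade-school long multiplication on decimal strings: the same rule
    # works for 2-digit and 200-digit inputs alike.
    da = [int(d) for d in reversed(a)]
    db = [int(d) for d in reversed(b)]
    result = [0] * (len(da) + len(db))
    for i, x in enumerate(da):
        carry = 0
        for j, y in enumerate(db):
            total = result[i + j] + x * y + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(db)] += carry
    # Restore normal digit order and strip leading zeros.
    return "".join(map(str, reversed(result))).lstrip("0") or "0"

print(long_multiply("123456789", "987654321"))  # 121932631112635269
assert int(long_multiply("123456789", "987654321")) == 123456789 * 987654321
```

A system that had truly learned this rule would never drift as the numbers grow; the fact that it does is the tell.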
The oldest scientific society, the Royal Society, has as its motto “Nullius in verba” meaning “on the word of no one.” This needs to be extended to programs that simulate human speech.
Yes, I think AI is just the continuation of computer science. Personally, I would tend (but it may be a cultural bias!) to date the beginning of both AI and computer science to the 17th century, with Blaise Pascal and his Pascaline machine (he started working on it in 1642, when he was only 18-19 years old; a kind of Steve Wozniak of his day). Computing, previously thought to be an exclusively human capacity, could now be done by a machine. That was amazing in those days…
In that sense, computer science and AI are two names for the same thing.
Hello. I think much like you in many respects. I believe there is a lot of smoke and mirrors around AI, and that it is not what it promises to be. We mostly see the cherry-picked results of the creators of this technology. On the other hand, as you mention, it is a powerful tool for surveillance, and also for propagandistic purposes. I believe there are more “bots” on social media now than ever before, and this trend will keep growing. Noam Chomsky says that AI is nothing more than a form of sophisticated plagiarism (which I believe is true), and that the major danger it poses is that it will be used for the manipulation and shaping of public opinion.
I think generative large language models are a good coding assistant, and a good overall assistant, but this idea of having automated robots everywhere is, at least right now, pure science fiction.
Hi. My concern is mostly about how this technology is going to be used. It is already being used for propaganda. And now, I believe these robots will take people’s jobs. Probably not the jobs of specialized people, but all the kinds of jobs that are, in some sense, easy to learn and automate. And I know, maybe nobody should have to do that kind of work, but what is going to happen to the people who actually depend on it to survive?
Regarding coding, it is not that programmers are going to disappear, but that far less workforce will be needed to tackle big projects with this kind of technology. On the other hand, new jobs will probably emerge. In general, though, I believe this technology will only make things worse for the layman, make the rich even richer, and the oppressive powers even more powerful.
I’m aware, roughly speaking, that these are just programs trained on large data sets that use statistics to generate text (which I believe is why they fail so badly at math). However, I do believe there is something disruptive in this technology. First, you now have an automated robot that can interact with the environment and learn from it, and it won’t take long until these robots are able to perform simple tasks. Second, the text and even the spoken language generated by these LLMs is now hard for most people to distinguish from something written by a human. And last, but not least, this big-data era is also the era of propaganda and post-truth: these are huge propaganda tools in the hands of powerful people, as we have seen since at least the Cambridge Analytica scandal, which showed how big data and machine learning can be used to misguide entire societies and countries.
I think Leibniz deserves an honorable mention, not only for his “stepped reckoner” but for his important contributions to mathematical logic. He anticipated the rigorous development of the infinitesimal calculus by some 200 years.
Software that isn’t as smart as a 5th grader when it comes to learning linear functions isn’t going to replace anyone, but I’m sure someone will try. This is entirely propaganda and, as you say, “smoke and mirrors.”
Yes, of course, and he worked on binary computation. He even imagined a binary computing machine, using balls and holes, in a 1679 manuscript, but it was never built. His ideas came far too early…
Yesterday I had this cool idea: in a new Terminator movie there should be scientist robots that create the human tissue for the other robots and direct all the work of mass-producing them. I imagined the scene as a woman in a lab doing the work, while a human nearby doesn’t know she is a robot.