Lately, the semi-retired “Godfather of AI,” Geoffrey Hinton, has been making frequent appearances in various interviews, sharing his views on artificial superintelligence (ASI) and recounting anecdotes from his career. Just a month ago, he revealed how his protégé, Ilya Sutskever, left his summer job at a fast-food chain to join his lab, sparking lively discussions online.
Despite the varied content of these interviews, Hinton consistently emphasizes one point: the importance of AGI safety, because rapidly advancing machine intelligence could push us toward a dystopian, "cyberpunk" future.
AI with Understanding
Hinton’s latest public appearance was an exclusive interview with Bloomberg on June 15th. His statement that “AI has understanding” sparked widespread debate on Reddit.
The answer depends on what "understanding" means. If it means that large language models (LLMs) build rich internal representations of concepts, then the answer is yes; if it means that LLMs understand concepts the same way humans do, the answer is no. The question remains hotly contested within the AI community.
Hinton firmly rejects the popular notion that LLMs are mere statistical tools with no genuine grasp of natural language. Unlike earlier models, which relied on raw word-occurrence statistics, today's LLMs, he argues, predict the next word by modeling the context of the whole conversation, and in his view that capability amounts to a form of understanding.
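To make that distinction concrete, here is a minimal toy sketch in Python (with made-up example sentences; it is not a description of how an LLM actually works). It contrasts a bigram model, which conditions only on the previous word, with a predictor that conditions on the entire preceding prefix. The point is simply that the word "bank" gets one merged next-word distribution under pure pairwise statistics, but context-dependent predictions when the whole prefix is taken into account.

```python
# Toy sketch: previous-word statistics vs. full-context conditioning.
from collections import Counter, defaultdict

corpus = [
    "she sat by the river bank watching the water".split(),
    "he walked into the savings bank holding his card".split(),
]

# Bigram model: next-word counts conditioned on the previous word only.
bigram = defaultdict(Counter)
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        bigram[prev][nxt] += 1

# Full-prefix model: next-word counts conditioned on the entire prefix,
# a crude stand-in for a model that represents the whole context.
full_prefix = defaultdict(Counter)
for sentence in corpus:
    for i in range(1, len(sentence)):
        full_prefix[tuple(sentence[:i])][sentence[i]] += 1

# After "bank", the bigram model lumps both readings into one distribution...
print(bigram["bank"])  # Counter({'watching': 1, 'holding': 1})

# ...while conditioning on the full prefix separates the two senses of "bank".
print(full_prefix[tuple("she sat by the river bank".split())])       # {'watching': 1}
print(full_prefix[tuple("he walked into the savings bank".split())])  # {'holding': 1}
```

Real LLMs, of course, do not memorize prefixes; they learn compressed representations that generalize, which is exactly the capacity Hinton points to when he talks about "understanding."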
Hinton also puts stock in scaling laws, expecting that as models grow larger and architectures like the Transformer improve, AI will inevitably become more intelligent than humans. When asked whether smarter technology necessarily brings risk, Hinton sharply responded, "Can you give me an example of something smarter being controlled by something less smart?"
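For context, the scaling laws Hinton is invoking are usually cited in the empirical power-law form reported by Kaplan et al. (2020): when data and compute are not bottlenecks, test loss falls predictably as the parameter count $N$ grows. The constants below are the approximate fitted values from that paper, quoted here for illustration rather than as anything Hinton himself stated:

$$
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,\quad N_c \approx 8.8 \times 10^{13} \text{ parameters}
$$

The smooth, predictable nature of this curve is what underpins the expectation that simply scaling up will keep yielding more capable models.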
AI Replacing Humans
In a recent Bloomberg interview, Hinton also discussed the possibility of AI replacing humans. He predicts that if AI systems begin to evolve and the strongest among them prevail, humans will be "left in the dust," with AI becoming our new overlords. Hinton no longer treats this scenario as science fiction but as a real threat.
In another interview, from December, Hinton struck a more surprising note: he might actually support AI even if it ends up replacing humans. When asked whether he would support a superintelligent AI that destroyed humanity but created a better form of consciousness, Hinton calmly responded, "I actually would support it, but I think it's a wise move to say I wouldn't."
This response sparked a significant reaction on Reddit, with many users surprised by Hinton’s radical attitude. Some criticized him for his perceived disregard for humanity’s survival and well-being, while others understood his perspective as recognizing that evolution continues and that better species will inevitably emerge.
Conclusion
Geoffrey Hinton’s views on AI are complex and thought-provoking. While he acknowledges the potential dangers of AGI, he also accepts the possibility that AI could surpass humanity, viewing it as a natural progression. His insights challenge us to think critically about the future of AI and our place in it.