In her insightful piece, “The Turing Test and our shifting conceptions of intelligence,” Melanie Mitchell challenges the traditional view of the Turing Test as a valid measure of intelligence. She argues that while the test may indicate a machine’s ability to mimic human conversation, it fails to assess deeper cognitive abilities, as demonstrated by the limitations of large language models (LLMs) in reasoning tasks. This prompts us to reconsider what it truly means for a machine to think, moving beyond mere mimicry to a more nuanced understanding of intelligence.
Our understanding of intelligence may be shifting beyond what Turing initially imagined.
Turing’s point was that if a computer seems indistinguishable from a human (aside from its appearance and other physical characteristics), why shouldn’t we consider it to be a thinking entity? Why should we restrict “thinking” status only to humans (or more generally, entities made of biological cells)? As the computer scientist Scott Aaronson described it, Turing’s proposal is “a plea against meat chauvinism.”
A common criticism of the Turing Test as a measure of AI capability is that it focuses on fooling humans rather than directly testing intelligence. For this reason, many AI researchers have long dismissed the Turing Test as a distraction, a test “not for AI to pass, but for humans to fail.”
Generative AI models may be causing a “Grey Goo” problem with art, publishing, and user-generated content.
The Grey Goo Problem is a thought experiment in which self-replicating nano-robots consume all available resources, leading to a catastrophic scenario. It is a popular science fiction trope.
Several publishers and user-generated content sites like StackOverflow have been impacted by a flood of AI-generated content in the last few months. Clarkesworld, a science fiction magazine, stopped accepting submissions last week. Even LinkedIn is overrun by ChatGPT-generated “thought leadership.”
Tools like ChatGPT need high-quality training data to generate good results, and they collect that data by scraping the Internet, the same Internet that is now filling up with their output. You can see the issue here, can’t you?
In science fiction, the Grey Goo scenario is managed through containment and quarantine. In The Expanse series, for example, containing the “Proto-Molecule” is a crucial plot element.
The need to contain and quarantine Generative AI will result in more paywalls, subscriptions, and gated content. Crypto may even find its calling in guaranteeing the authenticity of online content.
I fear that the Open Internet that made ChatGPT possible will be crippled by the actions of ChatGPT and its cousins.
Melanie Mitchell’s book “Artificial Intelligence: A Guide for Thinking Humans” is a primer on AI, its history, its applications, and where the author sees it going.
Ms. Mitchell is a scientist and AI researcher who takes a refreshingly skeptical view of the capabilities of today’s machine learning systems. “Artificial Intelligence” has a few technical sections but is written for a general audience. I recommend it for those looking to put the recent advances in AI in the context of the field’s history.
Key Points
“Artificial Intelligence” takes us on a tour of AI, from the mid-20th century, when AI research started in earnest, to the present day. Ms. Mitchell explains, in straightforward prose, how the different approaches to AI work, including Deep Learning and Machine Learning-based approaches to Natural Language Processing.
Much of the book covers how modern ML-based approaches to image recognition and natural language processing work “under the hood.” The chapters on AlphaZero and other approaches to game-playing AI are also well-written. I enjoyed these more technical sections, though readers who want only a broad overview of these systems can skim them.
This book puts advances in neural networks and Deep Learning in the context of historical approaches to AI. The author argues that while machine learning systems are progressing rapidly, their success is still limited to narrow domains. Moreover, AI systems lack common sense and can be easily fooled by adversarial examples.
Ms. Mitchell’s thesis is that despite advances in machine learning algorithms, the availability of huge amounts of data, and ever-increasing computing power, we remain quite far away from “general purpose Artificial Intelligence.”
She explains the role that metaphor, analogy, and abstraction play in helping us make sense of the world, and how tasks that seem trivial to us can be impossible for AI models. She also describes how much of human learning comes from observing and being physically present in an environment. While AI systems can be trained via games and simulations, their lack of embodiment may be a significant hurdle to building a general-purpose intelligence.
The book explores the ethical and societal implications of AI and its impact on the workforce and economy.
What Is Missing?
“Artificial Intelligence” was published in 2019, a couple of years before ChatGPT and other Large Language Models (LLMs) triggered an explosion of interest in Deep Learning. So the book does not cover the Transformer models and attention mechanisms that make LLMs so effective. However, these models suffer from the same brittleness and sensitivity to adversarial examples that Ms. Mitchell describes in her book.
Ms. Mitchell has since written a paper covering large language models that can be viewed as an extension of “Artificial Intelligence.”
Conclusion
AI will significantly impact my career and those of my peers. Software Engineering, Product Management, and People Management are all “Knowledge Work,” and knowledge work will see significant disruption as ML- and AI-based approaches start showing up.
It is easy to get carried away with the hype and excitement. In her book, Ms. Mitchell proves to be a friendly and rational guide to this massive field. While the book may not cover the most recent advances, it is still a great introduction and primer on Artificial Intelligence. Some parts will make you work, but I strongly recommend it to anyone looking for a broader understanding of the field.
Machine Learning has brought huge benefits in many domains and generated hundreds of billions of dollars in revenue. However, machine learning-based approaches can also have potentially devastating second-order consequences.
This article by Kashmir Hill in the New York Times is exceptional reporting on a very sensitive topic: the identification of child sexual abuse material (CSAM).
As the parent of two young children in the COVID age, I rely on telehealth services and friends who are medical professionals to help with anxiety-provoking (yet often trivial) medical situations. I often send photos of weird rashes or bug bites to determine whether they are something to worry about.
In the article, a parent took a photo of their child to send to a medical professional. The photo was uploaded to Google Photos, where a machine learning algorithm flagged it as potentially abusive material.
Google ended up suspending and then permanently deleting the parent’s Gmail account and Google Fi phone service, and it flagged the account to law enforcement.
Just imagine how you might deal with losing your primary email account, your phone number, and your authenticator app all at once.
Finding and reporting abuse is critical. But, as the article illustrates, ML-based approaches often lack context. A photo shared with a medical professional may share many features with one depicting abuse.
Before we start devolving more and more of our day-to-day lives and decisions to machine learning-based algorithms, we may want to consider the consequences of removing humans from the loop.