Generative Models and the “Grey Goo Problem”

Generative AI models may be causing a “Grey Goo” problem with art, publishing, and user-generated content. 

Thomas Jane encounters the Protomolecule in The Expanse

The Grey Goo Problem is a thought experiment in which self-replicating nano-robots consume all available resources, leading to catastrophe. It is a popular science fiction trope.

Several publishers and user-generated content sites like Stack Overflow have been hit by a flood of AI-generated content in the last few months. Clarkesworld, a science fiction magazine, stopped accepting submissions last week. Even LinkedIn is overrun by ChatGPT-generated “thought leadership.”

Tools like ChatGPT need high-quality training data to generate good results, and they collect that data by scraping the Internet. You can see the issue here, can’t you? As AI-generated content floods the web, future models will increasingly be trained on the output of their predecessors.
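
To make that feedback loop concrete, here is a toy Python sketch of my own (an illustration, not a model of any real system): each “generation” fits a simple statistical model to its corpus, then the next corpus is drawn from that model. The 0.9 “temperature” factor is an assumption standing in for generative samplers’ tendency to under-represent rare content.

```python
import random
import statistics

# Toy sketch of the scrape-train-publish loop. Each "generation" fits a
# Gaussian "model" to its corpus; the next corpus is drawn from that model.
# The 0.9 factor is an assumption standing in for samplers' tendency to
# under-represent rare content.
random.seed(0)
corpus = [random.gauss(0.0, 1.0) for _ in range(5000)]  # human-written "Internet"

for generation in range(1, 8):
    mu = statistics.fmean(corpus)
    sigma = statistics.stdev(corpus)
    print(f"generation {generation}: corpus diversity (stdev) = {sigma:.3f}")
    corpus = [random.gauss(mu, 0.9 * sigma) for _ in range(5000)]  # mostly model output
```

Each pass loses a little of the tails, the unusual human-written material, and the corpus grows steadily more homogeneous: grey goo, in miniature.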

In science fiction, the Grey Goo scenario is managed through containment and quarantine. In The Expanse series (see image), for example, containing the “Protomolecule” is a crucial plot element.

The need to contain and quarantine Generative AI will result in more paywalls, subscriptions, and gated content. Crypto may even find its calling in guaranteeing the authenticity of online content. 
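
As a sketch of how such an authenticity guarantee could work, here is a minimal example using Ed25519 digital signatures from the Python cryptography package (my own illustration of the general idea, not any specific scheme): a publisher signs content once, and anyone can later verify that it has not been altered.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A publisher signs their article once; readers verify with the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"A human-written post"
signature = private_key.sign(article)

public_key.verify(signature, article)  # passes silently: content is untouched
print("original article verifies")

try:
    public_key.verify(signature, article + b" [edited]")
except InvalidSignature:
    print("tampered article rejected")
```

Note that a signature only proves who published the content, not how it was produced; it gives you provenance, not proof of humanity.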

I fear that the Open Internet that made ChatGPT possible will be crippled by the actions of ChatGPT and its cousins.

Book Review: “Artificial Intelligence – A Guide for Thinking Humans” by Melanie Mitchell


Introduction

Melanie Mitchell’s book “Artificial Intelligence – A Guide for Thinking Humans” is a primer on AI, its history, its applications, and where the author sees it going. 

Ms. Mitchell is a scientist and AI researcher who takes a refreshingly skeptical view of the capabilities of today’s machine learning systems. “Artificial Intelligence” has a few technical sections but is written for a general audience. I recommend it for those looking to put the recent advances in AI in the context of the field’s history.

Key Points

“Artificial Intelligence” takes us on a tour of the field, from the mid-20th century, when AI research started in earnest, to the present day. Ms. Mitchell explains, in straightforward prose, how the different approaches to AI work, including Deep Learning and machine-learning-based approaches to Natural Language Processing.

Much of the book covers how modern ML-based approaches to image recognition and natural language processing work “under the hood.” The chapters on AlphaZero and other game-playing AI are also well written. I enjoyed these more technical sections, but readers who want only a broad overview of these systems can skim them.

This book puts advances in neural networks and Deep Learning in the context of historical approaches to AI. The author argues that while machine learning systems are progressing rapidly, their success is still limited to narrow domains. Moreover, AI systems lack common sense and can be easily fooled by adversarial examples. 
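
To show what “fooled by adversarial examples” means in practice, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic way such examples are constructed. The tiny linear classifier is a stand-in of my own, not anything from the book:

```python
import torch
import torch.nn as nn

# Minimal FGSM sketch: nudge every input feature slightly in the direction
# that most increases the model's loss. The linear "classifier" is a
# stand-in; image models are attacked the same way.
torch.manual_seed(0)
model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)
label = model(x).argmax(dim=1)        # treat the current prediction as "truth"

loss_fn(model(x), label).backward()   # gradient of the loss w.r.t. the input

for epsilon in (0.1, 0.5, 2.0):       # perturbation budgets
    x_adv = x + epsilon * x.grad.sign()
    print(f"epsilon={epsilon}: prediction={model(x_adv).argmax(dim=1).item()}")
```

Each per-feature nudge is small, yet with a large enough budget the toy model’s prediction flips; image classifiers can be flipped the same way by perturbations a human would never notice.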

Ms. Mitchell’s thesis is that despite advances in machine learning algorithms, the availability of huge amounts of data, and ever-increasing computing power, we remain quite far away from “general-purpose Artificial Intelligence.”

She explains the role that metaphor, analogy, and abstraction play in helping us make sense of the world, and how tasks that seem trivial to us can be impossible for AI models to figure out. She also describes the importance of learning by observing and being present in an environment. While AI systems can be trained via games and simulations, their lack of embodiment may be a significant hurdle to building a general-purpose intelligence.

The book explores the ethical and societal implications of AI and its impact on the workforce and economy.

What Is Missing?

“Artificial Intelligence” was published in 2019, a couple of years before ChatGPT and other Large Language Models (LLMs) triggered the current explosion of interest in Deep Learning. So this book does not cover the Transformer models and attention mechanisms that make LLMs so effective. However, these models suffer from the same brittleness and sensitivity to adversarial inputs that Ms. Mitchell describes in her book.
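
For readers who want a glimpse of what the book predates, here is a bare-bones sketch of scaled dot-product attention, the core operation inside Transformers (a minimal illustration of my own; real LLMs add learned projections, multiple heads, and masking):

```python
import torch

def attention(q, k, v):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d**0.5  # how strongly each token attends to each other
    weights = torch.softmax(scores, dim=-1)    # each row sums to 1
    return weights @ v                         # weighted mix of the values

q = k = v = torch.randn(4, 8)                  # 4 "tokens", 8 dimensions each
print(attention(q, k, v).shape)                # torch.Size([4, 8])
```

Every output token is a context-dependent blend of all the others, which is much of what makes these models so effective at using long-range context.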

Ms. Mitchell has written a recent paper on large language models that can be viewed as an extension of “Artificial Intelligence.”

Conclusion

AI will significantly impact my career and those of my peers. Software Engineering, Product Management, and People Management are all “Knowledge Work,” and these fields will see significant disruption as ML- and AI-based approaches take hold.

It is easy to get carried away with the hype and excitement. In her book, Ms. Mitchell proves to be a friendly and rational guide to this massive field. While the book may not cover the most recent advances, it remains a great primer on Artificial Intelligence. Some parts will make you work, but I strongly recommend it to anyone looking for a broader understanding of the field.

Machine Learning and Its Consequences

Machine Learning has brought huge benefits in many domains and generated hundreds of billions of dollars in revenue. However, the second-order consequences of machine learning-based approaches can lead to potentially devastating outcomes. 

This article by Kashmir Hill in the New York Times is exceptional reporting on a very sensitive topic: the automated identification of child sexual abuse material (CSAM).

As the parent of two young children in the COVID age, I rely on telehealth services and friends who are medical professionals to help with anxiety-provoking (yet often trivial) medical situations. I often send photos of weird rashes or bug bites to determine whether they are something to worry about.

In the article, a parent took a photo of their child to send to a medical professional. The photo was uploaded to Google Photos, where a machine learning algorithm flagged it as potentially abusive material.

Google ended up suspending and permanently deleting the parent’s Gmail account, shutting down their Google Fi phone service, and flagging the account to law enforcement.

Just imagine losing your primary email account, your phone number, and your authenticator app all at once.

Finding and reporting abuse is critical. But, as the article illustrates, ML-based approaches often lack context: a photo shared with a medical professional can share features with images that document abuse.

Before we delegate more and more of our day-to-day lives and decisions to machine-learning-based algorithms, we may want to consider the consequences of removing humans from the loop.