The Mountain in the Sea by Ray Nayler

In times of rapid change, fiction serves as a reflective lens, casting light on current anxieties and offering insights beyond simple commentary. “The Mountain in the Sea” by Ray Nayler navigates the complex relationship between humans and technology.

But Nayler’s work goes further. While the book revolves around first contact with a civilization of octopuses, it delves into the nature of consciousness. It critiques our relentless drive to build, optimize, and consume. Nayler raises pertinent questions about loneliness, isolation, and the role of technology in our lives.

In the pages of “The Mountain in the Sea,” these themes come alive through well-realized characters and intricate plotlines, providing a vital tool for understanding our relationship with the worlds we live in – social, internal, external, and digital.

There are three PoV characters. Ha Nguyen is a scientist who has spent years studying cephalopods – the class of animals that includes octopuses, squid, and cuttlefish. The second character is a hacker, Rustem, who specializes in breaking AIs. The third is a young Japanese man, Eiko, who, through a series of unfortunate events, ends up a slave aboard an AI-powered fishing vessel.

Each character in the book deals with loneliness and isolation and has a somewhat awkward, if dependent, relationship with technology.

In general, AI, or the nature of intelligence, is a key theme that runs through the various plot lines of the book. Ha Nguyen and her team try to make sense of the culture and symbolic language of the Octopus civilization. Eiko has to deal with a murderous and indifferent AI driven by optimization algorithms built to maximize the amount of protein the ship hauls from the depleted oceans.

While I picked up the book because of the striking cover and my love of First Contact stories, I read it in a couple of sittings because its underlying themes – our relationship with and dependence on technology, and what it does to us and the world around us – resonated deeply with me. As someone excited about technology’s promises and challenges, this book prompted me to consider where our pursuit of innovation is taking us.

For example, people in “The Mountain..” have AI companions called point-fives. These companions form relationships but make no demands on their human owners. They give, but they do not take. Instead of two “people,” a relationship with one contains just one point five. Hence the moniker.

The loneliness of people in this world is dulled by technology, but it is not solved. The only way through is genuine contact – a process of both taking and giving.

I spend a lot of time working on and thinking about systems that would save time, optimize workflows, and make more money. Despite the potential for disruption and displacement, I welcome new technology like Generative AI.

But there are clear issues and risks in this somewhat reckless embrace of technology – threats not just to our environment but also to society and to ourselves.

“The Mountain in the Sea” is a cautionary tale and a story of hope. Each character’s arc in the novel is one of discovery and possible redemption. This book had me thinking long and hard about where our obsession with optimization and technology is taking us.

Smallville, Agent Based Modeling, and Capital Markets

Google and Stanford cooked up something intriguing – a virtual village called Smallville, populated by agents running on the ChatGPT API.

The researchers witnessed interesting emergent behavior, from coordination and communication to downright adorable interactions among the village’s wholesome residents.

Smallville even comes with cute graphics. But beyond the little sprites organizing Valentine’s Day parties (yes, that’s what happens in Smallville), this experiment made me think of my time, long ago and in a city far away, in capital markets.

Smallville (courtesy Ars Technica)

Sidebar

Derivatives are a vast market. And derivatives, like options, are priced using a somewhat arcane mathematical field called Stochastic Calculus – the Black-Scholes equation being a famous example.

The underlying assumption is that markets behave randomly, and Stochastic Calculus provides a way of modeling this behavior. But this approach can have problems. Even the famous creators of the Black-Scholes equation spectacularly blew up their fund, LTCM.
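To make the sidebar concrete, here is a minimal sketch of the Black-Scholes formula for a European call option. The parameter values are illustrative, and real desks use far richer models:

```python
from math import erf, exp, log, sqrt

# Minimal Black-Scholes price for a European call option.

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(spot, strike, years, rate, vol):
    d1 = (log(spot / strike) + (rate + vol**2 / 2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

# An at-the-money call: spot 100, strike 100, 1 year, 5% rate, 20% vol.
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 2))  # ~10.45
```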


Enter Agent Based Modelling (ABM): a nifty but niche approach that relies on simulating the behavior of market participants via Agents. The idea is that these simulations provide a better insight into how the market may evolve under different conditions.
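A toy illustration of the agent-based idea – the agents here just flip coins, and the price moves with the order imbalance. A real ABM would give agents strategies, budgets, and memory:

```python
import random

# Toy agent-based market: each agent randomly submits a buy (+1) or
# sell (-1) order each step, and price moves with the net imbalance.

def simulate(n_agents=100, n_steps=50, impact=0.001, seed=42):
    rng = random.Random(seed)
    price = 100.0
    prices = [price]
    for _ in range(n_steps):
        net_demand = sum(rng.choice((1, -1)) for _ in range(n_agents))
        price *= 1 + impact * net_demand  # linear price impact
        prices.append(price)
    return prices

prices = simulate()
print(len(prices))  # 51 observations: the starting price plus 50 steps
```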

Smallville shows us that LLM-driven agents are a possibility. Is it a stretch to envision specialized LLMs, trained on financial data, being used in ABM to predict how a particularly temperamental market might behave?

If you are a quantitative analyst at a sell-side firm looking to market-make a particularly exotic derivative, an LLM-powered approach may be viable. Or at least less boring than reaching for the Stochastic Calculus textbook.

The future might find traders armed with their own simulated worlds to forecast the price of, oh, let’s say, a derivative on the price of an exotic tulip, or a non-fungible JPEG of a smoking ape… who knows?

PS – The painting is called “The Copenhagen Stock Exchange” by P.S. Krøyer. You can see why an agent-based approach to simulating capital markets is a… possibility.

The Future is Here..

It’s just not very evenly distributed ..

This thought-provoking quote by William Gibson has been on my mind recently. The frantic pace of AI development contrasts sharply with the casual indifference of friends and family who do not care about cutting-edge technology.

Most people outside the tech community may have heard about ChatGPT, LLMs, or other “autonomous” technology in passing.

However, we will increasingly see these worlds intersect. Take, for example, this amusing video of a San Francisco police officer attempting to reason with a wayward Waymo car.

The cop steps in front of the slow-moving vehicle, commanding it to stop and stay like an errant puppy. He then lights a flare in front of the car, hoping the smoke will make it stop.

The video is funny but is also a cautionary tale of the types of issues that we will face when introducing autonomous agents to the broader public.

Just like the bewildered cop, we will have to deal with users who do not understand the capabilities and limitations of new technology.

Designing effective User Interfaces and Experiences for these complex new technologies will be critical to broad and safe adoption.

Book Review – A Philosophy of Software Design by John Ousterhout

“A Philosophy of Software Design” by John Ousterhout is a short and thought-provoking book about practical software development.

Key Concept

The book starts with a bold claim – the most critical job of a software engineer is to reduce and manage complexity.

Mr. Ousterhout defines complexity as “anything related to the structure of a software system that makes it hard to understand and modify the system.”

This definition serves as a motivating principle for the book. The author explores where complexity comes from and how to reduce it in a series of short chapters, which often include real-world code examples.

My well-thumbed copy of the book

Summary

The book starts with identifying the symptoms of complexity:

  1. The difficulty in making seemingly simple changes to a system.
  2. Increasing cognitive load – i.e., how much a developer needs to know to understand or change the system’s behavior.
  3. The presence of “Unknown unknowns” – undocumented and non-obvious behavior.

Mr. Ousterhout states that there are two leading causes of complexity in a software system:

  1. Dependencies – A given piece of code cannot be understood or modified in isolation.
  2. Obscurity – When vital information is not apparent. Obscurity arises from a lack of consistency in how the code is written and from missing documentation.

To reduce complexity, a developer must not only write correct code (“Tactical Programming”) but also invest time in clean designs, effective comments, and fixing problems as they arise (“Strategic Programming”).

The book provides several actionable approaches to reducing complexity.

Some highlights:

  • Modular design can help encapsulate complexity, freeing developers to focus on one problem at a time. It is more important for a module to have a simple interface than a simple implementation.
  • Prevent information leakage between modules and write specialized code that implements specific features (once!).
  • Functions (or modules) should be deep – and developers should prioritize sound design over writing short and easy-to-read functions.
  • Consider multiple options when faced with a design decision. Exploring non-obvious solutions before implementing them could result in more performant and less complex code.
  • Writing comments should be part of the design process, and developers should use comments to describe things that are not obvious from the code.
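The “deep module” idea can be sketched in a few lines – a small, simple interface hiding the messy details of tokenizing, normalizing, and counting. The class and method names here are illustrative, not examples from the book:

```python
# A "deep" module: the public interface is two small methods, while
# tokenizing, normalizing, and counting stay hidden inside.

class WordIndex:
    def __init__(self):
        self._counts = {}

    def add(self, text):
        """Index every word in the text, ignoring case and punctuation."""
        for token in text.lower().split():
            word = token.strip(".,;:!?")
            if word:
                self._counts[word] = self._counts.get(word, 0) + 1

    def count(self, word):
        """How many times has this word been indexed?"""
        return self._counts.get(word.lower(), 0)

index = WordIndex()
index.add("The quick brown fox. The lazy dog!")
print(index.count("the"))  # 2 -- callers never see the normalization logic
```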

The book concludes with a discussion of trends in software development, including agile development, test-driven development, and object-oriented programming.

Conclusion

“A Philosophy of Software Design” is an opinionated and focused book. It provides a clear view of the challenges of writing good code, which I found valuable.

Mr. Ousterhout provides actionable advice for novice and experienced developers by focusing on code, comments, and modules.

However, the book is also relatively low-level. It contains little discussion of system design, distributed systems, or effective communication (outside of good code and effective comments).

While books such as “The Pragmatic Programmer” provide a more rounded approach to software engineering, I admire that Mr. Ousterhout sticks to the core concepts in his book.

Generative Models and the “Grey Goo Problem”

Generative AI models may be causing a “Grey Goo” problem with art, publishing, and user-generated content. 

Thomas Jane encounters the Protomolecule in The Expanse

The Grey Goo Problem is a thought experiment in which self-replicating nano-robots consume all available resources, leading to a catastrophic scenario. It is a popular science fiction trope (see comments).

Several publishers and user-generated content sites like StackOverflow have been impacted by a flood of AI-generated content in the last few months. Clarkesworld, a science fiction magazine, stopped accepting submissions last week. Even LinkedIn is overrun by ChatGPT-generated “thought leadership.” 

Tools like ChatGPT need high-quality training data to generate good results. They collect training data by scraping the Internet. You can see the issue here, can’t you? 

The Grey Goo scenario is managed through containment and quarantine in science fiction. For example, in The Expanse series (see image), containing the “Proto-Molecule” is a crucial plot element. 

The need to contain and quarantine Generative AI will result in more paywalls, subscriptions, and gated content. Crypto may even find its calling in guaranteeing the authenticity of online content. 

I fear that the Open Internet that made ChatGPT possible will be crippled by the actions of ChatGPT and its cousins.

Google, Microsoft and the Search Wars

A demo cost Google’s shareholders $100bn last week. Why?

Google’s Share Price after the Bard event

Google has dominated search and online advertising for the last twenty years. And yet, it seems badly shaken by Microsoft’s moves to include a ChatGPT-like model in Bing search results. 

Why is this a threat to Google?

1️⃣ Advertising: Google’s revenues are driven by the advertisements it displays next to search results. The integration of language models allows users to get answers – removing the need to navigate to websites or view ads for a significant subset of queries.

2️⃣ Capital Expenditure: Search queries on Google cost around $0.01 each (see link in the comments for some analysis). Integrating an LLM like ChatGPT *could* add roughly 0.4 cents per query, since the costs of training and inference are high. Even with optimization, integrating LLMs into Google search will increase the cost of running search queries. According to some estimates, the direct impact on the bottom line could be almost $40bn.

3️⃣ Microsoft’s Position: Bing (and, more broadly, search) represents a small portion of Microsoft’s total revenues. Microsoft can afford to make search expensive and disrupt Google’s near-monopoly. Indeed Satya Nadella, in his interviews last week, said as much (see comments). 

4️⃣ Google’s Cautious AI Strategy: Google remains a pioneer in AI research. After all, the “T” in GPT stands for Transformer – a type of ML model created at Google! Google’s strategy has been to sprinkle AI into products such as Assistant, Gmail, Google Docs, etc. While it probably has sophisticated LLMs on hand (see LaMDA, for example), Google seems to have held off releasing an AI-first product to avoid disrupting its search monopoly.

5️⃣ Curse of the demo: Google’s AI presentation seemed rushed and a clear reaction to Microsoft’s moves. LLMs are known to generate inaccurate results, yet Google didn’t catch a seemingly obvious error made by its Bard LLM in a recorded video. This further reinforced the market sentiment that Google has lost its way.
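Returning to point 2️⃣, the cost argument is simple arithmetic. Every figure below is an assumption for illustration – the published estimates (like the ~$40bn one) use different assumptions and include hardware capex:

```python
# Back-of-envelope for the LLM-in-search cost argument.
# All figures are illustrative assumptions, not Google's numbers.

cost_per_query = 0.01        # ~1 cent for a traditional search
llm_extra_per_query = 0.004  # ~0.4 cents extra with an LLM in the loop
queries_per_year = 3.3e12    # assumed annual query volume

relative_increase = llm_extra_per_query / cost_per_query
extra_cost = llm_extra_per_query * queries_per_year

print(f"{relative_increase:.0%} cost increase per query")   # 40%
print(f"${extra_cost / 1e9:.1f}bn per year")                # $13.2bn
```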


Explaining Reinforcement Learning with Human Feedback with Star Trek

Microsoft announced today that it will include output from a Large Language Model based on GPT-3 in Bing search results. It will also release a new version of the Edge browser that includes a ChatGPT-like bot.

GPT-3 has been around for almost two years. What has caused this sudden leap forward in the capabilities of Large Language Models 🤔?

The answer is – *Reinforcement Learning From Human Feedback* or RLHF. 

By combining the capabilities of a large language model with those of another model trained on end-users’ preferences, we end up with the uncannily accurate results that ChatGPT seems to produce.

Ok – but how does RLHF work? Let me try and explain with a (ridiculous) analogy. 

In the Star Trek series, the Replicator is a device that can produce pretty much anything on demand. 

When Captain Picard says, “Tea, Earl Grey, Hot!” it produces the perfect cup of tea. But how might you train a Replicator? With RLHF, of course!

Explaining RLHF

Let’s see how:

1. Feed the Replicator with all the beverage recipes in the known universe.

2. Train it to predict what a recipe would be when given a prompt. I.e., when a user says “Tea, Earl Grey, Hot!” – it should be able to predict what goes into the beverage.

3. Train *another* model – let’s call it the “Tea Master 2000” – on Captain Picard’s preferences.

4. When the Replicator generates a beverage, the Tea Master responds with a score. +10 for a perfect cup of tea, -10 for mediocre swill. 

5. We now use Reinforcement Learning (RL) to optimize the Replicator to get a perfect ten score. 

6. After much optimization, the Replicator can generate the perfect cup of tea – tuned to Captain Picard’s preferences.
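The six steps above can be cartooned in a few lines of Python: a “Replicator” policy samples recipes, a “Tea Master” reward model scores them, and we nudge the policy toward high-reward outputs. Real RLHF trains a neural preference model and uses PPO-style updates, not this naive weight scaling:

```python
import random

# Cartoon RLHF loop. The recipes, scores, and update rule are all
# invented for illustration.

recipes = ["earl grey, hot", "earl grey, lukewarm", "coffee, black"]

def tea_master(recipe):
    """Stand-in reward model encoding Captain Picard's preference."""
    return 10.0 if recipe == "earl grey, hot" else -10.0

weights = {r: 1.0 for r in recipes}  # the policy's unnormalized preferences

def sample(rng):
    pick = rng.uniform(0, sum(weights.values()))
    for recipe, w in weights.items():
        pick -= w
        if pick <= 0:
            return recipe
    return recipes[-1]

rng = random.Random(0)
for _ in range(200):
    recipe = sample(rng)                        # Replicator generates
    score = tea_master(recipe)                  # Tea Master scores
    weights[recipe] *= 1 + 0.1 * score / 10.0   # reinforce or penalize
    weights[recipe] = max(weights[recipe], 1e-6)

print(max(weights, key=weights.get))  # "earl grey, hot"
```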

If you substitute the Replicator with an LLM like GPT-3, and substitute the Tea Master with another ML model called the *Preference* model, then you have seen RLHF in action! 

It is a lot more complicated, but I will take any opportunity to generate Star Trek TNG-themed content 🖖.

Further Reading

Hugging Face has a fantastic blog post explaining RLHF in detail: https://huggingface.co/blog/rlhf

For those more visually inclined, Hugging Face also has a YouTube video about RLHF: https://www.youtube.com/live/2MBJOuVq380?feature=share

Anthropic AI has a paper that goes into a lot of detail on how they use RLHF to train their AI Assistant: https://arxiv.org/abs/2204.05862

Ben Thompson’s “4 Horsemen of the Tech Recession”

In the last month, we have had huge layoffs across technology, yet the “real economy” seems robust. What is going on?

Meta is making 2023 ‘a year of efficiency’. Microsoft, Alphabet, and many other companies have cited economic headwinds as the reason for letting thousands of people go.

However, last week, the US posted the lowest unemployment numbers in 50 years(!) while adding half a million jobs. 

Ben Thompson discusses this in this week’s excellent Stratechery article.

He points to 4 factors that are causing this disconnect:

1️⃣ 😷 The COVID Hangover -> Companies assumed COVID meant a permanent acceleration of eCommerce spending. Customer behavior has reverted (to a certain extent) to pre-pandemic patterns.

2️⃣ 💻 The Hardware Cycle -> Hardware spending is cyclical. After bringing forward spending due to the pandemic, customers are unlikely to buy new hardware for a while.

3️⃣ 📈 Rising interest rates -> The era of free money is over. Investing in loss-making technology companies in anticipation of a future payout is no longer attractive.

4️⃣ 🛑 Apple’s Application Tracking Transparency (ATT) -> ATT has made it difficult to track the effectiveness of advertising spending. This caused enormous problems for companies like Meta, Snap, etc. that rely on advertising.

Book Review: “Artificial Intelligence – A Guide for Thinking Humans” by Melanie Mitchell

Artificial Intelligence – A Guide For Thinking Humans

Introduction

Melanie Mitchell’s book “Artificial Intelligence – A Guide for Thinking Humans” is a primer on AI, its history, its applications, and where the author sees it going. 

Ms. Mitchell is a scientist and AI researcher who takes a refreshingly skeptical view of the capabilities of today’s machine learning systems. “Artificial Intelligence” has a few technical sections but is written for a general audience. I recommend it for those looking to put the recent advances in AI in the context of the field’s history.

Key Points

“Artificial Intelligence” takes us on a tour of AI – from the mid-20th century, when AI research started in earnest, to the present day. She explains, in straightforward prose, how the different approaches to AI work, including deep learning and machine learning-based approaches to Natural Language Processing.

Much of the book covers how modern ML-based approaches to image recognition and natural language processing work “under the hood.” The chapters on AlphaZero and game-playing AI are also well-written. I enjoyed these more technical sections, but those desiring only a broad overview of these systems can skim them.

This book puts advances in neural networks and Deep Learning in the context of historical approaches to AI. The author argues that while machine learning systems are progressing rapidly, their success is still limited to narrow domains. Moreover, AI systems lack common sense and can be easily fooled by adversarial examples. 

Ms. Mitchell’s thesis is that despite advances in machine learning algorithms, the availability of huge amounts of data, and ever-increasing computing power, we remain quite far away from “general purpose Artificial Intelligence.” 

She explains the role that metaphor, analogy, and abstraction play in helping us make sense of the world, and how what seems trivial to us can be impossible for AI models to figure out. She also describes the importance of learning by observing and being present in the environment. While AI can be trained via games and simulation, its lack of embodiment may be a significant hurdle to building a general-purpose intelligence.

The book explores the ethical and societal implications of AI and its impact on the workforce and economy.

What Is Missing?

“Artificial Intelligence” was published in 2019 – a couple of years before the explosion of interest in Deep Learning triggered by ChatGPT and other Large Language Models (LLMs). So this book does not cover the Transformer models and attention mechanisms that make LLMs so effective. However, these models suffer from the same brittleness and sensitivity to adversarial data that Ms. Mitchell describes in her book.

Ms. Mitchell has written a recent paper covering large language models, which can be viewed as an extension of “Artificial Intelligence.”

Conclusion

AI will significantly impact my career and those of my peers. Software Engineering, Product Management, and People Management are all “Knowledge Work.” And this field will see significant disruption as ML and AI-based approaches start showing up. 

It is easy to get carried away with the hype and excitement. Ms. Mitchell, in her book, proves to be a friendly and rational guide to this massive field. While this book may not cover the most recent advances in the field, it still is a great introduction and primer to Artificial Intelligence. Some parts of the book will make you work, but I still strongly recommend it to those looking for a broader understanding of the field.

The Limits of Generative AI

AI is having a moment. The emergence of Generative AI models showcased by ChatGPT, DALL-E, and others has caused much excitement and angst. 

Will the children of ChatGPT take our jobs?

Will code generation tools like GitHub Copilot, built on top of Large Language Models, make software engineers as redundant as telegraph operators?

As we navigate this brave new world of AI, prompt engineering, and breathless hype, it is worth looking at these AI models’ capabilities and how they function. 

Models like the ones ChatGPT uses are trained on massive amounts of data to act as prediction machines. 

I.e., they can predict that “Apple” is more likely than “Astronaut” to complete a sentence starting with: “I ate an…”
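The “prediction machine” idea at toy scale: count which word follows which in a tiny corpus, then predict the most frequent successor. The corpus is made up for illustration; an LLM does vastly more sophisticated next-token prediction at web scale:

```python
from collections import Counter, defaultdict

# A toy next-word predictor built from bigram counts.
corpus = [
    "i ate an apple",
    "i ate an apple pie",
    "i ate an orange",
    "an astronaut went to space",
]

follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict(word):
    """Most likely next word, given what the 'training data' contains."""
    return follows[word].most_common(1)[0][0]

print(predict("an"))  # "apple" beats "orange" and "astronaut" in the counts
```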

The only thing these models know is what is in their training data. 

For example, GitHub Copilot will generate better Python or Java code than Haskell code.

Why? Because there is way less open-source code available in Haskell than in Python. 

If you ask ChatGPT to create the plot of a science fiction film involving AI, it defaults to the most predictable template. 

“Rogue AI is bent on world domination until a group of plucky misfit scientists and tough soldiers stops it.” 

Not quite HAL9000 or Marvin the Paranoid Android. 

Why? Because this is the most common science fiction film plot.

Cats and Hats

Generative AI may generate infinite variations of a cat wearing a hat, but it has yet to be Dr. Seuss.

AI is not going to make knowledge work obsolete. But the focus will shift from knowledge to creativity and problem-solving.