AI is having a moment. The emergence of Generative AI models showcased by ChatGPT, DALL-E, and others has caused much excitement and angst.
Will chatbots like ChatGPT take our jobs?
Will code generation tools like GitHub Copilot, built on top of Large Language Models, make software engineers as redundant as telegraph operators?
As we navigate this brave new world of AI, prompt engineering, and breathless hype, it is worth looking at these AI models’ capabilities and how they function.
Models like the ones ChatGPT uses are trained on massive amounts of data to act as prediction machines.
That is, they can predict that “apple” is far more likely than “astronaut” to complete a sentence that starts “I ate an…”.
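To make the “prediction machine” idea concrete, here is a minimal sketch. It assumes the open-source Hugging Face `transformers` library and the small GPT-2 model as an illustrative stand-in for the far larger models behind ChatGPT (neither is specified in this article). It asks the model which word is more likely to follow the prompt:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# GPT-2 is a hypothetical stand-in here; ChatGPT's models work the same
# way in principle but are vastly larger.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I ate an"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    # Take the model's scores for the token that would come next.
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Compare two candidate next words. The leading space matters: GPT-2's
# tokenizer treats " apple" and "apple" as different tokens. We use the
# probability of the first sub-token as a proxy for the whole word.
for word in (" apple", " astronaut"):
    token_id = tokenizer.encode(word)[0]
    print(f"P({word.strip()!r} | {prompt!r}) = {probs[token_id].item():.6f}")
```

Run it and “apple” comes out orders of magnitude more probable than “astronaut”, purely because that is the pattern the training data contains.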
The only thing these models know is what is in their training data.
For example, GitHub Copilot will generate better Python or Java code than Haskell.
Why? Because there is far less open-source Haskell code for it to learn from than Python.
If you ask ChatGPT to create the plot of a science fiction film involving AI, it defaults to the most predictable template.
“Rogue AI is bent on world domination until a group of plucky misfit scientists and tough soldiers stops it.”
Not quite HAL 9000 or Marvin the Paranoid Android.
Why? Because this is the most common science fiction film plot.
Generative AI can generate infinite variations of a cat wearing a hat, but it has yet to be Dr. Seuss.
AI is not going to make knowledge work obsolete. But the focus will shift from knowledge to creativity and problem-solving.