This was a nice and understandable introduction to ChatGPT by the noted scientist Stephen Wolfram, who is also the creator of the Mathematica software. Given how much mathematics underpins the workings of ChatGPT, Wolfram does a good job of not overloading you with the details. He uses simplified models and lots of diagrams to convey concepts. The “chapters”, too, are short and concise rather than taking on too much in one shot. At the same time, it doesn’t feel so watered down that you begin to wonder whether you are learning anything at all. Overall, high-school-level mathematical knowledge should be sufficient to understand this book, as long as you are willing to pause at times to re-read some sections carefully and look up any concept you cannot readily recall. The book is also available to read online for free.
There are a few philosophical musings at the end, like whether the capabilities of ChatGPT mean that language and writing essays are “computationally” easier than we believed. But the bulk of the book is about the workings of ChatGPT itself and doesn’t get into any larger questions about Artificial Intelligence. Another thing I liked about this book is that in many places he admits that we just don’t know, in any deep theoretical sense, why certain aspects of ChatGPT work. This extends to some aspects of the training of ChatGPT as well. He repeatedly labels these bits of arbitrary wisdom as “Neural Net Lore”, which I thought was so appropriate! These kinds of tidbits can only come from someone well versed in the field, and they might make this book stand out from other introductions to the topic.
I found the first few chapters to be the best, where Wolfram clarifies what ChatGPT is (and, by implication, what it is not). As he repeatedly marvels, it’s amazing how a piece of software that only tries to generate the most likely next word (or ‘token’) can accomplish such a wide range of tasks so well. The examples he takes up here, on how many words the English language has, how much text there is out there on the Web, and how probabilities over these quantities behave, are very illuminating. Since Wolfram is a trained physicist, he is also able to articulate well what a “model” is. Elsewhere in the book, he points out how our understanding of ChatGPT is nowhere close to our understanding of physical laws. Also key here is the concept of “embeddings”, without which the input text wouldn’t be expressible as numbers in the first place! Overall, I came away awestruck that when ChatGPT is generating output, it never repeatedly scans (what programmers call “looping” over) any piece of its internal data, unlike almost all programs!
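To make the “most likely next token” idea concrete, here is a toy sketch of my own (not from the book). The table of probabilities and the `generate` function are entirely made up for illustration; a real model like ChatGPT computes these probabilities on the fly from billions of learned weights rather than looking them up:

```python
import random

# Toy next-token "model": the probabilities below are invented purely
# for illustration. A real LLM computes such a distribution from its
# learned weights for every possible next token.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def generate(prompt, max_new_tokens=4):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        context = tuple(tokens[-2:])        # look only at the last two tokens
        probs = next_token_probs.get(context)
        if probs is None:                   # no known continuation: stop
            break
        choices, weights = zip(*probs.items())
        # Sample according to the probabilities rather than always taking
        # the single most likely token (the "temperature" idea, crudely).
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("the cat"))   # e.g. "the cat sat on the mat"
```

That single loop of “pick a next token, append it, repeat” really is the whole generation procedure; everything interesting lives inside how the probabilities are computed.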
The relatively hard parts of the book are in the middle, where he tries to explain what a “neural network” is, using the common example of getting a computer to recognize handwritten digits (0-9). For someone encountering the concept for the first time, the idea of weights and connections, and how “training” sets up those weights, may not be very clear (see the sketch below). You might benefit from watching animation-based explanations, like this one on YouTube: “But what is a neural network?”. The image-recognition task felt like an unexpected detour from the text-based problems he had been describing thus far, and it didn’t tie back cleanly to the rest of the book. Then there are sections dealing with the inner workings of ChatGPT, which I thought were the weakest; the concepts of “attention” and “attention heads” weren’t clear to me from his explanation. Instead of the image-recognition task, he could have shown, with an example, how the embedding for each word (i.e., the list of numbers associated with it) is arrived at from a corpus of text.
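For what it’s worth, the core mechanic of “training” fits in a few lines. Below is a minimal sketch of my own (again, not from the book): gradient descent on a single weight of a single sigmoid neuron. Real networks apply this same nudge-the-weights loop to millions of weights at once, across many layers, using backpropagation:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# One neuron, one weight: we want input x = 1.0 to produce target 0.0.
# The starting weight and learning rate are arbitrary placeholder values.
w, x, target, lr = 0.8, 1.0, 0.0, 0.5

for step in range(50):
    y = sigmoid(w * x)                          # forward pass
    loss = (y - target) ** 2                    # squared error
    grad = 2 * (y - target) * y * (1 - y) * x   # d(loss)/dw by the chain rule
    w -= lr * grad                              # nudge the weight "downhill"

print(w, sigmoid(w * x))   # the weight has moved so the output is near 0
```

The digit-recognition network in the book is this idea scaled up: pixel values as inputs, layers of such neurons, and the same weight-nudging loop run over many labelled example images.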
Towards the end, he looks for potential patterns in the inner workings of ChatGPT. Of course, there isn’t much conclusive evidence to show here, but it’s like a sneak peek into what scientists might be trying to figure out. Or, to end with a quote from the book itself: “Perhaps we’re just looking at the wrong variables (or wrong coordinate system) and if only we looked at the right one, we’d immediately see that ChatGPT is doing something “mathematical-physics-simple” like following geodesics. But as of now, we’re not ready to empirically decode from its internal behavior what ChatGPT has “discovered” about how human language is put together.”