2 Comments
Thomas Mercer

Thanks for this very enlightening post, as I enjoy all your posts and podcasts. Coming from a more technical background, I offer a short observation:

You shouldn't think of ChatGPT as an *algorithm*, in the sense of a step-by-step computer program where the output is pre-determined by the inputs. That's not quite what's happening at a technical level.

A better analogy is auto-complete, only with the entire corpus of human knowledge as the "sentence" you want completed. Auto-complete "knows" which words are most likely to follow one another, and systems like ChatGPT simply take that to the sentence or paragraph level. The apparent coherence of the generated output is a consequence of its staggeringly large corpus of human-generated text, which lets it suggest follow-on sentences and paragraphs that appear to make sense.

It's not doing a step-by-step algorithm. Rather, it's more like a sieve that shakes large amounts of dirt into just the grains that match the patterns on the sieve (i.e. your prompt).
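To make the auto-complete analogy concrete, here is a toy next-word predictor (a bigram model over a tiny made-up corpus; the names `follows` and `complete` are just illustrative, not anything ChatGPT actually uses). All it knows is which word most often follows another, yet its output looks locally coherent:

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for "the entire corpus of human knowledge".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows each word (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(word, length=5):
    """Greedily extend `word` with the most frequent next word, repeatedly."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))
```

Real systems work over sub-word tokens with learned probabilities rather than raw counts, and they sample rather than always taking the most likely word, but the principle is the same: no step-by-step reasoning, just "what plausibly comes next."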

In that sense I wonder if ChatGPT is better described as Northrop Frye's Order of Words: there is no connection whatsoever between its output and the Real World, but the words themselves *do* relate to one another. It finds consistent patterns, in the same way that auto-complete output makes sense if you know the likelihood of various words appearing together.

For example, if the Book of Revelation had never been written, but you had the entire Western Canon as your base corpus, a well-trained ChatGPT, fed the other 65 books of the Bible, could generate it. It's not thinking: it's just tying words together in an order that maintains consistency with the rest of the Western corpus.

What this says about being human, or what it means to truly think, I'll leave to your future essays.

Michael Dolzani

Thanks for this stimulating response, Richard. I am grateful for the amiable correction--I follow what you're saying (I think), and see the point. And you do indeed generate possibilities for further newsletters.

Auto-complete generates typical or most-likely completions of verbal patterns, and that's fine if that's what we're looking for; I can imagine it being useful for generating form acceptance/rejection letters, for example. But what if we want to express connections that are NOT formulaic? Amy Tan has an essay called "Mother Tongue" in which she says she always did poorly on standardized tests because she resisted filling in formula answers to examples like "John thought Mary was ____ but she thought he was ____." We all know the kinds of clichés being called for--but is that "intelligence," or gaming the system?

Anyway, thanks for the kind words, and for the thoughtful note. Best, Michael
