
Thanks for this very enlightening post, as I enjoy all your posts and podcasts. Coming from a more technical background, I offer a short observation:

You shouldn't think of ChatGPT as an *algorithm*, in the sense of a step-by-step computer program where the output is pre-determined by the inputs. That's not quite what's happening at a technical level.

A better analogy is auto-complete, only with the entire corpus of human knowledge as the "sentence" you want completed. Auto-complete "knows" which words are most likely to follow one another, and systems like ChatGPT simply take that to the sentence or paragraph level. The apparent coherence of the generated output is a consequence of its enormous corpus of human-generated text, which lets it suggest follow-on sentences and paragraphs that appear to make sense.
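To make the analogy concrete, here is a minimal toy sketch of my own (nothing to do with how ChatGPT is actually implemented): a word-level bigram "auto-complete" in Python that picks each next word according to how often it followed the previous word in a tiny sample corpus. Real systems use neural networks over far longer contexts and vastly more text, but the generation loop, repeatedly sampling a likely next token, has the same basic shape.

```python
# Toy bigram "auto-complete": sample each next word by how often it
# followed the previous word in a small corpus. Purely illustrative.
import random
from collections import defaultdict, Counter

corpus = (
    "the words relate to one another and the words follow one another "
    "the patterns in the corpus let it suggest words that appear to make sense"
).split()

# Count which words tend to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(prompt_word, length=10):
    """Extend a one-word 'prompt' by repeatedly sampling a likely next word."""
    out = [prompt_word]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break  # no known continuation for this word
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(complete("the"))
```

Run it a few times and you get different, locally plausible strings of words; scale the corpus and the context window up by many orders of magnitude and you get output that starts to look like coherent prose.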

It's not executing a step-by-step algorithm. Rather, it's more like a sieve that shakes a large amount of dirt down to just the grains that match the pattern of the sieve (i.e. your prompt).

In that sense I wonder if ChatGPT is better described by Northrop Frye's Order of Words: there is no connection whatsoever between its output and the Real World, but the words themselves *do* relate to one another. It finds consistent patterns, in the same way that auto-complete output makes sense if you know the likelihood of various words appearing together.

For example, if the Book of Revelation had never been written, but you had the entire Western Canon as your base corpus, a well-trained ChatGPT would generate it if you fed it the other 65 books of the Bible. It's not thinking: it's just tying words together in an order that maintains consistency with the rest of the Western corpus.

What this says about being human, or what it means to truly think, I'll leave to your future essays.
