A great deal of excitement has been stirred up by the release in November of a new form of artificial intelligence, or AI. In addition to being an interesting phenomenon in its own right, ChatGPT, as it is called, has revived the debate over what we really mean by “intelligence.” In the past several decades, “artificial intelligence” has basically meant computational ability: the ability to analyze quantitative patterns and use the results to perform various operations, from calculating prime numbers to regulating the operations of my car to winning games against chess masters. But, as its name implies, what is newsworthy about ChatGPT is its ability to use language, and to use it with startling facility. You can have conversations with it, conversations that might not pass the famous Turing Test, which holds that a machine can be considered genuinely intelligent if its responses can fool someone into believing it is a human being, but which come closer than ever before.

Perhaps even more startlingly, it can generate research-paper-style essays that appear remarkably authentic. I have not experimented with ChatGPT myself, but I have browsed discussion boards of those who have, and some are already predicting “the death of the college essay.” One commenter posted an example, a comparison of the theories of two economists, that was not only coherent but written in fluent prose that would be beyond the capability of many undergraduates. It was impressive, and a bit unnerving. Just as I was writing this newsletter, I received an email from IT at Baldwin Wallace University informing faculty that the anti-plagiarism program on our course websites will now include AI-detecting software of some sort.
Thanks for this very enlightening post; I enjoy all your posts and podcasts. Coming from a more technical background, I offer a short observation:
You shouldn't think of ChatGPT as an *algorithm*, in the sense of a step-by-step computer program where the output is pre-determined by the inputs. That's not quite what's happening at a technical level.
A better analogy is auto-complete, only with the entire corpus of human knowledge as the "sentence" you want completed. Auto-complete "knows" the words that are most likely to follow one another, and systems like ChatGPT simply take that to the sentence or paragraph level. The apparent coherence of the generated output is a consequence of the disgustingly large corpus of human-generated text behind it, which lets it suggest follow-on sentences and paragraphs that appear to make sense.
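To make the analogy concrete, here is a toy sketch in Python. It is purely illustrative (the one-sentence corpus, the bigram counting, and the greedy word choice are my simplifications, not how ChatGPT is actually built): a miniature "auto-complete" that only knows how often one word follows another in its training text, and extends a prompt by always picking the most likely next word.

```python
from collections import Counter, defaultdict

# Toy training text (a stand-in for the enormous corpus a real model sees).
corpus = (
    "in the beginning was the word and the word was with god "
    "and the word was god"
).split()

# Count how often each word is followed by each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def complete(prompt_word, length=5):
    """Greedily extend a one-word prompt by repeatedly choosing the
    statistically most likely next word. Real systems sample from a
    learned probability distribution over long contexts; this is the
    same 'what usually comes next?' idea in miniature."""
    words = [prompt_word]
    for _ in range(length):
        followers = next_word_counts[words[-1]]
        if not followers:  # nothing ever followed this word in the corpus
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))  # -> "the word was the word was"
```

Scale the corpus up from one sentence to a large slice of the internet, and replace the simple counts with a neural network that conditions on long stretches of context, and you have the general family of "predict the next word" systems that ChatGPT belongs to.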
It's not executing a step-by-step algorithm. Rather, it's more like a sieve that shakes a large amount of dirt down to just the grains that match the pattern of the sieve (i.e., your prompt).
In that sense I wonder if ChatGPT is better described by Northrop Frye's Order of Words: there is no connection whatsoever between its output and the Real World, but the words themselves *do* relate to one another. It finds consistent patterns, in the same way that auto-complete output makes sense if you know the likelihood of various words appearing together.
For example, if the Book of Revelation had never been written, but you had the entire Western Canon as your base corpus, a well-built ChatGPT, fed the other 65 books of the Bible, would generate it. It's not thinking: it's just tying words together in an order that maintains consistency with the rest of the Western corpus.
What this says about being human, or what it means to truly think, I'll leave to your future essays.