My Thoughts on ChatGPT – Cal Newport

In recent months, I’ve received quite a few emails from readers expressing concerns about ChatGPT. I remained quiet on this topic, however, as I was writing a big New Yorker piece on this technology and didn’t want to scoop my own work. Earlier today, my article was finally published, so now I’m free to share my thoughts.

If you’ve been following the online discussion about these new tools, you might have noticed that the rhetoric about their impact has been intensifying. What started as bemused wonder about ChatGPT’s clever answers to esoteric questions moved to fears about how it could be used to cheat on tests or eliminate jobs before finally landing on calls, in the pages of the New York Times, for world leaders to “respond to this moment at the level of challenge it presents,” buying us time to “learn to master AI before it masters us.”

The motivating premise of my New Yorker article is the belief that this cycle of increasing concern is being fueled, in part, by a lack of deep understanding of how this latest generation of chatbots actually operates. As I write:

“Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?”

I then spend several thousand words trying to detail the key ideas that explain how the large language models that drive tools like ChatGPT really function. I’m not, of course, going to replicate all of that exposition here, but I do want to briefly summarize two relevant conclusions:

  • ChatGPT is almost certainly not going to take your job. Once you understand how it works, it becomes clear that ChatGPT’s functionality is crudely reducible to the following: it can write grammatically correct text about an arbitrary combination of known subjects in an arbitrary combination of known styles, where “known” means the model encountered that subject or style sufficiently often in its training data. This ability can produce impressive chat transcripts that spread virally on Twitter, but it’s not useful enough to disrupt most existing jobs. The bulk of the writing that knowledge workers actually perform involves bespoke information about their specific organization and field. ChatGPT can write a funny poem about a peanut butter sandwich, but it doesn’t know how to write an effective email to the Dean’s office at my university with a subtle question about our hiring policies.
  • ChatGPT is absolutely not self-aware, conscious, or alive in any reasonable definition of these terms. The large language model that drives ChatGPT is static. Once it’s trained, it does not change; it’s a collection of simply structured (though massive) feed-forward neural networks that do nothing but take in text as input and spit out new words as output. It has no malleable state, no updating sense of self, no incentives, no memory. (The toy sketch after this list makes that statelessness concrete.) It’s possible that we might one day create a self-aware AI (keep an eye on this guy), but if such an intelligence does arise, it will not be in the form of a large language model.
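To make the “static” point concrete, here is a minimal toy sketch in Python. Nothing here is ChatGPT’s actual code: the FROZEN_MODEL table and the function names are invented stand-ins for a real model’s billions of frozen weights. The structural point is what matters: generation is a pure function of the input text, and any appearance of conversational “memory” comes from re-feeding the growing transcript as input, not from the model itself changing.

    import random

    # A toy stand-in for a trained language model: a frozen table mapping
    # the last word of the text to candidate next words. Like a real LLM's
    # weights, this table never changes after "training"; generation only
    # reads from it. (The table and names are illustrative, not anything
    # from ChatGPT.)
    FROZEN_MODEL = {
        "the": ["cat", "model", "sandwich"],
        "cat": ["sat", "slept"],
        "sat": ["down", "quietly"],
    }

    def next_word(text: str) -> str:
        # A pure function of its input: no hidden state is read or written.
        # A feed-forward LLM is the same in this respect -- its output
        # depends only on the tokens it is handed.
        last = text.split()[-1]
        return random.choice(FROZEN_MODEL.get(last, ["."]))

    def generate(prompt: str, n_words: int = 3) -> str:
        # "Memory" is simulated by appending each new word to the text and
        # feeding the whole thing back in. The model underneath stays fixed.
        text = prompt
        for _ in range(n_words):
            text += " " + next_word(text)
        return text

    print(generate("the cat"))  # e.g. "the cat sat down quietly"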

I’m sure that I will have more thoughts to share on AI going forward. In the meantime, I recommend that you check out my article, if you’re able. For now, however, I’ll leave you with some concluding thoughts from my essay.

“It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy,” I wrote. “ChatGPT is amazing, but in the final accounting it’s clear that what’s been unleashed is more automaton than golem.”




