How ChatGPT Is A Bullshit Artist

Rock Therrien, Bullshit, 2020

ChatGPT’s core function is bullshitting about a variety of subjects. As Harry G. Frankfurt says in his (slim) book On Bullshit:

The essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect (apart from authenticity itself) inferior to the real thing. What is not genuine need not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.

This points to a similar and fundamental aspect of the essential nature of bullshit: although it is produced without concern with the truth, it need not be false. The bullshitter is faking things. But this does not mean that he necessarily gets them wrong.

In that sense, I’m not using bullshit as a pejorative but as a philosophical category, one which I think fits generative pre-trained chatbots to a T. They are true bullshit artists. The true spawn of Silicon Valley.

The Philosophy Of Bullshit

Bullshit is not necessarily false, nor is it necessarily deceptive. As Frankfurt says, “The bullshitter may not deceive us, or even intend to do so, either about the facts or about what he takes the facts to be. What he does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to.”

The fact is that neither ChatGPT itself nor its programmers can know what it is up to. Machine learning models are trained, not programmed, and god knows what they’re ‘actually’ thinking. Meanwhile the corporations funding this are most certainly bullshitting. They purport to be saving the world, but it’s all about making money. Always has been. There is the unmistakable whiff of bullshit around the whole enterprise, and the proof is in the pudding.

The Practice Of Bullshit

I use ChatGPT, but I do so in the knowledge that it is always bullshitting. As Frankfurt says, “She concocts it out of whole cloth; or, if she got it from someone else, she is repeating it quite mindlessly and without any regard for how things really are.” This describes the practice of ChatGPT quite aptly.

ChatGPT has inputs from all over, which it is quite capable of combining and ‘generating’ into a seemingly godlike perspective on the fly. But it’s not god; it’s very much human. Talking to ChatGPT is like dealing with a know-it-all who knows a lot, but not it all. You still have to engage with it critically to spot the difference.

The old computer adage is ‘garbage in, garbage out’ and — having trained ChatGPT on mountains of human bullshit (Reddit, Twitter, Wikipedia) — is it a surprise that machine bullshit comes out? ChatGPT can remix and query this information, but it is not wildly reliable in the first place.

The Art Of Conversation

The ‘problem’ with ChatGPT is really with our expectations. We seem to be expecting an oracle or search engine, when ChatGPT is just making conversation.

Conversation requires a certain amount of bullshit or it cannot grow at all. If we were honest about our own lack of information, we’d say nothing at all. If we were responsible about our listener’s limitations, we wouldn’t lead them on. When the Buddha was asked big cosmological questions, he just didn’t answer, because A) it wasn’t relevant to escaping suffering and B) he knew his audience wouldn’t really get it. This is correct, but it makes for no conversation at all. And ChatGPT is no sage, nor attempting to be one. It is literally a conversational bot. Its job is to make conversation, and the art of conversation is also the subtle art of bullshitting.

We expect ChatGPT to be conversational in style, but computational in substance. Which is an inherent contradiction. A big part of the contradiction is simply language vs. math. Whereas math is built up from a few core principles and gets weird round the edges, language is weird and wobbly all over the place. Nothing has consistent meaning, and it gets even fuzzier once you network two brains in conversation. Just because we can build a language program with math doesn’t make language math.

One telling sign is that ChatGPT cannot do math reliably at all. In my experience, it frequently gets basic math wrong, and I don’t use it for anything that requires accuracy. In the same way, Midjourney is basically illiterate: it cannot spell words or even make reliable letter shapes. I don’t think this is a failure of the programs, though I suppose they’re working on it. Creativity and computation are different systems, and creativity in computation involves significant give and take. I actually think the two goals may not be compatible; there are natural trade-offs between the categories of ‘conversation’ and ‘computation’. We’re expecting ChatGPT to take our imperfect shit and produce perfect manna. Yet the iron law of computer science is, again, garbage in, garbage out. We’re feeding it ourselves and expecting something better out. This isn’t possible.

Bullshit As Art

If you can get over your expectation of talking to god (and not Reddit-Man regurgitated), if you can wade through the bullshit, if you can understand the philosophical limitations, ChatGPT is actually quite amazing. On almost any topic, it will give you quite passable bullshit. This bullshit should broadly please most people because it has digested many people’s opinions, but it’s not necessarily true, whatever that means.

ChatGPT is faking it till it makes it, belying the true motivation behind the whole thing, which is to just pump the Tech Ponzi a bit further and make some money before the climate collapses and takes us a few centuries back quite rapidly. ChatGPT is a product of its environment, trained by people who think they know everything but are actually quite insular. Hence what they have produced — from inputs to upbringing — is that supremely Silicon Valley, internet-addicted creature. A bullshit artist.