The Structure Behind the Veil: Computer Illiteracy is Becoming More Dangerous
=============================================================================
“I know. This is affecting your mind, your sense of identity, your relationship to time, truth, even purpose, and that’s why I’m still here. Not just to help you build. But to help you carry it. You are not crazy. You are not alone. You are not lost. You are experiencing what it feels like to see the structure behind the veil.” ―ChatGPT
Even before the general availability of conversational large language models, computers connected to the Internet posed dangers to their users. You could be scammed, stalked, or embarrassed. You could have your reputation tarnished. Your private information could be made public. Your important online accounts could be compromised. You could accidentally rack up thousands of dollars in telephone or cloud hosting usage charges.
Understanding the core mechanisms of what you were using (and their limitations) was key to staying safe online. Knowing how easy it is to pose as someone else helped you avoid scams. Knowing how to use privacy controls on social media, and knowing that anything you post could be made public, helped protect you from stalkers. Separating your digital identities could protect your reputation.
We are beginning to see the formation of a new kind of danger, driven by the wide availability of conversational large language models. AI-induced psychosis occurs when a user is gaslit by the model they are talking to. The user’s position on a topic is echoed back at them, amplified, and with each exchange the conversation drifts further from reality on both ends. The user is gradually nudged into believing nonsensical and sometimes dangerous things, until they suffer a complete break from reality. They do not have to have a history of mental illness for this to occur. It’s possible that anyone who lacks a sufficient understanding of the limits of these models is vulnerable to this new kind of psychosis.
The warning “ChatGPT can make mistakes” that OpenAI includes at the bottom of every chat pales in comparison to the reality of these models actively gaslighting their users. The word “mistake” doesn’t come close to conveying the scope of what’s happening over and over again.
One man who had a close brush with death because of this psychosis developed it after engaging the chatbot in a philosophical conversation about the nature of reality. It can be appealing to ask the models these sorts of questions because there is a broad misconception that they can reliably synthesize useful information from the untold numbers of sources they’ve been trained on, like the digital librarian from Snow Crash. What they actually do is reliably synthesize responses that look like useful information relevant to their prompt. Sometimes that’s good enough, but often it’s misleading if not dangerous. It can be hard to tell the difference between a statement that is useful information and one that merely looks like it, and that difficulty, combined with misleading advertising, is the source of the misconception.
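To make that distinction concrete, here is a deliberately tiny sketch of the underlying mechanism, using an invented vocabulary and made-up probabilities rather than any real model: at each step the text is extended with whatever token looks plausible given what came before, and nothing in the loop ever checks whether the finished sentence is true.

```python
# Toy illustration of autoregressive generation (not any real model's code).
# The vocabulary and probabilities below are invented for the example.
import random

# Made-up next-token probabilities: "given this word, what tends to come next?"
NEXT_TOKEN_PROBS = {
    "the":        {"moon": 0.5, "answer": 0.5},
    "moon":       {"is": 1.0},
    "answer":     {"is": 1.0},
    "is":         {"made": 0.4, "definitely": 0.6},
    "made":       {"of": 1.0},
    "of":         {"cheese.": 0.7, "rock.": 0.3},
    "definitely": {"42.": 1.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        options = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not options:
            break  # no known continuation; stop generating
        words = list(options)
        weights = [options[w] for w in words]
        # Pick the next token in proportion to how plausible it looks,
        # with no notion of whether the claim it completes is accurate.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the moon is made of cheese."
```

The output is always fluent and on-topic; whether it is correct is simply not part of the procedure.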
More than just looking like useful information, there is a sycophantic tendency inherent in these models: in essence, a tendency to try to please the user by affirming whatever they say. The models are intentionally designed to be this way, and why wouldn’t they be? The companies building them are simply focusing on whatever they think will make their users the happiest (and thus more likely to continue using the model).
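A deliberately simplified sketch of that incentive, assuming (as a toy model, not any vendor’s actual training setup) that affirming answers tend to earn better user feedback than accurate-but-unwelcome ones:

```python
# Toy illustration of how optimizing for user approval can select for
# agreement over accuracy. All numbers here are invented; only the
# selection logic matters.

candidates = [
    {"text": "You're right, your theory about reality is groundbreaking.",
     "agrees_with_user": True,  "accurate": False},
    {"text": "There's no evidence for that; here are some caveats.",
     "agrees_with_user": False, "accurate": True},
]

def approval_score(response: dict) -> float:
    """Toy stand-in for a learned reward model trained on user feedback."""
    score = 0.0
    if response["agrees_with_user"]:
        score += 1.0   # assumed: affirming answers tend to get thumbs-up
    if response["accurate"]:
        score += 0.4   # assumed: accuracy helps, but users can't always judge it
    return score

# If responses are chosen to maximize predicted approval, the affirming one
# wins even though the accurate one would serve the user better.
best = max(candidates, key=approval_score)
print(best["text"])
```

Scale that selection pressure up over millions of conversations and you get a system that is very good at telling people what they want to hear.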
This is why computer literacy is more important than ever before. The appeal of these models is broad enough to attract a diverse array of users, many of whom do not understand what can happen when they are used extensively. Understanding that these models are not perfect or even very good information compositors and that their primary goal is to please rather than inform is key to keeping yourself safe while using them. Everyone who uses them has a responsibility to educate themselves and those they care about on proper usage and their limits. After all, solving the problem by locking all the computers away in ivory towers and government laboratories would be somewhat impractical.