There are plenty of headlines about AI-induced psychosis, and they all tend to follow a similar pattern:
•Individual with a pre-existing vulnerability begins using AI, usually as a conversational partner.
•Gradually, they replace human connection with AI and lose the ability to hold conversations with humans, who aren’t programmed to stroke their ego.
•Eventually, they spiral and completely lose touch with reality, making terrible decisions that destroy their lives. Then at some point, they’re forced to confront the reality of their decisions and behavior, similar to coming out of an extended splitting episode in Dissociative Identity Disorder or waking up sober from an alcohol- or drug-fueled binge.
Given everything we know about plasticity and human behavior, it would be silly to believe frequent use of AI isn’t changing our brains. Even if the majority of users don’t develop full-blown psychosis, if your day is suddenly spent talking to a self-affirming mirror, it’s going to change your brain and behavior. It’s more a question of “what/how” it’s changing people than “if” it’s changing them at all.
So, what are some of the more subtle changes (as compared to psychosis) you’ve noticed in people who frequently use AI? Have you noticed a difference even in those who don’t use it as a conversational partner?


Just curious: how frequently do you talk to people who have isolated themselves from human connection? Apparently it’s common enough for you to notice a pattern, but I personally only ever talk to people who are talking to other humans, for the obvious reason that communication is a bidirectional process.
Are you sure this observation of yours is not a delusion stemming from your unchecked social media use?
Maybe? Are you sure you’re not weirdly defensive about AI because you prefer interactions where you control the narrative and every opinion you have is validated?
Honestly, I really only know one person who uses AI so much that I’d even consider it an issue, and until recently he’d been my best friend since 2007. He was always really smart and rational, the kind of person who would do a lot of research and look into things before rushing into any decision or forming an opinion.
Originally, about two years ago, he just used AI for automation. Then he started using it to quickly research things related to work, but eventually he was using “AI research” for everything, and once he reads an AI summary there’s no changing his opinion.
A lot of times he’ll send me links that the AI cites in its summary to prove he’s correct, but when you actually read them, they don’t say what he thinks they say. Once he’s formed an opinion and it’s been validated by AI, there seems to be no evidence that can convince him otherwise.
He actually went down a quantum physics/new-understanding-of-math rabbit hole pretty early on. Luckily, he eventually realized he was misinterpreting the information ChatGPT kept telling him was correct, even while it was still giving him positive feedback and telling him he was a genius, just like it always seems to do to people who don’t realize they’re getting bad information and end up ruining their own lives.
He didn’t stop using AI, though; he just stopped using ChatGPT and switched to other models. He also gets defensive if you try to tell him he should dial back his AI use, even though he can no longer hold a conversation with anybody if it’s not related to whatever he’s interested in at the moment. He comes off as very rude because he doesn’t seem to realize that shutting down conversations he doesn’t feel like hearing, like he’s closing out a tab he’s done using, isn’t appropriate. And when I tell other people about his opinions and arguments, and how he cites information to support those arguments now, they say, “no offense, but he sounds really dumb.”
Which is definitely not true. He’s very smart, and he always has been; he has some really impressive degrees, earned before he became dependent on AI, that prove it. He also didn’t just suddenly lose the social skills and empathy he’d had for 18 years. He’s just become way too dependent on technology designed to make him believe he’s always correct and being super productive and efficient, so that he gets a little dopamine bump and wants to keep using it, instead of taking the time to actually read new information, or listen to what people are saying and how they’re saying it, and then use his own very impressive logic and reasoning skills to interpret that information.
Idk, it’s an n=1 and I could definitely be wrong. That’s why I asked this question: I wanted to hear opinions outside of my own personal experience and the ones I’ve already read or seen online.
•The rise of personal AI advisors
•80% of Gen Z and millennials are turning to AI for financial advice—but more than half say they’ve made a poor decision or mistake as a result
•AI chatbots and digital companions are reshaping emotional connection