How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use
By Josh Clark
Published Apr 21, 2025
A study by MIT Media Lab finds that heavy use of chatbots travels with loneliness, emotional dependence, and other negative social impacts.
Overall, higher daily usage—across all modalities and conversation types—correlated with higher loneliness, dependence, and problematic use, and lower socialization. Exploratory analyses revealed that those with stronger emotional attachment tendencies and higher trust in the AI chatbot tended to experience greater loneliness and emotional dependence, respectively.
Artificial personality has always been the third rail of interaction design—from Clippy-style annoyance to the damaging attachments fostered by AI companions. Thing is, people tend to assign personality to just about anything—and once something starts talking, inferring personality and even emotion becomes nearly unavoidable. The more human something behaves, the more human our responses to it:
These findings underscore the complex interplay between chatbot design choices (e.g., voice expressiveness) and user behaviors (e.g., conversation content, usage frequency). We highlight the need for further research on whether chatbots’ ability to manage emotional content without fostering dependence or replacing human relationships benefits overall well-being.
Go carefully. Don’t assume that your AI-powered interface must be a chat interface. There are other ways for interfaces to have personality and presence without making them pretend to be human. (See our Sentient Scenes demo that changes style, mood, and behavior on demand.)
And if your interface does talk, be cautious and intentional about the emotional effect that choice may have on people—especially the most vulnerable.