Why AI agents need to learn to read the room
By Josh Clark
Published Mar 14, 2026
Researcher Genna Bridgeman shared practical findings on how AI interactions are shaped by the social expectations of each communication channel.
Bridgeman is a product researcher at Intercom, the company behind the Fin customer service agent. Fin is remarkably effective at managing routine support tasks, and it does so across live phone conversations, chat, email, and WhatsApp.
Each of those channels has its own etiquette, of course. The ways—and even the reasons—people use those channels create expectations for how information will be delivered. Bridgeman’s research found that when AI didn’t get the etiquette right, the result undermined trust as much as any human faux pas might:
> When interactions felt wrong, users didn’t blame the answer. They questioned the system’s understanding. And once that doubt set in, every subsequent response was judged more harshly.
The core takeaways:
- In chat: Brevity, clarity, and structure are more important than completeness.
- In email: The absence of a formal greeting and a thorough (even dense) answer can seem dismissive or incomplete.
- On the phone: If the agent talks like a bot, users start talking like a bot too, simplifying their language and avoiding nuance, which makes the system less effective.
- In WhatsApp: Users expect speed and continuity more than in traditional chat, with little patience for re-establishing context even in new sessions.
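Findings like these translate naturally into per-channel response policies. Here's a minimal sketch of the idea; the names (`ChannelPolicy`, `shape_reply`) and the specific numbers are hypothetical illustrations of the etiquette above, not Intercom's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ChannelPolicy:
    greeting: str       # formal opener, if the channel expects one (email does)
    max_sentences: int  # brevity expectation; chat and WhatsApp want short replies
    keep_context: bool  # whether to carry prior context across sessions

# Hypothetical policies mirroring the research takeaways above
POLICIES = {
    "chat": ChannelPolicy(greeting="", max_sentences=2, keep_context=False),
    "email": ChannelPolicy(greeting="Hi {name},", max_sentences=8, keep_context=False),
    "phone": ChannelPolicy(greeting="", max_sentences=3, keep_context=True),
    "whatsapp": ChannelPolicy(greeting="", max_sentences=2, keep_context=True),
}

def shape_reply(channel: str, name: str, sentences: list[str]) -> str:
    """Trim and frame a drafted answer to fit the target channel's etiquette."""
    policy = POLICIES[channel]
    body = " ".join(sentences[: policy.max_sentences])
    if policy.greeting:
        return policy.greeting.format(name=name) + "\n\n" + body
    return body
```

The point isn't the specific thresholds; it's that channel etiquette becomes an explicit, inspectable layer rather than something left to chance in the model's output.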