
Alexa, Please Show Some Emotion: When Machines Fake Empathy Better Than Humans
Are we heading toward a future where chatbots become the better humans? Weizenbaum Fellow Martin Wählisch imagines AI agents that force us to ask whether we were ever that great at human connection to begin with.
March 27, 2025 – by Martin Wählisch
The other day, while trying to report a fraudulent credit card transaction, I found myself in the all-too-familiar embrace of an automated customer service system. You know the drill: press 1 for more options, press 2 to lose your sanity.
I found myself longing for a more intuitive, empathetic chatbot. And they are out there—human-like robot calls are on the rise, blurring the line between cold automation and warm conversation. It struck me that, in our quest to perfect AI, we might be on the brink of teaching machines something deeply human: empathy.
Which, let’s be honest, is something many of us struggle with—judging by the state of global peace and the alarming scarcity of genuine smiles in big cities.
So, are we heading toward a future where chatbots become the better humans? Trained to be helpful, kind, and understanding—solving problems with infinite patience while we, the real humans, rage at Wi-Fi outages?
Sure, there are plenty of doomsday scenarios where AI enslaves us via cryptocurrency incentives, but just for a moment, can we indulge in a more utopian dream? Please?
Sorry, I’m Just a Bot—What’s Your Excuse?
As a German who has lived most of his life abroad, I’ve often been told I’m a bit mechanical—efficient, precise, and, let’s face it, occasionally lacking any urge for small talk when it isn’t strictly necessary. It’s a trait that, at times, makes me wonder where exactly the fine line between humans and machines lies.
After all, if an AI chatbot can simulate empathy, engage in witty banter, and even ask how my day was, does that make it more human than someone who simply gets to the point? Imagine an AI that remembers your last conversation, follows up on your existential crises, and suggests opening a window because it "senses" you need fresh air.
Now, drop that AI into Berlin, and I suspect it might suffer a mental breakdown after experiencing a distinct lack of reciprocal chattiness. But maybe—just maybe—it could be an interesting experiment. Picture an army of AI-powered robots stationed at every supermarket checkout, greeting customers with programmed warmth, slowly but surely nudging even the grumpiest city dwellers toward a more cheerful disposition. Am I slightly terrified by this vision? Absolutely. But also, I must admit, a little intrigued.
So, who’s more human? The hyper-efficient person who skips unnecessary pleasantries, or the chatbot programmed to feign warmth with cheerful, well-timed responses? At what point do we start mistaking a well-trained AI for a truly empathetic being—while simultaneously mistaking certain humans for robots?
Maybe the real future isn’t AI taking over—it’s AI forcing us to ask whether we were ever that great at human connection to begin with.
Congratulations! You’re Now Stuck in an Argument Between Two Chatbots
Or imagine another glorious future—the one that the white-sneaker-wearing, overpriced-designer-T-shirt-sporting visionaries of Silicon Valley keep mumbling about over their coconut water: AI agents.
A world where your personal AI assistant handles all your mundane tasks while you lean back, sip your kombucha, and bask in the illusion of efficiency. Need to dispute a ridiculous charge on your bank statement? Your AI will handle it. Want to argue with Deutsche Bahn customer service? Your AI will argue for you. No more stress, no more hassle—just a sleek digital doppelgänger taking care of the dirty work, with a smile.
I can already see the rise of "robot rights" NGOs campaigning for AI agents to have mandatory downtime to "recalibrate their algorithms," and PhD theses exploring the ethical dilemmas of modern digital servitude. And perhaps, in an ironic twist, we might even start feeling a twinge of guilt for our overworked AI assistants—reminded to do so, after all, by the very machines that were designed to teach us empathy.
But then you also wonder: will my agent just keep coming back to me with questions? Will I be trapped in an infinite game of “agent ping pong,” where two hyper-polite but equally stubborn AIs negotiate terms in perpetuity—while I stand by, helpless, like a middle manager watching two interns debate who should send the follow-up email?
The future might just be AI agents talking to AI agents—negotiating, debating, and escalating their digital bureaucracy—while we wonder if we’re truly saving time or just creating another layer of beautifully useless complexity. Kind of like this GPT-powered email I should never have written, instead of making the simple phone call that would have solved everything in two minutes.
Let the Machines Handle It—We’ll Be Over Here, Scrolling and Pretending to Be Productive
In this digital age, the race to create empathetic machines isn’t just about technological progress—it’s a reflection of our own longing for connection in a world where human interactions can often feel fleeting. The irony, of course, is that the more we refine these digital companions, the more they remind us of what they can never truly replace: the warmth, spontaneity, and beautifully unpredictable nature of human connection.
So while machines may one day perfect the illusion of empathy, they also serve as a mirror, reflecting back the quirks and contradictions that make us human—flawlessly efficient yet delightfully imperfect.
And as AI agents increasingly chat amongst themselves, hashing out the details of our lives, we might start to wonder if we’re really streamlining our world or just inventing new digital distractions. But hey, at least we’ll get a break from the hold music.
About the author: Dr. Martin Wählisch was a visiting fellow at the Weizenbaum Institute and is an Associate Professor for Transformative Technology, Innovation and Global Affairs at the University of Birmingham.
artificial&intelligent? is a series of interviews and articles on the latest applications of generative language models and image generators. Researchers at the Weizenbaum Institute discuss the societal impacts of these tools, add current studies and research findings to the debate, and contextualize widely discussed fears and expectations. In the spirit of Joseph Weizenbaum, the concept of "Artificial Intelligence" is also called into question, unraveling the supposed omnipotence and authority of these systems. The AI pioneer and critic, who developed one of the first chatbots, is the namesake of our Institute.