Okay, so this is embarrassing. Someone recently asked me: “What would your dog say if they could tell people about you?”
Mine would definitely spill the beans about my laptop conversations. You know, the chats with ChatGPT about marketing strategy while my golden retriever watches from her rug. Judging me. Hard.
But here’s the thing that’s keeping me up at night lately. That judgment? It might soon become… well, actual feedback about us. Hear me out until the end.
Look Who’s Talking
The London School of Economics just dropped £4 million on something that sounds like a fever dream. The Jeremy Coller Centre for Animal Sentience (and I’m not making this up) is basically trying to crack the code of what every living creature is actually saying.
Professor Jonathan Birch is leading this thing. Not just barks and meows, but the emotional subtext underneath. The frustration. The joy. The “why did you leave me alone for three hours” guilt trips.
Think about that for a second. We’re potentially on the brink of understanding what every animal around us is thinking. I mean, really thinking.
Organizations like the Earth Species Project (which sounds like something out of a sci-fi movie, but it’s real) have been developing machine learning systems that decode animal communication by identifying patterns in behavioral ecology research. Imagine trying to learn Mandarin by watching a thousand movies without subtitles, except the movies are whale songs and dolphin clicks.
The CETI project (Cetacean Translation Initiative, and yes, the name is a deliberate nod to SETI, the search for extraterrestrial intelligence) has been using machine learning and natural language processing to study sperm whale communication. Early findings suggest whale codas contain way more structure than we thought.
Maybe they’ve been having sophisticated conversations about us this whole time??
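If you’re curious what “finding structure” even means here, there’s a simple intuition behind it. This is a toy sketch, not any project’s actual method, and the call labels are hypothetical: a sequence with real syntax-like patterns looks statistically different from a random reshuffling of the same calls.

```python
# Toy illustration (NOT CETI's actual method): one way to detect "structure"
# in a call sequence is to compare its statistics against a shuffled baseline.
# Here we use bigram entropy: a sequence with repeating patterns has lower
# entropy over adjacent pairs than a random reordering of the same calls.
import math
import random
from collections import Counter

def bigram_entropy(sequence):
    """Shannon entropy (in bits) of adjacent-pair frequencies."""
    pairs = list(zip(sequence, sequence[1:]))
    counts = Counter(pairs)
    total = len(pairs)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical coded calls: "A", "B", "C" stand in for distinct click types.
structured = ["A", "B", "C"] * 20   # strongly patterned sequence
shuffled = structured[:]
random.seed(42)
random.shuffle(shuffled)            # same calls, order destroyed

# The patterned sequence scores lower entropy than its shuffled twin:
print(bigram_entropy(structured) < bigram_entropy(shuffled))  # True
```

Real pipelines work on messy audio embeddings instead of neat letters, obviously, but the underlying question is the same: does the order of the calls carry information, or is it noise?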
The Business Thing Nobody’s Talking About
Here’s where it gets wild for those of us trying to run companies or, you know, just survive Monday morning meetings.
We spend billions (literally billions) trying to figure out what customers really mean. What employees actually think. Whether that “quick sync” Susan suggested really needs to happen or if it’s code for “I’m panicking about the deadline.” Companies hire consultants who charge more than my mortgage, conduct surveys that nobody fills out honestly, analyze body language like FBI profilers, and run focus groups that feel like psychological experiments.
All to decode the subtext of human communication.
Meanwhile, if AI can crack the communication code of marine mammals (and apparently identify distinct vocal signatures in blue whales, which is both fascinating and slightly terrifying), it’s probably already reading signals in your business that you’re completely missing.
Your team members have tells, right? Micro-expressions during Zoom calls. The way Sandy’s email tone shifts when she’s stressed. How Mike phrases questions in Slack when he’s actually disagreeing but trying to be diplomatic.
AI is getting sophisticated enough to detect these patterns. And if it can understand a dolphin’s complex whistle-based language system, human workplace communication is basically elementary school level.
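To make that concrete, here’s a deliberately tiny sketch of the kind of pattern the Mike-in-Slack example describes. Everything in it is made up for illustration: the marker word list, the example messages, and the scoring function are all hypothetical, and real systems would use trained language models, not keyword counts.

```python
# A toy sketch (word list and messages are invented, not from any real tool)
# of detecting "diplomatic disagreement": messages phrased softly but loaded
# with hedging markers score higher than blunt ones.
HEDGE_MARKERS = {"maybe", "just", "wondering", "perhaps", "sure"}

def hedged_disagreement_score(message: str) -> float:
    """Fraction of words that are hedging/softening markers."""
    words = [w.strip("?.,!").lower() for w in message.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in HEDGE_MARKERS)
    return hits / len(words)

direct = "I disagree with this plan."
diplomatic = "Just wondering, are we maybe sure this plan is right?"

# The diplomatic phrasing carries far more hedging signal:
print(hedged_disagreement_score(diplomatic) > hedged_disagreement_score(direct))  # True
```

A keyword counter is laughably crude next to what modern models can do with tone, timing, and context, which is exactly the point: if even this catches something, imagine what the real systems see.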
The question isn’t whether AI will become better at reading human signals. It’s whether you’ll be ready when it does.
The Rights Thing That’s Actually Happening
So, there’s this deeper current running underneath all this, something I dive into extensively in Chapter 7 of Turning On Machines.
If AI tools can demonstrate that animals possess advanced cognitive capacities and can actively communicate their needs and feelings, these insights could challenge long-standing legal frameworks that separate humans from animals.
We’re not just building better communication tools. We might be accidentally creating a world where every living thing gets a voice.
Several labs have openly stated they’re trying to create conscious AIs. Reports from people with close access suggest primitive consciousness may have already unintentionally emerged. Anthropic hired its first AI welfare researcher and started a “model welfare” program exploring how to assess whether AI models deserve moral consideration.
Here’s the fascinating parallel that’s blowing my mind: researchers are drawing lessons from perceptions of animal consciousness to understand how society might respond to AI consciousness. The same psychological, social, and economic factors that shape how we view animal rights will influence how we treat AI systems.
Think about it like this: if your dog can suddenly articulate complex thoughts about their daily routine, their relationship with you, their observations about your behavior (including those embarrassing GPT conversations), does that fundamentally change their moral status?
The act of communication itself, revealing animals as sentient beings capable of expressing preferences and experiences, may prompt courts and legislatures to reconsider the legal status of animals.
The Conversation Nobody’s Having (But Should Be)
Scientists have already conducted experiments where they played back humpback whale calls to a whale, and the whale responded. Animal behaviorist Con Slobodchikoff predicts human-dog translation systems within five years through his company Zoolingua. This isn’t decades away. It’s happening now.
AI systems are already being developed to monitor pets’ growth, temperature, heart rate, eating habits, activity, sleep patterns, and even urine pH levels. Smart feeding systems that prevent obesity. Visual algorithms that can assess pain levels in horses using machine learning techniques.
The infrastructure for comprehensive animal communication is being built piece by piece, like a puzzle nobody realized we were solving.
The Signal You’re Missing
Your dog would probably tell people you’re a better listener than you think.
They notice when you’re stressed before meetings. They sense when you’re genuinely excited about a project versus when you’re just going through the motions. They pick up on the subtle differences in your voice when you’re talking to your boss versus your team.
Animals are essentially living, breathing sentiment analysis machines. They’ve been doing emotional intelligence assessments on us for thousands of years.
The question is: are you listening to the right signals at work?
As Professor Birch notes, with governments taking increasing interest in AI governance, now is the moment to accelerate efforts to promote ethical AI use in relation to other animals. This includes both limiting harmful uses and supporting beneficial ones—like decoding animal communication to better understand what they want.
Your workplace communication challenges? They pale in comparison to what’s coming.
If AI can decode the emotional subtext of a frustrated cat or an anxious horse, it’s definitely going to transform how we understand human behavior in organizational settings. Maybe it already has.
So what do you think? Are you prepared for a world where every living thing, and potentially every AI system, has a voice? More importantly, are you ready to listen to what they have to say?
Want to learn more about how AI will impact the way we work? Pre-order my upcoming book ‘Turning On Machines’ and get ready for the coming AI revolution. Follow me on LinkedIn or Twitter for regular AI updates.