Listening to Elephants with AI: Hearing Without Controlling
Across the savannas and forests where elephants live, much of their social world unfolds beyond the limits of human hearing. Elephants communicate using rumbles so low in frequency that much of their energy falls below the range of unaided human hearing; these calls can travel several kilometers, carrying information about identity, emotion, movement, and social bonds. For decades, researchers have worked to understand these signals through patient observation and long-term fieldwork efforts such as Cornell’s Elephant Listening Project. Now, artificial intelligence is extending that work, not by replacing human listening but by helping it scale.
AI in Bioacoustics
Bioacoustics, the study of how animals produce and use sound, has always been constrained by sheer volume: field recorders capture far more elephant sound than any team of humans can analyze by hand. Dense forests, overlapping calls, and months of continuous recording have historically limited real-time interpretation. Today, machine-learning tools can detect elephant vocalizations within vast soundscapes, separate overlapping callers, and link calls to specific social contexts that would be difficult to track by ear alone. These systems build on decades of groundwork by elephant researchers who first established how sound relates to behavior, social relationships, and landscape use.
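To make the detection step concrete, here is a minimal sketch of the kind of first-pass filter such pipelines often start from: a band-limited energy detector that flags stretches of a recording with sustained energy in the infrasonic band where elephant rumbles concentrate. The 10–40 Hz band, window lengths, and threshold below are illustrative assumptions, not parameters from any published system.

```python
import numpy as np
from scipy.signal import spectrogram

def detect_rumble_candidates(audio, sr, band=(10.0, 40.0),
                             win_s=4.0, hop_s=1.0, threshold_db=12.0):
    """Flag time segments with sustained energy in the infrasonic band.

    audio: 1-D waveform array; sr: sample rate in Hz. The band edges,
    window lengths, and threshold are illustrative defaults, not
    parameters from any published elephant detector.
    Returns a list of (start_s, end_s) candidate segments.
    """
    nperseg = int(win_s * sr)
    noverlap = nperseg - int(hop_s * sr)
    freqs, times, sxx = spectrogram(audio, fs=sr,
                                    nperseg=nperseg, noverlap=noverlap)

    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_power = sxx[in_band].mean(axis=0)       # mean in-band power per window
    noise_floor = np.median(band_power)          # crude background estimate
    power_db = 10 * np.log10((band_power + 1e-12) / (noise_floor + 1e-12))

    hot = power_db > threshold_db                # windows above threshold
    segments, start = [], None
    for t, flag in zip(times, hot):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            segments.append((start, t))
            start = None
    if start is not None:
        segments.append((start, times[-1]))
    return segments
```

A production system would replace the fixed threshold with a trained classifier, but the shape of the problem is the same: turning weeks of audio into a short list of moments worth a human analyst’s attention.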
One of the most influential figures in this field is Dr. Angela Stöger, based at the University of Veterinary Medicine Vienna. For more than 20 years, Stöger has recorded and analyzed African savanna elephant vocalizations, building one of the world’s most comprehensive, behaviorally annotated sound archives. Her work combines anatomical knowledge of elephant hearing and sound production with detailed field observation, creating the biological context AI models require to function meaningfully.
Stöger’s ongoing “Decoding Elephant Communication with AI” project explicitly integrates traditional ethology, the study of animal behavior, with machine learning to identify which acoustic features carry social information and which do not. Rather than attempting to “translate” elephant language in a literal sense, the work asks grounded questions: who is calling, in what situation, and how do other elephants respond?
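One way such a question can be operationalized, sketched below with hypothetical feature names and context labels that are not taken from Stöger’s project, is to train a classifier to predict behavioral context from acoustic measurements and then ask which features the model actually relied on.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical per-call acoustic measurements; the feature names are
# illustrative, not drawn from any published dataset.
FEATURES = ["fundamental_hz", "duration_s", "bandwidth_hz",
            "formant1_hz", "amplitude_mod_rate"]
# y would hold assumed context labels such as "greeting", "contact", "alarm".

def rank_informative_features(X, y, seed=0):
    """Fit a context classifier, then score each acoustic feature by how
    much randomly shuffling it degrades held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    model = RandomForestClassifier(n_estimators=300, random_state=seed)
    model.fit(X_tr, y_tr)
    result = permutation_importance(model, X_te, y_te,
                                    n_repeats=20, random_state=seed)
    order = np.argsort(result.importances_mean)[::-1]
    return [(FEATURES[i], float(result.importances_mean[i])) for i in order]
```

Features whose shuffling barely changes held-out accuracy become candidates for carrying no social information in that context, which is exactly the kind of negative result the ethological framing treats as informative.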
Breakthroughs in Elephant “Names”
This approach has already yielded remarkable insights. In a widely reported recent study of African elephants in Kenya, researchers used machine learning to analyze hundreds of rumbles and found that elephants appear to address one another using individualized, name-like calls. Playback experiments supported this interpretation: elephants responded more quickly and intensely when hearing calls directed specifically at them, suggesting these vocal labels are meaningful rather than generic.
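The playback logic lends itself to a simple paired comparison, sketched below with hypothetical response scores rather than the study’s actual measures: for each subject, contrast its reaction to a call originally addressed to it with its reaction to a call addressed to a different elephant.

```python
import numpy as np

def paired_permutation_test(own, other, n_perm=10_000, seed=0):
    """Test whether responses to 'own-name' playbacks are stronger.

    own, other: per-subject response scores (e.g., approach speed) to
    calls addressed to that subject vs. to a different elephant. The
    inputs here are hypothetical; the real study's measures differ.
    Returns the mean paired difference and a one-sided p-value.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(own, float) - np.asarray(other, float)
    observed = diffs.mean()
    count = 0
    for _ in range(n_perm):
        # Randomly flip each pair's sign: the null of "no addressee effect".
        signs = rng.choice([-1.0, 1.0], size=diffs.size)
        if (diffs * signs).mean() >= observed:
            count += 1
    return observed, count / n_perm
```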
Research on elephant populations also reveals how human violence reshapes who elephants are. In places heavily impacted by ivory poaching, scientists have documented rapid increases in tuskless females and skewed offspring sex ratios, with tuskless mothers producing about two-thirds female young — a pattern linked to intense selection during conflict and poaching. These changes stem from decades in which tusked elephants were disproportionately killed, altering both genetic traits and the fabric of family groups. Even when we focus on sound, we can’t ignore how human pressure reshapes elephants’ bodies, social structures, and the communication systems that arise within them.
Acoustic Monitoring in Forests
In forested regions, where elephants are nearly impossible to track visually, acoustic monitoring has become a critical conservation tool. The Elephant Listening Project, based at Cornell University and working with partners such as the Wildlife Conservation Society, deploys networks of audio sensors across Central African forests to monitor elephant presence, activity, and hunting pressure. AI systems scan continuous recordings to detect rumbles, estimate where and when elephant groups are present, and pick out human-generated sounds such as gunshots and logging noise from the surrounding soundscape.
This form of listening allows conservation teams to protect elephants without collars, drones, or constant human pursuit — an important ethical shift toward less intrusive monitoring. By mapping when and where gunshots occur, acoustic networks can guide patrols to the most at-risk areas and evaluate whether anti-poaching efforts are working over time.
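A much-simplified version of that mapping step might look like the sketch below, in which geotagged detections are binned into grid cells and ranked so patrol effort can be directed where recent gunshot activity is densest. The cell size, recency weighting, and data layout are all assumptions made for illustration.

```python
from collections import Counter
from dataclasses import dataclass
import math

@dataclass
class Detection:
    lat: float
    lon: float
    days_ago: float   # time since the detection
    kind: str         # e.g., "gunshot" or "rumble"

def patrol_priorities(detections, cell_deg=0.05, half_life_days=30.0):
    """Rank grid cells by recency-weighted gunshot detections.

    cell_deg and half_life_days are illustrative choices, not values
    from any deployed acoustic-monitoring network.
    """
    scores = Counter()
    for d in detections:
        if d.kind != "gunshot":
            continue
        cell = (round(d.lat / cell_deg), round(d.lon / cell_deg))
        # Exponential decay: recent shots count more than old ones.
        scores[cell] += math.exp(-math.log(2) * d.days_ago / half_life_days)
    return scores.most_common()   # highest-priority cells first
```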
Cross-Species AI Efforts
Alongside species-specific projects, broader efforts are underway to build large, cross-species acoustic “foundation models”: general-purpose AI systems trained on sounds from many animals that can later be adapted to particular species and questions. Organizations like Earth Species Project (ESP) are developing open-source models trained on vocalizations from thousands of species, including elephants, whales, birds, and bats. These models learn general acoustic structure that can be fine-tuned to specific contexts, accelerating research while avoiding a future in which proprietary systems lock up animal data and concentrate control in a few hands.
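In practice, adapting such a model often means freezing a pretrained encoder and training a small task-specific head on a modest labeled set. The sketch below assumes a hypothetical encoder object that maps batches of waveforms to fixed-size embeddings; no specific ESP model or API is implied.

```python
import torch
import torch.nn as nn

class CallClassifier(nn.Module):
    """Small trainable head on top of a frozen, pretrained audio encoder.

    `encoder` is a hypothetical stand-in for any foundation model that
    maps a batch of waveforms to (batch, embed_dim) embeddings; no
    specific Earth Species Project model or API is implied.
    """
    def __init__(self, encoder, embed_dim, n_classes):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # keep general acoustic knowledge
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),       # e.g., assumed call-type labels
        )

    def forward(self, waveforms):
        with torch.no_grad():
            z = self.encoder(waveforms)      # (batch, embed_dim) embeddings
        return self.head(z)
```

Freezing the encoder preserves the general acoustic structure learned across species, while a comparatively small set of labeled elephant calls shapes only the final decision layer.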
ESP’s work emphasizes that AI outputs should be treated as hypotheses rather than translations — tools that help researchers ask better questions, not claim mastery over nonhuman communication. This perspective echoes broader conversations in whale-communication work, which stress that the real frontier is learning how to listen without assuming animals’ worlds must resemble our own.
Insights to Action
When applied carefully, AI-assisted listening can support conservation in practical, grounded ways. Real-time or near-real-time detection can alert rangers when elephants move into high-risk areas near farms, roads, or known poaching routes. Long-term acoustic data can reveal migration patterns, seasonal movements, social disruptions, and responses to human infrastructure such as roads, logging, or mining. Over time, these insights can inform corridor design, land-use planning, and conflict-prevention strategies that reflect elephants’ actual needs rather than human assumptions.
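As a small illustration of the alerting step, the sketch below checks whether a detection places elephants inside any predefined high-risk zone. The zone list, coordinates, and circular-radius simplification are hypothetical.

```python
import math

# Hypothetical high-risk zones: (name, center_lat, center_lon, radius_km).
HIGH_RISK_ZONES = [
    ("farm_boundary_east", -1.402, 35.010, 2.0),
    ("road_crossing_north", -1.350, 35.080, 1.5),
]

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def alerts_for_detection(lat, lon):
    """Return the names of risk zones a detected group has entered."""
    return [name for name, zlat, zlon, radius in HIGH_RISK_ZONES
            if km_between(lat, lon, zlat, zlon) <= radius]
```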
At the same time, the power of these tools raises serious ethical concerns. Acoustic data could be misused to locate elephants for harm or to expand extractive tourism and surveillance. Overconfident claims about “decoding” animal language risk flattening complex social worlds into simplistic stories that serve human interests. Recording wild animals also raises questions about intrusion, consent, and who gets to decide how this information is used — questions that conservation science and AI governance have only begun to confront.
Earth Species Project’s global survey on AI and animal communication suggests that many people support careful, noninvasive use of these technologies while expressing deep concern about misuse, misinterpretation, and exploitation. Large majorities favor strict rules and oversight, particularly for commercial or military applications, and prioritize animal welfare and environmental protection over profit. Safeguards matter: noninvasive methods, limited and well-governed access to sensitive data, community-level participation in decision-making, and accountability to those human communities who live alongside elephants are essential if these technologies are to do more good than harm.
Used responsibly, AI-assisted research can strengthen protections for elephants, reduce human–elephant conflict, and help convey the depth of elephants’ social lives to a public that has too often seen them only as spectacle or symbol. But listening must remain paired with restraint. The goal is not simply to extract meaning from elephants, but to make space for it — and to act differently once we begin to understand what they are already telling us.
As AI brings elephants’ voices into clearer focus, the question is no longer whether they are communicating something rich and complex. The question is whether humans are willing to listen without trying to control the conversation.