Launch of Forest Listeners in partnership with Google Arts & Culture and Google DeepMind


We’re excited to introduce Forest Listeners, an AI-powered citizen-science experiment developed in partnership with Google Arts & Culture and Google DeepMind. This initiative invites the public to help address one of the most persistent challenges in biodiversity science: the lack of high-quality, species-verified training data for bioacoustic models.

Through interactive sound-tagging, people worldwide can now contribute directly to the fine-tuning of Perch, an audio classification model from Google DeepMind designed to detect species automatically across large-scale acoustic datasets. For the scientific community, this represents a meaningful opportunity to accelerate how we assess biodiversity, track ecosystem change, and measure restoration outcomes.

Why This Matters for Biodiversity Science

Monitoring biodiversity at scale increasingly relies on passive acoustic monitoring (PAM). Yet, while PAM devices can record continuously across large landscapes, the bottleneck lies in analysis:

  • Millions of hours of audio go unprocessed,

  • Training datasets for many taxa remain sparse,

  • Regional sound libraries lack representation, particularly for species in highly diverse biomes like the Amazon and Atlantic Forest.

Forest Listeners is designed to help address these gaps. By distributing the task of species-call verification to a global public audience, we can substantially increase the volume and diversity of labelled audio needed to improve model accuracy.

How Forest Listeners Works

Users enter a virtual 3D rainforest environment and listen to short audio clips collected from PAM devices deployed across Brazilian ecosystems. They are asked a simple question: “Do you hear this species?” and respond yes or no.

Behind this simplicity is a scientifically valuable mechanism:

  • Each response acts as a verification signal for model training,

  • Aggregated across thousands of participants, the system generates large, consensus-labeled datasets,

  • These datasets are then used to fine-tune Perch, improving its capacity to detect species automatically at scale.
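The aggregation step above can be sketched in a few lines. The snippet below is an illustrative example, not Forest Listeners' actual pipeline: it assumes hypothetical inputs of `(clip_id, species, vote)` tuples and turns them into consensus labels by majority agreement, keeping only clip–species pairs with enough votes and a strong enough majority. The names, vote minimum, and agreement threshold are all assumptions for illustration.

```python
from collections import defaultdict

def consensus_labels(responses, min_votes=5, threshold=0.8):
    """Aggregate yes/no votes into consensus labels per (clip, species) pair.

    responses: iterable of (clip_id, species, vote) tuples, vote is True/False.
    Returns {(clip_id, species): bool} for pairs whose agreement meets the
    threshold; pairs with too few votes or too much disagreement are dropped.
    """
    tallies = defaultdict(lambda: [0, 0])  # key -> [yes_votes, total_votes]
    for clip_id, species, vote in responses:
        key = (clip_id, species)
        tallies[key][1] += 1
        if vote:
            tallies[key][0] += 1

    labels = {}
    for key, (yes, total) in tallies.items():
        if total < min_votes:
            continue  # not enough votes yet to trust a label
        frac = yes / total
        if frac >= threshold:
            labels[key] = True        # consensus: species is present
        elif frac <= 1 - threshold:
            labels[key] = False       # consensus: species is absent
        # otherwise: participants disagree too much; leave unlabeled
    return labels
```

Only the confidently labeled pairs would then feed into fine-tuning, which is why aggregating across thousands of participants matters: ambiguous clips are filtered out rather than injecting noisy labels into training.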

WildMon’s role includes curating acoustic datasets, advising on ecological relevance, and ensuring that the data generated feeds directly into ongoing biodiversity monitoring and restoration assessment efforts.

A Dataset of Over 1.2 Million Recordings

The current experiment uses more than 1.2 million recordings from Atlantic and Amazon rainforest monitoring sites, including restoration plots, protected areas, and degraded landscapes under recovery.

For biodiversity scientists, this influx of verified annotations improves species detection in acoustically complex environments, strengthens representation of intra-specific and community-level variation, and enhances our ability to interpret soundscapes as indicators of ecological change. As Perch becomes more accurate, it can better detect species turnover, the return of key taxa, signatures of degradation or recovery, and temporal shifts linked to climate or land use.

To Conclude

Forest Listeners’ crowd-supported training data will enable more robust, automated biodiversity assessment systems, which can then support restoration monitoring, baseline establishment, community-led conservation, and large-scale ecological modelling.

This work lays essential groundwork for monitoring ecosystems at the speed and scale demanded by global restoration targets.



Forest Listeners

Get listening now and contribute

goo.gle/ForestListeners

