New open-source image database unleashes the power of AI for ocean exploration

A new collaborative effort between MBARI and other research institutions is harnessing the power of artificial intelligence and machine learning to accelerate efforts to study the ocean.

In order to manage the impacts of climate change and other threats, researchers urgently need to know more about the inhabitants, ecosystems, and processes of the ocean. As scientists and engineers develop advanced robotics capable of visualizing marine life and environments to monitor changes in the health of the ocean, they face a fundamental problem: the collection of images, video, and other visual data greatly exceeds researchers' capacity to analyze it.

FathomNet is an open-source image database that uses state-of-the-art data processing algorithms to help process the backlog of visual data. The use of artificial intelligence and machine learning will reduce the bottleneck for analyzing underwater images and accelerate important research into ocean health.

“A big ocean needs big data. Researchers collect large amounts of visual data to observe life in the ocean. How can we process all this information without automation? Machine learning is leading the way, but these approaches rely on massive datasets for training. FathomNet was designed to fill this gap,” said MBARI Principal Engineer Kakani Katija.

Katija, project co-founders Katy Croff Bell (Ocean Discovery League) and Ben Woodward (CVision AI), and members of the extended FathomNet team detailed the development of this new image database in a recent research publication in Scientific Reports.

Recent advances in machine learning allow rapid and sophisticated analysis of visual data, but the use of artificial intelligence in ocean research has been limited by the lack of a standard set of existing images that could be used to train machines to recognize and catalog underwater objects and life. FathomNet addresses this need by aggregating imagery from multiple sources to create a publicly accessible, expert-curated underwater imagery training database.

“Over the past five years, machine learning has revolutionized the landscape of automated visual analysis, largely through massive collections of labeled data. ImageNet and Microsoft COCO are benchmark datasets for terrestrial applications to which machine learning and computer vision researchers flock, but we haven’t even begun to scratch the surface of machine learning capabilities for underwater visual analysis,” said Ben Woodward, co-founder and CEO of CVision AI and co-founder of FathomNet. “With FathomNet, we aim to provide a rich and interesting reference to engage the machine learning community in a new field.”

Over the past 35 years, MBARI has recorded nearly 28,000 hours of deep-sea video and collected over a million deep-sea images. This wealth of visual data has been annotated in detail by research technicians from the MBARI video lab. MBARI’s video archive includes approximately 8.2 million annotations that record sightings of animals, habitats, and objects. This rich dataset is an invaluable resource for institute researchers and collaborators around the world.

FathomNet incorporates a subset of MBARI’s dataset, as well as assets from National Geographic and NOAA.

The National Geographic Society’s Exploration Technology Lab has been deploying versions of its autonomous benthic lander platform, the Deep Sea Camera System, since 2010, collecting more than 1,000 hours of video data from locations across ocean basins and in a variety of marine habitats. These videos were then ingested into CVision AI’s cloud-based collaborative analytics platform and annotated by subject matter experts from the University of Hawaii and OceansTurn.

The National Oceanic and Atmospheric Administration (NOAA) Ocean Exploration began collecting video data with a dual remotely operated vehicle system aboard the NOAA ship Okeanos Explorer in 2010. Over 271 terabytes are archived and publicly available from NOAA’s National Centers for Environmental Information (NCEI). NOAA Ocean Exploration initially collected annotations through participating volunteer scientists and began supporting expert taxonomists in 2015 to further annotate collected videos.

“FathomNet is a great example of how collaboration and community science can foster breakthroughs in how we learn about the ocean. With data from MBARI and other collaborators as the backbone, we hope FathomNet can help to accelerate ocean research at a time when understanding the ocean is more important than ever,” said Lonny Lundsten, Senior Research Technician at MBARI’s Video Lab, co-author and FathomNet team member.

As an open-source web resource, FathomNet lets other institutions contribute to and draw on the database rather than mount traditional, resource-intensive efforts to process and analyze visual data on their own. MBARI has launched a pilot program to use machine learning models trained on FathomNet data to annotate video captured by remotely operated vehicles (ROVs). The use of AI algorithms reduced human effort by 81 percent and increased the labeling rate tenfold.

Machine learning models trained with FathomNet data also have the potential to revolutionize ocean exploration and monitoring. For example, equipping robotic vehicles with cameras and improved machine learning algorithms may eventually enable automated search and tracking of marine animals and other underwater objects.

“Four years ago we envisioned using machine learning to analyze thousands of hours of ocean video, but at the time this was not possible primarily due to a lack of annotated images. FathomNet will now make that vision a reality, unlocking discoveries and enabling explorers, scientists and the public to use tools to accelerate the pace of ocean discovery,” said Katy Croff Bell, Founder and President of the Ocean Discovery League and co-founder of FathomNet.

As of September 2022, FathomNet contained 84,454 images, representing 175,875 localizations from 81 distinct collections for 2,243 concepts, with additional contributions ongoing. FathomNet aims to obtain 1,000 independent observations for more than 200,000 animal species in various poses and imaging conditions, eventually exceeding 200 million observations in total. For FathomNet to achieve its goals, significant community engagement, including high-quality contributions from a wide range of groups and individuals, and extensive use of the database will be required.

“While FathomNet is an API-based web platform where people can upload tagged data to train new algorithms, we also want it to serve as a community where explorers and ocean enthusiasts from all walks of life can contribute their knowledge and expertise and help solve ocean visual data challenges that are impossible without widespread engagement,” said Katija.
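To make the idea of uploading labeled data concrete, the sketch below shows how expert annotations might be filtered into a training manifest for a machine learning model. It is only an illustration: the record fields (`image_url`, `concept`, `bounding_box`) and example URLs are hypothetical, not FathomNet's actual API schema.

```python
import json

# Hypothetical annotation records, loosely modeled on an expert-curated
# image database entry: each record pairs an image with a biological
# concept label and a bounding-box localization.
records_json = """
[
  {"image_url": "https://example.org/img/0001.png",
   "concept": "Aurelia aurita",
   "bounding_box": {"x": 120, "y": 45, "width": 230, "height": 180}},
  {"image_url": "https://example.org/img/0002.png",
   "concept": "Sebastolobus",
   "bounding_box": {"x": 10, "y": 300, "width": 90, "height": 75}}
]
"""

def build_manifest(records, concepts):
    """Keep only records whose concept is of interest, and emit
    (url, label, box) tuples suitable for a detection-training loader."""
    wanted = set(concepts)
    manifest = []
    for rec in records:
        if rec["concept"] in wanted:
            b = rec["bounding_box"]
            manifest.append((rec["image_url"], rec["concept"],
                             (b["x"], b["y"], b["width"], b["height"])))
    return manifest

records = json.loads(records_json)
manifest = build_manifest(records, ["Aurelia aurita"])
print(manifest)
```

In practice a pipeline like this would query the database for the concepts a model should learn, then hand the resulting manifest to an image-download and training step.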


Seed funding for FathomNet was provided by the National Geographic Society (#518018), the National Oceanic and Atmospheric Administration (NA18OAR4170105), and MBARI through the generous support of the David and Lucile Packard Foundation. Additional financial support was provided by the National Geographic Society (NGS-86951T-21) and the National Science Foundation (OTIC #1812535 & Convergence Accelerator #2137977).