Fish Detection AI, optic and sonar-trained object detection models
The Fish Detection AI project aims to improve the efficiency of fish monitoring around marine energy facilities in order to comply with regulatory requirements. Despite advances in computer vision, little work has focused on sonar imagery, on identifying small fish in unlabeled data, or on underwater fish-monitoring methods for marine energy.
A YOLO (You Only Look Once) computer vision model was developed using the EyeSea dataset (optical) and sonar images from the Alaska Department of Fish and Game to identify fish in underwater environments. The models were trained with supervised learning on labeled fish imagery. These trained models were then applied to unseen datasets, with the aim of reducing the need to label data and train new models for each location. Hyper-image analysis and several image preprocessing methods were also explored to enhance fish detection.
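As a minimal sketch of the supervised training step, assuming the Ultralytics Python package and a YOLO-format dataset YAML (the file name eyesea.yaml and the hyperparameters are illustrative, not part of this submission):

    from ultralytics import YOLO

    # Start from pretrained YOLO v8 medium weights, the model size used here.
    model = YOLO("yolov8m.pt")

    # Train on labeled fish images described by a YOLO-format dataset YAML;
    # the path, epoch count, and image size are illustrative.
    model.train(data="eyesea.yaml", epochs=100, imgsz=640)

    # Evaluate on the validation split; box.map50 is mAP at IoU 0.5.
    metrics = model.val()
    print(metrics.box.map50)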
In this research we achieved:
1. Enhanced YOLO performance relative to a published article (Xu & Matzner, 2018) that used earlier YOLO versions for fish detection. Specifically, we achieved a best mean Average Precision (mAP) of 0.68 on the EyeSea optical dataset using YOLO v8 (medium model), surpassing the YOLO v3 benchmarks reported in that article. We further demonstrated up to 0.65 mAP on unseen sonar domains by leveraging a hyper-image approach (stacking consecutive frames), showing promising cross-domain adaptability.
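The exact hyper-image construction is not specified here; one plausible sketch is to stack three consecutive grayscale sonar frames into the channels of a single image so a standard three-channel YOLO input can pick up frame-to-frame motion cues (file names are illustrative):

    import cv2
    import numpy as np

    # Read three consecutive sonar frames as grayscale images.
    frames = [cv2.imread(f"frame_{i}.png", cv2.IMREAD_GRAYSCALE)
              for i in range(3)]

    # Stack along the channel axis to form one (H, W, 3) hyper-image,
    # so temporal change appears as channel-to-channel differences.
    hyper = np.stack(frames, axis=-1)
    cv2.imwrite("hyper_frame.png", hyper)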
This data submission includes:
- The best-performing trained YOLO model weights (PyTorch .pt files), which can be loaded directly for object detection (see the usage sketch after this list). These are found in the Yolo_models_downloaded zip file
- A documentation file explaining the upload and the goals of experiments 1-5, as detailed in the Word document (named 20250310_Yolo_object_detection_hot_tos...)
- Code files: five sub-folders of Python, shell, and YAML files used to run experiments 1-5, plus a separate folder for YOLO models. Each is provided in its own zip file, named after the corresponding experiment
- Sample data structures (sample1 and sample2, each in its own zip file) showing how the raw downloaded data should be structured after running our provided code
- A link to the article we were replicating (Xu & Matzner, 2018)
- A link to the YOLO documentation site from the model's original creators (Ultralytics)
- A link to the downloadable EyeSea dataset from PNNL (instructions on how to download and format the data to replicate these experiments are in the How To Word document)
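As a minimal usage sketch for the provided weights, assuming the Ultralytics Python package (the weight and image file names below are illustrative; substitute the actual .pt file from the Yolo_models_downloaded zip):

    from ultralytics import YOLO

    # Load one of the provided trained weight files.
    model = YOLO("Yolo_models_downloaded/best.pt")

    # Detect fish in a sample underwater image; conf is the minimum
    # confidence threshold for reported boxes.
    results = model.predict("sample_image.png", conf=0.25)
    for box in results[0].boxes:
        print(box.xyxy.tolist(), float(box.conf))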
Citation Formats
Water Power Technologies Office. (2014). Fish Detection AI, optic and sonar-trained object detection models [data set]. Retrieved from https://mhkdr.openei.org/submissions/600.
Slater, Katherine, Yoder, Delano, Noyes, Carlos, and Scott, Brett. Fish Detection AI, optic and sonar-trained object detection models. United States: N.p., 25 June 2014. Web. https://mhkdr.openei.org/submissions/600.
Slater, Katherine, Yoder, Delano, Noyes, Carlos, & Scott, Brett. Fish Detection AI, optic and sonar-trained object detection models. United States. https://mhkdr.openei.org/submissions/600
Slater, Katherine, Yoder, Delano, Noyes, Carlos, and Scott, Brett. 2014. "Fish Detection AI, optic and sonar-trained object detection models". United States. https://mhkdr.openei.org/submissions/600.
@misc{oedi_600,
  title = {Fish Detection AI, optic and sonar-trained object detection models},
  author = {Slater, Katherine and Yoder, Delano and Noyes, Carlos and Scott, Brett},
  url = {https://mhkdr.openei.org/submissions/600},
  place = {United States},
  year = {2014},
  month = {06}
}
Details
Data from Jun 25, 2014
Last updated Mar 10, 2025
Submission in progress
Organization
Water Power Technologies Office
Contact
Victoria Sabo
Authors
Katherine Slater, Delano Yoder, Carlos Noyes, and Brett Scott
Keywords
MHK, Marine, Hydrokinetic, energy, power, AI, YOLO model, object detection, you only look once model, neural networks, EyeSea dataset
DOE Project Details
Project Lead Samantha Eaves
Project Number EERE T 540.210-09