Are there any existing models using YOLO (or otherwise) that would be easily adaptable to count pellets produced by an individual mouse from video?

I am trying my best to not reinvent the wheel in order to automate my data collection for murine gastric emptying. Currently, members of our very small lab have to manually count net pellets per cage. I am hoping to find code that would be easily adapted for the following scenario:

Aerial-view camera over a cage with a maximum of 3 mice. Bedding would be minimal and its color would be in stark contrast to the pellets. Each mouse would be recognized as an individual. Each new pellet that appears would be labeled "object#" so that pellets are not counted twice. If mice are close together when a new pellet appears, the program would assign the pellet to the most likely nearby mouse, for example by proximity or a probability estimate.
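
For illustration only, here is a minimal sketch of the nearest-mouse assignment idea described above, assuming the detector already yields centroid coordinates for each tracked mouse and for each newly appeared pellet in a frame (the function name, distance threshold, and data layout are hypothetical, not from any existing library):

```python
import math

def assign_new_pellets(mouse_centroids, new_pellet_centroids, max_dist=150):
    """Assign each newly detected pellet to the closest mouse centroid.

    mouse_centroids: dict mapping mouse_id -> (x, y) in pixels
    new_pellet_centroids: list of (x, y) for pellets not seen in earlier frames
    max_dist: assumed pixel radius beyond which no mouse is credited
    Returns a dict of per-mouse pellet counts for this frame.
    """
    counts = {mouse_id: 0 for mouse_id in mouse_centroids}
    for px, py in new_pellet_centroids:
        best_id, best_dist = None, float("inf")
        for mouse_id, (mx, my) in mouse_centroids.items():
            d = math.hypot(px - mx, py - my)
            if d < best_dist:
                best_id, best_dist = mouse_id, d
        if best_id is not None and best_dist <= max_dist:
            counts[best_id] += 1
    return counts

# Hypothetical frame: two mice, one new pellet near mouse "A"
print(assign_new_pellets({"A": (100, 120), "B": (400, 300)}, [(110, 130)]))
```

A real pipeline would add per-frame tracking of pellet IDs so that a pellet is only ever counted once, which is exactly the "object#" labeling described above.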



Solution 1:[1]

There are models for object detection, for example YOLO. The problem is that you would have to train your own model, because there is no general-purpose model that detects your pellets out of the box. The training process itself is straightforward, since it is built into the YOLO framework. The biggest effort goes into preparing the training data, which you would need to manually label and convert into the format of your training framework. There are tools to simplify the labelling process, but gathering good-quality training data is still an enormous amount of work, since you would need to label each frame individually.
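
As a rough illustration of that conversion step, the sketch below writes one annotation line in the darknet/YOLO text label format (one line per object: class index followed by center coordinates and box size, all normalised to the image dimensions). The example box values, class index, and file names are made up:

```python
def to_yolo_line(class_id, box, img_w, img_h):
    """Convert a pixel bounding box (x_min, y_min, x_max, y_max) into one
    darknet/YOLO label line: class x_center y_center width height,
    with all coordinates normalised to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Hypothetical pellet box in a 1280x720 frame, with class 1 = pellet;
# the line would be saved as frame_0001.txt alongside frame_0001.jpg.
print(to_yolo_line(1, (600, 400, 630, 425), 1280, 720))
```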

I used this website to label my data and the darknet framework to train my models.

#NO ADVERTISEMENT

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: gerda die gandalfziege