The aim of the project is to estimate the heart rate and breathing rate of a fish.
Provided data
Suggested/tested approaches
The pipeline consists of multiple steps, each of which has a drastic influence on the resulting quality. The scene either contains a single fish, or each fish is tracked individually, independently of the other fish present in the scene. The tracking stage yields a bounding box; the image within the bounding box is then fed to a minimalist encoder-decoder network whose latent space is used to detect the different states of the fish (either the heart or the breath phase).
The architecture of the network is very minimalist (3x3 consecutive convolutions; the image is re-scaled to 64x64 pixels with three channels) due to the extremely small training dataset, which on average consists of no more than 2000 frames. Over-fitting is prevented by a very short training stage (100 iterations, incomparably fewer than is usually used). Bounding boxes from different videos cannot be combined, because the setting (fish, scene) and alignment differ for each video.
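The encoder-decoder described above could be sketched as follows, assuming PyTorch; the channel counts, latent dimension, and layer layout are illustrative assumptions, not the project's exact architecture:

```python
import torch
import torch.nn as nn

class FishAutoencoder(nn.Module):
    """Minimalist encoder-decoder for 64x64 RGB bounding-box crops.

    Hypothetical sketch: 3x3 convolutions with a small latent space,
    matching the spirit (not the exact layers) of the text above.
    """

    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 16 * 16),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(8, 3, 3, stride=2, padding=1, output_padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent code, used later for state detection
        return self.decoder(z), z    # reconstruction + latent

model = FishAutoencoder()
x = torch.rand(4, 3, 64, 64)         # a batch of 4 bounding-box crops
recon, latent = model(x)
```

Training would minimize a reconstruction loss (e.g. MSE between `recon` and `x`) for the ~100 iterations mentioned above; only the latent code is used afterwards.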
The latent space usually consists of multiple dimensions, some of which are redundant, but we calculate the average of the entire latent space for each frame. Its absolute value provides a rough clue as to which state the fish is in.
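Reducing the latent space to one scalar per frame could look like the sketch below, assuming the per-frame latent vectors are stacked in a NumPy array (`latents` is a toy stand-in for the real encoder output; whether the absolute value is averaged or summed is an implementation choice):

```python
import numpy as np

# One latent vector per video frame, shape (n_frames, latent_dim).
# Random toy data standing in for the real encoder output.
rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 16))

# Average the absolute values over the latent dimensions, giving a single
# non-negative scalar per frame whose magnitude hints at the fish's state.
state_signal = np.abs(latents).mean(axis=1)
```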
Unfortunately, the popular embedding approach (t-SNE) does not seem to work (see results, left figure).
(Top) t-SNE embedding of the latent space to 2D after being clustered with EM (each color is a different cluster). (Bottom) Assignment of a state to each frame based on clustering of the latent space.
(Top) Sum of the absolute values of the latent space. (Bottom) Low-pass-filtered signal; the red dots are detected peaks which should correspond to the detected breath phases.
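The filtering and peak-detection step described in the caption above could be sketched as follows, assuming SciPy; the frame rate, cutoff frequency, and minimum peak spacing are illustrative assumptions, and the synthetic signal stands in for the real per-frame latent-space sum:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# Toy per-frame signal: a slow "breathing" oscillation (0.5 Hz) plus noise.
fps = 25.0                       # assumed video frame rate
t = np.arange(0, 20, 1 / fps)    # 20 seconds of frames
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.normal(size=t.size)

# Keep only the low frequencies where breathing is expected
# (the 2 Hz cutoff is an assumption, not the project's value).
b, a = butter(4, 2.0 / (fps / 2), btype="low")
filtered = filtfilt(b, a, signal)

# Each detected peak should correspond to one breath; enforce a minimum
# spacing of half a second between peaks to suppress spurious detections.
peaks, _ = find_peaks(filtered, distance=fps / 2)
breath_rate_hz = len(peaks) / (t[-1] - t[0])
```

The same scheme would apply to the heart-rate signal with a different frequency band.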