Projects:Software framework for multi-sensory environments
== Overview ==

The purpose of this project is to develop a software framework that allows effective use of experimental facilities with multiple sensors, e.g. the Imaging Barn. The framework will standardize the process of synchronized data collection, data sharing (formats), and data manipulation (processing). Such standardization will promote collaborative development and support the transfer of technology and methods between facilities, i.e. the Imaging Barn, the Imaging Hangar, and the human tracking facility at the Psychology Dept.
  
 
Closely related to [[Projects:Improve tracking of individual markers and marker patterns]] and [[Projects:Augment marker tracking with visual tracking]].

Details: [[file:Cluster-Medium-Project-Grant_HemalNaik.pdf]]
  
 
== Contact ==

* [[Hemal Naik]], hnaik@ab.mpg.de
* [[Mathias Günther]]

== Collaborators ==

* Oliver Deussen (host)
* Iain Couzin
* Britta Renner
* Mate Nagy, mnagy@ab.mpg.de
* Bastian Goldluecke, bastian.goldluecke@uni-konstanz.de
  
 
== Aims ==

The main aim of the project is to provide a repository of useful code snippets that allow users to quickly work with the data generated in the imaging facilities. Ideally, the code will serve as a base on top of which users can build their own projects.
== Key Features ==

* Reading data generated by the [[Vicon:Data format documentation|VICON mo-cap]] system in the [[Imaging Barn|Imaging Barn]]
* Augmenting 3D data on videos
* Annotating keypoints on video
* Verifying 6-DOF patterns
* Estimating 6-DOF pose independently of the VICON system
* Stereo triangulation of points from video
* 3D transformations between different coordinate systems
* Reading the data stream over the network for real-time experiments [https://www.vicon.com/software/datastream-sdk/ Info]

''Note: the features are under constant development, and this page is updated every month.''
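To illustrate the "augmenting 3D data on videos" feature, here is a minimal sketch that projects a 3D point (e.g. a VICON marker position) into pixel coordinates with a pinhole camera model. The intrinsics (<code>fx</code>, <code>fy</code>, <code>cx</code>, <code>cy</code>) and the camera pose are made-up placeholder values, not the Imaging Barn's actual calibration.

```python
# Minimal pinhole projection: world point -> pixel coordinates.
# All calibration numbers below are placeholders, not real Barn values.

def project_point(pw, R, t, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates.

    pw : (x, y, z) world point, e.g. a VICON marker position
    R  : 3x3 world-to-camera rotation (list of rows)
    t  : world-to-camera translation
    """
    # World -> camera coordinates: pc = R @ pw + t
    pc = [sum(R[i][j] * pw[j] for j in range(3)) + t[i] for i in range(3)]
    if pc[2] <= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide, then apply the intrinsics
    u = fx * pc[0] / pc[2] + cx
    v = fy * pc[1] / pc[2] + cy
    return u, v

# Identity pose: camera at the world origin, looking down +Z.
R_id = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t0 = [0.0, 0.0, 0.0]

u, v = project_point((0.1, -0.2, 2.0), R_id, t0, 1000.0, 1000.0, 960.0, 540.0)
print(u, v)  # -> 1010.0 440.0
```

In a real overlay pipeline, <code>R</code> and <code>t</code> would come from the camera's extrinsic calibration against the VICON coordinate frame, and the resulting (u, v) would be drawn onto the corresponding video frame.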
== Milestones ==

The first soft release with example code is planned for 22 Dec and will take place during a presentation with researchers from the cluster. Ideally, we will introduce a basic repository with preliminary functionality that can be used immediately.
  
== Example code ==

* Real-time stream reading with the VICON SDK
* Custom 6-DOF tracking from VICON 3D points, with a comparison against VICON's own 6-DOF tracking
* Real-time stream reading without the VICON SDK
* Manual annotation of custom frames
* Creating 2D-3D annotation datasets from manual annotations or VICON positions
* Stereo triangulation
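For the stereo triangulation item, a minimal sketch, assuming the two calibrated views have already been reduced to camera-centre/ray form (a midpoint method rather than the full DLT used in practice):

```python
# Midpoint triangulation of a 3D point from two viewing rays.
# Each ray is (origin, direction); directions need not be unit length.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(o1, d1, o2, d2):
    """Return the midpoint of the shortest segment between two rays."""
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [o2[i] - o1[i] for i in range(3)]
    p, q = dot(w, d1), dot(w, d2)
    det = b * b - a * c
    if abs(det) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    # Solve the 2x2 system for the closest points on each ray
    s = (b * q - c * p) / det
    t = (a * q - b * p) / det
    p1 = [o1[i] + s * d1[i] for i in range(3)]
    p2 = [o2[i] + t * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2 for i in range(3)]

# Two rays that intersect exactly at (0, 0, 1):
point = triangulate_midpoint((0, 0, 0), (0, 0, 1), (1, 0, 0), (-1, 0, 1))
print(point)  # -> [0.0, 0.0, 1.0]
```

With real video, each ray would be back-projected from a detected 2D point using that camera's calibration; the distance between <code>p1</code> and <code>p2</code> is a useful sanity check on calibration quality.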
  
== Test Data ==

The sample datasets will be made available soon for experimentation:

* PigeonPostureDataset
* StarlingDataset
* PigeonGazeDataset
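The export format of these datasets is not yet documented. As a placeholder, the sketch below parses a hypothetical <code>frame,marker,x,y,z</code> CSV layout with the standard library; the column names, units, and file layout are assumptions until the datasets are actually released.

```python
import csv
import io

def load_marker_csv(text):
    """Parse a hypothetical 'frame,marker,x,y,z' CSV into
    {frame: {marker: (x, y, z)}}.  The layout is assumed, not
    the actual PigeonPostureDataset format."""
    frames = {}
    for row in csv.DictReader(io.StringIO(text)):
        frame = int(row["frame"])
        xyz = (float(row["x"]), float(row["y"]), float(row["z"]))
        frames.setdefault(frame, {})[row["marker"]] = xyz
    return frames

sample = """frame,marker,x,y,z
0,head,0.10,0.20,1.50
0,tail,0.05,0.18,1.45
1,head,0.11,0.21,1.50
"""
data = load_marker_csv(sample)
print(data[0]["head"])  # -> (0.1, 0.2, 1.5)
```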
  
== Example Projects ==

These projects will be presented in complete form at the end of the project:

* Adding external camera calibration [shows hardware integration]
* AR visualization of 3D movement [shows interactive visualization capabilities]
* Interactive drone flight, recording data from an area of interest interactively [shows real-time control and optimized data capturing]
* Live tracking of posture [shows ML capabilities]

Latest revision as of 16:34, 8 December 2020
