Projects:Software framework for multi-sensory environments

Latest revision as of 16:34, 8 December 2020

Overview

The purpose of this project is to develop a software framework that allows effective use of experimental facilities with multiple sensors, e.g. the Imaging Barn. The framework will standardize the process of synchronized data collection, data sharing (formats), and data manipulation (processing). Such standardization will promote collaborative development and support the transfer of technology and methods between different facilities, i.e. the Imaging Barn, the Imaging Hangar, and the human tracking facility at the Psychology Dept.

Closely related to Projects:Improve tracking of individual markers and marker patterns and Projects:Augment marker tracking with visual tracking.

Details: File:Cluster-Medium-Project-Grant HemalNaik.pdf

Contact

  • Hemal Naik, hnaik@ab.mpg.de
  • Mathias Günther (long term support), mathias.guenther@uni-konstanz.de

Collaborators

  • Oliver Deussen (host)
  • Iain Couzin
  • Britta Renner
  • Mate Nagy, mnagy@ab.mpg.de
  • Bastian Goldluecke, bastian.goldluecke@uni-konstanz.de

Aims

The main aim of the project is to provide a repository of useful code snippets that allow users to quickly work with the data generated in the imaging facilities. The code is intended to serve as a base on top of which users can build their own projects.

Key Features

  • Reading data generated by the VICON mo-cap system in the Imaging Barn
  • Augmenting 3D data onto videos
  • Annotation of keypoints on video
  • Verification of 6-DOF patterns
  • 6-DOF pose estimation independent of the VICON system
  • Stereo triangulation of points from video
  • 3D transformations between different coordinate systems
  • Reading a data stream over the network for real-time experiments [Info]

Note: the features are under constant development and this page is updated monthly.
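
Two of the features above, stereo triangulation and 3D transformations, can be sketched in a few lines of numpy. The following is a minimal illustration of linear (DLT) triangulation of a point seen by two calibrated cameras; it is not code from the framework itself, and the function name and matrix conventions are illustrative assumptions.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Triangulate one 3D point from two views via the linear DLT method.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : 2D pixel coordinates (u, v) of the same point in each view.
    Returns the 3D point in the common world frame.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

With exact correspondences the SVD solution recovers the point exactly; with noisy detections it gives the algebraic least-squares estimate, which in practice is usually refined by a non-linear method.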

Milestones

The first soft release with example code is planned for 22 Dec, during a presentation to researchers from the cluster. Ideally, this will introduce a basic repository with preliminary functionality that can be used immediately.


Example code

  • Real-time stream reading with the VICON SDK
  • Custom 6-DOF tracking from VICON 3D points, with comparison against VICON 6-DOF tracking
  • Real-time stream reading without the VICON SDK
  • Manual annotation of custom frames
  • Creating 2D-3D annotation datasets from manual annotations or VICON positions
  • Stereo triangulation
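
The custom 6-DOF tracking item above can be illustrated with the standard Kabsch/SVD alignment: given the known geometry of a marker pattern and the observed 3D positions of its markers, recover the rigid rotation and translation. This is a minimal sketch, not the repository's implementation; the function name and point conventions are assumptions.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Estimate the rigid 6-DOF pose aligning a known marker-pattern
    geometry to its observed 3D positions (Kabsch/SVD method).

    model_pts, observed_pts : (N, 3) arrays of corresponding points.
    Returns rotation R (3x3) and translation t (3,) such that
    observed ~= model @ R.T + t.
    """
    # Center both point sets on their centroids.
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

Comparing this estimate against the pose reported by the VICON system (as the second bullet above suggests) is then a matter of measuring the rotation and translation residuals per frame.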

Test Data

The sample datasets below will be made available soon for experimentation.

  • PigeonPostureDataset
  • StarlingDataset
  • PigeonGazeDataset

Example Projects

These projects will be presented in complete form at the end of the project.

  • Adding external camera calibration [showing hardware integration]
  • AR visualization of 3D movement [showing interactive visualization capabilities]
  • Interactive drone flight, recording data from an area of interest [showing real-time control and optimized data capture]
  • Live tracking of posture [showing ML capabilities]
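
As a hint of what the AR visualization project above involves, the sketch below projects world-frame 3D points (e.g. a VICON trajectory) into pixel coordinates with a plain pinhole camera model, ignoring lens distortion. It is an illustrative assumption about the approach, not the project's actual code.

```python
import numpy as np

def project_points(K, R, t, pts_3d):
    """Project world-frame 3D points into pixel coordinates with a
    pinhole camera model (no lens distortion).

    K : 3x3 intrinsic matrix.
    R, t : world-to-camera rotation (3x3) and translation (3,).
    pts_3d : (N, 3) points in the world (e.g. VICON) frame.
    Returns (N, 2) pixel coordinates.
    """
    cam = pts_3d @ R.T + t           # world -> camera frame
    img = cam @ K.T                  # apply intrinsics
    return img[:, :2] / img[:, 2:3]  # perspective divide
```

Drawing the resulting pixel coordinates onto the corresponding video frames gives the basic 3D-on-video overlay that both the "Augmenting 3D data onto videos" feature and the AR visualization project rely on.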