Projects:Software framework for multi-sensory environments

From Collective Computational Unit
Revision as of 16:14, 25 November 2020 by Hemal.naik (talk | contribs) (Example code)

Overview

The purpose of this project is to develop a software framework that allows effective use of experimental facilities with multiple sensors, e.g. the imaging barn. The framework will standardize synchronized data collection, data sharing (formats), and data manipulation (processing). Such standardization will promote collaborative development and support the transfer of technology and methods between facilities, i.e. the Barn, the Imaging Hangar, and the human tracking facility at the Psychology Dept.

Closely related to Projects:Improve tracking of individual markers and marker patterns and Projects:Augment marker tracking with visual tracking.

Details:File:Cluster-Medium-Project-Grant HemalNaik.pdf

Contact

  • Hemal Naik, hnaik@ab.mpg.de
  • Mathias Günther (long term support), mathias.guenther@uni-konstanz.de

Collaborators

  • Oliver Deussen (host)
  • Iain Couzin
  • Britta Renner
  • Mate Nagy, mnagy@ab.mpg.de
  • Bastian Goldluecke, bastian.goldluecke@uni-konstanz.de

Aims

The main aim of the project is to provide a repository of useful code snippets that allow users to quickly work with the data generated in the imaging facilities. The code should ideally serve as a base on top of which users can build their own projects.

Key Features

  • Reading data generated by VICON mo-cap system in the Imaging Barn.
  • Augmenting 3D data on videos
  • Annotation of keypoints on video
  • Verification of 6-DOF patterns
  • 6-DOF pose independent of VICON system
  • Stereo triangulation of points from video
  • 3D-Transformations in different coordinate systems
  • Reading data stream over network for real-time experiments [Info]
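Two of the features above, 6-DOF pose estimation independent of the VICON system and 3D transformations between coordinate systems, boil down to estimating a rigid transform between corresponding 3D marker sets. As a minimal sketch (not the framework's actual code, and assuming numpy as the only dependency), the standard Kabsch algorithm recovers the rotation and translation from matched point clouds:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate the 6-DOF rigid transform (R, t) with dst ~ R @ src + t.

    Kabsch algorithm on two (N, 3) arrays of corresponding 3D points,
    e.g. marker positions seen in two coordinate systems.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src = src.mean(axis=0)                  # centroids
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: rotate a 4-marker pattern 90 deg about z and shift it.
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([10., 0., 5.])
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
```

The same routine can be used to verify 6-DOF marker patterns against VICON's own segment poses, since both reduce to comparing rigid transforms.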

Note: The features are under constant development and this page is updated monthly.

Example code

  • Realtime stream reading with VICON SDK
  • Custom 6-DOF tracking with VICON 3D Points and comparison with VICON 6-DOF tracking
  • Realtime stream reading without VICON SDK
  • Manual annotation of custom frames
  • Creating 2D-3D annotation datasets from manual annotation or VICON positions
  • Stereo triangulation
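For the stereo triangulation example, the core step is recovering a 3D point from its pixel coordinates in two calibrated views. A minimal sketch of the standard linear (DLT) method, assuming numpy and synthetic camera matrices (not the repository's actual example code):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coords.
    Returns the 3D point in the shared world frame.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A = homogeneous solution
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy rig: two identity-intrinsics cameras, second shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

In practice the projection matrices would come from the facility's camera calibration rather than being constructed by hand as here.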

Test Data

Give a specific description of the datasets you provide or can provide, which people need in order to work on your problem. If available and/or necessary, also suggest a means for reading the data format. If you can provide links to the data so people can download and take a look, all the better. Also list any known limitations, whether you can easily acquire/record new data, and any other useful information.

Note: Once the CCU server is up and running, datasets should be stored there for easy availability. See the howtos on storage for details.