Difference between revisions of "CCU:Roadmap"
Revision as of 10:02, 9 May 2019
This is a general roadmap for the CCU; rough timelines will be added later.
Wikimedia site
- Finish help pages for projects
- Finish help pages for computational resources (once those are available and all systems are in place)
Computational resources
- Compute servers for machine learning
  - GPU node nVidia DGX-2 (16x V100) has been ordered
  - GPU node with 4x Titan RTX has been ordered (reserved for SFB TRR 161)
- A CCU access server has been ordered; once it arrives:
  - set up new user authentication systems
  - move the Wiki to the new server
  - set up git and container repositories
  - set up data storage
  - set up Kubernetes as a cluster scheduler
  - link the GPU nodes and make them available to users
- Existing multi-GPU systems might be integrated into the cluster in the future.
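Once Kubernetes is running as the cluster scheduler, the GPU nodes are typically made available through the standard device-plugin mechanism. The manifest below is a minimal sketch of how a user job would request a GPU; it assumes the NVIDIA device plugin is installed on the nodes, and the pod name and container image are illustrative, not part of the CCU setup.

```yaml
# Sketch: a pod requesting one GPU via the NVIDIA device plugin.
# Assumes the nvidia-device-plugin DaemonSet is deployed on the GPU nodes.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test          # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: cuda-check
      image: nvidia/cuda:12.4.1-base-ubuntu22.04   # illustrative image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # the scheduler places the pod on a GPU node
```

With such resource limits in place, Kubernetes handles the scheduling, so users need not know which physical node (DGX-2 or Titan RTX server) runs their job.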