CCU:GPU Cluster Quick Start

== Overview ==
The typical workflow if you want to run your own applications is as follows:
1. Log in to the cluster and configure kubectl, the command-line tool used to talk to Kubernetes, to use your login credentials.
2. Create a persistent container with access to the global file system, and mount the Ceph volumes inside it. Use this container to transfer code and data between your machine and the cluster.
3. (optional) Create your own custom container image with the special libraries etc. that you need to run your code.
4. Create a GPU-enabled container based on your own image, on one of the ready-made images with Deep Learning toolkits, or on whatever image fits your workload.
5. Start your workloads either by logging into the container and running your code manually (only good for debugging), or by defining a job script that automatically runs a specified command inside the container until it completes successfully (recommended).
We will cover these points in more detail below.
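The steps above can be sketched as a sequence of kubectl commands. This is only an illustrative outline, not a cluster-specific recipe: the namespace, pod names, manifest file names, and paths below are placeholder assumptions, and step 3 (building a custom image) is omitted because it happens outside the cluster.

```shell
# 1. Point kubectl at your namespace (credentials setup is cluster-specific).
#    "my-namespace" is a placeholder.
kubectl config set-context --current --namespace=my-namespace

# 2. Start a persistent container whose spec mounts the Ceph volumes
#    (hypothetical manifest "storage-pod.yaml"), then copy code/data into it.
kubectl apply -f storage-pod.yaml
kubectl cp ./mycode my-storage-pod:/data/mycode

# 4. Start a GPU-enabled container from your own or a ready-made image
#    (hypothetical manifest "gpu-pod.yaml" requesting a GPU resource).
kubectl apply -f gpu-pod.yaml

# 5a. Debugging only: log in and run your code manually.
kubectl exec -it my-gpu-pod -- python /data/mycode/train.py

# 5b. Recommended: define a Job that runs the command until successful
#     completion (hypothetical manifest "train-job.yaml"), then follow its logs.
kubectl apply -f train-job.yaml
kubectl logs -f job/my-train-job
```

A Job (5b) is preferred over a manual `exec` session because Kubernetes restarts the workload on failure and records its completion status.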
 
== Pod configuration on the new cluster ==
