Container Orchestration Using the OpenStack Horizon GUI

Please note that container orchestration on Jetstream is in early development. We will do our best to help you with issues, but it may be a mutual learning effort.

This page is a work in progress! 

This page focuses on Kubernetes, though Docker Swarm and Mesos also work on Jetstream. Full container orchestration user docs (OpenStack Magnum) are here: https://docs.openstack.org/magnum/latest/user/ . It is assumed that you have already created a network, subnet, and router configuration for your project. 

Create Orchestration Template


1: Login to Horizon:

IU: https://iu.jetstream-cloud.org/dashboard

TACC: https://tacc.jetstream-cloud.org/dashboard

Domain: tacc

User Name: your TACC username

Password: your TACC password


2: Click on Project → Container Infra → Cluster Templates → +Create Cluster Template



3:  Fill in the appropriate information on the Info screen:

Enter a cluster template name, for example, sample_cluster_template

Then choose the orchestration engine that you'll use → we're going to select Kubernetes for this example

After giving it a descriptive name and choosing the orchestration engine, press the NEXT button.





4:  Fill in the Node Spec information

Per the Magnum docs (https://docs.openstack.org/magnum/latest/user/#clustertemplate) the OS type for Kubernetes is either CoreOS or Fedora, Swarm is Fedora, and Mesos is Ubuntu.

Image: For Kubernetes, you will need to use Fedora-Atomic-29-JS-Latest – it is a known, working image for K8s. Due to OpenStack Magnum's requirements, only CoreOS and Fedora Atomic images work for Kubernetes on OpenStack.

The only image we guarantee to work on Jetstream presently is Fedora-Atomic-29-JS-Latest (UUID ae275170-b48c-4104-8af1-4d271f33a43c)

Keypair: Choose the appropriate SSH keypair from the drop down - these will be all of the keypairs you've uploaded to your account.

Flavor (worker flavor): Choose an appropriate size for your workload. We suggest m1.medium as a starting point. You can change this in your template if you find another size is a better fit.

Master Flavor: Again, choosing the appropriate size may require some experimentation. We suggest starting with an m1.small, m1.quad, or m1.medium.

Volume Driver: Set to Cinder

Docker Storage Driver: Set to Device Mapper

Docker Volume Size: Fill in the volume size for each node, if desired. If specified, container images will be stored in a Cinder volume of the specified size in GB, and each cluster node will have a volume of that size attached. If not specified, images will be stored on the compute instance's local disk. For the 'devicemapper' storage driver, the minimum value is 3 GB.

Then press the NEXT button.
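If you prefer the command line, you can verify that the image, flavors, and keypair referenced above exist before building the template. This is a sketch assuming you have the openstack CLI installed and your Jetstream OpenStack credentials sourced:

```shell
# Confirm the Fedora Atomic image is available (name from above)
openstack image show Fedora-Atomic-29-JS-Latest

# List available flavors to pick master and worker sizes
openstack flavor list

# Confirm the SSH keypair you plan to use is uploaded
openstack keypair list
```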



5:  Fill in the Network information

Network Driver: Set to Flannel

External Network ID: Set to Public

Fixed Network: Choose your fixed network that you've created previously. 

Fixed Subnet: Choose your subnet that you've created previously.

Check the Floating IP box to assign a floating IP for your master node. If you do not check this, your cluster will NOT be accessible from the outside. 


Then press the blue SUBMIT button
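The same template can also be created with the OpenStack CLI (python-magnumclient). This is a sketch using the values from the steps above; `my-keypair`, `my-network`, and `my-subnet` are placeholders for your own keypair, fixed network, and subnet:

```shell
# Create a Kubernetes cluster template equivalent to the Horizon form above
openstack coe cluster template create sample_cluster_template \
  --coe kubernetes \
  --image Fedora-Atomic-29-JS-Latest \
  --keypair my-keypair \
  --flavor m1.medium \
  --master-flavor m1.small \
  --volume-driver cinder \
  --docker-storage-driver devicemapper \
  --docker-volume-size 10 \
  --network-driver flannel \
  --external-network public \
  --fixed-network my-network \
  --fixed-subnet my-subnet \
  --floating-ip-enabled
```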

Create a Cluster Using the Orchestration Template


6:  Click on Project → Container Infra → Clusters → +Create Cluster

7: Fill in the appropriate information on the Info screen:

Enter a cluster name, for example, sample_cluster

Please note that cluster names must begin with a letter.

Then choose the Kubernetes orchestration template that you created 

You should see the screen change with the template details once you do that. 

After giving it a descriptive name and choosing the template, press the NEXT button.

8:  Fill in the Size information:

Master Count: The number of servers that will serve as masters for the cluster. The default is 1. Set the count higher than 1 to enable high availability.

Node Count: The number of servers that will serve as worker nodes in the cluster. The default is 1.

Docker Volume Size: Fill in the volume size for each node, if desired. If specified, container images will be stored in a Cinder volume of the specified size in GB, and each cluster node will have a volume of that size attached. If not specified, images will be stored on the compute instance's local disk. For the 'devicemapper' storage driver, the minimum value is 3 GB.

If nothing is specified for the volume size, the size set in the cluster template will be used.

Press the NEXT button.

9: Fill in the Misc information:

Keypair: Choose the appropriate SSH keypair from the drop down - these will be all of the keypairs you've uploaded to your account.

Master Flavor ID: Again, choosing the appropriate size may require some experimentation. We suggest starting with an m1.small, m1.quad, or m1.medium.

Flavor ID (worker flavor): Choose an appropriate size for your workload. We suggest m1.medium as a starting point. You can change this in your template if you find another size is a better fit.


Then press the blue SUBMIT button
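The same cluster can also be launched with the OpenStack CLI. This is a sketch using the example names from above; `my-keypair` is a placeholder for your own keypair, and the counts are illustrative:

```shell
# Launch a cluster from the template created earlier
openstack coe cluster create sample_cluster \
  --cluster-template sample_cluster_template \
  --master-count 1 \
  --node-count 2 \
  --keypair my-keypair
```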

Check the Cluster Build

Click on Project → Container Infra → Clusters

You should see your cluster building (CREATE_IN_PROGRESS) in the Clusters screen. This can take some time, especially if you've created a large cluster. The containers with all of the software are being downloaded and activated for each master and node. You'll see this change to CREATE_COMPLETE when it's done.
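You can also watch the build from the CLI. A sketch, assuming the example cluster name from above; the `coe cluster config` step writes a kubeconfig you can point kubectl at once the status reaches CREATE_COMPLETE:

```shell
# Watch the status change from CREATE_IN_PROGRESS to CREATE_COMPLETE
openstack coe cluster list
openstack coe cluster show sample_cluster

# Once CREATE_COMPLETE, fetch a kubeconfig for use with kubectl
openstack coe cluster config sample_cluster --dir ~/sample_cluster
export KUBECONFIG=~/sample_cluster/config
```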

You can use the built-in monitoring with Kubernetes by going to:

http://NODE_IP:4194

Note: You'll need to have port 4194 open in your security policies. We suggest, as always, limiting that access to specific hosts or subnets as a best practice.
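Opening port 4194 can also be done from the CLI. A sketch, assuming a security group named `default`; the CIDR shown is a placeholder for a trusted subnet of your own, per the best-practice note above:

```shell
# Allow TCP 4194 (monitoring) from a specific trusted subnet only
openstack security group rule create default \
  --protocol tcp \
  --dst-port 4194 \
  --remote-ip 203.0.113.0/24
```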

You can also watch the construction and get a visualization of your cluster:

Click on Project → Orchestration → Stacks

and then click on your cluster to look at various views such as the Topology

Other Advanced Topics:

Once you've created your Kubernetes cluster, you may wish to explore such advanced topics as scaling:

https://docs.openstack.org/magnum/latest/user/#scaling
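As a sketch of what the scaling docs describe, the worker node count can be changed from the CLI with `openstack coe cluster update`, using the example cluster name from above:

```shell
# Grow the example cluster to three worker nodes
openstack coe cluster update sample_cluster replace node_count=3
```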

There are other advanced topics that Andrea Zonca has created linked from the Advanced API Topics page that may be worth exploring, including autoscaling Jupyter deployments using Kubernetes.