
HOWTO: Preparing your cloud to be driven by CBTOOL


Generic info

  • The CBTOOL Orchestrator Node (ON) will require access to every newly created instance (just once). If you are running the ON inside the cloud, this should not be a problem. If you are running your CBTOOL Orchestrator Node outside of the cloud, please follow these instructions

  • For each cloud, you will basically need four pieces of information:

    a) Access info (i.e., URL for the API endpoint)

    b) Authentication info (i.e., username/password, tokens)

    c) Location info (e.g., Region or Availability Zone)

    d) The name or identifier of at least one base (unconfigured) Ubuntu or RHEL/CentOS/Fedora image to be used later for the creation of the workloads (e.g., "ami-a9d276c9" on EC2's "us-west-2" or "ubuntu-1604-xenial-v20161221" on Google Compute Engine). Please take note of the username that should be used to connect to this image.

Cloud-specific info:

  1. Amazon EC2
  2. OpenStack
  3. Google Compute Engine
  4. DigitalOcean
  5. Docker/Swarm (Parallel Docker Manager)
  6. LXD/LXC (Parallel Container Manager)
  7. Kubernetes
  8. Libvirt (Parallel Libvirt Manager)
  9. VMware vCloud Director
  10. CloudStack
  11. SoftLayer
  12. Azure Service Management

Amazon EC2

  • The CBTOOL Orchestrator Node (ON) is supposed to have network access to both the EC2 API and to the instantiated VMs (once created), through either their private or public IP addresses.

  • Pieces of information needed for your private configuration file (https://github.com/ibmcb/cbtool/wiki/FAQ-S#wiki-sq2); a sample snippet follows this list:

    a) AWS access key (EC2_ACCESS)

    b) EC2 security group (EC2_SECURITY_GROUPS)

    c) AWS secret key (EC2_CREDENTIALS)

    d) EC2 Region, the default being us-east-1 (EC2_INITIAL_VMCS)

    e) The name of a user already existing on the VM images, which CBTOOL will use to log in to them (EC2_LOGIN)
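
  • A minimal sketch of the corresponding entries, assuming the [USER-DEFINED : CLOUDOPTION_...] section layout used by CBTOOL's configuration templates (the section name and all values below are placeholders, using AWS's documented example keys):

        [USER-DEFINED : CLOUDOPTION_MYEC2]
        EC2_ACCESS = AKIAIOSFODNN7EXAMPLE
        EC2_CREDENTIALS = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
        EC2_SECURITY_GROUPS = default
        EC2_INITIAL_VMCS = us-east-1
        EC2_LOGIN = ubuntu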

  • IMPORTANT: On the most current version of the code, SSH key pairs are automatically created and managed by CBTOOL. If you insist on using your own keys (** NOT RECOMMENDED **), then there are two additional parameters that will require changes:

    I) Create a new key pair on EC2 (e.g., cbkey) and download the private key file to ~/cbtool/credentials/ (don't forget to chmod 600) (EC2_KEY_NAME)

    II) Just repeat the key name from item I) (EC2_SSH_KEY_NAME). It sounds redundant :-), but on some clouds the two parameters can differ (it depends mostly on your cloud's capabilities and configuration).


OpenStack

  • The CBTOOL Orchestrator Node (ON) is supposed to have network access to both the nova API endpoints and to the instantiated VMs (once created), through their fixed IP addresses.

  • Pieces of information needed for your private configuration file (a sample snippet follows this list):

    a) IP address for the nova API endpoint (OSK_ACCESS). If the URL is reachable through a hostname that contains dashes (-), please replace those with the word _dash_.

    For instance, OSK_ACCESS=http://my-cloud-controller.mycloud.com:5000/v2.0/ should be rewritten as OSK_ACCESS=http://my_dash_cloud_dash_controller.mycloud.com:5000/v2.0/
    

    b) API username, password and tenant name (OSK_CREDENTIALS). Normally, this is simply a triple <user>-<password>-<tenant> (e.g., admin-temp4now-admin). If HTTPS access is required, then the parameter should be <user>-<password>-<tenant>-<cacert> (path to certificate) instead of simply <user>-<password>-<tenant> (e.g., OSK_CREDENTIALS = admin-abcdef-admin-/etc/openstack/openstack.crt).

    c) The name of an already existing security group, obtained with nova secgroup-list (OSK_SECURITY_GROUPS)

    d) The name of an already existing Region (usually just "RegionOne") (OSK_INITIAL_VMCS)

    e) The name of a user already existing on the VM images, which CBTOOL will use to log in to them (OSK_LOGIN)

    f) The name of an already existing network, obtained with the command nova network-list (OSK_NETNAME).
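
  • A minimal sketch, again assuming the [USER-DEFINED : CLOUDOPTION_...] layout (the section name and all values are placeholders, reusing the examples from the list above):

        [USER-DEFINED : CLOUDOPTION_MYOSK]
        OSK_ACCESS = http://my_dash_cloud_dash_controller.mycloud.com:5000/v2.0/
        OSK_CREDENTIALS = admin-temp4now-admin
        OSK_SECURITY_GROUPS = default
        OSK_INITIAL_VMCS = RegionOne
        OSK_LOGIN = cbuser
        OSK_NETNAME = private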


Google Compute Engine

  • Execute gcloud auth login --no-launch-browser. This command will output a URL that has to be accessed from a browser. It will produce an authentication string that has to be pasted back at the command prompt.

  • Execute gcloud config set project YOUR-PROJECT-ID, where YOUR-PROJECT-ID is the ID of the project.

  • Test that authentication was configured successfully by running a command such as gcloud compute machine-types list.

  • Pieces of information needed for your private configuration file (a sample snippet follows this list):

    a) The project name where instances will be effectively run, and the project name that houses the images. Both as a comma-separated string pair (this can be obtained with "gcloud info") (GCE_ACCESS)

    b) Google Compute Engine Region, the default being us-east1-b (GCE_INITIAL_VMCS)

    c) The name of a user already existing on the VM images, which CBTOOL will use to log in to them (GCE_LOGIN)
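
  • A minimal sketch (the section name and all values are placeholders; GCE_ACCESS is the comma-separated pair of instance project and image project from item a):

        [USER-DEFINED : CLOUDOPTION_MYGCE]
        GCE_ACCESS = my-instance-project,my-image-project
        GCE_INITIAL_VMCS = us-east1-b
        GCE_LOGIN = cbuser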


DigitalOcean

TBD


Parallel Docker Manager

  • The CBTOOL Orchestrator Node (ON) requires network access to all Docker daemons running on each Host.

  • Each Docker daemon should be listening on a TCP port, which allows a remote client (CBTOOL) to establish communication and issue commands to it. This is not the default configuration for a newly installed Docker engine (by default, it listens only on /var/run/docker.sock), and thus a change in the daemon's startup options will most likely be required. Basically, you need to make use of the -H option. For instance, if your Docker daemon is managed by systemd (Ubuntu/CentOS/Fedora), you will have to change the file /lib/systemd/system/docker.service and make sure that the ExecStart parameter contains a string like -H tcp://0.0.0.0:2375 (see the sketch below).
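
  • A minimal sketch of that change, assuming a systemd drop-in override is used instead of editing the unit file directly (the path and port below are illustrative):

        # /etc/systemd/system/docker.service.d/override.conf
        [Service]
        ExecStart=
        ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375

    After saving the file, run sudo systemctl daemon-reload followed by sudo systemctl restart docker for the change to take effect.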

  • CBTOOL will attempt to SSH into the running Docker instances. This will require an image that has a running SSH daemon ready. Examples can be found on https://hub.docker.com/r/ibmcb/cbtoolbt-ubuntu/ and https://hub.docker.com/r/ibmcb/cbtoolbt-phusion/.

  • If multiple Docker hosts are used, make sure that the containers can communicate through the (Docker) overlay network (https://docs.docker.com/engine/userguide/networking/get-started-overlay/).

  • Pieces of information needed for your private configuration file (a sample snippet follows this list):

    a) Comma-separated connection URLs in the format tcp://<IP>:<PORT> (PDM_ACCESS)

    b) A "Docker Region", for now just set as "world" (PDM_INITIAL_VMCS)

    c) Docker network name (PDM_NETNAME)

    d) The name of a user already existing on the Docker images, which CBTOOL will use to log in to them (PDM_LOGIN)
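
  • A minimal sketch (the section name and all values are placeholders):

        [USER-DEFINED : CLOUDOPTION_MYPDM]
        PDM_ACCESS = tcp://10.0.0.1:2375,tcp://10.0.0.2:2375
        PDM_INITIAL_VMCS = world
        PDM_NETNAME = bridge
        PDM_LOGIN = cbuser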

  • IMPORTANT: By default, during the first execution, the CBTOOL "PDM" Cloud Adapter will try to pull pre-configured Docker images from https://hub.docker.com/r/ibmcb/


Parallel Container Manager

  • The CBTOOL Orchestrator Node (ON) requires network access to all LXD daemons running on each Host.

  • The CBTOOL ON will also require passwordless root access via SSH to all hosts running an LXD daemon. We fully agree that this is a far from acceptable situation, but until we have a "blessed" (by the LXD community) way of performing instance port mapping on the hosts, we have to rely on rinetd.

  • The package rinetd (sudo apt-get install rinetd/sudo yum install rinetd) has to be installed on all hosts running an LXD daemon.

  • CBTOOL will attempt to SSH into the running instances. This will require an image that has a running SSH daemon ready, such as ubuntu:16.04 or images:fedora/24 (e.g., execute sudo lxc image copy ubuntu:16.04 local: on each host), from https://us.images.linuxcontainers.org.

  • If multiple LXD hosts are used, make sure that the containers can communicate through some type of overlay network (useful example: https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/).

  • Pieces of information needed for your private configuration file (a sample snippet follows this list):

    a) Comma-separated connection URLs in the format https://<IP>:<PORT> (PCM_ACCESS)

    b) A password required to connect to each LXD host (PCM_CREDENTIALS)

    c) A "LXD Region", for now just set as "world" (PCM_INITIAL_VMCS)

    d) LXD network name, typically lxdbr0 (PCM_NETNAME)

    e) The name of a user already existing on the images, which CBTOOL will use to log in to them (PCM_LOGIN)
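
  • A minimal sketch (the section name and all values are placeholders; 8443 is LXD's usual HTTPS port):

        [USER-DEFINED : CLOUDOPTION_MYPCM]
        PCM_ACCESS = https://10.0.0.1:8443,https://10.0.0.2:8443
        PCM_CREDENTIALS = mylxdpassword
        PCM_INITIAL_VMCS = world
        PCM_NETNAME = lxdbr0
        PCM_LOGIN = ubuntu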


Kubernetes

  • The CBTOOL Orchestrator Node (ON) requires network access to all worker nodes on the Kubernetes cluster.

  • By default, the (Docker) images used by different Virtual Application types are pulled from the "ibmcb" repository on Docker Hub. This can be changed by altering the parameter IMAGE_PREFIX in the section [VM_DEFAULTS : KUB_CLOUDCONFIG] in your private configuration file.

  • Pieces of information needed for your private configuration file (https://github.com/ibmcb/cbtool/wiki/FAQ-S#wiki-sq2); a sample snippet follows this list:

    a) Path to your kubeconfig file, on the Orchestrator Node (KUB_ACCESS)

    b) Simply set the credentials to NOTUSED (KUB_CREDENTIALS)

    c) A "Region". For now just set as "world" (KUB_INITIAL_VMCS)

    d) A Kubernetes network name. For now, just set it as "default" (KUB_NETNAME)

    e) The name of a user already existing on the Docker images, which CBTOOL will use to log in to them (KUB_LOGIN). By default, the images from the "ibmcb" account on Docker Hub use "ubuntu".
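
  • A minimal sketch (the section name and the kubeconfig path are placeholders):

        [USER-DEFINED : CLOUDOPTION_MYKUB]
        KUB_ACCESS = /home/cbuser/.kube/config
        KUB_CREDENTIALS = NOTUSED
        KUB_INITIAL_VMCS = world
        KUB_NETNAME = default
        KUB_LOGIN = ubuntu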


Parallel Libvirt Manager

TBD


VMware vCloud Director

TBD


CloudStack

TBD


SoftLayer

  • The CBTOOL Orchestrator Node (ON) is supposed to have network access to both the SoftLayer API and to the instantiated VMs (once created), through their backend IP addresses.

  • Pieces of information needed for your private configuration file (a sample snippet follows this list):

    a) Username and API key (SLR_CREDENTIALS)

    b) SoftLayer DataCenter, the default being dal05 (SLR_INITIAL_VMCS)

    c) The name of a user already existing on the VM images, which CBTOOL will use to log in to them (SLR_LOGIN)
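
  • A minimal sketch (the section name and all values are placeholders; the exact way username and API key are combined in SLR_CREDENTIALS should be checked against the configuration template):

        [USER-DEFINED : CLOUDOPTION_MYSLR]
        # username and API key (placeholder format)
        SLR_CREDENTIALS = myuser-0123456789abcdefEXAMPLEAPIKEY
        SLR_INITIAL_VMCS = dal05
        SLR_LOGIN = cbuser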

Azure Service Management

  • The CBTOOL Orchestrator Node (ON) is supposed to have network access to both the Azure API and to the instantiated VMs (once created), through their private IP addresses.

  • Follow the instructions in the libcloud documentation to create a new certificate file and upload it through the Azure Portal.

  • Pieces of information needed for your private configuration file (a sample snippet follows this list):

    a) Subscription ID (get it from the Portal) and the path to the certificate file you just generated (ASM_CREDENTIALS)

    b) Azure Region, the default being "Central US" (ASM_INITIAL_VMCS)

    c) The name of a user already existing on the VM images, which CBTOOL will use to log in to them (ASM_LOGIN)
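
  • A minimal sketch (the section name and all values are placeholders; the exact way the subscription ID and the certificate path are combined in ASM_CREDENTIALS should be checked against the configuration template):

        [USER-DEFINED : CLOUDOPTION_MYASM]
        # subscription ID and certificate file path (placeholder format)
        ASM_CREDENTIALS = 01234567-89ab-cdef-0123-456789abcdef-~/cbtool/credentials/azure.pem
        ASM_INITIAL_VMCS = Central US
        ASM_LOGIN = cbuser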

