From 61b0eae2b8c3bd6ed7548f9d7069d3fd878ddb41 Mon Sep 17 00:00:00 2001
From: "Yang Yang(Tony)" <29932814+tonyyang-svail@users.noreply.github.com>
Date: Tue, 29 Oct 2019 20:33:22 -0700
Subject: [PATCH] [Doc] Submit Argo Workflow from SQLFlow Container (#1079)

* [Doc] Submit Argo Jobs from SQLFlow Container

* polish

* Update argo-setup.md

* Update argo-setup.md

* Update argo-setup.md

* remove argo in Docker image
---
 doc/argo-setup.md | 62 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)
 create mode 100644 doc/argo-setup.md

diff --git a/doc/argo-setup.md b/doc/argo-setup.md
new file mode 100644
index 0000000000..d2492ef1ba
--- /dev/null
+++ b/doc/argo-setup.md
@@ -0,0 +1,62 @@
+# Submit Argo Workflow from SQLFlow Container
+
+In this document, we explain how to submit jobs from a SQLFlow server container to a Kubernetes cluster. We use Minikube on a Mac, but you can use Kubernetes clusters on public cloud services as well. The jobs we submit in this document are Argo workflows.
+
+Please be aware that, in practice, the SQLFlow server might be running on the Kubernetes cluster itself and executing its jobs as Argo workflows, not in a separate container as in this tutorial. In that case, it is the submitter program running in the SQLFlow server container that submits Argo workflows by calling the Kubernetes API rather than the `argo` command. Calling Kubernetes APIs from a container running on the Kubernetes cluster is known as **Kubernetes-native**. For how to implement Kubernetes-native calls, please refer to the ElasticDL master program as an example.
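+
+For illustration only (this is a hedged sketch, not the actual SQLFlow or ElasticDL implementation), a Kubernetes-native submission might look like the following Go program. It assumes the program runs in a pod on the cluster, uses the `k8s.io/client-go` dynamic client to create a Workflow custom resource, and roughly mirrors the hello-world example referenced later in this document. Note that newer client-go releases also take a `context.Context` as the first argument of `Create`.
+
+```go
+package main
+
+import (
+    "log"
+
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+    "k8s.io/apimachinery/pkg/runtime/schema"
+    "k8s.io/client-go/dynamic"
+    "k8s.io/client-go/rest"
+)
+
+func main() {
+    // Read the service account token and API server address that Kubernetes
+    // mounts into every pod; this is what makes the call Kubernetes-native.
+    config, err := rest.InClusterConfig()
+    if err != nil {
+        log.Fatal(err)
+    }
+    client, err := dynamic.NewForConfig(config)
+    if err != nil {
+        log.Fatal(err)
+    }
+    // Argo Workflows are custom resources in the argoproj.io/v1alpha1 group.
+    gvr := schema.GroupVersionResource{
+        Group: "argoproj.io", Version: "v1alpha1", Resource: "workflows",
+    }
+    // A trivial one-step workflow, roughly mirroring Argo's hello-world example.
+    wf := &unstructured.Unstructured{Object: map[string]interface{}{
+        "apiVersion": "argoproj.io/v1alpha1",
+        "kind":       "Workflow",
+        "metadata":   map[string]interface{}{"generateName": "hello-world-"},
+        "spec": map[string]interface{}{
+            "entrypoint": "whalesay",
+            "templates": []interface{}{
+                map[string]interface{}{
+                    "name": "whalesay",
+                    "container": map[string]interface{}{
+                        "image":   "docker/whalesay",
+                        "command": []interface{}{"cowsay"},
+                        "args":    []interface{}{"hello world"},
+                    },
+                },
+            },
+        },
+    }}
+    // The programmatic counterpart of `argo submit` in the default namespace.
+    _, err = client.Resource(gvr).Namespace("default").Create(wf, metav1.CreateOptions{})
+    if err != nil {
+        log.Fatal(err)
+    }
+}
+```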
+
+## On the Mac
+
+1. Install [Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/).
+1. Start Minikube.
+   ```bash
+   minikube start --cpus 2 --memory 4000
+   ```
+1. Start a SQLFlow Docker container.
+   ```bash
+   docker run --rm --net=host -it -v $GOPATH:/go -v $HOME:/root -w /go/src/sqlflow.org/sqlflow sqlflow:latest bash
+   ```
+   We use `-v $HOME:/root` to mount the home directory on the host, `$HOME`, to the home directory in the container, `/root`, so that we can access the Minikube cluster configuration files in `$HOME/.kube/` from within the container.
+
+## In the SQLFlow Container
+
+1. Finish sharing `$HOME/.kube/`. The credentials in `$HOME/.kube/config` are referred to by absolute paths, e.g., `certificate-authority: /Users/yang.y/.minikube/ca.crt`, so we need to create a symbolic link mapping `/Users/yang.y` to `/root`. Please substitute `yang.y` with your user name and type the following command.
+   ```
+   mkdir /Users && ln -s /root /Users/yang.y
+   ```
+1. Verify that you have access to the Minikube cluster by typing the following command in the container.
+   ```
+   $ kubectl get namespaces
+   NAME              STATUS   AGE
+   default           Active   23h
+   kube-node-lease   Active   23h
+   kube-public       Active   23h
+   kube-system       Active   23h
+   ```
+1. Install the Argo controller and UI.
+   ```bash
+   kubectl create namespace argo
+   kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo/stable/manifests/install.yaml
+   ```
+1. Grant admin privileges to the `default` service account in the `default` namespace, so that the service account can run workflows.
+   ```
+   kubectl create rolebinding default-admin --clusterrole=admin --serviceaccount=default:default
+   ```
+1. Install the Argo command-line tool. Skip this step if it is already installed.
+   ```
+   curl -sSL -o /usr/local/bin/argo https://github.com/argoproj/argo/releases/download/v2.3.0/argo-linux-amd64
+   chmod +x /usr/local/bin/argo
+   ```
+1. Run the example workflows. (A Kubernetes-native counterpart to `argo get` is sketched at the end of this document.)
+   ```
+   argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml
+   argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/coinflip.yaml
+   argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/loops-maps.yaml
+   argo list
+   argo get xxx-workflow-name-xxx
+   argo logs xxx-pod-name-xxx # from the get command above
+   ```
+
+## Appendix
+
+1. Argo official demo: https://github.com/argoproj/argo/blob/master/demo.md
+1. Minikube installation guide: https://kubernetes.io/docs/tasks/tools/install-minikube/
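+
+## Sketch: a Kubernetes-native counterpart to `argo get`
+
+As mentioned in the workflow example step above, a submitter program running on the cluster could also read a workflow's status through the Kubernetes API instead of running `argo get`. The following Go sketch is an assumption for illustration, not part of SQLFlow: `workflowPhase` is a hypothetical helper that reads a Workflow's `status.phase` through the `k8s.io/client-go` dynamic client, and newer client-go releases also take a `context.Context` as the first argument of `Get`.
+
+```go
+package main
+
+import (
+    "fmt"
+    "log"
+
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+    "k8s.io/apimachinery/pkg/runtime/schema"
+    "k8s.io/client-go/dynamic"
+    "k8s.io/client-go/rest"
+)
+
+// workflowPhase is a hypothetical helper that returns the status.phase field
+// of an Argo Workflow, e.g. "Running" or "Succeeded"; an empty string means
+// the controller has not reported a phase yet.
+func workflowPhase(client dynamic.Interface, namespace, name string) (string, error) {
+    gvr := schema.GroupVersionResource{
+        Group: "argoproj.io", Version: "v1alpha1", Resource: "workflows",
+    }
+    wf, err := client.Resource(gvr).Namespace(namespace).Get(name, metav1.GetOptions{})
+    if err != nil {
+        return "", err
+    }
+    phase, _, err := unstructured.NestedString(wf.Object, "status", "phase")
+    return phase, err
+}
+
+func main() {
+    config, err := rest.InClusterConfig()
+    if err != nil {
+        log.Fatal(err)
+    }
+    client, err := dynamic.NewForConfig(config)
+    if err != nil {
+        log.Fatal(err)
+    }
+    // Replace the placeholder with a real workflow name from `argo list`.
+    phase, err := workflowPhase(client, "default", "xxx-workflow-name-xxx")
+    if err != nil {
+        log.Fatal(err)
+    }
+    fmt.Println(phase)
+}
+```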