[Doc] Submit Argo Workflow from SQLFlow Container (#1079)
* [Doc] Submit Argo Jobs from SQLFlow Container

* polish

* Update argo-setup.md

* Update argo-setup.md

* Update argo-setup.md

* remove argo in Docker image
tonyyang-svail authored and Yancey1989 committed Oct 30, 2019
1 parent 26d63fc commit 61b0eae
# Submit Argo Workflow from SQLFlow Container

In this document, we explain how to submit jobs from a SQLFlow server container to a Kubernetes cluster. We use Minikube on a Mac, but you can use Kubernetes clusters on public cloud services as well. The jobs we submit in this document are Argo workflows.

Please be aware that, in practice, the SQLFlow server usually runs as a container on the Kubernetes cluster itself, not as a separate, standalone container. In that setting, it is the submitter program running in the SQLFlow server container that submits Argo workflows, by calling the Kubernetes API rather than the `argo` command. Calling Kubernetes APIs from a container running on the Kubernetes cluster is known as **Kubernetes-native**. For how to implement Kubernetes-native calls, please refer to the ElasticDL master program as an example.
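As a minimal sketch of what a Kubernetes-native call amounts to: an Argo `Workflow` is a Kubernetes custom resource, so listing or submitting workflows is an ordinary REST call against the API server, authenticated with the service-account token that Kubernetes mounts into every pod. The address and paths below are the standard in-cluster ones; the `curl` invocation is shown as a comment because it only works from inside a pod.

```shell
# Inside a pod, the API server is reachable at a well-known address, and
# Kubernetes mounts service-account credentials at a standard path.
APISERVER="https://kubernetes.default.svc"
SA_DIR="/var/run/secrets/kubernetes.io/serviceaccount"

# Argo Workflows live under the argoproj.io/v1alpha1 API group, so listing
# them is a plain REST call:
URL="$APISERVER/apis/argoproj.io/v1alpha1/namespaces/default/workflows"
echo "$URL"

# From inside a pod, authenticate with the mounted token:
#   curl --cacert "$SA_DIR/ca.crt" \
#        -H "Authorization: Bearer $(cat $SA_DIR/token)" "$URL"
```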

## On the Mac

1. Install [Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/).
1. Start Minikube
```bash
minikube start --cpus 2 --memory 4000
```
1. Start a SQLFlow Docker container.
```bash
docker run --rm --net=host -it -v $GOPATH:/go -v $HOME:/root -w /go/src/sqlflow.org/sqlflow sqlflow:latest bash
```
We use `-v $HOME:/root` to mount the home directory on the host, `$HOME`, to the home directory in the container, `/root`, so we can access the Minikube cluster configuration files in `$HOME/.kube/` from within the container.

## In the SQLFlow Container

1. One more step is needed to share `$HOME/.kube/`. The credentials in `$HOME/.kube/config` are referred to by absolute paths, e.g. `certificate-authority: /Users/yang.y/.minikube/ca.crt`, so we need to create a symbolic link mapping `/Users/yang.y` to `/root`. Please substitute `yang.y` with your user name and run the following command.
   ```bash
mkdir /Users && ln -s /root /Users/yang.y
```
1. Verify you have access to the Minikube cluster by typing the following command in the container.
```
$ kubectl get namespaces
NAME STATUS AGE
default Active 23h
kube-node-lease Active 23h
kube-public Active 23h
kube-system Active 23h
```
1. Install the Argo controller and UI.
```bash
kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo/stable/manifests/install.yaml
```
1. Grant admin privileges to the `default` service account in the namespace `default`, so that the service account can run workflows.
   ```bash
kubectl create rolebinding default-admin --clusterrole=admin --serviceaccount=default:default
```
1. Install the Argo CLI. Skip this step if it is already installed.
   ```bash
curl -sSL -o /usr/local/bin/argo https://github.com/argoproj/argo/releases/download/v2.3.0/argo-linux-amd64
chmod +x /usr/local/bin/argo
```
1. Run some example workflows.
   ```bash
argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml
argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/coinflip.yaml
argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/loops-maps.yaml
argo list
argo get xxx-workflow-name-xxx
argo logs xxx-pod-name-xxx # from the get command above
```
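For reference, the `hello-world` example submitted above is essentially a `Workflow` custom resource along the following lines. This is an illustrative fragment, not the authoritative file; see the linked examples in the Argo repository for the canonical version.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # Argo appends a random suffix to each run
spec:
  entrypoint: whalesay         # the template to run first
  templates:
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["hello world"]
```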

## Appendix

1. Argo official demo: https://github.com/argoproj/argo/blob/master/demo.md
1. Minikube installation guide: https://kubernetes.io/docs/tasks/tools/install-minikube/
