Note: this page is going to be extended.
If you need to test your Knative images in a stock Knative environment, use the following commands to set up the environment.
git clone https://github.com/ease-lab/vhive
cd vhive
./scripts/cloudlab/setup_node.sh stock-only
# start containerd (this blocks the terminal; run it in a separate terminal or in the background)
sudo containerd
./scripts/cluster/create_one_node_cluster.sh stock-only
# wait for the containers to boot up using
watch kubectl get pods -A
# once all the containers are ready/complete, you may start Knative functions
kn service apply
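# a minimal sketch of the step above (the service name helloworld is hypothetical;
# gcr.io/knative-samples/helloworld-go is a public sample image, not part of vHive):
kn service apply helloworld --image gcr.io/knative-samples/helloworld-go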
# to clean up the cluster when done:
./scripts/github_runner/clean_cri_runner.sh stock-only
We also offer self-hosted stock-Knative environments powered by KinD. To use them, follow the instructions below:
- Set jobs.<job_id>.runs-on to stock-knative.
- For your GitHub workflow, define the TMPDIR environment variable in your manifest:
  env:
    TMPDIR: /root/tmp
- As the first step of all jobs, "create TMPDIR if not exists":
  jobs:
    my-job:
      name: My Job
      runs-on: [stock-knative]
      steps:
        - name: Setup TMPDIR
          run: mkdir -p $TMPDIR
- Make sure to clean up and wait for it to finish! This varies for each workload, but below are some examples (a consolidated sketch follows this list):
  jobs:
    my-job:
      name: My Job
      runs-on: [stock-knative]
      steps:
        # ...
        - name: Cleaning
          if: ${{ always() }}
          run: |
            # ...
- If you have used kubectl apply -f ..., then use kubectl delete -f ...
- If you have used kn service apply, then use kn service delete -f ... --wait
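As a sketch, the run block of the Cleaning step could combine the commands above; the service name my-function and the manifest path config/service.yaml are hypothetical placeholders:

# delete Knative services created by the job and wait for termination
kn service delete my-function --wait
# delete any resources created with kubectl and wait for their removal
kubectl delete -f config/service.yaml --ignore-not-found --wait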
You can use the vhiveease/vhive_dev_env image to build, test, and develop vHive inside a kind container. This image is preconfigured to run a single-node Kubernetes cluster inside a container and contains the packages needed to set up vHive on top of it.
# Set up the host (the same script as for the self-hosted GitHub CI runner)
./scripts/github_runner/setup_runner_host.sh
# pull latest image
docker pull vhiveease/vhive_dev_env
# Start a container
kind create cluster --image vhiveease/vhive_dev_env
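# optionally, give the cluster an explicit name (vhive-dev is a hypothetical example);
# the same name can later be passed to kind delete cluster --name ...
kind create cluster --image vhiveease/vhive_dev_env --name vhive-dev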
Before running a cluster, one might need to install additional tools, e.g., Golang, and check out the vHive repository manually.
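The name of the kind node container is needed for the next step; kind typically names it <cluster name>-control-plane (kind-control-plane for the default cluster name), and the running containers can be listed with:

docker ps --filter name=control-plane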
# Enter the container
docker exec -it <container name> bash
# Inside the container, create a single-node cluster
./scripts/cluster/create_one_node_cluster.sh [stock-only]
Notes:
When running a vHive or stock Knative cluster inside a kind container, one should not run the setup scripts; instead, start the daemon(s) and create the cluster right away.
Currently, with Firecracker, only a single-node cluster is supported (issue raised). Running a multi-node cluster with stock Knative should work but is not tested.
# list all kind clusters
kind get clusters
# delete a cluster
kind delete cluster --name <name>
- vHive supports both the baseline Firecracker snapshots and our advanced Record-and-Prefetch (REAP) snapshots.
- vHive integrates with Kubernetes and Knative via its built-in CRI support. Currently, only Knative Serving is supported.
- vHive supports an arbitrary distributed setup of a serverless cluster.
- vHive supports arbitrary functions deployed with OCI (Docker images).
- vHive has robust Continuous Integration, and our team is committed to delivering high-quality code.
# create a folder in the local storage (on <MINIO_NODE_NAME> that is one of the Kubernetes nodes)
sudo mkdir -p <MINIO_PATH>
cd ./configs/storage/minio
# create a persistent volume (PV) and the corresponding PV claim
# specify the node name that would host the MinIO objects
# (use `hostname` command for the local node)
MINIO_NODE_NAME=<MINIO_NODE_NAME> MINIO_PATH=<MINIO_PATH> envsubst < pv.yaml | kubectl apply -f -
kubectl apply -f pv-claim.yaml
# create a storage app and the corresponding service
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
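To verify that the storage came up, one can query the objects created above (the names below match the cleanup commands that follow and assume the default namespace):

kubectl get pv minio-pv
kubectl get pvc minio-pv-claim
kubectl get deployment minio-deployment
kubectl get svc minio-service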
# to delete the MinIO deployment and the corresponding resources:
kubectl delete deployment minio-deployment
kubectl delete pvc minio-pv-claim
kubectl delete svc minio-service
kubectl delete pv minio-pv
Note that the files in the bucket persist in the local filesystem after the persistent volume is removed.
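If those leftover files are not needed, the folder created earlier can be removed manually on <MINIO_NODE_NAME>, assuming nothing else uses that path:

sudo rm -rf <MINIO_PATH>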
Currently, vHive supports two modes of operation that enable different types of performance analysis:
- Distributed setup. Allows analysis of end-to-end performance based on the statistics provided by the invoker client.
- Single-node setup. A test integrated with the vHive-CRI orchestrator via a programmatic interface, which allows analyzing the latency breakdown of boot-based and snapshot cold starts using detailed latency and memory-footprint metrics.
Knative function call requests can now be traced and visualized using Zipkin. Zipkin is a distributed tracing system featuring easy collection and lookup of tracing data. Check out this quickstart guide.
- Once the Zipkin container is running, start the dashboard using istioctl dashboard zipkin.
- To access requests remotely, run ssh -L 9411:127.0.0.1:9411 <Host_IP> for port forwarding.
- Go to your browser and enter localhost:9411 for the dashboard.
- vHive uses Firecracker-Containerd binaries that are built from the user_page_faults branch of our fork of the upstream repository. Currently, we are in the process of upstreaming VM snapshot support.
- The current Firecracker version is 0.21.0. We plan to keep our code loosely up to date with the upstream Firecracker repository.
- vHive uses a fork of kind to speed up setting up testing environments that require Kubernetes.