From 3e123d13121020201c80ad00eefb5d4fbcc0fc2b Mon Sep 17 00:00:00 2001
From: Laura Jordana
Date: Sun, 7 Jul 2024 23:20:34 -0700
Subject: [PATCH] remove HF Token from run.sh command (#62)

* HF token no longer needed in run.sh
---
 docs/gpt-in-a-box/kubernetes/v0.2/inference_server.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/gpt-in-a-box/kubernetes/v0.2/inference_server.md b/docs/gpt-in-a-box/kubernetes/v0.2/inference_server.md
index 85dfce72..58cb9b06 100644
--- a/docs/gpt-in-a-box/kubernetes/v0.2/inference_server.md
+++ b/docs/gpt-in-a-box/kubernetes/v0.2/inference_server.md
@@ -2,7 +2,7 @@
 Run the following command for starting Kubeflow serving and running inference on the given input:
 ```
-bash $WORK_DIR/llm/run.sh -n -g -f -m -e [OPTIONAL -d -v -t ]
+bash $WORK_DIR/llm/run.sh -n -g -f -m -e [OPTIONAL -d -v ]
 ```
 * **n**: Name of a [validated model](validated_models.md)
@@ -12,7 +12,6 @@ bash $WORK_DIR/llm/run.sh -n -g -f
+bash $WORK_DIR/llm/run.sh -n llama2_7b -d data/summarize -g 1 -e llm-deploy -f '1.1.1.1:/llm' -m /mnt/llm
 ```
 ### Cleanup Inference deployment