diff --git a/docs/gpt-in-a-box/kubernetes/v0.2/generating_mar.md b/docs/gpt-in-a-box/kubernetes/v0.2/generating_mar.md
index acd376c1..1e8ccd68 100644
--- a/docs/gpt-in-a-box/kubernetes/v0.2/generating_mar.md
+++ b/docs/gpt-in-a-box/kubernetes/v0.2/generating_mar.md
@@ -9,7 +9,7 @@ python3 $WORK_DIR/llm/generate.py [--hf_token --repo_ver
 * **model_name**: Name of a [validated model](validated_models.md)
 * **output**: Mount path to your nfs server to be used in the kube PV where model files and model archive file be stored
 * **repo_version**: Commit ID of model's HuggingFace repository (optional, if not provided default set in model_config will be used)
-* **hf_token**: Your HuggingFace token. Needed to download LLAMA(2) models.
+* **hf_token**: Your HuggingFace token. Needed to download LLAMA(2) models. (It can alternatively be set using the environment variable 'HF_TOKEN')
 
 ### Examples
 The following are example commands to generate the model archive file.
diff --git a/docs/gpt-in-a-box/vm/v0.3/generating_mar.md b/docs/gpt-in-a-box/vm/v0.3/generating_mar.md
index 1b9925a6..a1b6f495 100644
--- a/docs/gpt-in-a-box/vm/v0.3/generating_mar.md
+++ b/docs/gpt-in-a-box/vm/v0.3/generating_mar.md
@@ -17,7 +17,7 @@ Where the arguments are :
 - **model_path**: Absolute path of model files (should be empty if downloading)
 - **mar_output**: Absolute path of export of MAR file (.mar)
 - **skip_download**: Flag to skip downloading the model files
-- **hf_token**: Your HuggingFace token. Needed to download and verify LLAMA(2) models.
+- **hf_token**: Your HuggingFace token. Needed to download and verify LLAMA(2) models. (It can alternatively be set using the environment variable 'HF_TOKEN')
 
 ## Examples
 The following are example commands to generate the model archive file.
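The behaviour the added lines document — accepting the token via `--hf_token` with a fallback to the `HF_TOKEN` environment variable — can be sketched as follows. This is a hypothetical helper illustrating the precedence, not the actual code in `generate.py`:

```python
import os

def resolve_hf_token(cli_token=None):
    # Hypothetical helper: prefer an explicit --hf_token value from the
    # command line, otherwise fall back to the HF_TOKEN environment
    # variable; returns None if neither is set.
    return cli_token or os.environ.get("HF_TOKEN")
```

Under this sketch, `resolve_hf_token("abc")` returns the CLI value even when `HF_TOKEN` is set, and `resolve_hf_token()` picks up the environment variable.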