From 9ef0c21d8c1228355f07e41b2f3edcfbe919e918 Mon Sep 17 00:00:00 2001
From: Jeffrey Ip
Date: Fri, 22 Nov 2024 17:55:58 +0800
Subject: [PATCH] updated docs

---
 docs/docs/confident-ai-hyperparameters-prompt-versioning.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/confident-ai-hyperparameters-prompt-versioning.mdx b/docs/docs/confident-ai-hyperparameters-prompt-versioning.mdx
index 8ec1630a..3d9f5686 100644
--- a/docs/docs/confident-ai-hyperparameters-prompt-versioning.mdx
+++ b/docs/docs/confident-ai-hyperparameters-prompt-versioning.mdx
@@ -16,7 +16,7 @@ As a quick overview, here's the typical prompt versioning workflow you'll adopt
 1. Create a prompt on Confident AI in **Hyperparameters** > **Prompts**.
 2. Pull your prompt template from Confident AI, like how you would pull a GitHub repo.
-3. Save the prompt in memeory, and interpolate it dynamically with variables each time you need to feed it to your LLM application.
+3. Save the prompt in memory, and interpolate it dynamically with variables each time you need to feed it to your LLM application.

 Let's assume we've already created a prompt on Confident AI with the alias `"My First Prompt"` and created a first version for it: