API reference #223
-
You da man!! Thank you so much for this. I feel like I've been reverse-engineering this from the example notebooks and source code. Extremely helpful. Hope this gets added to the Guidance documentation yesterday.
-
Thank you for this! Please keep it updated. It really should be added to the documentation: it's easy to generate and add, but extremely helpful for those using the module. Low cost, high value!
-
SO much in here that is not in the examples, and it's so much clearer than trying to reverse-engineer the examples into an API or dig into the source. If you want this library to catch on, please add API documentation!
-
Hey, isn't this the same as https://guidance.readthedocs.io/en/latest/api.html#library ? |
-
For convenience, I extracted the docstrings from `guidance/library/*.py` and formatted them as Markdown.

EDIT: this may be a clone of https://guidance.readthedocs.io/en/latest/api.html#library
## Guidance API reference

`add` · `assistant` · `await` · `block` · `break` · `contains` · `each` · `equal` · `gen` · `geneach` · `greater` · `if` · `less` · `parse` · `role` · `select` · `set` · `shell` · `strip` · `subtract` · `system` · `user`
### add

Add the given variables together.
### assistant

A chat role block for the `'assistant'` role. This is just a shorthand for `{{#role 'assistant'}}...{{/role}}`.

**Parameters**

- `hidden: bool`: Whether to include the assistant block in future LLM context.
### await

Awaits a variable by returning its value and then deleting it.

Note that this is useful for repeatedly getting values, since programs will pause when they need a value that is not yet set. This means that putting `await` in a loop will create a stateful "agent" that can repeatedly await values when called multiple times.

**Parameters**

- `name: str`: The name of the variable to await.
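The agent pattern described above can be sketched as follows (a hedged illustration in the handlebars-style syntax used throughout this reference; the variable name `input` is a placeholder):

```handlebars
{{#geneach 'conversation' stop=False}}
{{#user}}{{set 'this.input' (await 'input')}}{{/user}}
{{#assistant}}{{gen 'this.response'}}{{/assistant}}
{{/geneach}}
```

Each call that supplies a new `input` consumes the value, generates a response, and then pauses again at the next `await`.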
### block

Generic block-level element. This is useful for naming or hiding blocks of content.

**Parameters**

- `name: str`: The name of the block. A variable with this name will be set with the generated block content.
- `hidden: bool`: Whether to include the generated block content in future LLM context.
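For illustration, a named hidden block might look like this (an assumption based on the parameters above, not taken from the source; the block name `scratch` is a placeholder):

```handlebars
{{#block name='scratch' hidden=True}}
Think step by step: {{gen 'thoughts'}}
{{/block}}
The answer is: {{gen 'answer'}}
```

The generated reasoning is saved in the `scratch` variable but excluded from the context used to generate `answer`.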
### break

Breaks out of the current loop.

This is useful for breaking out of a `geneach` loop early; typically this is used inside an `{{#if ...}}...{{/if}}` block.

### contains

Check if a string contains a substring.
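A sketch combining `break` and `contains` inside a loop (hypothetical template; the sentinel string `DONE` is a placeholder):

```handlebars
{{#geneach 'steps' max_iterations=10}}
{{gen 'this'}}
{{#if (contains this 'DONE')}}{{break}}{{/if}}
{{/geneach}}
```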
### each

Iterate over a list and execute a block for each item.

**Parameters**

- `list: iterable`: The list to iterate over. Inside the block each element will be available as `this`.
- `hidden: bool`: Whether to include the generated item blocks in future LLM context.
- `parallel: bool`: If this is `True` then we generate all the items in the list in parallel. Note that this is only compatible with `hidden=True`. When `parallel=True` you can no longer raise a `StopIteration` exception to stop the loop at a specific step (since the steps can be run in parallel, in any order).
### equal

Check that all arguments are equal.
### gen

Use the LLM to generate a completion.

**Parameters**

- `name: str or None`: The name of a variable to store the generated value in. If `None`, the value is just returned.
- `stop: str`: The stop string to use for stopping generation. If not provided, the next node's text will be used if that text matches a closing quote, XML tag, or role end. Note that the stop string is not included in the generated value.
- `stop_regex: str`: A regular expression to use for stopping generation. If not provided, the stop string will be used.
- `save_stop_text: str or bool`: If set to a string, the exact stop text used will be saved in a variable with the given name. If set to `True`, the stop text will be saved in a variable named `name+"_stop_text"`. If set to `False`, the stop text will not be saved.
- `max_tokens: int`: The maximum number of tokens to generate in this completion.
- `n: int`: The number of completions to generate. If you generate more than one completion, the variable will be set to a list of generated values. Only the first completion will be used for future context for the LLM, so you may often want to use `hidden=True` when using `n > 1`.
- `temperature: float`: The temperature to use for generation. A higher temperature will result in more random completions. Note that caching is always on for `temperature=0`, and is seed-based for other temperatures.
- `top_p: float`: The `top_p` value to use for generation. A higher `top_p` will result in more random completions.
- `logprobs: int or None`: If set to an integer, the LLM will return that number of top log probabilities for the generated tokens, stored in a variable named `name+"_logprobs"`. If set to `None`, the log probabilities will not be returned.
- `pattern: str or None`: A regular expression pattern guide to use for generation. If set, the LLM will be forced (through guided decoding) to only generate completions that match the regular expression.
- `hidden: bool`: Whether to hide the generated value from future LLM context. This is useful for generating completions that you just want to save in a variable and not use for future context.
- `list_append: bool`: Whether to append the generated value to a list stored in the variable. If set to `True`, the variable must be a list, and the generated value will be appended to it.
- `save_prompt: str or bool`: If set to a string, the exact prompt given to the LLM will be saved in a variable with the given name.
- `token_healing: bool or None`: If set to a bool, this overrides the `token_healing` setting for the LLM.
- `**llm_kwargs`: Any other keyword arguments will be passed to the LLM call method. This can be useful for setting LLM-specific parameters like `repetition_penalty` for Transformers models or `suffix` for some OpenAI models.
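A minimal sketch of `gen` with a stop string and a `pattern` guide (illustrative values, assuming the handlebars-style syntax used throughout this reference):

```handlebars
Name: {{gen 'name' stop='\n' temperature=0.7 max_tokens=20}}
Age: {{gen 'age' pattern='[0-9]+' stop=','}}
```

Here `name` is cut off at the first newline, while `age` is constrained by guided decoding to digits only.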
### geneach

Generate a potentially variable-length list of items using the LLM.

**Parameters**

- `list_name: str`: The name of the variable to save the generated list to.
- `stop: str or list of str`: A string or list of strings that will stop the generation of the list. For example, if `stop="</ul>"` then the list will be generated until the first `"</ul>"` is generated.
- `max_iterations: int`: The maximum number of items to generate.
- `min_iterations: int`: The minimum number of items to generate.
- `num_iterations: int`: The exact number of items to generate (this overrides `max_iterations` and `min_iterations`).
- `hidden: bool`: If `True`, the generated list items will not be added to the LLM's input context. This means that each item will be generated independently of the others. Note that if you use `hidden=True` you must also set `num_iterations` to a fixed number (since without adding items to the context there is no way for the LLM to know when to stop on its own).
- `join: str`: A string to join the generated items with.
- `single_call: bool`: This option is designed to make list generation more convenient for LLMs that don't support guidance acceleration. If `True`, the LLM will be called once to generate the entire list. This only works if the LLM has already been prompted to generate content that matches the format of the list. After the single call, the generated list variables will be parsed out of the generated text using a regex. (Note that only basic template tags are supported in the list items when using `single_call=True`.)
- `single_call_temperature: float`: Only used with `single_call=True`. The temperature to use when generating the list items in a single call.
- `single_call_max_tokens: int`: Only used with `single_call=True`. The maximum number of tokens to generate when generating the list items.
- `single_call_top_p: float`: Only used with `single_call=True`. The `top_p` to use when generating the list items in a single call.
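The `stop="</ul>"` case from the parameters above might look like this as a template (a hedged sketch, not taken from the source):

```handlebars
<ul>
{{#geneach 'items' stop='</ul>'}}<li>{{gen 'this'}}</li>
{{/geneach}}</ul>
```

The model keeps producing `<li>` items until it emits the closing `</ul>` tag.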
### greater

Check if `arg1` is greater than `arg2`. Note that this can also be called using `>` as well as `greater`.

### if

Standard if/else statement.

**Parameters**

- `value: bool`: The value to check. If `True` then the first block will be executed, otherwise the second block (the one after the `{{else}}`) will be executed.
- `invert: bool`: If `True` then the value will be inverted before checking.

### less

Check if `arg1` is less than `arg2`. Note that this can also be called using `<` as well as `less`.

### parse

Parse a string as a guidance program. This is useful for dynamically generating and then running guidance programs (or parts of programs).

**Parameters**

- `string: str`: The string to parse.
- `name: str`: The name of the variable to set with the generated content.
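A small sketch of `if` combined with the comparison helpers (hypothetical; `score` is a placeholder variable):

```handlebars
{{#if (greater score 7)}}High score!{{else}}Keep trying.{{/if}}
```

Since `greater` and `less` can also be written as `>` and `<`, `(> score 7)` should be equivalent.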
### role

A chat role block.
### select

Select a value from a list of choices.

**Parameters**

- `variable_name: str`: The name of the variable to set with the selected value.
- `options: list of str or None`: An optional list of options to select from. This argument is only used when `select` is used in non-block mode.
- `logprobs: str or None`: An optional variable name to set with the logprobs for each option. If this is set, the log probs of every option are fully evaluated. When this is `None` (the default) we use a greedy max approach to select the option (similar to how greedy decoding works in a language model). So in some cases the selected option can change when `logprobs` is set, since it will be more like an exhaustive beam-search scoring than a greedy max scoring.
- `list_append: bool`: Whether to append the generated value to a list stored in the variable. If set to `True`, the variable must be a list, and the generated value will be appended to it.
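In block mode, `select` lists its choices inline separated by `{{or}}` (a sketch following the library's published examples; the prompt text is a placeholder):

```handlebars
Is this email spam?{{#select 'answer' logprobs='answer_logprobs'}} Yes{{or}} No{{/select}}
```

With `logprobs` set, both options are scored exhaustively rather than chosen by greedy decoding, as described above.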
### set

Set the value of a variable or set of variables.

**Parameters**

- `name: str or dict`: If a string, the name of the variable to set. If a dict, the keys are the variable names and the values are the values to set.
- `value: str, optional`: The value to set the variable to. Only used if `name` is a string.
- `hidden: bool, optional`: If `True`, the variable will be set but not printed in the output.

### shell

Send a command to the shell and return the output.
### strip

Strip whitespace from the beginning and end of the given string.

**Parameters**

- `string: str`: The string to strip.
### subtract

Subtract the second variable from the first.

**Parameters**

- `minuend: int or float`: The number to subtract from.
- `subtrahend: int or float`: The number to subtract.
### system

A chat role block for the `'system'` role. This is just a shorthand for `{{#role 'system'}}...{{/role}}`.

**Parameters**

- `hidden: bool`: Whether to include the system block in future LLM context.
### user

A chat role block for the `'user'` role. This is just a shorthand for `{{#role 'user'}}...{{/role}}`.

**Parameters**

- `hidden: bool`: Whether to include the user block in future LLM context.
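Putting the three role shorthands together gives the usual chat pattern (a hedged sketch of the pattern in the library's chat examples; the prompt text and the `user_question` variable are placeholders):

```handlebars
{{#system}}You are a helpful assistant.{{/system}}
{{#user}}{{user_question}}{{/user}}
{{#assistant}}{{gen 'answer' max_tokens=200}}{{/assistant}}
```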