This function takes a single string or a tidyprompt() object and adds a new prompt wrap to it. A prompt wrap is a set of functions that modify the prompt text, extract a value from the LLM response, and validate the extracted value. These functions are used to ensure that the prompt and the LLM response are in the correct format and meet the specified criteria.
Usage
prompt_wrap(
  prompt,
  modify_fn = NULL,
  extraction_fn = NULL,
  validation_fn = NULL,
  type = c("unspecified", "mode", "tool", "break")
)
Arguments
- prompt
A single string or a tidyprompt() object

- modify_fn
A function that takes the previous prompt text (as first argument) and returns the new prompt text

- extraction_fn
A function that takes the LLM response (as first argument) and attempts to extract a value from it. Upon successful extraction, the function should return the extracted value. If the extraction fails, the function should return an llm_feedback() message, which will be sent back to the LLM to initiate a retry. A special object, llm_break(), can be returned to break the extraction and validation loop (a sketch follows this list)

- validation_fn
A function that takes the (extracted) LLM response (as first argument) and attempts to validate it. Upon successful validation, the function should return TRUE. If the validation fails, the function should return an llm_feedback() message, which will be sent back to the LLM. A special object, llm_break(), can be returned to break the extraction and validation loop

- type
The type of prompt wrap; one of 'unspecified', 'mode', 'tool', or 'break'. Types are used to determine the order in which prompt wraps are applied. When constructing the prompt text, prompt wraps are applied to the base prompt in the following order: 'unspecified', 'break', 'mode', 'tool'. When evaluating the LLM response and applying extraction and validation functions, prompt wraps are applied in the reverse order: 'tool', 'mode', 'break', 'unspecified'. Order among wraps of the same type is preserved in the order they were added to the prompt (an ordering sketch appears in the Examples section). An example of a tool is add_tools(); an example of a mode is answer_by_react(); an example of a break is quit_if(). Most other prompt wraps will be 'unspecified', like answer_as_regex() or add_text()
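
As a quick illustration of how these functions work together, below is a minimal sketch of a custom prompt wrap (the question, instruction, and feedback messages are made up for illustration): the extraction function cleans up the raw LLM response, and the validation function returns an llm_feedback() message to trigger a retry when its check fails.

prompt <- "What is the capital of France?" |>
  prompt_wrap(
    modify_fn = function(base_prompt) {
      paste(base_prompt, "Reply with only the name, in a single word.", sep = "\n\n")
    },
    extraction_fn = function(response) {
      extracted <- trimws(response)
      if (nchar(extracted) == 0) {
        return(llm_feedback("You must reply with a non-empty answer."))
      }
      extracted
    },
    validation_fn = function(x) {
      if (grepl("\\s", x)) {
        return(llm_feedback("Reply with a single word only."))
      }
      TRUE
    }
  )

The extraction and validation functions are only applied when an LLM response is evaluated; in the meantime, construct_prompt_text() can be used to inspect the modified prompt text.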
Value
A tidyprompt() object with the prompt_wrap() appended to it
See also
Other prompt_wrap: llm_break(), llm_feedback()

Other pre_built_prompt_wraps: add_text(), add_tools(), answer_as_boolean(), answer_as_code(), answer_as_integer(), answer_as_list(), answer_as_named_list(), answer_as_regex(), answer_by_chain_of_thought(), answer_by_react(), quit_if(), set_system_prompt()
Examples
# A custom prompt_wrap may be created during piping
prompt <- "Hi there!" |>
prompt_wrap(
modify_fn = function(base_prompt) {
paste(base_prompt, "How are you?", sep = "\n\n")
}
)
prompt
#> <tidyprompt>
#> The base prompt is modified by a prompt wrap, resulting in:
#> > Hi there!
#> >
#> > How are you?
#> Use '<tidyprompt>$prompt_wraps' to show the prompt wraps.
#> Use '<tidyprompt>$base_prompt' to show the base prompt text.
#> Use '<tidyprompt> |> construct_prompt_text()' to get the full prompt text.
#>
# (Shorter notation of the above:)
prompt <- "Hi there!" |>
prompt_wrap(\(x) paste(x, "How are you?", sep = "\n\n"))
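
# The 'type' argument determines application order. A sketch: even though
# the 'tool'-typed wrap below is added first, the 'unspecified' wrap modifies
# the base prompt before it when the prompt text is constructed
# (the instruction texts are made up for illustration)
prompt <- "Hi there!" |>
  prompt_wrap(\(x) paste(x, "You may use tools.", sep = "\n\n"), type = "tool") |>
  prompt_wrap(\(x) paste(x, "How are you?", sep = "\n\n"))
# In the constructed text, "How are you?" therefore appears before
# "You may use tools."
prompt |> construct_prompt_text()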
# It may often be preferred to make a function which takes a prompt and
# returns a wrapped prompt:
my_prompt_wrap <- function(prompt) {
  modify_fn <- function(base_prompt) {
    paste(base_prompt, "How are you?", sep = "\n\n")
  }
  prompt_wrap(prompt, modify_fn)
}
prompt <- "Hi there!" |>
  my_prompt_wrap()
# For more advanced examples, take a look at the source code of the
# pre-built prompt wraps in the tidyprompt package, like
# answer_as_boolean, answer_as_integer, add_tools, answer_as_code, etc.
# Below is the source code for the 'answer_as_integer' prompt wrap function:
#' Make LLM answer as an integer (between min and max)
#'
#' @param prompt A single string or a [tidyprompt()] object
#' @param min (optional) Minimum value for the integer
#' @param max (optional) Maximum value for the integer
#' @param add_instruction_to_prompt (optional) Add instruction for replying
#' as an integer to the prompt text. Set to FALSE to test whether extractions/
#' validations are working as expected (without the instruction, the answer
#' should fail the validation function, initiating a retry)
#'
#' @return A [tidyprompt()] with an added [prompt_wrap()] which
#' will ensure that the LLM response is an integer.
#'
#' @export
#'
#' @example inst/examples/answer_as_integer.R
#'
#' @family pre_built_prompt_wraps
#' @family answer_as_prompt_wraps
answer_as_integer <- function(
    prompt,
    min = NULL,
    max = NULL,
    add_instruction_to_prompt = TRUE
) {
  instruction <- "You must answer with only an integer (use no other characters)."

  if (!is.null(min) && !is.null(max)) {
    instruction <- paste(instruction, glue::glue(
      "Enter an integer between {min} and {max}."
    ))
  } else if (!is.null(min)) {
    instruction <- paste(instruction, glue::glue(
      "Enter an integer greater than or equal to {min}."
    ))
  } else if (!is.null(max)) {
    instruction <- paste(instruction, glue::glue(
      "Enter an integer less than or equal to {max}."
    ))
  }

  modify_fn <- function(original_prompt_text) {
    if (!add_instruction_to_prompt) {
      return(original_prompt_text)
    }
    glue::glue("{original_prompt_text}\n\n{instruction}")
  }

  extraction_fn <- function(x) {
    extracted <- suppressWarnings(as.numeric(x))
    if (is.na(extracted)) {
      return(llm_feedback(instruction))
    }
    return(extracted)
  }

  validation_fn <- function(x) {
    if (x != floor(x)) { # Not a whole number
      return(llm_feedback(instruction))
    }
    if (!is.null(min) && x < min) {
      return(llm_feedback(glue::glue(
        "The number should be greater than or equal to {min}."
      )))
    }
    if (!is.null(max) && x > max) {
      return(llm_feedback(glue::glue(
        "The number should be less than or equal to {max}."
      )))
    }
    return(TRUE)
  }

  prompt_wrap(prompt, modify_fn, extraction_fn, validation_fn)
}
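
# A sketch of how such a pre-built prompt wrap could be used: the extraction
# and validation functions only run once the prompt is sent to an LLM;
# here we only inspect the constructed prompt text
prompt <- "What is 5 + 5?" |>
  answer_as_integer(min = 0, max = 100)
prompt |> construct_prompt_text()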