This function wraps a
tidyprompt()
object to ensure that evaluation stops if the LLM indicates it cannot answer
the prompt. This is useful in scenarios where the LLM may be unable to
provide a valid response.
Arguments
- prompt
A single string or a
tidyprompt()
object

- quit_detect_regex
A regular expression to detect in the LLM's response which will cause the evaluation to stop. The default will detect the string "NO ANSWER" in the response

- instruction
A string added to the prompt to instruct the LLM how to respond if it cannot answer the prompt. The default is "If you think that you cannot provide a valid answer, you must type: 'NO ANSWER' (use no other characters)". This parameter can be set to
NULL
if no instruction is needed in the prompt

- success
A logical indicating whether breaking the
send_prompt()
loop should nonetheless be considered a successful completion of the extraction and validation process. If
FALSE
, the
object_to_return
will always be set to NULL (and thus parameter 'response_result' must also be set to 'null'), and
send_prompt()
will print a warning about the unsuccessful evaluation. If
TRUE
, the
object_to_return
will be returned as the response result of
send_prompt()
(and
send_prompt()
will print no warning about unsuccessful evaluation); parameter 'response_result' will then determine what is returned as the response result of
send_prompt()

- response_result
A character string indicating what should be returned when the quit_detect_regex is detected in the LLM's response. The default is 'null', which will return NULL as the response result of
send_prompt()
. Under 'llm_response', the full LLM response will be returned as the response result of
send_prompt()
. Under 'regex_match', the part of the LLM response that matches the quit_detect_regex will be returned as the response result of
send_prompt()
Value
A tidyprompt()
with an added prompt_wrap()
which will ensure
that evaluation stops upon detection of the quit_detect_regex in the
LLM's response.
See also
Other pre_built_prompt_wraps:
add_text()
,
add_tools()
,
answer_as_boolean()
,
answer_as_code()
,
answer_as_integer()
,
answer_as_list()
,
answer_as_named_list()
,
answer_as_regex()
,
answer_by_chain_of_thought()
,
answer_by_react()
,
prompt_wrap()
,
set_system_prompt()
Other miscellaneous_prompt_wraps:
add_text()
,
set_system_prompt()
Examples
if (FALSE) { # \dontrun{
"What is the favourite food of my cat on Thursday mornings?" |>
quit_if() |>
send_prompt(llm_provider_ollama())
# --- Sending request to LLM provider (llama3.1:8b): ---
# What is the favourite food of my cat on Thursday mornings?
#
# If you think that you cannot provide a valid answer, you must type:
# 'NO ANSWER' (use no other characters)
# --- Receiving response from LLM provider: ---
# NO ANSWER
# NULL
} # }