This function is responsible for sending strings or tidyprompt() objects, including their prompt wraps, to an LLM provider (see llm_provider()) for evaluation. The function will interact with the LLM provider until a successful response is received or the maximum number of interactions is reached. The function will apply extraction and validation functions to the LLM response, as specified in the prompt wraps (see prompt_wrap()). If the maximum number of interactions is reached without a successful response, NULL is returned as the response (see the return value).
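A minimal sketch of this behavior (assuming a local Ollama instance is running; answer_as_integer() and max_interactions are shown in the usage and examples below):

"What is 5 + 5?" |>
  answer_as_integer() |>
  send_prompt(llm_provider_ollama(), max_interactions = 2)
# If no valid integer is obtained within 2 interactions, NULL is returned;
# check the result with is.null() before relying on it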
Usage
send_prompt(
  prompt,
  llm_provider = llm_provider_ollama(),
  max_interactions = 10,
  clean_chat_history = TRUE,
  verbose = NULL,
  stream = NULL,
  return_mode = c("only_response", "full")
)
Arguments
- prompt
A string or a tidyprompt() object
- llm_provider
llm_provider() object (default is llm_provider_ollama())
- max_interactions
Maximum number of interactions allowed with the LLM provider. Default is 10. If the maximum number of interactions is reached without a successful response, 'NULL' is returned as the response (see return value)
- clean_chat_history
If the chat history should be cleaned after each interaction. Cleaning the chat history means that only the first and last message from the user, the last message from the assistant, and all messages from the system are used when requesting a new answer from the LLM; keeping the context window clean may improve the LLM's performance
- verbose
If the interaction with the LLM provider should be printed to the console. This setting overrules the 'verbose' setting in the LLM provider (see the sketch after this argument list)
- stream
If the interaction with the LLM provider should be streamed. This setting is only used if the LLM provider has a 'stream' parameter (which indicates that streaming is supported), and it overrules the 'stream' setting in the LLM provider
- return_mode
One of 'full' or 'only_response'. See return value
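A brief sketch of overriding the provider's own settings for a single call (assuming a local Ollama instance is running; only arguments from the usage above are used):

response <- "Hi!" |>
  send_prompt(
    llm_provider_ollama(),
    max_interactions = 3, # return NULL if no valid response within 3 interactions
    verbose = FALSE,      # overrules the provider's 'verbose' setting for this call
    stream = FALSE        # overrules the provider's 'stream' setting for this call
  )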
Value
If return mode 'only_response', the function will return only the LLM response
after extraction and validation functions have been applied (NULL is returned
when unsuccessful after the maximum number of interactions).
If return mode 'full', the function will return a list with the following elements:
'response' (the LLM response after extraction and validation functions have been applied;
NULL is returned when unsuccessful after the maximum number of interactions),
'chat_history' (a dataframe with the full chat history which led to the final response),
'chat_history_clean' (a dataframe with the cleaned chat history which led to
the final response; here, only the first and last message from the user, the
last message from the assistant, and all messages from the system are kept),
'start_time' (the time when the function was called),
'end_time' (the time when the function ended),
'duration_seconds' (the duration of the function in seconds), and
'http_list' (a list with all HTTP requests and full responses made for chat completions).
When using 'full' and you want to access a specific element during (base R) piping,
you can use the 'extract_from_return_list()' function to assist with this.
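For instance (a sketch; it is assumed here that extract_from_return_list() takes the returned list and the name of the element to extract):

"Hi!" |>
  send_prompt(llm_provider_ollama(), return_mode = "full") |>
  extract_from_return_list("response")
# returns only the 'response' element of the full return list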
See also
tidyprompt(), prompt_wrap(), llm_provider(), llm_provider_ollama(), llm_provider_openai(), llm_provider_openrouter()
Other prompt_evaluation:
llm_break(), llm_feedback()
Examples
if (FALSE) { # \dontrun{
"Hi!" |>
send_prompt(llm_provider_ollama())
# --- Sending request to LLM provider (llama3.1:8b): ---
# Hi!
# --- Receiving response from LLM provider: ---
# It's nice to meet you. Is there something I can help you with, or would you like to chat?
# [1] "It's nice to meet you. Is there something I can help you with, or would you like to chat?"
"Hi!" |>
send_prompt(llm_provider_ollama(), return_mode = "full")
# --- Sending request to LLM provider (llama3.1:8b): ---
# Hi!
# --- Receiving response from LLM provider: ---
# It's nice to meet you. Is there something I can help you with, or would you like to chat?
# $response
# [1] "It's nice to meet you. Is there something I can help you with, or would you like to chat?"
#
# $chat_history
# ...
#
# $chat_history_clean
# ...
#
# $start_time
# [1] "2024-11-18 15:43:12 CET"
#
# $end_time
# [1] "2024-11-18 15:43:13 CET"
#
# $duration_seconds
# [1] 1.13276
#
# $http_list
# $http_list[[1]]
# Response [http://localhost:11434/api/chat]
# Date: 2024-11-18 14:43
# Status: 200
# Content-Type: application/x-ndjson
# <EMPTY BODY>
"Hi!" |>
add_text("What is 5 + 5?") |>
answer_as_integer() |>
send_prompt(llm_provider_ollama(), verbose = FALSE)
# [1] 10
} # }