This class provides a structure for creating llm_provider objects with different implementations of $complete_chat().
Using this class, you can create an llm_provider object that interacts with different LLM providers, such as Ollama, OpenAI, or other custom providers.
See also
Other llm_provider: llm_provider_google_gemini(), llm_provider_groq(), llm_provider_mistral(), llm_provider_ollama(), llm_provider_openai(), llm_provider_openrouter(), llm_provider_xai()
Public fields
parameters
A named list of parameters to configure the llm_provider. Parameters may be appended to the request body when interacting with the LLM provider API
verbose
A logical indicating whether interaction with the LLM provider should be printed to the console
url
The URL to the LLM provider API endpoint for chat completion
api_key
The API key to use for authentication with the LLM provider API
api_type
The type of API to use (e.g., "openai", "ollama"). This is used to determine certain specific behaviors for different APIs, for instance, as is done in the answer_as_json() function
handler_fns
A list of functions that will be called after the completion of a chat. See $add_handler_fn()
Methods
Method new()
Create a new llm_provider object
Arguments
complete_chat_function
Function that will be called by the llm_provider to complete a chat. This function should take a list containing at least '$chat_history' (a data frame with 'role' and 'content' columns) and return a response object, which contains:
'completed': A data frame with 'role' and 'content' columns, containing the completed chat history
'http': A list containing a list 'requests' and a list 'responses', which hold the HTTP requests and responses made during the chat completion
parameters
A named list of parameters to configure the llm_provider. These parameters may be appended to the request body when interacting with the LLM provider. For example, the 'model' parameter is often required. The 'stream' parameter may be used to indicate that the API should stream responses. Parameters should not include 'chat_history', 'api_key', or 'url', which are handled separately by the llm_provider and '$complete_chat()'. Parameters should also not be set when they are handled by prompt wraps
verbose
A logical indicating whether interaction with the LLM provider should be printed to the console
url
The URL to the LLM provider API endpoint for chat completion (typically required, but may be left NULL in some cases, for instance when creating a fake LLM provider)
api_key
The API key to use for authentication with the LLM provider API (optional, not required for, for instance, Ollama)
api_type
The type of API to use (e.g., "openai", "ollama"). This is used to determine certain specific behaviors for different APIs (see for example the
answer_as_json()
function)
Method set_parameters()
Helper function to set the parameters of the llm_provider object. This function appends new parameters to the existing parameters list.
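For instance, a minimal sketch of appending parameters after construction (assumes a provider created with llm_provider_openai(); 'temperature' and 'max_tokens' are example API parameters, not required ones):

```r
# Create a provider, then adjust its parameters after the fact;
# new entries are appended to (and override) the existing parameters list
llm_provider <- llm_provider_openai()
llm_provider$set_parameters(list(
  temperature = 0.2, # assumed: passed through to the API request body
  max_tokens = 500
))
llm_provider$parameters$temperature
```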
Method complete_chat()
Sends a chat history (see chat_history() for details) to the LLM provider using the configured $complete_chat(). This function is typically called by send_prompt() to interact with the LLM provider, but it can also be called directly.
Arguments
input
A string, a data frame which is a valid chat history (see chat_history()), or a list containing a valid chat history under key '$chat_history'
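The three accepted input forms might look like this (a sketch; assumes a configured llm_provider object):

```r
# 1. A plain string (treated as a single user message)
response <- llm_provider$complete_chat("Hi!")

# 2. A data frame which is a valid chat history
history <- data.frame(
  role = c("system", "user"),
  content = c("You are concise.", "Hi!")
)
response <- llm_provider$complete_chat(history)

# 3. A list carrying the history under '$chat_history'
response <- llm_provider$complete_chat(list(chat_history = history))
```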
Method add_handler_fn()
Helper function to add a handler function to the llm_provider object. Handler functions are called after the completion of a chat and can be used to modify the response before it is returned by the llm_provider. Each handler function should take the response object as input (first argument) as well as 'self' (the llm_provider object) and return a modified response object. The functions will be called in the order they are added to the list.
Arguments
handler_fn
A function that takes the response object plus 'self' (the llm_provider object) as input and returns a modified response object
Details
If a handler function returns a list with a 'break' field set to TRUE, the chat completion will be interrupted and the response will be returned at that point.
If a handler function returns a list with a 'done' field set to FALSE, the handler functions will continue to be called in a loop until the 'done' field is no longer set to FALSE.
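As a sketch of the contract described above (assumes a configured llm_provider object; the refusal check is purely illustrative):

```r
# A handler receives the response object and 'self' (the llm_provider),
# and returns a (possibly modified) response object
stop_on_refusal <- function(response, self) {
  last <- utils::tail(response$completed$content, 1)
  if (grepl("I cannot", last, fixed = TRUE)) {
    response$`break` <- TRUE # interrupt: return the response at this point
  }
  response
}
llm_provider$add_handler_fn(stop_on_refusal)
```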
Method set_handler_fns()
Helper function to set the handler functions of the llm_provider object. This function replaces the existing handler functions list with a new list of handler functions. See $add_handler_fn() for more information
Examples
# Example creation of a llm_provider-class object:
llm_provider_openai <- function(
  parameters = list(
    model = "gpt-4o-mini",
    stream = getOption("tidyprompt.stream", TRUE)
  ),
  verbose = getOption("tidyprompt.verbose", TRUE),
  url = "https://api.openai.com/v1/chat/completions",
  api_key = Sys.getenv("OPENAI_API_KEY")
) {
  complete_chat <- function(chat_history) {
    headers <- c(
      "Content-Type" = "application/json",
      "Authorization" = paste("Bearer", self$api_key)
    )

    body <- list(
      messages = lapply(seq_len(nrow(chat_history)), function(i) {
        list(role = chat_history$role[i], content = chat_history$content[i])
      })
    )

    for (name in names(self$parameters))
      body[[name]] <- self$parameters[[name]]

    request <- httr2::request(self$url) |>
      httr2::req_body_json(body) |>
      httr2::req_headers(!!!headers)

    request_llm_provider(
      chat_history,
      request,
      stream = self$parameters$stream,
      verbose = self$verbose,
      api_type = self$api_type
    )
  }

  return(`llm_provider-class`$new(
    complete_chat_function = complete_chat,
    parameters = parameters,
    verbose = verbose,
    url = url,
    api_key = api_key,
    api_type = "openai"
  ))
}

llm_provider <- llm_provider_openai()

if (FALSE) { # \dontrun{
llm_provider$complete_chat("Hi!")
# --- Sending request to LLM provider (gpt-4o-mini): ---
# Hi!
# --- Receiving response from LLM provider: ---
# Hello! How can I assist you today?
} # }