Create a new Ollama llm_provider() instance

Usage

llm_provider_ollama(
  parameters = list(model = "llama3.1:8b", stream = getOption("tidyprompt.stream", TRUE)),
  verbose = getOption("tidyprompt.verbose", TRUE),
  url = "http://localhost:11434/api/chat"
)

Arguments

parameters

A named list of parameters. Currently the following parameters are required:

  • model: The name of the model to use

  • stream: A logical indicating whether the API should stream responses

Additional parameters may be included in the parameters list; they are forwarded to the Ollama API in the body of the POST request. Model options specifically can be set with the $set_options function (e.g., ollama$set_options(list(temperature = 0.8))). See the available options at https://ollama.com/docs/api/chat
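
For example, a minimal sketch of passing an additional parameter at construction (keep_alive is one such Ollama parameter; the value is illustrative):

ollama <- llm_provider_ollama(
  parameters = list(
    model = "llama3.1:8b",
    stream = FALSE,
    keep_alive = "10m" # Additional parameter, forwarded in the POST request body
  )
)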

verbose

A logical indicating whether the interaction with the LLM provider should be printed to the console

url

The URL to the Ollama API endpoint for chat completion (typically: "http://localhost:11434/api/chat")
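
For example, to use an Ollama server running on another machine (the host below is hypothetical):

remote_ollama <- llm_provider_ollama(
  url = "http://192.168.1.50:11434/api/chat" # Hypothetical remote host
)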

Value

A new llm_provider() object for use with the Ollama API

Examples

# Various providers:
ollama <- llm_provider_ollama()
openai <- llm_provider_openai()
openrouter <- llm_provider_openrouter()
mistral <- llm_provider_mistral()
groq <- llm_provider_groq()
xai <- llm_provider_xai()
gemini <- llm_provider_google_gemini()

# Initialize with settings:
ollama <- llm_provider_ollama(
  parameters = list(
    model = "llama3.2:3b",
    stream = TRUE
  ),
  verbose = TRUE,
  url = "http://localhost:11434/api/chat"
)

# Change settings:
ollama$verbose <- FALSE
ollama$parameters$stream <- FALSE
ollama$parameters$model <- "llama3.1:8b"
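
# Set model options via $set_options (see Arguments; values are illustrative):
ollama$set_options(list(temperature = 0.8, num_ctx = 4096))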

if (FALSE) { # \dontrun{
  # Try a simple chat message with '$complete_chat()':
  response <- ollama$complete_chat("Hi!")
  response
  # $role
  # [1] "assistant"
  #
  # $content
  # [1] "How's it going? Is there something I can help you with or would you like
  # to chat?"
  #
  # $http
  # Response [http://localhost:11434/api/chat]
  # Date: 2024-11-18 14:21
  # Status: 200
  # Content-Type: application/json; charset=utf-8
  # Size: 375 B

  # Use with send_prompt():
  "Hi" |>
    send_prompt(ollama)
  # [1] "How's your day going so far? Is there something I can help you with or
  # would you like to chat?"
} # }