Helper function that handles making requests to LLM providers, intended to be used within a complete_chat() function for an LLM provider.
Arguments
- url
The URL of the LLM provider API endpoint
- headers
A named list of headers to be passed to the API (can be NULL)
- body
The body of the POST request
- stream
A logical indicating whether the API should stream responses
- verbose
A logical indicating whether the interaction with the LLM provider should be printed to the console. Defaults to TRUE.
- stream_api_type
The type of API; required to parse streaming responses correctly. Currently, "openai" and "ollama" are implemented. "openai" should also work with other OpenAI-compatible chat completion APIs.
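
A hedged sketch of how a provider's complete_chat() function might call this helper. The helper name request_llm_provider(), the endpoint URL, and the model name are assumptions for illustration; only the argument names match the documentation above.

```r
# Hypothetical example: a complete_chat() for an OpenAI-compatible provider.
# request_llm_provider() stands in for the helper documented above (assumed name).
complete_chat_openai <- function(messages, api_key,
                                 stream = FALSE, verbose = TRUE) {
  body <- list(
    model = "gpt-4o-mini",  # assumed model name, for illustration only
    messages = messages,
    stream = stream
  )

  request_llm_provider(
    url = "https://api.openai.com/v1/chat/completions",
    headers = list(
      "Authorization" = paste("Bearer", api_key),
      "Content-Type"  = "application/json"
    ),
    body = body,
    stream = stream,
    verbose = verbose,
    stream_api_type = "openai"  # parse streamed chunks in OpenAI's format
  )
}
```

Because headers can be NULL, a local provider such as Ollama could call the same helper with headers = NULL and stream_api_type = "ollama".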