pe.llm.openai module
- class pe.llm.openai.OpenAILLM(progress_bar=True, dry_run=False, num_threads=1, **generation_args)[source]
Bases: LLM
A wrapper for OpenAI LLM APIs. The following environment variable is required:
OPENAI_API_KEY
: OpenAI API key. You can get it from https://platform.openai.com/account/api-keys. Multiple keys can be separated by commas; for each request, the key with the lowest current workload is used.
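For example, multiple keys can be supplied through the single environment variable, as in this minimal sketch (the key values are placeholders, not real keys):

```python
import os

# Two comma-separated placeholder keys; the wrapper dispatches each
# request to the key with the lowest current workload.
os.environ["OPENAI_API_KEY"] = "sk-key-one,sk-key-two"
```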
- __init__(progress_bar=True, dry_run=False, num_threads=1, **generation_args)[source]
Constructor.
- Parameters:
progress_bar (bool, optional) – Whether to show the progress bar, defaults to True
dry_run (bool, optional) – Whether to enable dry run. When dry run is enabled, fake responses are returned and the APIs are not called. Defaults to False
num_threads (int, optional) – The number of threads to use for making concurrent API calls, defaults to 1
**generation_args (str) – The generation arguments that will be passed to the OpenAI API (see the sketch after this list)
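As a sketch of how the constructor might be called; model and temperature are illustrative generation arguments forwarded to the OpenAI API, not library defaults:

```python
from pe.llm.openai import OpenAILLM

# "model" and "temperature" are illustrative generation arguments that
# will be forwarded to the OpenAI API on every request.
llm = OpenAILLM(
    progress_bar=True,
    dry_run=False,        # set True to get fake responses without API calls
    num_threads=4,        # number of concurrent API calls
    model="gpt-4o-mini",  # assumption: any OpenAI chat model name works here
    temperature=0.7,
)
```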
- _get_environment_variable(name)[source]
Get the environment variable.
- Parameters:
name (str) – The name of the environment variable
- Raises:
ValueError – If the environment variable is not set
- Returns:
The value of the environment variable
- Return type:
str
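The behavior described above amounts to a thin guard around os.environ; a minimal sketch (not necessarily the library's exact implementation) could look like:

```python
import os

def _get_environment_variable(name):
    """Return the value of the environment variable, or raise ValueError."""
    value = os.environ.get(name)
    if value is None:
        raise ValueError(f"Environment variable {name} is not set")
    return value
```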
- _get_response_for_one_request(messages, generation_args)[source]
Get the response for one request.
- Parameters:
messages (list[str]) – The messages
generation_args (dict) – The generation arguments
- Returns:
The response
- Return type:
str
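Internally, one request corresponds to a single chat-completion call. A rough sketch using the official openai Python SDK follows; the message format and argument handling here are assumptions based on the documented signature, not the library's exact code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def _get_response_for_one_request(messages, generation_args):
    """Sketch: send one chat-completion request and return its text."""
    response = client.chat.completions.create(
        messages=messages,  # e.g., [{"role": "user", "content": "..."}]
        **generation_args,  # assumed to include "model" among other settings
    )
    return response.choices[0].message.content
```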
- get_responses(requests, **generation_args)[source]
Get the responses from the LLM.
- Parameters:
requests (list[pe.llm.request.Request]) – The requests
**generation_args (str) – The generation arguments. The priority of the generation arguments, from highest to lowest, is: the arguments set in the requests > the arguments passed to this function > the arguments passed to the constructor
- Returns:
The responses
- Return type:
list[str]
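Putting it together, a hedged end-to-end sketch; it assumes pe.llm.request.Request carries OpenAI-style messages and optional per-request generation_args, which take the highest priority:

```python
from pe.llm.openai import OpenAILLM
from pe.llm.request import Request

llm = OpenAILLM(num_threads=2, model="gpt-4o-mini")  # constructor-level args

requests = [
    # Assumption: Request takes OpenAI-style messages plus optional
    # per-request generation arguments, which override all other levels.
    Request(
        messages=[{"role": "user", "content": "Write a haiku about privacy."}],
        generation_args={"temperature": 0.2},
    ),
    Request(messages=[{"role": "user", "content": "Summarize differential privacy."}]),
]

# Function-level args override constructor-level args but not per-request args.
responses = llm.get_responses(requests, max_tokens=128)
print(responses)  # list[str], one response per request
```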