When integrating AI models into applications, provide consistent configuration methods that support different access patterns (direct API, chat interfaces, or intermediary services like LiteLLM).
Key practices: read configuration from environment variables (e.g. `LITELLM_BASE_URL`, `OPENAI_API_KEY`) rather than hard-coding endpoints or credentials.

Example:
```python
import os

def create_model(model_name, **kwargs):
    # Support prefix-based mode selection
    COPY_PASTE_PREFIX = "cp:"
    copy_paste_mode = model_name.startswith(COPY_PASTE_PREFIX)
    if copy_paste_mode:
        model_name = model_name.removeprefix(COPY_PASTE_PREFIX)

    # Support environment variables for configuration
    base_url = os.environ.get("MODEL_API_BASE_URL")
    api_key = os.environ.get("MODEL_API_KEY")

    # Create the appropriate client based on configuration
    if base_url:
        client = RemoteModelClient(base_url, api_key, model_name)
    else:
        client = LocalModelClient(model_name)
    return client
```
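As a sketch of how this pattern behaves at runtime, the snippet below pairs the factory with hypothetical stub clients. `RemoteModelClient`, `LocalModelClient`, and the `MODEL_*` variable names are illustrative placeholders, not a real library API:

```python
import os

# Hypothetical stub clients standing in for real model backends.
class RemoteModelClient:
    def __init__(self, base_url, api_key, model_name):
        self.base_url = base_url
        self.api_key = api_key
        self.model_name = model_name

class LocalModelClient:
    def __init__(self, model_name):
        self.model_name = model_name

COPY_PASTE_PREFIX = "cp:"

def create_model(model_name, **kwargs):
    # Strip the optional mode prefix before resolving the model.
    if model_name.startswith(COPY_PASTE_PREFIX):
        model_name = model_name.removeprefix(COPY_PASTE_PREFIX)
    # Configuration comes entirely from the environment.
    base_url = os.environ.get("MODEL_API_BASE_URL")
    api_key = os.environ.get("MODEL_API_KEY")
    if base_url:
        return RemoteModelClient(base_url, api_key, model_name)
    return LocalModelClient(model_name)

# With a base URL set (e.g. a LiteLLM proxy), a remote client is built
# and the "cp:" prefix is stripped from the model name.
os.environ["MODEL_API_BASE_URL"] = "http://localhost:4000"
client = create_model("cp:gpt-4o")

# Without a base URL, the same call falls back to a local client.
del os.environ["MODEL_API_BASE_URL"]
local_client = create_model("gpt-4o")
```

Because the caller never branches on deployment mode itself, switching backends is purely an environment change.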
This standardized approach improves maintainability when supporting multiple AI backends and simplifies switching between different deployment configurations.
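In practice, a deployment switch then reduces to exporting different variables before starting the application. The variable names below match the illustrative `MODEL_*` names used in the example and are assumptions, not a fixed convention:

```shell
# Point the application at an intermediary service such as a LiteLLM proxy.
export MODEL_API_BASE_URL=http://localhost:4000
export MODEL_API_KEY=dummy-key

# Unsetting the base URL switches the same application back to local models.
# unset MODEL_API_BASE_URL
```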