When integrating new LLM providers or AI services, prioritize OpenAI-compatible patterns and reuse existing utilities rather than creating separate implementations. This approach reduces code duplication, leverages battle-tested functionality, and maintains consistency across the codebase.
Key practices:

- Add new providers to `openai_compatible_providers` instead of creating separate handler blocks
- Use `base_llm_http_handler` or `openai_like_chat_completion` for OpenAI-compatible APIs
- Extend `OpenAILikeConfig` or `OpenAIGPTConfig` for transformation classes (see the sketch at the end of this section)
- Reuse `get_llm_provider`, `supports_function_calling`, and parameter mapping functions (usage sketch below)

Example of the preferred approach:
```python
# Instead of creating a separate provider block:
elif custom_llm_provider == "new_provider":
    # Custom implementation...

# Prefer adding to openai_compatible_providers:
openai_compatible_providers = [..., "new_provider"]

# And use existing handlers:
response = base_llm_http_handler.completion(
    model=model,
    messages=messages,
    custom_llm_provider=custom_llm_provider,
    api_base=api_base,
    api_key=api_key,
    # ...
)
```
This pattern ensures new AI integrations benefit from existing error handling, parameter validation, streaming support, and other features while minimizing maintenance overhead.
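For the transformation-class practice, a minimal sketch of extending `OpenAIGPTConfig` for a hypothetical provider follows. The import path and method signature match LiteLLM's config-class layout but may differ between versions; `NewProviderConfig` and the dropped parameter are illustrative assumptions, not existing library code.

```python
# Sketch only: assumes this import path matches your LiteLLM version.
from litellm.llms.openai.chat.gpt_transformation import OpenAIGPTConfig


class NewProviderConfig(OpenAIGPTConfig):
    """Hypothetical config for "new_provider" that inherits OpenAI defaults."""

    def get_supported_openai_params(self, model: str) -> list:
        # Start from the OpenAI-compatible parameter set and drop anything
        # the new provider does not accept (parameter choice is illustrative).
        params = super().get_supported_openai_params(model)
        return [p for p in params if p != "logit_bias"]
```

Because the subclass inherits the rest of the transformation logic (parameter mapping, request/response shaping), the new provider automatically picks up fixes made to the shared OpenAI code path.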
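Likewise, the existing capability helpers can be called directly instead of duplicating provider metadata. A short usage sketch, using a registered model name since unregistered models raise an error; the return shape noted in the comment reflects current LiteLLM versions and may change:

```python
import litellm

# Resolve a model string to its provider; get_llm_provider returns a
# (model, custom_llm_provider, dynamic_api_key, api_base) tuple.
model, provider, dynamic_api_key, api_base = litellm.get_llm_provider(
    model="gpt-3.5-turbo"
)

# Query capabilities through the shared registry rather than a custom table.
if litellm.supports_function_calling(model="gpt-3.5-turbo"):
    print(f"{model} ({provider}) supports function calling")
```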