When integrating new LLM providers or AI services, prioritize OpenAI-compatible patterns and reuse existing utilities rather than creating separate implementations. This approach reduces code duplication, leverages battle-tested functionality, and maintains consistency across the codebase.

Key practices:

- Check whether the new provider exposes an OpenAI-compatible API before writing any custom code.
- Register the provider in openai_compatible_providers rather than adding a dedicated elif branch.
- Route requests through existing handlers such as base_llm_http_handler instead of duplicating request logic.
- Lean on shared error handling, parameter validation, and streaming support rather than reimplementing them per provider.

Example of the preferred approach:

# Instead of adding a separate provider branch to the dispatch chain:
elif custom_llm_provider == "new_provider":
    # Custom implementation that duplicates request, parsing, and error logic...

# Prefer adding to openai_compatible_providers:
openai_compatible_providers = [..., "new_provider"]

# And use existing handlers:
response = base_llm_http_handler.completion(
    model=model,
    messages=messages,
    custom_llm_provider=custom_llm_provider,
    api_base=api_base,
    api_key=api_key,
    # ...
)
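
Once the provider is registered, callers reach it through the normal public entry point; no provider-specific call path is needed. A minimal usage sketch, assuming "new_provider" has already been added to openai_compatible_providers; the model name, api_base, and api_key below are illustrative placeholders:

import litellm

# The "new_provider/" prefix routes the request through the shared
# OpenAI-compatible path; all values here are placeholders.
response = litellm.completion(
    model="new_provider/example-model",
    messages=[{"role": "user", "content": "Hello!"}],
    api_base="https://api.new-provider.example/v1",
    api_key="sk-placeholder",
)
print(response.choices[0].message.content)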

This pattern ensures new AI integrations benefit from existing error handling, parameter validation, streaming support, and other features while minimizing maintenance overhead.
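
Streaming is a concrete example of that reuse: because requests flow through the shared handler, stream=True works with no provider-specific code. A hedged sketch, again using placeholder names for the newly registered provider:

import litellm

# stream=True is serviced by the shared streaming machinery, not by
# per-provider code; chunks follow the OpenAI delta format.
response = litellm.completion(
    model="new_provider/example-model",
    messages=[{"role": "user", "content": "Write a haiku."}],
    stream=True,
)
for chunk in response:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="")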