When integrating with AI model providers, implement defensive programming to handle API inconsistencies, bugs, and unexpected return types. AI provider libraries often have implementation bugs or return different types than documented, requiring robust error handling and type checking.
Key practices:
- Check the runtime type of provider return values instead of trusting the documented signature (e.g. with isinstance).
- Handle each plausible return shape explicitly, probing object-style results with hasattr.
- Fail loudly with an informative error when an unexpected type appears, rather than letting it surface later as an obscure failure.
Example from LiteLLM tokenizer bug workaround:
    def count_tokens(text: str) -> int:
        # encode() may return either a plain list of token ids or an
        # Encoding-style object exposing an .ids attribute, depending on
        # the tokenizer backend in use.
        enc = tokenizer_json["tokenizer"].encode(text)
        if isinstance(enc, list):
            return len(enc)
        elif hasattr(enc, "ids"):
            return len(enc.ids)
        else:
            raise TypeError(f"Unexpected return type from encode: {type(enc)}")
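To see why both branches matter, the same normalization can be exercised against both return shapes without a real tokenizer. This is a minimal sketch; FakeEncoding and token_length are hypothetical stand-ins for illustration, not LiteLLM types:

```python
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class FakeEncoding:
    # Mimics tokenizer backends that return an object with an .ids attribute.
    ids: List[int] = field(default_factory=list)


def token_length(enc: Any) -> int:
    # The same normalization as count_tokens, applied to a raw encode() result.
    if isinstance(enc, list):
        return len(enc)
    if hasattr(enc, "ids"):
        return len(enc.ids)
    raise TypeError(f"Unexpected return type from encode: {type(enc)}")


print(token_length([101, 2023, 102]))                    # list-style result -> 3
print(token_length(FakeEncoding(ids=[101, 2023, 102])))  # object-style result -> 3
```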
This approach ensures your AI model integrations remain robust even when underlying provider APIs have bugs or inconsistent behavior.
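Transient provider failures can be handled in the same defensive spirit. A minimal sketch of a generic retry helper, assuming the caller passes a zero-argument callable; the name call_with_retries and the exception tuple are illustrative choices, not part of any provider SDK:

```python
import time
from typing import Any, Callable, Optional


def call_with_retries(fn: Callable[[], Any], attempts: int = 3,
                      backoff: float = 0.5) -> Any:
    # Retry a flaky provider call, backing off exponentially between attempts.
    last_exc: Optional[Exception] = None
    for attempt in range(attempts):
        try:
            return fn()
        except (TypeError, ValueError, RuntimeError) as exc:
            last_exc = exc
            if attempt < attempts - 1:
                time.sleep(backoff * (2 ** attempt))
    # Preserve the original error as the cause for easier debugging.
    raise RuntimeError(f"provider call failed after {attempts} attempts") from last_exc
```

Wrapping provider calls this way keeps transient errors from bubbling up as one-off failures, while the raised RuntimeError still carries the underlying cause.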