Ensure AI model configurations accurately reflect official specifications and avoid hardcoded assumptions. Model parameters, token limits, capabilities, and naming should match vendor documentation rather than using generic defaults.
Key practices:
- Verify token limits against official documentation; the total context window should equal maxInput + maxOutput.
- Avoid hardcoded parameter defaults (e.g., default: 0.8 for strength).
Example of proper model configuration:
{
  description: 'Gemini 2.5 Flash is Google\'s most advanced workhorse model',
  displayName: 'Gemini 2.5 Flash', // Clear, consistent naming
  id: 'google/gemini-2.5-flash',
  contextWindowTokens: 1_048_576, // Verified against official docs
  maxOutput: 65_535, // Separate input/output limits
  // No hardcoded parameter defaults
}
This prevents user confusion and billing errors, and ensures reliable model behavior across different AI providers.
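As a rough illustration, the sketch below shows how such an entry could be typed and sanity-checked before it ships. The AIModelConfig interface and validateModelConfig helper are hypothetical names introduced here for the example, not part of any vendor SDK.

// Hypothetical shape for a model configuration entry, based on the example above.
interface AIModelConfig {
  id: string;
  displayName: string;
  description: string;
  contextWindowTokens: number; // total context window, per official docs
  maxOutput: number;           // separate output limit, per official docs
}

// Hypothetical validation helper: returns a list of problems found in an entry.
function validateModelConfig(config: AIModelConfig): string[] {
  const errors: string[] = [];
  // Token limits must be positive and internally consistent
  if (config.contextWindowTokens <= 0) {
    errors.push(`${config.id}: contextWindowTokens must be positive`);
  }
  if (config.maxOutput <= 0 || config.maxOutput > config.contextWindowTokens) {
    errors.push(`${config.id}: maxOutput must be positive and fit within the context window`);
  }
  // IDs are expected to follow the provider/model pattern used above
  if (!/^[\w.-]+\/[\w.-]+$/.test(config.id)) {
    errors.push(`${config.id}: id should look like "provider/model"`);
  }
  return errors;
}

// Usage: an empty array means the entry passed the basic checks.
const issues = validateModelConfig({
  id: 'google/gemini-2.5-flash',
  displayName: 'Gemini 2.5 Flash',
  description: 'Gemini 2.5 Flash is Google\'s most advanced workhorse model',
  contextWindowTokens: 1_048_576,
  maxOutput: 65_535,
});
console.log(issues); // []

A check like this can run in CI so that entries drifting from official specifications are caught before release rather than surfacing as user-facing token or billing errors.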