Prompt
When implementing quantized operations in ML models, validate quantization parameters thoroughly to catch subtle numerical errors before they corrupt results. Key validation points:
- For int16 quantized operators, verify zero points are exactly 0
- For quantized tensor operations, ensure consistent scale factors between input and output
- When one tensor is quantized, verify all related tensors use compatible quantization schemes
Example:
```cpp
// Good: proper quantization parameter validation.
// int16 kernels assume symmetric quantization, so zero points must be 0.
if (input->type == kTfLiteInt16) {
  TF_LITE_ENSURE_EQ(context, input->params.zero_point, 0);
  TF_LITE_ENSURE_EQ(context, output->params.zero_point, 0);
}

// For quantized operations, validate scale consistency between input and output.
if (input->type == kTfLiteInt8 || input->type == kTfLiteInt16) {
  TF_LITE_ENSURE_EQ(context, input->params.scale, output->params.scale);
}

// Ensure quantization scheme compatibility across all operands.
if (lhs.IsQuantized()) {
  if (!rhs.IsQuantized() || !output.IsQuantized()) {
    return absl::FailedPreconditionError(
        "If one tensor is quantized, all must be quantized");
  }
}
```