When implementing quantized operations in ML models, thoroughly validate quantization parameters to ensure correctness and prevent subtle numerical errors. Key validation points:
- Symmetric int16 quantization requires a zero point of 0 on both input and output.
- Kernels that assume matching scales must check input and output scales explicitly.
- Quantization schemes must be consistent across operands: if one tensor of an op is quantized, all of them must be.
Example:
  // Good: proper quantization parameter validation.
  // int16 quantization is symmetric, so zero points must be 0.
  if (input->type == kTfLiteInt16) {
    TF_LITE_ENSURE_EQ(context, input->params.zero_point, 0);
    TF_LITE_ENSURE_EQ(context, output->params.zero_point, 0);
  }

  // For quantized operations, validate scale consistency.
  if (input->type == kTfLiteInt8 || input->type == kTfLiteInt16) {
    TF_LITE_ENSURE_EQ(context, input->params.scale, output->params.scale);
  }

  // Ensure quantization scheme compatibility.
  if (lhs.IsQuantized()) {
    if (!rhs.IsQuantized() || !output.IsQuantized()) {
      return absl::FailedPreconditionError(
          "If one tensor is quantized, all must be quantized");
    }
  }
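The same checks can be expressed as a framework-free sketch. The names below (QuantParams, ValidateBinaryOpQuantization) are illustrative, not TFLite or absl API; the logic mirrors the validation points above under the assumption of a simple element-wise op that requires matching input/output scales.

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Hypothetical per-tensor quantization metadata.
struct QuantParams {
  bool quantized = false;   // whether the tensor carries quantization params
  bool is_int16 = false;    // 16-bit symmetric quantization
  float scale = 1.0f;
  int32_t zero_point = 0;
};

// Returns an error message, or std::nullopt when the parameters are valid.
std::optional<std::string> ValidateBinaryOpQuantization(
    const QuantParams& lhs, const QuantParams& rhs, const QuantParams& out) {
  // Scheme compatibility: all-or-nothing quantization across operands.
  if (lhs.quantized != rhs.quantized || lhs.quantized != out.quantized) {
    return "If one tensor is quantized, all must be quantized";
  }
  if (!lhs.quantized) return std::nullopt;  // float path: nothing to check

  // int16 quantization is symmetric: zero points must be 0.
  if (lhs.is_int16 && (lhs.zero_point != 0 || out.zero_point != 0)) {
    return "int16 quantization requires zero_point == 0";
  }
  // This sketch assumes a kernel that requires matching scales.
  if (lhs.scale != out.scale) {
    return "input/output scales must match";
  }
  return std::nullopt;
}
```

Returning a descriptive error rather than a bare bool keeps the failure reason visible at the call site, which is the same idea the absl::FailedPreconditionError path follows above.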