Design cache keys to avoid performance pitfalls and ensure reliable cache behavior. Cache keys should be lightweight, hashable, and deterministic to prevent unnecessary cache misses and memory bloat.
Key principles:
- Keep keys small: never embed large payloads (images, tensors) in the key itself.
- Use only hashable values; unhashable objects such as model instances cannot serve as dict keys.
- Make keys deterministic, so identical inputs always produce the identical key.
- Memoize expensive signature computations instead of recomputing them on every lookup.
Example of problematic cache key design:

```python
# BAD: large image data becomes part of the cache key --
# hashing it is slow and the key keeps the tensor alive in memory
cache_key = (node_id, large_image_tensor, other_inputs)

# BAD: an unhashable model object raises TypeError on lookup (and an
# identity-hashed one yields a different key per instance, causing misses)
cache_key = (node_id, model_instance, inputs)
```
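A quick illustration of why the second pattern fails: Python dict keys must be hashable, and a tuple is only hashable if everything inside it is. In this minimal sketch a plain list stands in for the tensor or model object; `node_id` and `fake_tensor` are placeholder names, not part of any real framework.

```python
# Minimal demonstration of why unhashable objects break cache keys.
# A plain list stands in for a tensor or model instance (numpy arrays,
# for example, are likewise unhashable).
cache = {}
node_id = "node-42"
fake_tensor = [0.1, 0.2, 0.3]

cache[(node_id, tuple(fake_tensor))] = "ok"  # tuple of floats: hashable, fine

try:
    cache[(node_id, fake_tensor)] = "result"  # list inside the key tuple
    key_rejected = False
except TypeError:
    key_rejected = True  # dict keys must be hashable; the insert never happens

print("unhashable key rejected:", key_rejected)
```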
Example of optimized cache key design:

```python
# GOOD: use link references instead of actual values
if is_link(input_data):
    cache_key = (node_id, ("ANCESTOR", ancestor_index, socket))
else:
    cache_key = (node_id, input_data)  # only for small, hashable data
```
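The link-reference idea can be sketched end to end. This is a minimal illustration under stated assumptions, not any framework's real API: the `("link", index, socket)` tuple shape and the `is_link` helper are invented here for the example.

```python
def is_link(value):
    # Illustrative convention: a ("link", ancestor_index, socket) tuple
    # marks a value that comes from an upstream node rather than a literal.
    return isinstance(value, tuple) and len(value) == 3 and value[0] == "link"

def make_cache_key(node_id, input_data):
    """Build a small, hashable, deterministic cache key."""
    if is_link(input_data):
        _, ancestor_index, socket = input_data
        # Encode WHERE the data comes from, not the data itself, so the
        # key's size is constant no matter how large the upstream value is.
        return (node_id, ("ANCESTOR", ancestor_index, socket))
    return (node_id, input_data)  # only for small, hashable data

key = make_cache_key("blur", ("link", 3, "IMAGE"))
```

Because the key is built purely from the node id and the link coordinates, two evaluations of the same graph position always produce an equal key, and hashing it costs the same whether the upstream image is 1 KB or 1 GB.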
```python
# GOOD: cache expensive signature computations
if node_id not in self.immediate_node_signature:
    self.immediate_node_signature[node_id] = self.compute_signature(node_id)
```
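The memoization pattern above can be made concrete with a small self-contained sketch. `SignatureCache`, the `graph` mapping, and the body of `compute_signature` are illustrative assumptions for this example, not a real framework's API; `compute_calls` exists only to show that the expensive step runs once.

```python
import hashlib

class SignatureCache:
    """Memoizes per-node signature computation (hypothetical sketch)."""

    def __init__(self, graph):
        self.graph = graph  # node_id -> small, repr-able node description
        self.immediate_node_signature = {}
        self.compute_calls = 0  # instrumentation for the sketch only

    def compute_signature(self, node_id):
        # Stand-in for an expensive computation (e.g. walking ancestors).
        self.compute_calls += 1
        return hashlib.sha256(repr(self.graph[node_id]).encode()).hexdigest()

    def get_signature(self, node_id):
        # The memoization pattern from the snippet above: compute once,
        # then serve every later request from the dict.
        if node_id not in self.immediate_node_signature:
            self.immediate_node_signature[node_id] = self.compute_signature(node_id)
        return self.immediate_node_signature[node_id]

sc = SignatureCache({"blur": ("blur", 0.5)})
first = sc.get_signature("blur")
second = sc.get_signature("blur")  # served from the memo, no recompute
```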
This approach keeps cache keys cheap to build, hash, and compare while preserving correctness: lookups stay fast, keys do not pin large payloads in memory, and identical inputs reliably map to the same key.