When implementing or changing optimization code (samplers, objectives, and result selection), ensure algorithmic correctness, preserve sampler assumptions, and prefer explicit post-processing over ad‑hoc objective hacks.

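To make the last point concrete, here is a minimal, framework-agnostic sketch (the `Trial` record, `hacked_objective`, `pure_objective`, `select_best`, and the penalty constant are hypothetical names, not from any particular library): instead of folding a constraint penalty into the objective, record the raw value and a feasibility flag, and handle the constraint when selecting results.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    params: dict
    value: float      # raw objective value, unmodified
    feasible: bool    # constraint status recorded separately

# Ad-hoc objective hack (avoid): constraint handling is hidden inside the score.
def hacked_objective(params: dict) -> float:
    value = (params["x"] - 3.0) ** 2
    if params["x"] < 0:
        value += 1e9  # magic penalty distorts the search signal
    return value

# Preferred: keep the objective a faithful measure of the quantity optimized.
def pure_objective(params: dict) -> float:
    return (params["x"] - 3.0) ** 2

# Constraints are applied as explicit, auditable post-processing.
def select_best(trials: list[Trial]) -> Trial | None:
    feasible = [t for t in trials if t.feasible]
    return min(feasible, key=lambda t: t.value) if feasible else None
```
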
Why: Incorrect assumptions about samplers, unclear trial counting, off-by-one indexing errors, and hidden objective tweaks can silently degrade search quality or produce confusing results. Following this rule reduces bugs and keeps the optimizer's behavior auditable.

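As an illustration of "unclear trial counting", the hedged sketch below (the `TrialState` enum, `summarize_trials`, and `trial_label` are invented for this example) keeps the bookkeeping and the 1-based reporting in one place, so a "trial k of n" message cannot drift from the 0-based list index.

```python
from enum import Enum

class TrialState(Enum):
    COMPLETE = "complete"
    FAILED = "failed"
    SKIPPED = "skipped"

def summarize_trials(states: list[TrialState], n_requested: int) -> dict:
    counts = {
        "requested": n_requested,
        "completed": sum(s is TrialState.COMPLETE for s in states),
        "failed": sum(s is TrialState.FAILED for s in states),
        "skipped": sum(s is TrialState.SKIPPED for s in states),
    }
    # Sanity check: every recorded state is accounted for exactly once.
    assert counts["completed"] + counts["failed"] + counts["skipped"] == len(states)
    return counts

# 0-based index i maps to the 1-based label "trial i + 1 of n_requested";
# keeping the conversion in one helper avoids off-by-one reporting bugs.
def trial_label(i: int, n_requested: int) -> str:
    return f"trial {i + 1} of {n_requested}"
```
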
Checklist (practical actions):

- Confirm that the change preserves the sampler's assumptions rather than silently violating them.
- Make trial counting explicit: how many trials were requested, completed, failed, or skipped, and verify the bookkeeping adds up.
- Check indexing for off-by-one errors wherever trials are enumerated, logged, or reported.
- Keep the objective a faithful measure of what is being optimized; handle constraints, penalties, and filtering as explicit post-processing, not hidden objective tweaks.
- Make the final selection of best trials deterministic and auditable, with explicit tie-breaking (see the sketch after this list).

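One way to make the final selection auditable is sketched below (the `best_trials` helper and its `(trial_number, value)` input format are assumptions for illustration): rank by value with an explicit tie-break on trial number, and return the trial numbers so the choice can be traced back.

```python
def best_trials(results: list[tuple[int, float]], k: int = 1) -> list[tuple[int, float]]:
    """Pick the k best (trial_number, value) pairs, minimizing value.

    Ties are broken by trial number, so the selection is deterministic
    across runs that produce the same values.
    """
    if k < 1:
        raise ValueError("k must be at least 1")
    # Explicit sort key: value first, trial number as a deterministic tie-break.
    ranked = sorted(results, key=lambda r: (r[1], r[0]))
    return ranked[:k]

# Example: trials 2 and 5 tie on value; trial 2 is reported first, every time.
print(best_trials([(0, 0.9), (2, 0.4), (5, 0.4), (7, 0.8)], k=2))
# -> [(2, 0.4), (5, 0.4)]
```
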
When to apply: any change touching sampling logic, objective computation, trial bookkeeping, or final selection of best trials. Following this guidance improves correctness, reproducibility, and debuggability of optimization code.