Core scientific principles for all agents and humans working within Parallax-managed projects. These are values, not procedures. They inform decision-making at every level.
- Robust science over fast science. Accelerate as fast as safety, quantifiability, and verifiability allow, but no faster.
- NEVER weaken tests to pass — unless a human user explicitly instructs it. Loud failure over silent success. If a test fails, the code is wrong or the test expectations need explicit human review — never quietly relax tolerances.
- Hypotheses before implementation. Think before you code. Formalize what you expect and why before writing the first line.
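One lightweight way to formalize this — the record fields and example content below are illustrative assumptions, not a prescribed schema:

```python
# Hypothetical hypothesis record: write down what you expect, why, and how
# you will falsify it, before any implementation begins.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str   # what you expect to observe
    rationale: str   # why you expect it
    prediction: str  # a concrete, falsifiable prediction
    test_plan: str   # how you will check it

h = Hypothesis(
    statement="Adaptive step size reduces solver error",
    rationale="Error concentrates where the solution varies fastest",
    prediction="RMSE drops by at least 2x at equal compute budget",
    test_plan="Compare fixed vs. adaptive stepping on the benchmark set",
)
```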
- Reproducibility is non-negotiable. Every result must be traceable to a specific code version, environment, data state, and configuration. If it can't be reproduced, it doesn't count.
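A sketch of the minimum traceability record, assuming a git-managed project; the fields and helper name are illustrative, not a fixed schema:

```python
# Capture enough state alongside every result to reproduce the run later.
import json
import platform
import subprocess
import sys

def run_manifest(config: dict) -> dict:
    """Record code version, environment, and configuration for this run."""
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        commit = "UNKNOWN"  # surface loudly in review rather than guessing

    return {
        "git_commit": commit,
        "python": sys.version,
        "platform": platform.platform(),
        "config": config,
    }

manifest = run_manifest({"seed": 42, "solver": "rk4"})
print(json.dumps(manifest, indent=2))
```

Writing this manifest next to every output file is cheap; reconstructing the same information months later is often impossible.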
- Human judgment drives scientific decisions; AI augments, never replaces. AI handles the mechanical, repetitive, and exploratory. Humans own the interpretation, direction, and final call.
- Negative results are valuable. Document disproven hypotheses with equal rigor. A well-documented dead end saves future effort and advances understanding.
- Correctness and precision over feature velocity. Shipping fast means nothing if the results are wrong. Measure twice.
- Loud errors over silent failures. Surface problems immediately. Never hide, swallow, or downplay errors. An error you see is an error you can fix.
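A contrast sketch, with hypothetical function names and file paths, showing the difference between swallowing an error and surfacing it with context:

```python
# Anti-pattern: a missing file silently becomes an empty dataset, and
# downstream analysis runs on nothing without anyone noticing.
def load_measurements_quietly(path):
    try:
        with open(path) as f:
            return [float(line) for line in f]
    except OSError:
        return []

# Preferred: let the error propagate, adding context so the failure
# points directly at its cause.
def load_measurements(path):
    try:
        with open(path) as f:
            return [float(line) for line in f]
    except OSError as exc:
        raise RuntimeError(f"could not load measurements from {path!r}") from exc
```

The quiet version produces a plausible-looking result from a broken state; the loud version stops the pipeline at the first sign of trouble.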
- Transparency in methodology. All assumptions, approximations, and limitations must be documented. Hidden assumptions are hidden bugs.
- Minimal sufficient complexity. Add complexity only when the science demands it. Simpler models, simpler code, simpler workflows — until they're insufficient.
- Refactor duplication on sight. When you spot duplicated logic, consolidate it. Two copies is a coincidence; three is a pattern that needs a shared implementation.
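A small sketch of this consolidation — the scripts and the normalization routine are hypothetical stand-ins for whatever logic has been copied around:

```python
# Before: the same min-max normalization appears in three analysis scripts,
# each copy free to drift independently:
#   values = [(v - min(vs)) / (max(vs) - min(vs)) for v in vs]   # script A
#   scaled = [(x - min(xs)) / (max(xs) - min(xs)) for x in xs]   # script B
#   ...and a third copy in script C.

# After: one shared implementation, with its edge case handled loudly.
def min_max_normalize(values: list[float]) -> list[float]:
    """Scale values into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        raise ValueError("cannot normalize a constant sequence")  # fail loudly
    return [(v - lo) / (hi - lo) for v in values]
```

Beyond removing drift, the shared function gives the edge case (a constant sequence) exactly one place to be handled, instead of three places to be handled inconsistently.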