Reflect phase · ~5 minutes

You’ve now closed at least one experiment end-to-end. As a project accumulates experiments, the value shifts from each individual experiment’s result to the pattern across them. Which directions are paying off? Which aren’t? Where are decisions defensible vs. ad hoc? What can a teammate (or a leader, or future-you) read on this screen to pick up the project’s state, without scheduling a sync?

This tutorial walks through the views that surface those answers: the outcomes timeline, the cluster signals across directions, the decision rationale on individual experiments, and the typed lineage between related ones. It is useful for an IC staying close to a project, a senior IC briefing leadership, or an engineering leader opening Remyx for the first time.

Documentation Index
Fetch the complete documentation index at: https://docs.remyx.ai/llms.txt
Use this file to discover all available pages before exploring further.
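If you’re scripting against the docs, or pointing an agent at them, fetching the index is a few lines. A minimal sketch in Python; only the URL comes from above, everything else is generic:

```python
import urllib.request

# Fetch the documentation index and print the pages it declares.
# llms.txt is plain text, typically one page or link reference per line.
INDEX_URL = "https://docs.remyx.ai/llms.txt"

with urllib.request.urlopen(INDEX_URL) as resp:
    for line in resp.read().decode("utf-8").splitlines():
        if line.strip():
            print(line)
```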
Open the project at the portfolio level
Pick your project from the switcher at the top of the sidebar, then click Outcomes under Experiments. This is the project’s experiment timeline. It groups every experiment under the project by date and offers three chart modes:

- Trend. Accuracy of the team’s bets over time. How often do experiments ship?
- Velocity. Experiment throughput, with shipped experiments highlighted. How fast is the team running?
- Impact. Observed deltas above and below zero. How big are the wins, and how often does anything regress?
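To pin down what each mode plots, here is a rough sketch of the three aggregates over a list of experiment records. The `Experiment` fields are illustrative assumptions, not Remyx’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """Illustrative record; field names are assumptions, not Remyx's schema."""
    shipped: bool   # did the decision land on "ship"?
    delta: float    # observed metric delta, signed (regressions are negative)

def trend(experiments: list[Experiment]) -> float:
    """Trend: how often do experiments ship? (hit rate)"""
    return sum(e.shipped for e in experiments) / len(experiments)

def velocity(experiments: list[Experiment]) -> tuple[int, int]:
    """Velocity: total throughput, with shipped experiments highlighted."""
    return len(experiments), sum(e.shipped for e in experiments)

def impact(experiments: list[Experiment]) -> list[float]:
    """Impact: the signed deltas, above and below zero."""
    return [e.delta for e in experiments]
```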
Drill into a decision
Click open any shipped experiment. The detail page shows the disposition, the rationale captured at decision time, the metrics that were used, and the eval template + decision policy that governed the call. This is where a project’s reasoning lives. A rationale like “Shipped because three of four benchmarks improved at ≥2% with confidence above 0.85; the MindCube regression was inside the noise floor” (written when the decision was made, against the policy that was in place) is what makes the project’s history defensible months later.

Browse a few decisions in a row and you can usually feel which directions the team has clarity on and which are still ambiguous. Repeated overrides on the same metric, missing rationales, or reject classifications followed by an override-to-ship are all signals worth tracking.
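That example rationale reads straight out of a decision policy. A minimal sketch of such a check, where the thresholds mirror the quoted rationale but the field names and structure are assumptions, not Remyx’s built-in policy:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    delta_pct: float   # improvement, in percent
    confidence: float  # confidence in the delta (e.g. from a bootstrap)

def should_ship(results: list[BenchmarkResult],
                min_wins: int = 3,
                min_delta_pct: float = 2.0,
                min_confidence: float = 0.85) -> bool:
    """Ship if at least `min_wins` benchmarks improve by >= `min_delta_pct`
    with confidence above `min_confidence`. Thresholds mirror the example
    rationale above; they are illustrative, not a built-in Remyx policy."""
    wins = [r for r in results
            if r.delta_pct >= min_delta_pct and r.confidence > min_confidence]
    return len(wins) >= min_wins
```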
Read the cluster patterns

Open Insights. This view groups experiments by tag, computes hit rates and average deltas per cluster, and classifies each direction as HIGH, MIXED, or LOW signal.
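As a rough illustration of what that grouping computes (the record shape and classification thresholds below are assumptions; the actual Insights heuristics may differ):

```python
from collections import defaultdict

def classify_clusters(experiments: list[dict]) -> dict[str, str]:
    """Group experiments by tag, then classify each direction's signal.
    Records are illustrative dicts: {"tags": [...], "shipped": bool, "delta": float}.
    Thresholds are assumptions, not the actual Insights heuristics."""
    by_tag: dict[str, list[dict]] = defaultdict(list)
    for exp in experiments:
        for tag in exp["tags"]:
            by_tag[tag].append(exp)

    signals: dict[str, str] = {}
    for tag, exps in by_tag.items():
        hit_rate = sum(e["shipped"] for e in exps) / len(exps)
        avg_delta = sum(e["delta"] for e in exps) / len(exps)
        if hit_rate >= 0.6 and avg_delta > 0:
            signals[tag] = "HIGH"
        elif hit_rate <= 0.2 or avg_delta < 0:
            signals[tag] = "LOW"
        else:
            signals[tag] = "MIXED"
    return signals
```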
Trace lineage across experiments

Each experiment’s sidebar lets you link related experiments by type:

- borrows_from. This variant uses ideas from an earlier experiment.
- variant_of. A fork of an earlier experiment with one parameter changed.
- replicates. Re-running an earlier experiment to confirm a result.
- references. A softer relationship; useful as a breadcrumb for context.
Links are visible from both sides: mark experiment A as a variant_of experiment B, and variant_of shows up on B’s sidebar as well.
To trace a chain across experiments, click through the sidebar one hop at a time. “This shipped variant borrows_from an earlier embedding-swap experiment, which replicates a paper from the digest two months ago.” Three clicks, no syncs.
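The same three-click chain, sketched as data. The edge store and experiment IDs below are hypothetical, for illustration only:

```python
# A typed lineage edge store and a hop-by-hop walk, mirroring the sidebar
# clicks. This data model is an assumption for illustration, not Remyx's API.
LINKS = {
    # experiment_id -> list of (relation, target_experiment_id)
    "exp-shipped-variant": [("borrows_from", "exp-embedding-swap")],
    "exp-embedding-swap": [("replicates", "exp-digest-paper")],
    "exp-digest-paper": [],
}

def trace(start: str) -> None:
    """Follow the first typed link from each experiment, one hop at a time."""
    current = start
    while LINKS.get(current):
        relation, target = LINKS[current][0]
        print(f"{current} --{relation}--> {target}")
        current = target

trace("exp-shipped-variant")
# exp-shipped-variant --borrows_from--> exp-embedding-swap
# exp-embedding-swap --replicates--> exp-digest-paper
```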
A graph-based lineage view sits on the roadmap as an additional way to read the same data. The relationships you create today will populate it when it lands.
Use it for a leadership brief
Three views, one screen, in roughly this order:

- Outcomes (Velocity / Impact). Shipping rate, hit rate, magnitude of impact.
- Insights (cluster patterns). Which directions are paying off, which aren’t.
- A specific experiment’s rationale. Why a particular call was made.
Recap
You’ve completed the full series. Your project now has:

- An extracted history that bootstraps the record
- A discovery feed grounded in your team’s work
- A locked eval template and decision policy
- At least one closed-loop experiment with a logged decision
- A portfolio view that reads at three levels (velocity, cluster, individual decision)
- Typed relationships across related experiments
Tips
Backfilled experiments are still part of the story
Experiments extracted by cold-start in Create your project appear in Outcomes and Insights alongside experiments created manually since. Where you remember the deltas, fill them in with the click-to-edit pill on each record. The impact chart and cluster patterns get sharper.
Use the Outcomes view in real reviews
Don’t assemble a deck. Open the project’s Outcomes tab in the meeting. Flip through the three chart modes. Click into the experiment a leader asks about. The screen is the artifact.
Link related experiments as you create them

Typed relationships (borrows_from, variant_of, replicates, references) are easiest to capture at creation time, while the connection is still fresh. Reconstructing lineage after the fact means re-deriving context you already had.
Next
- Run the loop again. Loop back to scoping with the next idea from your digest.
- Connect more tools. Wire integrations into your project (Linear, Slack, Jira).
- ExperimentOps concepts. The methodology behind what you just built.
- Series overview. The full series arc.