/experiments/insights
A team runs 14 experiments in a quarter. Five explored retrieval and all worked. Three explored routing and none did. But nobody sees this pattern because each experiment is tracked in a different tool and nobody has time for meta-analysis across the full body of work.
Insights does that meta-analysis automatically. It groups your completed experiments by direction, computes which themes consistently produce positive results, and recommends what to try next based on your team’s history.
How Pattern Detection Works
Remyx analyzes all completed experiments in the current project through three steps:

Tag clustering
Experiments are grouped by their tags. Each tag with 3+ completed experiments becomes a cluster to evaluate.
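The grouping step can be sketched as follows. This is a minimal illustration, not Remyx's actual implementation; the record fields (`status`, `tags`) and the `MIN_CLUSTER_SIZE` constant are assumptions.

```python
from collections import defaultdict

MIN_CLUSTER_SIZE = 3  # a tag needs 3+ completed experiments to form a cluster


def cluster_by_tag(experiments):
    """Group completed experiments by tag; keep tags with 3+ members."""
    groups = defaultdict(list)
    for exp in experiments:
        if exp["status"] != "completed":
            continue  # only completed experiments count toward a cluster
        for tag in exp["tags"]:
            groups[tag].append(exp)
    return {
        tag: exps
        for tag, exps in groups.items()
        if len(exps) >= MIN_CLUSTER_SIZE
    }
```

An experiment with multiple tags contributes to every matching cluster, which is why consistent tagging matters for the downstream analysis.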
Hit rate computation
For each cluster, Remyx computes what fraction of experiments produced a positive, statistically significant delta.
Signal classification
Each cluster is classified based on average delta and hit rate:
| Signal | Criteria | Meaning |
|---|---|---|
| High (green) | Avg delta ≥ 1.5% and ≥ 50% positive | This direction consistently works |
| Low (red) | Avg delta ≤ 0.5% or < 50% positive | This direction isn’t producing results |
| Mixed (yellow) | Between high and low | Inconclusive — needs more experiments |
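Steps two and three together can be sketched as a single function implementing the thresholds in the table above. The field names (`delta` as a percentage, `significant` as a boolean) are illustrative assumptions, not Remyx's schema.

```python
def classify_cluster(experiments):
    """Return (signal, avg_delta, hit_rate) for one tag cluster."""
    deltas = [e["delta"] for e in experiments]
    avg_delta = sum(deltas) / len(deltas)

    # Hit rate: fraction of experiments with a positive,
    # statistically significant delta
    hits = sum(1 for e in experiments if e["delta"] > 0 and e["significant"])
    hit_rate = hits / len(experiments)

    if avg_delta >= 1.5 and hit_rate >= 0.5:
        return "high", avg_delta, hit_rate
    if avg_delta <= 0.5 or hit_rate < 0.5:
        return "low", avg_delta, hit_rate
    return "mixed", avg_delta, hit_rate
```

Note that "mixed" is simply everything that is neither high nor low: a decent average delta but an inconsistent hit rate, or vice versa.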
The Insights View
The page is organized into three sections by signal strength:

Strong Directions
Clusters where the team’s experiments have consistently produced positive results. These are the directions worth doubling down on.

Mixed Signal
Clusters with inconclusive results — some experiments positive, some not. These may need more experiments or a refined approach.

Weak Directions
Clusters where experiments haven’t produced meaningful results. Useful signal — knowing what doesn’t work prevents wasted effort.

Cluster Details
Each cluster appears as a collapsible row showing:

| Element | Description |
|---|---|
| Tag name | The grouping tag (e.g., “retrieval”, “prompt-engineering”) |
| Signal badge | HIGH / LOW / MIXED with color |
| Positive count | E.g., “5 of 5 significant” |
| Avg delta | Average observed improvement across the cluster |
| Experiment count | Total experiments in this cluster |
Expanding a row reveals:

- Each experiment with its name, delta, status, and decision summary
- Click any experiment to navigate to its detail view
- Research-backed next steps: papers, repos, or models whose methods align with this cluster’s direction but haven’t been tried yet
- Each recommendation shows title and a link to the resource viewer
Starting from a Recommendation
Each recommended resource has a Start Experiment button. Clicking it creates a new experiment with:

- The resource linked as the source
- The cluster’s tag pre-filled
- The current project selected
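A minimal sketch of the pre-filled experiment such a click could produce. The function and field names here are hypothetical, chosen only to mirror the three pre-filled values listed above; they are not Remyx's actual API.

```python
def experiment_from_recommendation(resource_id, cluster_tag, project_id):
    """Build a draft experiment pre-filled from a recommended resource."""
    return {
        "source_resource": resource_id,  # the paper/repo/model linked as source
        "tags": [cluster_tag],           # the cluster's tag, pre-filled
        "project": project_id,           # the current project, selected
        "status": "draft",               # not yet run
    }
```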
Context Line
At the top of the page, a summary line shows the scope: “5 directions across 14 experiments”. This differentiates the Insights view from the Overview (which shows initiative-level health) — Insights operates at the tag/direction level within a single project.
When Insights Appear
Insights require a minimum of 4 experiments with shared tags to surface meaningful patterns. With fewer experiments, the page shows an empty state encouraging more experimentation.

Tags drive pattern detection. Consistent, descriptive tagging across experiments is what makes Insights useful. Use reusable tags that describe the direction (e.g., retrieval, prompt-engineering, tool-use) rather than one-off labels.

Example
After 14 experiments on a Customer Support AI initiative:

Related
Outcomes
View and manage individual experiments
Overview
Portfolio view across all initiatives
ExperimentOps Concepts
The methodology behind pattern detection