URL: /experiments

Most experiment tracking stops at artifacts and metrics. But the most valuable output of an experiment is the team’s interpretation: what worked, what didn’t, and what to do next. That context typically lives in someone’s head, and when they leave, the next person starts from scratch. Outcomes captures what your team tried, what happened, and what was decided and why. Every logged decision is institutional knowledge that persists regardless of team changes.

Experiment Timeline

The main view shows all experiments for the currently selected project, sorted by date or impact.

Experiment Cards

Each experiment appears as a card showing:
  • Name and creation date
  • Source type icon — research resource, hypothesis, incident, or recommendation
  • Status badge — Configure, In Progress, Complete, or Validated
  • Target metric and observed delta (color-coded: green positive, red negative, gray pending)
  • Tags for grouping and pattern detection
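The delta color-coding described above can be sketched as a small helper. This is an illustrative sketch only; the function name and the treatment of a zero delta are assumptions, not part of the product:

```python
from typing import Optional

def delta_color(observed_delta: Optional[float]) -> str:
    """Map an experiment's observed delta to a card badge color:
    green for positive, red for negative, gray while results are pending."""
    if observed_delta is None:   # no results yet
        return "gray"
    if observed_delta > 0:
        return "green"
    if observed_delta < 0:
        return "red"
    return "gray"  # zero delta treated as neutral here (assumption)

print(delta_color(0.12))   # green
print(delta_color(-0.03))  # red
print(delta_color(None))   # gray
```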

Metric Trend Chart

A collapsible line chart at the top tracks your target metric across completed experiments, showing improvement (or lack thereof) over time. When collapsed, key trend values remain visible in the summary bar.
Above the timeline, a set of controls filters and orders the cards:
  • Search bar — instant filtering by name, hypothesis, initiative, or tags
  • Status chips — All / Configure / In Progress / Complete / Validated (with counts)
  • Source filters — toggle by source type (Papers / Hypotheses)
  • Sort — by date (newest first) or by impact (largest delta first)
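The filter and sort controls above amount to straightforward list operations. A minimal sketch over a hypothetical in-memory card model (all field names are assumptions, not the product's actual data model):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExperimentCard:
    name: str
    status: str     # Configure / In Progress / Complete / Validated
    created: date
    delta: float    # observed delta on the target metric

cards = [
    ExperimentCard("Re-ranker swap", "Complete", date(2024, 5, 2), 0.12),
    ExperimentCard("Chunk-size sweep", "In Progress", date(2024, 5, 9), 0.0),
    ExperimentCard("Prompt rewrite", "Complete", date(2024, 4, 20), -0.03),
]

# Status chips: keep only completed experiments.
complete = [c for c in cards if c.status == "Complete"]

# Sort by date (newest first) or by impact (largest absolute delta first).
by_date = sorted(cards, key=lambda c: c.created, reverse=True)
by_impact = sorted(cards, key=lambda c: abs(c.delta), reverse=True)

print([c.name for c in by_impact])
# ['Re-ranker swap', 'Prompt rewrite', 'Chunk-size sweep']
```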

Creating an Experiment

Click + New Experiment in the top-right corner. The create form adapts based on source type:
For research-sourced experiments, select the Search papers source mode, then search for a paper, repo, or model by title; Remyx autocompletes from its resource index.
  • Name — short descriptive name
  • Resource — search and select from the index
  • Hypothesis — what you expect to happen
  • Target metric — dropdown of metrics configured for this project
  • Project — which initiative this belongs to
  • Tags — comma-separated labels for grouping
  • Target repository — GitHub repo for the implementation (optional)
  • Tracker link — link to a Linear, Jira, or GitHub issue (optional)
After creation, you’re redirected to the experiment detail page.

Experiment Detail

URL: /experiments/dashboard/<experiment_id>

A two-column layout showing the full lifecycle of a single experiment.

Origin Section

For research-sourced experiments, the Origin section shows the launch context — built automatically on first load (~2-4 seconds):
  • Resource title — link to the resource viewer
  • Abstract excerpt — one-sentence summary; click to edit
  • Key methods — technique badges extracted from the resource; add or remove inline
  • Target repository — repo where the implementation lands; changing it triggers a context rebuild
  • Implementation plan — AI-generated plan referencing actual file paths; collapsible, editable, and regeneratable
  • Docker image — pre-built environment reference; read-only
For hypothesis-sourced experiments, the Origin section shows the hypothesis text.

Analysis Card

Combined Hypothesis and Decision in a single card:
  • Hypothesis — the team’s prediction, always visible at top
  • Decision — logged after results are in; includes text, author, and timestamp; click to edit

Implement Section

A compact bar for Claude Code integration:
  • Copy-paste command to run Claude Code with the Remyx MCP connection
  • Link to Connectors for setup
When a PR exists, a green banner appears at the top of the page with the PR title, status, and link to GitHub.

Activity Feed

Unified chronological feed combining:
  • Comments with @mention support, edit/delete
  • System events from the knowledge graph (experiment created, status changed, decision logged, PR opened)
The detail page also includes:
  • Status — dropdown: Configure → In Progress → Complete → Validated
  • Metric — target metric, observed delta, confidence level
  • Resources — linked artifacts: PR, ticket, repo, dataset, tracking run, custom links
  • Related Experiments — bidirectional linking with cross-project search
  • Project — initiative context from project settings
  • Tags — editable tag list
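The Status dropdown lists the stages in lifecycle order. If you wanted to enforce forward-only progression on your own side, a sketch might look like this; nothing here implies the product actually enforces it:

```python
# Lifecycle stages in order, as shown in the Status dropdown.
STATUSES = ["Configure", "In Progress", "Complete", "Validated"]

def is_forward(current: str, new: str) -> bool:
    """True if `new` is a later stage than `current`."""
    return STATUSES.index(new) > STATUSES.index(current)

print(is_forward("Configure", "In Progress"))  # True
print(is_forward("Complete", "Configure"))     # False
```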

Logging a Decision

The most important step in the ExperimentOps workflow. After reviewing results:
  1. Scroll to the Decision section in the Analysis card
  2. Write what the team decided and why
  3. The decision is timestamped and attributed to the author
Good decisions capture reasoning and next steps:
“Ship to 100%. The re-ranker specifically helps with multi-topic tickets where the old retriever returned tangentially related articles. Three retrieval experiments now, all positive. This is our best direction.”

Related pages

  • Insights — see cross-experiment patterns and recommended next steps
  • Overview — leadership portfolio across all initiatives
  • Connectors — link GitHub, Linear, and Jira for bidirectional sync
  • Projects & Settings — configure metrics, repos, and integrations per project