Experiments
Remyx Experiments is an easy-to-use interface, based on agile principles, for creating, testing, and improving AI experiments. It supports tasks such as data curation, model fine-tuning, and model evaluation. With Remyx Experiments, you can:
- Quickly launch new experiments and prototypes.
- Easily track the progress and performance of ongoing tasks.
- Efficiently iterate by reviewing results, downloading artifacts, and seamlessly pushing outcomes directly to platforms like the Hugging Face Hub.
How It Works
Navigate to the Experiments tab to view your ongoing and completed experiments. You can configure a new experiment by creating a new card under the “Configure” column. Currently supported experiment types include:
- Data Curation
- Model Finetune
- Evaluation
More experiment types and integrations coming soon!

Experiment Types
Data Curation
Compose datasets for fine-tuning using minimal inputs such as seed phrases, existing datasets, or Hugging Face datasets. After your data curation job completes:
- Preview Dataset: Quickly inspect a preview of your generated or augmented dataset.
- Download Dataset: Get immediate access to your curated data.
- Push to Hub: Easily share your dataset on Hugging Face for community use.
Model Finetune
This experiment type currently supports training large language models (LLMs), with support for multi-modal foundation models coming soon. After your fine-tuning job completes:
- Download Model: Easily retrieve your fine-tuned model.
- Push to Hub: Directly share your model on the Hugging Face Hub for public use or further collaboration.
Evaluation
This experiment type currently supports MyxMatch-style evaluations, with more evaluation types coming soon. After your evaluation job completes:
- View Evaluation Results: Access detailed evaluation metrics and rankings.
- Download Results: Retrieve detailed evaluation artifacts in JSON format.
- Push Evaluation to Hub: Share evaluation results to Hugging Face as datasets for transparency and community benchmarking.
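Since the downloaded results are JSON artifacts, they can be post-processed with standard tooling. The sketch below ranks models by score from such an artifact; the schema (a list of entries with `model` and `score` keys) is an assumption for illustration only:

```python
import json

def rank_models(results_json):
    """Sort evaluation entries by score, highest first.

    Assumes each entry has 'model' and 'score' keys (hypothetical schema).
    """
    entries = json.loads(results_json)
    return sorted(entries, key=lambda e: e["score"], reverse=True)

# Hypothetical usage with a downloaded results file:
# with open("evaluation_results.json", encoding="utf-8") as f:
#     for rank, entry in enumerate(rank_models(f.read()), start=1):
#         print(rank, entry["model"], entry["score"])
```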