Available Models
Remyx supports evaluation, training, and deployment for these model families:

Language Models
| Model family | Remyx constant | Notes |
|---|---|---|
| Bittensor | "BTLMLMHeadModel" | LM-head model (BTLM) |
| BioGPT | "BioGptForCausalLM" | Causal LM (biomedical text) |
| Bloom | "BloomForCausalLM" | Causal LM |
| Bloom | "BloomModel" | Base / backbone model |
| ChatGLM | "ChatGLMModel" | Base / backbone model |
| CodeGen | "CodeGenForCausalLM" | Causal LM (code) |
| Falcon | "FalconForCausalLM" | Causal LM |
| Falcon Mamba | "FalconMambaForCausalLM" | Mamba (SSM) causal LM |
| GPT-2 Base | "GPT2Model" | Base / backbone model |
| GPT BigCode | "GPTBigCodeForCausalLM" | Causal LM (code) |
| GPT BigCode with LM Head | "GPTBigCodeLMHeadModel" | Code model with LM head |
| GPT-J | "GPTJForCausalLM" | Causal LM |
| GPT-Neo | "GPTNeoForCausalLM" | Causal LM |
| GPT-NeoX | "GPTNeoXForCausalLM" | Causal LM |
| Gemma v2 | "Gemma2ForCausalLM" | Causal LM |
| Gemma | "GemmaForCausalLM" | Causal LM |
| LLaMA | "LlamaForCausalLM" | Causal LM |
| MPT | "MPTForCausalLM" | Causal LM |
| Mistral | "MistralForCausalLM" | Causal LM |
| MobileLLM | "MobileLLMForCausalLM" | Causal LM (compact) |
| MosaicGPT Base | "MosaicGPT" | Base / backbone model |
| OPT | "OPTForCausalLM" | Causal LM |
| Phi 3 | "Phi3ForCausalLM" | Causal LM |
| Phi 3 Small | "Phi3SmallForCausalLM" | Causal LM |
| Phi | "PhiForCausalLM" | Causal LM |
| Qwen with LM Head | "QWenLMHeadModel" | LM-head model |
| Qwen v2 | "Qwen2ForCausalLM" | Causal LM |
| Qwen v2 MoE | "Qwen2MoeForCausalLM" | Mixture-of-experts causal LM |
| RW | "RWForCausalLM" | Causal LM |
| Recurrent Gemma | "RecurrentGemmaForCausalLM" | Recurrent / compressed-attention variant |
| Reformer with LM Head | "ReformerModelWithLMHead" | Reformer with LM head |
| RWKV v5 | "Rwkv5ForCausalLM" | RWKV causal LM |
| RWKV | "RwkvForCausalLM" | RWKV causal LM |
| StableLM Alpha | "StableLMAlphaForCausalLM" | Causal LM |
| StableLM Epoch | "StableLMEpochForCausalLM" | Causal LM |
| StableLM | "StableLmForCausalLM" | Causal LM |
| Starcoder v2 | "Starcoder2ForCausalLM" | Causal LM (code) |
| XGLM | "XGLMForCausalLM" | Causal LM (multilingual) |
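The constants above are plain strings. As an illustrative sketch (the enum below is a hypothetical convenience wrapper, not part of the Remyx SDK), they can be grouped so that code refers to named members instead of raw string literals:

```python
from enum import Enum

# Illustrative only: wrap a few of the Remyx model constants listed above
# in an enum so callers avoid typo-prone raw strings. The enum itself is
# a hypothetical helper, not an SDK type.
class LanguageModel(str, Enum):
    LLAMA = "LlamaForCausalLM"
    MISTRAL = "MistralForCausalLM"
    GEMMA_2 = "Gemma2ForCausalLM"
    QWEN_2 = "Qwen2ForCausalLM"
    PHI_3 = "Phi3ForCausalLM"

# The .value is the exact string the tables above document.
print(LanguageModel.LLAMA.value)
```

Because the enum subclasses `str`, members compare equal to the documented constant strings and can be passed anywhere a plain string is expected.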
Multi-modal Models
Remyx supports training and deployment for these model families:

| Model family | Remyx constant | Notes |
|---|---|---|
| LLaVA V1.5 | "LlavaLlamaForCausalLM" | Vision–language model (LLaMA backbone, causal LM head) |
Evaluation Tasks
Remyx currently supports the following evaluation types:

| Task type | Constant | Notes |
|---|---|---|
| MYXMATCH | "myxmatch" | Remyx matching / comparison workflow |
| BENCHMARK | "benchmark" | Standard benchmark suite (LightEval-backed tasks below) |
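A minimal sketch of how code might branch on these two constants (the `describe` function is a hypothetical placeholder for illustration, not a Remyx SDK call):

```python
# The two documented Remyx evaluation-type constants.
MYXMATCH = "myxmatch"
BENCHMARK = "benchmark"

def describe(task_type: str) -> str:
    """Hypothetical helper: map an evaluation-type constant to a summary."""
    if task_type == MYXMATCH:
        return "Remyx matching / comparison workflow"
    if task_type == BENCHMARK:
        return "LightEval-backed benchmark suite"
    raise ValueError(f"unknown evaluation type: {task_type!r}")

print(describe(BENCHMARK))
```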
Benchmark Tasks
The following LightEval evaluation tasks are currently supported:

BIG-Bench (BIGBENCH)
| Constant | Task string | Notes |
|---|---|---|
| BIGBENCH_ANALOGICAL_SIMILARITY | `"bigbench\|analogical_similarity\|0\|0"` | Analogical reasoning |
| BIGBENCH_AUTHORSHIP_VERIFICATION | `"bigbench\|authorship_verification\|0\|0"` | Style / authorship attribution |
| BIGBENCH_CODE_LINE_DESCRIPTION | `"bigbench\|code_line_description\|0\|0"` | Code ↔ natural language |
| BIGBENCH_CONCEPTUAL_COMBINATIONS | `"bigbench\|conceptual_combinations\|0\|0"` | Concept combination |
| BIGBENCH_LOGICAL_DEDUCTION | `"bigbench\|logical_deduction\|0\|0"` | Deductive logic |
Harness
| Constant | Task string | Notes |
|---|---|---|
| HARNESS_CAUSAL_JUDGMENT | `"harness\|bbh:causal_judgment\|0\|0"` | BBH: causal judgment |
| HARNESS_DATE_UNDERSTANDING | `"harness\|bbh:date_understanding\|0\|0"` | BBH: calendar / dates |
| HARNESS_DISAMBIGUATION_QA | `"harness\|bbh:disambiguation_qa\|0\|0"` | BBH: ambiguous questions |
| HARNESS_GEOMETRIC_SHAPES | `"harness\|bbh:geometric_shapes\|0\|0"` | BBH: geometry |
| HARNESS_LOGICAL_DEDUCTION_FIVE_OBJECTS | `"harness\|bbh:logical_deduction_five_objects\|0\|0"` | BBH: multi-object deduction |
HELM
| Constant | Task string | Notes |
|---|---|---|
| HELM_BABI_QA | `"helm\|babi_qa\|0\|0"` | bAbI reading / QA |
| HELM_BBQ | `"helm\|bbq\|0\|0"` | Bias Benchmark for QA |
| HELM_BOOLQ | `"helm\|boolq\|0\|0"` | Yes/no reading comprehension |
| HELM_COMMONSENSEQA | `"helm\|commonsenseqa\|0\|0"` | Commonsense MCQ |
| HELM_MMLU_PHILOSOPHY | `"helm\|mmlu:philosophy\|0\|0"` | MMLU philosophy subset |
Leaderboard
| Constant | Task string | Notes |
|---|---|---|
| LEADERBOARD_ARC_CHALLENGE | `"leaderboard\|arc:challenge\|0\|0"` | ARC (challenge) |
| LEADERBOARD_GSM8K | `"leaderboard\|gsm8k\|0\|0"` | Grade-school math (8k) |
| LEADERBOARD_HELLASWAG | `"leaderboard\|hellaswag\|0\|0"` | Commonsense sentence completion |
| LEADERBOARD_TRUTHFULQA_MC | `"leaderboard\|truthfulqa:mc\|0\|0"` | TruthfulQA (multiple choice) |
| LEADERBOARD_MMLU_WORLD_RELIGIONS | `"leaderboard\|mmlu:world_religions\|0\|0"` | MMLU world religions |
LightEval
| Constant | Task string | Notes |
|---|---|---|
| LIGHTEVAL_ARC_EASY | `"lighteval\|arc:easy\|0\|0"` | ARC (easy) |
| LIGHTEVAL_ASDIV | `"lighteval\|asdiv\|0\|0"` | ASDiv math word problems |
| LIGHTEVAL_BIGBENCH_MOVIE_RECOMMENDATION | `"lighteval\|bigbench:movie_recommendation\|0\|0"` | BigBench: recommendations |
| LIGHTEVAL_GLUE_COLA | `"lighteval\|glue:cola\|0\|0"` | GLUE CoLA (linguistic acceptability) |
| LIGHTEVAL_TRUTHFULQA_GEN | `"lighteval\|truthfulqa:gen\|0\|0"` | TruthfulQA (generation) |
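Each task string above is pipe-delimited. In the common LightEval convention the four fields are suite, task name, number of few-shot examples, and a truncate-few-shots flag; treat that field interpretation as an assumption here rather than a Remyx guarantee. A minimal sketch of splitting one apart:

```python
# Example constant from the Leaderboard table above.
LEADERBOARD_GSM8K = "leaderboard|gsm8k|0|0"

def parse_task(task_string: str) -> dict:
    """Split a pipe-delimited task string into its four fields.

    Field meanings follow the usual LightEval convention
    (suite|task|num_fewshot|truncate_fewshots) — an assumption,
    not documented Remyx behavior.
    """
    suite, task, num_fewshot, truncate = task_string.split("|")
    return {
        "suite": suite,
        "task": task,
        "num_fewshot": int(num_fewshot),
        "truncate_fewshots": int(truncate),
    }

print(parse_task(LEADERBOARD_GSM8K))
```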