This document provides an overview of the constants available for Remyx APIs and tasks. These constants define the models, benchmarks, and datasets supported by Remyx.

Available Models

Remyx supports evaluation, training, and deployment for these model families:

Language Models

| Model family | Remyx constant | Notes |
| --- | --- | --- |
| Bittensor | `"BTLMLMHeadModel"` | LM-head model (BTLM) |
| BioGPT | `"BioGptForCausalLM"` | Causal LM (biomedical text) |
| Bloom | `"BloomForCausalLM"` | Causal LM |
| Bloom | `"BloomModel"` | Base / backbone model |
| ChatGLM | `"ChatGLMModel"` | Base / backbone model |
| CodeGen | `"CodeGenForCausalLM"` | Causal LM (code) |
| Falcon | `"FalconForCausalLM"` | Causal LM |
| Falcon Mamba | `"FalconMambaForCausalLM"` | Mamba (SSM) causal LM |
| GPT-2 Base | `"GPT2Model"` | Base / backbone model |
| GPT BigCode | `"GPTBigCodeForCausalLM"` | Causal LM (code) |
| GPT BigCode with LM Head | `"GPTBigCodeLMHeadModel"` | Code model with LM head |
| GPT-J | `"GPTJForCausalLM"` | Causal LM |
| GPT-Neo | `"GPTNeoForCausalLM"` | Causal LM |
| GPT-NeoX | `"GPTNeoXForCausalLM"` | Causal LM |
| Gemma v2 | `"Gemma2ForCausalLM"` | Causal LM |
| Gemma | `"GemmaForCausalLM"` | Causal LM |
| LLaMA | `"LlamaForCausalLM"` | Causal LM |
| MPT | `"MPTForCausalLM"` | Causal LM |
| Mistral | `"MistralForCausalLM"` | Causal LM |
| MobileLLM | `"MobileLLMForCausalLM"` | Causal LM (compact) |
| MosaicGPT Base | `"MosaicGPT"` | Base / backbone model |
| OPT | `"OPTForCausalLM"` | Causal LM |
| Phi 3 | `"Phi3ForCausalLM"` | Causal LM |
| Phi 3 Small | `"Phi3SmallForCausalLM"` | Causal LM |
| Phi | `"PhiForCausalLM"` | Causal LM |
| QWen with LM Head | `"QWenLMHeadModel"` | LM-head model |
| Qwen v2 | `"Qwen2ForCausalLM"` | Causal LM |
| Qwen v2 MoE | `"Qwen2MoeForCausalLM"` | Mixture-of-experts causal LM |
| RW | `"RWForCausalLM"` | Causal LM |
| Recurrent Gemma | `"RecurrentGemmaForCausalLM"` | Recurrent / compressed-attention variant |
| Reformer with LM Head | `"ReformerModelWithLMHead"` | Reformer with LM head |
| Rwkv v5 | `"Rwkv5ForCausalLM"` | RWKV causal LM |
| Rwkv | `"RwkvForCausalLM"` | RWKV causal LM |
| StableLM Alpha | `"StableLMAlphaForCausalLM"` | Causal LM |
| StableLM Epoch | `"StableLMEpochForCausalLM"` | Causal LM |
| StableLM | `"StableLmForCausalLM"` | Causal LM |
| Starcoder v2 | `"Starcoder2ForCausalLM"` | Causal LM (code) |
| XGLM | `"XGLMForCausalLM"` | Causal LM (multilingual) |
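Most of these constants appear to mirror Hugging Face `transformers` architecture class names (the `architectures` field in a model's `config.json`). As a minimal sketch, a client-side check that a model's architecture appears in the supported list might look like this; the helper name and the excerpted set below are illustrative, not part of the Remyx API:

```python
# Excerpt of the supported language-model constants from the table above.
SUPPORTED_LANGUAGE_MODELS = {
    "LlamaForCausalLM",
    "MistralForCausalLM",
    "FalconForCausalLM",
    "GPT2Model",
    "Qwen2ForCausalLM",
    "Starcoder2ForCausalLM",
}

def is_supported(architecture: str) -> bool:
    """Return True if the architecture constant is in the supported set."""
    return architecture in SUPPORTED_LANGUAGE_MODELS

print(is_supported("LlamaForCausalLM"))            # True
print(is_supported("T5ForConditionalGeneration"))  # False
```

In practice the architecture name can be read from a checkpoint's `config.json` before submitting it, so unsupported models fail fast on the client side.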

Multi-modal Models

Remyx supports training and deployment for these model families:
| Model family | Remyx constant | Notes |
| --- | --- | --- |
| LLaVA V1.5 | `"LlavaLlamaForCausalLM"` | Vision–language model (LLaMA backbone, causal LM head) |

Evaluation Tasks

Remyx currently supports the following evaluation types:
| Task type | Constant | Notes |
| --- | --- | --- |
| MYXMATCH | `"myxmatch"` | Remyx matching / comparison workflow |
| BENCHMARK | `"benchmark"` | Standard benchmark suite (LightEval-backed tasks below) |
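As a hypothetical illustration of how these constants combine, an evaluation request might pair a model constant with a task-type constant. The field names below are assumed for this sketch and are not the documented Remyx request schema:

```python
# Hypothetical request payload; "model" and "task_type" are illustrative
# field names, not the confirmed Remyx API schema.
eval_request = {
    "model": "LlamaForCausalLM",  # a supported model constant
    "task_type": "benchmark",     # an evaluation task constant
}

print(eval_request["task_type"])  # benchmark
```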

Benchmark Tasks

The following LightEval evaluation tasks are currently supported:

BIG-Bench (BIGBENCH)

| Constant | Task string | Notes |
| --- | --- | --- |
| BIGBENCH_ANALOGICAL_SIMILARITY | `"bigbench\|analogical_similarity\|0\|0"` | Analogical reasoning |
| BIGBENCH_AUTHORSHIP_VERIFICATION | `"bigbench\|authorship_verification\|0\|0"` | Style / authorship attribution |
| BIGBENCH_CODE_LINE_DESCRIPTION | `"bigbench\|code_line_description\|0\|0"` | Code ↔ natural language |
| BIGBENCH_CONCEPTUAL_COMBINATIONS | `"bigbench\|conceptual_combinations\|0\|0"` | Concept combination |
| BIGBENCH_LOGICAL_DEDUCTION | `"bigbench\|logical_deduction\|0\|0"` | Deductive logic |

Harness

| Constant | Task string | Notes |
| --- | --- | --- |
| HARNESS_CAUSAL_JUDGMENT | `"harness\|bbh:causal_judgment\|0\|0"` | BBH: causal judgment |
| HARNESS_DATE_UNDERSTANDING | `"harness\|bbh:date_understanding\|0\|0"` | BBH: calendar / dates |
| HARNESS_DISAMBIGUATION_QA | `"harness\|bbh:disambiguation_qa\|0\|0"` | BBH: ambiguous questions |
| HARNESS_GEOMETRIC_SHAPES | `"harness\|bbh:geometric_shapes\|0\|0"` | BBH: geometry |
| HARNESS_LOGICAL_DEDUCTION_FIVE_OBJECTS | `"harness\|bbh:logical_deduction_five_objects\|0\|0"` | BBH: multi-object deduction |

HELM

| Constant | Task string | Notes |
| --- | --- | --- |
| HELM_BABI_QA | `"helm\|babi_qa\|0\|0"` | bAbI reading / QA |
| HELM_BBQ | `"helm\|bbq\|0\|0"` | Bias Benchmark for QA |
| HELM_BOOLQ | `"helm\|boolq\|0\|0"` | Yes/no reading comprehension |
| HELM_COMMONSENSEQA | `"helm\|commonsenseqa\|0\|0"` | Commonsense MCQ |
| HELM_MMLU_PHILOSOPHY | `"helm\|mmlu:philosophy\|0\|0"` | MMLU philosophy subset |

Leaderboard

| Constant | Task string | Notes |
| --- | --- | --- |
| LEADERBOARD_ARC_CHALLENGE | `"leaderboard\|arc:challenge\|0\|0"` | ARC (challenge) |
| LEADERBOARD_GSM8K | `"leaderboard\|gsm8k\|0\|0"` | Grade-school math (8k) |
| LEADERBOARD_HELLASWAG | `"leaderboard\|hellaswag\|0\|0"` | Commonsense sentence completion |
| LEADERBOARD_TRUTHFULQA_MC | `"leaderboard\|truthfulqa:mc\|0\|0"` | TruthfulQA (multiple choice) |
| LEADERBOARD_MMLU_WORLD_RELIGIONS | `"leaderboard\|mmlu:world_religions\|0\|0"` | MMLU world religions |

LightEval

| Constant | Task string | Notes |
| --- | --- | --- |
| LIGHTEVAL_ARC_EASY | `"lighteval\|arc:easy\|0\|0"` | ARC (easy) |
| LIGHTEVAL_ASDIV | `"lighteval\|asdiv\|0\|0"` | ASDiv math word problems |
| LIGHTEVAL_BIGBENCH_MOVIE_RECOMMENDATION | `"lighteval\|bigbench:movie_recommendation\|0\|0"` | BigBench: recommendations |
| LIGHTEVAL_GLUE_COLA | `"lighteval\|glue:cola\|0\|0"` | GLUE CoLA (linguistic acceptability) |
| LIGHTEVAL_TRUTHFULQA_GEN | `"lighteval\|truthfulqa:gen\|0\|0"` | TruthfulQA (generation) |
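The benchmark task strings above follow LightEval's `{suite}|{task}|{num_few_shot}|{truncate_few_shots}` convention. A small helper (the function name is assumed for this sketch) for assembling them:

```python
def make_task_string(suite: str, task: str,
                     num_few_shot: int = 0, truncate_few_shots: int = 0) -> str:
    """Build a LightEval-style task string: suite|task|num_few_shot|truncate."""
    return f"{suite}|{task}|{num_few_shot}|{truncate_few_shots}"

print(make_task_string("lighteval", "arc:easy"))           # lighteval|arc:easy|0|0
print(make_task_string("harness", "bbh:causal_judgment"))  # harness|bbh:causal_judgment|0|0
```

The trailing two fields are the few-shot count and a truncation flag; every constant in the tables above uses `0|0` (zero-shot, no few-shot truncation).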