Available Models
Remyx supports evaluation, training, and deployment for these model families:

Language Models
- Bittensor: "BTLMLMHeadModel"
- BioGPT: "BioGptForCausalLM"
- Bloom: "BloomForCausalLM"
- Bloom: "BloomModel"
- ChatGLM: "ChatGLMModel"
- CodeGen: "CodeGenForCausalLM"
- Falcon: "FalconForCausalLM"
- Falcon Mamba: "FalconMambaForCausalLM"
- GPT-2 Base: "GPT2Model"
- GPT BigCode: "GPTBigCodeForCausalLM"
- GPT BigCode with LM Head: "GPTBigCodeLMHeadModel"
- GPT-J: "GPTJForCausalLM"
- GPT-Neo: "GPTNeoForCausalLM"
- GPT-NeoX: "GPTNeoXForCausalLM"
- Gemma v2: "Gemma2ForCausalLM"
- Gemma: "GemmaForCausalLM"
- LLaMA: "LlamaForCausalLM"
- MPT: "MPTForCausalLM"
- Mistral: "MistralForCausalLM"
- MobileLLM: "MobileLLMForCausalLM"
- MosaicGPT Base: "MosaicGPT"
- OPT: "OPTForCausalLM"
- Phi 3: "Phi3ForCausalLM"
- Phi 3 Small: "Phi3SmallForCausalLM"
- Phi: "PhiForCausalLM"
- QWen with LM Head: "QWenLMHeadModel"
- Qwen v2: "Qwen2ForCausalLM"
- Qwen v2 MoE: "Qwen2MoeForCausalLM"
- RW: "RWForCausalLM"
- Recurrent Gemma: "RecurrentGemmaForCausalLM"
- Reformer with LM Head: "ReformerModelWithLMHead"
- Rwkv v5: "Rwkv5ForCausalLM"
- Rwkv: "RwkvForCausalLM"
- StableLM Alpha: "StableLMAlphaForCausalLM"
- StableLM Epoch: "StableLMEpochForCausalLM"
- StableLM: "StableLmForCausalLM"
- Starcoder v2: "Starcoder2ForCausalLM"
- XGLM: "XGLMForCausalLM"
Multi-modal Models
Remyx supports training and deployment for these model families:
- LLaVA V1.5: "LlavaLlamaForCausalLM"
Evaluation Tasks
Remyx currently supports the following evaluation types:
- MYXMATCH: "myxmatch"
- BENCHMARK: "benchmark"
Benchmark Tasks
The following lighteval evaluation tasks are currently supported. Each task string follows lighteval's "suite|task|num_few_shot|truncate_few_shot" format, where the final 0-or-1 flag indicates whether the few-shot count may be automatically reduced when the prompt is too long.

BIG-Bench (BIGBENCH)
- BIGBENCH_ANALOGICAL_SIMILARITY: "bigbench|analogical_similarity|0|0"
- BIGBENCH_AUTHORSHIP_VERIFICATION: "bigbench|authorship_verification|0|0"
- BIGBENCH_CODE_LINE_DESCRIPTION: "bigbench|code_line_description|0|0"
- BIGBENCH_CONCEPTUAL_COMBINATIONS: "bigbench|conceptual_combinations|0|0"
- BIGBENCH_LOGICAL_DEDUCTION: "bigbench|logical_deduction|0|0"
Harness
- HARNESS_CAUSAL_JUDGMENT: "harness|bbh:causal_judgment|0|0"
- HARNESS_DATE_UNDERSTANDING: "harness|bbh:date_understanding|0|0"
- HARNESS_DISAMBIGUATION_QA: "harness|bbh:disambiguation_qa|0|0"
- HARNESS_GEOMETRIC_SHAPES: "harness|bbh:geometric_shapes|0|0"
- HARNESS_LOGICAL_DEDUCTION_FIVE_OBJECTS: "harness|bbh:logical_deduction_five_objects|0|0"
HELM
- HELM_BABI_QA: "helm|babi_qa|0|0"
- HELM_BBQ: "helm|bbq|0|0"
- HELM_BOOLQ: "helm|boolq|0|0"
- HELM_COMMONSENSEQA: "helm|commonsenseqa|0|0"
- HELM_MMLU_PHILOSOPHY: "helm|mmlu:philosophy|0|0"
Leaderboard
- LEADERBOARD_ARC_CHALLENGE: "leaderboard|arc:challenge|0|0"
- LEADERBOARD_GSM8K: "leaderboard|gsm8k|0|0"
- LEADERBOARD_HELLASWAG: "leaderboard|hellaswag|0|0"
- LEADERBOARD_TRUTHFULQA_MC: "leaderboard|truthfulqa:mc|0|0"
- LEADERBOARD_MMLU_WORLD_RELIGIONS: "leaderboard|mmlu:world_religions|0|0"
LightEval
- LIGHTEVAL_ARC_EASY: "lighteval|arc:easy|0|0"
- LIGHTEVAL_ASDIV: "lighteval|asdiv|0|0"
- LIGHTEVAL_BIGBENCH_MOVIE_RECOMMENDATION: "lighteval|bigbench:movie_recommendation|0|0"
- LIGHTEVAL_GLUE_COLA: "lighteval|glue:cola|0|0"
- LIGHTEVAL_TRUTHFULQA_GEN: "lighteval|truthfulqa:gen|0|0"