This document provides an overview of the constants available for Remyx APIs and tasks. These constants define the models, benchmarks, and datasets that Remyx supports.
Remyx supports evaluation, training, and deployment for these model families (see the architecture check after the list):
"BTLMLMHeadModel"
"BioGptForCausalLM"
"BloomForCausalLM"
"BloomModel"
"ChatGLMModel"
"CodeGenForCausalLM"
"FalconForCausalLM"
"FalconMambaForCausalLM"
"GPT2Model"
"GPTBigCodeForCausalLM"
"GPTBigCodeLMHeadModel"
"GPTJForCausalLM"
"GPTNeoForCausalLM"
"GPTNeoXForCausalLM"
"Gemma2ForCausalLM"
"GemmaForCausalLM"
"LlamaForCausalLM"
"MPTForCausalLM"
"MistralForCausalLM"
"MobileLLMForCausalLM"
"MosaicGPT"
"OPTForCausalLM"
"Phi3ForCausalLM"
"Phi3SmallForCausalLM"
"PhiForCausalLM"
"QWenLMHeadModel"
"Qwen2ForCausalLM"
"Qwen2MoeForCausalLM"
"RWForCausalLM"
"RecurrentGemmaForCausalLM"
"ReformerModelWithLMHead"
"Rwkv5ForCausalLM"
"RwkvForCausalLM"
"StableLMAlphaForCausalLM"
"StableLMEpochForCausalLM"
"StableLmForCausalLM"
"Starcoder2ForCausalLM"
"XGLMForCausalLM"
Remyx supports training and deployment for these model families (see the capability sketch after the list):
"LlavaLlamaForCausalLM"
Remyx currently supports the following evaluation types (see the enum sketch after the list):
"myxmatch"
"benchmark"
The following lighteval evaluation tasks are currently supported (see the parsing sketch after the list):
"bigbench|analogical_similarity|0|0"
"bigbench|authorship_verification|0|0"
"bigbench|code_line_description|0|0"
"bigbench|conceptual_combinations|0|0"
"bigbench|logical_deduction|0|0"
"harness|bbh:causal_judgment|0|0"
"harness|bbh:date_understanding|0|0"
"harness|bbh:disambiguation_qa|0|0"
"harness|bbh:geometric_shapes|0|0"
"harness|bbh:logical_deduction_five_objects|0|0"
"helm|babi_qa|0|0"
"helm|bbq|0|0"
"helm|boolq|0|0"
"helm|commonsenseqa|0|0"
"helm|mmlu:philosophy|0|0"
"leaderboard|arc:challenge|0|0"
"leaderboard|gsm8k|0|0"
"leaderboard|hellaswag|0|0"
"leaderboard|truthfulqa:mc|0|0"
"leaderboard|mmlu:world_religions|0|0"
"lighteval|arc:easy|0|0"
"lighteval|asdiv|0|0"
"lighteval|bigbench:movie_recommendation|0|0"
"lighteval|glue:cola|0|0"
"lighteval|truthfulqa:gen|0|0"