BenevolenceMessiah Yi-Coder-9B-Chat-Instruct-TIES-MoE-v1.0

From OODA WIKI

Latest revision as of 21:02, 8 August 2025

BenevolenceMessiah/Yi-Coder-9B-Chat-Instruct-TIES-MoE-v1.0

Extrinsic Performance (LLM Leaderboard)
  Rank: N/A
  Average Score: 0.45

Intrinsic Architecture
  Architecture: MixtralForCausalLM
  Hidden Layers: 48
  Attention Heads: 32
  Vocab Size: 64000


BenevolenceMessiah/Yi-Coder-9B-Chat-Instruct-TIES-MoE-v1.0 is a Mixtral-architecture language model with a recorded average score of 0.45 on the Open LLM Leaderboard.
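The almanac data above can be represented as a simple structured record. The sketch below is a hypothetical illustration of how such a record might be organized in Python; the field names (`model_id`, `leaderboard`, `architecture`, etc.) are assumptions for illustration, not the actual schema used by the Almanac Ingestor bot. The values themselves are taken directly from the infobox on this page.

```python
# Hypothetical almanac record for this model page.
# Field names are illustrative assumptions; values come from the infobox.
record = {
    "model_id": "BenevolenceMessiah/Yi-Coder-9B-Chat-Instruct-TIES-MoE-v1.0",
    "leaderboard": {
        "rank": None,            # listed as N/A on the page
        "average_score": 0.45,   # Open LLM Leaderboard average
    },
    "architecture": {
        "class": "MixtralForCausalLM",
        "num_hidden_layers": 48,
        "num_attention_heads": 32,
        "vocab_size": 64000,
    },
}

# Render "N/A" for missing values when displaying the record.
def show(value):
    return "N/A" if value is None else value

print("Rank:", show(record["leaderboard"]["rank"]))
print("Average Score:", show(record["leaderboard"]["average_score"]))
```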

Performance Metrics

  • ARC Score: N/A
  • HellaSwag Score: N/A
  • MMLU Score: N/A

This page was last updated automatically by the Almanac Ingestor bot.