Princeton-nlp Mistral-7B-Base-SFT-KTO

From OODA WIKI


princeton-nlp/Mistral-7B-Base-SFT-KTO is a causal language model, produced by fine-tuning a supervised fine-tuning (SFT) checkpoint of Mistral-7B with Kahneman-Tversky Optimization (KTO). It has a recorded average score of 0.42 on the Open LLM Leaderboard.

princeton-nlp/Mistral-7B-Base-SFT-KTO

Extrinsic Performance (LLM Leaderboard)

  • Rank: N/A
  • Average Score: 0.42

Intrinsic Architecture

  • Architecture: MistralForCausalLM
  • Hidden Layers: 32
  • Attention Heads: 32
  • Vocab Size: 32000
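The architecture figures above can be collected and sanity-checked with a small sketch. The `MistralArch` class is illustrative, not part of any library, and the hidden size of 4096 is an assumption based on the standard Mistral-7B configuration; it is not stated on this page.

```python
from dataclasses import dataclass

@dataclass
class MistralArch:
    """Architecture figures from the infobox above."""
    num_hidden_layers: int = 32
    num_attention_heads: int = 32
    vocab_size: int = 32000
    hidden_size: int = 4096  # assumed: standard Mistral-7B value, not listed on this page

    @property
    def head_dim(self) -> int:
        # Per-head dimension: the hidden size split evenly across attention heads.
        return self.hidden_size // self.num_attention_heads

arch = MistralArch()
print(arch.head_dim)  # → 128
```

Under the assumed hidden size, each of the 32 attention heads operates on a 128-dimensional slice of the hidden state.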

Performance Metrics

  • ARC Score: N/A
  • HellaSwag Score: N/A
  • MMLU Score: N/A

This page was last updated automatically by the Almanac Ingestor bot.