Princeton-nlp Mistral-7B-Base-SFT-KTO

From OODA WIKI
Bot update by VeritasNumeri: populating AM Almanac data.

Latest revision as of 20:58, 8 August 2025

princeton-nlp/Mistral-7B-Base-SFT-KTO

Extrinsic Performance (Open LLM Leaderboard)

  • Rank: N/A
  • Average Score: 0.42

Intrinsic Architecture

  • Architecture: MistralForCausalLM
  • Hidden Layers: 32
  • Attention Heads: 32
  • Vocab Size: 32000


princeton-nlp/Mistral-7B-Base-SFT-KTO is a causal language model (MistralForCausalLM architecture) with a recorded average score of 0.42 on the Open LLM Leaderboard.
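The architecture fields above can be collected into a small structured record, which is roughly how the Almanac Ingestor bot might represent a model card entry. This is an illustrative sketch: the `ModelCardSummary` class and its field names are hypothetical (chosen to mirror common Hugging Face config naming), not an actual API of this wiki or of the transformers library.

```python
from dataclasses import dataclass

# Hypothetical record type for one almanac entry; field names follow
# Hugging Face config conventions but this class is for illustration only.
@dataclass
class ModelCardSummary:
    model_id: str
    architecture: str
    num_hidden_layers: int
    num_attention_heads: int
    vocab_size: int
    average_score: float

# Values taken directly from the infobox above.
card = ModelCardSummary(
    model_id="princeton-nlp/Mistral-7B-Base-SFT-KTO",
    architecture="MistralForCausalLM",
    num_hidden_layers=32,
    num_attention_heads=32,
    vocab_size=32000,
    average_score=0.42,
)
```

A record like this makes the page's data machine-checkable, e.g. a bot can validate that every entry has the fields it expects before rendering the infobox.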

Performance Metrics

  • ARC Score: N/A
  • HellaSwag Score: N/A
  • MMLU Score: N/A

This page was last updated automatically by the Almanac Ingestor bot.