BEE-spoke-data smol llama-101M-GQA

From OODA WIKI
VeritasNumeri
Bot update: Populating AM Almanac data.
 

Latest revision as of 20:21, 8 August 2025

BEE-spoke-data/smol_llama-101M-GQA
Extrinsic Performance (LLM Leaderboard)
Rank N/A
Average Score 0.30
Intrinsic Architecture
Architecture LlamaForCausalLM
Hidden Layers 6
Attention Heads 24
Vocab Size 32128
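As a rough sanity check on the "101M" in the model name, the listed hyperparameters can be combined with an assumed hidden size to estimate the total parameter count. Note that hidden_size and intermediate_size are not listed on this page; the values below (768 and 3072) are illustrative assumptions only, not verified hyperparameters of this model:

```python
# Rough Llama-style parameter-count estimate from the fields above.
# hidden_size and intermediate_size are NOT on this page; the values
# used below are illustrative assumptions, not verified facts.

def estimate_llama_params(vocab_size: int, hidden_size: int,
                          num_layers: int, intermediate_size: int) -> int:
    embed = vocab_size * hidden_size          # token embedding table
    # Per layer: attention projections (q, k, v, o) + gated MLP (3 matrices).
    attn = 4 * hidden_size * hidden_size      # upper bound; GQA shrinks k/v
    mlp = 3 * hidden_size * intermediate_size
    per_layer = attn + mlp
    lm_head = vocab_size * hidden_size        # assumes an untied output head
    return embed + num_layers * per_layer + lm_head

total = estimate_llama_params(
    vocab_size=32128,        # from the infobox
    hidden_size=768,         # assumed for illustration
    num_layers=6,            # from the infobox
    intermediate_size=3072,  # assumed (4 * hidden_size)
)
print(f"{total / 1e6:.0f}M parameters (rough estimate)")
```

Under these assumptions the estimate lands near 106M, the same order of magnitude as the 101M in the model name; the gap is expected since the real key/value projections (GQA) and tying choices are not modeled here.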


BEE-spoke-data/smol_llama-101M-GQA is a small Llama-architecture causal language model (roughly 101M parameters with grouped-query attention, as the name indicates) with a recorded average score of 0.30 on the Open LLM Leaderboard.
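The "GQA" suffix refers to grouped-query attention, in which several query heads share a single key/value head. The grouping arithmetic can be sketched as follows; the number of key/value heads is not listed on this page, so the value used below (8) is purely an illustrative assumption:

```python
# Sketch of grouped-query attention (GQA) head grouping.
# num_attention_heads comes from the architecture box above;
# num_kv_heads is NOT listed on this page and is an assumed
# illustrative value, not a verified fact about this model.

def gqa_group_size(num_attention_heads: int, num_kv_heads: int) -> int:
    """Return how many query heads share each key/value head."""
    if num_attention_heads % num_kv_heads != 0:
        raise ValueError("KV heads must evenly divide attention heads")
    return num_attention_heads // num_kv_heads

num_attention_heads = 24   # from the infobox
num_kv_heads = 8           # hypothetical, for illustration only

print(gqa_group_size(num_attention_heads, num_kv_heads))  # 3 query heads per KV head
```

With num_kv_heads equal to num_attention_heads this reduces to standard multi-head attention, and with num_kv_heads = 1 it reduces to multi-query attention; GQA sits between the two, shrinking the KV cache by the group factor.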

Performance Metrics

  • ARC Score: N/A
  • HellaSwag Score: N/A
  • MMLU Score: N/A

This page was last updated automatically by the Almanac Ingestor bot.