oopere/pruned60-llama-1b
oopere/pruned60-llama-1b is a Llama-architecture causal language model with a recorded average score of 0.30 on the Open LLM Leaderboard.
| Extrinsic Performance (Open LLM Leaderboard) | Value |
|---|---|
| Rank | N/A |
| Average Score | 0.30 |

| Intrinsic Architecture | Value |
|---|---|
| Architecture | LlamaForCausalLM |
| Hidden Layers | 16 |
| Attention Heads | 32 |
| Vocab Size | 128256 |
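The reported configuration above can be captured programmatically. A minimal sketch follows; the local dictionary simply mirrors the table, and the commented-out lines show one assumed way to cross-check it against the live model config with Hugging Face `transformers` (requires the library and network access, so it is left as a comment):

```python
# Configuration values reported for oopere/pruned60-llama-1b (from the table above).
reported = {
    "architecture": "LlamaForCausalLM",
    "num_hidden_layers": 16,
    "num_attention_heads": 32,
    "vocab_size": 128256,
}

# Sketch of a live cross-check (assumes `transformers` is installed and the
# Hugging Face Hub is reachable):
#
#   from transformers import AutoConfig
#   cfg = AutoConfig.from_pretrained("oopere/pruned60-llama-1b")
#   assert cfg.num_hidden_layers == reported["num_hidden_layers"]
#   assert cfg.num_attention_heads == reported["num_attention_heads"]
#   assert cfg.vocab_size == reported["vocab_size"]

for key, value in reported.items():
    print(f"{key}: {value}")
```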
Performance Metrics
- ARC Score: N/A
- HellaSwag Score: N/A
- MMLU Score: N/A
This page was last updated automatically by the Almanac Ingestor bot.