@Supporterino Supporterino commented Oct 21, 2025

                    'c.          supporterino@Freya-II
                 ,xNMM.          ---------------------
               .OMMMMo           OS: macOS 26.0.1 25A362 arm64
               OMMM0,            Host: MacBookPro18,4
     .;loddo:' loolloddol;.      Kernel: 25.0.0
   cKMMMMMMMMMMNWMMMMMMMMMM0:    Uptime: 6 days, 4 hours, 22 mins
 .KMMMMMMMMMMMMMMMMMMMMMMMWd.    Packages: 209 (brew)
 XMMMMMMMMMMMMMMMMMMMMMMMX.      Shell: zsh 5.9
;MMMMMMMMMMMMMMMMMMMMMMMM:       Resolution: 1800x1169
:MMMMMMMMMMMMMMMMMMMMMMMM:       DE: Aqua
.MMMMMMMMMMMMMMMMMMMMMMMMX.      WM: Quartz Compositor
 kMMMMMMMMMMMMMMMMMMMMMMMMWd.    WM Theme: Blue (Dark)
 .XMMMMMMMMMMMMMMMMMMMMMMMMMMk   Terminal: iTerm2
  .XMMMMMMMMMMMMMMMMMMMMMMMMK.   Terminal Font: MesloLGS-NF-Regular 13
    kMMMMMMMMMMMMMMMMMMMMMMd     CPU: Apple M1 Max
     ;KMMMMMMMWXXWMMMMMMMk.      GPU: Apple M1 Max
       .cooc,.    .,coo:.        Memory: 4747MiB / 32768MiB

llama3.2:3b

Running benchmark 3 times using model: llama3.2:3b

Run   Eval Rate (Tokens/Second)
1     80.32 tokens/s
2     80.11 tokens/s
3     80.17 tokens/s
Average Eval Rate: 80.20 tokens/second

deepseek-r1

Running benchmark 3 times using model: deepseek-r1

Run   Eval Rate (Tokens/Second)
1     35.99 tokens/s
2     33.12 tokens/s
3     32.74 tokens/s
Average Eval Rate: 33.95 tokens/second

@Supporterino Supporterino marked this pull request as ready for review October 21, 2025 18:29
@geerlingguy (Owner)

@Supporterino which version of deepseek-r1 was run? I presume it was maybe 8b and not 14b, since that would be the default according to https://ollama.com/library/deepseek-r1

You can run a specific version by specifying its tag, e.g. deepseek-r1:14b
