This is a great idea, but the models seem pretty outdated. It's recommending things like Qwen 2.5 and StarCoder 2 as perfect matches for my M4 MacBook Pro with 128 GB of memory.
I wish there were more support for AMD GPUs on Intel Macs. I saw some people on GitHub getting llama.cpp working with them. Could that be added in the future if the backend supports it?