Because people don’t understand the difference between using ChatGPT running in a datacenter somewhere via API calls and having a tiny model actually running on device.
Even quantized down to 4-bit, the best 7B models barely run locally on the best mobile hardware that exists today. GPT-4 is roughly 250x larger than that 7B model lol
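The gap is easy to sanity-check with back-of-the-envelope math. A minimal sketch, assuming a rumored ~1.8T parameter count for GPT-4 (unconfirmed) and counting only weight memory, not KV cache or runtime overhead:

```python
# Rough memory footprint of model weights only.
# Assumptions: GPT-4 at a *rumored* ~1.8T parameters (unconfirmed),
# served at 16-bit precision; local model is 7B at 4-bit quantization.

def weight_memory_gb(params: float, bits_per_param: float) -> float:
    """GB needed to hold the weights alone (decimal GB)."""
    return params * bits_per_param / 8 / 1e9

local_7b = weight_memory_gb(7e9, 4)        # 4-bit quantized 7B model
gpt4_est = weight_memory_gb(1.8e12, 16)    # hypothetical fp16 GPT-4

print(f"7B @ 4-bit:  {local_7b:.1f} GB")       # ~3.5 GB: fits on a phone, barely
print(f"GPT-4 est.:  {gpt4_est:.0f} GB")       # thousands of GB: datacenter only
print(f"param ratio: {1.8e12 / 7e9:.0f}x")     # ~257x, i.e. the "~250x" above
```

Even at 3.5 GB of weights, a phone also has to fit the OS, apps, and inference runtime in RAM, which is why 7B models are at the edge of what mobile hardware can do.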
I don’t see myself changing anytime soon; the auto mouse layer is amazing. I have a Draculad PCB and case but no real reason to build it since I work from home.