253 points by ingve 4 days ago | 207 comments
buran77 4 days ago |
Are they seeing a worthwhile niche for the tinkerers (or businesses?) who want to run local LLMs with middling performance but still need a full set of GPIOs in a small package? Maybe. But maybe this is just Raspberry Pi jumping on the bandwagon.
I don't blame them for looking to expand into new segments; the business needs to survive. But these efforts just look a bit aimless to me. I "blame" them for not having another "Raspberry Pi moment".
P.S. I can maybe see Frigate and similar solutions driving adoption of these, like they boosted Coral TPU sales. Not sure that's enough of a push to make it successful. The HAT just doesn't have any of the unique value proposition that kickstarted the Raspberry Pi wave.
dwedge 4 days ago |
8GB of RAM for AI on a Pi sounds underwhelming, even from the headline.
t43562 4 days ago |
They seem very fast and I certainly want to use that kind of thing in my house and garden: spotting when foxes and cats arrive and dig up my compost pit, or when people come over to water the plants while I'm away, etc.
[edit: I've just seen the updated version on Pimoroni and it claims usefulness not only for LLMs but also for VLMs, and I suspect that's the best way to use it.]
djhworld 3 days ago |
Hitching their wagon to the AI train comes with different expectations, leading to a mixed bag of reviews like this.
speedgoose 4 days ago |
I fail to see the use case on a Pi. For learning, you can get much better hardware for cheaper. Perhaps you could use it as a slow and expensive embedding machine, but why?
renewiltord 4 days ago |
I was able to run speech-to-text on my old Pixel 4 but it's a bit flaky (the background process loses the audio device occasionally). I just want to take some wake word, send everything to a remote LLM, and then get back text that I do TTS on.
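Something like the following is all the plumbing that's needed. A minimal sketch, assuming the speech_recognition and pyttsx3 packages plus an OpenAI-compatible chat endpoint; the URL, model name, and the naive keyword-based wake word check are placeholders, not a real wake-word engine:

    # Sketch: wake word -> remote LLM -> local TTS loop.
    import requests
    import speech_recognition as sr
    import pyttsx3

    LLM_URL = "http://llm.example.com/v1/chat/completions"  # hypothetical endpoint
    recognizer = sr.Recognizer()
    tts = pyttsx3.init()

    with sr.Microphone() as mic:
        while True:
            audio = recognizer.listen(mic)
            try:
                heard = recognizer.recognize_google(audio)  # any STT backend works
            except sr.UnknownValueError:
                continue  # audio was unintelligible; keep listening
            if not heard.lower().startswith("computer"):  # naive wake word check
                continue
            resp = requests.post(LLM_URL, json={
                "model": "any-model",
                "messages": [{"role": "user", "content": heard}],
            })
            reply = resp.json()["choices"][0]["message"]["content"]
            tts.say(reply)  # speak the LLM's answer aloud
            tts.runAndWait()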
endymion-light 4 days ago |
I buy a Raspberry Pi because I need a small workhorse. I understand adding RAM for local LLMs, but this is like a Raspberry Pi with a GPU: why do I need it when a normal mini machine has more RAM, more compute capacity, and better specs for cheaper?
moffkalast 4 days ago |
Case closed. And that's extremely slow to begin with; the Pi 5 only gets what, a 32-bit bus? Laughable performance for a purpose-built ASIC that costs more than the Pi itself.
> In my testing, Hailo's hailo-rpi5-examples were not yet updated for this new HAT, and even if I specified the Hailo 10H manually, model files would not load
Laughable levels of support too.
As another data point, I've recently managed to get the 8L working natively on Ubuntu 24 with ROS, but only after significant shenanigans involving recompiling the kernel module and building their library for Python 3.12, which Hailo for some reason does not provide outside 3.11. They only support Pi OS (like anyone would use that in prod) and even that is very spotty. Like, why would you not target the most popular robotics distro for an AI accelerator? Who else is gonna buy these things exactly?
nottorp 3 days ago |
YOLO for example.
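A minimal sketch of that kind of detection with the ultralytics package, assuming the small yolov8n weights and a camera at index 0:

    # Sketch: run a small YOLO model over camera frames.
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # nano weights; small enough for Pi-class hardware
    cap = cv2.VideoCapture(0)   # assumed camera index

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame)        # list of Results, one per image
        for box in results[0].boxes:  # detected bounding boxes
            print(model.names[int(box.cls)], float(box.conf))

    cap.release()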
joelthelion 3 days ago |
That said, perhaps there is a niche for slow LLM inference for non-interactive use.
For example, if you use LLMs to triage your emails in the background, you don't care about latency. You just need the throughput to be high enough to handle the load.
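A minimal sketch of that pattern, assuming an OpenAI-compatible server on localhost:8080 (e.g. llama.cpp's server); the label set and sample inbox are made up:

    # Sketch: latency-insensitive email triage against a slow local LLM.
    import requests

    LLM_URL = "http://localhost:8080/v1/chat/completions"
    LABELS = ["urgent", "newsletter", "spam", "personal"]  # made-up labels

    def triage(subject: str, body: str) -> str:
        prompt = (f"Classify this email as one of {LABELS}.\n"
                  f"Subject: {subject}\n{body}\nAnswer with one word.")
        resp = requests.post(LLM_URL, json={
            "model": "local-model",
            "messages": [{"role": "user", "content": prompt}],
        })
        return resp.json()["choices"][0]["message"]["content"].strip()

    # Tokens/sec barely matters here; only total throughput does.
    inbox = [("Server down", "prod db unreachable"), ("50% off!", "buy now")]
    for subject, body in inbox:
        print(subject, "->", triage(subject, body))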
imtringued 3 days ago |
I once tried to run a segmentation model based on a vision transformer on a PC; it used somewhere around 1 GB for the parameters and several gigabytes for the KV cache, and it was almost entirely compute-bound. You couldn't run that type of model on previous AI accelerators because they only supported model sizes in the megabytes range.
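For a rough sense of scale, the KV cache grows linearly with sequence length. A back-of-the-envelope estimate with illustrative (not measured) figures:

    # Sketch: back-of-the-envelope KV cache size for a transformer.
    layers, heads, head_dim = 24, 16, 64   # illustrative model shape
    seq_len, dtype_bytes = 8192, 2         # 8k context, fp16

    # One key and one value vector are cached per token, per layer.
    kv_bytes = 2 * layers * heads * head_dim * seq_len * dtype_bytes
    print(f"KV cache: {kv_bytes / 2**30:.2f} GiB")  # 0.75 GiB here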
janalsncm 3 days ago |
The price point is still a little high for most tasks but I’m sure that will come down.
Lio 3 days ago |
That's also limited to 8GB of RAM, so again you might be better off with a larger 16GB Pi and using the CPU, but at least the space is heating up.
With a lot of this stuff it seems to come down to how good the software support is. Raspberry Pis generally beat everything else for that.
myrmidon 3 days ago |
My impression so far was that the resulting models are unusably stupid, but maybe there are some specific tasks where they still perform acceptably?
incomingpain 3 days ago |
Don't need more than 8GB. It'll be enough power. It can do audio-to-audio.
kotaKat 4 days ago |
... why though? CV in software is good enough for this application and we've already been doing it forever (see also: Everseen). Now we're just wasting silicon.
teekert 4 days ago |
1. Can I run a local LLM that allows me to control Home Assistant with natural language? Some basic stuff like timers, to-do/shopping lists, etc. would be nice. (The HA side of this is sketched below.)
2. Can I run object/person detection on local video streams?
I want some AI stuff, but I want it local.
Looks like the answer for this one is: Meh. It can do point 2, but it's not the best option.
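For point 1, the Home Assistant side is just one REST call; a minimal sketch, assuming HA at homeassistant.local:8123 and a long-lived access token (the token is a placeholder):

    # Sketch: send a natural-language command to HA's conversation API.
    import requests

    HA_URL = "http://homeassistant.local:8123/api/conversation/process"
    TOKEN = "YOUR_LONG_LIVED_TOKEN"  # placeholder

    resp = requests.post(
        HA_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"text": "set a timer for 10 minutes", "language": "en"},
    )
    print(resp.json())  # HA's (optionally LLM-backed) intent pipeline replies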
Havoc 3 days ago |
An NPU that adds to the price but underperforms the Pi's CPU?
You can get SBCs with 32GB of RAM…
Never mind the whole mini-PC ecosystem, which will crush this.