402 points by tmaly 4 days ago | 156 comments
navar 3 days ago |
MohskiBroskiAI about 2 hours ago |
I found that for local setups (running on my MacBook), the overhead of managing a vector index wasn't worth the hallucinations. Cosine similarity is just too fuzzy for code/tech docs.
I switched to using *Optimal Transport (Wasserstein Distance)* in-process. It essentially treats the memory as a geometry problem. If the "transport cost" from the query to the memory chunk is too high, it rejects it mathematically.
It’s way lighter than running a local Chroma/LanceDB instance, and the coherence is ~0.96 vs ~0.67 for standard embeddings.
It's free (MIT license) and open-source btw.
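Roughly, the transport-cost idea is something like this sketch in the Word Mover's Distance style, using the POT library; the threshold, uniform token weights, and token-level embeddings are my assumptions, not necessarily what the tool does:

  # Sketch of transport-cost retrieval (Word Mover's Distance style) with POT.
  # Threshold and embedding source are illustrative assumptions.
  import numpy as np
  import ot  # pip install POT
  from scipy.spatial.distance import cdist

  def transport_cost(query_vecs: np.ndarray, chunk_vecs: np.ndarray) -> float:
      """Earth mover's distance between two clouds of token embeddings."""
      a = np.full(len(query_vecs), 1.0 / len(query_vecs))  # uniform mass on query tokens
      b = np.full(len(chunk_vecs), 1.0 / len(chunk_vecs))  # uniform mass on chunk tokens
      M = cdist(query_vecs, chunk_vecs, metric="cosine")   # pairwise cost matrix
      return ot.emd2(a, b, M)                              # optimal transport cost

  THRESHOLD = 0.35  # reject memories whose transport cost is too high (value is illustrative)

  def retrieve(query_vecs, chunks):
      scored = [(transport_cost(query_vecs, c["vecs"]), c) for c in chunks]
      return [c for cost, c in sorted(scored, key=lambda t: t[0]) if cost < THRESHOLD]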
__jf__ 3 days ago |
For retrieval I load all the vectors from the SQLite database into a numpy array and hand it to FAISS. Faiss-gpu was impressively fast on the RTX6000, and faiss-cpu is slower on the M1 Ultra but still fast enough for my purposes (I'm firing a few queries per day, not per minute). For 5 million chunks, memory usage is around 40 GB, which fits into the A6000 and easily fits into the 128GB of the M1 Ultra. It works, I'm happy.
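A minimal sketch of that SQLite -> numpy -> FAISS flow (the table, column, and blob encoding are assumptions, not my actual schema):

  import sqlite3
  import numpy as np
  import faiss  # faiss-cpu or faiss-gpu

  DIM = 768  # embedding dimension of your model

  con = sqlite3.connect("chunks.db")
  rows = con.execute("SELECT embedding FROM chunks ORDER BY id").fetchall()
  vectors = np.vstack([np.frombuffer(r[0], dtype=np.float32) for r in rows])

  index = faiss.IndexFlatIP(DIM)  # exact inner-product search
  faiss.normalize_L2(vectors)     # so inner product == cosine similarity
  index.add(vectors)

  query = np.random.rand(1, DIM).astype(np.float32)  # stand-in for a real query embedding
  faiss.normalize_L2(query)
  distances, ids = index.search(query, 5)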
beklein 3 days ago |
I can recommend https://github.com/tobi/qmd/ . It’s a simple CLI tool for searching in these kinds of files. My previous workflow was based on fzf, but this tool gives better results and enables even more fuzzy queries. I don’t use it for code, though.
CuriouslyC 3 days ago |
ktyptorio about 1 hour ago |
esperent 3 days ago |
This took about one hour to set up and works very well.
(+) At least, I don't think this counts as RAG. I'm honestly a bit hazy on the definition. But there's no vectordb anyway.
eb0la 3 days ago |
After some time we noticed a semi-structured field in the prompt had a 100% match with the content needed to process the prompt.
Turns out operators had started putting tags both in the input and in the documents that needed to match, for every use case (not many, about 50 docs).
Now we look for the field first and put the corresponding file in the prompt, then we look for matches in the database using the embedding.
85% of the time we don't need the vectordb.
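In pseudocode the routing looks something like this (the field name, doc mapping, and fallback search are placeholders, not our real names):

  # Exact tag match first, embeddings only as a fallback.
  TAGGED_DOCS: dict[str, str] = {}  # tag -> document, loaded at startup (~50 docs)

  def retrieve_context(prompt_fields: dict, query_text: str, embedding_search):
      tag = prompt_fields.get("doc_tag")      # the semi-structured field
      if tag and tag in TAGGED_DOCS:          # exact match: skip the vector DB (~85% of calls)
          return [TAGGED_DOCS[tag]]
      return embedding_search(query_text, top_k=3)  # fall back to the vector DB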
theahura 3 days ago |
tebeka 3 days ago |
amscotti 3 days ago |
Using Ollama for the embeddings with “nomic-embed-text”, with LanceDB for the vector database. Recently updated it to use “agentic” RAG, though that's probably not really needed for a small project.
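The core of that setup is only a few lines; a rough sketch (paths and table name are examples):

  import lancedb
  import ollama

  db = lancedb.connect("./lancedb")
  docs = ["first note", "second note"]
  rows = [
      {"text": d,
       "vector": ollama.embeddings(model="nomic-embed-text", prompt=d)["embedding"]}
      for d in docs
  ]
  table = db.create_table("notes", data=rows, mode="overwrite")

  query = ollama.embeddings(model="nomic-embed-text", prompt="deployment steps")["embedding"]
  hits = table.search(query).limit(3).to_list()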
acutesoftware 3 days ago |
The real lightbulb moment is when you realise the ONLY thing a RAG passes to the LLM is a short string of search results with small chunks of text. This changes it from 'magic' to 'ahh, ok - I need better search results'. With small models you cannot pass a lot of search results (TOP_K=5 is probably the limit), otherwise they 'forget context'.
It is fun trying to get decent results - and it is a rabbit hole. The next step I'm going into is pre-summarising files and folders.
I open sourced the code I was using - https://github.com/acutesoftware/lifepim-ai-core
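To make the "short string of search results" point concrete, the prompt assembly is roughly this (TOP_K and the template are illustrative, not exactly what my code does):

  TOP_K = 5  # small models lose the thread beyond this

  def build_prompt(question: str, results: list[dict]) -> str:
      context = "\n\n".join(
          f"[{i + 1}] {r['source']}: {r['text'][:500]}"
          for i, r in enumerate(results[:TOP_K])
      )
      return (
          "Answer using only the context below.\n\n"
          f"Context:\n{context}\n\n"
          f"Question: {question}\nAnswer:"
      )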
scosman 3 days ago |
It uses LanceDB and has dozens of different extraction/embedding models to choose from. It even has evals for checking retrieval accuracy, including automatically generating the eval dataset.
You can use its UI, or call the RAG via MCP.
juanre 3 days ago |
It uses PostgreSQL with pgvector, hybrid BM25, multi-query expansion, and reranking.
(It's the first time I've shared it publicly, so I'm sure there'll be quirks.)
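For anyone unfamiliar with the hybrid part, the two retrieval legs look roughly like this (table and column names are assumptions, not the project's schema; fusion, query expansion, and reranking happen afterwards):

  import numpy as np
  import psycopg
  from pgvector.psycopg import register_vector

  conn = psycopg.connect("dbname=rag")
  register_vector(conn)  # lets psycopg pass numpy arrays as pgvector values

  def vector_leg(query_vec: np.ndarray, k: int = 10):
      return conn.execute(
          "SELECT id, content FROM chunks ORDER BY embedding <=> %s LIMIT %s",
          (query_vec, k),
      ).fetchall()

  def fulltext_leg(query_text: str, k: int = 10):
      return conn.execute(
          """SELECT id, content FROM chunks
             WHERE to_tsvector('english', content) @@ plainto_tsquery('english', %s)
             ORDER BY ts_rank(to_tsvector('english', content),
                              plainto_tsquery('english', %s)) DESC
             LIMIT %s""",
          (query_text, query_text, k),
      ).fetchall()

  # The two result lists are then fused (e.g. reciprocal rank fusion) and reranked.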
marwamc 3 days ago |
https://github.com/rhobimd-oss/shebe
One area where BM25 particularly shines is the refactoring workflow: let's say you want to upgrade your Istio installation from 1.28 to 1.29, and maybe in 1.29 the AuthorizationPolicy CRD has a breaking change in one of its properties. BM25 lets you efficiently enumerate all the locations in your codebase that need to change, and then you can set the CLI coders off using that list. Grep and LSP can still perform this enumeration, but they have shortcomings. Wrote about it here https://github.com/rhobimd-oss/shebe/blob/main/WHY_SHEBE.md#...
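A rough sketch of that enumeration with a plain BM25 library (the per-file chunking and the query are illustrative, not how shebe actually indexes):

  from pathlib import Path
  from rank_bm25 import BM25Okapi

  files = list(Path(".").rglob("*.yaml"))
  docs = [p.read_text(errors="ignore") for p in files]
  bm25 = BM25Okapi([d.lower().split() for d in docs])

  query = "authorizationpolicy source principals".lower().split()
  scores = bm25.get_scores(query)
  # every file scoring above zero is a candidate location for the upgrade refactor
  hits = sorted(
      ((s, f) for s, f in zip(scores, files) if s > 0),
      key=lambda t: t[0], reverse=True,
  )
  for score, path in hits:
      print(f"{score:6.2f}  {path}")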
autogn0me 3 days ago |
lmeyerov 3 days ago |
Studies generally show that agentic retrieval with text search is already pretty good. Adding vector retrieval and graph RAG - the typical parallel multi-retrieval followed by reranking - gives a bit of a speedup and a quality lift. That lines up with my local-workflow experience: the lift is only enough that I want it in $$$$ consumer/prosumer tools, and it's not easy enough to DIY that I want to invest in it locally. For anyone who has struggled with tools like Spotlight running when it shouldn't, that kind of thing turns me off on the cost/benefit side.
For code, I experiment with unsound tools (semgrep, ...) vs sound flow analyzers, carefully set up for the project. Basically, AI coders love to use grep/sed for global-replace refactors and other global needs, but keep getting tripped up on sound flow analysis. Similar to lint and type checking, that needs to be set up per project and taught as a skill. I'm not happy with any of my experiments here yet, however :(
threecheese 2 days ago |
I store file content blobs in SQLite, and use FTS5 (bm25) to maintain a fulltext index plus sqlite-vec for storing embeddings. Search uses both of these, and then reciprocal rank fusion gets the best results and pipes those to a local transformers model to judge. It’s all Python with mlx-lm and mlx-embeddings libraries, the models are grabbed from huggingface. It’s not the fastest, but it’s local and easy to understand (and for Claude to write, mostly).
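The fusion step is tiny; a sketch of reciprocal rank fusion over the two ranked id lists (k=60 is the conventional constant; the variable names in the usage line are illustrative):

  def reciprocal_rank_fusion(result_lists: list[list[int]], k: int = 60) -> list[int]:
      scores: dict[int, float] = {}
      for results in result_lists:
          for rank, doc_id in enumerate(results):
              scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
      return sorted(scores, key=scores.get, reverse=True)

  # fused = reciprocal_rank_fusion([fts5_ids, sqlite_vec_ids])[:10]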
spqw 3 days ago |
yokuze 3 days ago |
It’s a CLI tool and MCP server for creating discrete, versioned “libraries” of RAG-able content.
Under the hood, it uses an embedding model locally. It chunks your content and stores embeddings in SQLite. The search functionality uses vector + keyword search + a re-ranking model.
You can also point it at any GitHub repo and it will create a RAG DB out of it.
You can also use the MCP server to create and query the libraries.
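The re-ranking stage can be as simple as a cross-encoder pass over the fused candidates; a sketch (the model name is an example, not necessarily what the tool ships with):

  from sentence_transformers import CrossEncoder

  reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

  def rerank(query: str, candidates: list[str], top_k: int = 5) -> list[str]:
      scores = reranker.predict([(query, c) for c in candidates])
      ranked = sorted(zip(scores, candidates), key=lambda x: x[0], reverse=True)
      return [c for _, c in ranked[:top_k]]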
raghavankl 3 days ago |
gaganyatri 3 days ago |
Demo: https://app.dwani.ai
GitHub: https://github.com/dwani-ai/discovery
Now working on adding agentic features via continuous analysis of documents with generated prompts.
Agent_Builder 1 day ago |
In setups like GTWY.ai, constraining how retrieved data is used per step mattered more than the vector store itself. Otherwise assumptions leak forward and hallucinations look “reasonable”.
rahimnathwani 4 days ago |
folli 2 days ago |
I'm positively surprised by how well it works, especially if you also connect it to an LLM.
oliveiracwb 3 days ago |
On the retrieval side, I built a custom search/indexing layer (Node) specifically for service traceability and discovery. It uses a hybrid approach — embeddings + full-text search + IVF-HNSW — to index and cross-reference our APIs, services, proxies and orchestration repos. The RAG pipelines sit on top of this layer, which gives us reasonable recall and predictable latency.
Compliance and observability are still a problem. Every year new vendors show up promising audits, data lineage and observability, but none of them really handle the informational sprawl of ~600 distributed systems. The entropy keeps increasing.
Lately I’ve been experimenting with a more semantic/logical KAG approach on top of knowledge graphs to map business rules scattered across those systems. The goal is to answer higher-level questions about how things actually work — Palantir-like outcomes, but with explicit logic instead of magic.
Curious if others are moving beyond “pure RAG” toward graph-based or hybrid reasoning setups.
init0 3 days ago |
bzGoRust 3 days ago |
tschellenbach 3 days ago |
philip1209 3 days ago |
g0wda 3 days ago |
podgietaru 3 days ago |
https://aws.amazon.com/blogs/machine-learning/use-language-e...
The code for it is here: https://github.com/aws-samples/rss-aggregator-using-cohere-e...
The example link no longer works, as I no longer work at AWS.
prakashn27 3 days ago |
So I use a hosted one to prevent this. My business already uses a vector DB, so I created a new DB to vectorize and host my knowledge base.
1. All my knowledge base is markdown files, so I split them by header tags.
2. Each split is hashed and the hash value is stored in SQLite.
3. The chunk is vectorized and pushed to the cloud DB.
4. Whenever I make changes, I run a script which re-splits and checks the hashes; if a hash has changed I upsert the document, otherwise I do nothing. This keeps the store up to date.
For search I have a CLI query which searches and fetches from the vector store.
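The sync script boils down to something like this (the header split, table name, and the embed()/upsert() calls are placeholders, not my actual code):

  import hashlib
  import sqlite3

  def split_by_headers(md_text: str) -> list[str]:
      chunks, current = [], []
      for line in md_text.splitlines():
          if line.startswith("#") and current:
              chunks.append("\n".join(current))
              current = []
          current.append(line)
      if current:
          chunks.append("\n".join(current))
      return chunks

  def sync(md_text: str, con: sqlite3.Connection, vector_db, embed):
      con.execute("CREATE TABLE IF NOT EXISTS hashes (chunk_id TEXT PRIMARY KEY, sha TEXT)")
      for i, chunk in enumerate(split_by_headers(md_text)):
          chunk_id = f"doc-{i}"
          sha = hashlib.sha256(chunk.encode()).hexdigest()
          row = con.execute("SELECT sha FROM hashes WHERE chunk_id = ?", (chunk_id,)).fetchone()
          if row and row[0] == sha:
              continue  # unchanged: skip the embedding call and the upsert
          vector_db.upsert(chunk_id, embed(chunk))  # placeholder client and embed()
          con.execute("INSERT OR REPLACE INTO hashes VALUES (?, ?)", (chunk_id, sha))
      con.commit()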
jackfranklyn 3 days ago |
The real challenge wasn't model quality - it was the chunking strategy. Financial data is weirdly structured and breaking it into sensible chunks that preserve context took more iteration than expected. Eventually settled on treating each complete record as a chunk rather than doing sliding windows over raw text. The "obvious" approaches from tutorials didn't work well at all for structured tabular-ish data.
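For structured data, "one complete record per chunk" can be as simple as serializing each row with its labels so the numbers never lose their headers; a sketch (field names are made up):

  def records_to_chunks(records: list[dict]) -> list[str]:
      # one chunk per complete record, so values stay attached to their labels
      return ["\n".join(f"{k}: {v}" for k, v in rec.items()) for rec in records]

  records = [
      {"ticker": "ACME", "period": "2024-Q4", "revenue_usd_m": 132.4, "eps": 1.21},
      {"ticker": "ACME", "period": "2025-Q1", "revenue_usd_m": 140.9, "eps": 1.34},
  ]
  print(records_to_chunks(records)[0])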
cbcoutinho 3 days ago |
For local deployments, Qdrant supports storing embeddings in memory as well as in a local directory (similar to sqlite) - for larger deployments Qdrant supports running as a standalone service/sidecar and can be made available over the network.
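Both local modes are a one-liner to switch between; a minimal sketch (collection name and vector size are examples):

  from qdrant_client import QdrantClient
  from qdrant_client.models import Distance, PointStruct, VectorParams

  client = QdrantClient(path="./qdrant_data")  # on-disk local mode; use ":memory:" for in-memory
  client.recreate_collection(
      collection_name="notes",
      vectors_config=VectorParams(size=4, distance=Distance.COSINE),
  )
  client.upsert(
      collection_name="notes",
      points=[PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"text": "hello"})],
  )
  hits = client.search(collection_name="notes", query_vector=[0.1, 0.2, 0.3, 0.4], limit=3)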
metawake 3 days ago |
ragtune explain "your query" --collection prod
Shows scores, sources, and diagnostics. Helps catch when your chunking or embeddings are silently failing, or when you need numeric estimations to base your judgements on. Open source: https://github.com/metawake/ragtune
mmargenot 3 days ago |
This is specifically a “remembrance agent”, so it surfaces atoms related to what you’re writing rather than doing anything generative.
Extension: https://github.com/mmargenot/tezcat
Also available in community plugins.
init0 3 days ago |
kb = Ragi(["./docs", "s3://bucket/data/*/*.pdf", "https://api.example.com/docs"])
answer = kb.ask("How do I deploy this?")
that's it! with https://pypi.org/project/piragi/
yakkomajuri 3 days ago |
Not sure how useful it is for what you need specifically: https://blog.yakkomajuri.com/blog/local-rag
pj4533 3 days ago |
reactordev 3 days ago |
save_memory, recall_memory, search
Save memory vectorizes a session, summarizes it, and stores it in SQLite. Recall memory takes a vector or a previous tool-run id and loads the full text output. Search takes a vector array or string array and searches through the graph using fuzzy matching and vector dot products.
It’s not fancy, but it works really well. gpt-oss
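The search path is basically a blend of dot products and fuzzy string matching; a rough sketch (schema and blend weights are guesses, not the actual implementation):

  import difflib
  import sqlite3
  import numpy as np

  def search(query_vec: np.ndarray, query_text: str, top_k: int = 5):
      con = sqlite3.connect("memory.db")
      rows = con.execute("SELECT id, summary, embedding FROM memories").fetchall()
      scored = []
      for mem_id, summary, blob in rows:
          vec = np.frombuffer(blob, dtype=np.float32)
          cos = float(np.dot(query_vec, vec) /
                      (np.linalg.norm(query_vec) * np.linalg.norm(vec)))
          fuzz = difflib.SequenceMatcher(None, query_text.lower(), summary.lower()).ratio()
          scored.append((0.7 * cos + 0.3 * fuzz, mem_id, summary))  # arbitrary blend weights
      return sorted(scored, key=lambda t: t[0], reverse=True)[:top_k]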
codebolt 3 days ago |
motakuk 4 days ago |
Bombthecat 3 days ago |
turnsout 3 days ago |
robotswantdata 3 days ago |
The newer “agent” search approach can just query a file system or API. It’s slightly slower, but easier to set up and maintain since there's no extra infrastructure.
throwaway7783 2 days ago |
softwaredoug 3 days ago |
lsb 3 days ago |
claylyons 3 days ago |
SamLeBarbare 3 days ago |
dvorka 3 days ago |
ehsanu1 3 days ago |
beret4breakfast 3 days ago |
lee1012 3 days ago |
geuis 3 days ago |
To answer the question more directly, I've spent the last couple of years with a few different quant models mostly running on llama.cpp and ollama, depending. The results are way slower than the paid token api versions, but they are completely free of external influence and cost.
However, the models I've tested generally turn out to be pretty dumb at the quant level I'm running them at to keep things relatively fast. And their code generation capabilities are just a mess not worth dealing with.
eajr 4 days ago |
nineteen999 4 days ago |
lormayna 3 days ago |
Works well, but I haven't tested it at larger scale.
ramesh31 4 days ago |
undefined 3 days ago |
yandrypozo 3 days ago |
juleshenry 3 days ago |
andoando 3 days ago |
jacekm 3 days ago |
mooball 3 days ago |
sinandrei 3 days ago |
baalimago 3 days ago |
Question being: WHY would I be doing RAG locally?
xpl 2 days ago |
Strift 3 days ago |
TL;DR:
- chunk files, index chunks
- vector/hybrid search over the index
- node app to handle requests (was the quickest to implement, LLMs understand OpenAPI well)
I wrote about it here: https://laurentcazanove.com/blog/obsidian-rag-api
electroglyph 3 days ago |
jeanloolz 3 days ago |
whattheheckheck 4 days ago |
pdyc 3 days ago |
jeffchuber 3 days ago |
__mharrison__ 3 days ago |
VerifiedReports 2 days ago |
lee101 3 days ago |
sascha10000 3 days ago |
undefined 3 days ago |
undergrowth 3 days ago |
undergrowth 3 days ago |
https://huggingface.co/MongoDB/mdbr-leaf-ir
It ranks #1 on a bunch of leaderboards for models of its size. It can be used interchangeably with the model it has been distilled from (https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1...).
You can see an example comparing semantic (i.e., embeddings-based) search vs bm25 vs hybrid here: http://search-sensei.s3-website-us-east-1.amazonaws.com (warning! It will download ~50MB of data for the model weights and onnx runtime on first load, but should otherwise run smoothly even on a phone)
This mini app illustrates the advantage of semantic vs bm25 search. For instance, embedding models "know" that j lo refers to jennifer lopez.
We have also published the recipe to train this type of model, if you're interested in doing so; we show that it can be done on relatively modest hardware, and the training data is very easy to obtain: https://arxiv.org/abs/2509.12539