Ask HN: Is replacing an enterprise product with LLMs a realistic strategy?

6 points by chandmk about 15 hours ago | 6 comments

codingdave about 5 hours ago

The biggest gotcha is that if existing products were developed over a decade or more, that is a decade of iteration over details and customer feedback. You can see the final result, but not the rationale behind 10+ years' worth of decisions and discussions. The LLMs are almost guaranteed to get something wrong without that context, which means your final product won't be competitive. Unless you understand the nuance of which features are table stakes vs. market choices vs. regulatory requirements or other such fixed functionality, you might spend all your energy building something that is not even viable.

That doesn't mean you cannot build a newer, better, competitive product. You surely can. But you need to build an understanding of the market yourself, so that you know when the LLMs go off the rails and can get them back on track.

dapperdrake about 3 hours ago

The attempt will be made.

Most of the rest comes down to inertia and path dependence.

The new, lossier models are rarely an improvement over existing, less lossy ones. That is why the old-style model was built in the first place: putting in the work already had value, and it still delivers value now.

verdverm about 12 hours ago

Your questions are very interesting, and I'm not sure anyone knows the answers. Some people are trying, others want to, and I know one company that has walked back its AI initiative because the ROI wasn't there.

What I would do is express your pessimism lightly, more along the lines of "we are making these assumptions about a new technology we know little about" (pick just 2-3).

Then push hard to convince them to carve out little pieces to test the supposed "AI changes the economics of building software" claim and the other assumptions. Say something like, "How can we validate these assumptions with minimal effort/time/money? I've seen some horror stories and I'm not sure the hype holds up. I'm all for it if it works, but we just don't know, and we need to chip away at that."

My personal take is that this idea of theirs will end poorly. I've worked hard and built custom agents to squeeze more out of these models (my gem-3-flash setup is better than Copilot at pretty much anything, imo), and my takeaway is two-fold: (1) they can be both impressively good and unbelievably bad, even the very best models from any company; (2) people share their wins far more than their failures, so, like stonks, the outcomes you find in the wild are biased.

I know I delete a bunch of false starts; it's going to be hard to automate that away without spending more than you would on a human, especially as the project grows. You are going to have to pay to load a bunch of context on every run just so the model can go from tickets in Jira, to finding what and where needs to change, to making actually relevant code changes, to verifying that they work.


MohskiBroskiAI about 13 hours ago

The issue isn't the LLM's reasoning; it's the retrieval layer.

Most "Enterprise AI" is just a wrapper around a Vector DB doing cosine similarity. That’s probabilistic. It works 80% of the time, but for an enterprise product, the 20% hallucination rate on edge cases is a dealbreaker.

I spent the last 6 months trying to replace a legacy system with agents, and I hit this exact wall. I eventually had to rip out the Vector DB and replace it with a custom memory protocol using Optimal Transport (Wasserstein Distance) just to get deterministic retrieval.

If you treat memory as 'Geometry' (strict topology) instead of 'Search' (fuzzy matching), you can actually bound the hallucination error mathematically. It’s the only way I could sleep at night deploying this to production.
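
The comment doesn't show the protocol itself, but as a rough illustration of the idea of scoring by Wasserstein distance instead of cosine similarity, here is a hedged sketch using scipy's 1-D earth mover's distance. It assumes each document is reduced to a 1-D sample of values (a real optimal-transport system would more likely compare full embedding distributions); wasserstein_retrieve is a hypothetical helper:

    import numpy as np
    from scipy.stats import wasserstein_distance

    def wasserstein_retrieve(query_sample, doc_samples, k=3):
        # Score each document by the 1-D Wasserstein (earth mover's)
        # distance between its sample and the query's sample.
        # Unlike a raw cosine score, this is a true metric (it satisfies
        # the triangle inequality), which is the property that lets you
        # put hard bounds on how far a retrieved item can be from the query.
        dists = [wasserstein_distance(query_sample, d) for d in doc_samples]
        return np.argsort(dists)[:k]  # indices of the k closest documents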

TL;DR: Yes, it’s realistic, but not if you use the standard RAG stack. You need stricter constraints on the context window.

lesserknowndan about 13 hours ago

Title: spelling "replacing".
