311 points by bblcla 4 days ago | 233 comments
woeirua 2 days ago |
simonw 3 days ago |
> But in context, this was obviously insane. I knew that key and id came from the same upstream source. So the correct solution was to have the upstream source also pass id to the code that had key, to let it do a fast lookup.
I've seen Claude make mistakes like that too, but then the moment you say "you can modify the calling code as well" or even ask "any way we could do this better?" it suggests the optimal solution.
My guess is that Claude is trained to bias towards making minimal edits to solve problems. This is a desirable property, because six months ago a common complaint about LLMs was that you'd ask for a small change and they would rewrite dozens of additional lines of code.
I expect that adding a CLAUDE.md rule saying "always look for more efficient implementations that might involve larger changes and propose those to the user for their confirmation if appropriate" might solve the author's complaint here.
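For illustration only, a minimal sketch of the refactor the quoted author describes, with hypothetical names (records, find_by_key, find_by_id are mine, not from the post): the upstream caller already has the id, so passing it down allows a constant-time lookup instead of a scan by key.

    # Hypothetical sketch of the key/id situation; names are illustrative.
    records = {1: {"id": 1, "key": "alpha"}, 2: {"id": 2, "key": "beta"}}

    def find_by_key(key):
        # What a "minimal edit" tends to produce: a linear scan over all records.
        return next(r for r in records.values() if r["key"] == key)

    def find_by_id(record_id):
        # The larger-but-better change: the upstream source also passes id,
        # so the downstream code can do a constant-time dict lookup.
        return records[record_id]

    # Upstream code that knows both values passes id down rather than key.
    upstream_id, upstream_key = 2, "beta"
    assert find_by_id(upstream_id) == find_by_key(upstream_key)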
maxilevi 3 days ago |
disconcision 3 days ago |
I agree that the examples listed here are relatable, and I've seen similar in my uses of various coding harnesses, including, to some degree, ones driven by Opus 4.5. But my general experience with using LLMs for development over the last few years has been a progression:
1. Initially, models could at best assemble simple procedural or compositional sequences of commands or functions to accomplish a basic goal, perhaps passing tests or type checking, but with no overall coherence,
2. to being able to structure small functions reasonably,
3. to being able to structure large functions reasonably,
4. to being able to structure medium-sized files reasonably,
5. to being able to structure large files, and small multi-file subsystems, somewhat reasonably.
So the idea that they are now falling down at the multi-module or multi-file or multi-microservice level is both not particularly surprising to me and not particularly indicative of future performance. There is a hierarchy of scales at which abstraction can be applied, and it seems plausible to me that the march of capability improvement is a continuous push upwards in the scale at which agents can reasonably abstract code.
Alternatively, it could be that there is a legitimate discontinuity here, at which anything resembling current approaches will max out, but I don't see strong evidence for that here.
lordnacho 3 days ago |
This is exactly why I love it. It's smart enough to do my donkey work.
I've revisited the idea that typing speed doesn't matter for programmers. I think it's still an odd thing to judge a candidate on, but I appreciate it in another way now. Being able to type quickly and accurately reduces frustration, and people who foresee less frustration are more likely to try the thing they are thinking about.
With LLMs, I have been able to try so many things that I never tried before. I feel that I'm learning faster because I'm not tripping over silly little things.
mikece 4 days ago |
Beyond that, what can Claude do... analyze the business and market as a whole, decide on product features, spot industry inefficiencies, do gap analysis, and then define projects to address those and coordinate fleets of agents to change or even radically pivot an entire business?
I don't think we'll get to the point where all you have is a CEO and a massive Claude account, but the more I think about it, it's not completely science fiction.
MarginalGainz 2 days ago |
The issue seems to be that LLMs treat code as a literary exercise rather than a graph problem. Claude is fantastic at the syntax and local logic ('assembling blocks'), but it lacks the persistent global state required to understand how a change in module A implicitly breaks a constraint in module Z.
Until we stop treating coding agents as 'text predictors' and start grounding them in an actual AST (Abstract Syntax Tree) or dependency graph, they will remain helpful juniors rather than architects.
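As a rough sketch of the dependency-graph idea (my illustration, not the commenter's; the "src" directory is hypothetical), Python's stdlib ast module is enough to extract module-level import edges, which is the kind of persistent global structure an agent could be grounded in:

    # Sketch: build a module-level import graph for a Python codebase.
    import ast
    from pathlib import Path

    def import_graph(src_dir):
        graph = {}  # module name -> set of modules it imports
        for path in Path(src_dir).rglob("*.py"):
            deps = set()
            tree = ast.parse(path.read_text())
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    deps.update(alias.name for alias in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    deps.add(node.module)
            graph[path.stem] = deps
        return graph

    # Before editing module A, an agent could consult the graph to see
    # which other modules depend on it, rather than relying on local context.
    print(import_graph("src"))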
ChicagoDave 3 days ago |
I might write something up at some point, but I can share this:
https://github.com/chicagodave/devarch/
New repo with guides for how I use Claude Code.
michalsustr 3 days ago |
alphazard 3 days ago |
Designing good APIs is hard, and being good at it is rare. That's why most APIs suck, and why all of us have a negative prior about calling out to an API or adding a dependency on a new one. It takes a strong theory of mind, a resistance to the curse of knowledge, and experience working on both sides of the boundary to make a good API. It's no surprise that Claude isn't good at it; most humans aren't either.
joshcsimmons 3 days ago |
Granted, it was building on top of tailwind (shifting over to radix after the layoff news). Which begs the question: what is a lego?
Scrapemist 3 days ago |
Havoc 2 days ago |
LLMs definitely can create abstractions and boundaries. For example, most will lean towards a pretty clean frontend vs backend split even without hints. Or work out a data structure that fits the need. Or split things into free-standing modules. Or structure a plan into phases.
So this really just boils down to "good" abstractions, which is subject to model improvement.
I really don't see a durable moat for us meatbags in this line of reasoning.
0xbadcafebee 2 days ago |
machiaweliczny 2 days ago |
malka1986 2 days ago |
100% of code is made by Claude.
It is damn good at making "blocks".
However, Elixir seems to be a language that works very well for LLMs, cf. https://elixirforum.com/t/llm-coding-benchmark-by-language/7...
iamacyborg 3 days ago |
https://docs.google.com/document/u/0/d/1zo_VkQGQSuBHCP45DfO7...
joduplessis 2 days ago |
anshumankmr 2 days ago |
doug_durham 3 days ago |
esafak 3 days ago |
Ha! I don't know what that has to do with anything, but this is exactly what I thought while watching Pluribus.
jondwillis 2 days ago |
EGreg 2 days ago |
geldedus 1 day ago |
iamleppert 2 days ago |
The developers who aren't figuring out how to leverage AI tools and make them work for them are going to get left behind very quickly. Unless you're in the top tier of engineers, I'm not sure how one can blame the tools at this point.
lxe 2 days ago |
"Claude tries to write React, and fails"... how many times? what's the rate of failure? What have you tried to guide it to perform better.
These articles are similar to HN 15 years ago when people wrote "Node.JS is slow and bad"
mklyachman 3 days ago |