86 points by glimshe about 3 hours ago | 73 comments
dijit 44 minutes ago |
In practice, I see expensive reinvention. Developers debug database corruption after pod restarts without understanding filesystem semantics. They recreate monitoring strategies and networking patterns on top of CNI because they never learned the fundamentals these abstractions are built on. They're not learning faster: they're relearning the same operational lessons at orders of magnitude higher cost, now mediated through layers of YAML.
Each wave of "democratisation" doesn't eliminate specialists. It creates new specialists who must learn both the abstraction and what it's abstracting. We've made expertise more expensive to acquire, not unnecessary.
Excel proves the rule. It's objectively terrible: roughly 30% of genomics papers contain gene-name errors from Excel silently converting names to dates, JP Morgan lost $6bn in part through spreadsheet formula errors, and Public Health England lost 16,000 COVID cases after hitting Excel's row limit. Yet it succeeded at democratisation by accepting catastrophic failures no proper system would tolerate.
The pattern repeats because we want Excel's accessibility with engineering reliability. You can't have both. Either accept disasters for democratisation, or accept that expertise remains required.
MontyCarloHall about 1 hour ago |
The first electronic computers were programmed by manually re-wiring their circuits. Going from that to being able to encode machine instructions on punchcards did not replace developers. Nor did going from raw machine instructions to assembly code. Nor did going from hand-written assembly to compiled low-level languages like C/FORTRAN. Nor did going from low-level languages to higher-level languages like Java, C++, or Python. Nor did relying on libraries/frameworks for implementing functionality that previously had to be written from scratch each time. Each of these steps freed developers from worrying about lower-level problems so they could focus on higher-level ones. Mel's intellect is freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex and much more capable, and thus much more common.
(The thing that distinguishes gen-AI from all the previous examples of increasing abstraction is that those examples are deterministic and often formally verifiable mappings from higher abstraction -> lower abstraction. Gen-AI is neither.)
CodingJeebus about 2 hours ago |
The pattern repeats because the market incentivizes it. AI has been pushed as an omnipotent job-killer by these companies because shareholder value depends on enough people believing in it, not on whether the tooling is actually capable. It's telling that folks like Jensen Huang talk about people's negativity towards AI as one of the biggest barriers to advancement, as if these companies should be immune from scrutiny.
They'd rather try to discredit the naysayers than actually work towards making these products function the way they're being marketed, and once the market wakes up to this reality, it's gonna get really ugly.
strict9 about 1 hour ago |
This is why those same mid-level managers and C-suite people are salivating over AI and mentioning it in every press release.
The reality is that costs are being reduced by replacing US teams with offshore teams. And the layoffs are being spun as a result of AI adoption.
AI tools for software development are here to stay, and they will keep advancing in the coming months and years. But the cost reductions are largely realized through onshore-to-offshore replacement.
The remaining onshore teams must absorb the extra slack and fixes, and in that sense they do end up more productive.
rectang 12 minutes ago |
Of course semi-technical people can troubleshoot; it's part of nearly every job. (Some are better at it than others.)
But how many semi-technical people can design a system that facilitates troubleshooting? Even among my engineering acquaintances, there are plenty who cannot.
klodolph about 2 hours ago |
/* Enlarge the first letter of each paragraph in a blog entry. */
.blog-entry p:first-letter {
  font-size: 1.2em;
}
dvcoolarun 26 minutes ago |
Service-led companies are doing relatively better right now. Lower costs, smaller teams, and a lot of “good enough” duct-tape solutions are shipping fast.
Fewer developers are needed to deliver the same output. Mature frameworks, cloud, and AI have quietly changed the baseline productivity.
And yet, these companies still struggle to hire and retain people. Not because talent doesn’t exist, but because they want people who are immediately useful, adaptable, and can operate in messy environments.
Retention is hard when work is rushed, ownership is limited, and growth paths are unclear. People leave as soon as they find slightly better clarity or stability.
On the economy: it doesn’t feel like a crash, more like a slow grind. Capital is cautious. Hiring is defensive. Every role needs justification.
In this environment, it’s a good time for “hackers” — not security hackers, but people who can glue systems together, work with constraints, ship fast, and move without perfect information.
Comfort-driven careers are struggling. Leverage-driven careers are compounding.
Curious to see how others are experiencing this shift.
walterbell about 2 hours ago |
Speaking of tools, that style of writing rings a bell... Ben Affleck made a similar point about the evolving use of computers and AI in filmmaking, wielded with creativity by humans with lived experiences: https://www.youtube.com/watch?v=O-2OsvVJC0s. Faster visual effects production enables more creative options.
svilen_dobrev about 1 hour ago |
https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD104...
johanbcn about 1 hour ago |
Here's an archived link: https://archive.is/y9SyQ
blahnjok about 1 hour ago |
But less thinking does seem to be essential, or at least that's what using the tools feels like.
I've been vibe-coding almost 100% of the time since Claude 4.5 Opus came out. I use it to review its own output multiple times, my team does the same, and then we use AI to review each other's code.
Previously, we whiteboarded and had discussions more than we do now. We definitely coded and reviewed more ourselves than we do now.
I don't believe AI is incapable of making mistakes, nor do I think that multiple AI reviews are yet enough to understand and solve problems. Some incredibly huge problems are probably on the horizon. But for now, the blanket claim that "AI will not replace developers" is false; our roles have changed: we are managers now, and for how long?
SonnyTark about 1 hour ago |
If educators use AI to write and update the lectures and assignments, students use AI to do the assignments, and then AI evaluates the students' submissions, what is the point?
I'm worried about some major software engineering fields experiencing the same problem. If design and requirements are written by AI, code is mostly written by AI, and users are mostly AI agents, what is the point?
Twey about 1 hour ago |
On top of the article's excellent breakdown of what is happening, I think it's important to note a couple of driving factors about why (I posit) it is happening:
First, and this is touched upon in the OP but I think could be made more explicit, a lot of people who bemoan the existence of software development as a discipline see it as a morass of incidental complexity. This is significantly an instance of Chesterton's Fence. Yes, there certainly is incidental complexity in software development, or at least complexity that is incidental at the level of abstraction most corporate software lives at. But as a discipline we're pretty good at eliminating it when we find it, though it sometimes takes a while — and the speed with which we iterate means we eliminate it a lot faster than most other disciplines do. A lot of the complexity that remains is actually irreducible, or at least we don't yet know how to reduce it.

A case in point: programming language syntax. The syntax of modern programming languages (where the commas go, whether whitespace means anything, how angle brackets are parsed) looks to the uninitiated like a jumble of arcane nonsense that must be memorized before you can start really solving problems, and indeed it's a real barrier to entry that non-developers, budding developers, and sometimes seasoned developers have to contend with. But it's also (a selection of competing frontiers of) the best language we have, after many generations of rationalistic and empirical refinement, for humans to unambiguously specify what they mean at the semantic level of software development as it stands! For a long time now, programming language syntax hasn't been constrained by the complexity or performance of parser implementations; instead, modern languages tend toward simpler formal grammars because those make it easier for _humans_ to understand what's going on when reading the code. AI tools promise to (amongst other things; don't come at me, AI enthusiasts!) replace programming language syntax with natural language. But natural language is a terrible syntax for clearly and unambiguously conveying intent!

If you want a more venerable example, look at mathematical syntax: a language that has never been constrained by computer implementation, developed by humans for humans to read and write their meaning in subtle domains efficiently and effectively. Mathematicians started with natural language and, through a long process of iteration, arrived at modern-day mathematical syntax. There's no push to replace mathematical syntax with natural language because, even though that would make some parts of the mathematical process easier, we've discovered through hard experience that it makes the process as a whole much harder.
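To make the ambiguity point concrete, here is a toy sketch (the spec, names, and numbers are hypothetical): the same plain-English requirement supports at least two different programs, and writing it down as code forces you to pick exactly one.

    # The English spec "give a 10% discount on orders over 100 and free
    # shipping for members" admits at least two readings; code pins one down.

    def total_reading_a(subtotal: float, is_member: bool) -> float:
        # Reading A: the discount applies to any order over 100;
        # membership only affects shipping.
        shipping = 0.0 if is_member else 5.0
        discount = 0.10 * subtotal if subtotal > 100 else 0.0
        return subtotal - discount + shipping

    def total_reading_b(subtotal: float, is_member: bool) -> float:
        # Reading B: both the discount and the free shipping are member
        # perks, and only on orders over 100.
        qualifies = is_member and subtotal > 100
        shipping = 0.0 if qualifies else 5.0
        discount = 0.10 * subtotal if qualifies else 0.0
        return subtotal - discount + shipping

    print(total_reading_a(120.0, False))  # 113.0
    print(total_reading_b(120.0, False))  # 125.0 -- same sentence, different totals

Natural language happily leaves that choice implicit; a formal syntax won't let you.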
Second, humans (as a gestalt, not necessarily as individuals) always operate at the maximum feasible level of complexity, because there are benefits to be extracted from the higher complexity levels and if we are operating below our maximum complexity budget we're leaving those benefits on the table. From time to time we really do manage to hop up the ladder of abstraction, at least as far as mainstream development goes. But the complexity budget we save by no longer needing to worry about the details we've abstracted over immediately gets reallocated to the upper abstraction levels, providing things like development velocity, correctness guarantees, or UX sophistication. This implies that the sum total of complexity involved in software development will always remain roughly constant.

This is of course a win, as we can produce more/better software (assuming we really have abstracted over those low-level details and they're not waiting for the right time to leak through into our nice clean abstraction layer and bite us…), but as a process it will never reduce the total amount of ‘software development’ work to be done, whatever kinds of complexity that may come to comprise. In fact, anecdotally it seems to be subject to some kind of Braess' paradox: the more software we build, the more our society runs on software, the higher the demand for software becomes. If you think about it, this is actually quite a natural consequence of the ‘constant complexity budget’ idea. As we know, software is made of decisions (https://siderea.dreamwidth.org/1219758.html), and the more ‘manual’ labour we free up at the bottom of the stack the more we free up complexity budget to be spent on the high-level decisions at the top.

But there's no cap on decision-making! If you ever find yourself with spare complexity budget left over after making all your decisions you can always use it to make decisions about how you make decisions, ad infinitum, and yesterday's high-level decisions become today's menial labour. The only way out of that cycle is to develop intelligences (software, hardware, wetware…) that can not only reason better at a particular level of abstraction than humans but also climb the ladder faster than humanity as a whole — singularity, to use a slightly out-of-vogue term. If we as a species fall off the bottom of the complexity window then there will no longer be a productivity-driven incentive to ideate, though I rather look forward to a luxury-goods market of all-organic artisanal ideas :)
jalapenos about 3 hours ago |
Well probably we'd want a person who really gets the AI, as they'll have a talent for prompting it well.
Meaning: knows how to talk to computers better than other people.
So a programmer then...
I think it's not that people are stupid. I think there's actually a glee behind the claims AI will put devs out of work - like they feel good about the idea of hurting them, rather than being driven by dispassionate logic.
Maybe it's the ancient jocks vs nerds thing.
COBOL was supposed to let managers write programs. VB was supposed to let business users make apps. Squarespace was supposed to kill the need for web developers. And now AI.
What actually happens: the tooling lowers the barrier to entry, way more people try to build things, and then those same people need actual developers when they hit the edges of what the tool can do. The total surface area of "stuff that needs building" keeps expanding.
The developers who get displaced are the ones doing purely mechanical work that was already well-specified. But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.