The Influentists: AI hype without proof (https://carette.xyz)

270 points by LucidLynx 5 days ago | 171 comments

pizzathyme 5 days ago |

My anxiety about falling behind with AI plummeted after I realized many of these tweets are overblown in this way. I use AI every day; how is everyone getting more spectacular results than me? Turns out: they exaggerate.

Here are several real stories I dug into:

"My brick-and-mortar business wouldn't even exist without AI" --> meant they used Claude to help them search for lawyers in their local area and summarize permits they needed

"I'm now doing the work of 10 product managers" --> actually meant they create draft PRDs. Did not mention firing 10 PMs.

"I launched an entire product line this weekend" --> meant they created a website with a sign-up form that shows a single JavaScript page; no customers

"I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF

minimaxir 5 days ago |

There are two major reasons people don't show proof about the impact of agentic coding:

1) The prompts/pipelines pertain to proprietary IP that may or may not be allowed to be shown publicly.

2) The prompts/pipelines are boring and/or embarrassing, and showing them would dispel the myth that agentic coding is a mysterious magical process and open people up to dunking.

For example, in the case of #2, I recently published the prompts I used to create a terminal MIDI mixer (https://github.com/minimaxir/miditui/blob/main/agent_notes/P...) in the interest of transparency, but those prompts correctly indicate that I barely had an idea of how MIDI mixing works, and in hindsight I'm surprised I didn't get harassed for it. Given the contentious climate, I'm uncertain how often I will be open-sourcing my prompts going forward.

caditinpiscinam 5 days ago |

Doesn't the existence of consumer products like ChatGPT indicate that LLMs aren't able to do human-level work? If OpenAI really had a digital workforce with the capabilities of ~100k programmers/scientists/writers/lawyers/doctors, etc., wouldn't the most profitable move be to utilize those "workers" directly, rather than renting out their skills piecemeal?

thegrim000 4 days ago |

You know, there are a number of different competing propaganda battles going on:

1) There are the people and companies that stand to make money and build up businesses by convincing people to buy their AI products, hyping up AI, etc.

2) There are companies and nation-states trying to destroy competitors' / other countries' AI efforts and turn citizens against them, in order to gain an advantage/lead in the race.

3) There are, conversely, nation-states that want to boost and promote their own AI industry in order to win the race rather than let other countries win (assuming there's a "win" at the end, like AGI, which I don't believe there is).

4) There are normal citizens who have been ideologically brainwashed one way or the other, and so go online to argue in a culture war for their beliefs / "side".

5) There are people posting crazy takes on AI, one way or the other, to get clicks/money on their articles.

The whole topic is awash in serious propaganda. Effectively the only path forward is believing what you yourself know for sure, from your direct experience / knowledge.

williamcotton 5 days ago |

Can someone please explain to me how I was able to construct this DSL in as short a time as I did?

https://github.com/williamcotton/webpipe

https://github.com/williamcotton/webpipe-lsp

Fully featured LSP (take a look at the GIFs in the repo), a step debugger, a BDD testing framework built into the language and runtime itself (novel!), asynchronous/join in pipelines (novel!), middleware for Postgres, jq, JavaScript, Lua, GraphQL (with data loaders), etc. It does quite a bit. Take a look at my GitHub timeline for an idea of how long this took to build.

It is 100% an experiment in language and framework design. Why would I otherwise spend years of my life handcrafting something where I just want to see how my harebrained ideas play out when actualized?

I would absolutely love to talk about the language itself rather than how it was made but here we are.

And I wrote my own blog in my own DSL. Tell me that's not just good old fashioned fun.

shermantanktop 5 days ago |

I’ve taken to calling this (in my mind) the Age of the Sycophants. In politics, in corporate life, in technology and in social media, many people are building a public life around saying things that others want to hear, with demonstrably zero relationship to truth or even credibility.

sleekest 5 days ago |

I agree, if the benefits are so large, there should be clearer evidence (that isn't, "trust me, just use it").

That said, I use Antigravity with great success for self hosted software. I should publish it.

Why haven't I?

* The software is pretty specific to my requirements.

* Antigravity did the vast majority of the work; it feels unworthy?

* I don't really want a project, but that shouldn't really stop me pushing to a public repo.

* I'm a bit hesitant to "out" myself?

Nonetheless, even though I'm not the person, I'm surprised there isn't more evidence out there.

HellDunkel 5 days ago |

This is a strange phenomenon where people get excited by the mere fact that someone else is excited by something that is not directly visible to the spectator. It works well in horror movies and, it seems, with AI hype.

tgma 5 days ago |

Being respected inside big companies has little to do with engagement on social media. Most of the best engineers are heads-down working. Arguably, shitposting on the internet may have a negative correlation with technical ability inside Google.

This is one of the times I think Apple's draconian approach toward employees speaking as associates of the firm without explicit authorization is the correct one.

AgentME 5 days ago |

LLMs are amazing and I do seriously wonder if the singularity could happen in my lifetime ... but there definitely are people over-hyping present capabilities too much right now. If present models were fully human-level proper-extended-Turing-test-passing AGI, then the results on the economy and the software ecosystem would be as immediately obvious and world-changing as a comet impact.

I don't think Rakyll or Andrej are claiming these things; I think they're assuming their readers share more context with them and that it's not necessary to re-tread that every time they post about their surprise at AI currently being better than they expected. I've had the experience multiple times now of reading posts from people like them, nodding along, and then reading breathless quote-tweets of those very same posts exclaiming about how it means that AGI is here right now.

arjie 5 days ago |

If you don't get the results you don't get the results. If someone else can use this tool to get the results, they'll out-compete you. If they can't, then they've wasted time and you'll out-compete them. I see these influencer guys as idea-generators. It's super-cheap to test out some of these theories: e.g. how well Claude can do 3D modeling was an idea I wanted to test and I did and it's pretty good; I wanted to test Claude as a debugging aid and it's a huge help for me.

But I would never sit down to convince a person who is not a friend. If someone wanted me to do that, I'd expect to charge them for it. So the guys who are doing it for free are either peddling bullshit or they have some other unspecified objective and no one likes that.

datsci_est_2015 5 days ago |

Anecdotally, I’m finding that, at least in the Spark ecosystem, AI-generated ideas and code are far from optimal. Some of this comes from misinterpreting the (sometimes poor) documentation, and some of it comes from, probably, there not being as many open source examples as CRUD apps, which AI “influentists” (to borrow from TFA) appear to often be hyping up.

This matters a lot to us because the difference in performance of our workflows can be the difference between $10/day in costs and $1,000/day in costs.

Just like TFA stresses, it's the team's expertise in pushing back against poor AI-generated ideas and code ("Surely this isn't the right way to do this?") that is keeping our business within reach of cash-flow positive.

doug_durham 5 days ago |

I never read the tweet as anything other than that an expert with deep knowledge of their domain was able to produce a PoC. Which I still find to be very exciting and worthy of being promoted. This article didn't really debunk much.

IncreasePosts 5 days ago |

Writing code is the easiest thing to do at Google. Getting past layers of hierarchy and nailing down what the code will actually do and who gets credit for it will take years for a major project.

int32_64 5 days ago |

Perhaps nobody wants to have the uncomfortable conversation that AI is making the competent more competent and the incompetent less competent, because it would imply that AI provides brutally unequal benefits. The AI haters don't want this discussion because it would imply AI has any benefits, and the AI lovers don't want to have this discussion because it would imply the benefits of AI aren't universal and will increase inequality.

fasouto 5 days ago |

The article nails the pattern, but I think it's fundamentally an incentives problem.

We're drowning in tweets, posts, news... (way more than anyone can reasonably consume). So what rises to the top? The most dramatic, attention-grabbing claims. "I built in 1 hour what took a team months" gets 10k retweets. "I used AI to speed up a well-scoped prototype after weeks of architectural thinking" gets... crickets.

Social platforms are optimized for engagement, not accuracy. The clarification thread will always get a fraction of the reach of the original hype. And the people posting know this.

The frustrating part is there's no easy fix. Calling it out (like this article does) gets almost no attention. And the nuanced follow-up never catches up with the viral tweet.

kaboomshebang 5 days ago |

Good article. "Hype first and context later." Loads of online content has become highly sensational. I notice this on YouTube (especially thumbnails and titles); it seems to be a trend. I wonder if, collectively, we'll develop a "shield" for this (more critical thinking?).

kinduff 5 days ago |

I think humans are proxying their value through what they can do with AI. It's like a domestication flex.

ankit219 5 days ago |

It's a strange phenomenon. You want to call out the BS, but then you are just giving them engagement and a boost. You want to stay away, but there is a sort of confluence where these guys tend to ride on each other's posts and boost those posts anyway. If you ask questions, they very rarely answer, and if they do, it takes one question to unearth that it was the prompt or the skill. E.g., Hugging Face people post about Claude fine-tuning models. How? They gave everything in a skill file, and Claude knew what scripts to write. Tinker is trying the same strategy. (Yes, it's impressive that Claude could fine-tune, but not as impressive as the original claim that made me pay attention to the post.)

It does not matter if they get the details wrong; it just needs to be vague enough and exciting enough. In fact, the vagueness and not sharing the code signal that they are doing something important, or that they are "in the know" about something they cannot share. The incentives are totally inverted.

mentalgear 5 days ago |

Great post! Indeed, it's deeply disappointing to see how both the tech industry and the scientific community have fallen into the same attention-seeking trap: hyping their work with vague, sensational claims, only to later "clarify" with far more grounded—and often mundane—statements.

This tactic mirrors the strategies of tabloids, demagogues, and social media’s for-profit engagement playbook (think Zuckerberg, Musk, and the like). It’s a race to the bottom, eroding public trust and undermining the foundations of our society - all for short-term personal gain.

What’s even more disheartening is how this dynamic rewards self-promotion over substance. Today’s "experts" are often those who excel at marketing themselves, while the most knowledgeable and honest voices remain in the shadows. Their "flaw"? Refusing to sacrifice integrity for attention.

davesque 5 days ago |

Almost every aspect of public life on social media nowadays is guided by sensationalism. It's simply a numbers game, and the "number" is engagement. Why would you do anything that's not completely geared towards engagement?

ahmetomer 5 days ago |

The original post by Jaana made 8.4 million impressions, while the follow-up that included the previously/deliberately omitted context and some important information, such as "what I built is a toy version", has 277K impressions as of right now.

I respect Jaana and have been following her for years. I'd expect her to know how that claim would be understood. But I guess that's the only way to go viral nowadays.

Also, this incident goes to show how the self-proclaimed AI influencer, Rohan Paul, puts a lot of thought and importance into sharing "news" about AI. As if it were not enough to share Jaana's bold claim without hesitation, he also emphasized it with an illustrious commentary: "Dario Amodei was so right about AI taking over coding."

Slop, indeed.

kfarr 5 days ago |

Like everything in LLM land it's all about the prompt and agent pipeline. As others say below, these people are experts in their domain. Their prompts are essentially a form of codifying their own knowledge, as in Rakyll and Galen's examples, to achieve specific outcomes based on years and maybe even decades of work in the problem domain. It's no surprise their outputs when ingested by an LLM are useful, but it might not tell us much about the true native capability of a given AI system.

mylittlebrain 4 days ago |

Off topic, but AI hype is appearing in hardware and consumer electronics. I just bought a rechargeable hand warmer that claims to "Have Intelligent AI Temperature Control Chips That Can Accurately Control the Temperature." At least in the early days of the transistor, radios did have them.

bravetraveler 5 days ago |

Influentists: LLM or get left behind! Also Influentists: surprisingly, not suggesting some friendly bioterrorism for the upper hand... or is this the real motivation for RTO? I wouldn't know.

Anyway, I'll worry when the dead weight disappears and ceases to be replaced. Shields and energy reserves are critical, etc.

LAC-Tech 5 days ago |

This is very well said - it is much nicer and more professional than the sentiments I could express on the matter.

The age of niche tech microcelebrities is on us. It existed a bit in the past (ESR, Uncle Bob, etc), but is much more of a thing now. Some of them make great content and don't say ridiculous things. Others not so much.

Even tech executives are aping it...

dang 5 days ago |

Recent and related (were there others?):

Google engineer says Claude Code built in one hour what her team spent a year on - https://news.ycombinator.com/item?id=46477966 - Jan 2026 (81 comments)

tossandthrow 5 days ago |

Influencers generally don't get to me.

Sitting for 2 hours with an AI agent developing end-to-end products does.

pbasista 5 days ago |

> We must stop granting automatic authority to those who rely on hype, or vibes, rather than evidence.

> The tech community must shift its admiration back toward reproducible results and away from this “trust-me-bro” culture.

Well said, in my opinion.

mekoka 5 days ago |

AI is like flossing. You waste more time listening to other people's opinions on whether it's helpful, than just trying it out yourself for a few days.

imiric 5 days ago |

I'm still waiting for all these remarkable achievements produced with this new technology to provide tangible value to the world. Surely we should be seeing groundbreaking products and innovations in all industries by now, improving the lives of millions of people.

Instead all we get is anecdata from influencers and entrepreneurs, and the technology being shoved into every brand and product. It's exhausting.

At least it seems that the tide is starting to turn. Perhaps we are at the downward slope of the Peak of Inflated Expectations.

fuefhafj 5 days ago |

A recent favorite of mine is simonw, who usually can't stop hyping LLMs, suddenly forgetting they exist in order to rhetorically "win" an argument:

> If you're confident that you know how to securely configure and use Wireguard across multiple devices then great

https://news.ycombinator.com/item?id=46581183

What happened to your overconfidence in LLMs' ability to help people without previous experience do something they were unable to do before?

yoz-y 5 days ago |

This is the inverse of a fitness influencer.

They're claiming the steroids they're taking do all the work, and that they don't need to put in work anymore.

DotaFan 5 days ago |

I think this "trend" is due to AI companies paying (in some form) the influencers to promote AI. Simple as that.

dcre 5 days ago |

To me, debunking hype has always felt like arguing with an advertisement. A good read about that: https://www.liberalcurrents.com/deflating-hype-wont-save-us/

keyle 5 days ago |

Great article. This needs to be framed. The whole trust-me-bro, shock-and-awe side of social media is a non-stop assault these days. You can't open a feed without seeing those promoted up front and centre, without any proof.

If AI was so good today, why isn't there an explosion of successful products? All we see are these half-baked "zomg so good bro!" examples that are technically impressive but decidedly incomplete, or really just proofs of concept.

I'm not saying LLMs aren't useful, but they're currently completely misrepresented.

Hype sells clicks, not value. But, whatever floats the investors' boat...

aeneas_ory 5 days ago |

Thank you for calling this out; we are being gaslit by attention-seeking influencers. The algorithmic brAInrot is propagated by those we thought we could trust, just like the Instagram and YouTube stars we cared about who turned out to be monsters. I sincerely hope those people become better or wane into meaninglessness. Rakyll seems to spend more time on X than working on advancing good software these days, a shame given her past accomplishments.

Legend2440 5 days ago |

Idk man, all AI discussion feels like a waste of effort.

“yes it will”, “no it won’t” - nobody really knows, it's just a bunch of extremely opinionated people rehashing the same tired arguments across 800 comments per thread.

There’s no point in talking about it anymore, just wait to see how it all turns out.

AIorNot 4 days ago |

I mean this is just how social media works these days for anything

Our social hype meter is broken beyond exaggeration.

A single Karpathy tweet launched hundreds of LinkedIn, YouTube, and Substack thinkpieces and business articles, sometimes more than actual model releases do.

These days anything that gets our social attention is driven to 11

charles_f 5 days ago |

This is well written.

There is proof that AI isn't what they all make it out to be in these companies' acquisitions. Why would Anthropic need Bun, or OpenAI need Windsurf, for billions, if agents are all-knowing and ready to replace devs?

This is modern marketing, based on FUD and sensationalism.

j45 5 days ago |

The more expressive the AI talkers are, the less likely they are to come from a tech background that understands what the technology could actually do.

Someone mentioned to me that they're like the old-time paperboys who used to yell "Extra! Extra!" and announce something to sell newspapers.

Fazebooking 5 days ago |

It's still not just hype. It's still crazy what is possible today, and we still have no clarity at all on whether this progress will continue as it has; the implication being that if it does continue, it has major consequences.

My wife, who has no clue about coding at all, built a very basic Android app with only ChatGPT's guidance. She would never have been able to do this in 5 hours or so without it. I DID NOT HELP HER at all.

I'm "vibecoding" small stuff for sure, non-critical things for sure, but let's be honest: I'm transforming a handful of sentences and requirements into real working code, today.

Gemini 3 and Claude Opus 4.5 definitely feel better than their previous versions.

Do they still fail? Yeah, for sure, but that's not the point.

The industry continues to progress on every single aspect of this: tooling like the Claude CLI, Gemini CLI, and IntelliJ integration; context length; compute; inference time; quality; depth of thinking; etc. There is no plateau currently visible at all.

And it's not just LLMs, it's the whole ecosystem of machine learning: the highly efficient weather model from Google, AlphaFold, AlphaZero, robotics movement, environment detection, image segmentation, ...

And the power of Claude, for example, you will only get by learning how to use it: telling it your coding style, your expectations regarding tests, etc. We often assume that an LLM should just be the magical 10x-programmer colleague, but it's everything and nothing. If you don't communicate well enough, it is not helpful.

And LLMs are not just good at coding; they're great at reformulating emails, analyzing error messages, writing basic SVG files, explaining Kubernetes cluster status, being a friend for some people (see character.ai), explaining research papers, finding research, summarizing text; the list is way too long.

In 2026 alone, so many new datacenters will go live, adding so much more compute again, that research will continue to get faster and more efficient.

There is also no current bubble to burst. Google fights Microsoft, Anthropic, and co., while on a global level the USA competes with China and the EU on this technology. The richest companies on the planet are investing in this tech, and they did not do this with Bitcoin because they understood that Bitcoin is stupid. But AI is not stupid.

Or rather: machine learning is not stupid.

Do not underestimate the current state of the AI tools we have; do not underestimate the speed, continuous progress, and potential exponential growth of this.

My timespan expectation for obvious advancements in AI is 5-15 years. Experts in this field already predict 2027/2030.

But to iterate on this: a few years ago, no one would have had a good idea of how we could transform basic text into complex code in such a robust way, with such diverse input (different languages, missing specs, ...). No one. Even "just generating a website".

drnick1 5 days ago |

> X trackers and content blocked

> Your Firefox settings blocked this content from tracking you across sites or being used for ads.

Why is this website serving such crap?

For God's sake, if there is anything absolutely worth showing on X, just include a screenshot of it instead of subjecting us all to that malware.

tin7in 5 days ago |

I'm really surprised how much pushback and denial there still is from a lot of engineers.

This is truly impressive and not only hype.

Things have been impressive at least since April 2025.