Hacker News

Stanford report highlights growing disconnect between AI insiders and everyone (https://techcrunch.com)

263 points by ZeidJ 4 days ago | 403 comments

ike2792 4 days ago |

This has mirrored what I've seen in my company. People in the data science/ML part of the company are super excited about AI and are always giving presentations on it and evangelizing it. Most engineers in other areas, though, are generally underwhelmed every time they try using it. It's being heavily pushed by AI "experts" and senior leaders, but the enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises that the "experts" keep making. Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle. You can only fool people for so long.

CobrastanJorji 4 days ago |

I think people are really underestimating how poorly today's tweens think of AI. "That looks like chatgpt" is an insult. Kids avoid things because they heard somewhere that AI might have been involved and have a sense that means it is bad or immoral or illegal or cheating in some nebulous way, and it's reinforced by their teachers telling them that using AI for homework is cheating.

I think this next generation is going to come up fundamentally believing that AI is generally a bad thing, and it's going to surprise older people.

belval 4 days ago |

This is poor reporting, almost needs a checklist:

[X] Tweets and instagram comments presented as "what society is thinking"

[X] Ties Luigi Mangione and the California warehouse fire to Gen Z discontent (about AI?).

[X] Statistics being used to support the title with little to no regard for continuity: "those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period" => the percentage was 52% in 2023, 50% in 2024, and 52% in 2025, which seems mostly flat to me; the real jump was 2022 to 2023, from 39%.

simonw 4 days ago |

I was talking recently to someone who teaches AI-adjacent courses at a US university (not in a computer science department) and they said that enrollment in their class is lower than expected, which they think is likely due to the severity of the AI backlash among students on campus.

nacozarina 4 days ago |

a person can have full faith in the potential value of ai science and simultaneously have zero faith in the current crop of business stewards of that science.

no one is questioning the underlying model mathematics, they are questioning deceptive & reckless stewards.

advael 4 days ago |

AI continues to be a stupidly vague term, and the example I keep going back to is present in this article.

Meaningful advances in medical diagnosis are not coming from chatbot companies. Some are coming from machine learning methods. Perhaps measuring public sentiment about such a vagary is not a very productive way to quantify anything.

That said, I also continue to be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world and what they do with that power, which - as many in the comments already point out - is what the vast majority of people are actually mad about, and right to be.

MrScruff 4 days ago |

I think it's not that difficult to see why a technology that will likely trigger widespread unemployment during a cost of living crisis, an arms race with China, along with all the alignment concerns, might not be hugely popular with the public.

Maybe I'd be a bit more optimistic if someone could explain a realistic economic scenario for how we're going to transition into our utopian abundant future without a depression or a revolution.

algoth1 4 days ago |

My wife has a very serious health issue that has caused more suffering than words could describe. o1-preview was the first AI that actually proved useful. From there on, each improvement in AI brought an incremental improvement in her situation. Even recently, we were able to pinpoint exactly what was causing her flare, and solve the situation the same day, just by prompting a Claude Opus conversation where I've shared all her health notes. But if I weren't a data freak and hadn't been collecting data about her issues (what she does/takes and how she feels) for so long, I don't think we would have been able to get this far. So I think AI appeals to people with problems that can be solved by finding patterns in data. People who say AI makes mistakes don't understand that the power is in finding patterns, not in finding THE right answer. You need to prompt from that perspective.

janalsncm 4 days ago |

It is worth pointing out that we got here despite all of the “alignment” research and safetyism surrounding the models. As it turns out, the models don’t wake up and start destroying things. We knew this all along, but every time a new article came along and anthropomorphized and exaggerated another experiment it fed the clickbait machine.

The fundamental alignment issue is aligning the companies themselves with society, not the models with the companies. Widespread unemployment is not aligned with society, but it is aligned with Anthropic and OpenAI if it makes them rich.

Therefore the only “harms” the companies will take seriously are those which also harm the company. For example reputational harms from enabling scams aren’t allowed.

Perhaps all of this isn’t fair, since companies actively subverted safety research for profitability. But then I would go back to my earlier point of over-indexing on unintended behaviors and under-indexing on intended ones.

munificent 4 days ago |

> Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years.

Imagine choosing to be an expert in something that you think is a coin flip away from making the world worse.

rconti 4 days ago |

It doesn't help that AI "thought leaders" can't articulate a vision by which our lives will improve rather than be made worse.

It looks like:

1. They take billions in investment

2. They spend trillions

3. They and their investors profit in the quadrillions from all the "labor saving"

4. ???

5. Everyone's needs are met.

maplethorpe 4 days ago |

In case you're wondering who they mean by "AI experts", I checked the Pew poll:

> Note: “AI experts” refer to individuals whose work or research relates to AI. The AI experts surveyed are those who were authors or presenters at an AI-related conference in 2023 or 2024 and live in the U.S. Expert views are only representative of those who responded.

adamddev1 4 days ago |

I don't know how many times I've seen some Google AI summary or ChatGPT response with references that, when I checked, did not say what the AI summary said. If a high school student falsified references in a paper like this, they would get a bad or failing grade. This is bad, not acceptable, the teacher would say.

But we have been sold to use these constantly falsified AI summaries as the go-to source of "truth" by all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.

gcheong 4 days ago |

"Make something people want" seems so quaint now.

jjulius 4 days ago |

>... with Gen Z reportedly leading the way...

The kids are alright.

neuronic 4 days ago |

I work with LLMs extensively and daily and they are very useful. BUT dear god, absolutely nothing about them is intelligent.

If you work at the edge of context you know what I mean. Even within context, if the system was truly intelligent, the way that Euclid was intelligent, why do I need /superpowers and 50 cycles to get a certain implementation right?

Why is the AI not one-shotting obscure but simple business logic cases with optimal code? Whoops pattern never seen before! There is no thought to it, zero. The LLM is just shotgunning token prediction and context management until something sticks. The amount of complexity you get out of language is certainly fascinating and surprising at times but it's not intelligence - maybe part of it?

Sell it as skills or whatever, but all you do every day is fancy ways of context management to guardrail the token predictor algorithm into predicting the tokens that you want.

nprateem 4 days ago |

I think it's pretty clear that the problems with AI are:

1. Overhyped. Try writing a blog post that doesn't sound like it. Everyone is sick of reading it now.

2. Affecting the wrong people. It used to be that the rich got richer and the poor got poorer. But now a lot of the middle class will get poorer.

3. Severely damages the "work hard" way out. Competition will become brutal if there's almost no barrier to entry. This will drive down profits, affect hiring, and become a conveyor belt of people trying to win the business lottery. This will make moats even more essential.

4. The obvious theft of creative works which destroys dreams and livelihoods.

No wonder the younger generation are against it. Those of us in the middle are still just hoping we can get through somehow. At least we have hope.

zb3 4 days ago |

People are anti AI for obvious and valid reasons, but I think we should focus on where the profit goes and not on hating the technology itself.

Of course, if people are fired and only capital owners / AI experts get to earn anything then this is wrong and a revolution is obviously needed and unavoidable.

But for me, the best outcome would be if AI did all the jobs so people could focus on doing what they want, not that we'd go back to the pre-AI era.

Initially however we need to balance between full wealth redistribution and keeping the incentive to develop AI further.

Of course by AI I mean really useful AI, the real part, not the marketing part.

ofjcihen 4 days ago |

Been saying this for a bit but the things I’ve seen associated with AI seem to be the things that it’s pretty mid at. Coding, automated actions etc. I wholeheartedly believe adoption and perception would be better if the things it was amazing at were pushed more.

Take log review for example. Whether it’s admin or security LLMs are incredible at reading awfully formatted logs and even using those to pull meaning from other logs as well. Like turn an hour long log review into a 10 minute log review type thing.
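The workflow described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the comment: the helper names and the prompt wording are made up, and the actual model call is left out since any chat-style LLM API would do. The idea is just to chunk raw, messy logs so each piece fits in context, and wrap each chunk in a review instruction:

```python
# Hypothetical sketch of LLM-assisted log review: chunk a raw log and
# wrap each chunk in a review prompt. Helper names and prompt wording
# are illustrative only; the model call itself is omitted.

def chunk_log(lines, max_lines=200):
    """Split a raw log into fixed-size chunks so each fits in context."""
    for i in range(0, len(lines), max_lines):
        yield lines[i:i + max_lines]

def build_review_prompt(chunk):
    """Wrap one chunk of messy log lines in a review instruction."""
    body = "\n".join(chunk)
    return (
        "You are reviewing server logs. Summarize errors, anomalies, "
        "and anything security-relevant, quoting the relevant lines.\n\n"
        f"--- LOG CHUNK ---\n{body}\n--- END ---"
    )

# Toy input: five awkwardly formatted log lines.
raw = [f"2026-04-13T10:{i:02d} worker[{i}] ERR disk full" for i in range(5)]
prompts = [build_review_prompt(c) for c in chunk_log(raw, max_lines=2)]
print(len(prompts))  # 5 lines in chunks of 2 -> 3 prompts
```

Each prompt would then be sent to whatever model is available, and the summaries stitched together; the chunking is what keeps arbitrarily long logs workable.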

Yokohiii 4 days ago |

My only surprise is that the AI "elite" is surprised.

sotix 4 days ago |

My experience has been that the disconnect is between the Bay Area and everywhere else. The engineers at my company are split 50% in the Bay Area and 50% elsewhere. The engineers in the Bay treat it as a borderline religion. They evangelize it, and do not allow any form of criticism. It reminds me of the hippie movement: idealistic and not grounded in reality.

gaigalas 4 days ago |

https://hai.stanford.edu/assets/files/ai_index_report_2026.p...

> The United States reported the lowest trust in its own government to regulate AI responsibly of any country surveyed, at 31%.

It seems US citizens are really against the current administration, and are using the fact that AI investment is intrinsically connected to it to voice their opposition.

> Country-level expectations follow similar patterns to the earlier sentiment trends. Nigeria, Japan, Mexico, the United Arab Emirates, South Korea, and India all expected AI to create more jobs than it eliminates, with shares above 60%. The United States and Canada sat at the opposite end, where 67% and 68% of respondents expected AI to eliminate jobs and disrupt industries.

Globally, the disconnect is not growing. It's really just a U.S. problem (spilling over to neighbouring Canada too).

So, no Luddites in sight, again. It's just public perception of a polemic topic being leveraged for ideological reasons, sinking AI in the US only.

JumpCrisscross 4 days ago |

The lack of federal permitting standards for AI data centers is really going to bite the industry in the ass. We also probably need something akin to the WARN Act for AI-related layoffs. (Possibly with multi-year benefits for large companies.)

thepasch 4 days ago |

This AI rollout has been fundamentally rushed and fucked from the very beginning and I think the people who are responsible for doing it this way have done more irreparable damage to society than any single group of humans in the entire history of the species, and I mean it.

It’s always only ever about how the new model is faster, better, smarter. Or how the tech will be bringing ruin to the job market and someone should probably do something about that some time soon. Zero efforts to create any sort of educational content - how it even works, how to vet its output, how to have an eye for confabulation, how to use it as thinking enhancement rather than replacement, to keep in mind that it’s trained to please and will literally generate anything to cause users to click the thumbs up button. Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”

And then everyone’s surprised when an utterly improvident handling of 4o kicks off the biggest concentrated wave of AI psychosis seen yet. Because, surprise! When you give people a model that’s trained to anthropomorphize itself, people who have no idea about any of this tech and have no access to education about any of it might believe it’s more than it is! Boy, who’d’ve thunk; isn’t the world complex?!

This was a symptom of this exact same disease. I have far less worry about the tech and far more worry about how the disconnected venture capital caste is inflicting it upon us.

SunshineTheCat 4 days ago |

Giant leaps in innovation almost always have a reaction like this.

It's new, people fear it. Sometimes justified, usually not.

People greatly feared the car because of the number of horse-related jobs it would displace.

President Benjamin Harrison and First Lady Caroline Harrison feared electricity so much they refused to operate light switches to avoid being shocked. They had staff turn lights on/off for them.

Looking back at these we might laugh.

We're largely in the same boat now.

It's possible AI will destroy us all, but judging from history, irrational reactions to something new aren't exactly unprecedented.

spprashant 4 days ago |

I don't think the disconnect is very surprising to the "insiders".

Your Darios and Sams know exactly what they are doing. They know it's going to cause a lot of job displacement, even if the technology isn't perfect. They are trying to get the C-suite elite hyped up about it, and the hyperscalers are along for the ride as well. There's so much money to be made.

They could not care less about what joe schmoe on the street thinks about it.

largbae 4 days ago |

Well, we can easily see that the "abundance" people are wrong (for example, everyone can't have a penthouse apartment overlooking Central Park, no matter how capable the robots become).

An alternative possibility is that inequality is about to explode between those who profit from AI/robotic labor and those displaced by it.

vrganj 4 days ago |

AI is a religious icon for capitalist ideologues.

A silicon savior to finally free capital from the dependence on labor with all its pesky demands like sick leave or a living wage.

You can see this in the literal deification going on in VC circles. AGI is the capitalist version of the Second Coming, God coming down to earth to redeem them by finally solving the contradictions in their world view.

Unfortunately for them and fortunately for the rest of us, it's not all they hope it to be.

zmmmmm 4 days ago |

My own anecdotal experience is yes, there is a real visceral hatred of AI among Gen-Z. You have to look at it through a lens where they already feel like there's been a massive amount of intergenerational theft against them - particularly with the housing market putting owning a home out of reach, along with the evaporation of the concept of a stable career. Now they are going through education learning skills that they are incessantly hearing will have no purpose and there will not be jobs for them.

It's hard not to see that they have a point. If AI is so great and going to save so much money - how about starting by paying some of that forward? Suddenly when you ask the billionaires or AI tech elite to share any of the wealth they are so confident they will generate, everyone backs away fast and starts to behave like it is all a speculative venture. So which one is it?

NooneAtAll3 4 days ago |

Can someone explain what kind of AI-related regulations there are in Singapore and Indonesia to get such a high trust score?

66yatman 3 days ago |

Everyone is trying to keep their jobs that's all.

therobots927 4 days ago |

What the tech elite fail to understand is that we are at historic levels of wealth and income inequality. Access to healthcare is determined by one’s employment which makes what I’m about to explain a matter of life and death.

It doesn’t matter if you think it’s all going to work out and AI will bring an unprecedented era of abundance. That is not the current state.

The current state is: Nearly all productivity growth since 1980 has gone to shareholders, not workers: https://www.epi.org/productivity-pay-gap/

Now what do you think happens when we dramatically expand productivity with AI? Well, we’re already seeing unprecedented layoffs in tech. And it’s easy to draw the conclusion that unless something structural changes all of the productivity gains from AI will go to investors not workers. Leaving said workers without access to healthcare or housing.

And of course let’s not forget that the tech elite in question supported Trump in the last election - someone who has done everything in his power to reduce healthcare access among the low income / unemployed population. This isn’t fucking rocket science guys.

epgui 4 days ago |

This article seems to be using the word “expert” quite imprecisely.

markus_zhang 4 days ago |

Regardless, I think we are going to see an acceleration of AI research.

I just wish my wife were more serious about camping and learning survival skills. I think shit is going to hit the fan in the next 5-10 years, but she thinks that's crazy. Oh well, maybe I am crazy.

daveguy 4 days ago |

One of the most hilarious AI-vangelical posts I've seen recently is from Steve Yegge via Simon Willison [0].

> The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too... [0]

Ummmm... Steve. You'd think Google might be able to figure out a super huge awesome new thing from 1 out of 5 of their employees. Or, given this is a consistent curve across the industry (even at Google)... maybe AI is only about a fifth as cool and helpful as you and the enthusiasts think it is?

[0] https://simonwillison.net/2026/Apr/13/steve-yegge/#atom-ever...

hcmgr 4 days ago |

The tone deafness of the tech community is so unbearable. Either too on the spectrum, too ambitious (the world is fine cause I'm getting mine), or too isolated from non-tech people to realise most people despise what they're creating.

There’s also a lack of willingness to ‘bring along’ the public. It’s just “make the god thing; ask for permission later”.

jarjoura 4 days ago |

This is 10000% OpenAI's fault.

In 2022 the world was open arms, welcoming AI advancements.

However, since 2022, OpenAI and all of its original founding researchers had their dramatic falling-out and began screaming in public, saying crazy-person things like "the end is coming."

Why did they insist on force-launching ChatGPT? Google at the time refused to launch its own LLM-based chat (it was their own research that gave birth to LLMs) because they knew the negative outcomes and unreliability of it all made for a poor product experience.

Instead of launching quietly like DALL-E and keeping it fun and experimental, nope, they threw it up online and moved full steam ahead.

"THE END IS COMING" Sam Altman said. "AI WILL TAKE YOUR JOBS WITHIN 5 YEARS" Dario said. "AGI IS ALMOST HERE" Elon Musk said.

The disconnect is because these specific men, making those specific bold crazy person claims, with zealous cult following employees (including many of us here in this forum), kept marching ahead. Not only that, no one asked the rest of the world if they even wanted this technology EVERYWHERE.

This technology could have been so cool if it were given the breathing room to find use cases for it. Natural language programming has been tried for half a century, and it has finally arrived.

Yet it's so tainted by all the crazy-person speak and doomsday messaging, and it was thrown out there in such a haphazard way that it has burned so many bridges, that this technology is truly toxic. The fact that Gen-A and Gen-Z now have to waste brain power speculating whether something is AI generated is such a waste, but here we are. Welcome to the shit storm that was entirely made by those men.

cmiles8 4 days ago |

I have seen this shift myself. A year ago everyone was super excited by AI. Now, if you exit the tech ecosystem, most people have become decidedly “meh” about the tech.

“Is that some nonsense ChatGPT told you?” has turned into an almost cynical mocking response to someone commenting about an issue.

The hype seems to have run its course. I'm a fan and use it constantly, but it's also clear there are serious storm clouds and headwinds on the horizon.

mnmnmn 4 days ago |

Funniest shit I heard all day. lol at all the genius “insiders” who the rest of us justifiably hate

goekjclo 4 days ago |

Makes sense.

cynicalsecurity 4 days ago |

Paraphrasing the classic, it's not AI that people are unhappy with, it's their life around AI. The world generally appears to have become a harsher and more dangerous place - even though it hasn't. But people and especially tabloid press like finding scapegoats and participating in mass hysteria. The anti-AI hysteria is going to go away soon while AI isn't. It's just another tool, like cars or factories. Granted, it brings some danger, but at the same time it brings overwhelmingly more good.

slopinthebag 4 days ago |

If "AI" was just free local and open models running on consumer hardware, fewer people would have an issue with it. Which highlights that the issue is with the hyper scalers, the rhetoric, the corporations, the marketing, etc etc.

We are ever so close to the point where 90% of our AI usage can go through providers of open models, who all compete with each other to drive down prices and prevent rug pulls, leaving Dario and Sam holding empty bags.