Hacker News

Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning (https://papers.ssrn.com)

189 points by Anon84 1 day ago | 110 comments

gregfrank 15 minutes ago |

This framing points at something important that I think the alignment evaluation literature often misses: the distinction between what a model represents internally and what it does behaviorally. Probing can tell you what's in the representations, and linear probes can be surprisingly accurate. But in experiments I've run on DeepSeek and Qwen models, high probe accuracy for a given behavior doesn't predict whether the model actually routes through that behavior at inference time. The detection layer and the routing layer are architecturally separable, and most evaluation benchmarks are measuring the former while claiming to measure the latter.
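
For concreteness, here's a minimal sketch of the kind of linear probing I'm describing. The model name, layer choice, pooling, and labels are all illustrative assumptions, not my actual experimental setup:

    # Minimal sketch: fit a linear probe on a model's hidden states.
    import numpy as np
    import torch
    from sklearn.linear_model import LogisticRegression
    from transformers import AutoModel, AutoTokenizer

    model_name = "Qwen/Qwen2-0.5B"  # illustrative stand-in
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    def hidden_state(text, layer=-1):
        # Mean-pooled hidden state of one prompt at the given layer.
        inputs = tok(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        return out.hidden_states[layer].mean(dim=1).squeeze(0).numpy()

    # Hypothetical labeled prompts: 1 = behavior present, 0 = absent.
    texts = ["prompt where the behavior shows up", "prompt where it doesn't"]
    labels = [1, 0]

    X = np.stack([hidden_state(t) for t in texts])
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    # High accuracy says the representation *encodes* the behavior; it
    # says nothing about whether inference actually routes through it.
    print(probe.score(X, labels))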

bsenftner about 12 hours ago |

A major problem with LLM AIs is that their core nature is not understood by the vast majority of people, developers included. They are an embodiment of literature, and if that confuses you, you're probably operating on an incorrect definition of them.

I like to think of them as idiot savants, with exponentially more savant than your typical fictional idiot savant. They pivot on every word you use, each word in your sequence activating areas of training knowledge, until your prompt completes and the LLM is logically located at some biased perspective on the topic you seek (assuming your wording was not vague and did not rely on implied references). Few seem to realize there is no single "one topic" for each topic an LLM knows; there are numerous perspectives on every topic. Those perspectives reflect the reasons different people/groups use that topic, and their technical seriousness within it. How you word your prompts dictates which of these perspectives your ultimate answer is generated from.

When people say their use of AI reflects a mid-level understanding of whatever they prompted, that is because the prompt is worded in the language used by people with a mid-level understanding. If you want the LLM to respond with expert guidance, you have to prompt it using the same language and terms that the expert you want would use. That is how you activate that area of its training to generate a response.

This goes further when using a coding AI. If your code has the structure of a mid-level developer's, that creates a strong preference for mid-level developer guidance, because that is what is relevant to your code's structure. It takes a well-written prompt using PhD/professorial computer science terminology to work with a mid-level code base and still get advice that would improve that code beyond its mid-level architecture.
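
A toy illustration of what I mean, where ask_llm is a hypothetical stand-in for whatever model interface you use; only the wording differs between the two calls:

    # ask_llm is a hypothetical stand-in for your actual LLM interface.
    def ask_llm(prompt: str) -> str:
        return f"(response to: {prompt!r})"  # replace with a real call

    # Wording that activates mid-level-developer areas of training:
    mid = ask_llm("my sort is slow on big lists, how do i make it faster?")

    # Wording that activates expert/academic areas of the same topic:
    expert = ask_llm(
        "Analyze the asymptotic complexity of this comparison sort and "
        "propose a cache-aware alternative with better constant factors."
    )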

thr0waway001 1 day ago |

AI reminds me of listening to one of those people on YouTube who seem like an intellectual authority on multiple subjects and are not afraid to wax confidently on any topic. They seem very intelligent and knowledgeable until they talk about something you actually know.

In other words, I try to learn from it whenever it does something I can't do, but when it does something I can do, or something I'm really good at, I find myself wanting to correct it because it doesn't do it that well.

It just seems like a really quick-thinking, fast-executing but, ultimately, mid-skilled/novice person.

HarHarVeryFunny 1 day ago |

> Across studies, participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3

So the smart get smarter and the dumb get dumber?

Well, not exactly, but at least for now, with AI being "highly jagged" and unreliable, it pays to know enough NOT to trust it, and indeed to be mentally capable enough that you don't need to surrender to it and can spot the failures.

I think the potential problems come later, when AI is more capable/reliable, and even the intelligentsia perhaps stop questioning its output and stop exercising/developing their own reasoning skills. Maybe AI accelerates us toward some version of "Idiocracy", where human intelligence is even less relevant to evolutionary success (i.e. having/supporting lots of kids) than it is today, and gets bred out of the human species? Maybe this is the inevitable trajectory: a species gets smarter when it develops language and tool creation, then peaks, and gets dumber after having created tools that do the thinking for it?

Pre-AI, a long time ago, I used to think/joke that we might go in the other direction and evolve into a pulsating brain with eyes, genitalia, and vestigial limbs as mental work took over from physical, but maybe I got that reversed!

seu about 13 hours ago |

There's a very interesting critique of Kahneman's "Thinking, Fast and Slow" by the German psychologist Gerd Gigerenzer: https://www.researchgate.net/publication/397923694_The_Legac...

I suggest that everyone interested in how these theories emerge, and how the social sciences work, give it a read. It also more or less dismantles the whole idea of System 1 and System 2, which I guess would call the theoretical foundations of this paper into question too.

kikkupico 1 day ago |

Contrary to the general opinion, I feel that AI has IMPROVED my cognitive skills. I find myself discovering solutions to problems I've always struggled with (without asking AI about it, of course). I also find myself becoming much better at thinking on my feet during regular conversations. I believe I'm spending more time deep thinking than ever before because I can leave the boring cognitive stuff to AI, and that's giving my mind tougher workouts and making it stronger; but I could be completely wrong.

vicchenai 1 day ago |

I've noticed this in my own work with financial data. I used to manually sanity-check numbers from SEC filings and catch weird stuff all the time. Started leaning on LLMs to parse them faster and realized after a few weeks I was just... accepting whatever came back without thinking about it. Had to consciously force myself to go back to spot-checking.

The "System 3" framing is interesting but I think what's really happening is more like cognitive autopilot. We're not gaining a new reasoning system, we're just offloading the old ones and not noticing.

woopsn 1 day ago |

In the technophile's future, people aren't just getting dumber, not wanting to think, or forgetting how; they aren't allowed to think. Maybe about anything. It's too big a liability, costs too much to support, and moreover detracts from the product. Like Sam A telling those Indian students they aren't worth the energy and water. That's what we're dealing with.

Ozzie_osman 1 day ago |

When humans have an easy way to do something that is almost as good, we choose that easy way. Call it laziness, energy conservation, coddling, etc. The hard thing then becomes hard to do even when the easy thing isn't available, because the cognitive muscle and the discipline atrophy.

Like kids who are never taught to do things for themselves.

alexchengyuli about 12 hours ago |

The paper puts AI next to System 1 and System 2, but those are ways you think. With AI, the thinking still happens; you just can't see or control it anymore.

When you googled something and got five contradictory results, that told you the question was hard. A clean AI answer doesn't give you that signal. Coherence looks the same whether the answer is right or wrong.

The failure mode didn't get worse. It got quieter.

gmuslera 1 day ago |

The main problem with "System 3" is that it has its own kind of "cognitive biases", like System 1, but these new cognitive biases are shaped by marketing, politics, culture, and whatever censors or surfaces parts of the original training. And that's even if the process, the processing, and everything else around it were perfect (which they are not, e.g. hallucinations).

But we still have System 1, and we survived and reached this stage because of it: even a bad guess is sometimes better than the slowness of doing things right. It has its problems, but sometimes you must reach a compromise.

meander_water about 24 hours ago |

I'm conflicted about this. As I was reading the paper, my AI detector senses were tingling all over the place.

Large parts of the paper score a very high probability in GPTZero of having been written entirely by AI.

I'm not sure if I could trust anything written in it.

tim333 about 10 hours ago |

I'm not sure, but is this saying that people were given a task and the option to consult an AI, and that when they did, they were influenced by its answer?

Which is kind of "duh"? Of course. They have some cool language, like calling the AI "System 3" and calling taking its advice "cognitive surrender", but I'm not sure how this differs from asking your mate Bob and taking his advice.

psybrg-prtcls about 17 hours ago |

Anyone else get the distinct impression that parts of this paper were written by AI?

nasretdinov 1 day ago |

I mean... I don't really check calculations made by a computer (e.g. by my own programs) all that often either, and I think I'm completely fine :). But I guess the difference is that we basically know how computers work, and that they're generally super accurate and make mistakes incredibly rarely. The "AI" (though I disagree with the "I" part) is wrong incredibly often, and I don't think people appreciate that the difference from the "traditional" approach isn't just significant, it's astronomical: LLMs make things up at least 5% of the time, whereas CPUs make mistakes maybe (10^-12)% of the time or less. That's 12 orders of magnitude or so.
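
A quick back-of-the-envelope check of those two figures (both rates are my rough guesses from above, not measured numbers):

    import math

    llm_error_rate = 0.05           # "at least 5% of the time"
    cpu_error_rate = 1e-12 / 100    # "(10^-12)%" expressed as a fraction

    ratio = llm_error_rate / cpu_error_rate   # 5e12
    print(math.log10(ratio))                  # ~12.7 orders of magnitude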

danilor 1 day ago |

I couldn't figure out whether this was published in a journal, or only on a preprint server.

Null-Set about 17 hours ago |

The original research around thinking fast and slow (a.k.a. System 1 / System 2 thinking) failed to replicate when researchers tried.

pink_eye 1 day ago |

Can it design and implement a plutonium electric fuel cell with a 24,000-year half-life? We have yet to witness it. Can it automate farming and agriculture? These are the real questions. #Born-Crusty

johnnymonster 1 day ago |

Blocking access to a site because you don't enable JavaScript is diabolical.

andai 1 day ago |

Damn. I came up with a hypothetical "System 3" last year! I didn't find AI very helpful in that regard though.

Current status: partially solved.

Problem: System 2 is supposed to be rational, but I found this to be far from the case. Massive unnecessary suffering.

Solution (WIP): Ask: What is the goal? What are my assumptions? Is there anything I am missing?

--

So, I repeatedly found myself getting into lots of trouble due to unquestioned assumptions; System 2 is supposed to be rational, but, as I said, I found this far from the case.

So I tried inventing an "actually rational system" that I could "operate manually", or with a little help. I called it System 3, a system where you use a Thinking Tool to help you think more effectively.

My initial attempt was a "rational LLM prompt", but these mostly devolved into unhelpful nitpicking. (Maybe it's solvable, but I didn't get very far.)

Then I realized: wouldn't you get better results with a bunch of questions on pen and paper? Guided writing exercises?

So here are my attempts so far:

reflect.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...

unstuck.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...
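
(For flavor, here's a guessed-at minimal sketch of what a guided-questions script like reflect.py could look like. The actual gists above may be nothing like this; the questions are just the three from my list above.)

    # Hypothetical sketch of a guided-writing "Thinking Tool"; the real
    # reflect.py in the gist above may look nothing like this.
    QUESTIONS = [
        "What is the goal?",
        "What are my assumptions?",
        "Is there anything I am missing?",
    ]

    def reflect():
        # Ask each question, then echo the answers back for review.
        answers = [(q, input(q + "\n> ")) for q in QUESTIONS]
        print("\n--- Your reflection ---")
        for q, a in answers:
            print(f"{q}\n  {a}\n")

    if __name__ == "__main__":
        reflect()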

--

I'm not sure what's a good way to get yourself "out of a rut" in terms of thinking about a problem. It seems like the longer you've thought about it, the less likely you are to explore beyond the confines of the "known" (i.e. your probably dodgy/incomplete assumptions).

I haven't solved System 3 yet, but a few months later I found myself in an even more harrowing situation, one that could have been avoided if I'd had a System 3.

The solution turned out to be trivial, but I missed it for weeks... In this case, I had incorrectly named the project, and thus doomed it to limbo. Turns out naming things is just as important in real life as it is in programming!

So I joked "if being pedantic didn't solve the problem, you weren't being pedantic enough." But it's not a joke! It's about clear thinking. (The negative aspect of pedantry is inappropriate communication. But the positive aspect is "seeing the situation clearly", which is obviously the part you want to keep!)

bjourne 1 day ago |

"Time pressure (Study 2) and per-item incentives and feedback (Study 3) shifted baseline performance but did not eliminate this pattern: when accurate, AI buffered time-pressure costs and amplified incentive gains; when faulty, it consistently reduced accuracy regardless of situational moderators."

I LOLed.

deevelton 1 day ago |

I've been curious what it could look like (and whether it might be an interesting new type of "post" people make) if readers could see the human prompts, pivots, and steering of the LLM inline within the final polished AI output.