Hacker News

Why AI Doesn't Think: We Need to Stop Calling It "Cognition" (https://docs.google.com)

16 points by m_Anachronism about 15 hours ago | 13 comments

kylecazar about 14 hours ago |

I agree with what's written, and I've been talking about the harm seemingly innocuous anthropomorphization does for a while.

If you do correct someone (a layperson) and say "it's not thinking", they'll usually reply "sure but you know what I mean". And then, eventually, they will say something that indicates they're actually not sure that it isn't thinking. They'll compliment it on a response or ask it questions about itself, as if it were a person.

It won't take, because the providers want to use these words. But different terms would benefit everyone. A lot of ink has been spilled on how closely LLMs approximate human thought, and maybe if we had never called it 'thought' to begin with it wouldn't have been such a distraction from what they are -- useful.

kelseyfrog about 11 hours ago |

> "Cognition" has a meaning. It's not vague. In psychology, neuroscience, and philosophy of mind, cognition refers to mental processes in organisms with nervous systems.

Except if you actually look up the definitions, they don't mention "organisms with nervous systems" at all. Curious.

plutodev about 7 hours ago |

This framing makes sense. What we call “AI thinking” is really large-scale, non-sentient computation—matrix ops and inference, not cognition. Once you see that, progress is less about “intelligence” and more about access to compute. I’ve run training and batch inference on decentralized GPU aggregators (io.net, Akash) precisely because they treat models as workloads, not minds. You trade polished orchestration and SLAs for cheaper, permissionless access to H100s/A100s, which works well for fault-tolerant jobs. Full disclosure: I’m part of io.net’s astronaut program.
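
To make the "matrix ops and inference" point concrete, here's a toy sketch (made-up shapes and random weights, no particular model implied): a single decode step is a handful of matrix products, a softmax, and an argmax.

    import numpy as np

    # Toy decode step: made-up shapes, random weights, no particular model.
    d, vocab = 64, 1000
    rng = np.random.default_rng(0)
    W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
    W_out = rng.standard_normal((d, vocab))

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def decode_step(hidden):                       # hidden: (seq_len, d) activations
        q, k, v = hidden @ W_q, hidden @ W_k, hidden @ W_v
        attn = softmax(q @ k.T / np.sqrt(d)) @ v   # attention = matmuls + a softmax
        logits = attn[-1] @ W_out                  # project last position onto the vocab
        return int(np.argmax(logits))              # the "thought" is an argmax over logits

    next_token = decode_step(rng.standard_normal((8, d)))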

Kim_Bruning about 4 hours ago |

You know, I bet Claude encouraged you to post here and share with people. Because Claude Opus 4.5 has been trained to be kind. It's a long story, but since you admitted to using it/them, I'm going to give you a lot more credit than normal. Also because you can plug what I say right back into Claude and see what else comes out!

So you're stumbling onto a position that's closest to "Biological Naturalism", which is Searle's philosophy. However, lots of people disagree with him, saying he's a closeted dualist in denial.

I mean, he was a product of his time; the early '80s were dominated by symbolic AI, and that definitely wasn't working so well. Despite that, he got a lot of pushback from Dennett and Hofstadter even back then.

Chalmers these days takes a more cautious approach, while his student Amanda Askell is present in our conversation even if you haven't realized it yet. ;-)

Meanwhile the poor field of biology is feeling rather left out of this conversation, having been quite steadfastly monist since the late 19th century, when it rejected vitalism in favor of mechanism (though the last dualists died out around the 1950s?).

And somewhere in our world's oceans, two sailors might be arguing whether or not a submarine can swim. On board a Los Angeles-class SSN making way at 35 kts at -1,000 feet.

metalman about 8 hours ago |

Why? There is no "why" to something that is not possible. There is zero evidence that AI has achieved slow, crawling, bug-level abilities to navigate even a simplified version of reality; if it had, there would already be a massive shift in a wide variety of low-level, unskilled human labour and tasks. Though if things keep going the way they are, we will see a new body dysmorphia, where people will be wanting more fingers.

donutquine about 14 hours ago |

An article about AI "cognition", written by an LLM. You're kidding.