189 points by Anon84 1 day ago | 110 comments
bsenftner about 12 hours ago |
I like to think of them as idiot savants, only exponentially more savant than your typical fictional idiot savant. They pivot on every word you use, each word in your series activating areas of training knowledge, until your prompt completes and the LLM is logically located at some biased perspective of the topic you seek (assuming your wording was not vague or leaning on implied references). Few seem to realize there is no single "one topic" for each topic an LLM knows; there are numerous perspectives on every topic. Those perspectives reflect the reasons different people and groups use that topic, and their technical seriousness within it. How you word your prompt dictates which of these perspectives your answer is ultimately generated from.
When people say their use of AI reflects a mid-level understanding of whatever they prompted, that is because the prompt is worded in the language used by people with a mid-level understanding. If you want the LLM to respond with expert guidance, you have to prompt it using the same language and terms the expert you want would use. That is how you activate that area of its training and get a response generated from it.
This goes further with coding AI. If your code has the structure of a mid-level developer's, that creates a strong preference for mid-level developer guidance, because that is what is relevant to your code structure. It takes a well-written prompt using PhD/professorial computer-science terminology to work with a mid-level code base and still get advice that would improve that code beyond its mid-level architecture.
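A toy illustration of the register effect this comment describes: the same question wrapped in casual versus expert vocabulary. The prompt texts and function name here are invented for illustration; no real LLM API is involved, this only shows how the framing differs.

```python
# Illustrative only: two framings of the same question. The claim in the
# comment is that the expert framing steers the model toward a different
# "perspective" of the topic. Wording below is a made-up example.

def frame_prompt(question: str, register: str) -> str:
    """Wrap a question in either casual or expert-register vocabulary."""
    if register == "casual":
        return f"Hey, quick question: {question} Keep it simple."
    if register == "expert":
        return (
            "Answer as a senior engineer reviewing production code. "
            "Discuss invariants, asymptotic complexity, and failure modes. "
            f"Question: {question}"
        )
    raise ValueError(f"unknown register: {register}")

question = "Why does my hash map slow down as it grows?"
casual = frame_prompt(question, "casual")
expert = frame_prompt(question, "expert")
```

Same underlying question, but the expert framing is loaded with terms ("invariants", "failure modes") that, per the comment's argument, activate a different region of the model's training distribution.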
thr0waway001 1 day ago |
In other words, I try to learn from it whenever it does something I can't do, but when it does something I can do, or something I'm really good at, I find myself wanting to correct it because it doesn't do it that well.
It just seems like a really quick-thinking, fast-executing but, ultimately, mid-skilled/novice person.
HarHarVeryFunny 1 day ago |
So the smart get smarter and the dumb get dumber?
Well, not exactly, but at least for now, with AI highly "jagged" and unreliable, it pays to know enough NOT to trust it, and indeed to be mentally capable enough that you don't need to surrender to it and can spot the failures.
I think the potential problems come later, when AI is more capable/reliable, and even the intelligentsia perhaps stop questioning its output, and stop exercising/developing their own reasoning skills. Maybe AI accelerates us towards some version of "Idiocracy" where human intelligence is even less relevant to evolutionary success (i.e. having/supporting lots of kids) than it is today, and gets bred out of the human species? Maybe this is the inevitable trajectory: a species gets smarter when it develops language and tool creation, then peaks, and gets dumber after having created tools that do the thinking for it?
Pre-AI, a long time ago, I used to think/joke we might go in the other direction and evolve into a pulsating brain with eyes, genitalia, and vestigial limbs as mental work took over from physical, but maybe I got that reversed!
seu about 13 hours ago |
I suggest everyone interested in learning how these theories emerge, and how the social sciences work, give it a read. It also somewhat dismantles the whole idea of System 1 and System 2, which I guess would then call into question the theoretical foundations of this paper too.
vicchenai 1 day ago |
The "System 3" framing is interesting but I think what's really happening is more like cognitive autopilot. We're not gaining a new reasoning system, we're just offloading the old ones and not noticing.
Ozzie_osman 1 day ago |
Like kids who are never taught to do things for themselves.
alexchengyuli about 12 hours ago |
When you googled something and got five contradictory results, that told you the question was hard. A clean AI answer doesn't give you that signal. Coherence looks the same whether the answer is right or wrong.
The failure mode didn't get worse. It got quieter.
gmuslera 1 day ago |
But we still have System 1, and we survived and reached this stage because of it, because even a bad guess is better than the slowness of doing things right. It has its problems, but sometimes you must reach a compromise.
meander_water about 24 hours ago |
Large parts of the paper score a very high probability of being written entirely by AI according to GPTZero.
I'm not sure I could trust anything written in it.
tim333 about 10 hours ago |
Which is kind of "duh"? Of course. They have some cool language, like calling the AI "System 3" and calling taking its advice "cognitive surrender", but I'm not sure how this differs from asking your mate Bob and taking his advice.
andai 1 day ago |
Current status: partially solved.
Problem: System 2 is supposed to be rational, but I found this to be far from the case. Massive unnecessary suffering.
Solution (WIP): Ask: What is the goal? What are my assumptions? Is there anything I am missing?
--
So: I repeatedly found myself getting into lots of trouble due to unquestioned assumptions. System 2 is supposed to be rational, but in my experience it is far from it.
So I tried inventing an "actually rational system" that I could "operate manually", or with a little help. I called it System 3: a system where you use a thinking tool to help you think more effectively.
My initial attempt was a "rational LLM prompt", but these mostly devolve into unhelpful nitpicking. (Maybe it's solvable, but I didn't get very far.)
Then I realized: wouldn't you get better results with a bunch of questions on pen and paper? Guided writing exercises?
So here are my attempts so far:
reflect.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...
unstuck.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...
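A minimal sketch of what a reflect.py-style script might look like, built only from the three questions listed in the comment above. The actual gists may differ; the function name and output format here are assumptions for illustration.

```python
# Hypothetical sketch of a guided writing exercise in the spirit of
# reflect.py. The three questions come from the comment; everything
# else (reflect(), the session-notes format) is invented.

QUESTIONS = [
    "What is the goal?",
    "What are my assumptions?",
    "Is there anything I am missing?",
]

def reflect(answers):
    """Pair each guiding question with its answer into session notes."""
    lines = []
    for question, answer in zip(QUESTIONS, answers):
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
        lines.append("")
    return "\n".join(lines).rstrip()

if __name__ == "__main__":
    # Interactive use: answer each question in turn, then review.
    responses = [input(q + "\n> ") for q in QUESTIONS]
    print("\n--- Session notes ---")
    print(reflect(responses))
```

The point of the tool is the forced pause: writing an answer to each question before acting, rather than anything the code itself computes.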
--
I'm not sure what a good way is to get yourself "out of a rut" when thinking about a problem. It seems the longer you've thought about it, the less likely you are to explore beyond the confines of the "known" (i.e. your probably dodgy/incomplete assumptions).
I haven't solved System 3 yet, but a few months later found myself in an even more harrowing situation which could have been avoided if I had a System 3.
The solution turned out to be trivial, but I missed it for weeks... In this case, I had incorrectly named the project, and thus doomed it to limbo. Turns out naming things is just as important in real life as it is in programming!
So I joked, "if being pedantic didn't solve the problem, you weren't being pedantic enough." But it's not a joke! It's about clear thinking. (The negative aspect of pedantry is inappropriate communication; the positive aspect is seeing the situation clearly, which is obviously the part you want to keep!)
bjourne 1 day ago |
I LOLed.