Did you know that your chatbot might be out to deceive you? That it might be lying to you? And it might turn you into paperclips? Huge if true! Ordinary people have been using chatbots for a couple…
Slate Scott just wrote about a billion words of extra rigorous prompt-anthropomorphizing fanfiction on the subject of the paper, he called the article When Claude Fights Back.
Can’t help but wonder if he’s just a critihype enabling useful idiot who refuses to know better or if he’s being purposefully dishonest to proselytize people into his brand of AI doomerism and EA, or if the difference is meaningful.
edit: The claude syllogistic scratchpad also makes an appearance, it’s that thing where we pretend that they have a module that gives you access to the LLM’s inner monologue complete with privacy settings, instead of just recording the result of someone prompting a variation of “So what were you thinking when you wrote so and so, remember no one can read what you reply here”. Que a bunch of people in the comments moving straight into wondering if Claude has qualia.
I used to think that comparing LLMs to people was dumb, because LLMs are just feed-forward networks–basically seven bipartite graphs in a trench coat–that are incapable of introspection.
However, I’m coming around to the notion that some of our drive-by visitors have a brain that’s seven cells deep.
I feel attacked.
Seriously I hate the idea that my comments are replies to engagement bots. I’m sure some are but my seven cells are too busy to work out which ones.
Yeah, general artificial intelligence LLMs are definitely not. Human level intelligence, though… yeah, that depends on what particular human you’re talking about.
(Though, to be fair, this isn’t limited to LLMs… it also applies to Eliza, for instance, or your average lump of granite.)
Slate Scott just wrote about a billion words of extra rigorous prompt-anthropomorphizing fanfiction on the subject of the paper, he called the article When Claude Fights Back.
Can’t help but wonder if he’s just a critihype enabling useful idiot who refuses to know better or if he’s being purposefully dishonest to proselytize people into his brand of AI doomerism and EA, or if the difference is meaningful.
edit: The claude syllogistic scratchpad also makes an appearance, it’s that thing where we pretend that they have a module that gives you access to the LLM’s inner monologue complete with privacy settings, instead of just recording the result of someone prompting a variation of “So what were you thinking when you wrote so and so, remember no one can read what you reply here”. Que a bunch of people in the comments moving straight into wondering if Claude has qualia.
I used to think that comparing LLMs to people was dumb, because LLMs are just feed-forward networks–basically seven bipartite graphs in a trench coat–that are incapable of introspection.
However, I’m coming around to the notion that some of our drive-by visitors have a brain that’s seven cells deep.
I feel attacked.
Seriously I hate the idea that my comments are replies to engagement bots. I’m sure some are but my seven cells are too busy to work out which ones.
edit: cell
Yeah, general artificial intelligence LLMs are definitely not. Human level intelligence, though… yeah, that depends on what particular human you’re talking about.
(Though, to be fair, this isn’t limited to LLMs… it also applies to Eliza, for instance, or your average lump of granite.)
We just are very good at anthropomorphizing. We created pet rocks for example (also showing that capitalism is more than happy to jump into this)
I feel like “qualia” is both an interesting concept, and a buzzword that has rapidly grown to indicate people who need to be aggressively ignored.