Maybe false vacuum decay is happening all the time but we don’t notice because it’s a form of quantum suicide.
Turns out the attacker was an Elon fan. Maybe he should resign himself.
Do you think I’m calling you deranged in the first comment? Honestly? That’s your take from what I’ve written?
On the off chance that you’re serious, let me explain it to you.
If someone reads about cah doing something and their reaction is “well, I guess that’s normal now”, they are deranged.
You can’t account for deranged people.
We should not take into account what these people think.
For all non-deranged people this draws attention to and de-normalises the practice. I’m not the first person in your own replies to explain this to you.
Neither of these is a claim about what you said? The second one isn’t even a claim? I was and am genuinely confused as to what you’re trying to say?
“You keep claiming I said things I never said”
That’s… literally not something I did. You’re literally claiming I said things I didn’t by saying I claim you said things you didn’t. I never claimed anything about what you said.
So are you saying cah can’t do this because it may be misinterpreted by utterly deranged people? Should we just give up then? Anything can be misinterpreted if the interpreter is deranged enough.
Also, I don’t know what you mean by “a rising elevator hasn’t already risen”, but such an elevator would experience infinite jolt and would thus be physically impossible, except maybe if the elevator were a photon or something.
“It’s normal because cards against humanity did it”
Statements dreamed up by the utterly deranged
No we have a better plan
I am sceptical of this thought experiment as it seems to imply that what goes on within the human brain is not computable. For reference: every single physical effect that we have thus far discovered can be computed/simulated on a Turing machine.
The argument itself is also riddled with vagueness and handwaving: it gives no definition of understanding, yet presumes understanding is something with a definite location, and it may well be that actually running the program inevitably produces understanding of Chinese by the time even the first word is returned. Remember: executing these instructions could take billions of years for the presumably immortal human in the room, and we expect that human to be so thorough that they execute each of the trillions of instructions without error.
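To make the computability point concrete: the person in the room is doing nothing more than mechanical rule-following, exactly what a Turing machine does. Here is a minimal sketch (my own illustration, not part of the original argument) of such a machine, one that increments a binary number by applying one lookup-table rule at a time:

```python
# Minimal Turing machine simulator: the "room" is this loop, and the
# person inside is the `while` body, executing one rule per step.
def run(tape, rules, state="start", blank="_"):
    """Apply transition rules until the machine halts; return the tape."""
    tape = list(tape)
    pos = len(tape) - 1                    # start at the least significant bit
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if pos < 0:
            tape.insert(0, write)          # grow tape to the left
            pos = 0
        elif pos >= len(tape):
            tape.append(write)             # grow tape to the right
        else:
            tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Rules for binary increment: flip trailing 1s to 0s, first 0 (or blank) to 1.
rules = {
    ("start", "1"): ("0", "L", "start"),   # carry propagates left
    ("start", "0"): ("1", "R", "halt"),    # absorb the carry
    ("start", "_"): ("1", "R", "halt"),    # ran off the left edge: new digit
}

print(run("1011", rules))  # prints 1100, i.e. 11 + 1 = 12
```

Nothing in these rules “understands” binary arithmetic, yet the arithmetic gets done; the Chinese room question is whether scaling this up ever yields understanding, and the argument simply assumes it cannot.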
Indeed, the Turing test is insufficient to test for intelligence, but the statement that the Chinese room argument tries to support is much, much stronger than that. It essentially argues that computers can’t be intelligent at all.