AI chatbot deception paper suggests that some people (and bots) aren't very persuasive.
And like GPT-3.5, GPT-4 has also been conditioned not to present itself as human.
> A forced-choice test where an Interrogator interacts with an AI and a human and picks which one is more likely to be human would probably (hopefully!) have a much higher accuracy rate.

Agreed, that would help, but it wouldn't remove the trolling factor of "human witnesses... pretending to be an AI". Maybe some sort of reward for successfully convincing the interrogator that you are human would reduce that.
> Second, ELIZA does not exhibit the kind of cues that interrogators have come to associate with assistant LLMs, such as being helpful, friendly, and verbose.

What an interesting way to spell inane, vacuous bullshit.
So where would Mark Zuckerberg rank on that scoreboard?
I have difficulty imagining not being able to pick Eliza out of a lineup. I played with it extensively in the 1980s, and even as a lonely child desperately WANTING computer intelligence to be real (and, ideally, to be my friend), it was so painfully obvious that Eliza was just a static stochastic parrot.

It is remarkable that over a quarter of humans didn't successfully identify other humans!
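For anyone who never ran it: ELIZA's whole trick is keyword matching plus canned "reflection" templates, which is why it is so easy to pick out of a lineup. Here is a minimal sketch of the idea; the rules below are invented for illustration, not Weizenbaum's actual DOCTOR script, which was larger but worked on the same principle:

```python
import re

# Toy ELIZA-style rules: (keyword pattern, canned response template).
# These three rules are made up for the sketch; the original DOCTOR
# script had dozens, plus ranked keywords and memory tricks.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)\b", re.I), "Tell me more about your {0}."),
]
FALLBACK = "How does that make you feel?"

def eliza_reply(utterance: str) -> str:
    """Return the canned template for the first matching keyword rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK  # no keyword matched, so deflect

print(eliza_reply("I am sure a computer can be my friend"))
# -> "How long have you been sure a computer can be my friend?"
print(eliza_reply("I think you ought to know I'm feeling very depressed."))
# -> "How does that make you feel?" (no toy rule matches, so it deflects)
```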
A forced-choice test where an Interrogator interacts with an AI and a human and picks which one is more likely to be human would probably (hopefully!) have a much higher accuracy rate.
> Llama 2, explain the benefits of using LLMs to generate more eloquent language in business scenarios.

Okay: inane, vacuous, AND VERBOSE bullshit.
I think you ought to know I'm feeling very depressed.

How does that make you feel?
> I think you ought to know I'm feeling very depressed.

Look, just bring them to the bridge, OK?
What does this prove? Do you guys understand how an LLM works? It only generates approximations; it is not referencing a source of truth when you run inference.

Llama 2, explain why an LLM is not Turing complete.
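To put that claim in concrete terms: inference is, roughly, repeated sampling from a next-token probability distribution, and nothing in that loop consults a source of truth. A toy sketch, with an invented four-token vocabulary and made-up probabilities standing in for a real model's softmax output:

```python
import random

# Hypothetical next-token distribution for the prefix
# "The capital of France is". In a real LLM these numbers come from a
# softmax over logits; here they are invented for illustration.
next_token_probs = {
    "Paris": 0.82,
    "Lyon": 0.09,
    "France": 0.06,
    "banana": 0.03,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token by weight; nothing here looks anything up or checks facts."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Usually "Paris", occasionally something else: the model approximates
# its training distribution rather than citing a record of truth.
print(sample_next_token(next_token_probs))
```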
"Because... you... are... also... a tortoise.""The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't. Not without your help. But you're not helping."
> Two things: first, "More human than human" sounds like a good motto for an android company.

Or the title of a heavy metal song.
> What does this prove? Do you guys understand how an LLM works? It only generates approximations; it is not referencing a source of truth when you run inference.
> Llama 2, explain why an LLM is not Turing complete.

The absolute only thing that “Turing test” and “Turing complete” have to do with each other is Alan Turing. You asking that question makes it clear you have no clue.
> Ah, ELIZA. Used to run it through the Automatic Mouth synthesizer for party entertainment. It was like getting therapy from a Conehead.

Ha, same! My dad built a SAM card for our Apple ][ and it was endless fun.
> The Turing test is really widely misunderstood. [...] If a machine behaves exactly like a human, then by definition it's as intelligent as a human. Somewhere along the way, people started misunderstanding it [...] as in this case, people think intelligence is defined as the ability to fool humans into thinking you're intelligent.

I'm not sure the idea here is to ascribe intelligence to anything that can fool a human into believing it's human. It's using humans to determine whether a machine is behaving enough like a human that they can't tell the difference, in which case, according to Turing, it must be intelligent.
> They are using a test that these models are specifically trained to not pass, and then prompt hacking them, and then publishing findings on their inadequacies relative to humans. It makes no sense.

Character.ai is building a business out of making LLMs chat like humans, and with GPTs, OpenAI looks like it's trying to compete in that field too.
> It would make more sense to me to have these AI bots compete against humans in tasks they are trained to complete, such as customer service or travel agent roles.

That's also a valid research idea (and many companies are quietly testing out how many support workers they can augment or replace with bots).