AI chatbot deception paper suggests that some people (and bots) aren't very persuasive.
See full article...
[QUOTE]
Two things: first, "More human than human" sounds like a good motto for an android company.

Second: if tricking humans into thinking you're human by trying to act like a human is a thing, what's it mean when some humans try to trick humans into thinking they're not human by trying not to act like a human?
[/QUOTE]
Extending this line of thought, the best test for an intelligent AI isn't that it can reliably convince humans that it's a human, but whether it can reliably convince humans that it's not intelligent.
Hi, Eliza. I'm happy to see you posting comments.

How does that make you feel?
[QUOTE]
One of the interesting things about the Turing Test is that when computers were a long way off passing it, it was seen as a pretty good proxy indication of human-like intelligence. But now that we're seeing very capable LLMs, what the test can actually reveal is coming under increasing scrutiny.
[/QUOTE]
In the '80s ELIZA's high pass rate was considered evidence that most humans were not very computer literate.
[QUOTE]
Two things: first, "More human than human" sounds like a good motto for an android company.

Second: if tricking humans into thinking you're human by trying to act like a human is a thing, what's it mean when some humans try to trick humans into thinking they're not human by trying not to act like a human?
[/QUOTE]
Decades ago I read some sci-fi involving a Turing test. The narrator was applying for a job in a bureaucracy. He got the job because the interrogators could not decide whether he was AI or human.
[QUOTE]
No AI will be Turing complete until ChatGPT can drive a car into the ocean while riding on the roof impersonating Elvis holding a beer in each artificial hand and give no rational reason why it did so besides, "I did it because I thought it would make ELIZA think I'm cool."
[/QUOTE]
Turing complete and the Turing test are two very different things.
[QUOTE]
Again, you say they're different but you don't explain what makes them different or why one is not applicable.
[/QUOTE]
Google is a thing, you know. However, just for you: a Turing test is designed to see whether a machine can mimic human conversation. Turing complete refers to a system that can simulate a Turing machine, which is a universal computer, not a human.
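The distinction is easy to make concrete: Turing completeness is about computation, not conversation. A system is Turing complete if it can simulate any Turing machine. Here's a minimal sketch of such a simulation; the machine and its transition rules are invented purely for illustration:

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    transitions: {(state, symbol): (new_state, written_symbol, move)}
    where move is -1 (left) or +1 (right). Halts when no rule applies.
    """
    tape = dict(enumerate(tape))  # sparse tape, indexed by cell position
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine halts
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A two-rule machine that inverts a binary string, then halts on the blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_tm(flip, "10110"))  # -> 01001
```

Nothing here involves mimicking a human, which is the point: a language can be Turing complete while being hopeless at conversation, and a chatbot can be great at conversation without telling you anything about Turing completeness.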
[QUOTE]
Honestly, misspelling a word or multiple words would probably be a simple way to prove you are human. Or perform some basic math/logic problems.
[/QUOTE]
"Can you withstand a powerful magnetic field?"
[QUOTE]
It is remarkable that over a quarter of humans didn't successfully identify other humans!
[/QUOTE]
This is what I found most remarkable, too.
A forced-choice test, where an interrogator interacts with an AI and a human and picks which one is more likely to be human, would probably (hopefully!) have a much higher accuracy rate.
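The intuition can be checked with a toy simulation. Assume (purely as an illustration, not from the study) that each interrogation leaves the judge with a noisy "humanness" score, with humans scoring slightly higher on average. Judging each witness independently against a threshold is then less accurate than seeing both and picking the higher scorer:

```python
import random

random.seed(0)

N = 100_000
THRESH = 0.55  # hypothetical decision threshold for the single-interview test

single_correct = 0
forced_correct = 0
for _ in range(N):
    # Hypothetical "humanness" scores a judge assigns after an interview.
    human_score = random.gauss(0.60, 0.20)
    ai_score = random.gauss(0.50, 0.20)

    # Single-interview test: judge each witness independently.
    if human_score > THRESH:
        single_correct += 1
    if ai_score <= THRESH:
        single_correct += 1

    # Forced choice: see both, call the higher-scoring one human.
    if human_score > ai_score:
        forced_correct += 1

single_acc = single_correct / (2 * N)
forced_acc = forced_correct / N
print(f"single-judgment accuracy: {single_acc:.3f}")
print(f"forced-choice accuracy:   {forced_acc:.3f}")
```

With these made-up parameters the forced-choice judge comes out a few points ahead, because the comparison cancels out each judge's individual bias about where the "human" threshold sits.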
[QUOTE]
The absolute only thing that “Turing test” and “Turing complete” have to do with each other is Alan Turing. You asking that question makes clear you have no clue.
[/QUOTE]
Yeah, what a bizarre thing to say.
[QUOTE="Northbynorth, post: 42400297, member: 537243"]
I am not surprised.
Most human conversations or decisions do not seem driven by a superior intelligence. More routine patterns mixed with spontaneous jumps.
ELIZA's conservative approach very well mimics a reserved human. The test may reflect more our own lacking ability to pinpoint how an intelligent being would speak.
[/QUOTE]
I can give an anecdote, but it is ancient.
I'm very curious what the limits on conversational length should look like. Informally, at least, I would expect the rate of correctly identifying another human, or correctly rejecting A.I. as A.I., to approach 100%, limited only by how long the interrogation lasts.
Is that assumption incorrect, I wonder? And if not, how long before we get a seriously reliable ID rate?
The tragedy here is that this is immaterial to most adversarial uses of A.I. in the near future. We already know these models can fool people in short, limited domain interactions where people assume they are speaking with humans...and you would need an ungodly positive ID rate to prevent this, anyway, since most nefarious uses can be deployed at scale. 0.01% of a very large number is also a large number.
[QUOTE]
How does that make you feel?
[/QUOTE]
Well, I really think you should quit smoking.
researchers found that people thought AI-generated images of humans looked more real than actual humans
With the arrival of these LLM chatbots, MANY words no longer mean what they used to mean, or are badly in need of additional precision that our vocabulary is simply not ready for. Is there a word for SOUNDING helpful that doesn't even imply pretending or faking it? Is there a word for producing an explanation, based on statistical text associations, that makes sense without the kind of internalized knowledge a human would use, something that could replace "understanding"? We don't have any of that, and it's one of the factors that will hinder our collective ability to properly handle this phenomenon, IMO. If you don't have the right words, you probably also fail to truly understand what's going on, or how to assess and approach it. We risk interacting with something that isn't quite what we "feel" it is, and that's going to lead to unfortunate experiences.

What an interesting way to spell inane, vacuous bullshit.
[QUOTE]
Here I am, brain the size of a planet, and this “artificial intelligence” wants to know “how I feel.” Call that job satisfaction? I don't.
[/QUOTE]
Can I just give thanks that Alan Rickman was immortalized as Marvin's voice before leaving us? Because I can think of no better voice for the character.
[QUOTE]
What the fuck is going on?
[/QUOTE]
Seems to be a concise summation of your understanding of Turing test vs. Turing complete.
[QUOTE]
thought AI-generated images of humans looked more real than actual humans
[/QUOTE]
Two things: first, "More human than human" sounds like a good motto for an android company.

Second: if tricking humans into thinking you're human by trying to act like a human is a thing, what's it mean when some humans try to trick humans into thinking they're not human by trying not to act like a human?
[QUOTE]
Can I just give thanks that Alan Rickman was immortalized as Marvin's voice before leaving us? Because I can think of no better voice for the character.
[/QUOTE]
Well, Stephen Moore, the voice of Marvin in the original radio and TV series, did it pretty well, too...
[QUOTE]
It would make more sense to me to have these AI bots compete against humans in tasks they are trained to complete, such as customer service or travel agent roles.
[/QUOTE]
In this scenario there is a very high probability that nobody will pass the test.
[QUOTE]
Eliza truly couldn't fool people for very long, though, unless you poison the well by telling people that Eliza is a therapist...which is notably not part of the standard Turing test, lol.

Not even the most reserved human being will return nearly everything you say back as a question, or comment solely on how you feel. Human beings are notable, in part, because they reflexively impart value judgments and opinions into nearly everything they say in casual conversation...and they are also known for changing the subject much of the time.

Though granted, the DOCTOR script for the ELIZA program did quite purposefully talk about most people's favorite subject: them, theirs, and themselves. So it might take them a while to notice that it seemed to have no self of its own that it wanted to talk about.
[/QUOTE]
That's for sure, but I have met people who behaved like ELIZA for a pretty long while. Almost impossible to get them to talk about themselves, or answer a question.
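The reflect-it-back trick that DOCTOR relied on is simple to sketch: match a keyword pattern, swap first- and second-person words, and fill a canned template. The handful of rules below are invented for illustration and are nothing like the full scope of Weizenbaum's actual script:

```python
import re

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# (pattern, response template) pairs, tried in order; {0} is the reflected match.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
DEFAULT = "How does that make you feel?"

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    text = text.lower().strip(".!")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return DEFAULT  # the famous fallback when no keyword matches

print(respond("I am worried about my job."))
# -> How long have you been worried about your job?
```

The keyword templates keep the conversation pointed at the user, and the fallback deflects everything else, which is exactly why the program never has to volunteer a self of its own.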
Again, you say they're different but you don't explain what makes them different or why one is not applicable.
This is ridiculous. How can you claim to know the difference but you're not willing to go into details?
Are you for real? Is someone seriously paying you a wage to say this ridiculous shit?
What the fuck is going on?
[QUOTE]
Dr. Sbaitso unavailable for comment.
[/QUOTE]
Holy shit, I haven't thought of that in decades, and I immediately heard the Sound Blaster voice saying "Hello, I am Dr. Sbaitso." Why can I remember this but not my siblings' birthdays?
GPT-3.5, the base model behind the free version of ChatGPT, has been conditioned by OpenAI specifically not to present itself as a human, which may partially account for its poor performance.
The study's authors acknowledge its limitations, including potential sample bias from recruiting participants on social media and the lack of incentives, which may have led some people not to fulfill the desired role.
[QUOTE]
No AI will be Turing complete until ChatGPT can drive a car into the ocean while riding on the roof impersonating Elvis holding a beer in each artificial hand and give no rational reason why it did so besides, "I did it because I thought it would make ELIZA think I'm cool."
[/QUOTE]
Got my vote!
[QUOTE]
Two things: first, "More human than human" sounds like a good motto for an android company.

Second: if tricking humans into thinking you're human by trying to act like a human is a thing, what's it mean when some humans try to trick humans into thinking they're not human by trying not to act like a human?
[/QUOTE]
Twitter happens.