AI chatbot deception paper suggests that some people (and bots) aren't very persuasive.
See full article...
A simple thought experiment. Bottled knowledge it may be (that's a good description of current models), but it is being iterated toward producing more original output than earlier AIs. Example: an original joke that URL stands for Unusually Rowdy Lemurs, something that doesn't even exist on Google via a quoted search.
My guess is that he/she's just really, really fucking stupid and yet convinced of the opposite. But instead of thinking "maybe it's me", the earlyberd goes on the offensive... making me suspect it's just a troll.
True, but... we can't be Isaac Newton in an era of Einstein. LLMs only transform text. Your prompt is not an instruction you are giving to the model; it is just a vector for which the LLM tries to find the closest match in its training data.
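For what it's worth, here's a toy sketch of that "closest match" intuition, using made-up bag-of-words vectors in place of real embeddings. (Real transformers generate token by token rather than retrieving documents, so treat this as an illustration of the mental model, not the mechanism.)

```python
# Toy "prompt as a vector, find the closest match" sketch. The corpus and
# vectors are invented for illustration; this is NOT how transformers work.
from collections import Counter
import math

corpus = [
    "the cat sat on the mat",
    "newton discovered gravity",
    "einstein developed relativity",
]

def bag_of_words(text):
    """Represent text as a sparse word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

prompt = bag_of_words("who discovered gravity")
best = max(corpus, key=lambda doc: cosine(prompt, bag_of_words(doc)))
print(best)  # -> "newton discovered gravity"
```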
Stop trying to bullshit all of us with this nonsense. This was a bad test and you all should be ashamed of your collective poor grasp of how LLMs work. What a disgrace.
I just want to point out a time error in your post:
With regard to Sam Altman and the recent brouhaha, I think the board's problem was that they believe they will create AGI with the current approach, while he has more tempered expectations. E.g., he said this on a recent podcast:
SAM ALTMAN: I don’t think it needs that. But I wouldn’t say any of this stuff with certainty, like we’re deep into the unknown here. For me, a system that cannot go significantly add to the sum total of scientific knowledge we have access to, kind of discover, invent, whatever you want to call it, new fundamental science, is not a super intelligence. And to do that really well, I think we will need to expand on the GPT paradigm in pretty important ways that we’re still missing ideas for. But I don’t know what those ideas are. We’re trying to find them.
This, to me, displays his everyday, practical understanding that there's a huge difference between humans writing another internet's worth of text for his AI to train on (that would be great and would improve the AI), and the current AI generating another internet's worth of text for the next AI to train on (that would add nothing the original internet's worth of training text didn't already have).
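A toy illustration of that point (sometimes discussed under the name "model collapse"): if each generation trains only on samples of the previous generation's output, distinct information can be lost but never gained. The numbers below are arbitrary; this is an analogy, not a simulation of LLM training.

```python
import random

random.seed(0)
# Stand-in for the distinct "facts" contained in the original internet.
facts = list(range(1000))

for generation in range(6):
    print(f"gen {generation}: {len(set(facts))} distinct facts")
    # The "next internet" is just samples drawn from the previous
    # generation's output -- it can repeat facts, never invent new ones.
    facts = [random.choice(facts) for _ in range(1000)]
```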
Meanwhile, members of the board who tried to oust him fervently believe that this is nearing an actual AGI.
Indeed. ELIZA was very easy to force into a loop. If you kept changing the subject, it could seem more realistic, but it was very simple to derail once you dug more deeply and asked more direct questions. "You've got a little boy. He shows you his butterfly collection plus the killing jar."
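To show why, here's a stripped-down responder in the spirit of ELIZA; the rules are invented for illustration, not taken from the actual DOCTOR script. Once you go off-script, every direct question falls through to the same canned fallback, which is exactly the loop:

```python
# Minimal ELIZA-style responder: a fixed list of pattern -> canned-reply
# rules plus a single fallback. Rules are made up for this example.
import re

RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]
FALLBACK = "Please go on."  # what you get once you go off-script

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am sad"))                  # -> How long have you been sad?
print(respond("Show me your butterflies"))  # -> Please go on.
print(respond("What is the killing jar?"))  # -> Please go on.  (the loop)
```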
Er, Stephen Moore. Who was better. Can I just give thanks that Alan Rickman was immortalized as Marvin's voice before leaving us? Because I can think of no better voice for the character.
The bots were told to make some spelling and grammar errors. And looking at the results of the study, the most common reason people realized they were talking to an AI was that it was "too informal". Oops. Honestly, misspelling a word or several words would probably be a simple way to prove you are human. Or performing some basic math/logic problems.
Yes, among many other failings, the study was like "our results might be biased by male 20-somethings who spend too much time on the internet, have post-graduate educations, study LLMs, and were intentionally trolling each other..." This comment section is pretty wild!
There are a couple of other problems with the study, aside from what's been pointed out in the article and the comments. The main one is that the definition of "ordinary" in "ordinary judges" has changed, perhaps in irrevocable ways.