We are interacting with artificial intelligence (AI) online not only more than ever, but more than we realize, so researchers asked people to converse with one of four agents, either a human or one of three different AI models, to see whether they could tell the difference.
The "Turing test," first proposed as "the imitation game" by computer scientist Alan Turing in 1950, judges whether a machine can exhibit intelligence indistinguishable from that of a human. For a machine to pass the Turing test, it must be able to talk to somebody and fool them into thinking it is human.
Scientists recently recreated this test by asking 500 people to speak with one of four respondents: a human, the 1960s-era AI program ELIZA, or one of the models GPT-3.5 and GPT-4, the latter of which powers ChatGPT. The conversations lasted five minutes, after which participants had to say whether they believed they were talking to a human or an AI. In the study, posted May 9 to the preprint server arXiv, the scientists found that participants judged GPT-4 to be human 54% of the time.