Computer game bot Turing test


The computer game bot Turing test is a variant of the Turing test, where a human judge viewing and interacting with a virtual world must distinguish between other humans and video game bots, both interacting with the same virtual world. This variant was first proposed in 2008 by Associate Professor Philip Hingston[1][2] of Edith Cowan University, and implemented through a tournament called the 2K BotPrize.[3]

The UT^2 bot combats a human opponent in the game Unreal Tournament 2004 during the BotPrize.

History

The computer game bot Turing test was proposed to advance the fields of artificial intelligence (AI) and computational intelligence with respect to video games. A poorly implemented bot was considered a mark of a subpar game, so a bot capable of passing this test, and therefore indistinguishable from a human player, would directly improve the quality of a game. The test also served to debunk the flawed notion that "game AI is a solved problem."[2]

Emphasis is placed on a game bot that interacts with other players in a multiplayer environment. Unlike a bot that simply needs to make optimal human-like decisions to play or beat a game, this bot must make the same decisions while also convincing another in-game player of its human-likeness.[citation needed]

Implementation

The computer game bot Turing test was designed to assess a bot's ability to interact with a game environment as a human player would; simply 'winning' was insufficient. The idea evolved into a contest with several important goals in mind.[2]

In 2008, the first 2K BotPrize tournament took place.[4] The contest used the game Unreal Tournament 2004 as its platform. Contestants created their bots in advance using the GameBots interface. GameBots was modified to support these goals, for example by removing data about vantage points or weapon damage that would unfairly inform the bots of strengths and weaknesses a human would otherwise need to learn through play.[citation needed]
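The filtering described above can be illustrated with a minimal sketch. The field names below are purely hypothetical and do not reflect the actual GameBots message schema; the sketch only shows the general idea of stripping privileged game-state data before it reaches a bot.

```python
# Hypothetical sketch: strip fields that would give a bot knowledge a
# human player would have to learn through play. Field names are
# illustrative, not the real GameBots protocol.

PRIVILEGED_FIELDS = {"vantage_rating", "weapon_damage"}

def filter_observation(message: dict) -> dict:
    """Return a copy of a game-state message with privileged fields removed."""
    return {k: v for k, v in message.items() if k not in PRIVILEGED_FIELDS}

raw = {
    "position": (120.0, 45.5, 10.0),
    "visible_items": ["health_pack"],
    "vantage_rating": 0.9,   # privileged: built-in map knowledge
    "weapon_damage": 35,     # privileged: built-in balance data
}
filtered = filter_observation(raw)
```

After filtering, the bot receives only the position and visible items, so any knowledge of vantage points or weapon balance must be acquired the way a human acquires it.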

Tournament

The first BotPrize Tournament was held on 17 December 2008, as part of the 2008 IEEE Symposium on Computational Intelligence and Games in Australia.[4][5] Each competing team was given time to set up and adjust their bots to the modified game client, although no coding changes were allowed at that point. The tournament was run in rounds, each a 10-minute death match. Judges were the last to join the server, and every judge observed every player and every bot exactly once, although the pairings of players and bots changed between rounds. When the tournament ended, no bot had been rated as more human than any human player.[6][citation needed]
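One simple way to satisfy the judging constraint described above, assuming equal numbers of judges and contestants, is a round-robin rotation. This is an illustrative sketch only; the tournament's actual scheduling software is not publicly specified.

```python
# Minimal sketch of a rotation schedule: with n judges and n contestants,
# pairing judge i with contestant (i + r) % n in round r lets every judge
# observe every contestant exactly once over n rounds.

def judging_rounds(judges, contestants):
    """Return a list of rounds; each round maps every judge to one contestant."""
    n = len(contestants)
    assert len(judges) == n, "sketch assumes equal numbers of judges and contestants"
    return [
        {judge: contestants[(i + r) % n] for i, judge in enumerate(judges)}
        for r in range(n)
    ]

rounds = judging_rounds(["J1", "J2", "J3"], ["botA", "botB", "playerC"])
```

Over the three rounds, each judge meets each of the three contestants once, while the pairings differ from round to round, matching the structure reported for the 2008 event.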

In subsequent tournaments, run during 2009–2011,[7][8][9] bots achieved increasingly human-like scores, but no contestant won the BotPrize in any of these contests.

In 2012, the 2K BotPrize was held once again, and two teams programmed bots that achieved scores greater than those of human players.[3]

Successful bots

To date, two bots have passed the computer game bot Turing test, both at the 2012 BotPrize: UT^2, developed at the University of Texas at Austin,[10][11] and MirrorBot, developed by Mihai Polceanu.[3]

Aftermath

The significance of a bot appearing more human-like than a human player may be overstated: in the tournament in which the bots succeeded, the average 'humanness' rating of the human players was only 41.4%.[12] This exposes a limitation of this Turing test, since the results suggest that human behaviour is more complex and varied than the test accounted for.[13] In light of this, the BotPrize competition organizers planned to increase the difficulty in subsequent years with new challenges, forcing competitors to improve their bots.[14]

It is also believed that methods and techniques developed for the computer game bot Turing test will be useful in fields other than video games, such as virtual training environments and improving human–robot interaction.[15]

Contrasts to the Turing test

The computer game bot Turing test differs from the traditional or generic Turing test in a number of ways.[2]

See also

References

  1. ^ "Philip Hingston | Home".
  2. ^ a b c d Hingston, Philip (September 2009). "A Turing Test for Game Bots" (PDF). IEEE Transactions on Computational Intelligence and AI in Games. doi:10.1109/TCIAIG.2009.2032534. S2CID 13988179.
  3. ^ a b c http://botprize.org
  4. ^ a b "Botprize : 2008". Archived from the original on 2013-02-25. Retrieved 2013-02-03.
  5. ^ "2008 IEEE Symposium on Computational Intelligence and Games (CIG'08)".
  6. ^ Mer, Kold. "New Round-Up 6 SB".
  7. ^ "Botprize : 2009". Archived from the original on 2013-02-26. Retrieved 2013-02-03.
  8. ^ "Botprize : 2010". Archived from the original on 2012-12-30. Retrieved 2013-02-03.
  9. ^ "Botprize : 2011". Archived from the original on 2012-12-29. Retrieved 2013-02-03.
  10. ^ "NNRG Software - UT^2: Winning Botprize 2012 Entry".
  11. ^ Archived at Ghostarchive and the Wayback Machine: UT bot kills human judge. YouTube.
  12. ^ "Botprize 2012 : Result". Archived from the original on 2013-02-25. Retrieved 2013-02-04.
  13. ^ Dvorsky, George (October 1, 2012). "How did this game bot score higher than humans on a Turing Test?".
  14. ^ Quick, Darren (September 26, 2012). "More human than human: AI game bots pass Turing Test".
  15. ^ "Artificially Intelligent Game Bots Pass the Turing Test on Turing's Centenary". September 26, 2012.