New gaming app helps researchers understand human strategic thinking and may improve AI reasoning


Fans of Candy Crush Saga, Flow Free, or Minesweeper should check out a challenging new mobile game app, hexxed, that will stretch your brain while helping brain researchers understand human strategic thinking and possibly improve the reasoning of artificial intelligence.

The puzzle game was released this month on the Apple App Store and Google Play by neuroscientist Gautam Agarwal and colleagues at the University of California, Berkeley, and the Champalimaud Center for the Unknown in Lisbon, Portugal.

Those who download hexxed will face a more difficult game than most phone games, which can be played mindlessly while killing time or watching TV. Those who play hexxed have to learn how to succeed with almost no instruction – they have to figure out the rules on the fly.

“I really wanted to avoid biasing the player as much as possible, so in a way it goes against one of the basic principles of game design: using scenarios to orient the player,” Agarwal said.

By not providing any narrative about the rules or the goals, Agarwal’s team sought to make the human experience of the game resemble how artificial intelligence (AI) might approach it: blind, searching only for patterns that maximize the number of points earned.

That way, he said, he can compare human strategies more directly to those used by the advanced neural networks at the heart of today’s AI, and perhaps provide a benchmark by which AI can be measured against the more flexible intelligence of humans.
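The comparison rests on a simple idea: a learning system that is given no rules and no story, and is rewarded only with points. As a rough illustration of that framing – not a description of the networks Agarwal actually used, and with a made-up toy puzzle standing in for hexxed – a minimal sketch might look like this:

```python
# Illustrative only: a minimal trial-and-error learner of the kind the article
# alludes to -- it knows nothing about the rules and only tries to raise its
# score. The toy puzzle and epsilon-greedy scheme are assumptions for this
# sketch, not details of Agarwal's models or of hexxed itself.
import random
from collections import defaultdict

class ToyPuzzle:
    """A 5-step puzzle: at each step, move 1 scores a point, moves 0 and 2 do not."""
    def reset(self):
        self.step_index = 0
        return self.step_index                      # state = current step

    def legal_moves(self, state):
        return [0, 1, 2]

    def step(self, move):
        reward = 1 if move == 1 else 0
        self.step_index += 1
        done = self.step_index >= 5
        return self.step_index, reward, done

def play_blind(env, episodes=500, epsilon=0.1):
    """Estimate which move scores best in each state, purely from points earned."""
    value = defaultdict(float)                      # (state, move) -> score estimate
    counts = defaultdict(int)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            moves = env.legal_moves(state)
            if random.random() < epsilon:           # occasionally explore
                move = random.choice(moves)
            else:                                   # otherwise exploit best-known move
                move = max(moves, key=lambda m: value[(state, m)])
            next_state, reward, done = env.step(move)
            counts[(state, move)] += 1
            # incremental average of the rewards seen for this state-move pair
            value[(state, move)] += (reward - value[(state, move)]) / counts[(state, move)]
            state = next_state
    return value

if __name__ == "__main__":
    learned = play_blind(ToyPuzzle())
    print({k: round(v, 2) for k, v in learned.items() if k[1] == 1})
```

A learner like this gets no instructions and no narrative, only a score – which is what makes its progress comparable, at least in spirit, to that of players who must figure hexxed out on the fly.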

The app is also a rare attempt to crowdsource players’ contributions to basic research into the way humans think. Other participatory studies, especially those run through Amazon Mechanical Turk, also provide large datasets that can be used to study cognition.

“You have people doing behavioral experiments in the form of games all the time, but not in a way that scientists find particularly useful,” Agarwal said. “This game is a form of citizen science that can help us model behavior in a more immersive environment than what we typically build in the lab.”

The hexxed app stores every move made by the player, reporting the data back to the team for analysis. If enough players reach the highest level and master the game – Agarwal hopes that at least 1,000 expert-level players will be spellbound by it – he will have a unique dataset that scientists can use to answer a myriad of scientific questions about intelligent problem-solving.
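For a sense of what “storing every move” might involve, here is a minimal sketch of a per-move record. The field names and the JSON-lines format are hypothetical illustrations, not the team’s actual telemetry schema.

```python
# Illustrative only: the kind of per-move record an app like hexxed might log.
# Field names and the JSON-lines format are assumptions, not the real schema.
import json
import time

def log_move(log_file, player_id, level, puzzle_id, move, score):
    """Append one move as a single JSON line for later analysis."""
    record = {
        "player_id": player_id,     # anonymized player identifier
        "level": level,
        "puzzle_id": puzzle_id,
        "move": move,               # e.g. which tile was tapped
        "score": score,             # points after this move
        "timestamp": time.time(),
    }
    log_file.write(json.dumps(record) + "\n")

# Example use:
# with open("moves.jsonl", "a") as f:
#     log_move(f, player_id="p42", level=3, puzzle_id=17, move="tile_5", score=120)
```

A stream of records like this is enough to reconstruct every attempt at every puzzle, which is what would let researchers compare strategies across thousands of players.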

“Initially, we focus on those leaps in insight that suddenly allow a player to solve entire classes of related problems,” said Agarwal. “Hopefully, if we have a good enough model, then we can look at this data in a more detailed way and start looking at the more nuanced differences in individual experience: cultural differences, age differences, personality differences. There are people who are more prone to anxiety, planning, rumination or persistence. Do they approach the game differently?”

The sweet spot of game complexity

Scientists interested in human reasoning have, to date, used one of two approaches, Agarwal said. The first is to use simple lab tasks. While these are easy to model, they fall far short of the level of complexity found in real-world problems.

At the other end of the spectrum, researchers have used existing games such as chess, Go, and Tetris. But these are all complicated strategy games – some, like chess, with idiosyncratic rules – that are difficult to model because there is an essentially unlimited number of possible game board arrangements.

“The problem with games like chess or Tetris is that you can’t map the space of possible experiences – they are practically endless,” said Agarwal, who received his doctorate in neuroscience from UC Berkeley in 2009, working with Ehud Isacoff, director of the Helen Wills Neuroscience Institute.

“The hexxed application, unlike most neuroscientific tasks, requires participants to choose from a large but well-defined space of actions. Plus, by the time you beat the game you will have grappled with essentially each of the 164 possible puzzles, attempting each one, forming a much more complete map connecting the puzzle space to the action space.”
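To make that idea concrete, here is a minimal sketch of what such a map could look like in code: each puzzle keyed to the attempts players made on it. The structure and names are hypothetical illustrations, not the team’s actual data format.

```python
# Illustrative only: a hypothetical "puzzle space -> action space" map.
# Each of the 164 puzzles is keyed to the attempts players made on it, so
# strategies can be compared puzzle by puzzle; the structure is an assumption.
from collections import defaultdict

puzzle_to_attempts = defaultdict(list)   # puzzle_id -> list of recorded attempts

def record_attempt(puzzle_id, action_sequence, solved):
    """Store one complete attempt (an ordered list of moves) at one puzzle."""
    puzzle_to_attempts[puzzle_id].append({
        "actions": list(action_sequence),
        "solved": solved,
    })

record_attempt(17, ["tile_5", "tile_2", "tile_9"], solved=True)
print(len(puzzle_to_attempts[17]))   # -> 1
```

Because every dedicated player eventually works through essentially all 164 puzzles, a map like this ends up densely populated – unlike chess or Tetris, where most positions are never visited at all.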

Agarwal and his colleagues tested a prototype of the game last year on a dozen subjects while he was a postdoctoral fellow at the Champalimaud Center for the Unknown, and found that people fell back on the same common strategies, including suboptimal ones.

As they progressed to levels requiring new solutions, subjects consistently used outdated strategies that were not up to the challenge.

He then compared human players’ performance to that of several neural networks that have been used to master Go and Atari games.

“The AIs were successful, but took a lot longer,” Agarwal said. “At level 1, people took two attempts to beat it; the AI took 20 attempts. By the time the AI reached level 5, it was closer to humans. But if you look at how humans and the AI solve individual puzzles, by the end of the game the humans solve half of the puzzles perfectly, but the AI almost never gets a puzzle perfectly. It seems to use makeshift solutions that are good enough, but it never gets at what is really at the heart of the problem presented – it’s doing something stupid.”

Agarwal’s findings on human play strategies could help design neural networks and AI with better problem-solving skills.

“When you’re in a new situation and you have to piece together seemingly unrelated past experiences to find something new – that’s closer to the frontier, as far as AI is concerned,” he said. “This is the problem of generalization. How do you apply familiar strategies to unfamiliar problems, and when should you ditch them and try something totally different?”

He suspects that individuals will cluster around a small number of different approaches to solving the game, jumping from one approach to another as they grapple with increasing degrees of complexity in the game.

“Each of us has only one local point of view from which we see a problem, until we abandon it for another local point of view, another theory,” he said. “By looking at thousands of humans, we can get a big picture of how humans as a population approach the same problem in a way that no one can see for themselves. This game is a way to systematically encourage people to make a series of discoveries, so that we can model them mathematically.”

He has already seen players – he now has around 40 dedicated testers – create stories about which strategies work, many of which are bogus. His mother even reached the last level with a mistaken account of what was allowing her to succeed.

“The data we’ve collected so far shows how central stories are, even if we didn’t initially expect that to be the case. Whether we like it or not, this is the reality of how people approach complex problems.”

Gautam Agarwal, neuroscientist, University of California-Berkeley

He sees intriguing parallels to the complex decisions involved in voting, which require balancing a lot of sometimes conflicting evidence and forming narratives to simplify the process, but which inevitably involve oversimplification and in-the-moment decision-making that can have long-term consequences.

“I would view this project as a mathematical attempt to explain how people use stories to make sense of the world,” he added.

