Researchers have developed an artificial-intelligence bot that can beat human players in tricky online multiplayer games where player roles and motives are kept secret.
Many gaming bots have been built to keep up with human players.
Earlier this year, a team developed the world’s first bot that can beat professionals in multiplayer poker. Now, a bot can also beat humans in multiplayer hidden-role games.
Several bots have also been built to beat professional chess players or join forces in cooperative games such as online capture the flag. In these games, however, the bot knows its opponents and teammates from the start.
Building on poker AI:
The bot, called DeepRole, adds novel deductive reasoning to an AI algorithm commonly used for playing poker.
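The poker-playing algorithms alluded to here belong to the counterfactual-regret-minimization family, whose core update is regret matching: play each action in proportion to how much you regret not having played it in the past. Below is a minimal, illustrative regret-matching sketch on rock-paper-scissors against a uniform-random opponent; it is not the paper's implementation, and the game and opponent are stand-ins.

```python
import random

# Minimal regret-matching sketch (the core update inside the
# counterfactual-regret-minimization family of poker algorithms).
# Illustrative only: a 3-action game, not Avalon itself.

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff

def strategy_from_regrets(regrets):
    """Play actions in proportion to their positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no regret yet: play uniformly

def train(iterations=10000, seed=0):
    rng = random.Random(seed)
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        my_action = rng.choices(range(ACTIONS), weights=strategy)[0]
        opp_action = rng.randrange(ACTIONS)  # stand-in opponent: uniform random
        # Regret: how much better each alternative would have scored.
        for a in range(ACTIONS):
            regrets[a] += PAYOFF[a][opp_action] - PAYOFF[my_action][opp_action]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy over training

avg = train()
print(avg)
```

Against this symmetric zero-sum game, the average strategy hovers near uniform play, the game's equilibrium.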
The researchers pitted DeepRole against human players in more than 4,000 rounds of the online game The Resistance: Avalon.
If you replace a human teammate with a bot, you can expect a higher win rate for your team. Bots are better partners.
The work is part of a broader project to better model how humans make socially informed decisions. Doing so could help build robots that better understand, learn from, and work with humans.
Games like Avalon better mimic the dynamic social settings humans experience in everyday life. You have to figure out who’s on your team and will work with you, whether it’s your first day of kindergarten or another day in your office.
In each round, a leader proposes a subset of players to send on a mission. All players then simultaneously and publicly vote to approve or disapprove the subset. If a majority approves, the subset’s members secretly determine whether the mission succeeds or fails.
If both team members choose “succeed,” the mission succeeds; if even one selects “fail,” the mission fails. Resistance players must always choose to succeed, but spy players may choose either outcome.
The resistance team wins after three successful missions; the spy team wins after three failed missions.
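The mission and win rules above can be sketched in a few lines. This is a minimal illustration assuming two-player missions, as in the early rounds of the standard game; function names are my own.

```python
# Minimal sketch of the mission logic described above: resistance members
# must play "succeed", spies may play either card, and a single "fail"
# card sinks the mission.

def mission_result(cards):
    """cards: the secret choices of the mission team, e.g. ["succeed", "fail"]."""
    return "success" if all(c == "succeed" for c in cards) else "failure"

def game_winner(mission_results):
    """First side to three missions wins; None means the game continues."""
    if mission_results.count("success") >= 3:
        return "resistance"
    if mission_results.count("failure") >= 3:
        return "spies"
    return None

print(mission_result(["succeed", "succeed"]))                      # success
print(mission_result(["succeed", "fail"]))                         # failure
print(game_winner(["success", "failure", "success", "success"]))  # resistance
```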
The bot is trained by playing against itself as both resistance and spy. When playing an online game, it uses its game tree to estimate what each player is going to do.
At each mission, the bot looks at how each person played in comparison to the game tree.
As the game progresses, the bot assigns a probability to each player’s likely role. These probabilities are used to update the bot’s strategy and increase its chances of victory.
Simultaneously, it uses this same technique to estimate how a third-person observer might interpret its own actions. This helps it estimate how other players may react, helping it make more intelligent decisions.
If it’s on a two-player mission that fails, the other players know one of its members is a spy. The bot is unlikely to propose the same team on future missions, since it knows the other players now suspect it.
Interestingly, the bot did not need to communicate with other players, which is usually a key component of the game. Avalon lets players chat through a text module during the game.
“There is still much work to be done, especially when the social interaction is more open ended, but we keep seeing that many of the fundamental AI algorithms with self-play learning can go a long way.”
Next, the researchers may enable the bot to communicate during games with simple text, such as saying a player is good or bad.
That would involve mapping text to the probability that a player is resistance or spy, which the bot already computes to make its decisions.
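That proposed mapping could be as simple as thresholding the bot's existing role probabilities. The sketch below is hypothetical (names, threshold, and phrasing are my own, not the researchers'):

```python
# Hypothetical sketch of the proposed extension: turning the bot's
# existing spy probabilities into simple "good"/"bad" chat statements.

def statement_for(player, spy_probability, threshold=0.7):
    """Emit chat text only when the belief is confident either way."""
    if spy_probability >= threshold:
        return f"{player} is bad"
    if spy_probability <= 1 - threshold:
        return f"{player} is good"
    return None  # not confident enough to say anything

print(statement_for("C", 0.85))  # C is bad
print(statement_for("D", 0.10))  # D is good
```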
Beyond that, a future bot might be equipped with more complex communication capabilities, enabling it to play language-heavy social deduction games such as the popular game Werewolf, which involves several minutes of arguing and persuading other players about who’s on the good and bad teams.
The research was conducted at the Massachusetts Institute of Technology.