The History of AI and Games

Everybody’s Turkin for the Weekend

For the last 20 years, games have served as a proving ground for AI researchers across the world, but the man vs. machine dynamic in games actually goes back to 1770. That’s when Hungarian inventor Wolfgang von Kempelen introduced “The Turk”, an automaton that could play chess. For 84 years the machine toured the Americas and Europe, winning most of the games it played (including games against Napoleon and Ben Franklin). Eventually, however, it was revealed that the entire thing was a hoax. The device (pictured below) actually worked by having a person sit inside of it and play the chess match. How it took 84 years to figure this out speaks to the times, I suppose, but hey, good for Kempelen.

 

[Image: The Turk, the chess-playing automaton]

IBM Deep Blew Everyone’s Minds

I was not a reporter in 1997 (I was busy coloring), but if I were, the above sentence would have been my headline. We’ll get to why in a few minutes. But first, let’s go back to 1950. That’s when a man named Claude Shannon took the next major step in the chess match of man vs. machine with a paper called “Programming a Computer for Playing Chess” (I’ll give you two guesses what it’s about). It laid the groundwork for a lot of the research that followed. Then all this[1] happened:

By 1962, a program designed by MIT students could beat amateur chess players. By 1967, MIT programmer Richard Greenblatt, who was himself an accomplished chess player, added a number of powerful heuristics to the earlier MIT systems and achieved the unprecedented score of 1400 at a chess playing competition. A score of 1400 was the level of a very good high school player, and was a significant milestone for chess playing programs. Computer chess was getting good.
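Those early programs, and every serious chess engine since, follow the recipe Shannon sketched: search the tree of possible moves and score the resulting positions with an evaluation function. Here’s a minimal sketch of that minimax idea, with a made-up toy “game” (a number you nudge up or down) standing in for chess:

```python
# Toy minimax search in the spirit of Shannon's 1950 paper.
# The "game" here is hypothetical: the state is a number, a move
# adds or subtracts 1, and the evaluation is the number itself.
# A real engine would plug in chess move generation and a real
# evaluation function (material, position, and so on).

def minimax(state, depth, maximizing, moves_fn, apply_fn, eval_fn):
    """Best achievable evaluation from `state`, looking `depth` plies ahead."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state)
    results = (minimax(apply_fn(state, m), depth - 1, not maximizing,
                       moves_fn, apply_fn, eval_fn) for m in moves)
    return max(results) if maximizing else min(results)

moves_fn = lambda s: [1, -1]   # two "moves": nudge the number up or down
apply_fn = lambda s, m: s + m
eval_fn = lambda s: s          # higher is better for the maximizing player

print(minimax(0, 4, True, moves_fn, apply_fn, eval_fn))  # -> 0
```

With symmetric moves and alternating turns, best play cancels out to 0, the same way two evenly matched players grind toward a draw. Deep Blue’s edge came from searching this kind of tree vastly faster and deeper than any human can.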

It gets really interesting in the mid-1990s, when IBM starts making serious strides and beating a lot of top players. All of that work came to fruition in 1997, when their AI, Deep Blue (get the joke in the headline?), beat chess grandmaster Garry Kasparov. This was a huge victory for IBM and the AI community. Chess, however, is a game that, while strategic in nature, comes down to evaluating an insanely large number of possible moves and the quality of each one. Computers were practically designed to be better than us at that. It’s not like computers could ever beat us in a game that involved the English language, subtle context clues, and vague connections.

I’ll Take “What Is Watson?” for $500, Alex

You can probably tell where this is going, but in case you’ve either been living under a rock or are a part of Generation Z (do kids these days watch game shows?), I’m talking about the esteemed game show Jeopardy!. This was the birth of IBM’s Watson and the next major milestone in AI and games. The researchers at IBM struggled for a long while on this one. You might think, “why would it be hard? The computer can just download Wikipedia and be set”. That was my thought too, but one of the tricks to Jeopardy!, or a finance midterm, is figuring out what the question is even asking. That might not be too hard for humans, but there are puns and double entendres sprinkled throughout the questions. While these puns might make someone like you or me groan (okay, not me, I live for them), they completely throw off a computer. The breakthrough came when they had Watson keep track of its questions and answers, right or wrong. There’s a whole thing about Bayesian priors here, but to keep it simple: Watson started learning from its mistakes.
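To make the “Bayesian priors” hand-wave slightly more concrete, here’s a toy sketch (my illustration, not Watson’s actual machinery) of updating confidence in a single evidence source as the answers it supported turn out right or wrong:

```python
# Toy Bayesian update: confidence in an evidence source shifts as
# answers it supported turn out right or wrong. The probabilities and
# the single-source setup are invented for illustration; the real
# Watson combined scores from many evidence streams at once.

def update(prior, p_correct_if_reliable, p_correct_if_not, was_correct):
    """P(source is reliable) after observing one right or wrong answer."""
    if was_correct:
        like_reliable, like_unreliable = p_correct_if_reliable, p_correct_if_not
    else:
        like_reliable = 1 - p_correct_if_reliable
        like_unreliable = 1 - p_correct_if_not
    numerator = prior * like_reliable
    return numerator / (numerator + (1 - prior) * like_unreliable)

belief = 0.5                               # start undecided about the source
for outcome in [True, True, False, True]:  # observed right/wrong answers
    belief = update(belief, 0.9, 0.4, outcome)

print(round(belief, 3))  # -> 0.655
```

Right answers pull the belief up, the one wrong answer knocks it back down, and over thousands of practice questions those shifting weights are, loosely, Watson “learning from its mistakes”.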


Obligatory I, Robot gifs aside, this was a major stride in the field of AI, and now Watson is being used across the world for a lot more than a game show. IBM is using Watson for a myriad of things, from text sentiment analysis to medical diagnosis. “But, Matt”, you say, “understanding puns aside (which is 80% of what it means to be human), a robot could never understand and implement tactics in a dynamic, changing system; balancing resources with multiple long-term branching strategies, etc., etc.”.

An AlphaStar is Born

Some of you with nerdier inclinations have played a Real-Time Strategy (RTS) game before. These include things like Age of Empires and Warcraft. For those of you who have not, you’ve maybe played something like The Sims. If you haven’t done that, I’m out of examples.

In an RTS you’re faced with solving a particular set of problems, given a particular set of resources, all in real time, which means you have to prioritize problems and allocate those resources effectively. The RTS game we’re going to talk about is StarCraft II. Last week, researchers at Google’s DeepMind shocked the StarCraft world when their AI, AlphaStar, beat professional players 10–1. Now, I understand, broadly, the challenges this sort of game poses for AI researchers, but I think it’s best they describe them to you. Some of them are[2]:

  • Game theory: StarCraft is a game where, just like rock-paper-scissors, there is no single best strategy. As such, an AI training process needs to continually explore and expand the frontiers of strategic knowledge.
  • Imperfect information: Unlike games like chess or Go, where players see everything, crucial information is hidden from a StarCraft player and must be actively discovered by “scouting”.
  • Long-term planning: Like many real-world problems, cause and effect is not instantaneous. Games can take up to an hour to complete, meaning actions taken early in the game may not pay off for a long time.
  • Real time: Unlike traditional board games where players alternate turns between subsequent moves, StarCraft players must perform actions continually as the game clock progresses.
  • Large action space: Hundreds of different units and buildings must be controlled at once, in real time, resulting in a combinatorial space of possibilities. On top of this, actions are hierarchical and can be modified and augmented. Our parameterization of the game has, on average, approximately 10^26 legal actions at every time-step.

I could talk about this for a while, but I’m running out of words here, so I’ll keep it to a high-level overview. They first trained the AI to imitate a lot of professional players’ strategies, then trained copies of the AI against each other until one emerged as the clear best. That AI then played against real, human professionals and stomped them mercilessly.
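The shape of that training loop — seed agents near human-level play, then let them fight and keep the winners — can be sketched like this. Everything below, including the scalar “skill” numbers and the match model, is a made-up stand-in: AlphaStar’s actual league trained deep neural networks with reinforcement learning, not numbers in a list.

```python
# Rough sketch of the two-stage recipe: imitate humans, then self-play.
# All numbers and the "match" model are invented for illustration.
import random

random.seed(0)

def play(a, b):
    """Hypothetical match: the higher-skill agent is more likely to win."""
    return a if random.random() < a / (a + b) else b

# Stage 1: "imitation" seeds a population around human-level skill.
population = [random.uniform(0.8, 1.2) for _ in range(8)]

# Stage 2: self-play league -- winners of random matchups survive,
# each picking up a small improvement along the way.
for generation in range(50):
    winners = [play(random.choice(population), random.choice(population))
               for _ in range(len(population))]
    population = [w * random.uniform(1.0, 1.05) for w in winners]

best = max(population)
print(f"best skill after the league: {best:.2f}")
```

Because winners are disproportionately the stronger agents, selection compounds generation after generation, which is loosely why the final agent ends up far beyond the human play it started from.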

[Image: StarCraft II agent visualization]

This has many implications, but the biggest is that AI is now getting better at thinking strategically, and long term, given imperfect information. While it might be confined to the video game world right now, it will eventually spill over into the real world, which will be the subject of my next blog post.

Thanks for reading!

  1. https://thebestschools.org/magazine/brief-history-of-computer-chess/
  2. https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/

2 comments

  1. Hearing about the advancements of AI is always, not gonna lie, a little bit terrifying. However, I really like the insights you gave on AI and the gaming world. I’m not the most skilled “gamer”, but it makes me wonder: do you think the future of gaming will transition from friends playing friends to individuals trying to outsmart the most intelligent robots in the world? Or do you think AI in gaming is merely a pathway to developing a more strategic thinking robot? Either way, it’s pretty incredible the strides that have been made in the past few years alone. I’m very curious (or afraid) to hear more developments on this in future posts…

  2. One of the most fascinating things I found with the StarCraft AI is that because AI is able to multi-task much more effectively than humans, it was able to come up with strategies that humans were incapable of matching. For example, there was some unit in the game that was able to renew its health after a period of time and was also able to teleport. The AI learned that it could create waves of these units in which the first row would teleport to the back once their health was getting low, allowing a new wave of units to take damage. By cycling these units, it was able to overtake a group of the competitor’s units that were typically stronger. In addition, the AI was able to continue focusing on other tasks (like base building) while using this strategy. I just thought it was so interesting that the AI realized it could take advantage of strategies that humans were incapable of matching.
