John Chamberlain
Developer Diary – You Heard It Here First – 17 January 2018
StarCraft, the Next Frontier in AI
When AlphaGo defeated Lee Sedol in March of 2016 at igo (or baduk, as it is known in Korean), it represented a pivotal moment in the history of computer science. Igo is a computationally intractable game, so it cannot be brute-forced the way chess can. To win at igo, a computer must be able to think strategically. For this reason, the next frontier is StarCraft. In StarCraft, both players command hundreds of different units which they must build up over time. The units can be used, just as in real life, to execute complex cooperative strategies and maneuvers. StarCraft has been called the hardest game of all time because of its complexity and real-time dynamics. To win at such a game, a computer needs not only to strategize, but to form long-range plans based on complex sets of goals. So far, no technology has been invented that is capable of doing this. If such a thing existed, it would be revolutionary, because such a computational engine could be used to replace generals or even company CEOs--as long as the goals are fixed and straightforward.

In 2016, after their triumph at igo, DeepMind announced they would take up this challenge. It has been well over a year since then, and they have apparently been unable to make significant progress. The StarCraft AIs that compete publicly are still quite weak and can be beaten by amateurs like me. The fundamental reason for this is that we can devise creative plans that the AI has difficulty anticipating, while the AI has difficulty forming plans of its own. There is also the problem of inference. Since StarCraft is a game of hidden information, the players have to infer what the opponent is doing from small clues. Without an ability to understand plans, it is difficult for the computer to do this. A human can scout the computer and easily figure out what it is doing, but the computer bumbles along, oblivious to the human's plans, until it is too late for the machine.
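To make the inference problem concrete, here is a minimal sketch of how a program could update its belief about a hidden opponent strategy from a single scouted clue, using a plain Bayesian update. The strategy labels, prior weights, and likelihoods below are invented purely for illustration; they are not drawn from StarCraft data or from anything DeepMind has published.

    # Toy Bayesian update: inferring a hidden opponent strategy from one scouted clue.
    # All numbers are illustrative assumptions, not measured StarCraft statistics.
    priors = {"rush": 0.3, "economy": 0.5, "air": 0.2}

    # Assumed likelihood of scouting "no second base" under each hypothetical plan.
    likelihood_no_expansion = {"rush": 0.9, "economy": 0.2, "air": 0.6}

    unnormalized = {s: priors[s] * likelihood_no_expansion[s] for s in priors}
    total = sum(unnormalized.values())
    posterior = {s: p / total for s, p in unnormalized.items()}

    print(posterior)  # the "rush" hypothesis now dominates the belief

The hard part, of course, is not the arithmetic but learning what the clues and likelihoods are in the first place, which is exactly where current StarCraft bots fall down.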

In my opinion, the way forward is through a technique called a deep generative model (DGM). DGMs are capable of learning from prior data and then synthesizing new ideas based on that data. The main approach for implementing DGMs is a type of neural network called a "restricted Boltzmann machine" (RBM). Currently, most AI researchers do not consider RBMs to be worth exploring and regard them as inferior to other methods, but I think this is a mistake. I suspect DeepMind is still plodding along with their MCTS (Monte Carlo tree search)-based systems, and that is why they are making no progress. It is noteworthy that they did not figure out MCTS in the first place; others had already shown how to apply it to go. DeepMind only improved the implementation and organized it on big iron; they did not make any fundamental theoretical discoveries. To crack the StarCraft problem, DeepMind will have to start thinking more creatively and use DGMs or some new method to make progress.
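For readers curious what an RBM actually looks like, below is a minimal sketch of one trained with a single step of contrastive divergence (CD-1) in plain NumPy. The layer sizes, learning rate, and toy binary data are assumptions made for illustration; this is the textbook algorithm, not DeepMind's code and not a StarCraft model.

    # Minimal restricted Boltzmann machine trained with one step of
    # contrastive divergence (CD-1). Sizes and data are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    n_visible, n_hidden = 6, 4            # assumed toy dimensions
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)             # visible biases
    b_h = np.zeros(n_hidden)              # hidden biases
    lr = 0.1

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample(p):
        return (rng.random(p.shape) < p).astype(float)

    # Toy binary training data (a stand-in for encoded game states).
    data = rng.integers(0, 2, size=(100, n_visible)).astype(float)

    for epoch in range(50):
        for v0 in data:
            # Positive phase: hidden probabilities given the data.
            p_h0 = sigmoid(v0 @ W + b_h)
            h0 = sample(p_h0)
            # Negative phase: one Gibbs step back to a reconstruction.
            p_v1 = sigmoid(h0 @ W.T + b_v)
            v1 = sample(p_v1)
            p_h1 = sigmoid(v1 @ W + b_h)
            # CD-1 update: pull weights toward the data statistics and
            # away from the model's own reconstruction statistics.
            W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
            b_v += lr * (v0 - v1)
            b_h += lr * (p_h0 - p_h1)

Stacking layers of RBMs is one standard way to build a deep generative model, which is the kind of component this column is arguing for; scaling such a model up to something that can plan a StarCraft game is the open problem.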
