Computers and Go

Alan Levinovitz writes,

The rate at which possible positions increase is directly related to a game’s “branching factor,” or the average number of moves available on any given turn. Chess’s branching factor is 35. Go’s is 250. Games with high branching factors make classic search algorithms like minimax extremely costly. Minimax creates a search tree that evaluates possible moves by simulating all possible games that might follow, and then it chooses the move that minimizes the opponent’s best-case scenario. Improvements on the algorithm — such as alpha-beta search and null-move — can prune the chess game tree, identifying which moves deserve more attention and facilitating faster and deeper searches. But what works for chess — and checkers and Othello — does not work for Go.

Pointer from Tyler Cowen.
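To put the quoted branching factors in perspective, here is a back-of-envelope calculation; the depths are purely illustrative, not claims about how deep real engines search:

    # How a branching factor of 250 (Go) compares to 35 (chess) for
    # brute-force tree search; depths chosen only for illustration.
    for depth in (2, 4, 6):
        chess_nodes = 35 ** depth
        go_nodes = 250 ** depth
        print(f"depth {depth}: chess ~{chess_nodes:.1e} nodes, "
              f"Go ~{go_nodes:.1e} nodes, ratio {go_nodes / chess_nodes:,.0f}x")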

My theory of skill at these sorts of games is that it involves making the correct move a high percentage of the time. With games that last many moves, eventually the weaker player will make a mistake. For example, I can get into a favorable position against Zebra Othello any time it plays a suboptimal opening, but I can almost never maintain my lead.
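A small illustration of that point, with invented per-move accuracy numbers:

    # If a player finds the right move with probability p on each of n moves
    # (treating moves as independent, which is a simplification), the chance
    # of getting through the game without a mistake is p**n.
    for p in (0.99, 0.999):
        for n in (60, 250):   # roughly a chess game vs. a Go game, in moves
            print(f"p={p}, n={n}: P(no mistake) = {p ** n:.2f}")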

Another theory that I and others have is that top players use pattern recognition, based on experience. For computers, databases of games can be used to build experience and the ability to recognize patterns. The real dominance of computers in Othello took place when somebody accumulated a large database of games, so that the programs could start looking for statistical patterns in evaluating positions. My guess is that the same is true in chess.

My advice to “Go” programmers is to

1. Amass a large database of games played by top players, with positions annotated in terms of quality at each step in the game.

2. Then, using your knowledge of Go, construct an algorithm that will rate a position by measuring its similarity to positions in the database and their ratings. (This is the hard part, and it is where the program will have to be constantly revised.)

3. Choose a move that produces a position that least resembles the weak positions in the database. That is, go for a mistake-avoidance approach (a rough sketch of steps 2 and 3 follows this list).
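Here is a minimal sketch of what steps 2 and 3 might look like, assuming positions can be reduced to numeric feature vectors. The feature extraction, the distance measure, and the database format are all placeholders for the genuinely hard, Go-specific work; this is not a description of any existing program.

    import math

    def similarity(a, b):
        """Inverse-distance similarity between two position feature vectors."""
        dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return 1.0 / (1.0 + dist)

    def rate_position(features, database, k=10):
        """Step 2: average annotated quality of the k most similar positions.
        `database` is a list of (feature_vector, quality) pairs built from
        the expert games of step 1."""
        nearest = sorted(database, key=lambda rec: -similarity(features, rec[0]))[:k]
        return sum(quality for _, quality in nearest) / len(nearest)

    def choose_move(legal_moves, make_move, extract_features, database):
        """Step 3: pick the move whose resulting position looks least like the
        weak (low-quality) positions in the database."""
        return max(legal_moves,
                   key=lambda m: rate_position(extract_features(make_move(m)), database))

Everything here turns on extract_features; that is where the constant revision mentioned in step 2 would happen.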

Once a Go program comes close to being able to beat a top player, the ability to use computer-vs.-computer games to enlarge the database will come into play. At that point, progress in computer Go will be very fast.

10 thoughts on “Computers and Go”

  1. The Go search space is not so bad if you limit things down to one region of the board at a time. In fact, good Go players tend to talk about the state of a board that way. They’ll say, black has moves in this region that would increase his territory by 8 points. They’ll say, these two regions can be connected, but only if white moves on it right now; if black moves first, he can prevent the two regions from connecting.

    I would speculate that a good Go program, more than a good chess program, needs to break the board down into regions. The search space for a single region has a much smaller branching factor. Moreover, if you search the whole board, you end up searching a lot of sequences that are almost equivalent. Suppose regions 1, 2, and 3 all have an atari available to you. It doesn’t matter which one you play in first, or for that matter second and third. As such, it’s very wasteful to consider all possible interleavings of plays in each region (a little arithmetic at the end of this comment makes the waste concrete).

    Note that this kind of thing would help with chess, too. Now that chess computers are as strong as matters, the next frontier is to do it with less hardware, and that will surely involve a smarter search tree.
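    On the interleaving point, a back-of-envelope calculation; the region sizes below are invented for illustration:

        # If independent regions need local sequences of a, b, c, ... moves, a
        # whole-board search that ignores the independence explores every
        # interleaving of those sequences: (a+b+c+...)! / (a! * b! * c! * ...).
        from math import factorial

        def interleavings(*lengths):
            total = factorial(sum(lengths))
            for n in lengths:
                total //= factorial(n)
            return total

        print(interleavings(4, 4, 4))   # 34,650 orderings of three 4-move lines
        print(interleavings(6, 6, 6))   # about 17 million orderings of three 6-move lines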

  2. I doubt self-play is the key to strength. AFAIK, it hasn’t been for chess and Othello. By self-play, you might be able to tune parameters in the evaluation function, but you won’t fundamentally change what the evaluation function looks for, i.e. what parameters there are.

    Isn’t Othello close to being exhaustively searched? And I guess strong programs on decent hardware should solve “endgames” with 30+ empty squares. Don’t need much knowledge to beat humans then…
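    For what it is worth, a sketch of why exact endgame solving needs so little knowledge: once the search can reach the end of the game, the exact final score replaces any evaluation function. The game interface below is hypothetical, and pruning is omitted.

        def solve(game, position, player):
            """Exact best achievable final score for `player`: plain negamax
            with no depth cutoff and no evaluation function."""
            if game.is_over(position):
                return game.final_score(position, player)
            moves = game.legal_moves(position, player)
            if not moves:   # forced pass, as happens in Othello
                return -solve(game, position, game.opponent(player))
            return max(-solve(game, game.apply(position, m, player),
                              game.opponent(player))
                       for m in moves)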

  3. Someone made the point that algorithms don’t presently play these games the same way humans do. That makes these advances mostly parlor tricks.

    It seems programmers should start by beating two-year-olds and chimps at something they play, the way they play, and work up from there.

  4. Isn’t that kinda what Monte Carlo does, except instead of accessing a huge existing database, the computer self-generates the database during the game?
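    A minimal “flat Monte Carlo” sketch of the idea in this comment (random playouts generated during the game standing in for a stored database) might look like the following; real Go programs use the more elaborate Monte Carlo tree search, and the game interface here is hypothetical.

        import random

        def playout(game, position, to_move, perspective):
            """Play uniformly random moves to the end of the game; return 1 if
            `perspective` wins, else 0."""
            while not game.is_over(position):
                moves = game.legal_moves(position, to_move)
                if moves:
                    position = game.apply(position, random.choice(moves), to_move)
                to_move = game.opponent(to_move)
            return 1 if game.winner(position) == perspective else 0

        def choose_move(game, position, player, playouts_per_move=200):
            """Pick the move with the best win rate over self-generated playouts."""
            def win_rate(move):
                child = game.apply(position, move, player)
                wins = sum(playout(game, child, game.opponent(player), player)
                           for _ in range(playouts_per_move))
                return wins / playouts_per_move
            return max(game.legal_moves(position, player), key=win_rate)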

  5. We tend to treat the achievements of computers as “parlor tricks” because we know how they work: you build an evaluation function and run it against a very large number of board positions. What should amaze us is how many evaluations it takes to beat a chess grandmaster: something like 200 million per second. Once a thing has been done, the problem is quickly trivialized by the progress of hardware. The Watson Wikipedia page says that the latest hardware is 25 times faster than the Jeopardy winner and fits inside three stacked pizza boxes. The advent of SSDs alone in the last decade makes Jeopardy unwinnable by a human player, and innovations like this happen a lot.

    Obviously brains are organized completely differently. Instead of executing 64-bit operations at gigahertz frequencies, whatever it is they do is executed at a few hundred hertz. Higher (conscious) brain functions run at a few hertz or less. Hardware still has a long way to go before these ultra-parallel techniques produce human-level results, but this will certainly happen, and shortly afterwards it will become trivial.

    The best book on this remains Kurzweil’s Singularity book. My Amazon book review mentioned that “friction” would slow his forecasts significantly. But in terms of the grand sweep of history, he is unavoidably correct.

    • I vaguely understand all that. But I have also wondered, even with Watson, how many humans it took to create the software and to put together the dictionaries and encyclopedias that get uploaded to Watson’s brain.

      How much of the current achievements is purely division of human labor focused on a very specific algorithm?

      When will we have a robot give me a ride to the library and check out his own reference books?

    • Considering that the real “game” of Jeopardy is hitting the answer button faster than the other players, it’s not surprising that a machine should be better at it.

      I’d have liked to see how Watson did with more puns, but that’s contra the factual-questions nature of the show.

  6. This is fine advice, but also very well known by AI researchers. Accurately evaluating position quality, say by comparing with known expert plays, is a huge part of tree search in large trees. If you can’t play the game out to the end, then your minimax simulation needs an evaluation function at the “leaves” (the board states where you stop going deeper in the play tree and need to measure their quality), and similarity to known boards that resulted in a win, or board positions by expert players, is an obvious evaluation function.
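    In sketch form, that structure is a depth-limited search with a pluggable evaluation at the leaves; evaluate could be the database-similarity measure the post proposes. The game interface below is hypothetical, and alpha-beta pruning and pass moves are omitted for brevity.

        def search(game, position, player, depth, evaluate):
            """Depth-limited negamax: recurse until depth runs out, then let the
            evaluation function estimate the quality of the leaf position."""
            if depth == 0 or game.is_over(position):
                return evaluate(position, player)
            return max(-search(game, game.apply(position, m, player),
                               game.opponent(player), depth - 1, evaluate)
                       for m in game.legal_moves(position, player))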

  7. It’s pretty well understood by this point that chess mastery is mostly memorization. It’s the way you solve any NP problem: you reduce it to a problem that you already know how to solve.

  8. What we may be (probably are) observing are the limitations of the particular form of computer architecture that is basically binary: yes or no, either/or.

    Binary determinations of action or inaction upon specific “pieces” or resources (persons, material objects, etc.) may have extensive limitations when applied to spatial inferences (uses of dimensions) that require the consumption or application of “pieces” or other material substances.

    The applications of what we call human “intelligence” involve much more than binary processes.
