Over Spring Break, I learned how to play a game called Dao, a two-person game played on a 4x4 grid. Developed by some incredibly smart people back in 2001, the game is deceptively simple. Each player is given four pieces that start out diagonally on the board, so the two sets of pieces form an X. The players then alternate turns moving pieces. A piece can move in any of the eight directions, but it must slide as far in that direction as it can. The game ends when one player meets a winning condition (one of several alignments of their pieces on the board) or traps an opponent's piece in a corner.
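That sliding rule is the heart of the game, so here is a minimal sketch of it in Python. The board representation (a set of occupied squares) and the function name are my own illustrative choices, not from any official implementation:

```python
def slide(piece, direction, occupied, size=4):
    """Return the square a sliding piece lands on.

    piece:     (row, col) of the moving piece
    direction: (dr, dc), one of the 8 compass directions
    occupied:  set of (row, col) squares holding any piece
    """
    r, c = piece
    dr, dc = direction
    # Keep stepping until the next square is off the board or occupied;
    # the piece must travel as far as it can.
    while True:
        nr, nc = r + dr, c + dc
        if not (0 <= nr < size and 0 <= nc < size) or (nr, nc) in occupied:
            break
        r, c = nr, nc
    return (r, c)

# A piece at (0,0) sliding right stops short of an occupied (0,3):
print(slide((0, 0), (0, 1), {(0, 3)}))  # (0, 2)
```

Note that if the adjacent square is already blocked, the piece cannot move in that direction at all, which is what makes the corner-trap loss possible.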
My friend and I were playing the game because he is required to develop an AI for it, and we wanted to figure out the basic mechanics so he could do so. Before he and I started playing, however, I played online against a computer. I lost every single time. Yet when I started playing against my friend, I won far more games than I lost. That sparked an interesting thought that I wanted to raise here: I was not winning because of my own superior strategy, but because my opponent was making mistakes. In essence, I wasn't winning, per se; I was "not losing".
Looking at the difference between playing against the computer and against my friend, the obvious distinction is that my friend is human and made human errors. Several times we would both be a move or two away from winning, lose track of the other person's pieces because we were too focused on our own, and lose the game as a result. Neither of us could see the whole board and every potential move our opponent might make.
This was not the case with the computer.
The computer, on the other hand, saw the whole board at all times. And while it could not foresee what moves I would make, it could analyze the moves I did make and find the best response far better than I could. As a programming challenge, it is interesting to think about what the programmer was trying to accomplish. Did they want their AI to win? Or did they simply want their AI to not lose? I didn't pay close enough attention while I was playing to know for sure, but I'm curious whether the computer was simply making moves that would prevent me from winning, rather than trying to proactively win. It would be interesting to look at a log of moves and see what the computer actually does.
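The win-versus-not-lose distinction can be made concrete as two different move-selection policies. This is a toy sketch under my own assumptions: I imagine each legal move has already been tagged with whether it wins immediately and whether it leaves the opponent a winning reply (the `Move` class and both policies are hypothetical, not how the online Dao AI actually works):

```python
from dataclasses import dataclass

@dataclass
class Move:
    name: str
    wins_now: bool          # this move completes a winning alignment
    opponent_can_win: bool  # after this move, the opponent has a winning reply

def defensive_choice(moves):
    """'Not losing': prefer any move that denies the opponent a win."""
    safe = [m for m in moves if not m.opponent_can_win]
    return safe[0] if safe else moves[0]

def aggressive_choice(moves):
    """'Winning': take an immediate win first, then fall back to safety."""
    for m in moves:
        if m.wins_now:
            return m
    return defensive_choice(moves)
```

Given a choice between a safe blocking move and a move that wins outright, the defensive policy blocks while the aggressive one finishes the game; a log of moves would show exactly this kind of divergence.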
This got me thinking about how this could relate to Mystery at the Museum, where there are some computer-controlled elements. It is different in that our computer-controlled elements, the challenges, are not directly competing against the player; their behavior is determined by random chance. They are not making their moves based on what the user does at all. I'm curious how the dynamic of the game would change if an AI were introduced, one that would respond to the moves of the user. More interestingly, what would happen if we made an AI that was trying to win, and not just existing for the sake of providing challenge?