A Machine to Play Pitfall

Wednesday 17 June 2009, 7:30 pm

Carlos Diuk, Andre Cohen, and Michael L. Littman of Littman’s RL3 Laboratory at Rutgers devised a new way of doing reinforcement learning, using Object-Oriented Markov Decision Processes (OO-MDPs), a representation that operates at a higher level than usual, describing the world in terms of objects and their interactions. They had a paper about this at last year’s International Conference on Machine Learning (ICML). Better yet, they demonstrated the OO-MDP representation in a system that learned to play Pitfall in an emulator. I don’t believe the system got all the treasures, but watching it play and explore the environment was certainly impressive. The technique seems like an interesting advance, and by trying it out on a classic game, the researchers suggest it will have plenty of “serious” uses in addition to applications in video game testing and game AI.
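To give a sense of the idea: in an OO-MDP, the state is a set of typed objects with attributes, and the dynamics are conditioned on relations between objects (such as touching) rather than on raw state coordinates. The class names, attributes, and the toy rule below are illustrative assumptions for this post, not the actual representation from the Diuk, Cohen, and Littman paper.

```python
from dataclasses import dataclass

# Hypothetical OO-MDP-style state: typed objects with attributes,
# rather than one flat state vector.
@dataclass
class Obj:
    kind: str   # e.g. "player", "pit", "log"
    x: int
    y: int
    w: int = 1
    h: int = 1

def touching(a: Obj, b: Obj) -> bool:
    """A relation between objects; OO-MDP dynamics are described
    in terms of relations like this, not raw coordinates."""
    return abs(a.x - b.x) < (a.w + b.w) and abs(a.y - b.y) < (a.h + b.h)

# A toy transition rule in condition-effect form: if the player
# object touches a pit object, a particular effect follows.
def step(player: Obj, others: list[Obj]) -> str:
    for o in others:
        if o.kind == "pit" and touching(player, o):
            return "fell"
    return "safe"

player = Obj("player", x=3, y=0)
world = [Obj("pit", x=3, y=0, w=2), Obj("log", x=10, y=0)]
print(step(player, world))  # -> fell
```

Because rules like this mention object types and relations instead of exact positions, a learner can generalize what it learns about one pit to every pit on every screen, which is part of what makes the approach appealing for games.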

5 Comments »

  1. yay!

    Comment by Ian Bogost — 2009-06-17 @ 7:48 pm
  2. I think I’ve talked about this with you before, but someone had a program working through a hacked version of MAME that was discovering Pac-Man patterns. I wish I could have found that software and helped make it actually do non-reversing pattern checking. This program was a side project of someone many years ago.

    Comment by Jason Scott — 2009-06-18 @ 3:54 pm
  3. Are there any papers/discussions out there about the close relationship video games have with MDPs?

    MDPs are a great way to formally describe what video games are, and (to me) RL, or value-function approximation, is a good metaphor for describing what gamers enjoy doing.

    Comment by sfingram — 2009-06-23 @ 7:03 pm
  4. (for sfingram) there was a lot about RL and video games at this tutorial: http://research.microsoft.com/en-us/projects/mlgames2008/

    Check out these people’s work!

    Comment by Carlos D — 2009-07-09 @ 8:12 am
  5. Nice work!

    However, there’s lots of work on using reinforcement learning algorithms of different kinds (temporal difference learning, evolution strategies etc.) to learn to play games. My previous supervisor Simon Lucas worked on learning to play Pac-Man (using the Microsoft Revenge of Arcade version), I’ve worked on various car racing games, Unreal Tournament, Super Mario Bros etc. The proceedings of the yearly Computational Intelligence and Games conferences are full of examples of this kind of research.

    Apart from being a good way of testing various kinds of RL algorithms, it’s also a good way of learning about the game you’re testing, and might actually be useful in game design.

    Comment by Julian Togelius — 2009-07-09 @ 11:24 am



This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2014 Post Position