If you’re a fan of old 1980s games, then this reinforcement learning environment will interest you.
NetHack is a turn-based, Dungeons & Dragons-style video game. The player controls a character tasked with finding the Amulet of Yendor, which is buried deep within a dungeon. Along the way, the character encounters lots of different objects, monsters and artefacts, most of which will try to kill it!
The simplicity of NetHack masks a rich and complex game. Simply memorising a route through a dungeon is useless, because the dungeon itself is regenerated every time a new game starts. The game is also non-deterministic: many of the interactions with objects are probabilistic. Fighting orcs and goblins doesn’t always go to plan, and they can seriously damage the agent.
Symbols represent different objects within the dungeon, including walls, doors, and the monsters that the agent encounters. Learning what these symbols mean, how to interact with them and what effects they have in the world offers AI agents a rich learning opportunity. It’s further complicated by objects having effects through time. For example, if the agent eats something poisonous, the poison affects it for several turns, not just the next one.
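To make the symbol system concrete, here is a small sketch of how screen characters map to game objects. The symbol meanings follow NetHack's conventions, but the dictionary and parsing function are purely illustrative, not part of the game or of NLE's API:

```python
# Illustrative mapping from NetHack screen symbols to objects.
# (The meanings follow the game's conventions; this code is a toy sketch.)
SYMBOLS = {
    "@": "the player character",
    "|": "a vertical wall",
    "-": "a horizontal wall",
    ">": "a staircase down",
    "<": "a staircase up",
    "%": "something edible",
    "!": "a potion",
}

def describe_screen(rows):
    """Return (row, col, description) for every known symbol on screen."""
    found = []
    for r, row in enumerate(rows):
        for c, ch in enumerate(row):
            if ch in SYMBOLS:
                found.append((r, c, SYMBOLS[ch]))
    return found

# A tiny hand-drawn room: the agent '@' next to some food '%',
# with a staircase down '>' in the corner.
screen = [
    "----",
    "|@%|",
    "|.>|",
    "----",
]
for r, c, what in describe_screen(screen):
    print(r, c, what)
```

An agent that builds up this kind of symbol-to-meaning table is doing a small piece of what a learning agent in NLE has to discover for itself.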
The game itself dates from the late 1980s, but it nevertheless offers a great environment for training and testing AI agents.
The NetHack Learning Environment (NLE) was released by Facebook’s AI team in June 2020 and was presented at the NeurIPS 2020 conference. It provides a way for AI agents to interact with the game. Using reinforcement learning, an agent can learn to navigate the dungeon and interact with the objects it finds. Feedback arrives via three channels: the first is the representation of the dungeon itself, the second is a natural-language message, and the third is a collection of statistics about the character, such as its health and strength. The character can also carry items in an inventory.
Since the game is symbol based, there are plenty of opportunities to use traditional symbolic AI to control the agent, and questions abound: which planning algorithms should be used? How are objectives such as eating, fighting and exploring defined? How should knowledge about the environment, including the causal effects of interacting with objects, be retained? And how much knowledge should the AI practitioner encode in the agent itself? Alternatively, neural networks could be used to train the agent. A sample agent included in the release was built using pure neural network methods and had to learn everything itself.
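As one toy answer to the planning question, here is a breadth-first search from the agent `@` to the downward staircase `>` over an ASCII map. Everything here, the map format and the passability rules, is my own simplifying assumption, not NLE's API; a real symbolic agent would plan over NLE's observations instead:

```python
# Toy symbolic planner: BFS shortest path from '@' to '>' on an ASCII map.
from collections import deque

PASSABLE = {".", ">", "<", "@", "+"}  # floor, stairs, agent, doors

def shortest_path(rows, start_sym="@", goal_sym=">"):
    grid = [list(r) for r in rows]
    start = goal = None
    for r, row in enumerate(grid):
        for c, ch in enumerate(row):
            if ch == start_sym: start = (r, c)
            if ch == goal_sym: goal = (r, c)
    frontier = deque([start])
    came_from = {start: None}            # also serves as the visited set
    while frontier:
        pos = frontier.popleft()
        if pos == goal:
            break
        r, c = pos
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[nr])
                    and grid[nr][nc] in PASSABLE
                    and (nr, nc) not in came_from):
                came_from[(nr, nc)] = pos
                frontier.append((nr, nc))
    if goal not in came_from:
        return None                      # staircase unreachable
    path, node = [], goal
    while node is not None:              # walk back from goal to start
        path.append(node)
        node = came_from[node]
    return path[::-1]

dungeon = [
    "-----",
    "|@..|",
    "|.-.|",
    "|..>|",
    "-----",
]
print(shortest_path(dungeon))
```

Of course, in the real game the map is only partially observed and regenerated each run, which is exactly why planning alone isn't enough and learning comes in.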
One of the key things I really like about this environment is that it is purely symbol based. The game is very complex and yet the learning environment can be run on a standard laptop. No GPUs needed.
Lots of open Artificial Intelligence questions are represented here. Facebook has launched a challenge to AI researchers to push the boundaries of the field and to provide a way to showcase new problem-solving techniques. It will be interesting to see whether any advancements or breakthroughs in training techniques emerge from agents learning to find the Amulet of Yendor.
NetHack – source code for the NetHack game: https://github.com/NetHack/NetHack
NetHack Wiki – everything you need to know about the game: https://nethackwiki.com/wiki/Main_Page
Facebook NLE announcement: https://ai.facebook.com/blog/nethack-learning-environment-to-advance-deep-reinforcement-learning
NetHack Learning Environment paper: https://arxiv.org/abs/2006.13760
NetHack Learning Environment code: https://github.com/facebookresearch/nle
The NetHack Challenge: https://ai.facebook.com/blog/launching-the-nethack-challenge-at-neurips-2021