Game Brains: Your Artificially Intelligent Opponent

As I take my BYOND project’s AI back to the drawing board for the third or fourth time, I realize I’ve collected a bit of insight into what goes into a game’s artificial intelligence – insight that has overturned my earlier impressions.

In my initial impression, the main thing that worried me was creating a competent AI. Trying to design a real-life robot to perform simple pathfinding is remarkably painstaking work because the computer thinks in terms of ones and zeroes – “on” and “off” switches – while the rock obstructing the way is neither “on” nor “off” but “rock.” Performing the necessary conversion is a challenge.

Games have it a bit easier because the action takes place in a digital world from the start. There are no rocks, merely 1s and 0s that represent abstractions of rock. Some challenge remains, because the environment is structured to resemble real life enough that the player can relate to it. The closer to real it gets, the closer we come to our real-life robot trying to understand an analog rock. Overcoming even the minor version of that challenge posed by a 2D tile-based game was a trial I did not look forward to when it came time to write my own AI.

The Unbeatable AI

What I’ve learned is that this is the least of my worries. When it comes to a computer game, teaching the program how to play its own game is simply a matter of persistence. An effective game AI can be built by providing enough winning instructions to handle every situation the game can produce. Once enough work has been put into that, the computer can even play better than the player, since it follows its instructions exactly and at lightning speed.
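To make that concrete, here is a rough sketch in BYOND’s DM of what I mean by “well instructed”: a priority-ordered ladder of hand-written rules, nothing that learns. The type paths, variables, and thresholds are invented for illustration (including an attack() proc assumed to be defined elsewhere); this isn’t code from my actual project.

```
// A hypothetical "instructed" opponent: check situations in priority
// order and take the first action whose conditions apply.
/mob/enemy
	var/health = 30

/mob/enemy/proc/take_turn()
	var/mob/player/target = locate(/mob/player) in oview(7, src)
	if(!target)
		step_rand(src)              // rule 1: nobody in sight, wander
	else if(health < 10)
		step_away(src, target)      // rule 2: badly hurt, retreat
	else if(get_dist(src, target) > 1)
		step_towards(src, target)   // rule 3: close the distance
	else
		attack(target)              // rule 4: adjacent, strike

// Every rule added covers one more situation; pile up enough of them
// and the opponent "plays" the entire game.
```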
IBM’s Deep Blue has defeated the best chess masters in the world. However, I would argue it was not truly intelligent so much as very well instructed.
IBM's "Deep Blue" has defeated the best chessmasters in
the world. However, I would argue it was not truly
intelligent so much as very well instructed.

The challenge of building the AI will differ with the game, of course. “Twitch” games, which rely on reflexes, are simple enough because the computer’s instant reactions can dominate players at the purely physical level. In more open-ended games it’s more complicated because there are more possible moves, but still doable if enough persistence is applied. In the end, depending on the number and sophistication of the choices involved, more or less effort is required to make a perfect AI – one that makes the best possible move every time.

Whether it’s easy or hard to make a bulletproof AI doesn’t really matter, though. Unlike a failsafe workplace application, the unbeatable AI has no place in a game.

The Ideal Game AI

The real goal of a game AI designer is not to make the computer play remarkably well. The real goal is to make the computer a fun opponent. What’s fun? Let’s go with flow theory: the AI needs to offer enough challenge that the player feels engaged without becoming frustrated. It’s a delicate balancing act, made even trickier by having to devise a means of rating the player.
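One way that rating might look – a purely illustrative sketch with made-up names and thresholds, not anything from my project – is to track the player’s recent win ratio and nudge how often the AI plays its best move:

```
/mob/enemy
	var/skill = 50                    // percent chance to pick the smart move

// Crude player rating: recent win ratio, from 0.0 to 1.0
/proc/rate_player(wins, losses)
	if(wins + losses <= 0)
		return 0.5
	return wins / (wins + losses)

/mob/enemy/proc/adjust_difficulty(wins, losses)
	var/rating = rate_player(wins, losses)
	if(rating > 0.7)
		skill = min(skill + 10, 90)   // player is cruising: tighten up
	else if(rating < 0.3)
		skill = max(skill - 10, 10)   // player is struggling: ease off

/mob/enemy/proc/plays_smart()
	return prob(skill)                // roll against the current difficulty
```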

Another matter that limits me in my current BYOND work is efficiency. Just because I can run some 200,000 lines of BYOND code per second doesn’t mean I can afford to have every little robot in the game eating up 10,000 lines per second. By the time I get to 20 robots, the rest of my game grinds to a halt as my 200,000 lines are entirely consumed by the AI. Thus, a large demand on my design is to break the AI down into as little processing as possible while still allowing the robots to be compelling opponents.
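The cheapest trick I know of is to let each robot think far less often than every tick. A hypothetical sketch – the numbers are made up, not measurements from my game:

```
// Throttled AI loop: each robot only runs its expensive decision
// proc every couple of seconds instead of every tick.
/mob/enemy
	var/think_delay = 20                     // deciseconds between thoughts (~2 seconds)

/mob/enemy/New()
	..()
	spawn()
		while(src)
			take_turn()                      // the expensive decision-making
			sleep(think_delay + rand(0, 5))  // stagger robots so they don't all think at once
```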

Senster's purpose wasn't to beat its observers, but rather to playfully entertain them. It accomplished this in real space with late-1960s technology.

Fortunately, these two aspects work hand in hand. The weaknesses I deliberately give my AI (so the player can feel good about finding and exploiting them) can also be the same ones that would take a lot of code to get around. One can even come up with a good in-character excuse for opponents to be thick: wild animals, clunky robots, or other such opponents aren’t expected to be highly adaptive.
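As a hypothetical illustration of a weakness that is also a savings: a robot that simply walks straight at the player needs almost no processing, and it gets stuck behind walls, which is exactly the kind of exploit the player can feel clever for discovering.

```
// The "dumb" mover takes one step straight toward its target, so
// walls stop it cold: an exploitable weakness that also skips the
// cost of full pathfinding.
/mob/enemy/proc/chase(mob/target)
	step_towards(src, target)

// The "smart" version would route around obstacles (e.g. BYOND's
// built-in walk_to()), which costs more processing and removes the
// player's chance to outwit it.
```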

An alternate (perhaps lazier) solution that seems to work well is the numerical approach: smart or not, two opponents are harder to take on than one (as is a single opponent with some other physical advantage). Plus, the player can still feel fairly good about overcoming dumb opponents when a heavy handicap was involved. The caveat I would put on this approach is that obviously dumb opponents eventually become unsatisfying no matter how many there are or how strong you make them.

Conclusion

In the end, perhaps the computer is really no match for a human player in terms of opponent quality. It’s just a bit more interesting when your opponent is actually learning, and the game becomes one of who can learn faster. However, when there’s nobody else around, it’s better to play against the developer’s instructions than against nobody at all.
