Wednesday, July 20, 2011

Playtesting Is Your Friend.

This is the Prezi for the lightning talk I was going to give at What's Up Pitches?! back in May. Due to scheduling issues at the event, the presentation didn't happen, so I've finally found time to blog about it instead.

[edit:] Embedding the Prezi isn't so hot with this Blogger template. The Prezi can be found here.

It starts with the basic premise of my business and current project, and then ambles into the hows and whys of playtesting a game design.

There are plenty of playtesting methods, each with their own pros and cons. Mike Ambinder goes through Valve's playtest methodologies in his presentation, available from their publications library*. Although the methods discussed are largely designed for digital games, the approach and the information gathered apply just as easily to table games.

Two methods that I have used extensively - pretty much exclusively - for my current project are Design Experiments and Player Feedback.

Design Experiment playtests are used to isolate core mechanics and put them through rigorous case testing. The method involves identifying a problematic or unbalanced mechanic and creating an environment in which that mechanic can be accessed and played repeatedly, without interference from the rest of the game.

This is incredibly useful for troubleshooting, tweaking and testing new ideas. The key is making sure you can strip out the rest of the game while still emulating its influence, so that the information you gather stays relevant to a full-length game session.

This is slightly easier for digital games; god-modes, location warps and event spreadsheets can ease the pain of having to work/play through an entire level just to test a boss mechanic.
A table game is a different beast. It is important to make sure the results you've acquired are just as applicable to a 2-hour game session as to a 2-minute playtest, and that requires a fundamental understanding of your entire game system and the motivations driving your players.
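
To make the isolate-and-repeat idea concrete, here is a minimal sketch of what a Design Experiment harness might look like for a digital prototype. Everything in it - the boss numbers, the self-heal, the function names - is hypothetical, invented for illustration rather than taken from any real game:

```python
import random

def boss_fight(player_damage=15, boss_hp=100, heal_amount=10, heal_chance=0.2):
    """One isolated run of a made-up boss mechanic: count the turns
    needed to win, with the rest of the level stubbed out entirely."""
    turns, hp = 0, boss_hp
    while hp > 0:
        turns += 1
        hp -= player_damage
        if hp > 0 and random.random() < heal_chance:
            hp += heal_amount  # the self-heal is the mechanic under test
    return turns

# Run the isolated mechanic a thousand times in seconds - no need to
# play through everything that precedes it.
trials = [boss_fight() for _ in range(1000)]
print(f"avg turns: {sum(trials) / len(trials):.1f}, worst case: {max(trials)}")
```

A table game version of the same experiment is exactly what the Allegiance story below describes: strip the game down to the one mechanic and run it over and over.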

An example is the Allegiance system for my current project.
I had toyed with the idea of drawing an Allegiance card at the start of the game to determine which team you ultimately belonged to. Originally this worked out well, but as other mechanics developed I ended up ditching the system.
Dropping it worked in the short term, as it gave players open opportunities: they could react on the fly to what was happening around them.
After a few more rounds, however, players started to complain that they had less direction and motivation. I originally took this as the natural counterweight to the freedom of picking your own allies, but after more discussion it came out that this openness devalued the worth of the end-game.

Essentially, no one cared who won the game.

I wasn't convinced that the Allegiance system could be inserted back into the game without breaking other systems, but I really wanted to test it. So after a couple of full rounds, we sat down with some testers, removed the rest of the game and played with just the Allegiance cards. Each player still took their turn as normal, but we narrated our actions to imitate the gameplay.
This allowed a rapid turnover of playthroughs, and we found that even without a full 2-hour investment in the end result, there was still a level of emotional feedback when the social mechanics of the Allegiance cards played out.

End result: playtest successful, and Allegiance was reinstated smoothly into the game.

Player Feedback is the second method I covered heavily in my presentation. The premise is simple - ask your testers what they thought. The reality isn't nearly so easy.

Your playtesters will have just experienced your game - for the first time, third time or hundredth - with their own set of intentions, reactions and narrative. What one player might have considered a courageous and bold move, another would snarl was deceitful and petty.
So who is right?

It is important to understand three things when you ask your playtester a question:
1) Your game's system(s),
2) The intent of your design, and
3) The intent of your players.

The last is the one that gets forgotten about. You absolutely must know what kind of player your tester is.
Are they competitive or co-operative? A sore winner or a sore loser? Do they grief? Are they tactical, strategic or reactive?
If you don't have a rudimentary understanding of your tester's game psychology going in, the only information you'll be able to garner from their feedback is a basic emotional recount of their experience. That's fine for the player, but it isn't information you can use to test effectively against the logic of your rules system.
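
Even a scrap of structure helps here. Below is a hedged sketch of one way you might log feedback against a tester profile so that comments can be filtered by the kind of player who made them; the fields and data are illustrative, not a canonical taxonomy:

```python
from dataclasses import dataclass

@dataclass
class Tester:
    name: str
    competitive: bool   # competitive vs co-operative
    play_style: str     # "tactical", "strategic" or "reactive"
    griefs: bool

@dataclass
class Feedback:
    tester: Tester
    session: int
    comment: str

log = [
    Feedback(Tester("Alice", True, "tactical", False), 3, "The betrayal felt earned"),
    Feedback(Tester("Bob", False, "reactive", True), 3, "No reason to care who wins"),
]

# The same complaint means different things from different player types,
# so filter feedback by profile before weighing it against the rules.
reactive_comments = [f.comment for f in log if f.tester.play_style == "reactive"]
print(reactive_comments)
```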

The example I used in my presentation is when I changed from asking "Did you feel powerful?" to "When did you feel powerful?" during a testing session.
After a few sessions asking the first question, I had pages of the same feedback: anyone who won the session felt powerful; everyone else didn't.
As soon as I started asking the second question instead, I instantly received better feedback. The players who won would often give a similar end-game-related answer, but everyone else was now giving me a wide range of experiences.
With a larger and more informative dataset, I now know what it is about my game that makes someone feel like a powerful mafia crime-lord, rather than just someone who lost at a card game.
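
As a rough illustration of why the second question produces a better dataset, here is a hypothetical tally of answers bucketed by game phase; the phases and numbers are invented:

```python
from collections import Counter

# (won_the_session, phase in which the "powerful" moment happened)
answers = [
    (True, "end-game"), (True, "end-game"), (True, "mid-game"),
    (False, "set-up"), (False, "mid-game"), (False, "mid-game"),
    (False, "betrayal"), (False, "betrayal"),
]

winners = Counter(phase for won, phase in answers if won)
losers = Counter(phase for won, phase in answers if not won)

print("winners:", winners)  # clusters at the end-game, as expected
print("losers: ", losers)   # the spread here is where the design insight lives
```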

That's the gist of the presentation. I probably could have talked for a good half hour on the subject, so fitting it all into a lightning talk was pretty tough.
It came out fairly concise, although I was pushing the three-minute limit. Perhaps it was a stroke of luck that I didn't end up giving it!

-Anthony


* This is an amazing repository of information for game designers and AI programmers. Eat it up, kids!