Sunday, July 13, 2014

Downsides of Percentile Systems

While writing a review of Unknown Armies, I found myself drafting a long digression on why I dislike percentile systems. Rather than clutter up that article, I've decided to put my personal thoughts on the matter into this blog. For those who can't wait until next week, I feel Unknown Armies avoids some of the pitfalls I am about to describe.

There are two reasons I dislike percentile systems as they always appear to be implemented: flatness and capped skill levels.

By flatness, I mean that your odds of rolling any given number are equal: you are just as likely to roll a 43 as a 01. My issue with this is that there is no bell curve behavior, so you are roughly as likely to succeed amazingly well, fail spectacularly, or land somewhere in the middle. This makes it harder to rely on any given roll.

This is what can make games like D&D so wonky. A spate of poor luck or unbalanced dice can easily turn an easy encounter into a total party kill (TPK). On the other side of the game master's screen, a tough encounter can become a cakewalk thanks to good rolls, ruining the pacing for the session. While this can happen with any randomized system, it is more likely to happen with a flat distribution.

Let's take the difference between rolling a twenty-sided die (d20) and three six-sided dice (3d6). The odds of rolling the minimum on a d20 are 5%. On 3d6, the equivalent result (rolling a 3) happens less than 0.5% of the time; that same 5% actually encompasses every result of 5 and lower. Extreme results are much less likely on a bell curve.
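For the curious, here is a quick Python sketch (my own check, not from any game book) that verifies these numbers by enumerating all 216 equally likely outcomes of 3d6:

from itertools import product
from collections import Counter

# Tally how many of the 216 equally likely 3d6 outcomes give each total.
pmf = Counter(sum(dice) for dice in product(range(1, 7), repeat=3))

print(f"d20 minimum (rolling a 1): {1 / 20:.2%}")        # 5.00%
print(f"3d6 minimum (rolling a 3): {pmf[3] / 216:.2%}")  # 0.46%
print(f"3d6 total of 5 or less: {sum(pmf[t] for t in (3, 4, 5)) / 216:.2%}")  # 4.63%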

This has a subtle effect on how dependable skills should be. With a flat distribution, a skill where you need to roll under an 11 is equally likely to succeed (50%) on a d20 or 3d6. But raise that target to 12 and the two diverge sharply: 55% for a d20 versus 62.5% for 3d6. This makes high skill much more worthwhile in a bell curve system, as you can depend on it far more than in a flat one.
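The same enumeration makes the divergence easy to see (the target numbers below are from the example; everything else is my own scaffolding):

from itertools import product
from collections import Counter

pmf = Counter(sum(dice) for dice in product(range(1, 7), repeat=3))

def d20_under(target):
    # Chance of rolling strictly under `target` on a d20.
    return (target - 1) / 20

def dice_under(target):
    # Chance of rolling strictly under `target` on 3d6.
    return sum(count for total, count in pmf.items() if total < target) / 216

for target in (11, 12):
    print(f"under {target}: d20 {d20_under(target):.1%}, 3d6 {dice_under(target):.1%}")
# under 11: d20 50.0%, 3d6 50.0%
# under 12: d20 55.0%, 3d6 62.5%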

A game designer can work around this by giving characters access to higher skills, but at the top end it can look like there isn't much difference between rolling under a 19 and rolling under a 20. This is deceptive, and it tends to make designers cap maximum skill at some level below the highest value possible on their flat distribution. In particular, starting skill levels are capped much lower than the maximum roll. In percentile systems this means a character might have only a 60% or 70% skill in their area of expertise, with a ceiling of 99% in terms of ultimate advancement.

This brings up the second problem: percentile systems universally cap results within the range of 0 to 100%. This ensures that no matter how good you are, you always have a chance of failure (a notion I find somewhat silly given the generally cinematic reality most roleplaying games hold to, regardless of the intentions of their designers).

What makes this cap detrimental is when penalties are added on top of it. I have yet to see a system whose designers do not at some point at least consider adding penalties to a roll based on the difficulty of the action. Almost always the game designer feels the need to make things harder on the characters, adding penalties for darkness, weapon range, and other obstacles. While this seems quite reasonable, it interacts with the flat distribution in odd and unintuitive ways.

Crucially, there are issues when you are dealing with experts versus novices. Ideally, when you add penalties to a roll you are hoping to illustrate the difference between an expert and a novice: the novice should find these tasks hard and generally fail, while the expert should continue to succeed. But with a flat distribution and capped skill levels, there is no way for the expert to keep reliable odds of success.

For example, take shooting a target, and consider two characters: an expert special forces sniper and a backwoods hunter. Both the hunter and the sniper can probably hit a target a few hundred feet away on a sunny day, given plenty of time. That can be accounted for by rarely used bonuses based on the lack of stress and perfect firing conditions.

We might imagine another scenario to represent the "average" use of the skill, one with no particular penalties or bonuses. Let's say it is shooting a target in a limited span of time at the same distance (like a spotted rabbit that will escape if you miss). Both riflemen should still have a good chance of hitting the target. In most games I've seen, a world-class sniper will have a chance around 90% or higher and the hunter somewhere in the 50% range. So if both shoot, the hunter misses half the time and the sniper hits better than 9 times out of 10.

So far so good. Now let's increase the difficulty. Instead of a stable surface, let's put them in a helicopter. Their target is now a terrorist on a nearby building with some cover (say, a hostage). It is also night, and even with night vision goggles visibility is somewhat limited. We might imagine each of these three factors imposing a -10% (or greater) penalty.

Our hunter now has a 20% chance (or less) to hit and will most likely miss. That is expected. The sniper, however, for whom this is his job, has less than a 70% chance to hit. Even if his normal skill is 99% (the absolute maximum it can be), with the penalties it is now 69% or lower. He will miss almost a third of the time. I'm not sure we want snipers taking shots in those situations, but such scenarios occur both in real life and (more importantly for a game) in the movies. Players expect their characters to accomplish these feats.
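The arithmetic is simple, but spelling it out makes the problem obvious. The sketch below assumes a generic roll-under percentile system of my own invention (skill capped at 99%), not any specific game:

def percentile_chance(skill, penalties=()):
    # Roll-under d100: succeed on a roll at or below effective skill.
    # Skill is capped at 99 so there is always some chance of failure.
    effective = min(skill, 99) - sum(penalties)
    return max(effective, 0) / 100

helicopter_shot = (10, 10, 10)  # unstable platform, cover, darkness
print(f"hunter: {percentile_chance(50, helicopter_shot):.0%}")  # 20%
print(f"sniper: {percentile_chance(99, helicopter_shot):.0%}")  # 69%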

The penalties I mentioned are actually quite a bit lower than those that usually come up in percentile systems; you might see penalties totaling 40% or more. This makes it impossible to play a competent combatant. Or scientist. Or doctor. Or any highly skilled professional. Once you add penalties steep enough to remove amateurs from the playing field, the capped skill levels combined with a flat distribution also make the best professionals incompetent. This ruins many players' fun and strains my suspension of disbelief.

With a bell curve, each penalty can be smaller and still have a great effect. Our hunter would have a skill of 10 on a 3d6 roll-under to maintain his 50% success rate; a -3 penalty drops his effective skill to 7, a 16.2% chance of hitting. Our sniper, on the other hand, would have a skill of 17 to reach 99%. That same penalty drops him to an effective 14, which still succeeds better than 90% of the time.
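Again, a quick check of the numbers (the skill values are the ones from the example above):

from itertools import product
from collections import Counter

pmf = Counter(sum(dice) for dice in product(range(1, 7), repeat=3))

def chance_3d6(skill, penalty=0):
    # Roll-under 3d6: succeed on a total at or below effective skill.
    effective = skill - penalty
    return sum(count for total, count in pmf.items() if total <= effective) / 216

print(f"hunter (skill 10): {chance_3d6(10):.1%} -> {chance_3d6(10, 3):.1%}")  # 50.0% -> 16.2%
print(f"sniper (skill 17): {chance_3d6(17):.1%} -> {chance_3d6(17, 3):.1%}")  # 99.5% -> 90.7%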

If you want to portray characters who are skilled professionals, you need a distribution of results and/or skill levels that all but ensure basic success. D&D does the latter, allowing high-level characters to hit easy difficulties almost without rolling (though the rules might specify a roll of 1 as an automatic failure, any other roll will succeed). GURPS does both, also keeping a sliver of automatic failure (rolls of 17 or 18 on 3d6) but allowing higher skill levels to compensate for additional penalties. Other resolution systems complicate things further by obscuring the probabilities somewhat (like dice pools) but generally round out the odds as well.
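To illustrate the GURPS-style approach (heavily simplified here; the actual rules are more nuanced), skill can rise well above the dice range, so penalties eat into the surplus while the automatic failure range preserves a small chance of missing:

import random

def gurps_like_check(skill, penalty=0):
    # Simplified roll-under in the spirit of GURPS: effective skill may exceed
    # the dice range, but a roll of 17 or 18 always fails.
    roll = sum(random.randint(1, 6) for _ in range(3))
    if roll >= 17:
        return False
    return roll <= skill - penalty

# With skill 20, a -3 penalty still leaves about 98% success (only 17-18 miss).
trials = 100_000
hits = sum(gurps_like_check(20, penalty=3) for _ in range(trials))
print(f"{hits / trials:.1%}")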
Reason #3: they never stop rolling.
Except for percentile systems, which remain deceptively attractive to some game designers, and which usually mean I immediately ignore the game.
