
One of the most frequent questions I get from readers is “How do I win at Russian Roulette?”
I’m sure you can relate. You’re standing in the dairy aisle at the grocery store and a stranger walks up to you and says “Hey! I have a revolver in my car with one bullet in the chamber, wanna play Russian Roulette?”
And you’re like “Oh man, that sounds awesome. I’ll bring my friends and we can make it a game night.”
It makes sense. After all, 5 out of every 6 Russian roulette players recommend it as a fun and profitable game.
No? You haven’t had a similar experience? That’s probably good. But now that we’re on the topic, how do you consistently win at Russian Roulette? More to the point, what other scenarios like Russian Roulette exist where winning once is feasible but winning consistently is highly improbable?
The rules for Russian roulette are simple: you sit in a circle with five other people, load one bullet into a revolver with six chambers, and each person takes a turn pointing the gun at their head and pulling the trigger.
You might roll the dice and take $1,000,000 to play Russian Roulette one time (though I wouldn’t advise it). But there’s no amount of money that would make you play it 6 or more times.
There are only two ways to consistently win Russian Roulette:
- DO NOT PLAY.
- Change the Rules.¹
Though you will (hopefully) never play Russian roulette, there are a surprising number of scenarios in life with rules very similar to Russian Roulette that otherwise sane and rational-seeming people (including Nobel Prize winners) choose to play. In fact, you may be playing one of those games right now without realizing it.
How do you recognize games like Russian Roulette where the only way to win is to not play? And how can you change the rules to make them work more favorably for you?
The key is a big little idea called ergodicity.
What I Learned Losing 56 Million Dollars
Consider the following thought experiment offered by Black Swan author Nassim Taleb.
In scenario one, which we will call the ensemble scenario, one hundred different people go to Caesar’s Palace Casino to gamble. Each brings $1,000 and has a few rounds of gin and tonic on the house (I’m more of a piña colada man myself, but to each their own). Some will lose, some will win, and we can infer at the end of the day what the “edge” is.
Let’s say in this example that our gamblers are all very smart (or cheating) and are using a particular strategy which, on average, makes a 50% return each day, $500 in this case. However, this strategy also has the risk that, on average, one gambler out of the 100 loses all their money and goes bust. In this case, let’s say gambler number 28 blows up. Will gambler number 29 be affected? Not in this example. The outcomes of each individual gambler are separate and don’t depend on how the other gamblers fare.
You can calculate that, on average, each gambler makes about $500 per day and about 1% of the gamblers will go bust. Using a standard cost-benefit analysis, you have a 99% chance of gains and an expected average return of 50%. Seems like a pretty sweet deal, right?
Now compare this to scenario two, the time scenario. In this scenario, one person, your card-counting cousin Theodorus, goes to Caesar’s Palace a hundred days in a row, starting with $1,000 on day one and employing the same strategy. He makes 50% on day 1 and so goes back on day 2 with $1,500. He makes 50% again, goes back on day 3, and makes 50% once more, now sitting at $3,375. On day 18, he crosses $1 million. On day 27, good ole cousin Theodorus has $56 million and is walking out of Caesar’s channeling his inner Lil’ Wayne.

But, when day 28 strikes, cousin Theodorus goes bust. Will there be a day 29? Nope, he’s broke and there is nothing left to gamble with.
The central insight?
The probabilities of success from the collection of people do not apply to one person. You can safely calculate that, using this strategy, Theodorus has a 100% probability of eventually going bust. Though a standard cost-benefit analysis would suggest this is a good strategy, it is actually just like playing Russian roulette.
The first scenario is an example of ensemble probability and the second one is an example of time probability. The first is concerned with a collection of people and the other with a single person through time.
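To make the two scenarios concrete, here is a minimal simulation sketch, assuming one stylized reading of the setup: on any given day the strategy either returns 50% or, with a roughly 1-in-100 chance, wipes the gambler out completely. The constants, function names, and random seed below are illustrative choices of mine, not anything specified by Taleb.

```python
import random

random.seed(7)  # any seed; the qualitative picture is the same

START = 1_000
DAILY_GAIN = 0.5   # the strategy's 50% return on a good day
BUST_PROB = 0.01   # roughly 1 gambler in 100 blows up on any given day

def one_day(bankroll):
    """Play one day: either a 50% gain or a total wipeout."""
    if bankroll == 0:
        return 0  # once you're bust, you're out for good
    return 0 if random.random() < BUST_PROB else bankroll * (1 + DAILY_GAIN)

# Ensemble scenario: 100 different gamblers, one day each.
ensemble = [one_day(START) for _ in range(100)]
print("ensemble average after one day:", sum(ensemble) / len(ensemble))
# Roughly $1,485 on average: ~99 gamblers at $1,500 and ~1 bust at $0.

# Time scenario: cousin Theodorus alone, 100 days in a row.
bankroll = START
for day in range(1, 101):
    bankroll = one_day(bankroll)
    if bankroll == 0:
        print("Theodorus busted on day", day)
        break
print("Theodorus after 100 days:", bankroll)
# His chance of surviving all 100 days is 0.99**100, about 37%; stretch the
# horizon far enough and the survival probability heads to zero.
```

The group average is a fine description of the ensemble and a poor guide for any one person who keeps playing.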

What is Ergodicity?
This thought experiment illustrates ergodicity. For any actor taking part in a system, the situation can be either ergodic or non-ergodic.
In an ergodic scenario, the average outcome of the group is the same as the average outcome of the individual over time. An example of an ergodic system is the outcome of a coin toss (heads/tails): whether 100 people flip a coin once or one person flips a coin 100 times, you get the same average outcome. (Though the consequences of those outcomes, e.g. winning or losing money, are typically not ergodic!)
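A toy sketch of that claim, just to make it concrete (the setup and names here are mine, not from the post):

```python
import random

random.seed(1)

def flip():
    return random.choice([0, 1])  # 1 = heads, 0 = tails

# Ensemble: 100 people each flip once.
ensemble_avg = sum(flip() for _ in range(100)) / 100

# Time: one person flips 100 times.
time_avg = sum(flip() for _ in range(100)) / 100

print(ensemble_avg, time_avg)  # both hover near 0.5 and converge to it as the counts grow
```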
In a non-ergodic system, the individual, over time, does not get the average outcome of the group. This is what we saw in our gambling thought experiment.
A way to identify an ergodic situation is to ask: do I get the same result if I
- look at one individual’s trajectory across time?
- look at a bunch of individuals’ trajectories at a single point in time?
If we look at one individual’s trajectory across a finite stretch of time, it is possible to come away with the mistaken belief that the system is ergodic. For example, if we compared one individual playing two rounds of Russian roulette to two different individuals each playing one round, it’s possible that the bullet would not go off in either case, leading us to believe the system is ergodic even though it isn’t. This touches on epistemology and the problem of induction, both far outside the scope of this post, but the practical takeaway is that the longer the timeline runs, the more reliable the comparison becomes.
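To put rough numbers on that caveat, here is a quick back-of-the-envelope check using the 1%-per-day bust chance assumed in the casino sketch above:

```python
# Chance that a single player sees no bust at all over a window of T days.
for T in (2, 10, 100, 1_000):
    print(T, round(0.99 ** T, 5))
# 2 -> 0.9801, 10 -> ~0.904, 100 -> ~0.366, 1000 -> ~0.00004
# Over a short window you will usually observe no blow-up at all, so a
# non-ergodic system can easily pass for an ergodic one.
```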
¹ Malcolm X knew how to consistently win at Russian Roulette. During his burglary career, he once played Russian roulette, pulling the trigger three times in a row while pointing the gun at his own head, to convince his partners in crime that he was not afraid to die. He later revealed to a reporter that he had palmed the round. Smart man.