Monte Carlo Simulations



    In a game of Monopoly, we want to find out how likely it is that a player's dice throw adds up to 7.

    The theoretical probability (what you would expect to happen) for a player to get a 7 out of a 2-dice roll is 6/36 (0.1666666667).

    However, the experimental probability varies greatly due to chance. For example, if you were to actually throw 2 dice 36 times, while you would (mathematically) expect to roll a 7 on 6 of those 36 throws, that will not usually happen.

    In the diagram below, you can test this. The number of Steps is set to 36. Play the diagram. How many times did your player throw a 7? Look at the Experimental probability calculated by the black Register. How close is it to the mathematical one?
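    If you don't have the diagram handy, the same 36-throw experiment can be sketched in a few lines of Python. This is a minimal stand-in for the diagram, not the Machinations model itself:

```python
import random

random.seed(7)  # fixed seed so the run is reproducible; remove it for a fresh experiment

STEPS = 36  # same number of Steps as in the diagram

# Count how many of the 36 two-dice throws sum to 7.
sevens = sum(1 for _ in range(STEPS)
             if random.randint(1, 6) + random.randint(1, 6) == 7)

experimental = sevens / STEPS
print(f"Rolled a 7 {sevens} times out of {STEPS} throws")
print(f"Experimental probability: {experimental:.4f} (theoretical: {6 / 36:.4f})")
```

    Run it a few times (without the fixed seed) and you'll see the count of sevens bounce around the expected 6.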

    If you were super-lucky and ended up with exactly 6 sevens at the end of the playthrough, try again 🙂

    The difference between the mathematically calculated probability and the experimental results arises because the mathematical analysis converts a naturally random (stochastic) problem into a deterministic one: it removes randomness by averaging over the probabilities.

    Monte Carlo methods use the process of repeated random sampling to make numerical estimates of unknown parameters. The basis for Monte Carlo simulations is the Law of Large Numbers:


    “The average of the results obtained from a large number of trials should be close to the expected value and will tend to become closer to the expected value as more trials are performed.”

    On the same example diagram, we've set the number of Steps (read: tries/throws) to 100. Press Play, then look at the value calculated by the Experimental probability Register. Is it closer to 0.1666666667 than the one you got by throwing the dice 36 times?
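    The Law of Large Numbers is easy to see in code: re-running the same experiment with ever more throws pulls the experimental probability toward 6/36. A rough Python sketch (the trial counts beyond 100 are our own additions, not values from the diagram):

```python
import random

random.seed(42)  # reproducible run

def experimental_p7(throws):
    """Fraction of 2-dice throws whose sum is 7."""
    hits = sum(1 for _ in range(throws)
               if random.randint(1, 6) + random.randint(1, 6) == 7)
    return hits / throws

# More throws -> estimate drifts toward the theoretical 0.1667.
for n in (36, 100, 10_000, 1_000_000):
    print(f"{n:>9} throws -> {experimental_p7(n):.4f}")
print(f"theoretical:       {6 / 36:.4f}")
```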

    To further highlight the importance of drawing many samples to increase accuracy, even for a random variable as simple as the sum of a 2-dice throw, look at the histograms below.

    They represent the sums of the dice throws over 36, 100 and 1000 tries. Only the distribution for 1000 samples starts to resemble the familiar bell shape of the Gaussian probability distribution.
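    You can reproduce those histograms with a quick text-based sketch in Python (the 40-character scaling is our own choice, so bars from the three sample sizes stay comparable):

```python
import random
from collections import Counter

random.seed(1)  # reproducible run

def sum_histogram(throws):
    """Counts of each possible 2-dice sum (2..12) over `throws` rolls."""
    return Counter(random.randint(1, 6) + random.randint(1, 6)
                   for _ in range(throws))

for n in (36, 100, 1000):
    counts = sum_histogram(n)
    print(f"\n{n} throws:")
    for s in range(2, 13):
        bar = "#" * round(40 * counts[s] / n)  # normalise bar length by sample size
        print(f"{s:>2} | {bar}")
```

    With 36 throws the shape is ragged; by 1000 throws the peak around 7 becomes unmistakable.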

    Another interesting example for understanding the Monte Carlo method is estimating the value of Pi (π). While this is not a particularly good way of estimating π, it is a great way to understand how Monte Carlo works.

    Let’s picture a game of darts, but played on only one quadrant of the board. For the purpose of this example, we’ll consider that this 1/4 of the board is a square with an area of 10,000 units².


    The radius of the circle inscribed in the entire board is equal to half of the board’s side, so 100 units, which also equals the side of the square quadrant we’re focusing on.

    The area of the circle is πr².

    The area of the square is side² = (2r)² = 4r².

    If we divide the area of the circle by the area of the square, we get π/4.

    Back to our example, each one of your throws is a point with (x, y) coordinates in the upper right quadrant of the board. These randomly generated coordinates are whole numbers between 0 and 100.

    The probability of a point landing in the area of the circle is equal to the number of points that landed in the circle / the total number of points generated.

    Were we to generate enough points/throws to cover the whole area of the square, the ratio between the number of points within the circle and the total number of points within the square would equal the area of the circle divided by the area of the square, so π/4.

    In order to check if a randomly generated point has landed within the circle or on the circle’s curve, its coordinates need to satisfy the following condition: x² + y² ≤ 10000.

    Hence we can use the following formula to estimate π:


    π ≈ 4 × (Number of points that satisfy x² + y² ≤ 10000) / (Total number of points generated)

    Based on that formula, if we generate points that follow the conditions described above, we could obtain the value of π. Let’s do just that, using the diagram below.

    What happens:

    1. The diagram generates x and y coordinates
    2. For each pair of coordinates it calculates x² + y²
    3. If this value is ≤ 10000, it generates Circle Points
    4. If it is > 10000, it generates Square Points
    5. The green Register calculates the π value in real time, as Circle Points and Square Points are generated
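    The steps above can be sketched in Python. One simplification to note: the diagram generates whole-number coordinates, while this sketch draws continuous coordinates in [0, 100], which is the more common formulation of the estimator:

```python
import random

random.seed(123)  # reproducible run

def estimate_pi(points):
    """Monte Carlo estimate of pi from random throws at a 100x100 quadrant."""
    circle_points = 0
    for _ in range(points):
        # Step 1: generate x and y coordinates in the upper-right quadrant.
        x = random.uniform(0, 100)
        y = random.uniform(0, 100)
        # Steps 2-4: x^2 + y^2 <= 10000 means the throw landed in the quarter circle.
        if x * x + y * y <= 10_000:
            circle_points += 1
    # Step 5: pi ~ 4 * (circle points) / (total points).
    return 4 * circle_points / points

for n in (1000, 3000, 100_000):
    print(f"{n:>7} throws -> pi ~ {estimate_pi(n):.5f}")
```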

    The theory states that the more we increase the number of points generated, the closer we get to the value of π. To visualise why the number of throws matters, we played the diagram for 1000 and then for 3000 throws, and plotted the (x, y) pairs in a scatter plot. Here’s how they look.

    The estimated value of π at the end of the 1000-throw simulation was 3.03771661569…
    The estimated value of π at the end of the 3000-throw simulation was 3.135978297728…

    So the more darts you throw, the more area you cover, and the closer you get to a better estimate of reality.

    Games are played by people. People in real life. Moreover, games are complex systems, in which outcomes depend on more than just 2 variables.

    Instead of relying on mathematically calculated averages, in Machinations you can set your own parameters, not only at the system level but also at the simulation level (a combination of Steps and Batch Plays to perform). This takes you much closer to a real-life outcome, so you can better balance your games.

    The bottom line: Machinations maintains randomness as part of the equation, just like real life does.


    All Rights Reserved © Machinations S.àr.l
