I’ve been writing some very elementary probability simulations in python and wanted to simulate millions of coin flips. I started out with this somewhat-obvious code:
ht = [0, 0]
for i in range(n):
    ht[random.choice((0, 1))] += 1
which counts heads in ht[0] and tails in ht[1] and performs n flips.
Can this code be sped up? Inserting time.time() calls before and after the “for” loop, I found that this code performs 10 million flips in 11.8 seconds, or about 1.18 microseconds per flip.
[ aside: all measurements were made on my MacBook Air. I tried to avoid doing anything else while running the timing tests, and in all cases took the fastest result I could get. All results for a given method were similar and were run multiple times at multiple different points in time, so I believe there were no confounding factors with background activities. Your mileage may vary. ]
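For concreteness, here's a minimal sketch of the kind of timing harness described above. The exact harness isn't shown in the post, so this is an assumption about its shape; it also uses a smaller n than the 10 million flips quoted above, just so it runs quickly:

```python
import random
import time

# Smaller than the 10 million flips used for the post's numbers,
# purely so this example runs quickly.
n = 1_000_000

ht = [0, 0]
start = time.time()
for i in range(n):
    ht[random.choice((0, 1))] += 1
elapsed = time.time() - start

print(f"{n} flips in {elapsed:.2f}s = {elapsed / n * 1e6:.3f} microseconds per flip")
```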
Borrowing a trick from the old geezer days, I thought “surely if we unroll the loop this will go faster”. So I changed it to “for i in range(n//100)” and then put 100 of the “ht[random.choice(…” statements in the body of the loop.
Result? 1.16 microseconds per flip. Ok, that’s faster, but by less than 1%. Kudos to python for having efficient looping, plus, of course, the random() call dominates the cost anyway.
Ok maybe random.choice is slower than explicitly asking for a random number:
ht = [0, 0]
for i in range(n):
    ht[random.randrange(2)] += 1
Nope, that came in significantly slower! 1.33 microseconds per flip.
Maybe I could speed it up by eliminating array access and just adding up +1 and -1 values in a loop and reconstructing the heads vs. tails numbers after:
delta = 0
for i in range(n):
    delta += random.choice((-1, 1))
That ran as fast as 1.14 microseconds per flip, but that's still an insignificant improvement over the original, arguably more “obvious”, code.
Of course, by this point readers are screaming at me “duh, all the time is in the random function and there’s nothing you can do about it” and they are right. I should have tested that first. For example, this code:
ht = [0, 0]
for i in range(n):
    ht[0] += 1

runs at 0.11 microseconds per “flip” (albeit there is no flip; it just always adds to ht[0] in this example). Or if we go all the way to the delta method, it will be as little as 0.06 microseconds per “flip” for this:
delta = 0
for i in range(n):
    delta += 1
obviously there is no “flip” here but it’s fast.
Of course, we can use a single call to random to generate more than one random bit at a time. In C we might generate a 32-bit random number and then peel off the bits one by one. I’m guessing that doing this in python won’t be as efficient, but there’s still a way to easily amortize one call to the random method over multiple flips. For example, we can enumerate all sixteen possibilities for four flips. In binary, they would be: 0000, 0001, 0010, 0011, … 1100, 1101, 1110, 1111. We can note that the first (“0000”) is 4 heads (arbitrarily calling 0 heads; it doesn’t matter which) and 0 tails. The next one (“0001”) is 3 heads and 1 tail, and so on. The complete enumeration of the outcomes looks like this:
fourflips = ((4, 0),  # 0000
             (3, 1),  # 0001
             (3, 1),  # 0010
             (2, 2),  # 0011
             (3, 1),  # 0100
             (2, 2),  # 0101
             (2, 2),  # 0110
             (1, 3),  # 0111
             (3, 1),  # 1000
             (2, 2),  # 1001
             (2, 2),  # 1010
             (1, 3),  # 1011
             (2, 2),  # 1100
             (1, 3),  # 1101
             (1, 3),  # 1110
             (0, 4))  # 1111
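Incidentally, the table doesn't have to be typed out by hand: the number of tails in each outcome is just the number of 1 bits in its index (with 0 counted as heads). A sketch of generating the same table programmatically:

```python
# Each index 0..15 encodes four flips in binary; a 1 bit is a tail,
# so the tail count is the number of 1 bits and heads is the rest.
fourflips = tuple(
    (4 - bin(i).count("1"), bin(i).count("1"))
    for i in range(16)
)

print(fourflips[0], fourflips[15])  # → (4, 0) (0, 4)
```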
and now our loop could look like this:
ht = [0, 0]
for i in range(n//4):
    flips = random.choice(fourflips)
    ht[0] += flips[0]
    ht[1] += flips[1]
Running this code we get 0.330 microseconds per flip. About three and a half times as fast as the original code – which makes sense, because we are calling the random function one fourth as many times and that call dominates the timing.
Of course, in doing this optimization I’ve lost the data on individual flips and now I only know the relative number of heads and tails for the four flips that are grouped together. In other words, I can no longer distinguish between a group of four flips that were “HTHT” vs “HHTT” or many other combinations that will all appear as just (2, 2) when returned by random.choice. In my application this is unimportant. If it were important, I could of course have stored the individual flips in the fourflips variable, constructing it this way:
fourflips = ((0, 0, 0, 0),
             (0, 0, 0, 1),
             (0, 0, 1, 0),
             ... etc
and processing it accordingly. I stuck with the aggregated counts because that’s all I cared about in my simulations.
We can take this one level further and combine the running delta method (+1 for a heads, -1 for a tails, and reconstruct the final number of heads and tails from the total number of flips and the delta), plus we can compute the delta ahead of time for each element of the fourflips list (e.g., (0, 4) is a delta of -4, (2, 2) is a delta of 0, etc). Combining those concepts we get:
fourdeltas = (4, 2, 2, 0, 2, 0, 0, -2, 2, 0, 0, -2, 0, -2, -2, -4)
delta = 0
for i in range(n//4):
    delta += random.choice(fourdeltas)
which gets us all the way down to 0.285 microseconds per flip.
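The reconstruction step mentioned earlier is simple arithmetic: since delta = heads - tails and heads + tails = n, we have heads = (n + delta) / 2. A quick sketch (the function name heads_tails is mine, not from the original code):

```python
# delta = heads - tails and heads + tails = n,
# so heads = (n + delta) / 2 and tails is the remainder.
def heads_tails(n, delta):
    heads = (n + delta) // 2
    return heads, n - heads

print(heads_tails(100, 0))    # → (50, 50)
print(heads_tails(100, -10))  # → (45, 55)
```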
How far can this optimization go? As far as you have the patience for, and as large as you’re willing to let the list of deltas grow. I tested the same idea with five flips precomputed as deltas (i.e., five bits at a time) and got down to 0.232 microseconds per flip. There’s a diminishing return for each additional bit: at five bits we’re calling the random function 20% as often as the original code, but at four bits we were already down to 25%, so the improvement isn’t as dramatic. The deltas list also doubles in size with each additional bit, though since it can be precomputed by a program it’s not clear there’s a pragmatic limit. I don’t know if at some point the length of the deltas list makes random.choice execute slower; I don’t think it would, but it depends on implementation details.
For my purposes I was happy at five bits per call to random.choice and 0.232 microseconds per flip, which is more or less five times faster than the original.