Bridge Odds

04/16/2017

BRIDGE HANDS. Also see http://www.rpbridge.net/7z77.htm and http://www.rpbridge.net/7z68.htm

 

158,753,389,900 to 1 are the odds against being dealt a hand consisting of a single complete suit in bridge, i.e. about 158 billion to 1.
2,235,197,406,895,366,368,301,560,000 to 1 are the odds against all four players in bridge each being dealt a single complete suit. This is over 2 octillion to 1. An octillion is 10^27: (8 + 1) x 3 = 27 zeros.
How many bridge hands are possible?

The number of bridge hands that can be dealt to one player is 635,013,559,600 or 635 billion

52! divided by (13! x 39!)

The number of bridge hands that can be dealt to a second player, after one hand is dealt, is 8,122,425,444 or about 8 billion.

39! divided by (13! x 26!)

The number of bridge hands that can be dealt to a third player, after hands are dealt to two players, is 10,400,600 or about 10 million.

26! divided by (13! x 13!)

53,644,737,765,488,792,839,237,440,000 is the number of bridge deals that can be dealt to four players, or about 53 octillion. It is the product of the above three numbers. (Once three hands are dealt, the fourth is determined.)

52! divided by (13! x 13! x 13! x 13!)
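
These counts can be verified directly. Below is a minimal sketch in Python (3.8+ for math.comb); the variable names are mine, chosen for illustration.

```python
from math import comb

hands_one_player = comb(52, 13)   # 52! / (13! x 39!) = 635,013,559,600
hands_second     = comb(39, 13)   # 39! / (13! x 26!) = 8,122,425,444
hands_third      = comb(26, 13)   # 26! / (13! x 13!) = 10,400,600
total_deals      = hands_one_player * hands_second * hands_third

print(f"{hands_one_player:,}")    # 635,013,559,600
print(f"{total_deals:,}")         # 53,644,737,765,488,792,839,237,440,000

# Odds against one complete suit: 4 favorable hands out of comb(52, 13).
print(f"{hands_one_player // 4:,} to 1")   # 158,753,389,900 to 1
```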

A million is 10^6, a billion is 10^9, a trillion is 10^12, a quadrillion is 10^15, a quintillion is 10^18, a sextillion is 10^21, a septillion is 10^24, and an octillion is 10^27.

 

Hand Distributions:

1. There are 39 patterns associated with the 635 billion hands dealt to one player. See Durango Bill's page: http://durangobill.com/BrSuitStats.html

Hand Patterns in One Hand
Pattern (any suit order)   Probability (%)   Cumulative (%)
4432                           21.55             21.55
5332                           15.52             37.07
5431                           12.93             50.00
5422                           10.60             60.60
4333                           10.54             71.14
6322                            5.64             76.78
6421                            4.70             81.48
6331                            3.45             84.93
5521                            3.17             88.10
4441                            2.99             91.09
7321                            1.88             92.97
6430                            1.33             94.30
5440                            1.24             95.54

There are 39 hand patterns in all. The 26 patterns not shown (for example 9310) each occur less than 1% of the time. The top 5 patterns occur about 71% of the time, and the top 13 occur over 95% of the time.
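
As a check on the table, here is a minimal Python sketch of how any single-hand pattern probability can be computed: count the ways to assign the four lengths to the four suits, multiply by the ways to choose that many cards from each 13-card suit, and divide by COMBIN(52, 13). The function name is mine.

```python
from math import comb, factorial
from collections import Counter

def pattern_probability(pattern):
    """Probability that a 13-card hand has the given suit-length pattern."""
    assert len(pattern) == 4 and sum(pattern) == 13
    # Distinct ways to assign the four lengths to the four suits
    # (repeated lengths collapse the count).
    suit_orders = factorial(4)
    for repeats in Counter(pattern).values():
        suit_orders //= factorial(repeats)
    # Ways to choose that many cards from each 13-card suit.
    card_choices = 1
    for n in pattern:
        card_choices *= comb(13, n)
    return suit_orders * card_choices / comb(52, 13)

for p in [(4, 4, 3, 2), (5, 3, 3, 2), (4, 3, 3, 3), (7, 3, 2, 1)]:
    print(p, f"{100 * pattern_probability(p):.2f}%")
# (4, 4, 3, 2) 21.55%   (5, 3, 3, 2) 15.52%   (4, 3, 3, 3) 10.54%   (7, 3, 2, 1) 1.88%
```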

2. There are 104 patterns associated with the 495,918,532,948,104 (i.e. COMBIN(52, 26)) possible 26-card holdings dealt to two players. See Durango Bill's page: http://durangobill.com/BrSuitStats.html

Hand Patterns in Two Hands
Pattern (any suit order)   Probability (%)   Cumulative (%)
8765                           23.6              23.6
7766                           10.5              34.1
9764                            7.3              41.4
9665                            6.6              48.0

The above four patterns occur almost half of the time.

High Card Points in your hand:

High Card Points
HCP   Probability (%)      HCP     Probability (%)
 0        0.36             16          3.31
 1        0.79             17          2.36
 2        1.36             18          1.61
 3        2.46             19          1.04
 4        3.85             20          0.64
 5        5.19             21          0.38
 6        6.55             22          0.21
 7        8.03             23          0.11
 8        8.89             24          0.056
 9        9.36             25          0.026
10        9.41             26          0.012
11        8.94             27          0.0049
12        8.03             28          0.0019
13        6.91             29          0.0007
14        5.69             30          0.0002
15        4.42             31-37       0.0001
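
The table can be reproduced with a short hypergeometric calculation: loop over the possible counts of aces, kings, queens, and jacks in a 13-card hand, weight each combination by the ways to fill out the hand from the 36 non-honor cards, and bucket by point count. A minimal Python sketch (function name mine):

```python
from math import comb

def hcp_distribution():
    """Exact probability of each HCP total (A=4, K=3, Q=2, J=1) in a 13-card hand."""
    total = comb(52, 13)
    dist = {}
    for a in range(5):
        for k in range(5):
            for q in range(5):
                for j in range(5):
                    honors = a + k + q + j
                    if honors > 13:
                        continue
                    ways = (comb(4, a) * comb(4, k) * comb(4, q) * comb(4, j)
                            * comb(36, 13 - honors))
                    hcp = 4 * a + 3 * k + 2 * q + j
                    dist[hcp] = dist.get(hcp, 0) + ways
    return {hcp: ways / total for hcp, ways in sorted(dist.items())}

for hcp, p in hcp_distribution().items():
    print(f"{hcp:2d} HCP  {100 * p:9.4f}%")
# e.g. 0 HCP ~0.36%, 10 HCP ~9.41%, 15 HCP ~4.42%
```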

Trump Splits:

Splits
Missing Cards   Possible Split   Probability (%)
2               2-0              48
                1-1              52
3               2-1              78
                3-0              22
4               3-1              49.7
                2-2              40.7
                4-0               9.6
5               3-2              67.83
                4-1              28.26
                5-0               3.91
6               4-2              48.4
                3-3              35.5
                5-1              14.5
                6-0               1.5
7               4-3              62.2
                5-2              30.5
                6-1               6.8
                7-0               0.5
8               5-3              47.1
                4-4              32.7
                6-2              17.1
                7-1               2.9
                8-0               0.2
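
These split percentages come from a single hypergeometric formula: the m missing cards lie among the 26 unseen cards, which split 13-13 between the two hidden hands. A minimal Python sketch (function name mine):

```python
from math import comb

def split_probability(missing, a):
    """Probability the missing cards split a and (missing - a), with a >= missing - a."""
    b = missing - a
    # Chance that one specific opponent holds exactly a of the missing cards.
    p_one_side = comb(missing, a) * comb(26 - missing, 13 - a) / comb(26, 13)
    return p_one_side if a == b else 2 * p_one_side   # an a-b split can occur on either side

for missing in range(2, 9):
    splits = [(a, missing - a) for a in range((missing + 1) // 2, missing + 1)]
    row = "  ".join(f"{a}-{b}: {100 * split_probability(missing, a):.1f}%"
                    for a, b in splits)
    print(f"{missing} missing -> {row}")
# e.g. 5 missing -> 3-2: 67.8%  4-1: 28.3%  5-0: 3.9%
```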

For many more bridge probabilities, see Durango Bill's (Bill Butler's) page: http://www.durangobill.com/Bridge.html

No Trump Hands http://www.bridgehands.com/S/Strong_Notrump.htm   

If you open a 15-17 notrump, the distribution of opener's strength is:

HCP     % of all hands   % of 15-17 openers
15           4.4               43
16           3.3               33
17           2.4               24
Total       10.1              100

If partner has a flat 8-count and invites with 2NT, opener will pass about 43% of the time (holding 15) and bid 3NT about 57% of the time (holding 16 or 17). But of the hands that accept, 33/(33 + 24) = 58% hold only 16 HCP, for a combined 24, so when the invitation is accepted the contract is usually a 3NT that should not have been bid. For that reason I don't invite with a flat 8.
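
A quick check of the arithmetic, using the 15-17 HCP frequencies from the table higher up (15: 4.42%, 16: 3.31%, 17: 2.36%):

```python
p15, p16, p17 = 4.42, 3.31, 2.36          # % of all hands, from the HCP table
total = p15 + p16 + p17                   # ~10.1% of all hands hold 15-17
print(f"pass with 15:        {p15 / total:.1%}")           # ~43.8%
print(f"accept with 16-17:   {(p16 + p17) / total:.1%}")   # ~56.2%
print(f"16 among acceptors:  {p16 / (p16 + p17):.1%}")     # ~58.4%
```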

 CHEATING IMPROVES THE ODDS

Blubaugh http://www.greatbridgelinks.com/GBLArchives/GBL010706.html

 

ITALY http://en.wikipedia.org/wiki/Blue_Team_(bridge)

 

1984 Boston http://www.nytimes.com/1984/07/31/nyregion/five-bridge-players-taken-out-of-play-cheating-suspected.html

 

1995   http://cam.bridgeblogging.com/2011/03/06/watchin-the-detectives/

 

2007 Gee http://tedmuller.us/Bridge/Esoterica/Ted03-TheKenGeeCheatingCoverup.htm

 

2014 Coughing German Doctors  http://www.independent.co.uk/news/world/europe/coughs-the-word-bridge-world-champions-found-guilty-of-cheating-9224319.htm

THE HAT CHECK PROBLEM

Thirteen men go to a restaurant and check their hats. Upon leaving, they blindly grab hats. Now suppose 1000 men entered a restaurant, checked their hats, and simply grabbed hats upon leaving. What is the probability that no one received his correct hat back? The answer is hardly affected by the number of men. It is about 1 divided by e, the base of the natural logarithm: 1/e, or 1/2.71828..., about 37%. This probability was determined by the mathematician Nicolas Bernoulli in the early 1700s while analyzing the old French card game Treize, which means thirteen in French, as the original game involved only 13 cards. It was similar to the following game:

Using two 13-card decks (e.g. all spades), we each shuffle our deck and place it face down on the table. We then turn our cards over one at a time and see whether they match. If we find a match before all the cards are compared, I pay you 37 cents. If we compare all 13 cards and there is no match, you pay me 63 cents. The probability of no matches is about 1/e, or .37, which makes this wager almost exactly fair.

The number e is an important mathematical constant that is the base of the natural logarithm. It is approximately equal to 2.71828, and it is the limit of (1 + 1/n)^n as n approaches infinity, an expression that arises in the study of compound interest. It can also be calculated as the sum of the infinite series:

e = \sum_{n=0}^{\infty} \frac{1}{n!} = 1 + \frac{1}{1} + \frac{1}{1 \cdot 2} + \frac{1}{1 \cdot 2 \cdot 3} + \cdots

Consider the two decks of spades described above. The following table gives the probability of exactly 0, 1, 2, ..., 12, 13 matches. These probabilities are the same as for the 13-hat problem.

P(r, N) = the probability of exactly r matches out of N cards or hats:

P(r, N) = \frac{1}{r!} \sum_{k=0}^{N-r} \frac{(-1)^k}{k!}, \qquad \text{so} \qquad P(r, 13) = \frac{1}{r!} \left( 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots \pm \frac{1}{(13-r)!} \right)

 r   Numerator   r!           P(r,13) = Numerator/r!
 0   0.36788     1            0.36787944116
 1   0.36788     1            0.36787944132
 2   0.36788     2            0.18393971962
 3   0.36788     6            0.06131324405
 4   0.36788     24           0.01532829953
 5   0.36788     120          0.00306568287
 6   0.36786     720          0.00051091270
 7   0.36806     5040         0.00007302690
 8   0.36667     40320        0.00000909392
 9   0.37500     362880       0.00000103340
10   0.33333     3628800      0.00000009186
11   0.50000     39916800     0.00000001253
12   0.00000     479001600    0.00000000000
13   1.00000     6227020800   0.00000000016
                 Total        1.00000000000

It is impossible to have exactly N - 1 matches: if 12 cards match, so does the 13th.
The probability of no matches is about 1/e, or 37%. The probability of exactly 1 match is only a hair more or less.
If N is odd, P(0,N) < P(1,N); if N is even, P(0,N) > P(1,N); and P(N-1, N) = 0.
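
A minimal Python sketch that evaluates P(r, N) from the formula above and checks the no-match probability by simulation (function names mine):

```python
from math import factorial
import random

def p_matches(r, n):
    """Probability of exactly r matches when two shuffled n-card decks are compared."""
    partial = sum((-1) ** k / factorial(k) for k in range(n - r + 1))
    return partial / factorial(r)

print(f"P(0,13) = {p_matches(0, 13):.11f}")                          # 0.36787944116
print(f"P(1,13) = {p_matches(1, 13):.11f}")                          # 0.36787944132
print(f"total   = {sum(p_matches(r, 13) for r in range(14)):.11f}")  # 1.00000000000

# Monte Carlo check: shuffle one 13-card deck against a fixed one, count no-match deals.
trials = 200_000
no_match = sum(
    all(i != card for i, card in enumerate(random.sample(range(13), 13)))
    for _ in range(trials)
)
print(f"simulated P(no match) ~ {no_match / trials:.3f}")            # ~0.368
```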

See http://www.mathnet.or.kr/mathnet/paper_file/Elizabethtown/Gabriela/Monmort.pdf  and info on "Derangements" at http://en.wikipedia.org/wiki/Derangement and http://mathworld.wolfram.com/Derangement.html  and finally http://math.stanford.edu/~thiem/courses/108S/hw1solns.pdf and "Combinatorics" at http://en.wikipedia.org/wiki/Combinatorics

BIRTHDAY PROBLEM

This problem was posed by Richard von Mises in 1939. How many people, n, must be in a room before the probability is at least 50% that two of them share a birthday? To figure out the exact probability of finding two people with the same birthday in a given group, it turns out to be easier to ask the opposite question: what is the probability that NO two will share a birthday, i.e., that they will all have different birthdays? With just two people, the probability that they have different birthdays is 364/365, or about .997. If a third person joins them, the probability that this new person has a different birthday from those two (i.e., the probability that all three will have different birthdays) is (364/365) x (363/365), about .992. With a fourth person, the probability that all four have different birthdays is (364/365) x (363/365) x (362/365), which comes out at around .983. And so on. The answers to these multiplications get steadily smaller. When a twenty-third person enters the room, the final fraction that you multiply by is 343/365, and the answer drops below .5 for the first time, at approximately .493. This is the probability that all 23 people have different birthdays. So the probability that at least two people share a birthday is 1 - .493 = .507, just greater than 1/2. See the table below. p(n) is the probability that, with n people in a room, at least two share a birthday; it equals 1 minus the probability that all n birthdays are different.

  n     p(n)
  5      2.7%
 10     11.7%
 20     41.1%
 23     50.7%
 25     56.9%
 30     70.6%
 40     89.1%
 50     97.0%
 95     99.9%
100     99.99997%

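A minimal Python sketch of the same calculation (assuming 365 equally likely birthdays and ignoring February 29; function name mine):

```python
def birthday_collision(n):
    """Probability that at least two of n people share a birthday."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365   # person k+1 avoids the first k birthdays
    return 1.0 - p_distinct

for n in (5, 10, 20, 23, 30, 50, 100):
    print(f"n = {n:3d}   p(n) = {100 * birthday_collision(n):.5f}%")
# n = 23 -> ~50.73%,  n = 100 -> ~99.99997%
```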

PIG GAME is a simple dice game first described in print by John Scarne in 1945.   Pig is sometimes used by math teachers to teach probability concepts.

Pig is one of a family of dice games described by Reiner Knizia as "jeopardy dice games". For jeopardy dice games, the dominant type of decision is whether or not to jeopardize previous gains by rolling for potential greater gains.

Durango Bill, who has calculated many probabilities regarding Bridge Hands, has a web page describing the game together with the winning strategy in the Pig Game.  See http://www.durangobill.com/Pig.html

PARRONDO'S PARADOX

A coin-tossing example: Consider playing two games, Game A and Game B, with the following rules. For convenience, define C_t to be our capital at time t, immediately before we play a game.

  1. Winning a game earns us $1 and losing requires us to surrender $1. It follows that C_{t+1} = C_t + 1 if we win at step t and C_{t+1} = C_t - 1 if we lose at step t.
  2. In Game A, we toss a biased coin, Coin 1, with probability of winning P_1 = 1/2 - ε. If ε > 0, this is clearly a losing game in the long run.
  3. In Game B, we first determine whether our capital is a multiple of some integer M. If it is, we toss a biased coin, Coin 2, with probability of winning P_2 = 1/10 - ε. If it is not, we toss another biased coin, Coin 3, with probability of winning P_3 = 3/4 - ε. The modulo-M rule provides the periodicity.

It is clear that by playing Game A we will almost surely lose in the long run. Harmer and Abbott show by simulation that if M = 3 and ε = 0.005, Game B is an almost surely losing game as well. In fact, Game B is a Markov chain, and an analysis of its state transition matrix (again with M = 3) shows that the steady-state probability of using Coin 2 is 0.3836 and that of using Coin 3 is 0.6164. As Coin 2 is selected nearly 40% of the time, it has a disproportionate influence on the payoff from Game B and results in it being a losing game.

However, when these two losing games are played in some alternating sequence - e.g. two games of A followed by two games of B (AABBAABB...) - the combination of the two games is, paradoxically, a winning game. Not all alternating sequences of A and B result in winning games. For example, one game of A followed by one game of B (ABABAB...) is a losing game, while one game of A followed by two games of B (ABBABB...) is a winning game. This coin-tossing example has become the canonical illustration of Parrondo's paradox: two games, both losing when played individually, become a winning game when played in a particular alternating sequence. The apparent paradox has been explained using a number of sophisticated approaches, including Markov chains and flashing ratchets. One way to explain it is as follows (a short simulation sketch appears below):

  * While Game B is a losing game under the probability distribution that results for C_t modulo M when it is played individually (C_t modulo M is the remainder when C_t is divided by M), it can be a winning game under other distributions, as there is at least one state in which its expectation is positive.
  * As the distribution of outcomes of Game B depends on the player's capital, the two games cannot be independent. If they were, playing them in any sequence would lose as well.

The role of M now comes into sharp focus. It serves solely to induce a dependence between Games A and B, so that a player is more likely to enter states in which Game B has a positive expectation, allowing it to overcome the losses from Game A. With this understanding, the paradox resolves itself: The individual games are losing only under a distribution that differs from that which is actually encountered when playing the compound game. In summary, Parrondo's paradox is an example of how dependence can wreak havoc with probabilistic computations made under a naive assumption of independence. A more detailed exposition of this point, along with several related examples, can be found in Philips and Feldman.
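
Here is a minimal Python sketch of the coin-tossing games with M = 3 and ε = 0.005. It is an illustrative simulation under the rules described above, not code from the cited papers, and the function names are mine.

```python
import random

EPS, M = 0.005, 3

def play_A(capital):
    """Game A: one biased coin, win probability 1/2 - EPS."""
    return capital + 1 if random.random() < 0.5 - EPS else capital - 1

def play_B(capital):
    """Game B: bad coin when capital is a multiple of M, good coin otherwise."""
    win_p = (1 / 10 - EPS) if capital % M == 0 else (3 / 4 - EPS)
    return capital + 1 if random.random() < win_p else capital - 1

def average_final_capital(pattern, rounds=20_000, trials=100):
    """Average capital after `rounds` plays following the repeating `pattern`."""
    total = 0
    for _ in range(trials):
        capital = 0
        for t in range(rounds):
            game = pattern[t % len(pattern)]
            capital = play_A(capital) if game == "A" else play_B(capital)
        total += capital
    return total / trials

for pattern in ("A", "B", "AABB"):
    print(pattern, average_final_capital(pattern))
# Typically A alone and B alone drift negative, while the AABB combination drifts positive.
```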

For a simpler example of how and why the paradox works, again consider two games Game A and Game B, this time with the following rules:

  1. In Game A, you simply lose $1 every time you play.
  2. In Game B, you count how much money you have left. If it is an even number, you win $3. Otherwise you lose $5.

Say you begin with $100 in your pocket. If you start playing Game A exclusively, you will obviously lose all your money in 100 rounds. Similarly, if you decide to play Game B exclusively, you will also lose all your money in 100 rounds.

However, consider playing the games alternately, starting with Game B, followed by A, then by B, and so on (BABABA...). It should be easy to see that you will steadily earn a total of $2 for every two games.

Thus, even though each game is a losing proposition if played alone, because the results of Game B are affected by Game A, the sequence in which the games are played can affect how often Game B earns you money, and consequently the result is different from the case where either game is played by itself.
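
A minimal Python sketch of this simpler, deterministic example (function name mine):

```python
def play(sequence, capital=100, rounds=20):
    """Play the repeating `sequence` of games for `rounds` plays; return final capital."""
    for t in range(rounds):
        if sequence[t % len(sequence)] == "A":
            capital -= 1                                  # Game A: always lose $1
        else:
            capital += 3 if capital % 2 == 0 else -5      # Game B: parity of capital decides
    return capital

print(play("A"))    # 80  : Game A alone loses $1 per round
print(play("B"))    # 80  : Game B alone loses $2 per two rounds
print(play("BA"))   # 120 : B then A, repeated, gains $2 per two rounds
```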

Parrondo's paradox is used extensively in game theory, and its applications in engineering, population dynamics, financial risk, and other areas are also being studied. Parrondo's games have little direct practical use, for example for investing in stock markets, since the original games require the payoff from at least one of the interacting games to depend on the player's capital. However, the games need not be restricted to their original form, and work continues on generalizing the phenomenon. Similarly, a model that is often used to illustrate optimal betting rules has been used to prove that splitting bets between multiple games can turn a negative median long-term return into a positive one.

In the early literature on Parrondo's paradox, it was debated whether the word 'paradox' is an appropriate description given that the Parrondo effect can be understood in mathematical terms. The 'paradoxical' effect can be mathematically explained in terms of a convex linear combination.
