
This article studies the probability of obtaining two or more heads in a row in n tosses of a fair coin. H(n) denotes the number of permutations of n tosses that contain two or more heads in a row. Thus P(n), the probability of two or more heads in a row in n tosses, is H(n) divided by the total number of permutations of n tosses.

$P(n) = \frac{H(n)} { 2^n }$

## Definition

H(n) is defined as:

$H(n)=H(n-1)+H(n-2)+2^{n-2}$

where H(0) = H(1) = 0, since two heads in a row cannot be obtained in fewer than two tosses.


Thus P(n) is

$P(n) = \frac{H(n-1)+H(n-2)+2^{n-2}} { 2^n }$

As n increases, the denominator of P(n) grows by powers of two while the numerator grows by powers of two plus the previous values, so P(n) increases with n. Figure 1 shows the probability of obtaining at least two heads in a row in n tosses, P(n), as n increases.
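The recurrence for H(n) can be sanity-checked against a brute-force count over all $2^n$ bit strings. The following is a minimal Python sketch (the function names are illustrative, not from the article):

```python
from itertools import product

def H(n):
    # H(n) = H(n-1) + H(n-2) + 2^(n-2), with H(0) = H(1) = 0
    h = [0, 0]
    for i in range(2, n + 1):
        h.append(h[-1] + h[-2] + 2 ** (i - 2))
    return h[n]

def H_brute(n):
    # Count bit strings of length n containing "11" (1 = heads, 0 = tails)
    return sum(1 for bits in product("01", repeat=n) if "11" in "".join(bits))

# The recurrence and the brute-force enumeration agree
for n in range(2, 12):
    assert H(n) == H_brute(n)

print(H(3), H(4), H(3) / 2**3, H(4) / 2**4)  # 3 8 0.375 0.5
```

The printed values match the article: H(3) = 3, H(4) = 8, and the corresponding probabilities P(3) = 0.375 and P(4) = 0.5.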

### Derivation

The derivation for at least two heads in n tosses will be examined with n binary bits. A ‘1’ will represent heads and a ‘0’ will represent tails.

Consider the case when n is 5 or H(5). The equation calls for H(4) and H(3), which have values of 8 and 3 respectively as shown in Table 1.

Table 1: All permutations for n = 3 and n = 4 (1 = heads, 0 = tails)

| n = 3 | n = 4 |
|:-----:|:-----:|
| 000 | 0000 |
| 001 | 0001 |
| 010 | 0010 |
| 011 | 0011 |
| 100 | 0100 |
| 101 | 0101 |
| 110 | 0110 |
| 111 | 0111 |
|     | 1000 |
|     | 1001 |
|     | 1010 |
|     | 1011 |
|     | 1100 |
|     | 1101 |
|     | 1110 |
|     | 1111 |

When analyzing the case for any n bits, all $2^n$ possibilities can be broken down into three sections according to the leading bits: in section 1, bit n is 0; in section 2, bit n is 1 and bit n-1 is 0; and in section 3, bits n and n-1 are both 1. Table 2 shows the three sections for the case of n equal to 5.

Table 2: The 32 permutations for n = 5, split into three sections

| Section | Bit 5 | Bit 4 | Bit 3 | Bit 2 | Bit 1 | Count |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 1 | 0 | 0 | 0 | 0 | 0 | $H(5-1)$ |
|   | 0 | 0 | 0 | 0 | 1 | |
|   | 0 | 0 | 0 | 1 | 0 | |
|   | 0 | 0 | 0 | 1 | 1 | |
|   | 0 | 0 | 1 | 0 | 0 | |
|   | 0 | 0 | 1 | 0 | 1 | |
|   | 0 | 0 | 1 | 1 | 0 | |
|   | 0 | 0 | 1 | 1 | 1 | |
|   | 0 | 1 | 0 | 0 | 0 | |
|   | 0 | 1 | 0 | 0 | 1 | |
|   | 0 | 1 | 0 | 1 | 0 | |
|   | 0 | 1 | 0 | 1 | 1 | |
|   | 0 | 1 | 1 | 0 | 0 | |
|   | 0 | 1 | 1 | 0 | 1 | |
|   | 0 | 1 | 1 | 1 | 0 | |
|   | 0 | 1 | 1 | 1 | 1 | |
| 2 | 1 | 0 | 0 | 0 | 0 | $H(5-2)$ |
|   | 1 | 0 | 0 | 0 | 1 | |
|   | 1 | 0 | 0 | 1 | 0 | |
|   | 1 | 0 | 0 | 1 | 1 | |
|   | 1 | 0 | 1 | 0 | 0 | |
|   | 1 | 0 | 1 | 0 | 1 | |
|   | 1 | 0 | 1 | 1 | 0 | |
|   | 1 | 0 | 1 | 1 | 1 | |
| 3 | 1 | 1 | 0 | 0 | 0 | $2^{5-2}$ |
|   | 1 | 1 | 0 | 0 | 1 | |
|   | 1 | 1 | 0 | 1 | 0 | |
|   | 1 | 1 | 0 | 1 | 1 | |
|   | 1 | 1 | 1 | 0 | 0 | |
|   | 1 | 1 | 1 | 0 | 1 | |
|   | 1 | 1 | 1 | 1 | 0 | |
|   | 1 | 1 | 1 | 1 | 1 | |

In Table 2, section 1 contains the same number of valid possibilities as the 4-bit case, section 2 contains the same number of valid possibilities as the 3-bit case, and in section 3 the first two bits are both 1, so all of its combinations are valid.

Thus the three terms together define H(n) as:

$H(n) = H(n-1) + H(n-2) + 2^{n-2}$

Finally:

$P(n) = \frac{H(n)} { 2^n }$
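The three-section split can be checked directly for n = 5 by enumerating all 32 bit strings and counting, per section, those that contain two heads in a row (a small Python sketch; the names are illustrative):

```python
from itertools import product

# Enumerate all 5-bit strings (bit 5 written first) and keep those
# containing "11", i.e. two heads in a row (1 = heads).
strings = ["".join(bits) for bits in product("01", repeat=5)]
has_hh = [s for s in strings if "11" in s]

section1 = [s for s in has_hh if s[0] == "0"]    # bit 5 = 0       -> H(4) of them
section2 = [s for s in has_hh if s[:2] == "10"]  # bits 5,4 = 1,0  -> H(3) of them
section3 = [s for s in has_hh if s[:2] == "11"]  # bits 5,4 = 1,1  -> all 2^3 valid

print(len(section1), len(section2), len(section3), len(has_hh))  # 8 3 8 19
```

The counts 8, 3, and 8 are exactly H(4), H(3), and $2^{5-2}$, summing to H(5) = 19 as the recurrence predicts.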

## Relation To Fibonacci

Thus far, H(n) has only been defined in terms of H(n-1) and H(n-2), so to find H(100) it is necessary to calculate H(99), H(98), H(97), and so on down to H(2). To avoid this iteration, H(n) can also be defined as:

$H(n)=\sum_{k=0}^{n-1}2^k F_{(n-1)-k}$

where F is the Fibonacci sequence.

This method still requires a summation of n terms, but the terms are built from powers of two and the Fibonacci sequence, both of which are well known and tabulated.
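The summation can be checked against the recurrence with a short Python sketch (illustrative names; the Fibonacci convention used is $F_0 = 0$, $F_1 = 1$):

```python
def H_recurrence(n):
    # H(n) = H(n-1) + H(n-2) + 2^(n-2), with H(0) = H(1) = 0
    h = [0, 0]
    for i in range(2, n + 1):
        h.append(h[-1] + h[-2] + 2 ** (i - 2))
    return h[n]

def fib(n):
    # Fibonacci numbers with F_0 = 0, F_1 = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def H_fib(n):
    # H(n) = sum_{k=0}^{n-1} 2^k * F_{(n-1)-k}
    return sum(2 ** k * fib((n - 1) - k) for k in range(n))

# The closed summation agrees with the recurrence
for n in range(12):
    assert H_recurrence(n) == H_fib(n)

print(H_fib(10))  # 880
```

The value H(10) = 880 matches the bottom row of Table 4: 34·1 + 21·2 + 13·4 + 8·8 + 5·16 + 3·32 + 2·64 + 1·128 + 1·256 = 880.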

### Derivation

We have seen that:

$H(n)=H(n-1)+H(n-2)+2^{n-2}$

Therefore:

$H(n-1)=H(n-2)+H(n-3)+2^{n-3}$

$H(n-2)=H(n-3)+H(n-4)+2^{n-4}$

$H(n-3)=H(n-4)+H(n-5)+2^{n-5}$

Using the above equations, H(n) can be expanded in terms of powers of 2, given that H(0) = H(1) = 0.

$H(2)=2^0+0*2^1$

$H(3)=2^0+2^1+0*2^2$

$H(4)=2*2^0+2^1+2^2+0*2^3$

$H(5)=3*2^0+2*2^1+2^2+2^3+0*2^4$

Table 4 shows this expansion in a compact form where each row n has multiples of powers of 2.

Table 4: Coefficients of the powers of 2 in the expansion of H(n)

| n | $2^0$ | $2^1$ | $2^2$ | $2^3$ | $2^4$ | $2^5$ | $2^6$ | $2^7$ | $2^8$ |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | 2 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 5 | 3 | 2 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| 6 | 5 | 3 | 2 | 1 | 1 | 0 | 0 | 0 | 0 |
| 7 | 8 | 5 | 3 | 2 | 1 | 1 | 0 | 0 | 0 |
| 8 | 13 | 8 | 5 | 3 | 2 | 1 | 1 | 0 | 0 |
| 9 | 21 | 13 | 8 | 5 | 3 | 2 | 1 | 1 | 0 |
| 10 | 34 | 21 | 13 | 8 | 5 | 3 | 2 | 1 | 1 |

The Fibonacci sequence appears in each column from top to bottom, and, more importantly, in each row from right to left. Thus H(n) is obtained by summing increasing powers of 2 from left to right, with weights drawn from the Fibonacci sequence from right to left. In compact form:

$H(n)=\sum_{k=0}^{n-1}2^k F_{(n-1)-k}$

where F is the Fibonacci sequence.

## Relation To Communication Theory

Suppose there is a communication system that transmits binary data over a noisy channel. The receiver has software with error-detection and error-correction capabilities; however, based on the line code, the receiver can only correct single errors. Thus if there is one error, the receiver will be able to detect and correct it, but if there are two or more errors in a row, the receiver will not be able to correct them.

The two-heads-in-a-row derivation above assumed that heads and tails are equally likely. In this system, however, the probability of a bit being transmitted in error is less than the probability of a bit being transmitted correctly. Two Heads in A Row: With Unequal Probability shows that the probability of two heads in a row in n tosses is:

$P(n) = P(n-1) + (1-P(n-1))P(H) - (1-P(n-2))P(H)P(T)$

where P(H) is the probability of tossing heads and P(T) is the probability of tossing tails.


If we replace the probability of tossing heads with the probability of transmitting a bit in error, and the probability of tossing tails with the probability of transmitting a bit correctly, we obtain:

$P(n) = P(n-1) + (1-P(n-1))P(E) - (1-P(n-2))P(E)(1-P(E))$

where P(E) is the probability of transmitting a bit in error.
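This recursion is straightforward to iterate in code. The following is a minimal Python sketch (illustrative names; P(0) = P(1) = 0 are the initial values), with a sanity check that setting P(E) = 0.5 recovers the fair-coin result H(n)/2^n:

```python
def p_double_error(n, pe):
    # Iterate P(n) = P(n-1) + (1 - P(n-1))*P(E) - (1 - P(n-2))*P(E)*(1 - P(E))
    p_prev2, p_prev1 = 0.0, 0.0  # P(0) = P(1) = 0
    for _ in range(2, n + 1):
        p = p_prev1 + (1 - p_prev1) * pe - (1 - p_prev2) * pe * (1 - pe)
        p_prev2, p_prev1 = p_prev1, p
    return p_prev1

# Sanity check: with pe = 0.5 this equals H(n)/2^n, e.g. H(4)/16 = 8/16
print(p_double_error(4, 0.5))  # 0.5
```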


### Example

Suppose that with the system described above, the probability of transmitting a bit in error is .001. What is the probability that the system will output a bit in error? In other words, what is the probability that two or more bits in a row will be transmitted in error when transmitting 5000 bits, and when transmitting 10000 bits?

Since we have the equation for P(n), we plug in n = 5000 and P(E) = .001:

$P(5000) = P(4999) + .001(1-P(4999)) - (1-P(4998))(.001)(.999)$

Using a computer to iterate 5000 times, we get:

$P(5000) = .004982 = .4982\%$

and

$P(10000) = .009939 = .9939\%$

Thus if 1 bit in every 1000 is sent in error, the probability of the system outputting an error after error correction is only about 1% over 10000 bit transmissions.

If the probability of sending a bit in error, P(E), is .01, then:

$P(5000) = .39904 = 39.904\%$

and

$P(10000) = .62848 = 62.848\%$

Thus when the probability of sending a bit in error is 10 times greater, the probability of the system outputting an error over 10000 bits rises from about 1% to almost 63%.
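The four figures in this example can be reproduced by iterating the recursion for each P(E) and n (an illustrative Python sketch):

```python
def p_output_error(n, pe):
    # Iterate P(n) = P(n-1) + (1 - P(n-1))*P(E) - (1 - P(n-2))*P(E)*(1 - P(E))
    p_prev2, p_prev1 = 0.0, 0.0  # P(0) = P(1) = 0
    for _ in range(2, n + 1):
        p = p_prev1 + (1 - p_prev1) * pe - (1 - p_prev2) * pe * (1 - pe)
        p_prev2, p_prev1 = p_prev1, p
    return p_prev1

# Probability of an uncorrectable error (two bit errors in a row)
for pe in (0.001, 0.01):
    for n in (5000, 10000):
        print(f"P(E) = {pe}, n = {n}: {p_output_error(n, pe):.6f}")
```

With P(E) = .001 the loop prints values near .004982 and .009939, and with P(E) = .01 values near .39904 and .62848, matching the results above.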