In probability theory, there are three common convergence concepts: convergence in distribution, convergence in probability, and almost sure convergence. Among them, almost sure convergence is the most abstract, and many people find it hard to understand (especially people doing statistical engineering). Formally defining almost sure convergence requires a measure-theoretic argument. Here I try to use a concept from computing to illustrate almost sure convergence without using any measure theory.
Almost sure convergence is defined on the abstract “sample” space. One can understand the sample space as the collection of “seeds” used by a computer to generate random numbers. Classically, a computer generates a random number by taking a “seed” as input and producing a series of values from it. If we input the same seed, the output values will be the same.
A random variable can be interpreted as a function (or a program) that takes a “seed” as input and outputs a value. The function is fixed; that is, given the same seed, it will always output the same value.
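As a minimal sketch of this viewpoint (the function name and the choice of a uniform draw are just illustrative assumptions), a “random variable” is a deterministic program from seeds to values:

```python
import random

def random_variable(seed):
    """A random variable viewed as a program: seed in, value out."""
    rng = random.Random(seed)    # fix the generator's state with the seed
    return rng.uniform(0.0, 1.0)  # output one value

# The function is fixed: the same seed always produces the same output.
assert random_variable(42) == random_variable(42)
```

All the randomness lives in which seed we happen to feed in; the program itself has no randomness at all.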
Having identified random variables with functions, a sequence of random variables is a sequence of functions. For a sequence of functions, a given seed produces a sequence of values. If these values converge to a specific value, we call this seed a “good seed”; otherwise we call it a “bad seed”.
Now we test every seed to see whether it is a good seed or a bad seed. After examining every seed, we have a collection of good seeds and another collection of bad seeds. The sequence of functions converges “almost surely” if the ratio of the number of bad seeds to the number of good seeds is 0; in other words, the good seeds form the overwhelming majority. Note that if the number of good seeds is infinite, the number of bad seeds may be finite and non-zero and we still have almost sure convergence.
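The seed-testing procedure can be sketched as follows. The example sequence, the tolerance `eps`, and the cutoff `n_large` are my own illustrative assumptions; checking a single large index is of course only a crude stand-in for taking a true limit:

```python
import random

def x_n(seed, n):
    """n-th random variable in the sequence: a seeded uniform draw divided by n."""
    rng = random.Random(seed)
    return rng.uniform(0.0, 1.0) / n

def is_good_seed(seed, limit=0.0, eps=1e-3, n_large=100_000):
    """Crude check: is the sequence at this seed near the limit for large n?"""
    return abs(x_n(seed, n_large) - limit) < eps

seeds = range(1_000)
good = sum(is_good_seed(s) for s in seeds)
bad = len(seeds) - good
# For this sequence every seed is good, so the bad-to-good ratio is 0:
# the sequence converges almost surely (here, to 0).
```

Note that the good/bad verdict depends only on the seed, not on any particular index in the sequence: each seed is classified once and for all.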
As a result, almost sure convergence is defined through the limiting behavior of the sequence under a fixed input. A sequence converging almost surely is like ordinary pointwise convergence of functions, except on a “negligibly” small portion of points (i.e., inputs, or seeds).
For convergence in probability, we can still use our good-seed/bad-seed principle, but the definition is slightly different. We first set a tolerance level. Now, for each function in the sequence, we examine its output for a given seed and compare it with the output of the next function in the sequence for the same input seed. If the difference is below the tolerance, we call this seed a good seed; otherwise it is a bad seed. Thus each function in the sequence yields its own collection of good seeds and collection of bad seeds, so we obtain a sequence of pairs of good-seed/bad-seed collections. Note that the two collections of good/bad seeds are specific to each function in the sequence.
Here we consider the ratio again. Since we have pairs of good-seed and bad-seed collections, we can calculate the ratio of bad seeds to good seeds for each pair. Accordingly, we obtain a sequence of ratios, one per pair. We say the sequence of functions converges in probability if this sequence of ratios converges to 0.
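A sketch of this per-index check, under my own assumptions: the example sequence draws fresh noise at each index n (combining seed and index into one integer seed is just a convenient trick), the tolerance `tol` is arbitrary, and for simplicity the ratio below is bad seeds over all seeds rather than bad over good, which tends to 0 under the same circumstances:

```python
import random

def x_n(seed, n):
    """n-th variable: fresh noise at each index n, shrinking like 1/n."""
    rng = random.Random(seed * 1_000_003 + n)  # combine seed and index
    return rng.uniform(-1.0, 1.0) / n

def bad_ratio(n, seeds, tol=1e-2):
    """Fraction of seeds whose step from X_n to X_{n+1} exceeds the tolerance."""
    bad = sum(abs(x_n(s, n + 1) - x_n(s, n)) >= tol for s in seeds)
    return bad / len(seeds)

seeds = range(500)
ratios = [bad_ratio(n, seeds) for n in (10, 100, 1000)]
# The ratios shrink toward 0 as n grows: convergence in probability in the
# successive-difference (Cauchy) sense used in this post.
```

Unlike the almost sure case, each index n gets its own good/bad classification, recomputed from scratch.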
A crucial difference between almost sure convergence and convergence in probability is that for almost sure convergence, we have only one pair of good/bad seed collections, whereas for convergence in probability we have one pair per function in the sequence. For convergence in probability, we allow the collections of good/bad seeds to be non-stationary (that is, the collections keep changing) as long as the ratio goes to 0. This cannot happen with almost sure convergence, since in that case we have only one pair of good/bad seed collections.
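The “moving bad set” phenomenon can be illustrated by discretizing the classic typewriter-style sequence onto finitely many seeds; everything here (the window construction, N, the tolerance) is my own illustrative assumption, and with finitely many seeds this is only a caricature of the true infinite-sample-space counterexample, since the window eventually empties out:

```python
# X_n is the indicator of a window of seeds that sweeps around while shrinking.
N = 1000
seeds = range(N)

def x_n(seed, n):
    """Indicator of a moving window of width about N/n; its position cycles."""
    width = max(N // n, 1)
    start = (n * width) % N
    return 1 if start <= seed < start + width else 0

def bad_seeds(n, tol=0.5):
    """Seeds where X_n is still far from the candidate limit 0."""
    return {s for s in seeds if abs(x_n(s, n)) >= tol}

# The bad-seed collection keeps moving from one index to the next ...
assert bad_seeds(10) != bad_seeds(11)
# ... yet its ratio to all seeds keeps shrinking.
assert len(bad_seeds(100)) / N < len(bad_seeds(10)) / N
```

Because the window keeps revisiting different seeds, no single fixed collection of bad seeds captures the failure: that is exactly the freedom convergence in probability has and almost sure convergence lacks.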
Note: For those who have learned measure theory, I am defining the sample space to be the collection of all seeds, equipped with the counting measure. I also use the concept of a Cauchy sequence to define the convergence criterion for convergence in probability.