Large deviations theory is a branch of probability that deals with events that are "rare" in some sense. A classic example of such a rare event is tossing a fair coin 1000 times and observing fewer than 400 heads. In essence, one tries to capture the probability that the empirical average of an i.i.d. sequence of random variables is "far" from its true mean. (The theory can also handle the more general, non-i.i.d. case.)
An important concept in this subject is the rate function. In the coin-toss example above, the strong law of large numbers tells us that the empirical mean (the empirical fraction of heads) converges almost surely to the true mean (0.5) as the number of tosses n tends to infinity. A large deviations analysis asks at what rate the probability of a deviation decays as n grows. In the example, it can be shown that there exists a constant c > 0, independent of n, such that the probability of observing a fraction of heads less than 0.4 is at most exp(-cn), for large n. Any such constant c is thus a lower bound on the decay rate, and the largest possible value of such a c is called the rate function for this rare event.
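For the coin-toss example, Cramér's theorem gives the rate function in closed form: for the empirical mean of Bernoulli(p) variables, I(a) = a log(a/p) + (1-a) log((1-a)/(1-p)). A minimal sketch (the function name is ours, not from any particular library) comparing this rate with the exact binomial tail probability:

```python
import math

def rate_function(a, p=0.5):
    """Cramer rate function for the empirical mean of i.i.d. Bernoulli(p):
    I(a) = a*log(a/p) + (1-a)*log((1-a)/(1-p))  (a relative entropy)."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

n = 1000
I = rate_function(0.4)  # rate for the event "fraction of heads below 0.4"

# Exact P(fewer than 400 heads in 1000 fair tosses), via big-integer arithmetic.
tail = sum(math.comb(n, k) for k in range(400))
p_exact = tail / 2**n

# Chernoff's bound guarantees p_exact <= exp(-n * I), so the empirical
# decay rate -log(p_exact)/n must be at least I; for large n they are close.
empirical_rate = -math.log(p_exact) / n
print(I, p_exact, empirical_rate)
```

Here I ≈ 0.020, so the probability of seeing fewer than 40% heads in 1000 tosses is already of order e^(-20), illustrating how quickly such deviations become rare.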
The theory of large deviations has long been used to study the behavior of queueing networks. As a simple example, consider a discrete-time single-server queue in which, in each time slot, an arrival occurs with probability p and a service with probability q, all i.i.d. The queue-length process is a Markov chain, and q > p is necessary and sufficient for stability (positive recurrence) of the system. Further, in this case we can solve for the stationary distribution of the queue length and obtain P(Q = k) = (1 - c)c^k for some c < 1, for all integers k ≥ 0. From a buffer-allocation point of view, our aim is to allocate a buffer to this queue that guarantees a certain packet-drop probability. Since the probability that a buffer of size b overflows is about c^b = e^(b log c), up to constant multiplicative factors, ensuring an overflow probability of at most x requires a buffer of size about b = log_c(x) = log(x)/log(c). These calculations become increasingly accurate in the limit x → 0.
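A minimal sketch of this buffer-sizing rule, together with a short simulation checking the geometric tail (the parameter values and function names are ours; the simulation assumes the simple reflected-random-walk dynamics Q ← max(Q + arrival − service, 0)):

```python
import math
import random

def buffer_size(c, x):
    """Smallest b with c**b <= x, i.e. b = ceil(log_c(x)) for 0 < c < 1."""
    return math.ceil(math.log(x) / math.log(c))

# Buffer sizing: with c = 0.5, a target overflow probability of 1e-6
# needs a buffer of about 20 packets.
b = buffer_size(0.5, 1e-6)
assert 0.5**b <= 1e-6

# Quick simulation of the discrete-time queue: arrival w.p. p, service w.p. q.
# (Illustrative parameters; q > p so the queue is stable.)
random.seed(1)
p, q = 0.3, 0.6
counts = {}
Q = 0
for _ in range(200_000):
    Q = max(Q + (random.random() < p) - (random.random() < q), 0)
    counts[Q] = counts.get(Q, 0) + 1

# A geometric stationary distribution means successive ratios
# P(Q = k+1) / P(Q = k) are all roughly the same constant c.
r1 = counts[2] / counts[1]
r2 = counts[3] / counts[2]
print(b, r1, r2)
```

The two empirical ratios agreeing (up to simulation noise) is the signature of the geometric tail, and their common value is the constant c that drives the buffer-sizing formula.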
There are situations where the queue-length process in a network is difficult to analyze in closed form. This can be a result of extremely complex state transition probabilities, or of the transition probabilities not being known exactly, among many other possibilities. Nevertheless, it is sometimes possible to compute the rate function for the rare event of interest (such as a buffer overflow), thereby capturing the tail behavior of the probabilities. This enables us to make a meaningful statement about the probability of the rare event, despite not being able to analyze the system completely.
Suggested reading on this research area:
- Dembo, Amir; Zeitouni, Ofer (1998). Large Deviations Techniques and Applications. Springer. ISBN 0387984062.
- Ganesh, Ayalvadi; O'Connell, Neil; Wischik, Damon (2008). Big Queues. Springer. ISBN 978-3-540-20912-6.