
Quantities of Information: Entropy I


Quantities of Information: Information theory is based on probability theory and statistics. It often concerns itself with measures of the information carried by the distributions associated with random variables. The most important quantities of information are entropy, a measure of the information in a single random variable, and mutual information, a measure of the information shared between two random variables.

1. Entropy: The entropy, H, of a discrete random variable X is intuitively a measure of the amount of uncertainty associated with the value of X when only its distribution is known. For example, if the random variable takes a single value with probability 1 (a constant distribution), then the entropy is minimal and equal to 0. Furthermore, if the distribution is restricted to a finite number of values, entropy is maximized by the uniform distribution over those values.
If X is the set of all messages {x1, ..., xn} that X could be, and p(x) is the probability of some x ∈ X, then the entropy, H, of X is defined as:

H(X) = E_X[I(x)] = −Σ_{x ∈ X} p(x) log p(x)

(Here, I(x) is the self-information, which is the entropy contribution of an individual message, and E_X is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable, p(x) = 1/n, i.e., when the distribution is most unpredictable, in which case H(X) = log n.
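
As a quick sanity check on this definition, here is a minimal Python sketch (the function name entropy and the example distributions are purely illustrative) that computes H(X) for a few distributions, confirming that a constant distribution gives 0 and a uniform distribution over n values gives log2 n:

import math

def entropy(probs, base=2):
    """Shannon entropy H(X) = -sum of p(x) log p(x); base 2 gives shannons (Sh).

    probs is assumed to be a full probability distribution (non-negative,
    summing to 1); terms with p(x) = 0 contribute nothing, by convention.
    """
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A constant (deterministic) distribution has zero uncertainty.
print(entropy([1.0, 0.0, 0.0]))           # 0.0

# A uniform distribution over n = 4 values maximizes entropy: log2(4) = 2 Sh.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0

# Any non-uniform distribution over the same 4 values falls below 2 Sh.
print(entropy([0.5, 0.25, 0.125, 0.125])) # 1.75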

The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to logarithmic base 2, so that its unit is the shannon (Sh):
H_b(p) = −p log2 p − (1 − p) log2(1 − p)

The graph of this function is symmetric about p = 1/2: it rises from 0 at p = 0 to a maximum of 1 Sh at p = 1/2 and falls back to 0 at p = 1.
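
To make the shape of this curve concrete, here is a small Python sketch (the helper name binary_entropy is purely illustrative) that evaluates H_b(p) at a few points; the values climb to a peak of 1 Sh at p = 0.5 and drop back to 0 at the endpoints:

import math

def binary_entropy(p):
    """Binary entropy H_b(p) = -p log2 p - (1 - p) log2(1 - p), in shannons (Sh)."""
    if p in (0.0, 1.0):
        return 0.0  # the limit of p log2 p as p -> 0 is taken as 0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Trace the symmetric curve at a few sample points.
for p in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"H_b({p:.2f}) = {binary_entropy(p):.3f} Sh")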
