Information Theory and Coding PDF


Information Theory, Coding and Cryptography


The internet has become an integral part of our lives, making this, the third planet from the sun, a global village. People talking on cellular phones are a common sight, sometimes even in cinema theatres.

Movies can be rented in the form of a DVD disk. Email addresses and web addresses are common on business cards. Many people prefer to send emails and e-cards to their friends rather than the regular snail mail. Stock quotes can be checked over the mobile phone. Information has become the key to success (it has always been a key to success, but in today's world it is the key). Yet the information age that we live in today owes its existence, primarily, to a seminal paper published in 1948 that laid the foundation of the wonderful field of Information Theory, a theory initiated by one man, the American electrical engineer Claude E. Shannon.

Before we go on to develop a mathematical measure of information, let us develop an intuitive feel for it. Consider the following three sentences: (A) The sun will rise in the East tomorrow. (B) The phone will ring in the next one hour. (C) It will snow in Delhi this winter. The three sentences carry different amounts of information. In fact, the first sentence hardly carries any information.

Everybody knows that the sun rises in the East and the probability of this happening again is almost unity. Sentence B appears to carry more information than sentence A. The phone may ring, or it may not. There is a finite probability that the phone will ring in the next one hour unless the maintenance people are at work again!

The last sentence probably made you read it over twice. This is because it has never snowed in Delhi, and the probability of a snowfall is very low.

It is interesting to note that the amount of information carried by each of the sentences listed above has something to do with the probability of occurrence of the event stated in the sentence, and we observe an inverse relationship. Sentence (A), which talks about an event whose probability of occurrence is very close to 1, carries almost no information.

Sentence (C), which has a very low probability of occurrence, appears to carry a lot of information (it made us read it twice to be sure we got the information right!). The other interesting thing to note is that the length of a sentence has nothing to do with the amount of information it conveys.

In fact, sentence (A) is the longest but carries the minimum information. We will now develop a mathematical measure of information. Definition 1. The self-information of an event x that occurs with probability P(x) is I(x) = -log P(x) = log(1/P(x)). Since a lower probability implies a higher degree of uncertainty (and vice versa), a random variable with a higher degree of uncertainty contains more information. We will use this correlation between uncertainty and level of information for physical interpretations throughout this chapter.

The units of I(x) are determined by the base of the logarithm, which is usually selected as 2 or e. When the base is 2, the units are bits, and when the base is e, the units are nats (natural units). The following two examples illustrate why a logarithmic measure of information is appropriate. Consider first a binary source that outputs H or T with equal probability; each output carries I = -log2(1/2) = 1 bit of information. Indeed, we have to use only one bit to represent the output from this binary source (say, we use a 1 to represent H and a 0 to represent T). Next, consider a block of m bits. There are 2^m possible m-bit blocks, each of which is equally probable with probability 2^-m. The self-information of each block is -log2(2^-m) = m bits, i.e., exactly m times that of a single equally likely bit.
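
As a quick numerical check of these two examples, here is a short Python sketch (the function name is mine, chosen only for illustration) that evaluates the self-information in bits of a fair coin toss and of an m-bit block of equally likely outcomes.

    import math

    def self_information_bits(p):
        """Self-information I(x) = -log2 P(x), in bits."""
        return -math.log2(p)

    # A fair coin: P(H) = P(T) = 0.5, so each outcome carries exactly 1 bit.
    print(self_information_bits(0.5))      # 1.0

    # An m-bit block of equally likely outcomes has probability 2**-m,
    # so its self-information is exactly m bits.
    m = 8
    print(self_information_bits(2 ** -m))  # 8.0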

Thus, this logarithmic measure of information possesses the desired additive property when a number of source outputs is considered as a block. As a second example, consider a source C that comprises two binary sources, sources A and B, of the kind described in the previous example. The two binary sources within the source C are independent. Let us look at the information content of the outputs of source C.

Thus, the logarithmic measure of information possesses the desired additive property for independent events. We can also define the mutual information between two events x and y as I(x; y) = log[ P(x|y) / P(x) ]; when the base is 2, the units are again bits. It measures the information about the event x provided by the occurrence of the event y. If the two events are independent, then P(x|y) = P(x) and the mutual information is zero; if observing y determines x completely, the mutual information equals the self-information I(x). Thus, the logarithmic definition of mutual information confirms our intuition.
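
The additive property for independent events, and the behaviour of the event-level mutual information, can both be checked with a few lines of Python (a sketch of my own; the probabilities are arbitrary illustrative values).

    import math

    def self_info(p):
        """Self-information in bits."""
        return -math.log2(p)

    # Two independent events: the self-information of the joint event
    # equals the sum of the individual self-informations.
    p_a, p_b = 0.5, 0.25
    print(self_info(p_a * p_b), self_info(p_a) + self_info(p_b))   # 3.0 3.0

    # Event-level mutual information I(x; y) = log2( P(x|y) / P(x) ).
    def mutual_information_bits(p_x_given_y, p_x):
        return math.log2(p_x_given_y / p_x)

    # Independent events: P(x|y) = P(x), so the mutual information is 0 bits.
    print(mutual_information_bits(0.5, 0.5))   # 0.0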

A commonly used model is the Binary Symmetric Channel (BSC). It is a channel that transports 1's and 0's from the transmitter (Tx) to the receiver (Rx). It makes an error occasionally, with probability p: a BSC flips a 1 to a 0, and vice versa, with equal probability p. Let the input symbols be equally likely, and let the output symbols depend upon the input according to these channel transition probabilities (a bit passes through unchanged with probability 1 - p and is flipped with probability p). Let us consider some specific cases. First, let p = 0, so that the channel makes no errors. Hence, from the output, we can determine what was transmitted with certainty.

Next, let p = 0.5. In this case the output is statistically independent of the input: it is clear from the output that we have no information about what was transmitted. Thus, it is a useless channel.

For such a channel, we may as well toss a fair coin at the receiver in order to determine what was sent! Finally, let the input symbols be equally likely, and let p = 1, so that the channel flips every bit. The physical interpretation is as follows: if y1 is observed at the receiver, it can be concluded that x0 was actually transmitted (and if y0 is observed, x1 was transmitted). No information is lost; we just flip the received bit.
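
These three cases can be verified numerically. The sketch below (my own illustration) uses Bayes' rule to compute the posterior probability P(x0 | y0) for a BSC with equally likely inputs and crossover probability p.

    def posterior_x0_given_y0(p):
        """P(x0 | y0) for a BSC with crossover probability p and equally likely inputs."""
        p_y0_given_x0 = 1 - p            # the bit is delivered unchanged
        p_y0 = 0.5 * (1 - p) + 0.5 * p   # total probability of receiving a 0
        return p_y0_given_x0 * 0.5 / p_y0

    print(posterior_x0_given_y0(0.0))  # 1.0 -> the output identifies the input with certainty
    print(posterior_x0_given_y0(0.5))  # 0.5 -> the output tells us nothing (useless channel)
    print(posterior_x0_given_y0(1.0))  # 0.0 -> a received 0 means a 1 was sent; just flip the bit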

We now want to find the average mutual information between the two random variables X and Y. This can be obtained simply by weighting the mutual information of each pair of outcomes by the probability of that joint event and summing over all possible pairs. In the same spirit, the average self-information of a discrete random variable X is H(X) = -Σ P(x) log P(x), with the sum taken over all possible values of X. When X represents the alphabet of a source, H(X) is the average information per source symbol; in this case H(X) is called the entropy. The term entropy has been borrowed from statistical mechanics, where it is used to denote the level of disorder in a system.

It is interesting to see the Chinese character for entropy, 熵. Consider a binary source whose output is either a 0 with probability p or a 1 with probability 1 - p. Its entropy is the binary entropy function H(p) = -p log2 p - (1 - p) log2(1 - p). In general, it can be shown that the entropy of a discrete source is maximum when the letters from the source are equally probable.
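
The claim that the entropy peaks when the two symbols are equally likely is easy to check; below is a minimal Python sketch of the binary entropy function.

    import math

    def binary_entropy(p):
        """H(p) = -p*log2(p) - (1 - p)*log2(1 - p), in bits; 0 at p = 0 or p = 1."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(p, round(binary_entropy(p), 4))
    # The maximum value, 1 bit, occurs at p = 0.5, and the curve is symmetric about it.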

As the parameter p is increased from 0 to 0.5, the entropy of the binary source increases to its maximum value of 1 bit, and it falls off symmetrically as p is increased further from 0.5 to 1. The average mutual information between X and Y can be defined for continuous random variables in an analogous manner, with probability density functions in place of the probabilities and integrals in place of the sums. The entropy, however, does not carry over directly. The reason is that the information content of a continuous random variable is actually infinite, and we would require an infinite number of bits to represent a continuous random variable precisely. The self-information, and hence the entropy, is infinite. To get around this problem we define a quantity called the differential entropy.
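
The infinite information content of a continuous random variable can be made concrete with a small sketch (my own illustration, assuming a Uniform[0, 1) distribution): quantizing the variable into bins of width delta gives an entropy of log2(1/delta) bits, which grows without bound as delta shrinks, and the differential entropy is what remains once this diverging term is set aside.

    import math

    # Quantize a Uniform[0, 1) random variable into bins of width delta.
    # Each bin has probability delta, so the entropy of the quantized variable
    # is log2(1/delta) bits, which diverges as delta -> 0.
    for delta in (0.1, 0.01, 0.001):
        n_bins = round(1 / delta)
        entropy = -sum(delta * math.log2(delta) for _ in range(n_bins))
        print(delta, round(entropy, 4))   # 3.3219, 6.6439, 9.9658

    # The differential entropy of Uniform[a, b) is log2(b - a);
    # for the unit interval it is log2(1) = 0 bits.
    print(math.log2(1.0))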

We carry on with extending our definitions further, turning to the efficient representation of a source. The primary objective is the compression of data by efficient representation of the symbols. For a source with L symbols, the entropy satisfies H(X) <= log2 L; the equality holds when the symbols are equally likely. The source coding theorem states that a source can be encoded, without loss, using an average of H(X) bits per symbol, but no fewer. It means that the average number of bits per source symbol is H(X), and the source rate is H(X)/t bits/sec, where t is the duration of a source symbol.

Now let us represent the 26 letters of the English alphabet using bits. Since 2^4 = 16 < 26 <= 32 = 2^5, each of the letters can be uniquely represented using 5 bits. Each letter has a corresponding 5-bit-long codeword. Definition 1. A code that assigns codewords of equal length to all the source symbols is called a Fixed Length Code (FLC). The number of bits R required for unique coding of L symbols is R = log2 L when L is a power of 2, and R = floor(log2 L) + 1 when it is not.
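
A one-line check of the 5-bit figure, using the ceiling form of the formula above (a small sketch; the function name is mine):

    import math

    def flc_bits(num_symbols):
        """Bits per codeword in a fixed length code over num_symbols symbols."""
        return math.ceil(math.log2(num_symbols))

    print(flc_bits(26))   # 5, since 2**5 = 32 >= 26
    print(flc_bits(32))   # 5, exactly log2(32) because 32 is a power of 2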

The FLC for the English alphabet implicitly assumes that every letter in the alphabet is equally probable, and hence each one requires 5 bits for representation. However, we know that some letters (x, q, z, etc.) are less common than others.

It appears that allotting an equal number of bits to the frequently used letters and to the not-so-commonly used letters is not an efficient way of representation (coding). Intuitively, we should represent the more frequently occurring letters with fewer bits and the less frequently occurring letters with more bits. In this manner, if we have to encode a whole page of written text, we might end up using fewer bits overall.

When the source symbols are not equally probable, a more efficient method is to use a Variable Length Code (VLC). Note that a variable length code uses fewer bits simply because the letters appearing more frequently in the pseudo-sentence are represented with fewer bits.

We look at yet another VLC (VLC2) for the first 8 letters of the English alphabet. With this code we have no clue where one codeword ends and the next one begins, since the lengths of the codewords are variable. However, this problem does not exist with VLC1: there, no codeword forms the prefix of any other codeword. This is called the prefix condition. As soon as a sequence of bits corresponding to any one of the possible codewords is detected, we can declare that symbol decoded. Such codes, called Uniquely Decodable or Instantaneous Codes, cause no decoding delay.

In this example, the VLC2 is not a uniquely decodable code, hence not a code of any utility.
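
Since the codeword tables for VLC1 and VLC2 are not reproduced in this excerpt, the sketch below uses a hypothetical prefix code of my own for four symbols to show why the prefix condition makes decoding instantaneous: as soon as the accumulated bits match a codeword, that symbol can be emitted.

    # Hypothetical prefix code (not the VLC1 of the text): no codeword is a
    # prefix of any other, so decoding is instantaneous and unambiguous.
    code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}

    def encode(symbols):
        return ''.join(code[s] for s in symbols)

    def decode(bits):
        inverse = {v: k for k, v in code.items()}
        out, buffer = [], ''
        for bit in bits:
            buffer += bit
            if buffer in inverse:            # a complete codeword has been seen
                out.append(inverse[buffer])  # emit it immediately: no decoding delay
                buffer = ''
        return ''.join(out)

    message = 'abacad'
    bits = encode(message)
    print(bits)          # 01001100111 -- the frequent symbol 'a' costs only one bit
    print(decode(bits))  # abacad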

Information Theory and Coding

Information Theory and Coding by John Daugman. Publisher: University of Cambridge. Description: The aims of this course are to introduce the principles and applications of information theory. The course will study how information is measured in terms of probability and entropy, and the relationships among conditional and joint entropies; how these are used to calculate the capacity of a communication channel, with and without noise; coding schemes, including error correcting codes; how discrete channels and measures of information generalize to their continuous forms; and so on.

The declaration of the copyright is at the bottom of this page. Please don't hesitate to contact me if you have any questions or if you need more information. I use these lecture notes in my course Information Theory, which is a first-year graduate course. The notes are intended to be an introduction to information theory, covering the following topics. These notes are still undergoing corrections and improvements. If you find typos or errors, or if you have any comments about these notes, I'd be very happy to hear them!


• Mutual information between ensembles of random variables.
• Why entropy is the fundamental measure of information content.
• Source coding theorem; prefix codes.


Fundamentals in Information Theory and Coding

IJICoT publishes state-of-the-art international research that significantly advances the study of information and coding theory and their applications to cryptography, network security, network coding, computational complexity theory, communication networks, and related scientific fields that make use of information and coding theory methods. The missions of IJICoT are to improve international research on topical areas by publishing high-quality articles, and to expose the readers to the recent advances in these areas. Information theory and its important sub-field, coding theory, play central roles in theoretical computer science and discrete mathematics. As coding theory occupies an important position within the field of information theory, the focus of IJICoT is on publishing state-of-the-art research articles relating to it.





CSC 310 - Information Theory (Jan-Apr 2002)
