### Course: Computer science theory > Unit 3

Lesson 2: Modern information theory

# Introduction to channel capacity

Introduction to Channel Capacity & Message Space. Created by Brit Cruise.

## Want to join the conversation?

• Why didn't they use multiple strings?
• Multiple wires/strings would increase the cost. The cost doesn't increase all that much when the distance is short, say from one room to another. But when the distance is longer, say from London to New York, the cost to build and maintain the wire increases significantly. So it makes more sense, economically, to find more efficient ways to pass a message along just one wire than to string up several.
• Even without electrical noise, wouldn't a message with over a million possible signals be very difficult to translate into plain English? The receiver would then need a list of over 1,000,000 possible signals.
• No, this is an easy task for a computer to accomplish. You would put logic gates on the end of the cable and do automatic calculations to build a truth table, translating each signal into binary.
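The translation step the answer describes can be sketched out: with N distinct signal levels, each received signal maps to a fixed-width binary label of ceil(log2 N) bits, and a lookup (the "truth table" mentioned above) does the rest. A minimal Python sketch; the signal indices here are purely illustrative, not anything from the video:

```python
import math

def bits_per_symbol(num_signals: int) -> int:
    """Bits needed to label each of `num_signals` distinct signal levels."""
    return math.ceil(math.log2(num_signals))

def to_binary(signal_index: int, num_signals: int) -> str:
    """Translate one received signal level into a fixed-width bit string."""
    width = bits_per_symbol(num_signals)
    return format(signal_index, f"0{width}b")

# A million-level alphabet needs only 20 bits per symbol (2**20 = 1,048,576).
print(bits_per_symbol(1_000_000))  # -> 20
print(to_binary(5, 1_000_000))     # -> '00000000000000000101'
```

So the receiver never needs a human-readable list of a million entries; 20 bits per symbol is enough to index all of them.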
• When he was talking about Edison's system, wouldn't it not work very well, since it used a switch? I'll use a light switch as my example: if the switch is on, it is impossible to turn it on again without first turning it off. Wouldn't the same limitation apply to Edison's system?
• Excellent question. I should include this in the upcoming challenge. One way to think about these problems (sending multiple 1's in a row) is to introduce a time division. For example, every second we could measure the state to see if it is on or off. If we leave our light on continuously for 3 seconds, it would represent 1 1 1. Does this help?
• What are the "late effects of the Big Bang" (if I have written the phrase down correctly), and how do they distort the message received?
• Cosmic microwave background radiation. It is a source of radio and electrical interference that reaches us from almost everywhere in space; everywhere people have listened with suitable equipment, such as a directional radio antenna like a dish. The theory is that it came from a time close to the start of the universe, and it is supported by several other observations. Radio noise also comes from nuclear fusion in our Sun, but the rest of the universe is much bigger than our Sun. Hope that helps!
• What is considered an acceptable amount of background noise for a commonly used cable? Does shielding on a cable allow us to squeeze more channels onto it by narrowing the "no man's land" we need to leave between channels?
• I'm pretty sure the acceptable amount of background noise depends on the application the cable is used for: sending analog phone audio over a telephone wire is different from sending digital computer data over an Ethernet cable. There are also other factors, such as the distance the signal must travel. EM background noise is not a concern when transmitting from a modem to a computer a few feet away, but it is a concern when sending a signal a long distance, say from Europe to Asia, where you have to consider the signal moving over various sizes and types of cable, amplification of the signal voltage, and many other things that all affect the background noise and the clarity of the signal.

The shielding on cables reduces EM noise caused by interference from outside the cable, such as electric or magnetic fields, and so allows for a clearer signal. However, shielding only protects against interference from outside the wire; there is still unavoidable EM noise within any electrical system.

The goal is for a clear signal to travel from point A to point B along a channel (the wire). The clearer the signal, the more information can travel through the channel at a time.
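That trade-off between noise and information rate is captured by the Shannon–Hartley theorem, C = B · log2(1 + S/N): for a fixed bandwidth B, lowering the noise (a clearer signal) raises the channel's capacity. A small sketch with made-up example numbers:

```python
import math

def capacity(bandwidth_hz: float, snr: float) -> float:
    """Shannon-Hartley channel capacity in bits per second.

    `snr` is the linear signal-to-noise power ratio (not decibels).
    """
    return bandwidth_hz * math.log2(1 + snr)

# Example: a 3 kHz voice channel at 30 dB SNR (power ratio 1000).
print(round(capacity(3000, 1000)))  # -> 29902 bits per second
```

Shielding, in these terms, raises S/N, which raises C, which is exactly "more information through the channel at a time."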
• I am working my way through "A Mathematical Theory of Communication", C. E. Shannon's 1948 paper, and have hit a snag that I am hoping someone can help me out with. I am usually pretty good at picturing this kind of thing, but in the case below I must be missing something.

In Part I Shannon says that N(t) represents the number of sequences of duration t (sequences in this case being strings of symbols from a defined alphabet). He goes on to say that N(t) = N(t-t1) + N(t-t2) + ... + N(t-tn), where t1, …, tn represent the transmit times of the symbols S1, …, Sn of the given alphabet. He further states that the total number N(t) is equal to the sum of the number of sequences ending in S1, S2, ..., Sn and that these sequences are the N(t-ti) terms given above.
My problem is that I do not understand how N(t) for some arbitrary t could equal the series given above. It seems to me that N(t) would equal a*N(t1) + b*N(t2) + ... + n*N(tn) where the values a, …, n are the (not necessarily unique) constants needed to reach N(t). Furthermore I am confused as to how N(t-ti) represents a sequence ending in Si for some symbol i.

Please understand that I am not criticizing or challenging Shannon, I am trying to understand how the math fits together and feel like I am missing some obvious point.
Any help would be greatly appreciated.
Mark
• N(t) is the number of allowed signals of duration t.

Suppose we have only two signals S1 and S2.
S1 requires t1 seconds to transmit and S2 takes t2 seconds to transmit.
We could find N(t) by adding:
- the number of allowed signals of duration t that end with S1
- the number of allowed signals of duration t that end with S2

We could generate all the allowed signals of duration t that end with S1 by:
- finding ALL the signals that take (t-t1) seconds to transmit
- adding our S1 signal (which takes t1 seconds) onto the end of each of those signals
- (t-t1) + t1 = t, so each of these new signals would be the right length

The number of signals that take (t-t1) seconds to transmit is N(t-t1). This is also the number of new signals, ending in S1, that we just generated.

Similarly, we could generate all the allowed signals of duration t that end with S2 by:
- finding ALL the signals that take (t-t2) seconds to transmit
- adding our S2 signal (which takes t2 seconds) onto the end of each of those signals
- (t-t2) + t2 = t, so each of these new signals would be the right length

The number of signals that take (t-t2) seconds to transmit is N(t-t2). This is also the number of new signals, ending in S2, that we just generated.

From before:
N(t)=
number of allowed signals of duration t that end with S1
+ the number of allowed signals of duration t that end with S2

Which is:
N(t) = N(t-t1) + N(t-t2)

The same logic applies for any number of signals.

Hope this makes sense!
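The recursion in the answer above is easy to check numerically. A sketch with two hypothetical symbols, S1 taking 1 second and S2 taking 2 seconds, so N(t) = N(t-1) + N(t-2), with N(0) = 1 counting the empty sequence:

```python
from functools import lru_cache

T1, T2 = 1, 2  # transmit times of S1 and S2 (illustrative values)

@lru_cache(maxsize=None)
def N(t: int) -> int:
    """Number of allowed sequences of total duration exactly t."""
    if t < 0:
        return 0   # no sequence has negative duration
    if t == 0:
        return 1   # the empty sequence
    # Every sequence of duration t ends in either S1 or S2.
    return N(t - T1) + N(t - T2)

print([N(t) for t in range(1, 7)])  # -> [1, 2, 3, 5, 8, 13], Fibonacci-like
```

Note there is no need for the a, b, ..., n multipliers from the question: each sequence of duration t is counted exactly once, by the term N(t - ti) corresponding to its last symbol Si.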
• Would it be possible to calculate the channel capacity of spoken words? The transmitting end (the speaker) and the receiving end (the listener) are both limited in message difference and symbol rate; I'm just wondering if it would be possible to calculate.
• What about the message(s)? How would you include that in your calculations?
My first thought is that you would have to use some sort of average...
Really, it depends what the speaker says to the listener!
• Wow, so what does this have to do with Morse code?
• Morse code requires very little channel capacity because it sends only one simple symbol (a dot, a dash, or a space) at a time, and each symbol carries a small amount of information. In fact, the telegraph is one of the first examples Shannon analyzes: because dots and dashes take different amounts of time to transmit, the channel's capacity can be worked out with the same N(t) recursion discussed earlier in this thread.
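Shannon defines the capacity of such a channel as C = lim log2 N(t) / t, the growth rate of the number of allowed sequences. Continuing the earlier two-symbol example (durations of 1 and 2 seconds, so N(t) = N(t-1) + N(t-2), which is Fibonacci-like), the rate converges to log2 of the golden ratio, about 0.694 bits per second. A sketch, with the durations chosen only for illustration:

```python
import math

def capacity_estimate(t: int) -> float:
    """Estimate C as log2(N(t) / N(t-1)) for a channel whose two
    symbols take 1 and 2 seconds, so N(t) = N(t-1) + N(t-2)."""
    prev, cur = 1, 1  # N(0), N(1)
    for _ in range(t - 1):
        prev, cur = cur, prev + cur
    return math.log2(cur / prev)

print(round(capacity_estimate(50), 4))  # -> 0.6942
golden = (1 + 5 ** 0.5) / 2
print(math.log2(golden))  # log2 of the golden ratio, ~0.6942
```

Real Morse code has more than two symbol durations (and spacing rules), so its actual capacity differs, but the same recursion-and-growth-rate method applies.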