Video transcript
Philip: What I would like to know is the answer to a very simple question. Are we alone as conscious beings in this entire buzzing 400 billion star galaxy, one of 10 to the tenth other galaxies? It seems pretty implausible.

Voiceover: The modern search for extraterrestrial intelligence, or SETI, began in 1959 when two Cornell physicists, Giuseppe Cocconi and Philip Morrison, published an article in Nature that outlined the possibility of using radio and microwaves to communicate between the stars. For this to work, researchers assume that any intelligent civilization will have discovered how to transmit radio waves. This assumption is based, in part, on the fact that it took human beings only 80 years to figure out how to do this following Alessandro Volta's discovery of batteries and electric current. The premise is quite simple. We can create radio waves by sending short pulses of electric current through wires. These waves can then travel beyond our atmosphere and out through space with little interference. Once these radio, or electromagnetic, waves are sent out, they can be received using antennas and turned back into electrical pulses. In 1960, Frank Drake conducted the first search for radio signals from other solar systems. Much like turning a radio dial, Drake was trying to scan the sky in order to tune in to faint radio signals that might be coming from other worlds. Though this first attempt did not result in any noteworthy findings, researchers have been scanning the stars ever since.

Carl: So, there is some chance that in the next few decades we will get a signal from some spectacularly distant, spectacularly exotic civilization, and everything on earth will, as a consequence, change. That is possible.

Kent: The interesting thing about the SETI search is that although we claim that we're looking for extraterrestrial intelligence, we can't actually define what an intelligent signal is. A starting point is to say we look for a signal which nature will not produce by any mechanism that we understand.

Voiceover: An important question emerges. How can we ever know if such a signal is coming from an intelligent source? At the first SETI meeting in 1961, John Lilly proposed that researchers study dolphin languages to help them learn more about what extraterrestrial signals might be like. Much of this early work culminated in the research conducted by Laurance R. Doyle and Brenda McCowan. Doyle and McCowan's work is based on the assumption that if there is some common trait in both human and nonhuman communication systems, then extraterrestrial communication systems should also share this trait. They analyzed long sequences of vocalizations from both adult and baby humans and dolphins. In the case of dolphins, this was a set of whistles and clicks. Human babies learn to speak through a process of vocal imitation, slowly amassing a larger and larger set of sound signals. (baby babble) However, during what is known as the babbling phase, the sounds produced are more or less random, or unstructured. To see this, Doyle and McCowan plotted the different sound signals against their frequency, or how often they occur; they then ordered the symbols in the graph according to frequency, with the most common symbols on the left and the least common on the right. With human babies, the slope is nearly level, as all sound signals produced occur fairly evenly, or randomly.
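To make the rank-frequency plot described above concrete, here is a minimal Python sketch (not from the video) that counts how often each symbol occurs, orders the symbols from most to least common, and estimates the slope of the resulting log-log plot. The toy word list is a hypothetical stand-in for the transcribed vocalizations Doyle and McCowan analyzed.

```python
# A minimal sketch of the rank-frequency ("Zipf") plot described above.
# The corpus here is a made-up list of symbols (words, whistle types, etc.);
# real analyses use large sets of transcribed recordings.
from collections import Counter
import math

def rank_frequency(symbols):
    """Return (rank, frequency) pairs, most common symbol first."""
    counts = Counter(symbols)
    ordered = [freq for _, freq in counts.most_common()]
    return list(enumerate(ordered, start=1))

def loglog_slope(pairs):
    """Least-squares slope of log(frequency) versus log(rank)."""
    xs = [math.log(rank) for rank, _ in pairs]
    ys = [math.log(freq) for _, freq in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# For real adult speech, Zipf's Law predicts a slope near -1;
# a uniform "babbling" sequence gives a nearly level line (slope near 0).
words = "the quick brown fox jumps over the lazy dog the fox the dog".split()
print(loglog_slope(rank_frequency(words)))
```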
However, as children learn the language of their parents, they narrow their sound repertoire to fit the model to which they are exposed. As a result, structure is imposed on our speech patterns. Consequently, the slope of this graph converges towards a 45-degree angle, or a -1 slope on a log-log chart. This is known as Zipf's Law. What's interesting is that this same slope appears in different human languages, and seems to be a pattern all humans share. Even more surprising is that this pattern also emerged when Doyle and McCowan analyzed nonhuman communication. They found that the whistle sounds produced by baby dolphins seemed to be distributed in a pattern similar to human babies during the babbling phase. At first, the dolphin whistles are more or less unstructured. By the time they reach adulthood, the graph converges on a slope of about -1, which is the same as humans. (dolphin cries)

But this sort of analysis looks only at individual signals, or words, and doesn't say anything about the deeper linguistic structure of either human or dolphin communication systems. Let's clarify what we mean by deeper structure with an example. If I select a random word from a book and ask you to guess what it is, you will have no clue what it might be and will have to simply guess. If, instead, I give you a random word from a book and ask you to predict the word that follows it, you will still have to guess; however, you'll likely find it easier to guess this second word. If I give you a sequence of two words from a book and ask you to predict a third word, it becomes more predictable still. If you are given a sequence of three words, this trend continues, and guessing becomes easier still. It seems that as a result of the structure of language, the freedom of choice decreases as we look at longer and longer strings of words. Intuitively, this is why we can finish each other's sentences.

Now, to quantify this, Doyle and McCowan borrowed Claude Shannon's measure of entropy, which, as you recall, is a measure of surprise. Entropy can be thought of as the number of yes or no questions, or bits, required to guess the next word. As predictability increases, the information entropy decreases. Doyle and McCowan calculated the entropy for different depths, or orders: single words are first order, groups of two words are second order, groups of three words are third order, and so on. Then they plotted the value of information entropy against this depth. For adult humans, as we might expect, they found that the information entropy decreases as the depth increases. This is a result of the rule structure in our communication systems. Amazingly, Doyle and McCowan did the same thing with dolphin languages, and found the same pattern. Dolphin communication systems display decreasing information entropy as we look at longer sequences of sound signals. This means that in dolphin communication systems, there is a rule structure which emerges, and arguably, this allows dolphins to finish each other's sentences, too. (dolphin cries) Contrast this with a purely random sequence of symbols, which has a flat line on this information entropy graph, since there is no conditional dependence between symbols. Because this pattern emerges in both human and nonhuman communication systems, Doyle and McCowan have suggested that this decreasing entropy is essential for the transmission of what we might call knowledge.
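To illustrate the entropy-versus-order calculation just described, here is a small Python sketch (again not from the video, and using a made-up toy corpus). For each order it estimates, from empirical n-gram frequencies, how many bits are needed to guess the next symbol given the previous ones: a structured sequence shows the decreasing entropy described above, while a random sequence stays roughly flat.

```python
# A minimal sketch of the "entropy vs. order" analysis described above.
# For order n it estimates the conditional entropy of the next symbol given
# the previous n-1 symbols, as the difference of empirical block entropies.
# The tiny corpus is illustrative; published analyses use far more data.
from collections import Counter
import math
import random

def block_entropy(symbols, n):
    """Shannon entropy (in bits) of the empirical n-gram distribution."""
    grams = [tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def conditional_entropy(symbols, order):
    """Bits needed to guess the next symbol given the previous order-1 symbols."""
    if order == 1:
        return block_entropy(symbols, 1)
    return block_entropy(symbols, order) - block_entropy(symbols, order - 1)

structured = ("the dog chased the cat and the cat chased the mouse " * 200).split()
random_seq = [random.choice(sorted(set(structured))) for _ in structured]

for order in (1, 2, 3):
    print(order,
          round(conditional_entropy(structured, order), 2),  # drops as order grows
          round(conditional_entropy(random_seq, order), 2))  # stays roughly flat
```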
As Doyle puts it, if we get a narrow-band signal, a -1 slope on the Zipf plot, and higher order Shannon entropies, we've nailed it. All of this rests on a simple premise: that aliens, too, can finish each other's sentences.

Dark-haired man: How can I help you to communicate with us?

Alien: How I am able to speak. By assimilation of former photosynthesis. I have been able to incorporate certain of Dr. Wyman's functional processes.

Dark-haired man: Was Dr. Wyman's death necessary?

Alien: Through his sacrifice, I can communicate.

Voiceover: Without even understanding the language or culture of the other human or nonhuman species, Claude Shannon's entropy gives us a measure that allows us to detect the presence of these structural rules, regardless of meaning. Claude Shannon's model of information was born out of a desire to save time over the telegraph wires. This led to the global unit of information, the bit, a single difference, now the backbone of our information economy. The increasingly digital and networked technologies that drive our modern world point to the power and persistence of Claude Shannon's ideas. The bit is here to stay, and the study of information theory will continue to play a key role in our technological and social innovations on earth, and perhaps beyond.

Carl: I think even if there's a plausible argument for a few we ought to keep looking. I'd even go further than that. If there's a plausible argument that there isn't anybody out there, bearing in mind that we can be wrong, we ought to keep looking, because the question is of the most supreme importance. It calibrates our place in the universe. It tells us who we are. So it is worthwhile trying to find other civilizations, I would say, no matter what.