Right now, if you think about it, your cell phone and your PC are not nodes on a worldwide network; they're spokes into the network, and the network has nodes inside it.
But all of these different devices are going to become nodes, meaning they won't just be sending and receiving your own messages; they'll also be cooperating by forwarding other people's messages, becoming part of a pervasive web of communication.
The practical implication for people using this is that you're not going to be carrying around objects. Computing is going to become invisible. It's going to be woven into your belt buckle and your clothing.
We'll be spending quite a bit of our time in virtual-reality environments. Environments like Second Life are really a crude harbinger of what is to come. We'll have virtual-reality environments that are quite competitive with real reality. They'll be very realistic and fully immersive, and just as in Second Life, you can be someone else; you won't have to look the same in these virtual environments.
And they won't just be playthings. Second Life already has a real economy; people do real business transactions and have real romances there. And we'll be doing that across a whole panoply of virtual-reality environments.
Ten years from now, the biotechnology revolution I alluded to earlier will be becoming quite mature. It will be a thousand times more advanced than it is today. We collected the genome four years ago, and we're already making pretty good progress, although it's still early, on reverse-engineering it, which is to say, understanding how biology actually works. But that will be at an advanced stage ten, and certainly fifteen, years from now.
My prediction is that within 15 years from now, we'll be adding more than a year every year to our remaining life expectancy. That's a kind of tipping point. It's not a guarantee of immortality, but rather than the sands of time running out, they'll be running in: if you make it through a year, something like 15 months will have been added to your remaining life expectancy. Right now we're adding about three or four months per year to life expectancy. When that goes over a year per year, that will be the tipping point.
We're actually learning to re-engineer our bodies. If I ask you how long a car lasts, well, most cars don't last that long. But in fact you can take care of a car, and there are cars that are 80, even 100 years old. If you really address what goes wrong with them, fix it, and replace different parts, they keep going, and we're going to be able to do that with our bodies. The reason that analogy doesn't hold for our bodies today is that we don't have all the plans; we don't understand all of the mechanisms. That's what's proceeding exponentially. Within 15 years, we really will have enough of the plans to reach this tipping point in life extension.
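The tipping-point arithmetic above can be sketched as a toy model. Each calendar year you age one year, but medical progress adds some number of months back to your remaining life expectancy; the crossover happens when the months added exceed twelve. The starting expectancy and time horizon here are arbitrary illustration values, not figures from the interview.

```python
# Toy model of the longevity tipping point described above: aging subtracts
# a year per year, while progress adds back a (hypothetical) number of months.

def remaining_expectancy(start_years, months_added_per_year, horizon):
    """Track remaining life expectancy year by year (a simplified model)."""
    remaining = start_years
    history = []
    for _ in range(horizon):
        remaining -= 1                              # one year of aging elapses
        remaining += months_added_per_year / 12.0   # progress adds time back
        history.append(remaining)
    return history

# Below the tipping point (about 4 months/year, today's rate per the
# interview), the sands of time still run out:
declining = remaining_expectancy(30, 4, 10)

# Above it (15 months/year, the future figure mentioned), remaining
# expectancy grows every year you survive:
growing = remaining_expectancy(30, 15, 10)
```

The crossover is purely a matter of the sign of `months_added_per_year - 12`; everything else in the model is decoration.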
If you look at 2009-2010 cell phone technology, I believe you will be seeing speech-to-speech translation in there. You do have large-vocabulary speech recognition on the phone today, for quite a few applications, and there are millions of people actually creating written documents using speech recognition. That's not quite ubiquitous, because there are hundreds of millions, if not billions, of computer users, and only millions of them using large-vocabulary speech recognition to create text.
But that's growing, and the accuracy is getting quite good, particularly with a little training, say 10 to 15 minutes on your voice. So that's coming. And I think speech-to-speech translation on cell phones and services like Skype will actually be very widely used before speech-driven text creation is, even though, as I say, millions of people already use the latter. That's because when you create text you more or less want a perfect document, whereas in casual conversation people can make an occasional error and you can compensate for it from context.
And just the opportunity to speak with people who don't speak the same language overcomes a big barrier.
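The speech-to-speech translation being described is conventionally a pipeline of three composed stages: speech recognition, machine translation, and speech synthesis. The sketch below uses placeholder stubs (none of these function names are a real API); the point is the shape of the pipeline, and why compounding errors matter less in conversation than in dictation.

```python
# Minimal sketch of a speech-to-speech translation pipeline. All three
# stage functions are hypothetical stubs; a real system would wrap actual
# recognition, translation, and synthesis engines.

def recognize_speech(audio, language):
    """Speech-to-text stage (stub): would transcribe the audio."""
    return audio["transcript"]          # pretend the audio carries its text

def translate_text(text, source, target):
    """Machine-translation stage (stub): a tiny phrasebook for illustration."""
    phrasebook = {("en", "fr"): {"hello": "bonjour"}}
    return phrasebook[(source, target)].get(text, text)

def synthesize_speech(text, language):
    """Text-to-speech stage (stub): would produce audio; here, a label."""
    return f"<spoken {language}: {text}>"

def speech_to_speech(audio, source, target):
    # The pipeline is the three stages composed in order. Errors at each
    # stage compound, which is why casual conversation (where listeners
    # compensate from context) tolerates them better than document dictation.
    text = recognize_speech(audio, source)
    translated = translate_text(text, source, target)
    return synthesize_speech(translated, target)

result = speech_to_speech({"transcript": "hello"}, "en", "fr")
```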
We do use standard formats, but the standard formats are continually changing, and they are not always backwards compatible. It's a nice goal, but in practice it doesn't work.
I have electronic information that goes back through many different computer systems. Some of it I now cannot access. In theory I could, with enough effort, find people to decipher it, but it's not readily accessible. The further back you go, the more of a challenge it becomes.
And despite the goal of maintaining standards and forward or backwards compatibility, it doesn't really work out that way. Maybe we will improve that. Hard-copy documents are actually the easiest to access. Fairly crude technologies like microfilm or microfiche, which basically store images of documents, are very easy to access.
So ironically, the most primitive formats are the ones that are easiest.
So something like an Acrobat document, which basically tries to preserve a flat document, is actually a pretty good format and is likely to last a pretty long time. But I am not confident that these standards will remain. I think the philosophical implication is that we have to really care about knowledge. If we care about knowledge, it will be preserved. And this is true of knowledge in general, because knowledge is not just information. Each generation preserves the knowledge it cares about, and of course a lot of that knowledge is preserved from earlier times, but we have to re-synthesize it, re-understand it, and appreciate it anew.
The narrowness is going to gradually get less narrow, and one source of human-level AI, with the breadth and generality of human intelligence, is going to be understanding the human brain itself. That's another area in which we see exponential progress. The spatial resolution of brain scanning is doubling every year, and the amount of data we are collecting on the brain is doubling every year. Most importantly, we're showing that we can actually understand this data and turn it into working models and simulations. An increasing number of brain regions have been modeled and simulated, and these simulations are gaining in sophistication and precision.
This includes areas of the auditory cortex, the visual cortex, the cerebellum. IBM has a project to simulate a slice of the cerebral cortex, which is arguably the most important region, where we do our abstract reasoning.
And I made the case in the book that within 20 years we will have models and simulations of all the regions. There are many benefits to that goal: by understanding how the brain works, we'll be able to fix it better. But most importantly, it will expand the AI toolkit.
And it's not my view that we would actually have to do this; I think we would achieve human-level AI even if we never looked at the brain at all. On the other hand, I think we will get there faster, with more supple algorithms, by understanding the best example we have of intelligence, which is the human brain.
The term "strong AI" actually originally came from John Searle who is a critic of artificial intelligence. He was actually making a different point, which is about consciousness. But the term has stuck.
A lot of terms have stuck that we might have preferred not to be the term of choice, like "artificial intelligence", because it makes it sound as if it's not real intelligence.
"Artificial intelligence" is real intelligence. And virtual reality is a real form of reality. You and I are now engaged in a form of auditory reality, but that doesn't mean it's not a real conversation. I can't say, "That's not a real agreement I made with you last night; that was virtual reality."
But anyway, "strong AI" has come to refer to human-level AI: artificial intelligence that could pass the Turing Test, the test Alan Turing devised half a century ago, in which a human judge interviews an AI and a human, or maybe several of each, over what he called teletype lines, basically instant messaging. If, after a suitably long period of time, he or she cannot tell who is the AI and who is the human, then the AI is said to have passed the test. It has actually held up as really the only good test we have of human-level, or so-called strong, AI.
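The protocol just described can be sketched schematically: blind the judge to which respondent is which, collect transcripts, and ask for a verdict. The participants below are trivial stubs (entirely hypothetical, and trivially easy to unmask); the point is the shape of the test, not the intelligence of the players.

```python
# Schematic of the Turing Test protocol: a judge sees two anonymized
# transcripts and must name which respondent is the machine.

import random

def run_turing_test(judge, machine, person, questions):
    """Blind the judge to which respondent is which; return its pick."""
    slots = [machine, person]
    random.shuffle(slots)                 # the judge cannot rely on ordering
    transcripts = {
        label: [(q, respondent(q)) for q in questions]
        for label, respondent in zip(("X", "Y"), slots)
    }
    guess = judge(transcripts)            # "X" or "Y": which one is the AI?
    return slots[("X", "Y").index(guess)]

# Trivial participants, just to exercise the protocol:
ai_stub = lambda q: "beep boop"
human_stub = lambda q: "well, let me think about that"

def keyword_judge(transcripts):
    """A judge that flags robotic-sounding answers."""
    answers_x = " ".join(a for _, a in transcripts["X"])
    return "X" if "beep" in answers_x else "Y"

accused = run_turing_test(keyword_judge, ai_stub, human_stub, ["How are you?"])
# This stub AI is unmasked every time; an AI would pass the test when the
# judge can do no better than chance over many such rounds.
```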