After the weighty seriousness of my last two posts, I thought I would explore the "lighter" side of the news today. Let's look at some typical human silliness.
It goes without saying that without the awesome wonders of technology, our species probably wouldn't be here on this formerly hospitable blue-green planet. After the Cambrian "explosion" 543 million years ago, animal life thrived (despite some large setbacks) even though no truly "smart" animal evolved for almost all of that time. If some group of Australopithecines hadn't started using primitive stone tools about 3.4 million years ago, and started making them some one million years later, those bipedal apes likely would have died off, and big-brained, technology-loving hominids wouldn't be here doing all the silly stuff they do.
Humans love technology. And they're good at it. They're clueless about themselves, but they can build you a better can opener or shale rock fracker. The extreme lover of technology believes that so-called existential risks, especially those posed by advanced technology, are a grave danger to the human future. I always find this hilarious, especially in light of the fact that these clueless but vain, self-absorbed humans are fouling their own nest at an alarming and accelerating rate.
Regardless of this disturbing real-world trend, which only a blind monkey could miss, some Really Smart Guys at Cambridge University are worried about a future Skynet taking over the world. "Cambridge to study technology's risk to humans" reads the Associated Press headline.
LONDON — Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?
Philosophers and scientists at Britain's Cambridge University think the question deserves serious study. A proposed Centre for the Study of Existential Risk [CSER] will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could "threaten our own existence," the institution said Sunday.
"In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology," Cambridge philosophy professor Huw Price said.
When that happens, "we're no longer the smartest things around," he said, and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us."
The smartest things around? I guess we humans are the "smartest" things around, but seriously, how low has the bar been set? Smart compared to what? Marine invertebrates? A toaster?
Fears that machines could overtake humans have long been the subject of science fiction — the computer HAL in the movie "2001: A Space Odyssey," for example, is one of film's best-known computer threats...
Price is co-founding the project together with Cambridge professor of cosmology and astrophysics Martin Rees and Jaan Tallinn, one of the founders of the internet phone service Skype.
According to Huw Price, Skype founder Jaan Tallinn has said that he sometimes feels he is more likely to die from an AI accident than from cancer or heart disease. Talk about believing your own bullshit!
Price does admit many people—like me, and you, I hope—consider worrying about the Rise of the Machines to be a "flaky" concern.
Price acknowledged that many people believe his concerns are far-fetched, but insisted the potential risks are too serious to brush away.
"It tends to be regarded as a flaky concern, but given that we don't know how serious the risks are, that we don't know the time scale, dismissing the concerns is dangerous. What we're trying to do is to push it forward in the respectable scientific community," he said.
While Price said the exact nature of the risks is difficult to predict, he said that advanced technology could be a threat when computers start to direct resources towards their own goals, at the expense of human concerns like environmental sustainability.
Human concerns like environmental sustainability?
That's some serious silliness right there, but I'll leave it alone because I really love this next part. I love it so much that I need the red font.
He compared the risk to the way humans have threatened the survival of other animals by spreading across the planet and using up natural resources that other animals depend upon.
Humans have threatened the survival of other animals by gobbling up all the resources!
That's certainly true, but apparently, in Huw Price's overactive imagination, these big-brained bipeds have not also threatened their own survival.
How else could he also believe that it is reasonable to expect that "some time in this or the next century intelligence will escape from the constraints of biology"?
Did I mention technological optimists (pessimists?) are clueless?
Bonus Video — Don't worry about it, these Cambridge guys are on the case.
YAJM (Yet Another JFC Moment)!