Scientists: Humans and machines will merge in future



phyzik
07-15-2008, 09:56 AM
Some pretty scary stuff, sounds like they are talking about turning us into the freaking Borg! :lol

http://www.cnn.com/2008/TECH/07/15/bio.tech/index.html

LONDON, England (CNN) -- A group of experts from around the world will hold a first-of-its-kind conference on global catastrophic risks on Thursday.
(Photo caption: Some experts say humans will merge with machines before the end of this century.)

They will discuss what should be done to prevent these risks from becoming realities that could lead to the end of human life on earth as we know it.

Speakers at the four-day event at Oxford University in Britain will talk about topics including nuclear terrorism and what to do if a large asteroid were to be on a collision course with our planet.

On the final day of the Global Catastrophic Risk Conference experts will focus on what could be the unintended consequences of new technologies, such as superintelligent machines that, if ill-conceived, might cause the demise of Homo sapiens.

"Any entity which is radically smarter than human beings would also be very powerful," said Dr. Nick Bostrom, director of Oxford's Future of Humanity Institute, host of the symposium. "If we get something wrong, you could imagine the consequences would involve the extinction of the human species."

Bostrom is a philosopher and a leading thinker of transhumanism -- a movement that advocates not only the study of the potential threats and promises that future technologies could pose to human life but also the ways in which emergent technologies could be used to make the very act of living better.

"We want to preserve the best of what it is to be human and maybe even amplify that," Bostrom told CNN.

Transhumanists, according to Bostrom, anticipate a coming era where biotechnology, molecular nanotechnologies, artificial intelligence and other new types of cognitive tools will be used to amplify our intellectual capacity, improve our physical capabilities and even enhance our emotional well-being.

The end result would be a new form of "posthuman" life with beings that possess qualities and skills so exceedingly advanced they no longer can be classified simply as humans.

"We will begin to use science and technology not just to manage the world around us but to manage our own human biology as well," Bostrom told CNN. "The changes will be faster and more profound than the very, very slow changes that would occur over tens of thousands of years as a result of natural selection and biological evolution."

Bostrom declined to try to predict an exact time frame when this revolutionary biotechnological metamorphosis might occur. "Maybe it will take eight years or 200 years," he said. "It is very hard to predict."

Other experts are already getting ready for what they say could be a radical transformation of the human race in as little as two decades.

"This will happen faster than people realize," said Dr. Ray Kurzweil, an inventor and futurist who calculates technology trends using what he calls the law of accelerating returns, a mathematical concept that measures the exponential growth of technological evolution.

In the 1980s Kurzweil predicted that a tiny handheld device would be invented sometime early in the 21st century allowing blind people to read documents from anywhere at anytime -- earlier this year such a device was publicly unveiled. He also anticipated the explosive growth of the Internet in the 1990s.

Now Kurzweil is predicting the impending arrival of something called the Singularity, which he defines in his book on the subject as "the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots."

"There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality," he writes.

The Singularity will approach at an accelerating rate as human-created technologies become exponentially smaller and increasingly powerful and as fields such as biology and medicine are understood more and more in terms of information processes that can be simulated with computers.

By the 2030s, Kurzweil tells CNN, humans will become more non-biological than biological, capable of uploading their minds onto the Internet, living in various virtual worlds and even avoiding aging and evading death.

In the 2040s, Kurzweil predicts non-biological intelligence will be billions of times better than the biological intelligence humans have today, possibly rendering our present brains obsolete.

"Our brains are a million times slower than electronics," said Kurzweil. "We will increasingly become software entities if you go out enough decades."

This movement towards the merger of man and machine, according to Kurzweil, is already starting to happen and is most visible in the field of biotechnology.

As scientists gain deeper insights into the genetic processes that underlie life, they are able to effectively reprogram human biology through the development of new forms of gene therapies and medications capable of turning on or off enzymes and RNA interference, or gene silencing.

"Biology and health and medicine used to be hit or miss," said Kurzweil. "It wasn't based on any coherent theory about how it works."

The emerging biotechnology revolution will lead to at least a thousand new drugs that could do anything from slowing the aging process to reversing the onset of diseases like heart disease and cancer, Kurzweil said.

By 2020, Kurzweil predicts a second revolution, in the area of nanotechnology. According to his calculations, it is already showing signs of exponential growth as scientists begin to test first-generation nanobots that can cure Type 1 diabetes in rats or heal spinal cord injuries in mice.

One scientist is developing something called a respirocyte -- a robotic red blood cell that, if injected into the bloodstream, would allow humans to do an Olympic sprint for 15 minutes without taking a breath or sit at the bottom of a swimming pool for hours at a time.

Other researchers are developing nanoparticles that can locate tumors and one day possibly even eradicate them.

And some Parkinson's patients now have pea-sized computers implanted in their brains that replace neurons destroyed by the disease -- new software can be downloaded to the mini computers from outside the human body.

"Nanotechnology will not just be used to reprogram but to transcend biology and go beyond its limitations by merging with non-biological systems," Kurzweil told CNN. "If we rebuild biological systems with nanotechnology, we can go beyond its limits."

The final revolution leading to the advent of Singularity will be the creation of artificial intelligence, or superintelligence, which, according to Kurzweil, could be capable of solving many of our biggest threats, like environmental destruction, poverty and disease.

"A more intelligent process will inherently outcompete one that is less intelligent, making intelligence the most powerful force in the universe," writes Kurzweil.

Yet the invention of so many high-powered technologies and the possibility of merging these new technologies with humans may hold both peril and promise for the future of mankind.

"I think there are grave dangers," said Kurzweil. "Technology has always been a double-edged sword."

midgetonadonkey
07-15-2008, 10:17 AM
I want robot spider legs.

Richard Cranium
07-15-2008, 10:35 AM
[embedded YouTube video]

anakha
07-15-2008, 10:42 AM
As long as we don't end up like this...

http://img28.picoodle.com/img/img28/4/7/15/f_robocop204m_052b5d2.jpg (http://www.picoodle.com/view.php?img=/4/7/15/f_robocop204m_052b5d2.jpg&srv=img28)

thispego
07-15-2008, 10:47 AM
really not that scary

jag
07-15-2008, 10:53 AM
"Other experts are already getting ready for what they say could be a radical transformation of the human race in as little as two decades."


Technological changes that drastic don't happen over a period of 2 decades. The "predictions" of these scientists are nothing more than fantasy at this point.

1369
07-15-2008, 12:44 PM
http://www.collider.com/uploads/imageGallery/Blade_Runner/blade_runner.jpg

Does not approve.

Spurminator
07-15-2008, 01:01 PM
Domo arigato Mr. Roboto

http://images.amazon.com/images/P/B000002GF6.01._SCLZZZZZZZ_.jpg

Dex
07-15-2008, 01:22 PM
Pacemakers, people.

The future is now.

MannyIsGod
07-15-2008, 01:23 PM
"Other experts are already getting ready for what they say could be a radical transformation of the human race in as little as two decades."


Technological changes that drastic don't happen over a period of 2 decades. The "predictions" of these scientists are nothing more than fantasy at this point.

Kurzweil talks about what he calls the law of accelerating returns. Basically, the rate of technological progress itself grows exponentially. Think about it this way: it took us thousands of years to go from the stone age to the iron age, yet barely half a century to go from the first airplane to the first spaceflight. A year today sees much more technological growth than a year did a decade ago, and ten years from now it will see exponentially more.
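
To make the compounding concrete, here's a toy back-of-the-envelope in Python. It's only a sketch, not Kurzweil's actual model, and the two-year doubling time is an illustrative, Moore's-law-like assumption:

# Toy illustration of compounding ("accelerating") returns.
# Assumes capability doubles every `doubling_time` years -- an
# illustrative figure, not Kurzweil's.

def capability(years: float, doubling_time: float = 2.0) -> float:
    """Relative capability after `years` of compounding growth."""
    return 2 ** (years / doubling_time)

for y in (10, 20, 30):
    print(f"after {y} years: ~{capability(y):,.0f}x today's level")

# Linear intuition says 30 years of progress = 3x one decade's worth;
# compounding says ~33,000x. That gap is why these forecasts sound wild.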

MannyIsGod
07-15-2008, 01:25 PM
In the 1980s Kurzweil predicted that a tiny handheld device would be invented sometime early in the 21st century allowing blind people to read documents from anywhere at anytime -- earlier this year such a device was publicly unveiled. He also anticipated the explosive growth of the Internet in the 1990s.

Anyone who read the thread on other planets a few weeks ago saw my posts in there on this very subject trumpeting Kurzweil. The dude has some radical ideas of what is going to happen and he's doing a decent job with his "predictions" so far.


Gonna be an interesting time to be alive.

phyzik
07-15-2008, 03:28 PM
Anyone who read the thread on other planets a few weeks ago saw my posts in there on this very subject trumpeting Kurzweil. The dude has some radical ideas of what is going to happen and he's doing a decent job with his "predictions" so far.


Gonna be an interesting time to be alive.

Ya, I'm particularly interested in the "cheat death" part.... :downspin:

JoeChalupa
07-15-2008, 03:30 PM
There are many already walking around with non-human parts, so big deal.

Jimcs50
07-15-2008, 04:09 PM
TPark has been working on this project for years. He has been fucking his vacuum cleaner since he was 16.


:p:

InRareForm
07-15-2008, 04:33 PM
good luck figuring out the complete picture of neuroscience.

phyzik
07-15-2008, 04:43 PM
TPark has been working on this project for years. He has been fucking his vacuum cleaner since he was 16.


:p:

that is a disturbing image.

MannyIsGod
07-15-2008, 04:47 PM
good luck figuring out the complete picture of neuroscience.

Are you saying we'll never know exactly how the brain works?

InRareForm
07-15-2008, 04:54 PM
Are you saying we'll never know exactly how the brain works?

Nope, IMO. There have been advances and there is hope, but the brain is something that can't be figured out (in the big picture) by science. We can figure out what neuron pattern does this, or what neurons do what, or what part of the brain lights up when so-and-so happens, but we will never figure out why and how a physical process gives rise to complex entities such as consciousness, for example.

MannyIsGod
07-15-2008, 05:52 PM
Nope, IMO. There have been advances and there is hope, but the brain is something that can't be figured out (in the big picture) by science. We can figure out what neuron pattern does this, or what neurons do what, or what part of the brain lights up when so-and-so happens, but we will never figure out why and how a physical process gives rise to complex entities such as consciousness, for example.

I could take out the brain and insert tons of other items that people in the past have said we'd never figure out. I think that's a pretty foolish statement to make.

What in particular do you think is so complex about the brain that we'll never figure out? Are you just thinking this because we can't figure it out NOW? Do you know of any neuro-scientists who would agree with your point of view?

Ed Helicopter Jones
07-15-2008, 05:59 PM
I could take out the brain and insert tons of other items that people in the past have said we'd never figure out. I think that's a pretty foolish statement to make.

What in particular do you think is so complex about the brain that we'll never figure out? Are you just thinking this because we can't figure it out NOW? Do you know of any neuro-scientists who would agree with your point of view?



I can say with all certainty that no one will ever figure out the female brain.

Shaolin-Style
07-15-2008, 06:11 PM
I've been ready for this since the first time I played Deus Ex.

"I've got implants up to my eyeballs"

Gotta watch out for those crazed templars though.

jag
07-15-2008, 07:46 PM
Kurzweil talks about what he calls the law of accelerating returns. Basically, the rate of technological progress itself grows exponentially. Think about it this way: it took us thousands of years to go from the stone age to the iron age, yet barely half a century to go from the first airplane to the first spaceflight. A year today sees much more technological growth than a year did a decade ago, and ten years from now it will see exponentially more.

The real issues as far as time is concerned are discovery and implementation...


Show me the exponential growth from 1990 to 2010 compared to the growth from 1900 to 1920.

Do some research on military equipment. You'll find that the military is in the process of implementing technology discovered in the early 90's. Discovery is one thing, but implementation is another.

Kurzweil is a wishful thinker who would like some funding.

Mister Sinister
07-15-2008, 08:08 PM
You will all be assimilated.

MannyIsGod
07-15-2008, 09:46 PM
The real issues as far as time is concerned are discovery and implementation...


Show me the exponential growth from 1990 to 2010 compared to the growth from 1900 to 1920.

Do some research on military equipment. You'll find that the military is in the process of implementing technology discovered in the early 90's. Discovery is one thing, but implementation is another.

Kurzweil is a wishful thinker who would like some funding.

Uh, take one look at microprocessor development? The speed of PCs? Computer development in general? The CPU you have in your cell phone is now EXPONENTIALLY faster than the one we had in major computers in 1990.

Even if I use your military example, they are far more advanced today than they were even 5 years ago - especially when you talk about their communications and computer equipment. The fact that they are slow to deploy isn't a knock on how fast our technology develops but on the bureaucracy that comes with anything in government. You also don't know what their best equipment is capable of.

As for Kurzweil, I'm fairly certain he's not asking for funding from anyone considering his inventions and books probably have him living quite well.

I just want to get something straight though: You ARE arguing that technology today improves at the same rate it did at the turn of the 20th century? Is that correct?

Cry Havoc
07-15-2008, 10:17 PM
http://img.photobucket.com/albums/v116/whisperingstorm/borg3d_000.jpg

"They say we're going to become machines."

"SILLY HUMANS. YOUR SCIENCE IS DUMB."

Cant_Be_Faded
07-15-2008, 10:19 PM
BRING
IT
ON


I remember reading up on this stuff as a kid; now we have a good chance of actually living through it! I'd become a cyborg in an instant. Fo....sho.
What's really cool is that this article alludes to a possible future cyberpunk world...Chiba City Blues....real surfing-the-matrix type stuff. William Gibson predicted this stuff over two decades ago.

InRareForm
07-15-2008, 10:49 PM
What in particular do you think is so complex about the brain that we'll never figure out? Are you just thinking this because we can't figure it out NOW? Do you know of any neuro-scientists who would agree with your point of view?

the complexity lies in the rise of consciousness, the qualia of sensory experience, and the like.

Here are a few cases and philosophers who argue that things such as consciousness will be too hard to explain in physical-science terms.

The "what it is like to be" problem:

Thomas Nagel (1974) argues that conscious experience is subjective, and thus permanently inaccessible to objective scientific understanding. He invites us to ponder ‘what it is like to be a bat’ and urges the intuition that no amount of physical-scientific knowledge (including neuroscientific) supplies a complete answer. Nagel's intuition pump has generated extensive philosophical discussion. At least two well-known replies make direct appeal to neurophysiology. John Biro (1991) suggests that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993a) delves deeper into existing knowledge of bat physiology and reports much that is pertinent to Nagel's question. She argues that many of the questions about bat subjectivity that we still consider open hinge on questions that remain unanswered about neuroscientific details. One example of the latter is the function of various cortical activity profiles in the active bat.



The explanatory gap (David Chalmers):

More recently philosopher David Chalmers (1996) has argued that any possible brain-process account of consciousness will leave open an ‘explanatory gap’ between the brain process and properties of the conscious experience. This is because no brain-process theory can answer the "hard" question: Why should that particular brain process give rise to conscious experience? We can always imagine ("conceive of") a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes consciousness replete with all the features we commonly experience. The fact that the hard question remains unanswered shows that we will probably never get a complete explanation of consciousness at the level of neural mechanism. Paul and Patricia Churchland (1997) have recently offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing brains like ours but wholly lacking in conscious experience. But the more one learns about how the brain produces conscious experience--and a literature is beginning to emerge (e.g., Gazzaniga, 1995)--the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. This is not just bare assertion. The Churchlands appeal to some neurobiological detail. For example, Paul Churchland (1995) develops a neuroscientific account of consciousness based on recurrent connections between thalamic nuclei (particularly "diffusely projecting" nuclei like the intralaminar nuclei) and cortex. Churchland argues that the thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid-eye movement) sleep, and other "core" features of conscious experience. In other words, the Churchlands are claiming that when one learns about activity patterns in these recurrent circuits, one can't "imagine" or "conceive of" this activity occurring without these core features of conscious experience. (Other than just mouthing the words, "I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming/...").



The qualia problem:

A second focus of skeptical arguments about a complete neuroscientific explanation of consciousness is sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colors of visual sensations are a philosopher's favorite example. One famous puzzle about color qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans not to differ neurophysiologically, while the color that fire engines and tomatoes appear to have to one subject is the color that grass and frogs appear to have to the other (and vice versa). A large amount of neuroscientifically-informed philosophy has addressed this question. (C.L. Hardin 1988 and Austen Clark 1993 are noteworthy examples.) A related area where neurophilosophical considerations have emerged concerns the metaphysics of colors themselves (rather than color experiences). A longstanding philosophical dispute is whether colors are objective properties existing external to perceivers or rather identifiable as or dependent upon minds or nervous systems. Some recent work on this problem begins with characteristics of color experiences: for example, that color similarity judgments produce color orderings that align on a circle (Clark 1993). With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colors with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colors with activity in opponent processing neurons does. Such a tidbit is not decisive for the color objectivist-subjectivist debate, but it does convey the type of neurophilosophical work being done on traditional metaphysical issues beyond the philosophy of mind.

MannyIsGod
07-15-2008, 11:01 PM
A theory of consciousness requires an explanation of how and why some brain process causes consciousness replete with all the features we commonly experience. The fact that the hard question remains unanswered shows that we will probably never get a complete explanation of consciousness at the level of neural mechanism.

This is pure and utter bullshit imo. How can he sit there and say that because we don't understand it now we never will?

The first philosopher you posted seems to think that it will be impossible to imitate human consciousness since it is subjective. If that's the case, then every single life form ever has a unique consciousness that can never be replicated because it is uniquely subjective to them.

If you want to take that stance then its fine, but that is in no way arguing that we can't perform human brain level activity and imitate it directly in a synthetic manner. What it states is that whatever we create - even if conscious - will never be the same as us.

InRareForm
07-15-2008, 11:09 PM
Here is something interesting as well. I love this stuff because I minored in philosophy with lots of courses in philosophy of mind.


Chinese room argument

Searle's Chinese room argument tries to show that strong AI is false. But how can anyone show it to be false if we don't know what the human mind's program is? How can one know it a priori - before any empirical tests have been given? This is the ingenious part of Searle's argument. The idea is to construct a machine which would be a zombie (i.e. not mental) with any program. If such a machine existed, strong AI would be false, since no program would ever make it mental.

But how does one construct such a machine? And worse still, how would we actually know whether it has thoughts or not? This is the second problem, which Searle solves by having us implement the machine ourselves. If we implement the program, we will know whether it is mental or not. The Chinese room argument therefore has a thought-experiment part. This is presented next.

Suppose you are in a closed room which has two slots. Through slot 1 somebody gives you Chinese characters which you don't recognize as words, i.e. you don't know what these characters mean. You also have a huge rulebook which you use to construct new Chinese characters from those that were given to you, and finally you push these new characters out through slot 2. In short:

1. Chinese characters come in, 2. you use the rulebook to construct more Chinese characters, and 3. you put those new characters out.

In essence, this is just like a computer program: it takes an input, computes something, and finally emits an output. Suppose further that the rulebook is such that people outside the room can converse with you in Chinese. For example, they send you the question 'how are you?' and you, following the rulebook, give a meaningful answer. So far, the program simulates a human being who understands Chinese.

One can even ask the room 'do you understand Chinese?' and it can answer 'yes, of course,' despite the fact that you, inside the room, do not understand a word of what is going on. You are just following rules, not understanding Chinese.

The crucial part is this: given any rulebook (= program), you would never understand the meanings of the characters you manipulate. Searle has constructed a machine which cannot ever be mental. Changing the program means only changing the rulebook, and you can clearly see that this does not increase your understanding. Remember that strong artificial intelligence states that, given the right program, any machine running it would be mental. Well, says Searle, this Chinese room would not understand anything... there must be something wrong with strong AI.
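
The 'rulebook = program' analogy can be made concrete with a toy Python sketch of the room as pure symbol lookup. The dict below is a made-up stand-in for Searle's enormous rulebook; the point is only that the program maps symbols to symbols with no notion of what they mean:

# A toy "Chinese room": symbols in through slot 1, rulebook lookup,
# symbols out through slot 2. The tiny RULEBOOK dict is a hypothetical
# stand-in for Searle's huge rulebook.

RULEBOOK = {
    "你好吗？": "我很好，谢谢！",      # "How are you?" -> "I'm fine, thanks!"
    "你懂中文吗？": "当然懂！",        # "Do you understand Chinese?" -> "Of course!"
}

def room(symbols_in: str) -> str:
    """Pure symbol manipulation: match the input shape, emit the mapped shape."""
    return RULEBOOK.get(symbols_in, "请再说一遍。")   # "Please say that again."

print(room("你懂中文吗？"))   # prints a fluent-looking answer...
# ...yet nothing in this program "knows" what any character means; it only
# matches shapes, which is exactly the man's situation in the room.
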
Criticism

Searle has presented his views; it is time for other philosophers and cognitive scientists to offer their comments and criticism. The criticism is presented in the form of a dialogue. The cognitive scientist's comments are grounded in published commentaries on Searle's arguments, and Searle's replies are based on his responses to that criticism. The exchanges themselves, however, are fictive.

Cognitive Scientist (CS from now on): I'm impressed. You have certainly given an exceptional argument which raises many profound questions concerning the foundations of artificial intelligence. But how can you insist that we can never come up with thinking machines? Might it be that our present computers and programs are still too simple (Sloman & Croucher 1980)? Maybe our present computers are just too slow (Dennett 1987)?

Searle: This is not a matter of any machines, future prospects or the speed of your computers. It has nothing to do with the hardware. Strong artificial intelligence says that all that matters is software.

CS: I see your point. But I still find that your Chinese room is not analogous to computers, as you claimed. In fact, you have since written that there is no such thing as intrinsic syntax in nature (Searle 1993, Jacquette 1990): why do you postulate that such an entity exists in computers? Sure, computers are programmed with syntactical programs, but in their essence they are just hardware. And a program is transformed into electrical impulses, which is hardware in your vocabulary. So I think the Chinese room argument has nothing to do with computers.

Searle: On that point I was wrong when I first introduced the argument in 1980 (note that this is my interpretation; I don't think Searle has admitted this). I compared the Chinese room to Roger Schank's computer simulations (1977). However, as I said, my argument has nothing to do with hardware or computers; it's about programs. But it still denies strong artificial intelligence, cognitivism and cognitive science.

CS: But the 'intrinsic intentionality' (mentality) you are talking about... it is a private experience... why would we want to introduce objective criteria for such a subjective experience (Wilensky 1980, Jacquette 1989)? In fact, we have this 'other minds problem' stating that it is ultimately impossible to know that someone else has any subjective experiences. We cannot observe other people's thoughts. I think you want too much - the Turing test (1950) would certainly be enough! I think this casts doubt on the importance of your argument. Whether any machine will be mental, we can never know for sure.

Searle: Agreed. But what's the point of strong AI then? If strong AI claims that any system with the right program would be mental, it is clearly a metaphysical hypothesis in the same sense... However, I can present my Chinese room argument without the other minds problem. My argument shows that the person in the Chinese room doesn't understand Chinese. We rely on our own experiences when we verify this fact. So this argument holds whether there are conscious minds other than mine (common sense) or not (solipsism). In that sense, it's about the ontology of mentality, not about epistemology (Searle 1993b). And in cognitive science, one simply presupposes that minds exist.

CS: Curiously, we still have a feeling that your argument is just an 'intuition pump' of some kind (Dennett, Block, Hofstadter 1980). You have just constructed a purposeful and intuitive situation, aiming at a false conclusion. Think about the earth: people were once quite convinced that it was flat. Nobody believes that today. There we have an example of a wrong intuition - maybe that's what the Chinese room is all about. Anyway, why should we - as scientists - rely on any intuitions or thought experiments?

Searle: It is a plain fact that I don't understand Chinese in the Chinese room. There is no intuition about that. The argument relies on that fact.

CS: Hmmm... You refer to this concept of 'intentionality' in your argument. You are claiming that the man in the room does not have intentionality?

Searle: Right. Since I don't understand Chinese, I don't know what those Chinese characters mean. This means essentially the same as lacking intentionality. Intentionality, on the other hand, is one essential form of mentality (Brentano).

CS: But isn't it quite problematic to try to distinguish between intentional and non-intentional creatures? What kinds of mental states would you ascribe to monkeys, cats, insects... (Menzel 1980)? Perhaps there is no point in saying that one thing is mental and another is not?

Searle: My argument has nothing to do with that. I am not trying to find any criteria for mentality.

CS: Yes, but we meant: what if intentionality exists in some other form (Jacquette 1989, Carleton 1984)? It might turn out that human intentionality is not the only possibility. For example, in your original 1980 article, you said that 1) there are some processes P which produce intentionality and 2) X is intentional, and derived from this that X has those processes P. This is plainly an invalid inference - it affirms the consequent.

Searle: Can you show me where I said so?

CS: You wrote (1980) that "it says simply that certain brain processes are sufficient for intentionality" and "any mechanism capable of producing intentionality must have causal powers equal to those of the brain."

Searle: ...well... you are right about that. I made a logical mistake. However, it does not destroy my argument (1990b), since it has nothing to do with the strong AI thesis. The strong AI thesis states that all that matters is the program. It does not distinguish between alien and human intelligence. In fact, I think they are not trying to build a machine with alien intentionality; it is human mentality they are after.

CS: What if the whole 'feeling' of intentionality is only some sort of illusion (Bridgeman 1980, Ringe 1980)?

Searle: Illusion? Don't you have a concrete feeling of being intentional and mental?

CS: Of course, but I asked whether it matters in any sense; perhaps it is just an illusion and doesn't exist as we think it does?

Searle: So? Then strong AI advocates are also talking about some kind of illusion... it doesn't matter whether it is an illusion or not... we are debating it either way. Who cares if it's an illusion? And above all, what a marvellous and wonderful illusion it is! Let us simply say that intentionality is in fact an illusion, and continue our debate keeping in mind that we are talking about illusions.

CS: Well, ok, just a thought... but you said that this rulebook is so complex and huge that the room can answer any Chinese question meaningfully?

Searle: Yes.

CS: So it must be possible for the room to learn?

Searle: Of course.

CS: I think that if the room is able to answer any question meaningfully, we must simply say that it understands (Rorty 1980). Cheating completely is not cheating anymore! The problem vanishes.

Searle: Going back to Turing's proposal? If you want, you can define the concept of 'understanding' behaviorally. Then understanding means the same as behaving as if you understood. I have called this as-if intentionality, or observer-relative intentionality. It is for that reason that we must distinguish between as-if intentionality and intrinsic intentionality.

CS: Oh, I see... so, Turing was talking about observer-relative understanding?

Searle: Precisely. However, Turing never mentioned anything about understanding. He was talking about intelligence.

CS: These conceptual issues again... is it necessary to say that consciousness and mentality are necessary for intelligence, semantics and intentionality (Korb 1991, Dretske 1990)? I think that even semantics can exist without any conscious 'intrinsic' experience.

Searle: You can say so. But what I mean... I simply don't use these words in that sense...

CS: Yes, I know. Well, I think a killer counterargument has just come to mind... you said that it is a plain fact that the man in the room does not understand Chinese and therefore has no intentionality?

Searle: Right.

CS: But isn't it also a fact that he is intentional when he uses his rulebook and carries out its orders? He is clearly another human being (Boden 1988, Chrisley 1995, Rapaport 1986), and one can conclude that even pure syntactical symbol manipulation would then require intentionality! The Chinese room would not work at all if the man died or became unconscious.

Searle: You made a good point. The person is not intentional where Chinese is concerned, but of course he understands something about the rulebook. You are right about that.

CS: So isn't there something quite wrong with your argumentation, then? One can conclude, in the same manner as you do, that computers need to understand the rules they use in computations...

Searle: From one example you cannot infer to all. If the man in the Chinese room has intentionality, it does not follow that all computers must have intentionality.

CS: But now you are saying that the man in the room is intentional, aren't you?

Searle: In a sense. But he or she does not understand Chinese, and that is the fact we must concentrate on. OK, I have to admit that the thought experiment is naturally such that a person is there with his or her intentionality. But you must realize that this argument is about Chinese, not about understanding the rulebook.

CS: Can you make your point more explicit? It is no longer clear to me what your argument says.

Searle: My argument says... that if there is an entity which does computation, such as a human being or a computer, it cannot understand the meanings of the symbols it uses. It does not understand the input it is given or the output it emits.

CS: Fine. I can accept that, at least temporarily. But how can one be so sure that the room as a whole does not understand? How can you, being only a part of the system, know about the system as a whole (Weiss 1990, Copeland 1993; see also Searle 1980)? Think about a single neuron in your head. Do you think it is conscious? And yet, quite miraculously, the system of many neurons becomes conscious (Dyer 1990). Maybe the understanding is not in the manipulator but in the rulebook (Abelson 1980), or in the researcher who made the rulebook (Schank 1980)?

Searle: If the room keeps bothering you, we can leave it out. Suppose that the man memorizes the whole rulebook and starts working outdoors. There is no difference, and there are no other subparts where the intentionality could mysteriously hide (Searle 1980). And if the understanding is in the rulebook or in the researcher, you are talking about weak artificial intelligence.

CS: What about indexicals? If you ask the room 'What is this?', it cannot answer. The answer cannot be found in the rulebook (Carleton 1984, Ben-Yami 1993).

Searle: That assumes that the only input to the machine is language.

CS: So, we must imagine, instead of a passive room, a robot of some kind with perception, the ability to move, sensory modalities, etc. (Bridgeman 1980, Fodor 1980, Boden 1988, Dyer 1990)? A Chinese robot?

Searle: That does not make any difference. When you are sitting in the room and getting meaningless input, you can't even know whether it is language or something else. It is so meaningless. It does not matter whether the system is a language speaker or a humanoid Star Wars robot.

CS: I have to admit that. But what if we simulate - by means of a computer program - every neuron in our head? I mean every neuron, at any arbitrary level of detail (Pylyshyn 1980, Bridgeman 1980, Haugeland 1980, Lycan 1980, Hofstadter 1980, Wilensky 1980, Savitt 1981, Jacquette 1989, 1989b, 1990, Dyer 1990)?

Searle: We might get a robot behaving just like us.

CS: But how can you be so sure that it does not have a mental life? Think! If we simulate every neuron - what could possibly be left out? Nothing! Absolutely everything we could ever need is in that program.

Searle: First, I have to say that this idea is quite far from cognitivism and cognitive science, where we are supposed to simulate not brains but minds. It has nothing to do with the original physical symbol system hypothesis.

CS: Of course, but in your original article you said that the Chinese room argument applies to any Turing-equivalent computer simulation. Surely this brain simulation can be Turing-equivalent.

Searle: So I said...

CS: Sorry to interrupt, but let me formulate another thought experiment (it comes from Savitt 1982). Suppose a man has n neurons in his head; call him agent_n. Take one neuron from his head, formalize its functioning, and replace the missing neuron with its simulation. Call this agent_(n-1). And you can go on: take another neuron, write down its program, and make the replacement. Your Chinese room is agent_0! No neurons are left; there is only one huge program - a simulation of his every neuron. What happens in between? Does his intentionality vanish? Does it transfer to the program?

Searle: A nice story. But first, we must distinguish two systems: one with many small demons simulating every neuron, and one where a single demon runs the whole program. My Chinese room is about the latter; the former is another issue. And in the end, there is no difference. If you make the rulebook simulate all the neurons, this does not give the manipulator any sense of intentionality. He cannot even know whether the rules concern stock prices, language production, neural simulation or anything else. We have to admit that the man in the room cannot understand, whatever the program is.
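
Savitt's replacement story can be pictured as a chain of behavior-preserving substitutions. A minimal Python sketch under that reading (every class and function name here is a hypothetical illustration, not anyone's actual model):

# Toy rendering of the agent_n -> agent_0 replacement: swap biological
# neurons for simulations one at a time, preserving behavior at each step.

class BioNeuron:
    def fire(self, x: float) -> float:
        return max(0.0, x)            # stand-in for the real biology

class SimNeuron:
    def fire(self, x: float) -> float:
        return max(0.0, x)            # identical input-output behavior, no biology

def replace_one_neuron(brain: list) -> list:
    """Step from agent_k to agent_(k-1): one more neuron becomes a simulation."""
    i = next(i for i, n in enumerate(brain) if isinstance(n, BioNeuron))
    brain[i] = SimNeuron()
    return brain

brain = [BioNeuron() for _ in range(5)]        # agent_5: fully biological
while any(isinstance(n, BioNeuron) for n in brain):
    brain = replace_one_neuron(brain)          # agent_4, agent_3, ... agent_0
# agent_0 is pure program, yet its input-output behavior never changed --
# which is why "where did the intentionality go?" is so hard to answer.
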
The cognitive scientist has failed

Time goes by. The cognitive scientist goes to her chambers and starts to ponder the issue. The man in the Chinese room does not understand... is there any way around this plain fact? She is feeling depressed, being a defender of strong artificial intelligence and cognitivism. She has spent a lifetime on computer simulations, thinking that some day they would produce mentality. The Chinese room seems to be a system which cannot ever be mental.

Then suddenly everything becomes clear. She knows what's wrong with Searle's argument, and why Searle is wrong.
Revenge

Searle: You told me you have another counter-argument dealing with my Chinese room?

CS: Yes I do! And this time you cannot beat me.

Searle: We'll see...

CS: Previously I asked: what if the whole room understands Chinese, and not the person inside it? You replied that the room is unnecessary and that the man can memorize the rulebook; was this correct?

Searle: Yes, that's what I said.

CS: You also said that then there is no other system where the intentionality could hide? There is only one person, manipulating the rules from memory?

Searle: Right.

CS: There is only one person... however, as we know, persons and brains are not the same thing, are they?

Searle: No, they are not. Brains do not have sensations but persons do. Persons see red colors but brains do not. This is just a conceptual issue here, quite obvious indeed. It has nothing to do with the fact that brains cause persons or that they are identical. In our language we must distinguish between persons and brains. I have said earlier (1992) that those mental, personal phenomena are also irreducible.

CS: Now, you said that the person does not understand, but what about his brain?

Searle: ?

CS: Who says that there must be only one person per brain (McDermott 1980, Dennett 1980, McCarthy 1980, Maloney 1987, Cole 1991)? A person cannot report on his brain. For example, you are quite an unreliable source of information about your own neurons and synapses. Moreover, as you previously replied, if the person memorizes the rulebook and goes outdoors, there would be no place where the intentionality could hide. My answer is: there is such a place, namely, the person's brain.

Searle: There is no reason to think that there would ever be any other person in my brain than me!

CS: But in principle it is possible, isn't it?

Searle: Yes, but...

CS: In philosophy, anything that is possible in principle will do.

Searle: What are you suggesting? Are you saying that we cannot rely on the Chinese room since we cannot be sure - as persons - whether there is someone else in our... head... who would understand Chinese?

CS: Precisely. You are making a sort of category mistake (Boden 1988) in this argument, mixing up the brain and the mind. In computers, for example, you don't have the person level as a starting point; it might only emerge in the right circumstances. The person in the Chinese room is just not a reliable source of information about this matter.

Searle: But still... I find your speculations quite unconvincing, even if I have to admit that they are possible in principle.

CS: I know. But the best part is still coming.

Searle: Hit me.

CS: There is one way for the man inside the room to learn Chinese.

Searle: What are you waiting for? Out with it!

CS: The man can go outdoors and start to learn it, like anybody else. He can simply break down the walls and meet his programmers.

Searle: What?? But that would violate my argument..! You cannot be serious.

CS: I am dead serious. OK, I admit that it violates your original argument, but in what way? Going out of the room means essentially the same as changing the causal relations between the person and his environment. Nothing else happens! He just starts to treat his input in a different way. Why is this so crucial? Because the issue was never that the man was doing syntactical manipulation of symbols, but that he simply had the wrong causal relations to his environment (Fodor 1980). Going out of the room fixes those relations, and the system learns to understand. As you have said yourself, there is no such entity as pure syntax. It is all about causal relations. Your Chinese room argument is just a curious situation in which those causal relations are preset to prevent a certain kind of learning.

Searle: A nice thought, but isn't any computer always in the same situation?

CS: Can you prove that? I mean, that's quite another argument. Your Chinese room uses human beings, which are, in their essence, able to understand once the causal relations are changed.

Searle: But still, if some strong AI advocate gives me his ingenious computer program producing mentality, then I can run it in the Chinese room and see that there wasn't any mentality in the first place.

CS: And still, you cannot tell whether your brain, or some part of it, understands; and still, there is already intentionality present in that room, which causes the system to learn when the causal relations are changed. You simply can't prove that mentality is missing if the mentality is in fact not missing.

Searle: Confusing... quite confusing... I thought it was just a plain fact that the man cannot understand Chinese.

CS: You keep saying that over and over again. But, if there is a man manipulating the symbols, he is intentional already. If there is no man in the room, we cannot know.

Conclusions

I don't know whether the issue is settled or not. Searle himself still champions his argument. Many are against it. I would put the cognitive scientist's last remark in my own mouth and say that that is why the argument fails.

u2sarajevo
07-15-2008, 11:12 PM
Hey..... if they all look like Number 6 what's to complain about?

JoeChalupa
07-15-2008, 11:20 PM
It'll be interesting indeed.

travis2
07-16-2008, 06:14 AM
http://starwars-ep2.hp.infoseek.co.jp/battlestar_galactica_tricia_helfer.jpg

phyzik
07-16-2008, 09:36 AM
I wouldn't mind those artificial blood cells they talk about (letting you stay underwater for hours or do an Olympic sprint for 15 minutes without taking a breath). Even a few implants would be cool. I just don't know if I would be comfortable if 80% of my body was artificial (as in voluntary enhancement)... At what point can you no longer say you're still human?

Don't get me wrong, I'm really excited to hopefully be alive when all of this, if possible, comes to reality, but like the article says.... technology is a double-edged sword.

Cant_Be_Faded
07-17-2008, 12:23 AM
Hopefully the nanotechnology won't be a gateway to real-life zombies that can run in 200m bursts without breathing and hold their breath for extended periods of time, or we're fucked.

BradLohaus
07-17-2008, 01:00 AM
BRING
IT
ON


I remember reading up on this stuff as a kid; now we have a good chance of actually living through it! I'd become a cyborg in an instant. Fo....sho.
What's really cool is that this article alludes to a possible future cyberpunk world...Chiba City Blues....real surfing-the-matrix type stuff. William Gibson predicted this stuff over two decades ago.

Austin's finest would disagree...:nope

Cant_Be_Faded
07-17-2008, 09:05 AM
lol
prahps