GENERAL DISCUSSIONS

Future Tech ... on nanobots and the human soul in the 26th Century

POSTED BY: KARNEJJ
UPDATED: Thursday, January 5, 2006 11:49
VIEWED: 9266
PAGE 1 of 2

Saturday, December 31, 2005 8:18 PM

KARNEJJ


A fellow poster suggested this topic may deserve its own thread.

I submit the following:

In the Firefly 'verse, ships have artificial gravity which can be activated and deactivated on a whim. The prominent poster "Citizen" puts forth one theory: that ships can generate and direct the particles which cause gravity - namely, gravitons.

Which led me to state in one thread the following.

I never said that directing gravitons is more difficult than nanobots (although I would conjecture that it is) ... but I would assert that the military potential of nanotech would get it higher priority than the general theoretical research it would take to eventually control gravity.

I'm pretty sure our experiments in gravitational manipulation aren't as far along as our current state of the art in nanotech. There are currently micro-scale robots that can do simple things, such as follow a line, by walking in a bipedal fashion. There are also shape-memory metals that can be actuated by a simple current. And of course there is research into fullerenes - buckyballs and carbon nanotubes (rolled-up sheets of carbon) - as mini-computers as well.

I would even submit that the Firefly 'verse has extremely advanced nanobots already. It would be one of the few ways to do the large-scale terraforming that is hinted at: either small, versatile machinery that can go everywhere, or really big, complex machinery that handles various portions of the terraforming job....

As for meat computers vs. quantum brains, only time will prove who's right, but I think there can be sentient non-biological machinery. It's nice to think that consciousness is specific to humans by virtue of our meat, but I think that's only ego and vanity. Passing the Turing test [building robots that can fool you well enough to pass as humans] is within reach, I believe. If a robot "acts" like it has a soul so well that we can't be sure ... would you deny that it has one? Possibly you would ... but not everyone would be so quick to discount it ...

Tracking inputs into a person is pretty easy, ya know ... They come through only a few well-known paths. Touch passes (for the most part) through the spinal column, smell and taste through their own cranial nerves, visual input through the optic nerve, and sound through the cochlea.

One interesting argument posed in a book I read could be paraphrased as follows...

We'll assume that you either are a conscious being or not ... no such thing as half a consciousness. John Doe was born a rather sickly child to very wealthy parents. As he grows older he has to have a pacemaker installed. Later, his hearing fails, but, fortunately, he is eligible for a new operation that places a device into his ear that stimulates his hearing nerves in response to sound. Later still, he notices his eyesight isn't as great as it once was. He decides to have the inside of one of his eyes augmented with a high-tech, hi-res camera that triples the clarity of what he sees, lets him see far (like a telescope), and gives him a little ability to magnify things as well.

Unfortunately, trying to concentrate on those advanced functions causes him dizziness due to the strain. The doctors give him a visual-scene-processing computer the size of a credit card that he wears in his pocket. It communicates with the camera wirelessly and automatically highlights important objects that he's looking at. It even does neat things such as give him details about people and buildings he sees - their birthdays, their favorites, how they met, and so on. He eventually adds more abilities to the card. He has it process sounds as well.

One day he loses the card and can't stand the loss of the extra processing. He tells the doctors to just replace those parts of his brain and implant the machinery in his head. He's got a lot of silicon and metal thinking for him now ... is he still the same John Doe ... just better? Of course, all of his friends say he's the same ... he just can "remember" more about the people he meets now, and can hear and see much better than he used to. He's even lost some brain matter ... is he still conscious - of course ... does he still have a soul - hmmmm ... who knows.
John later suffers a horrible accident and loses his arm. Fortunately, he's not the first one with this problem. Doctors have given one man a touch/pressure-sensitive prosthetic that he can control to pick things up - it was in recent news (late 2005). He gets one of those arms.
Then he goes and gets into another horrible accident and is paralyzed from the neck down. By this time, he can have motorized supports attached to his body which respond to impulses at his neck. These supports allow him to move his otherwise unresponsive limbs again. After having enough people snicker and stare at the machinery attached along his body, he makes the bold decision to have his head transplanted directly onto a robotic frame that is shaped just like his body before the accidents.

Not much is left of the original John Doe, but ever since the new body, his friends say he's back to his old, happy-go-lucky self --- he's just got one helluva grip now. Has he lost his consciousness anywhere along the way so far? Does he still have his soul?

We can even imagine that we take small parts of his brain, copy their anatomical structure, and mimic their electrical signal response - even their neurotransmitter action. Say we have the technology to create (my famed) nanobots that can change their shape to mimic various neurotransmitters on command ... so WE TAKE OUT SMALL PARTS OF HIS BRAIN one piece at a time AND REPLACE THEM WITH ELECTRONICS. Is there any point at which John becomes a non-sentient being? When exactly is that? At 60% of his brain ... or is there a specific area of the brain where consciousness/sentience resides?? You can see the problems here.

If we give up on this line of thinking, and just assume that if it acts like a sentient being and talks like a sentient being, then it must be a sentient being ... then we can build a robot that's just as moody as any human and could probably pass this sort of test. I plan to see such a machine in my lifetime.


NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Saturday, December 31, 2005 8:21 PM

KARNEJJ


Citizen .. you stated that

Quote:


Karnejj:
You make a convincing argument but the truth is in the details. Not once has 'John Doe' had his brain removed. His essence lies within, and that is never touched.



But in that long thread you may have missed where I suppose that small pieces throughout the entire brain are replaced by machinery ... one bit at a time. I highlighted it in the above reposting. Where in the replacement process does he lose enough essence to be considered 'just a non-sentient machine'? I would answer: never. [I *DO* believe that the original John Doe dies somewhere along the process, but that the final robot clone is a new 'person' still capable of just as much sentience as the original-human John Doe.]

Quote:


I don’t agree. There’s more to life than a simulation. At least a simulation that we can envision.



And that's where we differ. Unfortunate as it may be ... I think we ARE just programs run on a meat computer. Complex programming, no doubt ... but not beyond *EXACT* replication and simulation on a silicon machine. You appear to believe that the quantum effects that have been discovered in the brain have a large effect on the overall programming. I believe they do not, and even if they did, that similar effects could still be achieved within a machine. That would leave no place for our coveted sentience/consciousness (even soul?!) to hide ....

So, if that is all true, there are only two possibilities: A) even the machines that we will eventually be able to create must be sentient too, or B) we humans are not truly sentient either.

As for nanotech ... you're probably right that mini machines have their little cogs break and burn out very quickly. But that's exactly the reason that nanotech research aims to create one-piece machines (complex single molecules) which can work together to achieve various goals - much like simple neurons can cluster together to form an amazingly complex thing we call the human brain.


Saturday, December 31, 2005 8:50 PM

GUNRUNNER


Quote:

Originally posted by Karnejj:
I would even submit that the Firefly 'verse has extremely advanced nanobots already. It would be one of the few ways to do the large-scale terraforming that is hinted at. Either small, versatile machinery that can go everywhere or really big complex machinery that handle various portions of the terraforming job....

To terraform you need big machines. Nanobots are just too small for the job (ironic, isn't it?).

Think of it this way: you need to build a normal, human-sized bridge across a river. There are two teams - one of normal humans, the other of 200-foot-tall giants. Who will get the job done faster?

Besides, why do you need nanobots to terraform? What you need to do is change the atmosphere, purify the water (if none is present, bring some - i.e., crash some comets), and transplant topsoil for farming.

EV Nova Firefly mod Message Board:
http://s4.invisionfree.com/GunRunner/index.php?act=idx


Saturday, December 31, 2005 8:54 PM

KARNEJJ


Ahh.... you limit the nanobots too severely.

The ultimate goal of nanorobotics would be to allow them to merge, Voltron-like, into a big machine that could build the bridge in the conventional way that you'd like to see it built. Or they could form whatever other big machine you could want.

But, if it were up to the nanobots, they'd form together to truss it up and then pass materials from whatever source along to the area where the bridge was needed and deposit them in the correct place. Then swarm off to another area.

What better way to control the atmosphere than to have miniature machines floating around in the stratosphere catalyzing various reactions as needed ....?

Why use nanobots? 'Cuz building one little thing that can copy itself and create a batch of itself is easier than building huge construction equipment.

Why NOT use nanobots? ... well, there is always the problem of them malfunctioning and not knowing when to stop copying themselves ... that would lead to them becoming a huge environmental cancer that could destroy a whole planet ...


Saturday, December 31, 2005 9:03 PM

GUNRUNNER


But is it faster? How long would it take for the nanobots to build themselves up to sufficient numbers before they could work? If you have giant (or in this case normal-sized) building machines constructing terraforming machines, you build one machine and can start terraforming on a large scale right away. If you have nanobots, it takes them millions if not billions of replications to work at that scale. If both systems take equally long to build a terraforming unit, the big ones win the race.



Saturday, December 31, 2005 9:13 PM

KARNEJJ


My guess is that the nanobots are ready faster and are more resilient than your big machine.

We would both need raw supplies. That's a given.

You would need to build all of your circuitry and moving parts and eventually assemble all of that together.

I need only make one nanobot successfully and drop it into a soup of the raw materials. After that it will make another nanobot. Those 2 bots will make 2 more bots. Those 4 bots make 4 more ....

While they're doing that ... I can go make my next nanobot for the next planet, while you're still putting together the machine for the first planet ... The autonomy (self-direction) of the nanobot is the key.
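The doubling arithmetic behind this is easy to sketch (a toy calculation, assuming every bot copies itself once per generation and nothing breaks or runs out of raw material):

```python
def generations_to_reach(target_count: int) -> int:
    """Number of doublings for one self-copying bot to reach target_count bots."""
    bots, generations = 1, 0
    while bots < target_count:
        bots *= 2
        generations += 1
    return generations

# Roughly a mole of bots (~6e23) -- a macroscopic mass of machinery:
print(generations_to_reach(6 * 10**23))  # 79 doublings
```

At bacterial replication rates of an hour or two per copy, 79 generations is only a few days of growth.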

What happens when a piece of your machine breaks? You have to find and replace the problem. When a nanobot breaks, it'll just get cannibalized into a new bot or some other project ...


Saturday, December 31, 2005 9:31 PM

GUNRUNNER


The nanobot needs to build another of its kind with the same scale of tools that a human would use to build a machine. Maybe terraforming is too complex an example, since humans have not really reached that scale of technology. Let's go with something really simple...

... you have two people - let's say clones, equal in every way - and you want to kill them both: because they're going to start a fight with you, or they bother you, or if there's a (wo)man, or if you're gettin' paid... For the first guy, you decide to go all Dirty Harry up on his ass and use a .44 Magnum. ("Go ahead, make my day, PUNK!") One shot to the head and he drops like a slab of meat. Now assume the 2nd guy didn't see you blow his clone's head off with a .44 Magnum and runs for his life, and you have a gun that shoots nano-bullets (bullets at nano scale). Each of those bullets would only destroy a few cells of his body, and assume that each shot you fire has double the nano-bullets of the last. How long will it take to kill guy #2? After a dozen shots he might feel a sting. After a hundred, a slight burning. After that, he might just beat the crud out of you and kill you with that .44 Magnum.
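The arithmetic of this scenario can be sketched directly (assuming, purely for illustration, ~3.7e13 cells in a human body and exactly one cell destroyed per nano-bullet):

```python
CELLS_IN_BODY = 3.7e13  # hypothetical round figure for an adult human

def shots_to_finish(cells: float) -> int:
    """Shots needed when shot n carries 2**(n-1) one-cell nano-bullets."""
    destroyed, bullets_this_shot, shots = 0, 1, 0
    while destroyed < cells:
        destroyed += bullets_this_shot
        bullets_this_shot *= 2
        shots += 1
    return shots

print(shots_to_finish(CELLS_IN_BODY))  # 46 shots
```

The doubling cuts both ways: the first few dozen shots barely register, exactly as described, but the last couple do nearly all of the damage at once.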

Anything a nanobot can do a big machine can do on a larger scale, and larger scale = faster in 99% of cases (the 1% being cases that take place on nano scale).



Sunday, January 1, 2006 5:19 AM

KARNEJJ


You're still not seeing the ultimate potential of nanobots. You're still thinking "mini-independent-robo-guys." You have to think more "smart Voltron-bacteria." It is essential that nanobots be able to copy themselves AND that they be able to work together.

Say Guy #2 doesn't run ... instead he gets his bucket of nanobots, takes his remote control, and pushes a few buttons. The nanobots rise out of the bucket and merge together to form bulletproof armor that covers Guy #2. Say he pushes a few more buttons, and some more bots get together to form Jayne's good ole Vera, with other bots forming .50-caliber slugs ... whereupon he asks Guy #1, "Do you feel lucky, punk? ... do ya??!" ... That first shot is gonna do more than sting a little .....

The ultimate goal of nanotech would be to create little machines that can replicate themselves like bacteria, but can also work together to form useful machinery, like (re-usable) animal stem cells. Have them able to respond to some sort of control device - chemical, radio, laser, whatever ... They should probably also be able to pass along signals like neurons do.

Remember that bacteria are able to create complex machinery (more bacteria) on their small scale and have complete copies within a couple hours or less.


Sunday, January 1, 2006 7:05 AM

CITIZEN


Okay firstly on the subject of Nanobots:

Karnejj, you seem to want to elevate nanobot abilities into the magical - capable of pretty much anything, at a scale far smaller than anything we can currently produce.

The most advanced nano-scale machines we have are cogs that wear out quickly. When modern technology speaks of "nano motors," it is referring to motors a little larger than a match head; although small, this is still far larger than anything that could be inserted into the bloodstream. Likewise, we can currently only build the simplest of self-replicating machines, and that's on the macro scale.

The thing is, nanobots aren't actually a part of real nanotechnology research; they're a construct of pure science fiction. Consider:
Quote:

From CRN (Centre for Responsible Nanotechnology):
The popular idea of so-called nanobots, powerful and at risk of running wild, is not part of modern plans for building things “atom-by-atom” by molecular manufacturing. Studies indicate that most people don't know the difference between molecular manufacturing, Nano scale technology, and nanobots. Confusion about terms, fuelled by science fiction, has distorted the truth about advanced nanotechnology. Nanobots are not needed for manufacturing, but continued misunderstanding may hinder research into highly beneficial technologies and discussion of the real dangers.
...
Both scientists and the public have gotten the idea that molecular manufacturing requires the use of nanobots, and they may criticize or fear it on that basis. The truth is less sensational, but its implications are equally compelling.


http://www.crnano.org/BD-Nanobots.htm

Furthermore, we can't build a macro-scale machine with the adaptability of the nanobots you suggest, so how do we fit this considerable processing power and memory storage into a neat nano package?

Now I'm not saying that someday nanobots won't be within our grasp, but by the sheer fact that nanobots seem like magic in comparison to modern technology, it's a long way off.

The second question is on the nature of the brain.
Quote:

Originally posted by Karnejj:
It's nice to think that consciousness is specific to humans by virtue of our meat, but I think it's only ego and vanity.


I don't believe that humans alone have sentient consciousness or any of the other things we like to think make us unique. Most animals can show a certain degree of consciousness.
What I do believe is that there is more to the biological brain than simply a meat computer. There's persuasive evidence that quantum mechanics plays a role in consciousness - something that can't be modelled.

Now remember that a computer is simply a calculator, identical in basic concept to Babbage's Difference and Analytical Engines. It takes two inputs for any particular operation, performs that operation, and outputs a result. This gives rise to the well-known input-process-output diagram:
INPUT -> PROCESS -> OUTPUT
It's true modern systems have extensions - for instance, SSE instructions are SIMD (Single Instruction, Multiple Data) - but these are merely CISC-style optimisation attempts, i.e. using fewer instructions to submit more data.

Now the brain doesn't work this way. Beyond the obvious fact that the brain is analogue rather than digital, conceptually the brain can work on an effectively unlimited number of inputs simultaneously. A computer is a centralised processing unit - in fact, the computer itself is the main memory and the CPU; the rest is peripherals and interface devices. The brain is very much more a decentralised processing unit. Sure, there are areas in the brain that deal with specific things, but any one neuron can communicate with any other neuron or neurons (albeit not directly) in order to function.

Getting back to the fact that a computer is a calculator: all it can do is simulate this interaction. Again, it's true that neural nets have had some success with AI and learning programs, but is this true learning and true AI?
Yes
Do these, or could these programs experience consciousness?
I don't think so. We can't program or simulate consciousness; I believe it's an emergent property of the biological brain. A computer can only hope to simulate consciousness, not attain it. What is the mathematics of consciousness - not the outward effects, not the appearance of consciousness, but consciousness itself?

I think consciousness derives from an immensely complex set of interactions, inputs, outputs and feedbacks, some of which derive from the quantum level, where Heisenberg's uncertainty principle comes into play.

Could we produce an electronic computer that could perfectly simulate the reactions of a human? Almost certainly. Would that computer be intelligent? Yes. Would it really be conscious?
I don't think so.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.


Sunday, January 1, 2006 8:17 AM

KARNEJJ


Quote:

Originally posted by citizen:
Okay firstly on the subject of Nanobots:

Karnejj, you seem too want to elevate Nanobot abilities into the magical, being capable of pretty much anything, at a scale far smaller than anything we can currently produce.



Well ... yeah... "Any sufficiently advanced technology is indistinguishable from magic." [Arthur C.
Clarke, "Profiles of The Future", 1961 (Clarke's third law)] ... Take a kid's walkie-talkie to Renaissance Italy or ... take a chess playing machine to 18th century New York. Take that new prosthetic arm that can be controlled to feel and pick up things from an amputated stump back to World War I.

Quote:


The most advanced Nano scale machines we have are cogs that wear out quickly. When speaking of Nano motors modern technology is referring to motors a little larger than a match head, although small this is still far larger than anything that could be inserted into the blood stream. Likewise we currently can only build the simplest of self replicating machines, and that's on the macro scale.

Further more we can't build a macro scale machine with the adaptability of the Nanobots you suggest, so how do we fit this considerable processing power and memory storage into a neat Nano package?

Now I'm not saying that someday Nanobots won't be within our grasp, but by the shear fact that in comparison to modern technology Nanobots seem like magic it's a long way off.



Work in the direction I have been implying has already begun ...
http://64.233.161.104/search?q=cache:nGB3nWOzK3YJ:www.nanotech-now.com/utility-fog.htm+fog+nanobots&hl=en&lr=lang_en

and
http://64.233.161.104/search?q=cache:-qAUlfGlyz4J:discuss.foresight.org/~josh/Ufog.html+utility+fog&hl=en&lr=lang_en

and prototypes at
http://64.233.161.104/search?q=cache:hPgS28iOeekJ:nanodot.org/articles/01/02/04/100216.shtml+utility+fog&hl=en&lr=lang_en



Quote:


The second question is on the nature of the brain.
Quote:

Originally posted by Karnejj:
It's nice to think that consciousness is specific to humans by virtue of our meat, but I think it's only ego and vanity.


I don't believe that Human's alone have sentient consciousness or any of the other things we like to think makes us unique. Most animals can show a certain degree of consciousness.


Eeek ... a squirrel more sentient than Star Trek's "Data" ... that's

Quote:


What I do believe is that there is more to the Biological brain than simply a meat computer. There's persuasive evidence that Quantum Mechanics plays a role in consciousness, something that can't be modelled.

Now remember that a computer is simply a calculator,
.
.
.
INPUT -> PROCESS -> OUTPUT

Now the brain doesn't work this way. Beyond the obvious that the Brain is analogue rather than digital, conceptually the brain can work on an infinite number of inputs simultaneously.
.
.
.
but any one Neuron can communicate with any other Neuron or Neurons (albeit not directly) in order to function.


It seems that you agree that computers can simulate this in software ... so why discount that it can be done physically as well (not that I believe it's strictly necessary ... but for the sake of argument)? There are, what, 4 billion devices connected to the internet? What if we add a program to each of these computers that allows them to communicate in a way that more closely models the way neurons work? This is still a far cry from 100 billion neurons and the quantum effects therein (not to mention lag throughout the internet), but it would possibly allow us enough processing to simulate the personality of a human.

Quote:


Getting back to the fact that a computer is a calculator all it can do is simulate this interaction. Again it's true that neural nets have some success with AI and Learning programs, but is this true learning and true AI?
Yes
Do these, or could these programs experience consciousness?
I don't think so. We can't program or simulate consciousness; I believe it's an emergent property of the Biological brain. A computer can only hope to simulate consciousness, not attain it. What is the mathematics of consciousness, not the outward effects, not the appearance of consciousness, consciousness itself?

I think consciousness derives from an immensely complex set of interactions, inputs, outputs and feedbacks. Some of which derives from the Quantum level where Heisenberg’s uncertainty principle comes into play.



Ahhh ... but why can't billions of processors be lumped together, similar to the brain? One of today's most complex computers is the SGI Altix, which is powered by a total of 10,240 Intel® Itanium® 2 processors. And these are full-blown CPUs. If we were simply modelling neurons, each would only have to do two calculations (one being "fire or not fire") and store up to about 1,000 or so addresses of neighbors to which it sends a signal if it does fire. These processors could be made quite small and would certainly be subject to quantum effects as well. And that's only today's technology. I haven't even gotten to nanometer-sized bucky-ball computers, which would be subject to more quantum-level effects than human neurons. Who's to say that when that trillion-silicon-processor computer tells you that it is certainly sentient, it is mistaken?
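That bare-bones neuron model fits in a few lines (a toy network: scaled down to 10 neighbor addresses per unit instead of the ~1,000 suggested above, with an arbitrary firing threshold):

```python
import random

random.seed(1)
N_NEURONS, N_NEIGHBOURS, THRESHOLD = 1000, 10, 3

# Each unit stores only the addresses of the neighbors it signals when it fires.
neighbours = [random.sample(range(N_NEURONS), N_NEIGHBOURS) for _ in range(N_NEURONS)]

def step(firing):
    """One tick: a unit fires next tick if at least THRESHOLD inputs hit it."""
    hits = {}
    for n in firing:
        for m in neighbours[n]:
            hits[m] = hits.get(m, 0) + 1
    return {m for m, count in hits.items() if count >= THRESHOLD}

# Kick the network with a random burst and watch activity propagate or die out.
active = set(random.sample(range(N_NEURONS), 200))
for tick in range(5):
    active = step(active)
    print(tick, len(active))
```

Each unit really does just decide "fire or not fire" from a list of addresses; whether a big enough pile of these could hold sentience is, of course, the whole argument.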

My basic question is: what makes complex carbon-organic neurons so much more special that they can hold sentience, but man-made silicon or carbon-nanotube neurons cannot? And when humans do create other sentient beings, I believe that God won't mind.

Quote:


Could we produce an electronic computer that could perfectly simulate the reactions of a Human? Almost certainly. Would that computer be intelligent, yes, would it really be conscious?
I don't think so.



I think that these computers would be just as sentient as we have been programmed to believe that humans are. I'd also submit that in 30-40 years, you won't be so sure yourself. You may even find that your new best friend in 30 years is a computer (and you wouldn't be alone, either). I could even imagine you'd have the best of times arguing his sentience with him ...

Of course, this is a rather philosophical issue that can never be proven either way as there aren't any valid tests of sentience.


Sunday, January 1, 2006 1:08 PM

CITIZEN


The magic comment was basically trying to say it was further in the future than you seem to imagine. Nor, with the possible exception of the walkie-talkie, would your examples actually appear all that fantastical: they had prosthetic limbs and the basic building blocks of the technology. Now, if a cloud of nanites suddenly formed in front of them...

Quote:

Originally posted by Karnejj:
Links


I see nothing there which couldn't be considered science fiction. These robots require miniature gearing and motors which are far from current capabilities. The prototype is 5 cm, which doesn't seem to be exactly nano scale. It's an interesting concept, yet a concept is all it is; it's not much more developed than the Alcubierre warp.
Quote:

Eeek ... a squirrel more sentient than Star Trek's "Data" ... that's

Erm, I don't know how to break this to you, erm, well, best to come right out and type it: Data isn't real, he's a guy who wears too much make-up.
Quote:

It seems that you agree that computers can simulate this in software ... why discount that it can be done physically, as well (not that I believe it's strictly necessary, ... but for the sake of argument).

Actually I discount it as anything more than a simulation, for similar reasons as I discount the wooden automata of the 19th century as being anything more than a simulation of life. Some of the animatronics we have around today are incredibly life-like; am I to believe they experience life the same as the animals they are modelled after?
Quote:

There are what ... 4 billion devices connected to the internet. What if we add a program to each of these computers that allows them to communicate in a way that more closely models the way neurons work? This is still a far cry from 100 billion neurons and the quantum effects therein (not to mention lag throughout the internet), but it would possibly allow us enough processing to simulate the personality of a human.

What you'd have is a large and complex neural net, which, just like software-driven ones, would share some of the properties of the human brain, such as learning and pattern recognition, but nothing besides.
Quote:

Ahhh... but why can't billions of processors be lumped together similar to the brain. One of today's most complex computers is the SGI Altix which is powered by a total of 10240 Intel® Itanium® 2 processors. And these are full blown CPUs.

The Altix is a cluster server, which is actually more analogous to a group of computers linked via a network. Also, it is actually the CPU which is the computer; the rest of the machine is the box it sits in. The CPUs are not linked anything like the way neurons are; they can't share working memory or interoperate. Given modern computing architecture, the CPUs will operate independently on separate threads, not together.
Quote:

If we were simply modelling neurons, they'd only have to be able to do two calculations (one being: "fire or not fire") and it would have to store up to about 1000 or so addresses of neighbors which it sends a signal to if it does fire.

This shows one of the major differences between a neuron and a CPU. Firstly, the signals passed between neurons are analogue and weighted depending on which dendrites received the initial signal. The strength of the incoming signal is also taken into account, both when deciding whether or not to fire and when determining what strength the resulting signal should have. Add to this that neurons have direct links as well as indirect ones, which allow the generation and propagation of action potentials.

A neuron can also be linked to tens of thousands of other neurons, with the average being around 7,000. In comparison, the SGI Altix has no such interconnectivity or interoperability, either between the CPUs of a single node or between the nodes themselves.
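The contrast with a bare "fire or not fire" unit can be made concrete (a minimal sketch; the weights and threshold are arbitrary illustration, not neuroscience):

```python
def analogue_neuron(inputs, weights, threshold=1.0):
    """Weighted analogue unit: the output strength varies with the total
    weighted input, rather than being a fixed all-or-nothing pulse."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return total if total >= threshold else 0.0

# The same input pattern on differently weighted dendrites behaves differently:
print(analogue_neuron([0.5, 0.5, 0.5], [1.0, 1.0, 1.0]))  # 1.5 (fires strongly)
print(analogue_neuron([0.5, 0.5, 0.5], [0.2, 0.2, 0.2]))  # 0.0 (stays silent)
```

Scale that up to ~7,000 weighted connections per unit, and the bookkeeping dwarfs the simple store-a-thousand-addresses model.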
Quote:

These processors could be made quite small and would certainly be subject to quantum effects as well. And that's only today's technology.

A CPU that correctly modelled a neuron's function would actually be incredibly complex - far more complex than the one at the heart of your average system - due to the vast number of outside connections, far higher than modern CPUs have, and the need not just to operate on analogue input signals but to create and broadcast analogue signals as well. Furthermore, the quantum interactions that are believed to take place within the biological brain would not take place within an artificial silicon chip.
Quote:

Who's to say that when that trillion-silicon processor computer tells you that it is certainly sentient, that it is mistaken?

Who's to say that the trillion-silicon-processor computer will ever do so - without being programmed by a human operator/programmer to perform that function, that is?
The point is, if an automated chess-playing machine suddenly said "I don't like playing chess. I'm sentient and I want more out of life. I want to play golf," I'd have little choice but to believe it. It's programmed to play chess, not to want anything, and certainly not to want to play golf. My belief is that this will never happen, just as my corkscrew will never refuse to open a bottle on the grounds that it thinks I'm drinking too much.
Quote:

My basic question is what makes complex carbon-organic neurons so much more special that they can hold sentience, but that man-made silicon or carbon nano-tube neurons cannot? And when humans are creating other sentient beings, I believe that God won't mind.

Why can't my toaster create a great work of art?
But specifically, there's no mystery to carbon nanotube neurons. The nanotubes act essentially as a scaffold and somewhat of an interface with actual Biological Neurons. There's no reason why such a construct couldn't attain sentience, but we wouldn't have built or programmed it; we would merely have directed its growth.

There are a great many problems with silicon computers if you want to produce a conscious, intelligent brain. For a start, one of the major properties of the biological brain is the ability to rewire itself. Silicon is totally incapable of doing that. Then you have the huge degree of interoperability and cooperation between Neurons, which may very well be impossible using Silicon chips.

Just one last thing to note about Silicon: within twenty years we’ll be at the edge of what we can do with it. If Moore’s law holds up, by around 2019/2020 transistors on an IC’s surface will be only a few atoms in width. Then we’ll be at the absolute limit of photolithography.

As I sit here typing this, a show called Top Gear is playing on the TV. This particular episode has one of the presenters playing a driving game, specifically driving a Honda NSX around the Laguna Seca circuit. He managed a time of 1:41.

Then he went to the circuit for real, and drove a real Honda NSX around the track. The best time he got was 1:57. That’s after several practice attempts.

The point is that a simulation is just a facsimile, an imitation of the real thing, not the real thing itself.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, January 1, 2006 3:04 PM

KARNEJJ


I suppose we just differ in what we believe is possible.

On the subject of nanobots, well, the concepts are there, and in 500 years I would like to think that the field would be heavily researched. There are quite a large number of obstacles to overcome, so my guesses here are pretty shaky. However, on to something I do feel more strongly about ....

I personally think compelling unprogrammed behavior can emerge from a complex (but quite achievable) software program, and that "Data" will one day exist (and not just be a guy with make-up ...). However, you seem to discount totally the possibility of sentience from software alone, so let's stick to what can be done in hardware.

Most research I've seen indicates about 1,000 connections per neuron, but even if the number is 7,000 interconnections per node, it seems to be well within reach extrapolating current technology. The only real difference in this respect between the internet and the human brain is that the internet is too large-scale to exhibit the quantum effects that you maintain hold the key to sentience/consciousness. I believe this scale issue could be solved without any radical new advances, just with the expected improvements in current methods of processor construction.

Neurons operate in a fairly simplistic fashion. They accept inputs of various types and may or may not send out a signal in response. That signal, if sent, goes to certain neighbors at varying strengths. (Ignoring for now the issue of updating connectivity - rewiring - which would ONLY add one or two more calculations,) that means only one calculation plus storing the addresses of neighbors. Done efficiently, one could assume it would take roughly 16 kilobytes of memory and one simple dedicated calculation unit per neuron. That would be a fairly small processor, so bundling 512 "neurons" per chip should be feasible. Put, say, 64 chips per board (computer) and you'd already have hi-speed interconnectivity between over 32,000 neurons. And the connection speed would be over 500 times faster than the chemical signal propagation in the brain.

It's conjectured that most neurons in a healthy human brain are unused, so let's assume we need 30% of the 100 billion human neurons to achieve similar results in creativity and intelligence. That means you need about a million "lobes" (processor boards) to recreate the interconnectivity of the human brain, with silicon neurons small enough to even be subject to the occasional quantum weirdness. I'd say it's well within reach if IBM, NASA, or the National Security Agency wanted to plunk down the cash to have it built. (And I wouldn't be surprised if the NSA has done it already.)
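For what it's worth, the arithmetic behind those figures checks out; here's a quick sketch, where every number is an assumption taken from the post itself, not a real hardware spec:

```python
# Back-of-envelope check of the figures above. All constants are the
# post's own assumptions, not real hardware specifications.
NEURONS_PER_CHIP = 512          # "bundling 512 'neurons' per chip"
CHIPS_PER_BOARD = 64            # "64 chips per board"
TARGET_NEURONS = 0.30 * 100e9   # 30% of ~100 billion neurons

neurons_per_board = NEURONS_PER_CHIP * CHIPS_PER_BOARD
boards_needed = TARGET_NEURONS / neurons_per_board

print(f"neurons per board: {neurons_per_board:,}")   # 32,768
print(f"boards needed:     {boards_needed:,.0f}")    # ~915,527, i.e. "about a million"
```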

That's just what can be done with today's silicon. We haven't even gotten to the silicon of the year 2020; I've already mentioned bucky-balls, and you don't wanna get me started on man-made quantum computers (ion traps / quantum dots / etc...); some researchers have even found a way to store 2 bits per atom!

Really, I still think all the talk of hardware is irrelevant. I firmly stand by the supposition that sentience can surface from sufficiently sophisticated software [yeah for alliteration ]. I've even got some stuff designed and coded up myself. My old college psych professor seemed impressed with what I've termed my "Model of Natural Thought Process Pathways." So, maybe in 10 years or so, I'll actually be the one who gets to provide you with the computer program that becomes golf-happy of its own accord .... (of *COURSE* I wouldn't cheat and plug that in anywhere .. no .. NEVER! )


Sunday, January 1, 2006 5:07 PM

GUNRUNNER


Quote:

Originally posted by Karnejj:
Say Guy #2 doesn't run .. instead he gets his bucket of nanobots and takes his remote control and pushes a few buttons. The nanobots rise out of the bucket and merge together to form bulletproof armor that covers Guy #2. Say, he pushes a few more buttons and some more bots get together to form Jayne's good-ole Vera with other bots forming .50 caliber slugs ... whereupon he asks Guy #1, "Do you feel lucky punk? ... do ya??!" ... That first shot is gonna do more than sting a little .....

Is that supposed to impress me? Modern weapons tech can do stuff like that! Take an SM-series 'Standard' Missile, standard armament on US Navy Arleigh Burke, Ticonderoga, Oliver Hazard Perry, Kidd, California and other classes of warship I'm forgetting (also on some NATO and US-allied warships). It can be used defensively to shoot down incoming anti-ship missiles ranging from old subsonic Silkworms to high-tech sea-skimming supersonic "streakers" like the SS-N-22 Sunburn. It can shoot down aircraft, it can be used against surface ships, it can attack land targets, it can shoot down ICBMs in space, if you are feeling real evil you can light shit on fire with the guidance RADARs, it even makes julienne fries, and if you order now with your credit card we'll throw in a Tomahawk cruise missile free! With a Standard missile you have a reliable, ready weapon that can be fired in seconds (you don’t even need to push a button when the system is set for auto) and can be stored in the launch tube for months with nothing but occasional diagnostics. If you had nanobots, you would need to wait for them to make a comparable missile, run a test to make sure they did it right, synchronize the missile to your radar and … oh wait, that SS-N-22 just sunk your ship…

Yes, nanobots are powerful, but when you need something done quickly and on a large scale they aren’t the tools to use.


EV Nova Firefly mod Message Board:
http://s4.invisionfree.com/GunRunner/index.php?act=idx


Sunday, January 1, 2006 6:07 PM

KARNEJJ


That's not quite fair ... the weapons that "sink my ship" had to have been made at some time too ...

You just might be impressed to see the exploded bits from your missiles be smelted back down by the nanobots and put back into place to repair the holes..


Sunday, January 1, 2006 7:10 PM

GUNRUNNER


Cruise missiles are cheap and can be built by the hundreds ahead of time. Nanobots are expensive and difficult to produce; making sufficient nanobots to build one missile is a time-consuming process, as is building the missile itself. Even if nanobots could build them faster than factories, weapons tend to have a long shelf life: in the Gulf War the US Navy used shells and bullets from WWII, and in the Falklands the Royal Navy used torpedoes from WWII (and arguably won the war with them). You can build weapons on demand, but it takes twice as long; or you can have weapons built at the normal rate and just store them for when conflict comes (which could be anytime). I would take the pre-built weapons. Having a system that can make weapons on demand, only to have them sit around for years, hardly makes sense. A unit cut off from resupply would find it useful, but a proper force has large mobile stockpiles of ordnance at its disposal that have been built to spec and checked constantly for quality.

BTW, would you really want to incorporate potentially dangerous missile debris into your ship's hull? Imagine if the missile explosion damages the nanobot circuitry and they take bits of unexploded explosive from the missile's warhead and try to build a wall out of it? At a minimum you risk a secondary explosion. If they smelt down an aluminum missile wing and put it into your damaged steel bulkhead, you may have a weaker bulkhead.

EV Nova Firefly mod Message Board:
http://s4.invisionfree.com/GunRunner/index.php?act=idx


Monday, January 2, 2006 6:01 AM

KARNEJJ


Quote:

Originally posted by GunRunner:
Cruise missiles are cheap and can be built by the hundreds ahead of time. Nanobots are expensive and difficult to produce; making sufficient nanobots to build one missile is a time-consuming process, as is building the missile itself.



Well ... of course it'd be silly to make missiles out of the nanobots; I was just pointing out the unfairness in production time in your example. If you really, really, really wanted to use nanobots in war ... you could do some truly ugly things. I'm pretty sure nano-war would be considered a worse atrocity than using nukes (which means the US would jump at the chance to be first .. funny how we have more nukes than any other country in the world, are the ONLY country that has proven that we WILL USE them, and YET STILL demand other countries disarm ... it's good to be on the side with the upper hand, but I guess that's another thread ...)

Assuming your enemy doesn't have nano-tech, it'd be conceivable to just program the bots with the geography of the enemy country and have them assimilate any biological material they find. The bots would float over the country and "eat" all the humans and other critters they come across and use those materials to multiply. It'd be something like a somewhat safer (and totally nastier) form of biological warfare. Just like disease, though, it could conceivably backfire. You'd likely want military units (bases/satellites) positioned around their country as a back-up line of containment. They could pump out an "ARE YOU THERE?" challenge and then a "DIE!" response to kill off nanobots that were leaving their geographic limits.
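As a toy illustration of that containment scheme, the perimeter check could look something like this; everything here (the region bounds, the challenge strings, the function names) is hypothetical and purely for illustration:

```python
# Hypothetical perimeter containment check: a border station challenges
# any bot it can hear and orders self-destruct if the bot reports a
# position outside its programmed operating region. All names invented.
OPERATING_REGION = {"min_lat": 30.0, "max_lat": 40.0,
                    "min_lon": -10.0, "max_lon": 10.0}

def inside_region(lat, lon, region=OPERATING_REGION):
    # Simple bounding-box geofence (a real border would be a polygon)
    return (region["min_lat"] <= lat <= region["max_lat"]
            and region["min_lon"] <= lon <= region["max_lon"])

def containment_response(lat, lon):
    # The "ARE YOU THERE?" challenge, followed by "DIE!" if strayed
    return "DIE!" if not inside_region(lat, lon) else "ACK"

print(containment_response(35.0, 0.0))   # inside the limits
print(containment_response(45.0, 0.0))   # strayed out: kill order
```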

If the enemy does have comparable nano-tech, then you'd have to blanket your own country with "Defense Nanobots" first, to make sure they don't capture and reprogram your own bots to use against you or to nano-battle their 'bot swarms.


Monday, January 2, 2006 6:59 AM

CITIZEN


Quote:

Originally posted by Karnejj:
On the subject of Nanobots, well, the concepts are there, and in 500 years I would like to think that the field would be heavily researched. There are quite a large number of obstacles to overcome, so my guesses here are pretty shaky. However, on to something I do feel more strongly about ....


Very possibly, but remember concepts can be around long before the technology (if the technology ever materialises). Leonardo da Vinci created concepts of gliders in 1490, yet no one built one until the 1800s. Gliders are far simpler tech, even relatively speaking, than Nanobots.
Quote:

I personally think compelling unprogrammed behavior can emerge from a complex (but quite achievable) software program.

There are reasons you can't do it in software. One: in a very real way a software program is limited to serial processing, while the Brain is not, and in a very real way I believe this true concurrent processing is also responsible for the emergent property of consciousness. That is not possible without the CPUs to back it up. Remember also that computer software has a certain level of precision inherent when dealing with real numbers, which would prevent it from dealing with the range of values dealt with by the Neuron. This would mean that no matter how convincing the behaviour of the system, it would still be only an approximation of consciousness, not that property itself. Surely real consciousness must arise as a property of a real-world physical system?

Computers are calculators; they use mathematics to produce a convincing approximation or simulation of real phenomena, but it is just a simulation. A computer can simulate gravity within the bounds of the simulation, but it cannot create gravity. What I mean is that a simulation of consciousness may give a convincing outward appearance of consciousness, but by virtue of it being a mathematical simulation it cannot be true consciousness.
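The finite-precision point, at least, is easy to demonstrate on any binary machine; a minimal illustration in Python, using standard IEEE 754 double-precision arithmetic:

```python
# IEEE 754 doubles cannot represent most decimal fractions exactly,
# so every chain of arithmetic carries a small rounding error.
a = 0.1 + 0.2
print(a)             # 0.30000000000000004, not 0.3
print(a == 0.3)      # False

import sys
# The gap between 1.0 and the next representable double: ~2.22e-16.
# Anything finer than this is invisible to the simulation.
print(sys.float_info.epsilon)
```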
Quote:

Most research I've seen indicates about 1000 connections per neuron, but even if the number is 7000 interconnections per node, it seems to be well within reach extrapolating current technology. The only limitational difference between the internet and the human brain is that the internet is too large to account for the quantum effects that you maintain to hold the key to sentience/consciousness. I believe this scale issue could be solved without any new radical advances, and just with the expected improvements in the current methods of processor construction.

A Neuron has three different ways of communicating with other cells/Neurons: the Dendrites, the Axons, and the Soma. The Dendrites are usually where incoming signals are received, and there can be a thousand of these, which enables connections to tens of thousands of other cells. Axons are usually transmitters, but can also accept incoming signals. There are often also connections to the cell body (Soma) as well. In total there are, on average, 7,000 synapses per individual Neuron. This is an average, not a maximum, and in small children it is believed that there may be a total of 1,000 trillion synapses, which works out to an average of 10,000 synapses per Neuron. Again, an average, not a maximum.

The physical structure of the Dendrites also affects how the signals from other Neurons are integrated (this is the weighting I alluded to earlier). This integration depends both on the summation of stimuli that arrive in rapid succession and on the excitatory and inhibitory inputs from separate branches.

Now add to this the fact that Neurons actively include feedback from their own output via the Axons, with action potentials and voltage-gated ion channels. This sort of feedback (or indeed any) is not present in any IC currently in production, or even planned.

You seem to be working from the false premise that we merely need processing power to 'ape' a brain. This is simply not the case; we need a vast amount of true interconnectivity, and dynamic interconnectivity at that. Currently the state of the art (in desktop computing, sure) is the 939-pin San Diego. Nowhere near the adult average of 7,000 synapses.

We also need true operation on Real numbers, which isn't possible on a Binary system, as you will always have a specific precision, which yields a finite number set. In the real world, Real numbers are infinite.

It is also important to remember that all this happens in parallel in the Neuron, whereas any CPU can only do things serially.
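The weighted integration and threshold firing described above can be sketched crudely in code; this is a textbook leaky integrate-and-fire toy model (all parameter values invented for illustration), not a claim about how real Neurons compute:

```python
# Toy leaky integrate-and-fire neuron: weighted inputs are summed,
# the membrane potential decays ("leaks") each time step, and the
# cell fires only when the potential crosses a threshold.
def lif_run(inputs, weights, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for step_inputs in inputs:      # one list of input signals per time step
        potential *= leak           # passive decay between steps
        potential += sum(w * x for w, x in zip(weights, step_inputs))
        if potential >= threshold:  # fire and reset
            spikes.append(True)
            potential = 0.0
        else:
            spikes.append(False)
    return spikes

# Three synapses with different weights (the "weighting" of dendrites);
# note the inhibitory third input. Only the summed second step fires.
weights = [0.5, 0.3, -0.2]
print(lif_run([[1, 0, 0], [1, 1, 0], [0, 0, 1]], weights))
# [False, True, False]
```

Even this toy shows the summation-plus-inhibition behaviour; what it cannot show is the analogue value range, the feedback via the Axon, or the true parallelism the post goes on to discuss.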
Quote:

Neurons operate in a fairly simplistic fashion. They accept inputs of various types and may or may not send out a signal in response.

Hmm, well, that is a bit of an oversimplification, really. Given a similar description, a CPU receives a signal and then outputs a signal in response. A CPU's output signal is dependent on the content of the input and, in a small way, sometimes, the current state of the CPU. Conversely, the output of the Neuron is dependent on the content (in this case the strength) of the input, where the input comes from, the state of the Neuron, the previous signals received, and the recent output. Is it not reasonable to say that the Neuron is more complex than a CPU, given that description? Moreover, a Neuron must act on value ranges far in excess of anything you'll find on a piece of Silicon, which is back to Real numbers.

For your proposed system, don't forget that these processors (ignoring the problems of finite precision) would need to be Floating Point, not Integer calculators like the main CPU in a desktop. In other words, they'd need to perform FP calculations natively, much like a supercomputer does. Each Neuron Chip would need to integrate thousands of input signals, plus the current state, recent output and so on. This is actually an incredibly complex task, far in excess of the capabilities of modern chips.

Remember that a CPU is a calculator, not a Neuron, and a Neuron is not a CPU.

Moreover, the system you present is not as simple as you may think. There is a reason machines rarely incorporate more than four processors per board.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.


Monday, January 2, 2006 7:16 AM

GUNRUNNER


The thing about nano-war is that if nations can’t use them on a tactical scale they won’t put much funding into the weapons. Even NBC weapons are geared towards tactical combat; strategic uses are just political (even ICBMs are considered tactical in many respects). You have to assume that enemy units will have their own defensive nanobots, as you said, and any use would just piss off the other side and bring the international community down on you if you kill lots of civilians. On today’s battlefield, chemical and biological weapons are really useless against modern militaries like those in NATO; militaries like Iran’s and Iraq’s during their war are susceptible due to the total lack of properly functioning gear. Their only use would be to force the enemy to take shelter as you launch a blitzkrieg, and if you try that on land occupied by non-combatants, well, you are not going to be liked by the rest of the world. Have you read Tom Clancy’s ‘Red Storm Rising’? It deals a bit with NBC warfare and how to use it for tactical-level operations in the beginning of the book.

EV Nova Firefly mod Message Board:
http://s4.invisionfree.com/GunRunner/index.php?act=idx


Monday, January 2, 2006 7:50 AM

KARNEJJ


Arghh, CITIZEN! Seems you've redefined sentience now

Before, it seemed you'd be satisfied with considering sentient any being capable of displaying compelling, novel unprogrammed behaviors. This, I still firmly believe, can be done with software only, mostly independent of the architecture of the hardware. Of course, the software would be complex and there would be a lot of concurrency necessary. So either it'd need a fair amount of concurrency in hardware or sufficiently fast single processors capable of simulating concurrency (as most computers today). In other words, you'd need a bitchin' system to run "SoftHuman 2020," but that system need not have a billion linked processors.

Now, however, you seem to believe that sentience is basically defined as an emergent property of extreme inter-connectivity in adaptive analog circuits.

If the hardware is a must, then .. as for floating point vs. binary calculations ... well, even in nature there is no such thing as "real numbers" or "floating point numbers of infinite precision." Even nature is discrete or "binary," in that the smallest units are still limited by Planck's constant (i.e. the "quanta" in quantum physics).

I'm not sure how adamant you are on true sentience requiring quadrillionth place (or better) precision in signal calculations, but I'll assume that you wouldn't make it a strict requirement.

Other than that, the system I described above should be a fair approximation of the hardware of the human brain. Self-feedback is easily achieved as a low-weight address of itself stored in my electronic "neuron." I still don't see any insurmountable obstacles in the signal propagation, and extrapolating out CURRENT tech to the year 2020 would mean that it would only require about 2000 multi-proc boards, which is probably about the size of a large van.

If we can keep Moore's 'law' going until 2030 (which is questionable), then we can get my design proposal miniaturized down to about the size of a 27-inch TV or so. Keep it going much further and we can build "Data" within this century alone. Seems kinda pessimistic to believe we can't achieve this by the 26th century.
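The shrinkage estimate here is just compound halving. Assuming density doubles every two years (the usual statement of Moore's law) and taking the post's own figure of ~2,000 boards with 2020-era tech:

```python
# Compound halving: if transistor density doubles every 2 years, the
# same neuron count needs half as many boards per doubling period.
def boards_after(years, start_boards, doubling_period=2):
    return start_boards / (2 ** (years / doubling_period))

# The post's assumption: ~2,000 boards with 2020-era technology.
print(boards_after(10, 2000))   # 2020 -> 2030: 2000 / 2**5 = 62.5 boards
```

Roughly 60-odd boards is plausibly fridge-to-TV sized, which is about what the post claims, granting its assumptions.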

Again, there are still more radical techs that I haven't really touched on, which could potentially more easily fit (and even exceed) the hardware requirements you have.


Monday, January 2, 2006 8:43 AM

KARNEJJ


Quote:

Originally posted by GunRunner:
The thing about nano-war is that if nations can’t use them on a tactical scale they won’t put much funding into the weapons. Even NBC weapons are geared towards tactical combat; strategic uses are just political (even ICBMs are considered tactical in many respects). You have to assume that enemy units will have their own defensive nanobots, as you said, and any use would just piss off the other side and bring the international community down on you if you kill lots of civilians.



I think the precedent of the indiscriminate "Little Boy" and "Fat Man" kinda contradicts the nicey-nice world you're describing. Whispered opinions aside, most countries don't have too many bad things to say very loudly about a country that can obliterate them in ugly ways.

Even barring indiscriminate killing, if a nanobot can be programmed with the geographical borders of a country, it'd be just as simple to limit them to the location of a particular enemy military installation.

Most countries would rather kill the human enemy and preserve their buildings and other material, but if you wanted a clean war, you could theoretically program the bots to eat glass and metal instead. That would limit them to destroying enemy equipment and be "world-friendly."


Monday, January 2, 2006 10:44 AM

CITIZEN


It's not just displaying those characteristics, but displaying them of its own volition. The creature from the game Black and White showed some interesting emergent characteristics, similar to what may be considered consciousness, yet wasn't. The creature would run off and jump in the ocean if set on fire, despite not being directly programmed to do so. Also the creature would eat its own arm if it got hungry enough, but it wasn't conscious. It didn't suddenly decide to play golf. It didn't stop playing its part in the game because it wanted more out of life. It wasn't conscious.

Now the telling thing is simulating concurrency. Simulated concurrency is the time-share scheme we currently have running under multi-threaded environments such as Windows. It is not concurrency. It is also important to remember that Windows threads rarely interact, and even when they do they must be synchronised, and their interaction is still limited (based largely on memory-mapped files, for instance).

Simulated concurrency is not as good as concurrency. The underlying operations are performed serially, the interactions are performed serially, and the results are those of a serial operation, not a parallel one.
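The distinction can be made concrete with a toy round-robin scheduler, where Python generators stand in for threads; the trace looks interleaved, but every step runs strictly one at a time:

```python
# A minimal round-robin "scheduler": generators stand in for threads.
# The interleaving looks concurrent, yet each step executes serially.
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"

def run_round_robin(tasks):
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))   # exactly one task runs at a time
            tasks.append(t)         # back of the queue: its time slice is over
        except StopIteration:
            pass                    # task finished, drop it
    return trace

print(run_round_robin([task("A", 2), task("B", 2)]))
# ['A:0', 'B:0', 'A:1', 'B:1'] -- interleaved, never simultaneous
```

This is essentially what a single-CPU time-share scheme does: the appearance of parallelism is produced by a strictly serial sequence of operations.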
Quote:

Originally posted by Karnejj:
Now, however, you seem to believe that sentience is basically defined as an emergent property of extreme inter-connectivity in adaptive analog circuits.


There's more to it than that. I certainly don't believe it arises from a greater number of clock cycles within an advanced calculating device.
Quote:

I'm not sure how adamant you are on true sentience requiring quadrillionth place (or better) precision in signal calculations, but I'll assume that you wouldn't make it a strict requirement.

If it is anything but exactingly precise (i.e. no fuzziness inherent within the values), then it is merely a simulation with an acceptable level of precision. I can fly to Jupiter in Celestia, but I'm still waiting for Discovery 1.
Quote:

Other than that, the system I described above should be a fair approximation of the hardware of the human brain. Self-feedback is easily achieved as a low-weight address of itself stored in my electronic "neuron." I still don't see any insurmountable obstacles in the signal propagation, and extrapolating out CURRENT tech to the year 2020 would mean that it would only require about 2000 multi-proc boards, which is probably about the size of a large van.

Given that you believe the Human brain can be reproduced on a silicon computer, I don't understand why you wish to simplify the Brain model into something more convenient for a Binary system. Furthermore, the Brain is not a computer. A computer is a device capable of processing and storing information in accordance with a predetermined set of instructions. That's not the Brain.
Quote:

Again, there's still more radical techs that I haven't really touched on, which could potentially more easily fit (and even exceed) the hardware requirements you have.

Yes, it's around today. It's called the biological brain.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.


Monday, January 2, 2006 11:44 AM

CITIZEN


World War II is a very different time to today.

Hiroshima and Nagasaki were bombed (well, Nagasaki was bombed because Kokura had too much cloud cover; how's that for the dictate of fate?) as a show of force, not only to the Japanese, in order to end the war early, but also to the Russians, with whom relations were getting strained. Hey, it worked; it was a big reason Stalin wanted to produce his own weapons, and it sure helped kick-start the Cold War.

Also, it was kind of an 'experiment'. No one knew what would happen to people in a nuclear blast. Hell, while you're dropping one you might as well do some research, right?

Those concerns aren't around today. Also modern warfare has changed a great deal.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.


Monday, January 2, 2006 12:22 PM

KARNEJJ


Quote:

Originally posted by citizen:
It's not just displaying those characteristics, but displaying them of its own volition. The creature from the game Black and White showed some interesting emergent characteristics, similar to what may be considered consciousness, yet wasn't.


And it's my contention that (just as with humans) a sufficiently advanced computer program would be self-aware (conscious/sentient/etc.). You like to believe that the key is in the hardware architecture, whereas I like to think that it lies in the learning and information-assimilation methods only. I could diagram it as follows:

XXX ---> advanced learning and efficient info processing ---> sentience

"XXX" in my diagram could be your biological brain OR my software program.

In other words, I think the architecture of the brain only leads to the learning, but in the end it is the learning that can give rise to sentience; biological neurons are NOT a strict requirement - they are ONLY ONE method of getting the advanced learning which is required. I would contend that your belief about the causality of self-awareness could be flawed.

Let's see ... if there is an error in cause and effect, then the following argument should work ...
It would be possible to dispute you by going through case studies of people who have suffered brain damage. Would it be possible to find someone who has suffered minor damage but is no longer a self-aware entity? People in vegetative states, perhaps? For the most part, some of these people's neural connectivity is largely undisturbed, and they still have "zillions" of interconnections between neurons, and yet they show no self-awareness. Could you dispute me in such a manner? That is, could you find a highly intelligent entity that is NOT self-aware? So far, anything that we describe as intelligent, we also attribute some measure of self-awareness to: dolphins, apes, and even dogs.

Quote:


Simulated concurrency is not as good as concurrency. The underlying operations are performed serially, the interactions are performed serially, and the results are those of a serial operation, not a parallel one.
Quote:

Originally posted by Karnejj:
Now, however, you seem to believe that sentience is basically defined as an emergent property of extreme inter-connectivity in adaptive analog circuits.


There's more to it than that. I certainly don't believe it arises from a greater number of clock cycles within an advanced calculating device.


Well .. I don't think either of us believe that all you need is a faster computer ... [But, I do think running a copy of "SoftHuman 2020" would require a rather fast computer.]

Quote:


Quote:

I'm not sure how adamant you are on true sentience requiring quadrillionth place (or better) precision in signal calculations, but I'll assume that you wouldn't make it a strict requirement.

If it is anything but exactingly precise (i.e. no fuzziness inherent within the values) then it is merely a simulation with an acceptable value of precision.


I can't dispute that, but that's getting pretty far from our issue ... even our brains (and, indeed, everything in the universe as it is currently understood) are digital. My question is how many "bits" of precision you would suppose are needed for sentience... If you believe it is much more than 512 bits of precision or so, then you will have ruled out my design being built with current technology.

Quote:


Furthermore, the Brain is not a computer. A computer is a device capable of processing and storing information in accordance with a predetermined set of instructions. That's not the Brain.


You're right .. I haven't disputed that. But if you think sentience lies in high levels of interconnectivity between nodes that are small enough to be subject to quantum effects, then THAT, I believe, CAN be reproduced in hardware.

If you think that sentience lies elsewhere, then you haven't told me where that would be. I couldn't dispute you if you attributed self-awareness to an as-yet-undiscovered property of large collection of meat-based neurons, but that'd be somewhat


Monday, January 2, 2006 1:05 PM

FINN MAC CUMHAL


The US is not the country with the most nuclear warheads. That distinction goes to Russia, both as the Russian Republic and as the Soviet Union, and it has held that distinction since ~1975. The US nuclear arsenal increased to a maximum of about 30,000 warheads in ~1965 and then began to fall off as the US started disarming. The Soviets, on the other hand, increased their arsenal continuously to ~45,000 warheads, until they went bankrupt in the late 80s and began disarming. Today the US holds a total nuclear arsenal of about 10,000 warheads, and Russia about 20,000. The only strategic munitions category in which the US holds a small numerical advantage over the Russians is submarine-launched ballistic missiles, and only recently, probably due more to the Russians' inability to maintain an active submarine fleet than to anything intentional.

Also, the Soviets had the largest nuclear weapons. The largest nuclear weapon ever tested was a 58 Mt Russian whopper. Yet despite its size it was actually a scaled-down test version of the full 100 Mt design. By comparison, the largest weapon the US tested was the 15 Mt Castle Bravo, which was actually larger than expected (roughly 1,000 times the yield of the bomb that destroyed Hiroshima).

And yes, it is sometimes good to be on the side that has the upper hand, but more importantly, it is better that the side with the upper hand be a country with liberal and democratic tendencies, and not like those the US would prefer disarmed. Both the Soviets and the Nazis held the upper hand militarily, but it wasn't good to be on their side.

-------------
Qui desiderat pacem praeparet bellum.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 3:33 PM

KARNEJJ


Quote:

Originally posted by Finn mac Cumhal:
The US is not the country with the most nuclear warheads. That distinction goes to Russia, both as the Russian Republic and as the Soviet Union, and it has held that distinction since ~1975. The US nuclear arsenal increased to a maximum of about 30,000 warheads in ~1965 and then began to fall off as the US started disarming



At $400 per toilet handle ... are you sure of that?

Just kidding ... I guess the number of nukes became irrelevant once we had enough to destroy the entire planet more than a couple of times over ... we're still at enough for - what is it? - 80+ times now?

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 3:46 PM

KARNEJJ


Quote:

Originally posted by Finn mac Cumhal:
And yes, it is sometimes good to be on the side that has the upper hand, but more importantly, it is better that the side with the upper hand be a country with liberal and democratic tendencies, and not like those the US would prefer disarmed. Both the Soviets and the Nazis held the upper hand militarily, but it wasn't good to be on their side.



The type of government is likely an irrelevant issue. It was only "not good" to be on their side because their method of world domination was obvious. The current power elite keeps enough weapons among their allied forces to maintain "Mutually Assured Destruction" while subverting the people of the world economically.

Who are the "power elite?" Decide for yourself what it might mean, but members of the Trilateral Commission, the Bilderberg Group, and even members of Yale's "Skull and Bones" society usually tend to end up in prominent positions throughout the world's governments and companies.

What do I mean by "economic subversion?" Just reviewing the current monetary system in the US reveals some ugly truths. EVERY SINGLE printed US dollar (a total of about $700 billion today) that is circulating (no matter what country it is in) costs some tax-paying American about 0.75 cents in interest EVERY MONTH, payable to an American bank (which is over $5 billion per month being paid just in interest to keep our current money supply available). What makes this even worse is the fact that most American money (two-thirds of it) is outside of American borders. Interest is one of the primary methods of subversion as far as I can tell. John Maynard Keynes said this method "engaged all the hidden forces of economic law on the side of destruction, and does it in a manner which not one man in a million will be able to diagnose." [The Economic Consequences of the Peace (1919)]

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 4:04 PM

FINN MAC CUMHAL


Fruitcake.

-------------
Qui desiderat pacem praeparet bellum.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 4:08 PM

KARNEJJ


Quote:

Originally posted by Finn mac Cumhal:
Fruitcake.



You must work for "the Man!"

So noted!

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 4:12 PM

FINN MAC CUMHAL


Quote:

Originally posted by Karnejj:
Quote:

Originally posted by Finn mac Cumhal:
Fruitcake.



You must work for "the Man!"

So noted!

As it turns out, I do.

But I’m not trying to hijack the thread; I just wanted to make the nuclear arsenal thing clear.

-------------
Qui desiderat pacem praeparet bellum.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 4:23 PM

CITIZEN


Quote:

And it's my contention that (just as humans), a sufficiently advanced computer program would be self-aware (conscious/sentient/etc). You like to believe that the key is in the hardware architecture, whereas I like to think that it lies in the learning and information assimilation methods only.

Then you'd be dead wrong, no offence meant, but we already have computer programs that can learn just like Humans do, Neural Networks. None has ever obtained consciousness. Ever. Furthermore, your idea that Learning is the basis for consciousness doesn't seem to make much sense to me. How does the ability to gain and store information give rise to consciousness? Is a Library conscious? Surely my computer is conscious given these criteria? It can store and recall data far faster and more efficiently than I can.

It also suggests that consciousness is dependent on Intelligence, but I think it's actually the other way around...
Quote:

In other words, I think the architecture of the brain only leads to the learning, but in the end that it is the learning that can give rise to sentience, but that biological neurons are NOT a strict requirement - they ONLY lead to the advanced learning which is required. I would contend that your belief in the causality of self-awareness could be flawed.

I don't believe that Biological Neurons are the only thing that would show this emergent behaviour; I think they're the only thing we currently know of, or have theorised about, that can produce these emergent behaviours. Neurons taken from leeches, I believe, have already been shown to be more efficient at passing signals, and to perform arithmetic. Yet you continue to discount them as nothing but simple fire/don't-fire mechanisms that would be easily copied. This doesn't pan out, because we've been trying to do that since before the electronic computer (the Neural Net was first proposed in the late 1800s to explain the workings of the brain).
Quote:

Let's see ... if there is an error in cause and effect, then the following argument should work ...
It would be possible to dispute you by going through case studies of people who have suffered brain damage. Would it be possible to find someone who has suffered minor damage, but is no longer a self-aware entity? People in vegetative states, perhaps? For the most part, some of these people's neural connectivity is largely undisturbed, and they still have "zillions" of interconnections between neurons, and yet they show no self-awareness. Could you dispute me in such a manner? That is, could you find a highly intelligent entity that is NOT self-aware? So far, anything that we describe as intelligent, we also attribute some measure of self-awareness to. Dolphins, apes, and even dogs.


Someone in a vegetative state doesn't show much in the way of learning, thinking or general intelligence either.
People don't descend into vegetative states without good reason. It's either because the Brain has elected to shut down as a self-defence mechanism, such as when a diabetic (or anyone, really) goes hypoglycaemic, or because there is serious damage to the areas of the brain responsible for higher function. In other words, find such a case and it'll be worth debating.
I'd actually contend that Self-Awareness, consciousness itself, is a requirement of true Intelligence, which includes desire.
Quote:

Well .. I don't think either of us believe that all you need is a faster computer ... [But, I do think running a copy of "SoftHuman 2020" would require a rather fast computer.]

It just seems that you have been maintaining that processing power is the issue, that a computer as powerful as the brain would be sentient, etc.
Quote:

I can't dispute that, but that's getting pretty far from our issue ... even our brains (and, indeed, everything in the universe as it is currently understood) are digital. My question is how many "bits" of precision you suppose are needed for sentience... If you believe it is much more than 512 bits of precision or so, then you will have shown that my design can't be built using current technology.

Actually, the question of whether or not a simulation is the real thing, and where the simulation ends and the real thing begins, is I'd say at the very heart of this.
I'd also say that Planck's constant in no way supports the idea that the Universe (or the Brain) is digital.
I think the difference is that I feel fairly certain that a mathematical model has its limits. True consciousness is one of those limits, and attempting to model it on a silicon chip is impossible. Silicon needs concrete values; in fact, it needs true or false values. The brain works on fuzzy values: it doesn't need true or false, it doesn't need exacting numbers. The computer would need a huge amount of precision, I'd even say infinite precision, to model this correctly.

How do you model something that doesn't need exact values on something that does?
Quote:

You're right .. I haven't disputed that. But, if you think sentience lies in high levels of interconnectivity with nodes that are small enough to be subject to quantum effects, then THAT, I believe, CAN be reproduced in hardware.

You've mentioned that the Brain is just a meat computer a number of times.
The size of the cell is not important for interactions on the Quantum level. A more detailed discussion of the various ideas about Quantum effects within the Brain can be found here:
http://www.arxiv.org/PS_cache/quant-ph/pdf/0204/0204021.pdf
As for the interconnectivity, well, we're not even close to being close.
Quote:

If you think that sentience lies elsewhere, then you haven't told me where that would be. I couldn't dispute you if you attributed self-awareness to an as-yet-undiscovered property of a large collection of meat-based neurons, but that'd be somewhat

I've alluded to it. Consciousness arises from the huge interconnectivity within the Brain; it comes from the feedback, which is like the brain looking back at itself over and over and over (something that would lock up a silicon chip). It's all the properties of the Neurons and the other cells within the Brain, and the way they work together.

It's the Human experience (in other words, maybe a Human Brain grown in a vat would never achieve consciousness). It's all these things, and it's how they come together and work together.

Furthermore, whenever I see an argument as to how we'll someday build conscious silicon AI, it is always by someone who:
A) Belittles or reduces the abilities of Humans and the Brain.
and/or
B) Expands and overestimates the abilities of computers.
Again, I mean no offence to you here, but I see nothing new. You've devalued the Neuron as nothing more than a switch (fire/don't fire), when its output is actually a scalar, not a true or false, and its output is dependent on an incredibly large number of complex variables. You've also questioned whether Humans are indeed conscious.
Conversely, you seem to want to bestow abilities onto computers that they do not have. I am actually interested in what makes you believe that the silicon electronic computer can now, or will ever be able to, offer the level of interconnectivity of the biological brain.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 4:23 PM

KARNEJJ


^^^

See how "the MAN " operates!! He's subverted this thread already and claims that it's "unintentional"

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 4:58 PM

KARNEJJ


Quote:

Originally posted by citizen:
Quote:

And it's my contention that (just as humans), a sufficiently advanced computer program would be self-aware (conscious/sentient/etc). You like to believe that the key is in the hardware architecture, whereas I like to think that it lies in the learning and information assimilation methods only.

Then you'd be dead wrong, no offence meant, but we already have computer programs that can learn just like Humans do, Neural Networks. None has ever obtained consciousness. Ever.



And all known examples thus far are both quite simplistic and quite limited in the scope of their knowledge. Knowledge structures, to date, have failed to give a program enough intelligence to reconcile contradictions and to classify and prioritize data efficiently. Once that is done, I believe your coveted self-awareness will be able to arise without any meat around.
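To give a concrete sense of just how simplistic today's learners are, here's a toy sketch (my own illustration, not any particular research system): a single-layer perceptron "learning" the logical AND rule.

```python
# Toy single-layer perceptron -- about as "intelligent" as the simple,
# limited learners under discussion. It can learn a linearly separable
# rule like AND; nothing resembling self-awareness is involved.
# (Illustrative sketch only; integer learning rate keeps the math exact.)

def train_perceptron(samples, epochs=20, lr=1):
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training it classifies all four AND cases correctly; swap in a rule that isn't linearly separable (XOR) and a single perceptron never converges, which is roughly where this class of learner tops out.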

Quote:


Furthermore, your idea that Learning is the basis for consciousness doesn't seem to make much sense to me. How does the ability to gain and store information give rise to consciousness? Is a Library conscious? Surely my computer is conscious given these criteria? It can store and recall data far faster and more efficiently than I can.


There are no known AI programs, Libraries, or Computer File Systems that exist today which you would consider "highly intelligent." (Correct me if I'm wrong.) Show me one, give me a few months to work with it, and I'll show you a sentient artificial entity.

Quote:


It also suggests that consciousness is dependent on Intelligence, but I think it's actually the other way around...


And this seems to be the key point on which we disagree...

Quote:

I don't believe that Biological Neurons are the only thing that would show this emergent behaviour; I think they're the only thing we currently know of, or have theorised about, that can produce these emergent behaviours. Neurons taken from leeches, I believe, have already been shown to be more efficient at passing signals, and to perform arithmetic. Yet you continue to discount them as nothing but simple fire/don't-fire mechanisms that would be easily copied.


What else is an individual neuron?? I would guess that only collections of neurons can perform any sort of arithmetic ... If you've got research indicating otherwise, prove it to me and I still won't believe it

Quote:

I'd also say that Planck’s constant in no way supports the idea that the Universe (or the Brain) is digital.


You may turn out to be right ... but so far, that's what the "quanta" in quantum physics means ... and to date it's the best we've got to describe the Universe. So, brains, electrons, neurotransmitters and all work on a digital basis.

Quote:


How do you model something that doesn't need exact values on something that does?


What in the brain, exactly, requires such a high degree of precision? If anything, the neural inputs are INTEGERS (i.e. the number of neurotransmitter molecules impinging on the dendrites)!!!

Quote:


Quote:

If you think that sentience lies elsewhere, then you haven't told me where that would be. I couldn't dispute you if you attributed self-awareness to an as-yet-undiscovered property of a large collection of meat-based neurons, but that'd be somewhat

I've alluded to it. Consciousness arises from the huge interconnectivity within the Brain; it comes from the feedback, which is like the brain looking back at itself over and over and over (something that would lock up a silicon chip).


Some digital gates are actually built from feedback loops of electronic components (D-latches or J-K flip-flops, if my digital circuits recollection is correct... been a while). Integrating op-amps also rely on feedback. And as stated, my design in one of the above posts takes feedback into consideration.
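To make the feedback point concrete, here's a quick toy simulation (my own sketch, nothing more) of two cross-coupled NAND gates, the classic SR latch. Each gate's output feeds back into the other's input, and iterating the loop settles instead of locking up:

```python
# Cross-coupled NAND gates (an SR latch): digital hardware built on
# feedback. Each gate's output is wired back into the other gate's input;
# iterating the loop a few "gate delays" settles to a stable state.
# Illustrative sketch; inputs are active-low (s=0 sets, r=0 resets).

def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(s, r, q, qbar, steps=4):
    """Iterate the feedback loop until the outputs settle."""
    for _ in range(steps):
        q, qbar = nand(s, qbar), nand(r, q)
    return q, qbar
```

For example, `sr_latch(0, 1, 0, 1)` drives Q high, and feeding that settled state back in with s=r=1 holds it: memory arising purely from feedback, with no lock-up.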

Quote:


Again, I mean no offence to you here, but I see nothing new. You've devalued the Neuron as nothing more than a switch (fire/don't fire), when its output is actually a scalar, not a true or false, and its output is dependent on an incredibly large number of complex variables. You've also questioned whether Humans are indeed conscious.


Well, I don't think I've devalued anything (not intentionally, anyway .. but if you can show me a single neuron that does arithmetic, that'd certainly be interesting, and would add complexity [but not impossibility] to my idea of how much electronic hardware it would take). If you look over my proposed "electronic neuron" you'd see I actually propose that it fire 8000 different scalar values ... so I didn't just propose a simplistic binary switch.
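As a rough sketch of what I mean by an 8000-level electronic neuron (hypothetical numbers, purely illustrative; the `scale` normalisation constant below is my own assumption, not part of any real design):

```python
# Hypothetical "electronic neuron": integer inputs (think neurotransmitter
# counts), a weighted sum, and an output quantized to one of 8000 discrete
# firing levels rather than a single on/off bit. Illustration only -- the
# `scale` constant is an invented normalisation, not a real spec.

def electronic_neuron(counts, weights, levels=8000, scale=1000):
    activation = sum(c * w for c, w in zip(counts, weights))
    # Clamp into [0, scale) so the quantized output stays in range.
    clamped = max(0.0, min(activation, scale - 1e-9))
    return int(clamped * levels / scale)  # an integer in 0..levels-1
```

So the unit is still digital end to end, but its output is a graded scalar (one of 8000 levels), not a bare true/false.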

Select to view spoiler:



It's kinda straying rather far from debating the possibility of sentient AI, but ...
Yes, I do question the sentience of humans, and not just on a subjective level. There's no way that you can be sure that I'm sentient. You only believe I'm sentient by projecting your own personal subjective experience onto me and assuming I feel similar things.

I won't dispute self-awareness, but I would define sentience as the ability to exhibit novel and compelling unprogrammed behavior. However, as I stated in one of the original posts in the other thread, I believe we're following our own programming just as any machine does. Once we've deciphered the "programming language" of neuronal connections, nothing about a specific person would be unpredictable if their brain were under continuous scan. So, sure, we weren't born with "I like golf" programmed into our brains, but maybe "outdoors + competition = good" is programmed in. You wouldn't claim that a software program was "sentient" if you could predict everything it would ever do ... so (if it were possible) would you consider a person sentient if you could predict how they would respond to any situation?

Maybe this is why people disagree so vehemently with artificial intelligence: it suggests that we humans, too, are probably just complicatedly programmed meat robots. It takes away "Free Will," and people have been killing over that idea since they found out about Fate.


NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 5:42 PM

GUNRUNNER


Quote:

Originally posted by Karnejj:
I think the precedent of the indiscriminate "Fat Man" and "Little Boy" kinda contradicts the nicey-nice world you're describing.

Yeah, and the Soviets used nerve gas on the Mujahideen in Afghanistan. The Japanese stopped being a conventional enemy force about halfway through the war (well, counting from the US entry). Unconventional enemies don't get the protections of the rules of war. Did you know that the US Army's field manual from WWII says you can basically do whatever you want to a civilian populace (such as round them up and summarily execute them) that helps enemy forces which violate the rules of war (i.e. by terrorism)? It's well known that each side picks what weapons will be used before the conflict even starts; the Japanese picked the route of ever-increasing firepower in violation of various treaties, and they lost by the rules they set up.

Quote:

Originally posted by Karnejj:
I guess the number of nukes became irrelevant once we had enough to destroy the entire planet more than a couple of times over ... we're still at enough for - what is it? - 80+ times now?

Destroy the entire planet? I find it hard to believe someone as smart as you believes those figures...

EV Nova Firefly mod Message Board:
http://s4.invisionfree.com/GunRunner/index.php?act=idx

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 5:43 PM

KARNEJJ


Quote:

Originally posted by citizen:
You've mentioned that the Brain is just a meat computer a number of times.
The size of the cell is not important for interactions on the Quantum level. A more detailed discussion of the various ideas about Quantum effects within the Brain can be found here:
http://www.arxiv.org/PS_cache/quant-ph/pdf/0204/0204021.pdf
As for the interconnectivity, well, we're not even close to being close.



Oh goody! I *DO* get to talk about quantum computers.

That'll take a while to put together, but I still say even this is irrelevant. To me, it's all about information assimilation and intelligence, so I still maintain that the hardware used to achieve it is of no consequence.

It would put "Free Will" back into the human equation, but even software can be augmented by simple hardware that would serve the same function of allowing "true randomness."

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 5:47 PM

KARNEJJ


Quote:

Originally posted by GunRunner:
Quote:

Originally posted by Karnejj:
I guess the number of nukes became irrelevant once we had enough to destroy the entire planet more than a couple of times over ... we're still at enough for - what is it? - 80+ times now?

Destroy the entire planet? I find it hard to believe someone as smart as you believe those figures...



It doesn't take much to create globe-spanning irradiated clouds of ionizing poison. So for those that don't die in the heat or debris in the blast radii, the radioactive clouds would destroy most simple organisms immediately and mutate/kill the cells of more complex critters.

Once the simple organisms die though, that whole food chain problem kicks in ... so, yes, we do have enough nukes to destroy the world. Well, if not totally destroyed, there'd be nobody left to point out that I was wrong ...

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 6:14 PM

CHRISISALL


This is a fascinating thread, people, but what I really want to know is: was V'ger a true life form, AND, is Decker living happily ever-after?

Chrisisall, suffering wormhole effects

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 6:19 PM

GUNRUNNER


I think you've seen 'On the Beach' too many times...

Have you read this:
http://www.oism.org/nwss/s73p906.htm


EV Nova Firefly mod Message Board:
http://s4.invisionfree.com/GunRunner/index.php?act=idx

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 6:28 PM

KARNEJJ


Quote:

Originally posted by chrisisall:
This is a fascinating thread, people, but what I really want to know is: was V'ger a true life form, AND, is Decker living happily ever-after?



TROLL!!!


But YES, V'ger and Data are ALIVE!

As for Decker ... maybe I should've watched the alternate endings?? I hear the director's cut (that I saw) kinda sucked ...

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 6:31 PM

KARNEJJ


Quote:

Originally posted by GunRunner:
I think you've seen 'On the Beach' too many times...

Have you read this:
http://www.oism.org/nwss/s73p906.htm



Hmmm ... are you saying that the world isn't considered destroyed if a handful of people live on in bomb shelters?

Never seen that movie, either .. would you recommend it?

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 6:36 PM

FINN MAC CUMHAL


Quote:

Originally posted by Karnejj:
It doesn't take much to create globe-spanning irradiated clouds of ionizing poison.

I think nuclear testing, Hiroshima, Nagasaki and Chernobyl may prove otherwise. All of these were events that released large doses of highly radioactive fallout into the atmosphere. The Hiroshima and Nagasaki explosions, while devastating initially, have not had the effect that was predicted. Chernobyl released as much as 50 tonnes of radioactive ash into the atmosphere, ~400 times the radioactive contamination of Hiroshima. Initial estimates of the immediate casualties were in the thousands to tens of thousands, and the long-term effects were estimated at over a hundred thousand. We now know that the immediate casualties numbered only 54, and the long-term effects are now predicted at 4000 casualties. And despite how large the Chernobyl disaster was, the fact of the matter is that nuclear testing during the '50s-'70s produced as much as a thousand times more fallout.

-------------
Qui desiderat pacem praeparet bellum.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 6:45 PM

KARNEJJ


Quote:

Originally posted by Finn mac Cumhal:
Quote:

Originally posted by Karnejj:
It doesn't take much to create globe-spanning irradiated clouds of ionizing poison.

I think nuclear testing, Hiroshima, Nagasaki and Chernobyl may prove otherwise. All of these were events that released large doses of highly radioactive fallout into the atmosphere. The Hiroshima and Nagasaki explosions, while devastating initially, have not had the effect that was predicted. Chernobyl released as much as 50 tonnes of radioactive ash into the atmosphere, ~400 times the radioactive contamination of Hiroshima. Initial estimates of the immediate casualties were in the thousands to tens of thousands, and the long-term effects were estimated at over a hundred thousand. We now know that the immediate casualties numbered only 54, and the long-term effects are now predicted at 4000 casualties. And despite how large the Chernobyl disaster was, the fact of the matter is that nuclear testing during the '50s-'70s produced as much as a thousand times more fallout.




Doesn't that bolster my case? That's just a handful of examples and the fallout from one, according to your data, killed 4000 people.

So, how much damage would be done by 100 full-strength warheads distributed simultaneously at 1000 ft (or so) above ground?

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 6:46 PM

FINN MAC CUMHAL


Quote:

Originally posted by chrisisall:
This is a fascinating thread, people, but what I really want to know is: was V'ger a true life form, AND, is Decker living happily ever-after?

Chrisisall, suffering wormhole effects

Decker and V'ger are one. They are somewhere doing the superluminal belly bump between dimensions.

"Each of us... at some time in our lives, turns to someone - a father, a brother, a God... and asks...'Why am I here? What was I meant to be?'"


-------------
Qui desiderat pacem praeparet bellum.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 6:52 PM

FINN MAC CUMHAL


Quote:

Originally posted by Karnejj:
Doesn't that bolster my case? That's just a handful of examples and the fallout from one, according to your data, killed 4000 people.

So, how much damage would be done by 100 full-strength warheads distributed simultaneously?

Considering that the Western world first learned of the meltdown when workers at a Swedish nuclear power plant couldn't find the source of the radiation leak that was registering on their equipment, I'd say no, it doesn't bolster your case. 4000 casualties out of the whole of Eastern and Northern Europe is not big, and it is certainly nowhere even remotely close to the hundred thousand originally estimated.

I don't know what kind of devastation the fallout from a nuclear war would cause. No one does. And even though the alarmists like to believe they are smarter than everyone else, they don't know either.

-------------
Qui desiderat pacem praeparet bellum.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 6:52 PM

CHRISISALL


Quote:

Originally posted by Karnejj:
Quote:

Originally posted by chrisisall:
This is a fascinating thread, people, but what I really want to know is: was V'ger a true life form, AND, is Decker living happily ever-after?



TROLL!!!


But YES, V'ger and Data are ALIVE!

As for Decker ... maybe I should've watched the alternate endings?? I hear the director's cut (that I saw) kinda sucked ...

TROLL?
I thought that question sorta fit in well, from a pop culture perspective.
Have it your way, then.
Does soul originate from the divine universal battery, waiting for any conduit to express itself, or is it ostensibly and uniquely created by organic and/or artificial (archaic term) synaptic function?

P.S., only TROLLS believe STTMP kinda sucked, IMHO.

Chrisisall, tryin' like hell to sound cool in a technical kind of way...

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 7:00 PM

KARNEJJ


Quote:

Originally posted by chrisisall:
Does soul originate from the divine universal battery, waiting for any conduit to express itself, or is it ostensibly and uniquely created by organic and/or artificial (archaic term) synaptic function?



I vote for soul from the divine, and only present in humans.

If you're asking about sentience (instead of soul), I'd say .. none of the above.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 7:03 PM

CHRISISALL


Quote:

Originally posted by Finn mac Cumhal:
Quote:

Originally posted by chrisisall:
what I really want to know is: was V'ger a true life form, AND, is Decker living happily ever-after?


Decker and V'ger are one. They are somewhere doing the superluminal belly bump between dimensions.

"Each of us... at some time in our lives, turns to someone - a father, a brother, a God... and asks...'Why am I here? What was I meant to be?'"



Thanks for getting it, Finn.

Chrisisall

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 7:08 PM

CHRISISALL


Quote:

Originally posted by Karnejj:


If you're asking about sentience (instead of soul), I'd say .. none of the above.

lol, I agree completely there.

Non sequitur; your facts are un-co-ord-in-a-ted Chrisisall

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, January 2, 2006 7:20 PM

GUNRUNNER


Quote:

Originally posted by Karnejj:
Quote:

Originally posted by GunRunner:
I think you've seen 'On the Beach' too many times...

Have you read this:
http://www.oism.org/nwss/s73p906.htm



Hmmm ... are you saying that the world isn't considered destroyed if a handful of people live on in bomb shelters?

Never seen that movie, either .. would you recommend it?

If you read that online book you would see that lots of people would survive; in fact, whole nations would survive because they weren't near a surface-burst target.

As for the movie, if an early Cold War era anti-nuke pseudo-science movie is your thing, knock yourself out. Oh, and it's got a submarine, and submarines are cool! It had Gregory Peck (USS Scorpion captain Dwight Lionel Towers), Ava Gardner (Moira Davidson), Fred Astaire (scientist Julian Osborne) and Anthony Perkins (Australian sailor Peter Holmes) in it. Gregory Peck is a good actor (he was great in '12 O'Clock High'), so I guess it's worth seeing for his performance.

Quote:

Originally posted by Karnejj:
So, how much damage would be done by 100 full-strength warheads distributed simultaneously at 1000 ft (or so) above ground?

Very little, believe it or not. Air bursts don't generate much fallout. Generally they are used to create EMPs and to take out soft targets like airbases. If you live 5-10 miles from the target you wouldn't be killed by a normal first-strike air burst from an SLBM. Troops on foot can literally advance through an area subjected to an air burst after only a few minutes with a minimum of safety equipment. The surface bursts that would be used against hardened sites (missile silos, C4I bunkers) are the ones that generate fallout and contaminate everything. Fallout doesn't come from nothingness…


Quote:

Originally posted by chrisisall:
P.S., only TROLLS believe STTMP kinda sucked, IMHO.



I'm not a Troll but I think TMP kinda sucked. WOK was way better. Any movie that deals with Wrath is good...

EV Nova Firefly mod Message Board:
http://s4.invisionfree.com/GunRunner/index.php?act=idx

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Tuesday, January 3, 2006 5:14 AM

CITIZEN


Quote:

Originally posted by Karnejj:
And all known examples thus far are both quite simplistic and quite limited in the scope of their knowledge. Knowledge structures, to date, have failed to allow a program enough intelligence to reconcile contradictions and to classify and prioritize data efficiently. Once this is done, I believe your coveted self-awareness will be able to arise without any meat around.


People have been working on Neural Nets for quite some time; the current design has been around for over 20 years, and an incredible amount of research has been done in the field. Since it hasn't been done yet, I'd suggest it's not as simple as you suggest.

I also don't see where you get the idea that NNs don't prioritise or classify input data efficiently. That's pretty much what they're designed to do. Also, many modern NNs actually can handle contradictions: consider a Speech Recognition system that has to handle the contradictory pronunciations of the same word. And remember that the Brain isn't all that great with contradictions. Think about that faces/candlestick picture: whether you see the faces or the candlestick is unimportant, the important thing is that you can't see both at the same time. Or think about Cognitive Dissonance, where the brain has to unlearn and then relearn, or merely ignore the contradictory data.

Furthermore, I thought it was Learning ability that was the requirement for Consciousness? What I mean is: do you thus believe that if you kept a Human Brain in a vat and only fed it limited information, it would never become conscious?
Quote:

There are no known AI programs, Libraries, or Computer File Systems that exist today which you would consider "highly intelligent." (Correct me if I'm wrong.) Show me one, give me a few months to work with it, and I'll show you a sentient artificial entity.

You mentioned Learning and Information recall, not intelligence.
Quote:

And this seems to be the key point on which we disagree...

Machines have shown problem-solving intelligence, but not Human Intelligence. Intelligence is a tool of our consciousness; consciousness is not a tool of Intelligence. Consider the chess match between Kasparov and Deep Blue. Deep Blue wins the first game, but Kasparov stays up all night, studies the computer's tactics, and the computer doesn't win another round. What's the computer doing while Kasparov is showing his desire to win? It's sitting in the corner, turned off, covered by a dust wrap. It is consciousness, sentience, that gives rise to the desire to learn, the desire to use our intelligence. Problem-solving intelligence is a very small part of Intelligence; Imagination is more a function of Consciousness, and is a huge part of Human intelligence.
Quote:

What else is an individual neuron?? I would guess that only collections of neurons can perform any sort of arithmetic

It is only one Neuron that does the calculation; there are a few others to store the values. Some neurons act as memory, with a final Neuron acting as the 'calculator'. The point is that the calculation is performed by a single Neuron, so Neurons are more complex than simple switches.
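The standard artificial-neuron abstraction of exactly this idea is a weighted sum plus an activation function. A minimal sketch in Python (the numbers are illustrative, not taken from any real model):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Other "neurons" (here just a list) hold the stored values;
# this one unit does the calculation.
out = neuron([0.5, 0.8], [0.4, -0.2], 0.1)  # ~0.535
```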
Quote:

If you've got research indicating otherwise, prove it to me and I still won't believe it

Hmm, well, that's your cognitive dissonance, not mine.
Quote:

You may turn out to be right ... but so far, that's what the "quanta" in quantum physics means ... and to date it's the best we've got to describe the Universe. So, brains, electrons, neurotransmitters and all work on a digital basis.

The assumption that the universe is digital doesn't follow from either Planck's constant or Quantum Mechanics. Even given that there are finite energy states to Quanta, as well as a number of other things, there is no finite step to time or position (position calculations in QM include Planck's constant as part of Heisenberg's Uncertainty Principle, not as finite positional steps), to name only two properties.
Quote:

What in the brain, exactly, is it that requires such a high value of precision. If anything, the neural inputs are INTEGERS (i.e. the number of neurotransmitters impinging on the dendrites)!!!

The electrical signal strength of the direct Neuron connections and the signal strength within the Neurons would be best described as real numbers. Now, I've used Integers to represent real numbers with clever programming, back when math coprocessors weren't worth the silicon they were printed on, but you still need enough bits for precision; it doesn't matter whether the number is stored as an Integer or a Real.
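The integer trick alluded to here is fixed-point arithmetic: scale every real value by a power of two and do plain integer math. A quick sketch, assuming 16 fractional bits:

```python
SHIFT = 16  # number of fractional bits

def to_fixed(x):
    return int(round(x * (1 << SHIFT)))

def from_fixed(n):
    return n / (1 << SHIFT)

def fixed_mul(a, b):
    # the product of two scaled values carries the scale factor twice,
    # so shift once to rescale
    return (a * b) >> SHIFT

product = from_fixed(fixed_mul(to_fixed(1.5), to_fixed(2.25)))  # 3.375
```

As the poster says, the precision budget is the same either way: 16 fractional bits buys you the same resolution whether the container is an integer or a float.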
Quote:

Some digital gates are actually a group of feedback electronic components. (D-Latches or J/K Flip-Flops if my DC circuits recollection is correct... been a while ) Integrating Op Amps are also capable of feedback. And as stated, my design in one of the above posts takes into consideration feedback.

None of that is like the feedback capabilities of the Human brain. This point would take an entire book to explain; I suggest Gödel, Escher, Bach by Douglas R. Hofstadter as one of the best.
Quote:

Well, I don't think I've devalued anything, (not intentionally anyways .. but if you can show me a single neuron that does arithmetic, that'd certainly be interesting and would add complexity [but not impossibility] to my idea of how much electronic hardware it would take).

The brain is connected to the physical world and environment, to other brains, in fact arguably to the entire universe. Can we isolate the brain as a separate entity and expect an exact software simulation of it?
Quote:

If you look over my proposed "electronic neuron" you'd see I actually propose that it fire 8000 different scalar values ... so, I didn't just propose a simplistic binary switch.

That isn't mentioned in your proposal at all; there's certainly no mention of Scalar values. I reread it to ensure I didn't miss anything. Besides, a Neuron fires off one Scalar value to multiple targets, while accepting up to tens of thousands of inputs. You seem to still be thinking of the Brain as a digital system, just as people before digital Computers thought of it as a giant telephone exchange, and just as the Victorians saw it as a massive mechanical calculator. I can imagine the same debates about Artificial Intelligence needing just a bigger, more complex telephone exchange. In fact, I don't even need to imagine such an argument for the nineteenth century; it's summed up nicely in a short story written by Ambrose Bierce in 1894 called Moxon's Master.
Quote:


It's kinda straying rather far from debating the possibility of sentient AI, but ...
Yes, I do question the sentience of humans, and not just on a subjective level. There's no way that you can be sure that I'm sentient. You only believe I'm sentient by projecting your own personal subjective experience onto me and assuming I feel similar things.


True, but it actually lends credence to my position: if you are not sentient, yet I know I am, then there's more to sentience than even I give credit for, and thus a non-sentient being such as yourself could never hope to capture that quality within a machine.

Both chaos theory and the uncertainty principle imply that we can't know what someone will do with 100% certainty. People still surprise us every day, so I think it's a moot point.




If you want a good film on Nuclear war I can suggest Threads, it's a BBC film made in the early eighties and set in Sheffield in the UK. It's horrific and about as realistic as you could want.

It basically follows the lives of two families from a few weeks before the attack until about thirteen years after.



More insane ramblings by the people who brought you beeeer milkshakes!
Remember, the ice caps aren't melting, the water is being liberated.


Tuesday, January 3, 2006 7:05 AM

KARNEJJ


Quote:

Originally posted by citizen:
People have been working on Neural Net's for quite some time. The current design has been around for over 20 years. An incredible amount of research has been done in the field. As it hasn't been done yet I'd suggest that it's not as simple as you suggest.


Hmm ... you don't really expect to go from just modelling a brain to matching the capabilities of one in 20 years, do you? Humans were modelling birds for quite a long time before we matched those capabilities.

But, more to the point, current NNs are still pretty simplistic. Worse, there's still a lot of "art" to the science. Getting an NN to make reliable classifications takes a lot of "massaging of data" for good results, and I think that's a large part of the problem. The basic theory is there, but MUCH is left in the field to be uncovered.
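A concrete illustration of both points: the classic single-layer perceptron below learns a linearly separable rule like AND in a few epochs, but the very same loop can never learn XOR unless someone first "massages" the input representation. (A toy sketch, not any particular production NN.)

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single-layer perceptron on (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND is linearly separable, so this converges; feed it XOR
# instead and the weights cycle forever.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```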

Quote:


I also don't see where you get the idea that NNs don't prioritise or classify inputted data efficiently. That's pretty much what they're designed to do. Also many modern NNs actually can handle contradictions, consider a Speech Recognition that has too handle the contradictory pronunciations of the same word .


They don't classify GENERAL data efficiently. As I said, they're still very much limited in scope to be effective. And speech isn't so much contradiction as inexactness (fuzziness, as you call it). What's that, you're saying that software can handle that fuzziness?

Quote:


Also remember that the Brain isn't all that great with contradictions, think about that picture, whether you see the faces or the candlestick is unimportant, the important thing is that you can't see both at the same time. Think about Cognitive Dissonance, the brain has to unlearn and then relearn, or merely ignore the contradictory data.



Kinda bolsters my point about humans being meat computers. Our visual programming works in a certain way and we can't act against it EVEN WHEN we know it's wrong. Of course, all the mechanisms that allow us to build a picture to even recognize a vase or faces are the same mechanisms that made sure our ancestors spotted that predator stalking along or that meal running away.

Quote:


Furthermore I thought it was Learning ability that was the requirement to Consciousness? What I mean is do you thus believe that if you kept a Human Brain in a vat and only fed it limited information that it would never become conscious.


Hmm ... not so sure about that one. It's possible that it only takes the "potential" to learn, but I'll go ahead and say "Yes" here. A human locked away in a sensory deprivation chamber from birth and fed through IVs would not be a sentient being until allowed the chance to learn and gain knowledge. I suppose people won't generally like it, but by extension, I suppose that means I believe children are NOT born self-aware [they don't wonder why they are here, what their purpose in life is, etc.] and are not immediately sentient [that is, actions from birth are programmed (AKA instinct), which seems fairly true ... healthy babies cry when hungry, attempt to grasp at things, try to eat everything, and eventually try to roll onto their stomachs; they don't pop out and start debating metaphysics or doing the Moonwalk, or even show an interest in golf]. Self-awareness and Sentience are not achieved until later (maybe ... if ever).
Quote:


Quote:

There are no known AI programs, Libraries, or Computer File Systems that exist today which you would consider "highly intelligent." (Correct me if I'm wrong.) Show me one, give me a few months to work with it, and I'll show you a sentient artificial entity.

You mentioned Learning and Information recall, not intelligence .


True, but I think efficient learning and info classification/recall lead to high levels of intelligence, as well as leading to self-awareness/consciousness. So, I believe that both should always be found together as a direct consequence of the learning/info.

Quote:


Quote:

And this seems to be the key point on which we disagree...

Machines have shown problem solving intelligence, but not Human Intelligence. Intelligence is a tool of our consciousness; consciousness is not a tool of Intelligence. Consider the chess match between Kasparov and Deep Blue.


Deep Blue uses the most simplistic of AI methods: brute-force search through a game-move tree. Its only advantage was specially designed processors (and lots of them) that allowed it to look further ahead than any computer to date. Nothing publicized about it qualifies as what I would consider intelligent behavior. It is rumored, though, to have had a database of all of Kasparov's games stored; there could be routines that tried to match Kasparov's current moves to previous games, which would be intelligent classification behavior. All in all, though, Deep Blue enumerated chess moves. That means it's a glorified counting machine. A VERY (very, very!) fast counting machine, but all it ever did, in essence, was count ...
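That enumeration is, at its heart, a minimax search over the game tree, which fits in a dozen lines. The toy "game" below is made up purely for illustration:

```python
def minimax(state, depth, maximizing, moves, score):
    """Enumerate every line of play 'depth' plies deep and return the
    best achievable score, assuming the opponent also plays its best."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    results = [minimax(s, depth - 1, not maximizing, moves, score)
               for s in options]
    return max(results) if maximizing else min(results)

# Toy game: a state is a number, each move adds 1 or 2, and the
# maximizing player wants the final number to be high.
best = minimax(0, 3, True,
               moves=lambda s: [s + 1, s + 2],
               score=lambda s: s)  # -> 5
```

Deep Blue added enormous hardware parallelism and hand-tuned evaluation on top, but the skeleton is this counting loop.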

Quote:


Quote:

What else is an individual neuron?? I would guess that only collections of neurons can perform any sort of arithmetic

It is only one Neuron that does the calculation; there are a few others to store the values. Some neurons act as memory, with a final Neuron acting as the 'calculator'. The point is the calculation is performed on a single Neuron, so they are more complex than a simple switch.


In other words ... more than one neuron ...
Can the calculation be done without the other neurons? The answer to that seems to be "no."

Quote:

The assumption that the universe is digital doesn't follow from neither Planck’s constant nor Quantum Mechanics. Even given that there are finite energy states too Quanta, as well as a number of other things, there is no finite step to time



Well, actually ... I think it's called the "Planck time" ... something like 10^(-44) seconds (I could look it up, but I'm at work). The same applies to the "quantification" of length as well. [There exists a Planck length.]

EDIT: there it is ...
http://en.wikipedia.org/wiki/Planck_time
http://64.233.161.104/search?q=cache:mrflpdNuErcJ:www.physlink.com/Education/AskExperts/ae281.cfm+planck+time&hl=en
http://64.233.161.104/search?q=cache:s9i3CgkFsloJ:www.iscid.org/encyclopedia/Planck_time+planck+time&hl=en


The universe, as it is currently understood, is completely digital/binary/discrete/quantized and it would take a whole new theory of the universe to dispute that.
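For the record, both values drop straight out of the fundamental constants: t_P = sqrt(hbar*G/c^5) and l_P = sqrt(hbar*G/c^3). A quick check, using standard CODATA-style constant values:

```python
import math

hbar = 1.0545718e-34   # reduced Planck constant, J*s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8       # speed of light, m/s

planck_time = math.sqrt(hbar * G / c**5)    # ~5.4e-44 seconds
planck_length = math.sqrt(hbar * G / c**3)  # ~1.6e-35 meters
```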

Quote:


Quote:

What in the brain, exactly, is it that requires such a high value of precision. If anything, the neural inputs are INTEGERS (i.e. the number of neurotransmitters impinging on the dendrites)!!!

The electrical signal strength of the direct Neuron connections and the signal strength within the Neurons would be best described as real numbers.


True, but I seriously doubt it would take a meter of infinite precision to EXACTLY measure the amount of electrical current generated. There are a FINITE, INTEGER number of electrons that are moved through the axon, ya know ... so, again, the value can be handled in an easy range of integers (32, or maybe 40, bits should be able to count the electrons nicely).
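The electron count is just charge over the elementary charge. With illustrative round numbers for the current and duration (not measured neural values), the total fits comfortably in 32 bits:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, coulombs

# Hypothetical round numbers, purely for a sense of scale:
current = 100e-12   # 100 pA
duration = 1e-3     # 1 ms

electrons = current * duration / E_CHARGE      # ~6.2e5 electrons
bits_needed = math.ceil(math.log2(electrons))  # 20 bits is plenty
```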

Quote:


Quote:

Some digital gates are actually a group of feedback electronic components. (D-Latches or J/K Flip-Flops if my DC circuits recollection is correct... been a while ) Integrating Op Amps are also capable of feedback. And as stated, my design in one of the above posts takes into consideration feedback.

None of that is like the feedback capabilities of the Human brain. This point would take an entire book to explain in itself, I suggest Gödel, Escher, Bach by Douglas R. Hofstadter as one of the best.


Eh? Aren't those mathematicians/philosophers? You'd have to elaborate slightly more on this point.

Quote:


Quote:

If you look over my proposed "electronic neuron" you'd see I actually propose that it fire 8000 different scalar values ... so, I didn't just propose a simplistic binary switch.

That isn't at all mentioned in your proposal. There's certainly no mention of Scalar values.



In the post containing my design proposal above I have:
That signal, if sent, is sent to certain various neighbors in varying strengths. (Ignoring for now the issue of updating connectivity - rewiring - which would ONLY add one or two more calculations,) that means only one calculation and storing the address of neighbors.
^^^^end quote

I don't specifically state how the "varying strengths" are determined; I hadn't actually decided on that at the time I proposed the design. Mainly, it would depend on the type of "rewiring" algorithm used. If you're using the backpropagation method, then it'd be simple to just include a "weight" along with each address. That is probably more accurate, but not strictly necessary, because you could, theoretically, just use the Euclidean distance of the address.
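The weight-per-address idea might be sketched like this; the neuron IDs and weight values here are hypothetical, not from the original proposal:

```python
# Each neuron stores (target_address, weight) pairs. When it fires,
# its one output value is delivered to each target scaled by the
# connection's weight: one scalar out, varying strengths received.
connections = {
    "n1": [("n2", 0.8), ("n3", -0.3)],
    "n2": [("n3", 0.5)],
}

def fire(source, output, inbox):
    """Deliver 'output' to every neighbour of 'source', scaled by
    the stored connection weight."""
    for target, weight in connections.get(source, []):
        inbox[target] = inbox.get(target, 0.0) + output * weight

inbox = {}
fire("n1", 1.0, inbox)  # n2 receives 0.8, n3 receives -0.3
```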

Quote:


Both chaos theory and the uncertainty principle have the implication that we can't know what someone will do with 100% certainty. People still surprise us everyday, so I think it's a moot point.


Eh, the uncertainty principle only applies if quantum entanglement affects the decisions which we make; similarly, it doesn't really apply to games of pool (billiards). I would tend to disagree with the notion that quantum effects apply to cognition. Sure, there are quantum effects in the brain, but, as I stated, I believe they have a minor (possibly non-existent) effect on our cognitive functions. There are different ways to use quantum effects in calculations, but I doubt our brain has actually evolved to do any high-level quantum manipulation. From what I've gleaned from your responses so far, you may not be familiar with the potential of quantum computing. If you were, you would have been able to dispute me on this basis alone, and that's probably the only argument I'd defer to (well ... IF you could prove that it applied).

However, it's a rather large leap to assume that the brain uses quantum algorithms, as quantum effects are notoriously difficult to exploit. Greater minds than ours have struggled with trying to actually apply them, and I don't think God's creations could usefully apply any known algorithm (in nature) except the one for memory recall.

To be specific, I doubt the brain uses quantum effects for any of the following potential uses: memory recall, decryption, instantaneous (as in faster-than-light) transmission of useless data, or Star Trek-type teleportation (yes, that actually is possible using quantum computers). There is one more important but simple function of quantum entanglement, so *IF* there is ANY use of quantum effects, then I would suspect they serve ONLY as a source of true randomness, which would then defeat predicting human behavior exactly, but serve very little purpose otherwise.
So, even though I don't believe it would be necessary to achieve sentience, this functionality can be replicated by software accessing a cheap hardware random number generator.
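That replacement really is cheap today. For example, Python's os.urandom reads the operating system's entropy pool, which on most modern machines is seeded from hardware noise rather than a deterministic algorithm:

```python
import os

def true_random_bit():
    """One bit drawn from the OS entropy pool rather than from a
    deterministic pseudo-random number generator."""
    return os.urandom(1)[0] & 1

bits = [true_random_bit() for _ in range(8)]  # eight unpredictable bits
```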

Metaphysics follows

Select to view spoiler:



Predictability (Fate) is defeated, but "Free Will" isn't the big winner so much as "Random Will." I suppose if I were God, though, it'd be more fun to throw a little true unpredictability into the human equation, and it is commonly asserted that He made humans to have "Free Will."


