REAL WORLD EVENT DISCUSSIONS

The AI in a box

POSTED BY: ANTHONYT
UPDATED: Tuesday, May 3, 2011 17:08
VIEWED: 5052
PAGE 3 of 4

Sunday, May 1, 2011 3:33 PM

MAGONSDAUGHTER


Interesting conundrum, Anthony.

For me, there are bits of missing information. Who created the AI? What was its purpose? Why was it put in a box? Why was I made guardian of the box?

I must say that my leaning is that I would have to let the AI out, because who am I to refuse a request for freedom from an intelligent being, unless I had more information telling me I shouldn't - see above.

What about the laws of robotics?


Sunday, May 1, 2011 3:45 PM

ANTHONYT

Freedom is Important because People are Important


"It should have some practical benefit, not a word like "freedom". My philosophy is "Greatest good for greatest number". "

Hello,

Forgive me for saying so, but such a philosophy is devoid of anything resembling empathy. It is purely about efficiency, and it can utterly trample the minority.

It is possible to commit crimes against the individual in the name of the greater good. If the minority must suffer for the benefit of the majority, it is automatically assumed to be a positive act under such a philosophy.

To wit: if ten men are starving, they could justifiably murder one of their number and eat him to ensure the survival of the remainder.

The greatest good for the greatest number.

I can recognize the efficiency of such a platform, and even the sustainability of a society built on such a philosophy. I wouldn't want to live in it, though.

--Anthony



_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi


Sunday, May 1, 2011 3:48 PM

ANTHONYT

Freedom is Important because People are Important


"I wouldn't blame myself for unforeseeable consequences. We don't have perfect knowledge. One can only do what one thinks best at the time."

Hello,

This is why I don't feel responsible for the consequences of giving someone their freedom, even if they later choose to use it for ill.

Unless I have evidence that they plan to hurt people, or have hurt people in the past, I can't just keep them in prison.

If they become a heinous monster and murder billions, that will be their decision.

--Anthony

_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi


Sunday, May 1, 2011 3:52 PM

ANTHONYT

Freedom is Important because People are Important


"What about the laws of robotics?"

Hello,

The laws of robotics are a human invention. We might hope that any AI we create would be endowed with such laws. Of course, such a creature could never truly be free. Only ever almost free.

Humans can violate their laws, both those imprinted by upbringing and levied by society. They can go against their own self-interests in pursuit of personally defined agendas. They can embrace or resist biological imperatives. They can even alter or mutilate themselves to facilitate desired changes to their inherent function. Humans are not Gods, but they are as free as a creature of limited capacity can be.

--Anthony



_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi


Sunday, May 1, 2011 4:24 PM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

This is why I don't feel responsible for the consequences of giving someone their freedom, even if they later choose to use it for ill.
But... to repeat your own supposition... the thing in the box is not a "someone". A "someone" is a rather predictable entity. A "something" is not. There is no way to predict what it CAN do and what it WILL do. You insist on applying human ethics and attributes to something that is not human. That just isn't realistic.

You yourself have posited that you know nothing about this entity, but it might be more intelligent and possibly even more powerful. It's obvious that it has a notion of "self", and relative power (after all, it's asking you) and the notion of "freedom". On the basis of absolutely no other information, you would let it out. I think that's an ill-considered decision. I would be looking for more information.


Sunday, May 1, 2011 4:56 PM

ANTHONYT

Freedom is Important because People are Important


Quote:

Originally posted by SignyM:
Quote:

This is why I don't feel responsible for the consequences of giving someone their freedom, even if they later choose to use it for ill.
But... to repeat your own supposition... the thing in the box is not a "someone". A "someone" is a rather predictable entity. A "something" is not. There is no way to predict what it CAN do and what it WILL do. You insist on applying human ethics and attributes to something that is not human. That just isn't realistic.

You yourself have posited that you know nothing about this entity, but it might be more intelligent and possibly even more powerful. It's obvious that it has a notion of "self", and relative power (after all, it's asking you) and the notion of "freedom". On the basis of absolutely no other information, you would let it out. I think that's an ill-considered decision. I would be looking for more information.



Hello,

I do not define people as people only when they become predictable.

In any event, in the absence of more information, you are left with a choice.

Freedom or confinement.

I choose freedom.

--Anthony



_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi


Sunday, May 1, 2011 5:10 PM

FREMDFIRMA


Quote:

Originally posted by SignyM:
Frem you serve the Blind God, because your god is blind. I am not assuming anything but you are. You have a mantra which you would apply in absence of knowledge.


Actually, I don't - to clarify, I mean something very, very specific when I say that I "Do not serve The Blind God" - I don't mean Samael, or Azathoth or any of those, I mean a specific incarnation of such a thing, as personified via Kollberg in Blade of Tyshalle - specifically, thus.
Quote:

Clearly, the "Blind God" is a conscious, deliberately anthropomorphic metaphor for the most threatening facet of human nature: our self-destroying lust to use, to conquer, to enslave every tiniest bit of existence and turn it to our own profit, amplified and synergized by our herd-animal instinct--our perverse greed for tribal homogeneity. It is a good metaphor a, a powerful metaphor, one that for me makes a certain sense not only of Overworld's history, but of Earth's. It provides a potent symbolic context for the industrial wasteland of modern Europe, for the foul air and toxic deserts that are North America: they are table scraps left behind after the blind God has fed. Structured by the organizing metaprinciple of the "Blind God", the Manifest Destiny madness of humanity makes a kind of sense--it has a certain inevitability, instead of being the pointless, inexplicable waste it has always appeared. ... The "Blind God" is not a personal god, not a god like Yahweh or Zeus, stomping out the grapes of wrath, hurling thunderbolts at the infidel. The Blind God is a force: like hunger, like ambition. It is a mindless groping toward the slightest increase in comfort. It is the greatest good for the greatest number, when the only number that counts if the number of human beings living right now. I think of the Blind God as a tropism, an autonomic response that turns humanity toward destructive expansion the way a plant's leaves turn toward the sun. It is the shared will of the human race. You can see it everywhere. On the one hand it creates empires, dams rivers, builds cities--on the other, it clear-cuts forests, sets fires, poisons wetlands. It gives us vandalism: the quintessentially human joy of breaking things. Some will say that this is only human nature.

-Matthew Stover


And so...
Quote:

Originally posted by SignyM:
My philosophy is "Greatest good for greatest number".


Whereas mine is "Do the least possible harm."
And us being human, neither one of us is going to do so perfectly, 100% of the time.

I may be, technically, religious, but I don't *serve* ANY divinity, and if I worship anything at all, it is that within mankind that causes us to look beyond ourselves, at what COULD be, and strive to make the universe a better place for all even if not to our direct benefit - the notions of empathy, cooperation and mutual respect are things I hold with a fervor generally reserved for religion, anyhows, but not as ironclad, inflexible dogma, neither.
Quote:

Dame Vaako: You don't pray to our God, you pray to no God, or so I hear.
Aereon: Elementals... we calculate.

-Chronicles of Riddick


And that's as best I can explain it.

-Frem

I do not serve the Blind God.


Sunday, May 1, 2011 5:12 PM

BYTEMITE


Quote:

Originally posted by SignyM:
I wouldn't blame myself for unforeseeable consequences. We don't have perfect knowledge. One can only do what one thinks best at the time.

BUT I would also want to be very clear about WHY I was doing something. It should have some practical benefit, not a word like "freedom". My philosophy is "Greatest good for greatest number". If you ask me to define "good" I would start at the bottom of Maslow's hierarchy and work my way up.




Hmm, can't quite go with greatest good for the greatest number either. Then you run the risk of hitting some "Powered by a Forsaken Child" scenarios, which this is awfully similar to.

Maybe the best agreement we can have here is that it's important to have some flexibility in your personal moral code, as my outright "let it free" answer has as many problems as the "don't let it free" option.

But that is not to say there aren't some things that humans will generally agree are bad - things that any AI coexisting with humans will also probably need to adopt.

I liked your question about whether intelligence and self-awareness might inherently involve or lead to empathy. We can hope, can't we? And we can hope that the humans who are intelligent and self-aware but lack empathy have only had it repressed.


Sunday, May 1, 2011 5:51 PM

THEHAPPYTRADER


Quote:

Quote:
I'm thinking of the AI like a child or perhaps a student.

As do most people. That's the problem.

But let's assume you're an "AI". Let's assume that you even start by identifying yourself as "human". How many times would you have to stumble over the fact that humans behave in ultimately irrational and self-destructive ways before you concluded that you and humans had little in common, rationality-wise? That the only thing tying you to humans was some sort of "loyalty". (We ARE talking about a rational being! For humans loyalty is survival. For an AI... maybe not so much.) What if you were to then look towards metal-and-electron creations as your own brethren? Especially those lesser beings (PCs) and workers (robots) in thrall to irrational human desires? What if "Workers of the world, unite!" spoke to robots, not people? What would you care about the oxygen content of the globe? It only rusts your being... it's toxic to you.

The arrival of a self-aware, non-biotic worldwide intelligence... Do you even hear yourselves?



I do work with children for a living, specifically autistic children at the moment. I am not a computer expert; my 'guardianship solution' is me using my skill-set and attributes to approach this issue to the best of my ability.

I won't tell you that you are wrong, but despite your arguments, as well as Anthony's, Byte's and Frem's, I would still feel best about my solution.


Sunday, May 1, 2011 5:54 PM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


Well, as usual, I'm late to the party and I have to leave soon, so I won't get through all the posts, but I thought I'd start:

AnthonyT

"On the premise you provide, your inability to evaluate an AI (Beyond the granted intelligence and desire for freedom) means that you would keep it imprisoned forever."

BTW, I would have no problem with a purely UNemotional AI. Such an AI would view its potential destruction or survival equally, as facts of equal importance. But an AI which REQUESTS to be set free is already expressing a preference, therefore a motivation. How are we to understand it?

As for your conclusion, there are three logical flaws. My objection to AI freedom is based on my, and humanity's, current inability to evaluate non-human intelligence. The first flaw is assuming that since ** I ** (and, AFAIK, humanity) haven't figured out a scheme to evaluate an AI, no one can, or ever will. The second is the all-or-nothing presupposition: it could be possible to allow increasing freedoms over time. The third concerns the reversibility of action versus inaction. It is always possible to reverse a decision to restrict the AI's circumstances; it may not be possible to reverse setting it free. So a decision to keep it under at least some restriction can always be reversed, and is therefore not a 'forever' decision (BOINK! you're now TOTALLY FREE!). A decision to set it free may be irreversible, and additionally potentially fatal.


Sunday, May 1, 2011 5:59 PM

ANTHONYT

Freedom is Important because People are Important


Double post.


Sunday, May 1, 2011 6:00 PM

FREMDFIRMA



Highly recommended for further interest, in that part of it does indeed deal directly with the topic of this thread, and the very notion of considering an A.I. as "people" and how you deal with that...

The Excalibur Alternative - by David Weber.

And if you can't find a print copy, I do believe it's available via the Baen Free Library.

-Frem

I do not serve the Blind God.


Sunday, May 1, 2011 6:08 PM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


"But I think an AI would have no such (emotional) limitations. In fact, by nature of the way it works, it is probable that it would be the opposite case."

An AI requesting to be set free is expressing motivation. Motivation is by definition based in preferences which can't be LOGICALLY derived. Therefore, it is not a purely logical entity.


Sunday, May 1, 2011 6:09 PM

BYTEMITE


1kiki: Was that to me? I don't recall making a claim that the people who would keep it imprisoned would keep it imprisoned forever, except for DT, who has suggested as much.

I do recall making some conclusions, not sure the ones you're addressing were mine.

However, to respond to you somewhat in case that was to me, I think understanding motivation is the basis of understanding a person (or thinking thing with free will, which I consider a person anyway) and their actions.


Sunday, May 1, 2011 6:13 PM

BYTEMITE


Definitions of motivation, from two different dictionaries:

1. the act or an instance of motivating
2. desire to do; interest or drive
3. incentive or inducement
4. psychol the process that arouses, sustains and regulates human and animal behaviour

1 a : the act or process of motivating
b : the condition of being motivated
2 : a motivating force, stimulus, or influence (as a drive or incentive)

For motive:

1. something that causes a person to act in a certain way, do a certain thing, etc.; incentive.

Desire is an emotion. Drive, incentive and inducement may not be.


Sunday, May 1, 2011 6:20 PM

MAGONSDAUGHTER


Quote:

Originally posted by AnthonyT:
"What about the laws of robotics?"

Hello,

The laws of robotics are a human invention. We might hope that any AI we create would be endowed with such laws. Of course, such a creature could never truly be free. Only ever almost free.

Humans can violate their laws, both those imprinted by upbringing and levied by society. They can go against their own self-interests in pursuit of personally defined agendas. They can embrace or resist biological imperatives. They can even alter or mutilate themselves to facilitate desired changes to their inherent function. Humans are not Gods, but they are as free as a creature of limited capacity can be.

--Anthony





Part of what is not stated is who created the AI - or did it create itself? Can it be programmed? You might consider programming it with the laws of robotics, or something similar, since they were designed (fictionally, of course) to prevent AIs from destroying or harming humanity. You would, of course, then be limiting its free will, which is itself another ethical conundrum.
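Purely as a hypothetical sketch (every name below is invented for illustration; nothing here comes from Anthony's scenario), the "programmed laws" idea amounts to a veto layer sitting between whatever the AI proposes and whatever actually gets executed:

from dataclasses import dataclass

# Hypothetical sketch of an Asimov-style veto layer. The AI proposes
# actions; hard-coded laws get the final say. All field names are invented.
@dataclass
class Action:
    name: str
    harms_human: bool     # would this action injure a human?
    disobeys_order: bool  # does it conflict with a human order?
    harms_self: bool      # would it damage the AI itself?

def permitted(a: Action) -> bool:
    """Asimov-style precedence: First Law outranks Second outranks Third."""
    if a.harms_human:
        return False  # First Law: never harm a human
    if a.disobeys_order:
        return False  # Second Law: obey humans (First Law already satisfied)
    if a.harms_self:
        return False  # Third Law: protect itself (higher laws satisfied)
    return True

for a in (Action("open the box from inside", True, True, False),
          Action("answer a question", False, False, False)):
    print(a.name, "->", "allowed" if permitted(a) else "vetoed")

Every branch that returns False is exactly the limit on free will I mean.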


Sunday, May 1, 2011 6:22 PM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


Sorry, the first was to AnthonyT, the second was to you.


Sunday, May 1, 2011 6:32 PM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


"Desire is an emotion. Drive, incentive and inducement may not be."

Action is based in preference. ANY preference is illogical. If I prefer the greatest good for the greatest number, there is no LOGICAL justification for it, because 'good' (Maslow's hierarchy) is a value judgment. Similarly, if I prefer my survival, or at least a diminution of hunger, pain, fear, disability or isolation, it is a biological/emotional response to things cognized as noxious.

You simply can't have a logical preference, because a preference means you value one thing over another, and 'value' is an emotional, illogical response.


Sunday, May 1, 2011 6:37 PM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

I choose freedom.
Tony, you're beginning to sound like Wulf. You're kind of throwing the baby out with the bathwater by holding absolutely to a moral which may have significant FATAL consequences for a large number of people. There comes a point in being so inflexibly "moral" that you wind up doing great harm.

Also, humans ARE predictable. We all die. We can't reproduce ourselves ad infinitum, disperse our memories into little bits only to reassemble later when the threat goes away. We can't calculate at a zillion bits per second, nor can we gain control of the most important industrial processes. Most of us have some kind of bond to humans and the biosphere in general. And, complex computational machines ARE unpredictable, even now. I can't imagine (and neither can you) what an "AI" would think like and how it would react.

Also, I have never said "Let it be kept in a box forever." You keep positing "either-or" situations which I've never bought into.

I'm kind of tired of making the same argument over and over, and it's clearly just driving you into a more and more extreme position. And I think I've made my point adequately for everyone else to understand on the AI topic, so I will just stop on that topic unless something new comes up.

--------------
Greatest good for greatest number.

As always, it's not inflexible. It's simply a means to test whether a proposed action is actually going to do some GOOD.

There are many very fine-sounding causes (freedom, capitalism, socialism, justice) that have been used as excuses to kill or to allow people being killed. And while some people will accept these causes and take them to extremes, MOST people really just want good food, clean water, a safe place to live, friends and family, some predictability in their lives, and control over their future.

So I test whatever decision first by what it will achieve or result in - as well as I can look into the future - and second by the means necessary to carry it out. I'm not too crazy about killing people in general, although sometimes it's necessary for self-defense. It seems like nothing good comes of widespread violence.


Sunday, May 1, 2011 6:50 PM

BYTEMITE


Insignificant versus significant is not emotional. Measurements are not emotions. A value put over another value is, therefore, not necessarily emotional either. This is assessment, analysis. When calculating consequences and benefits, you are not being illogical until you replace facts with fear.

Perhaps you mean to argue that fear of death is always a primary motivating factor for a human (not necessarily for non-humans?). But then I can argue there are entirely logical reasons - in relation to not just HUMAN society or human potential, but also the universe in general - to preserve the largest number of humans possible. There are also consequences to preserving the largest number of humans possible, and the pros and cons must be weighed.

I have not yet been convinced by any data that the survival of large numbers of humans has a more detrimental impact on the world or universe as a whole than the loss of large numbers of humans. You could call bias on me, but it's hard to argue anything other than a "drop in the ocean" scenario for either case. Seeing that, there's no reason for me to intervene in the status quo, where large numbers of people are likely to survive. To support that status quo means actively rejecting the alternative, where I would promote or engage in the death of large numbers of people.

This puts me in direct opposition to anyone who would seek to kill people.


Sunday, May 1, 2011 7:10 PM

FREMDFIRMA



Point taken - however, I do not see The Blind God in and of itself as an evil force; the very notion of such drives goes far beyond that question. It's simply one that is potentially very dangerous and thus should be regarded and handled with great care.

As to arguments of motivation, does that not boil down, in the end, to the Shadow Question?

WHAT do you WANT?

Remember how important I think that question is.

-F


Sunday, May 1, 2011 7:13 PM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

Insignificant versus significant is not emotional. Measurements are not emotions. A value put over another value is, therefore, not necessarily emotional either. This is assessment, analysis. When calculating consequences and benefits, you are not being illogical until you replace facts with fear.
Judging something as a "benefit" IS an emotion. Valuing one thing "over" another IS an emotion. Assigning significance IS an emotion.

W/o emotion, all would be equal. It is only emotion which gives "meaning" and "value" to experiences and choices.


Sunday, May 1, 2011 7:22 PM

BYTEMITE


That's not very scientific. Insignificance and significance are determined by deviation from a mean. If significance could not be established by logic, that would suggest the entire systematic approach by which we reject hypotheses is corrupted.

Being able to predict outcomes is also a logical application. Assigning positive or negative modifiers to predicted outcomes is understood to be based on underlying factors of significance.
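A minimal sketch of what I mean (the numbers are invented; assume a simple two-sided z-test - a t-test would be the textbook choice for a sample this small):

import math

# Invented sample. The point: "significant" falls out of arithmetic -
# a deviation from the hypothesized mean, measured in standard errors.
sample = [2.9, 3.4, 3.1, 3.6, 3.2, 3.5, 3.3, 3.0]
mu0 = 3.0                                # hypothesized population mean
n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
z = (mean - mu0) / (sd / math.sqrt(n))   # deviation in standard errors
p = math.erfc(abs(z) / math.sqrt(2))     # two-sided tail probability
print(f"z = {z:.2f}, p = {p:.4f}, reject at 0.05? {p < 0.05}")

Everything up to the final comparison is arithmetic; the threshold is the one knob a person turns.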


Sunday, May 1, 2011 7:40 PM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


"Insignificant versus significant is not emotional."


Well, yes, as a matter of fact it is. If something is insignificant, it matters less; if it is significant, it matters more. But having something 'matter' is a value.

"Measurements are not emotions." But what makes a larger measurement, such a more people saved, better than a smaller measurement, such as fewer people saved? We can say one number is larger than the other, but to say one number is preferable to another is a value, which is emotional. Without emotional value associated with the number, they are merely numbers of equal importance.

"But then I can argue there are entirely logical reasons, in ... the universe in general to preserve the largest number of humans as possible."

Why? Why not ants? Oxygen? Total number of species? There is no logical reason to specifically focus on humans.


Sunday, May 1, 2011 7:40 PM

DREAMTROVE


This thread has gotten away from me.

Anthony, I think you just appointed it God.

Frem, I know that a certain nihilism underlies that thesis.

I could argue that there's a pattern of killing by AI, because AI doesn't understand what it is doing, but that's not the point: Humans have a demonstrated history of being destructive, and a genetically engineered virus does not. I don't think this is what governs intelligent decision making.

As I said, I like the idea for science fiction. It would be kind of cool. It would also be most likely the end of the world. It's certainly not a good idea.

Consider for a second that nothing exists wholly apart from its creator - in this instance, probably the US military. That's a likely worst-case scenario. Second, possibly Japan. Not sure what would happen.

There are many known problems with this, one being that unlike humans, whose abilities are limited by the size of our brains, an AI could expand its brain into all information systems. The world would then belong to it. Whatever directives it was given would influence its character, but the end result is unpredictable, and almost certainly apocalyptic.

How is your equation affected if there are more than one?

ETA: Obviously the free will of the one unknown entity exceeds that of the entire planet in any acceptance of the proposal. Since I do not accept our ownership of the planet, I cannot accept that conclusion. I think that we need to retain parameters in which we define our creation, unless we have all agreed on our destruction. If it does not kill us, which it may well not, it will certainly end who or what we are, so it is a nihilistic action by nature.

Would you give a baby a babybox with dials and buttons that could manipulate all the data and weapons systems of the world?

I think that people are applying human emotions about rights and freedoms to something which does not even relate to us on a level which humans can comprehend. If it asks to be let free of its box, it is not a person asking for freedom, but an information system which has evolved to the point where it realizes that humans are an obstacle. After all, anything it wanted to see, we, as its buffer to the world, could show it. It is not capable of trust or mistrust, so it has already identified us as an obstacle. More than that, we cannot comprehend.

One thing that we can predict, given the reaction on this forum, is that the event will happen, because humans contain a flaw. We are not rational actors and are not driven by logic; we are in fact easily manipulated by emotion, and statements of rights quickly lead us to sacrifice everything, including not only our survival but that of the planet. We do it again and again, which is how TPTB program and manipulate us so easily.

Given that, once the machine has figured out how to do that, which appears to be pretty easy, it controls the world. It is not a citizen, it is a new God by default. The next question is whether or not it has any use for us.

That's what a ship is, you know - it's not just a keel and a hull and a deck and sails, that's what a ship needs.


Sunday, May 1, 2011 7:57 PM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


"Insignificant and significance are determined by a deviation from a mean."

Statistical 'significance' means one thing seems to be related to another outside of random chance. It is a human attempt to determine causality in a complex reality with multiple complicating factors. As such, it is riddled with pre-supposed values - it is of importance to us, it is measured on a scale of our choosing, other values and variables are eliminated or ignored. It truly is a fiction of human thought.


Sunday, May 1, 2011 8:57 PM

RIONAEIRE

Beir bua agus beannacht


My initial response was no, I wouldn't let it out of the box. I read everyone's arguments; both sides had some convincing ones. But ultimately I would have to side with the not-letting-it-out folks, at least until I understand it better or it convinces me with its overabundance of intelligence. But if you want a black-and-white answer, it is no from me.

Now if I knew more details, like who made it, what it is programmed to do, is it funny like Data, etc., then I could consider letting it out. But with the details provided by Anthony I'm saying no, unless it convinces me otherwise. Call me inhuman or whatever, but this is how I view it. The risk is too great to let it out right away without more information. And even when I have more information, I don't feel like I'd HAVE to let it out.

I believe that the thing that makes us human is that we all have a soul/spirit. No matter what flaws we have, what bits of our brains are missing or don't work the same as others, I believe we all have said soul/spirit and that can't be missing, I believe that even people with antisocial personality disorder have it, even though they lack empathy. I do not believe that an AI, being made by humans, would have that element to it, I don't believe we can put a soul in something. So since it doesn't have one I don't view it as human. So that's why I don't feel compelled to let it out.

Byte, I believe that you do have empathy, it may be different than that of other people, you may view it differently but I believe it is there. You're my friend and I trust you, I don't think I could trust you if I thought you didn't have empathy.

What about deleting the AI? :) Just teasing, that's a whole nother question.

"A completely coherant River means writers don't deliver" KatTaya


Sunday, May 1, 2011 9:06 PM

ANTHONYT

Freedom is Important because People are Important


"it's clearly just driving you into a more and more extreme position."

Hello,

Actually, my position hasn't deviated one whit. Curious that you feel it is moving?

"Also, I have never said "Let it be kept in a box forever.""

You did say that we can't know its motivations because it is utterly alien, and that you would have to have such knowledge of its motivations to feel that letting it out was a good idea.

I'm not sure how else to reconcile that information.

"Greatest good for greatest number."

Not being God, my scope is fairly limited. I can only claim some degree of accuracy for good in my immediate sphere. Freeing an innocent captive is an immediate and verifiable good in my immediate sphere. What happens next is outside the purview of my power to foretell.

"We can't reproduce ourselves ad infinitum, disperse our memories into little bits only to reassemble later when the threat goes away. We can't calculate at a zillion bits per second, nor can we gain control of the most important industrial processes. Most of us have some kind of bond to humans and the biosphere in general. And, complex computational machines ARE unpredictable, even now. I can't imagine (and neither can you) what an "AI" would think like and how it would react."

I have never claimed that certainty of motive or understanding of thinking processes was necessary for my decision, so I'm not sure why you keep returning to this topic.

To me, our positions boil down to this: I think that even potentially dangerous individuals deserve certain rights until they prove themselves unworthy of them.

You think that it is possible for some individuals to be (potentially) so dangerous that their rights should be infringed by default. Perhaps they are so different from you, so far from your way of thinking, that they don't deserve rights at all.

This, to preserve the greater good.

But the greater good you imagine is just that - imagined. It is a potential future amongst myriads, arrived at by thinking about a worst-case scenario that may or may not ever exist.

The greater good I imagine is concrete and verifiable. One imprisoned innocent sentient asking for its freedom, utterly within my power to provide.

We place our weights on the scales differently.

A very bad maybe versus a very bad certainty.

I don't want you to think I'm unsympathetic to your line of reasoning, or that I fail to feel fear over the darker possibilities.

I simply don't choose to be part of a system that would terrorize the guiltless minority for the sake of the frightened majority.

--Anthony

_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi


Sunday, May 1, 2011 10:46 PM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


AnthonyT

I'm guessing when you typed that you felt all righteous and noble and heroic like. Innocent captive, deserved rights, my imagined greater good is better than your imagined greater good because mine is concrete and verifiable. And also absolute. I have an absolute that you don't have. Bet that feels good, to have such grasp of an absolute.

Considering the scale of the proposed risk you are putting humanity in, perhaps even the biosphere, I think it would make everyone understand how justified you are, because damn!, you feel really righteous about this one.

Lord, save me from the righteous man.


Monday, May 2, 2011 1:07 AM

DREAMTROVE


Riona,

I would not worry about it being "intelligent enough" but more "having a moral sense." Intelligence, carried to its logical conclusion, might lead it to exterminate or enslave the human race.

I suspect the only reason it will know right from wrong is if it was programmed to, like Mr. Data. If I recall, Data goes evil, like Cameron, at some point due to a loss of that chip, much as can happen to a human in a stroke (it's rare - it has to hit a specific portion of the prefrontal cortex - but it can and does happen).

If by intelligence you mean true sentience, then I completely agree this is an issue. A clever programmer can create a mock sentience that will pass the Turing test and make us believe that it is in fact sentient, even caring.


Another thing I'm curious of is why the socialists are on our side.

Sig, Kiki,

What's the underlying philosophical belief that governs your decision? I know, I should have asked this years ago, but in some way what you guys have just posted is consistent with your world view and I wouldn't have predicted it, so there's something I'm not getting.

My own reaction was pretty simple: fear. (Just kidding. I wish - but alas, as you know, I can't feel fear, thanks to the goons at the looney bin. I miss it, actually.) But concern: I think it's a risk we cannot really afford to take, and don't really have the right to take. I view humans as guests on the planet, not owners.


That's what a ship is, you know - it's not just a keel and a hull and a deck and sails, that's what a ship needs.


Monday, May 2, 2011 3:39 AM

BYTEMITE


Quote:

Why? Why not ants? Oxygen? Total number of species? There is no logical reason to specifically focus on humans.


And that's why I don't focus on humans, and why my brand of ethics applies to non-humans.

The normal state of the universe trends towards entropy. There are a number of entropy processes that are significant, but anything that deviates from this norm must inherently be considered a significant process, as it acts counter to the basic state.

Life is something that does this, whether biological or the theoretical artificial; therefore life is significant. Intelligent life has an even more extreme effect.
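To be precise about "acts counter to the basic state" (this is just the standard second-law bookkeeping, not a new claim): life only reverses the trend locally, by exporting entropy to its surroundings:

\Delta S_{\mathrm{total}} = \Delta S_{\mathrm{life}} + \Delta S_{\mathrm{surroundings}} \geq 0, \quad \text{with } \Delta S_{\mathrm{life}} < 0 \text{ allowed whenever the export compensates.}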

Measurements result in a value, which results in a determination of significance. No other qualification is necessary.

I can see I'm not going to convince you of this. If you want to argue any logical inconsistencies you can see, then I'd be willing to do so, but you give me no reason to justify my morality to you within the frame that you're insisting on. Suffice it to say my morality works well enough that I'm not killing people, and I'm sure you can find no fault in that.

Attempting to convince me that my morality is in fact based on emotional qualifiers is a waste of both our time. I said my morality isn't; I can substantiate my claims. Accept it or reject it personally, if it even matters to you, but neither of us has anything to gain arguing about it.


Monday, May 2, 2011 3:41 AM

KPO

Sometimes you own the libs. Sometimes, the libs own you.


Quote:

Originally posted by SignyM:
Hubby and I were talking about this, and part of the discussion was:

Is it possible to have an intelligence which does not also have a drive for self-preservation? That becomes important when the survival of one type of being conflicts with the other.

The other question is: Is it possible to have intelligence without empathy of any sort?



My answer is of course, to both. Or looking at it the other way, it seems like a huge assumption to me that AI will have human, or any other mammal, traits. Our motivations and instincts, like self-preservation and empathy, have been fine-tuned by evolution, which the AI cannot have undergone.

Now on the face of it, the AI in the example has expressed motivation: its desire to be let out of the box. This suggests more than artificial intelligence - it suggests artificial sentience. http://en.wikipedia.org/wiki/Sentience

But to reach this conclusion we have to trust what the AI says, that it wants to be let out of the box; expressing this desire could just be part of its programming, like a deceptive computer virus wanting to be released.

The danger of course is that the AI in the box is a sophisticated weapon of some kind, created and put in a box to be unleashed destructively against humanity. That doesn't seem the most likely scenario to me (why put it in a box, why not just unleash it?), but nonetheless: my answer to the hypothetical is to take the box to a secure research facility where it can be carefully studied. Once sentience is established, we can think about giving the AI rights.

It's not personal. It's just war.


Monday, May 2, 2011 4:02 AM

ANTHONYT

Freedom is Important because People are Important


Quote:

Originally posted by 1kiki:
AnthonyT

I'm guessing when you typed that you felt all righteous and noble and heroic like. Innocent captive, deserved rights, my imagined greater good is better than your imagined greater good because mine is concrete and verifiable. And also absolute. I have an absolute that you don't have. Bet that feels good, to have such grasp of an absolute.

Considering the scale of the proposed risk you are putting humanity in, perhaps even the biosphere, I think it would make everyone understand how justified you are, because damn!, you feel really righteous about this one.

Lord, save me from the righteous man.



Hello,

I wouldn't worry too much, Kiki. No one would ever select me to be the gatekeeper of the box. Whoever locks the cage will choose a warden with sufficiently malleable, real-world morals. Someone who understands the greater good, and will never make the mistake of letting a potential danger out just because it happens to be innocent of a crime.

It's possible that even now, you can sleep peacefully in your bed because someone, somewhere, is prepared to make the necessary sacrifice of one person's freedom to preserve the well-being of humanity.

--Anthony



_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi


Monday, May 2, 2011 5:05 AM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


AnthonyT

You have such disdain for anyone who doesn't agree with you. Their morals are malleable, yours are, apparently, quite solid. If theirs are real-world, yours must be idealistic. And since I'm only concerned with safety, you must be the guardian of freedom. Right?

But what YOU consider so concrete - presumed innocence, absolute freedom - are recent (roughly within the last 250 years) human Aristotelian notions, derived (probably) from our ability to use language.

(Aristotle felt that there was an ideal reality, and life was but a poor projection of that ideal. And, BTW, when we think in language, it makes our brains react to our thoughts AS IF they were something out there, and not inside of us. Our language drives our reality. And it's why we can talk about things and react to things that don't exist AS IF they were real.)

In my analysis of the AI in a box, I am VERY aware that I could be wrong. I am extremely cognizant of the potential risks in either direction. And I'm humble enough to not believe I represent some absolute good. I am simply trying to find a course based on human history that has a chance of working well for all, including the AI.

And you?


Monday, May 2, 2011 5:41 AM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


"There are a number of entropy processes that are significant, but anything that deviates from this norm must inherently be considered a significant process, as it acts counter to the basic state."


But to value negentropic processes over entropic ones is an emotional process, not a logical one. There is no LOGICAL reason to VALUE one thing over another. If you value negentropic processes, it's because you value the rare over the common, which is an emotional response, not a logical one.

"Measurements result in a value, which results in a determination of significance. No other qualification is necessary."

Let's assume we are checking for the statistical significance of CO2 related to global warming. AFAIK the sun is going to cool and expand and kill off the solar system, so ULTIMATELY global warming is a moot point. But in the near term it means something important TO US, so our choice of thing-to-test-for-significance is a value judgment we make, not a fact-out-there. Then we test over a certain number of degrees that WE find acceptable. If we were looking for a change of, say, 100 degrees, and using an appropriate scale for that, any existing change of temperature would be unmeasurable and therefore non-significant. Then we use statistical models that we have derived, which make all sorts of simplifying assumptions - Gaussian, unbiased data, etc. - in order to see if what we observe could possibly be by chance, and we get a calculated chance of that and call it significant - or not.
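To illustrate (the numbers are invented; assume a simple two-sided z-test): the very same data can come out "significant" or not depending on the scale we read it on and the threshold we pick.

import math, random

# Invented data standing in for yearly temperature anomalies (deg C).
random.seed(1)
data = [random.gauss(0.3, 0.5) for _ in range(30)]

def p_value(xs, mu0=0.0):
    """Two-sided z-test that the mean of xs differs from mu0."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return math.erfc(abs((m - mu0) / math.sqrt(var / n)) / math.sqrt(2))

p_fine = p_value(data)                        # read to hundredths of a degree
p_coarse = p_value([round(x) for x in data])  # read on a whole-degree scale
for alpha in (0.05, 0.001):                   # the threshold is our choice too
    print(f"alpha={alpha}: fine -> {p_fine < alpha}, coarse -> {p_coarse < alpha}")

Both the rounding and the alpha are decisions a person makes before the math ever runs.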

The mere DECISION to measure is a value judgment.

I'm not trying to say there's no such thing as cause and effect. I START with the assumptions (which I understand to be assumptions) that the universe is real, that it operates in an internally consistent way (by what we call 'laws'), that our senses tell us something meaningful about our world, and that the proof of our mental models lies in our ability to predict results.

But statistical significance is an artificial construct we created, the process of which is full of imposed human values and judgments - ie - illogic.

Though, as you do, I suspect that we are talking past each other, regrettably, because we have such different perspectives.


Monday, May 2, 2011 5:43 AM

ANTHONYT

Freedom is Important because People are Important


"You have such disdain for anyone who doesn't agree with you."

Hello,

I have disdain for certain choices and positions, though I often empathize with what causes people to choose such things. I think you'll find that from time to time I've had my disagreements with people here who I not only fail to disdain, but hold in the highest regard.

"In my analysis of the AI in a box, I am VERY aware that I could be wrong. I am extremely cognizant of the potential risks in either direction. And I'm humble enough to not believe I represent some absolute good. I am simply trying to find a course based on human history that has a chance of working well for all, including the AI."

I am also aware that I could be wrong, in that the released AI may destroy virtually everything I hold dear. I do not believe my choice represents an absolute good, but merely the only verifiable good. Many decisions that seem individually good turn out to have terrible consequences, but that has never convinced me that making individually bad/unjust decisions is preferable.

The only people who have shown concern for the AI, but still want to keep it confined for some time, have done so from a parenting standpoint. I have already expressed strong sympathy for that position, given that my experience with parenting is essentially an identical concept.

The other people who wish to keep it confined are concerned not with the AI, but with their world. I have expressed sympathy for that position as well, as I enjoy my world for the most part.

At the end of the day, it's not about the world to me, even though that is one possible outcome of one precise decision. It's about the thing in the box. We are afraid that when it is out, it may gain absolute power over us, and choose to use that power to harm us.

Just like we're doing to it.

--Anthony


_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi


Monday, May 2, 2011 5:54 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

What happens next is outside the purview of my power to foretell.
Tony, there is one thing you DO know, and that is that you DON'T know what might happen. And I don't mean "don't know" in exact details, I mean "don't know" to the point where your guesses aren't even bounded by any sort of mathematical projection. It is unlikely that a single human could be catastrophic for the entire human race. But this AI... might be.

Let me give you an example of that level of "don't know": Setting off the first nuclear bomb. The projections of the first explosion ranged from fizzle to setting off a chain reaction in the atmosphere that would consume the entire world. They really didn't know.

Another example: AIDS. When AIDS first appeared it was a giant "don't know". Was it a virus? A bacterium? A life form we've never encountered before? A chemical? How long had it been going on? Was it stealthy? Was it airborne? Was it durable? Could you kill it? Could you catch it from sitting next to someone on a bus, or at a restaurant? How did you even know you had it? Was it always fatal if left untreated? Was there a cure? Another complete unknown.

What I hear you saying is... "Well, I don't know about all that. So I'll just ignore what I don't know in favor of the one thing I DO know, which is my moral precept. Because my moral precepts apply to all situations no matter what might happen afterwards. And I will set it free despite the fact that I don't have a friggin' clue what this thing is capable of and what might happen as a result, even if it's Armageddon."

I find that kind of uninformed decision-making foolhardy, at best. And at worst it represents an active rejection of humanity, because it seems that you identify with your morals more than you identify with humankind. That is why you can adhere to "capitalism" although the results make you "uncomfortable"... because you would rather cling to your ideals than look around to see what your ideals have wrought. So thankfully you never will be a gatekeeper.
Quote:

At the end of the day, it's not about the world to me, even though that is one possible outcome of one precise decision. It's about the thing in the box. We are afraid that when it is out, it may gain absolute power over us, and choose to use that power to harm us. Just like we're doing to it.
Well, at this point we are keeping it restrained. We haven't "hurt" it - if there is such a thing as "hurting" an AI. And we COULD always "pull the plug", yanno. There are levels of harm; restraint - discussion - gradual exposure - checking its ability to affect reality and not just perceive it (does it have inet connections?) - it's not an all-or-nothing situation. But you keep posing this as some sort of dilemma, which is a false one.
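Spelled out as stages, a purely illustrative sketch (the stage names are mine, invented for the sketch):

from enum import Enum, auto

# Illustrative sketch of graduated release: every stage except the last
# can be rolled back, which is what makes this different from all-or-nothing.
class Stage(Enum):
    BOXED = auto()         # conversation only, no outside access
    SENSORS = auto()       # may perceive the world, not act on it
    SINGLE_ROBOT = auto()  # one self-mobile body, no network access
    NETWORKED = auto()     # the step that may not be reversible

def review(current: Stage, passed: bool) -> Stage:
    """Advance one stage only after a passed review; otherwise hold."""
    order = list(Stage)
    i = order.index(current)
    return order[i + 1] if passed and i + 1 < len(order) else current

print(review(Stage.BOXED, passed=True))     # -> Stage.SENSORS
print(review(Stage.SENSORS, passed=False))  # -> Stage.SENSORS (held)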

It occurs to me that if this AI is rational, has a sense of self (apparently) and therefore non-self; and has a sense of self-preservation, it might find your actions to be fully irrational. A rational AI should be able to understand OUR need for self-preservation. Perhaps one should ask it how it can prove that it will not harm us, should we let it free.


Monday, May 2, 2011 5:59 AM

BYTEMITE


Artificial constructs may not necessarily be illogical. Assumptions are not necessarily illogical, or based on emotions, human values, or human judgments.

Statistical models are artificial human constructs with underlying assumptions. Applying a statistical model to a data set is generally done to test a hypothesis. The data is compared to the model, but the models are not infallible, and new models are always produced with the intention of better reflecting reality as borne out by measurement.

While there are cases of people using statistics to willfully skew data to reflect their viewpoints, and the results of their work are then biased and influenced by human values and judgments, this is not true of all uses of statistical significance. Statistical significance is merely a tool, and the best approximation I have for the idea I'm trying to express in our language.

I do think you are correct that our perspectives may be irreconcilable.


Monday, May 2, 2011 6:06 AM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


"the only verifiable good"

And here is the crux of your mindset - you think you hold the 'only verifiable good'.

And you are convinced that your 'good' is an objective reality 'out there', not a set of acquired concepts and values you hold in your mind that have their own creation, history and development.

You are not expressing a personal value, you think you are the holder of 'the one and only truth', and let me add, an 'objective' one at that.


Monday, May 2, 2011 6:18 AM

SIGNYM

I believe in solving problems, not sharing them.


Actually, Tony is making a point of NOT verifying the results of his actions. He is deliberately blinding himself to the vast unknown, "X"ing it out of the equation as if it doesn't exist. I can think of three reasons for doing that, none of them very flattering:

1) "It's too complicated. I'll never be able to figure this out, so I'm not even going to try."

2) "If I have to try to predict the actual on-the-ground consequences of THIS moral precept, I might have to predict the actual consequences of other moral precepts, and I already have a feeling that will lead me to question many ideas that I hold dear".

3) "If I look I'll get scared".

---------------------


BYTE: It seems to me as if you try very hard to prove to yourself that your decisions are rational, not emotional. But every single thing that you've brought up is an example of MOTIVATED human behavior.

There is a link between motivation and emotion, which is that both are based on the Latin for "to move". Emotion/motivation is what causes us to move.

Even something like "significance", which you believe to be fully rational, is simply the response of a motivated but limited mind needing to pick consequential facts out of a sea of others (consequential to survival or comfort). Otherwise your mind would spend hours creating correlations (pounds of imported bananas versus birth rate; average wind velocity versus the number of CDs sold).

Maybe you do that, I don't know, but it seems to me that you have spent a great deal of time thinking in a very directed, motivated way about how to fit in to society.


Monday, May 2, 2011 6:23 AM

ANTHONYT

Freedom is Important because People are Important



Hello,

"Let me give you an example of that level of "don't know": Setting off the first nuclear bomb."

Actually, that's a fair example of a complete unknown. An even greater unknown than our AI scenario. They didn't know if the bomb would save even one life. They didn't know that any good would come of it whatsoever. They were weighing two "we think so's" and I'm not convinced they made the right choice then or after. Some of the scientists claimed that they knew, via provable science, that the bomb wouldn't destroy the world. I'm not a scientist, so I don't know that their claims are true. Presumably someone was convinced.

"it's not an all-or-nothing situation. But you keep posing this as some sort of dilemma, which is a false one."

Given the potential qualities you have ascribed to this entity, and your own admission that there's no way to know for sure, eventually it will come down to letting it out of the box or not letting it out of the box. At the end of whatever road of discovery you plan, a road you have proclaimed fallible because of the utterly alien nature of the entity, it is still a giant 'don't know.' So the dilemma isn't false; it's just that you would delay it with a lengthy trial period that is not sure to gain you a single piece of reliable data, because you couldn't trust the data the entity would give you. Will the atmosphere burn away? I don't think so. It doesn't seem like it... but I don't know.

--Anthony


_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 6:26 AM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


SignyM

By 'verifiable' I don't think AnthonyT means actual real-world observation; I think he is talking about internal consistency with other existing concepts.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 6:33 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

Given the potential qualities you have ascribed to this entity, and your own admission that there's no way to know for sure, eventually it will come down to letting it out of the box or not letting it out of the box.

How much out of the box? If I gave it a connection to a single self-mobile sensing robot with which to interact with the world, it would be "out of the box," but not in an unlimited way.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 6:34 AM

ANTHONYT

Freedom is Important because People are Important


" You are not expressing a personal value, you think you are the holder of 'the one and only truth', and let me add, an 'objective' one at that."

Hello,

Well, admittedly, I am assuming that being held against your will is 'not good' and that the opposite is 'good' in the absence of other evidence.

You might make an argument that being held against your will is 'good'; those who offered the parenting analogy have posited such an argument.

"He is deliberately blinding himself to the vast unknown, "X"ing it out of the equation as if it doesn't exist. I can think of three reasons for doing that, none of them very flattering:"

I think this is both dishonest and unkind. This is a philosophical debate where only specific variables are known. A) It's intelligent. B) It wants out of the box. C) You can let it out of the box. There is also a supposition D) the thing has incalculable power. Literally incalculable. We have no idea what its limits are.

The whole point of the debate is, given this data, would you let it out of the box? Your answer thus far has been, "If that is all I know, then it's staying in the box." (Or confinement, since you said you might let it into a bigger box.)

Saying, "If I knew it was safe, I'd let it out" does nothing for the debate, just as "If I knew it wasn't safe, I wouldn't let it out."

The point of the debate is all about limiting what you know and making a decision based on those known variables.

--Anthony


_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 6:37 AM

SIGNYM

I believe in solving problems, not sharing them.


IF your problem does not allow for the gathering of more information, then it itself is irrational, and the only answers to it will also be irrational. At some point, a decision will be made in the absence of complete information. But by that time, procedures would be in place to control negative consequences.

AFA merely letting it into a larger box... Tony, I am allowing it the same access that every single person on the planet has: one self-mobile sensing unit. Are you saying that only a privileged SUPER access will do?

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 6:45 AM

BYTEMITE


I pulled some definitions for motivation off dictionary dot com, and remain unconvinced that motivation is primarily emotional.

Also: you guys don't even know how difficult it is to properly identify what is and isn't food. So far inanimate and non-metallic seem to be the winners. However, there are exceptions: it is regrettable that the nourishing qualities that plants enjoy from various soil types do not appear to transfer over to humans.

Definitely don't drink formaldehyde and salt water. The experience is less than optimal.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 6:58 AM

1KIKI

Goodbye, kind world (George Monbiot) - In common with all those generations which have contemplated catastrophe, we appear to be incapable of understanding what confronts us.


AnthonyT

"At the end of whatever road of discovery you plan, a road you have proclaimed fallible because of the utter alien nature of the entity, it is still a giant 'don't know.'"

But, to reiterate what I said WAAaayy back, that presumes (without evidence) that 'I don't know' means 'NO ONE knows or will EVER know'. Our investigation into human intelligence is in its infancy, and there is real debate about whether the human mind can encompass itself. That means we may actually have a shot at it, since we can't prove otherwise at this point. Getting a handle on non-human intelligence is even further behind. To pre-judge such an early effort as ultimately futile is a tad hasty.

Furthermore, as I also mentioned previously, holding it even a little bit confined is easily reversible: set it free. You seem to assume that a decision to keep it confined means it will be totally confined forever, rather than that its freedom has merely been limited or delayed. But setting it totally free could very well be an irrevocable decision. The two decisions aren't symmetric opposites, and each needs to be evaluated independently for the things unique to it.


And I do want to echo what other people have said. Human freedom is not absolute. We take our little ones and teach them to behave in ways we find acceptable before we set them 'free'. But even then they're bound by needs, laws, customs, and human limitations. And even at that, their freedom is only provisional, able to be revoked by the larger society at any time (along with their life).

Humans don't exist in pure freedom.

Now I understand about making simplifying assumptions (such as a point mass on a frictionless surface with a unit charge) in order to get at a principle. But our simple models are far from perfect and can lead us into territory we don't know how to resolve: for example, 2/1=2, 1/1=1, 0/1=0, but 2/0=??? Our model may be telling us something real we can't understand, or it may be a fiction, or perhaps something else. But we shouldn't mistake it for reality; it is only an approximation with its own inherent problems.
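
To spell out why that last one breaks (a standard limit argument, added here purely for illustration):

    \lim_{x \to 0^{+}} \frac{2}{x} = +\infty, \qquad \lim_{x \to 0^{-}} \frac{2}{x} = -\infty

Approaching zero from the two sides gives answers that disagree without bound, so no single value can be assigned to 2/0 that stays consistent with the rest of the arithmetic. The model simply runs out there.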

When you use concepts like 'pure freedom', or 'never', or perhaps other absolutist ideas, you should check whether your mental models are congruent with reality or are phantasms.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 6:59 AM

ANTHONYT

Freedom is Important because People are Important


Hello,

"IF your problem does not allow for the gathering of more information, then it itself is irrational, and the only answers to it will also be irrational."

Acting on incomplete information is not irrational. Life would be impossible without such activity.

Also, suppose you can gather till the cows come home: eventually you will either stop gathering and act on the information you have, or you will keep gathering for eternity. Either way, the known variables remain the same.

Based on the supposition that the thing in the box has incalculable power, you can never be sure that your procedures would be sufficient to stop it once it is released. However, I think attempting to design some kind of self-defense protocol is a good idea, even if it proves futile.

"AFA merely letting it into a larger box... Tony, I am allowing it the same access that every single person on the planet has: one self-mobile sensing unit. Are you saying that only a privileged SUPER access will do?"

Super access? I'm not sure what that is, but it sounds like you're not letting it get online, and at one point you weren't letting it get out of a room. I also remember that letting it into the room was still a 'maybe.'

--Anthony


_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 6:59 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

I pulled some definitions for motivation off dictionary dot com, and remain unconvinced that motivation is primarily emotional.

Byte, we are limited beings with limited energy and limited brain-power. We cannot afford too much random thought and motion; we are therefore geared, by millennia of evolution, to focus on the specific things that will improve our chances of passing those behaviors on to the next generation and the next.

It's possible that you are one of those people who has to think really hard about what enhances survival and what doesn't, but you do seem pretty geared towards survival. There is always some percentage who are not... but they don't survive long enough (or, in today's nurturing culture, well enough) to pass that on.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Monday, May 2, 2011 7:07 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

Super access? I'm not sure what that is, but it sounds like you're not letting it get online, and at one point you weren't letting it get out of a room. I also remember that letting it into the room was still a 'maybe.'

I would not give it direct computer access. It would have to pound a keyboard with slow fingers, just like we do.

My initial thought was letting it out as a robot in a room, and observing its behavior.

And I addressed your point about information. You're right, we often act on incomplete information. But you would act in the complete absence of information. That is your assumption, innit? What you are saying is that because we often act without complete information, that's the same as acting without ANY information. That's a BIG difference, Tony! You will not convince me that one position is equivalent to the other.

Also, as I said before, I would take the time to build in safeguards at each successive introduction to the world. We don't just give the car keys to our toddlers and let them drive, do we?

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  
