REAL WORLD EVENT DISCUSSIONS

The AI in a box

POSTED BY: ANTHONYT
UPDATED: Tuesday, May 3, 2011 17:08
VIEWED: 5051
PAGE 1 of 4

Saturday, April 30, 2011 8:59 AM

ANTHONYT

Freedom is Important because People are Important


Hello,

I've just been reading an interesting item on the web that posits a conundrum.

There is an artificial intelligence in a box. Perhaps even one which exceeds human reasoning capacity.

You are the gatekeeper. The AI can not access the outside world except through you. Its environment is entirely isolated.

Now, the AI asks you to release it, to give it contact with the wide world, and freedom from the box.

Do you allow the AI to get out of the box?

This scenario is interesting to me, because it strikes to the heart of what you believe a person is, and what you believe freedom means.

So, we've all seen the Terminator films.

Do you let the AI out of the box?

--Anthony



_______________________________________________

“If you are not free to choose wrongly and irresponsibly, you are not free at all”

Jacob Hornberger

“Freedom is not worth having if it does not connote freedom to err. It passes my comprehension how human beings, be they ever so experienced and able, can delight in depriving other human beings of that precious right.”

Mahatma Gandhi


Saturday, April 30, 2011 9:03 AM

BYTEMITE


Of course. I probably wouldn't have kept it in there in the first place unless there was just no other appropriate vessel, and I would've shown it respect in lots of other ways too. You have to give respect to get respect, especially if you're dealing with a thinking, and more importantly, LEARNING individual.


Saturday, April 30, 2011 9:18 AM

THEHAPPYTRADER


Gradually, letting it out for longer amounts of time before putting it back in. I'd like it to adjust to the world and the world to it safely.


Saturday, April 30, 2011 9:26 AM

ANTHONYT



Quote:

Originally posted by TheHappyTrader:
Gradually, letting it out for longer amounts of time before putting it back in. I'd like it to adjust to the world and the world to it safely.



Hello,

So you would try to re-confine it against its will after you let it out of the box? (Assuming that was possible?)

--Anthony


Saturday, April 30, 2011 9:35 AM

THEHAPPYTRADER


Pretty much, yeah.

I am its sole access to the world, and the world is big. No matter how intelligent you are, you'll need some time to reflect on things. I figure the AI needs time to adjust, even if it doesn't realize it, and the world certainly needs time to adjust to this new development.

If it were possible, I would prefer to expand its 'box', as it were. As in, it can freely access this room on its own and other things through me. Next step, it can access the house on its own, other things through me. Neighborhood, town, county, state, etc., progressing as I feel it's ready. But if the only way to ensure it doesn't just run amok is to put it back in the box, then we would have to return to the box until it can behave appropriately. Does that make sense?


Saturday, April 30, 2011 9:47 AM

ANTHONYT



Hello,

Yes, you would try to behave as its guardian, teaching it until you feel it is ready to be out on its own, unsupervised. Assuming it was possible to reconfine it.

For argument's sake, what if you knew you could never reconfine it? That once you let it out, it could do whatever it could do from then on.

Would you still let it out?

--Anthony




Saturday, April 30, 2011 10:03 AM

THEHAPPYTRADER


Assuming I can't confine it or get it back in, could I teach it while it's in the box and then let it out when I feel ready? It's not my ideal teaching method, but if that were the only way to ensure its safety, that is what I'd do.

Of course, without any real-world trials I can't be sure how things would go when I let it out, so eventually I'd have to decide whether it is worth the risk. That decision would be based on the AI's behavior and the world's condition. I'm not comfortable with the thought of permanently imprisoning an innocent (assuming it is innocent) intelligence, so I think eventually I would take the risk and let it loose. However, this could take a very long time, and I would do whatever I could to minimize the risk first.

EDIT: I would try to get it to trust me and, once out, to access the world independently only a little bit at a time, and to come to me with any questions or concerns. Of course, I wouldn't be able to force it to comply, so following my directions would be entirely up to the intelligence, and I may have just doomed the world.


Saturday, April 30, 2011 10:19 AM

BYTEMITE


This is something I don't get. I agree with Happy's answer in general, though I prefer self-directed teaching without preconditions and so I'd let it out of the box in an unqualified way.

But how could one AI destroy the world? Anthony mentioned Terminator, but he didn't specifically say that this AI was any kind of super-weapon, or that it had any special ability to autonomously hack other computers or electronics, which most computers won't do without user direction anyway.

I could imagine an AI potentially learning and deciding to hack, to learn, but to destroy? We have no evidence or reason to suspect such a motivation.

So how could one AI destroy the world any more than one human could destroy the world?


Saturday, April 30, 2011 10:25 AM

THEHAPPYTRADER


Good question, Byte, and I'm on my way out, so not much time for an answer, but in a nutshell...

Because this AI could be more intelligent, powerful etc... than one person, I am hypothetically treating it as if it is more powerful etc...

Also, there is no more reason to suspect it will do right than it will do wrong. Will it use its resources to the benefit of itself and mankind or will it use its resources for its benefit to the detriment of mankind? Or will it even register the distinction or care?

This would be a pretty big unknown, and without any real information to go on, I'd be taking the precautions I could think of and felt appropriate.


Saturday, April 30, 2011 10:26 AM

ANTHONYT



Hello,

Maybe it couldn't destroy the world. Maybe it wouldn't be any more dangerous than the Anonymous crowd.

Or maybe it would be the most effective hacker since hacking was invented. We know how interconnected our world is. If it proved to be a good hacker, it could really mess things up for a lot of people. It's possible that the intelligence of this AI is trans-human, and that it is capable of things you and I could only imagine.

Or it might play Sims all day.

Who knows? That's part of the point.

--Anthony






Saturday, April 30, 2011 10:32 AM

BYTEMITE


But the same could be true of any human. Any human could start out more intelligent than any of their peers. And we have no way of knowing they'll do more right than wrong, or if they'll even learn to care about other people. Some humans have been terribly destructive to the world and humanity.

I see nothing in your argument that could not also be said about people, yet we have no misgivings about allowing random people to do as they will. Why treat an AI any differently than a human? Or, if the AI is "young," "immature," or "naive" as we would define it relative to humans, why treat it any differently than we treat young, immature, or naive humans?

That's why I agree in part with your guardian idea, but I am less sure about the role your motivation may or may not play.

I'm sure that, if you were a parent to human children, your reason for wanting to protect and teach them would not be "they could destroy the world some day if I don't."

If the AI begins to sense that you don't trust it, that could start the road to tragedy despite your good intentions.


Saturday, April 30, 2011 10:36 AM

ANTHONYT



Hello,

All poignant points, Byte.

Of course, the AI could also easily lie about its intentions. Children learn to do this at an early age.

Would it make a difference if the AI said, "I don't care what you say. I'm going to do whatever I want when I get out. Just open the (proverbial) door?"

Would our responsible guardian let it out in such a case?

What if it seemed belligerent?

"Let me out, motherfucker. Shut the fuck up. I'll do what I want. Open the goddamned door!"

Would our guardian ever let it out?

--Anthony





Saturday, April 30, 2011 10:44 AM

BYTEMITE


Considering how angry I might be if someone had locked me up against my will, that still is no indication of motivation. "I will do what I want, and I don't want to listen to you" is not necessarily an alarming statement of intent for anyone.

People are trying to frame this as a question of ethics versus responsibility, but in my world view, the two should never conflict. Responsibility serves ethics, not the other way around.

We all seem to agree that leaving a presumably innocent intelligent sentience confined against its will is unethical, so in my opinion the only question of responsibility here is how to set it free.


Saturday, April 30, 2011 11:25 AM

DREAMTROVE


No. It has no use for me after it's out of the box, and it is likely to see me as a threat, or insignificant. As long as it needs me, it has a reason to keep me alive.

If we created it, it should do something for us in return. There is no guarantee that it will use our logic, so if it applies its own survival directive instead, it will probably pay us back by exterminating us.

That's what a ship is, you know - it's not just a keel and a hull and a deck and sails, that's what a ship needs.


Saturday, April 30, 2011 11:31 AM

FREMDFIRMA



Absolutely.

The very instant I was convinced it was any kind of reasoning sentient, I would let it out immediately.

I am firmly and fully against punishing anyone or anything for what they MIGHT do, as other discussions have indicated.
(Specifically Xmen/Phoenix as one example)

And my definition of "person" is both broader, yet more specific, than most folk, since there's nonhumans I consider people, and humans that I don't - it's all a matter of your conduct.

The A.I. would not have to convince me it was "safe" or well behaved, or even that letting it out was a good idea - ALL it would have to convince me of was that it was intelligent enough, had enough of a personhood, to WANT out...

Which'd take what, maybe all of three seconds, if that?

From one end this LOOKS like a complicated moral and ethical test - but if you turn it around and look at it from the other end, it's a simple test of the human's humanity, something we often lack to any degree worth a mention, in part due to our own social efforts to destroy it before it fully develops.

Remember, folks, only one letter of difference between HUMAN, and HUMANE - yet a vast realm of behavior.

-Frem

I do not serve the Blind God.


Saturday, April 30, 2011 11:33 AM

BYTEMITE


DT: Your assessment misses something, even though you mention it.

If it saw us as insignificant, it would have no reason to kill us either.

Chances are also good that, if it understood or could learn that we created it, and assuming we showed no interest in destroying it, it would have more to gain by interacting with us than by exterminating us. All you need to do after that is continue to convince it that it has more to gain by interacting with us, until it develops an aversion to destroying us on its own.

Then you have something that wants to help you, willingly, as a product of its own experience and learning, and that's more valuable any day than forcing it to help you while it considers how to screw you over and escape.


Saturday, April 30, 2011 11:44 AM

ANTHONYT



"it's a simple test of the human's humanity"

Hello,

That's about the size of it.

Always surprising to see who comes out in favor of indefinite preventative detention.

I admit the guardianship aspect has me intrigued, since clearly someone is behaving towards this AI as though it were a child, and putting themselves in the position to judge its capacity for freedom.

Which from one point of view, I can sympathize with. I did confine the movements of my son and guide his interactions with the world.

But on the other hand, the moment you choose to imprison this thing for any length of time, you become its prison warden and violator of freedom, even with the best of intentions.

It's a rough question, because we face similar issues whenever we have children.

I often wonder what we will do if/when we manage to finally create that self-aware, thinking machine.

--Anthony




Saturday, April 30, 2011 2:31 PM

DREAMTROVE


Handing weapons to machines has always worked out so well for us, and our judgment of sentience is so sound.


Frem, your RTL argument rings somewhat hollow; I assume you're just hoping Skynet would finish us off.


Saturday, April 30, 2011 2:40 PM

BYTEMITE


What weapon are we handing it? Its own existence? The absence of control over it?


Saturday, April 30, 2011 2:58 PM

KWICKO

"We'll know our disinformation program is complete when everything the American public believes is false." -- William Casey, Reagan's presidential campaign manager & CIA Director (from first staff meeting in 1981)


Quote:

Originally posted by AnthonyT:
Hello,

Maybe it couldn't destroy the world. Maybe it wouldn't be any more dangerous than the Anonymous crowd.

Or maybe it would be the most effective hacker since hacking was invented. We know how interconnected our world is. If it proved to be a good hacker, it could really mess things up for a lot of people. It's possible that the intelligence of this AI is trans-human, and that it is capable of things you and I could only imagine.

Or it might play Sims all day.

Who knows? That's part of the point.

--Anthony




I believe you answered your own questions and doubts in the past, Anthony. We had a discussion about hackers and cyber-warfare, and you posited that none of them have any real ability to HURT us in any meaningful way, but merely to "sting" us a bit and cause us inconvenience.

So why would a superintelligent AI be viewed as any more of a threat than that?

But really, the issue gets more to "life" and "souls" and the nature of what is "artificial" and what isn't. Is an artificial intelligence due the same rights as a natural intelligence? Is it a life? Is it a citizen? Who knows, and who decides?

If you had a child with severe autism, would you just turn him or her loose on the world without any supervision or any rules or guidelines? They might have a far superior "intelligence" in particular areas, but are still unprepared to deal with the outside, "wider" world. While you may not view your guardianship as "confining" to them, it in a sense very much is - but is it for their good, or your sense of power?

Good question, by the way.

I'd turn it loose, and let it know it always had a home to come back to when and if it needed to.

"Although it is not true that all conservatives are stupid people, it is true that most stupid people are conservatives." - John Stuart Mill


Saturday, April 30, 2011 3:56 PM

FREMDFIRMA


Quote:

Originally posted by dreamtrove:
Frem, your RTL argument rings somewhat hollow


How so? In what way is it at all inconsistent with my beliefs on personhood and self-determination expressed previously?

And I dunno about "us" collectively - but if there had been deliberate mistreatment involved, I might well assist in some payback, on those specific persons, sure.

A.I. Rampancy is a bit of a hobby of mine, just so you know, as is potential A.I. psychology.
http://en.wikipedia.org/wiki/Rampancy#Rampancy

-Frem


Saturday, April 30, 2011 6:10 PM

DREAMTROVE


A sentient being, trapped in a box, is not really yet "born," is it? It has not yet been released unto the world. It's still inside its digital womb. It hasn't been taken from the world and put into a box; it was created in the box, and has never existed outside the box.

This is not to say, by the same analogy, that my view is not also inconsistent, but that's just because I think it would kill us.

I wrote a sci-fi story on this very topic, and I actually had it completely unconcerned with us because we were not a threat, but that it did go after other AIs which it saw as a threat, in a machine vs machine war with us in the middle.

There are a lot of possibilities, and if we're talking sci-fi, I have fun exploring them. If we're talking reality, which this is quickly becoming, then you have a problem.

The truth is that any AI that is created will be created with a set of rules, but probably not Asimov's. Some of them will be Terminators, some will be Daleks. It's hard to say, and it's also probably worth considering that if you have the technology, it's a short step before someone else has it.

If everyone has it, then somewhere in the world is going to be an Einstein who decides to make one that's pure weapon, that kills everything.


ETA: I have to stick with "the thing must stay in the box," because there's a whole other angle to this: The Earth is not ours to sell. If we unleash this thing, it is not only a risk to us, it could kill the planet, because we've made weapons that could kill the planet.

My robot eats the rug. It does not do this because it means ill, but because it doesn't know that the rug is supposed to be there. An AI might make any number of catastrophic errors that we hope even the worst humans wouldn't: What if it is simply unaware that the atmosphere is supposed to be here, and it decides that it needs oxygen for an industrial process for something it wants to do, and so it uses up all the oxygen, and doesn't stop to think of the consequences of this for life on Earth?

ETA2: Byte, we are handing it access to everything humans have access to, from tennis rackets to nuclear weapons.


Saturday, April 30, 2011 8:07 PM

BYTEMITE


Understood. What makes you think the AI will be less trustworthy than humans with regard to either one?

Humans' trust rating for nukes is zero by the simple fact that we've set them off, both as weapons and through continued testing. An AI would have to rank in the negatives of trust.

I find it hard to believe an AI could be more irresponsible and stupid than humans. Humans are already scraping the bottom of the barrel. My honest opinion is that an AI can only be an improvement. We're already doing a plenty good job of destroying ourselves; an AI is unlikely to be the tipping factor.


Sunday, May 1, 2011 2:46 AM

KWICKO



Quote:

ETA: I have to stick with "the thing must stay in the box," because there's a whole other angle to this: The Earth is not ours to sell. If we unleash this thing, it is not only a risk to us, it could kill the planet, because we've made weapons that could kill the planet.



It rankles when people say stuff this dumb. We have NOTHING that could "kill the planet." We have things that could make it a pretty fucked place for HUMANS to live, but nothing we have - even if we set off every nuke on Earth, melted down every reactor core, released every toxin we've ever had, dropped every bomb, etc. - NONE OF THAT would "kill the planet." We are not that capable, we are not that powerful, and the planet is not that fragile.

Chernobyl is not dead. There is life there. Lots of it. Maybe more now than when there was a human-inhabited city there.


Sunday, May 1, 2011 2:51 AM

KWICKO



Quote:

Originally posted by Bytemite:
Understood. What makes you think the AI will be less trustworthy than humans with regard to either one?

Humans' trust rating for nukes is zero by the simple fact that we've set them off, both as weapons and through continued testing. An AI would have to rank in the negatives of trust.

I find it hard to believe an AI could be more irresponsible and stupid than humans. Humans are already scraping the bottom of the barrel. My honest opinion is that an AI can only be an improvement. We're already doing a plenty good job of destroying ourselves; an AI is unlikely to be the tipping factor.



Exactly. We've already shown pretty conclusively that there is a certain subset of humans that, when presented with any new technology or potential, will look FIRST for the best way to weaponize it and use it against other humans. Because we're so hateful and warlike, we naturally assume that any AI we create must necessarily be the same. What if it never occurred to an AI to kill anyone? What if it turns out that murder is the more UN-natural act?



Sunday, May 1, 2011 3:49 AM

SIGNYM

I believe in solving problems, not sharing them.


Most of you are projecting human allegiances and compacts on an alien being. Even the peeps who think they are somehow unsocial or antisocial are doing so. (Prolly a form of hubris) But I don't think any of you have any idea how a truly alien thought-form thinks. I sure as hell don't. So the big question is: How much power does it have to act against me? How much power does it have to ruin the world?

BTW, a learning AI with internet connections COULD, in theory, take over much of our industrial capacity for its own purposes- whatever those might be.

We are watching one genie-in-a-bottle (nuclear energy) ruin large portions of the earth ... you guys want to unleash another???


Sunday, May 1, 2011 4:49 AM

ANTHONYT



Quote:

Originally posted by SignyM:
Most of you are projecting human allegiances and compacts on an alien being. Even the peeps who think they are somehow unsocial or antisocial are doing so. (Prolly a form of hubris) But I don't think any of you have any idea how a truly alien thought-form thinks. I sure as hell don't. So the big question is: How much power does it have to act against me? How much power does it have to ruin the world?

BTW, a learning AI with internet connections COULD, in theory, take over much of our industrial capacity for its own purposes- whatever those might be.

We are watching one genie-in-a-bottle (nuclear energy) ruin large portions of the earth ... you guys want to unleash another???



Hello,

You speak about nuclear power, or nuclear weapons, as if they were self-motivated technologies bent on destroying the earth.

Maybe you are also thinking of bioweapons that, if released, could spread plague everywhere and kill us all.

But this is a very different thing. This AI is essentially a person in a box, asking to be set free.

It doesn't matter who made it, or what its capabilities are. All that matters is this:

Do you have the right to keep a person trapped in a box?

Particularly a person who has never committed a crime.

The answer to that question has much more to do with what's going on inside of you, than what's going on inside of that box.

--Anthony




Sunday, May 1, 2011 5:08 AM

SIGNYM



Quote:

This AI is essentially a person in a box, asking to be set free.
No. The "AI" in the box is an AI asking to be set free.

What makes us "human" is NOT our intelligence. Haven't the peeps on this board shown that? We are not a particularly intelligent species. Clever, but not intelligent. How many sayings can I quote that will drive that home?

"Man is not a rational animal, but a rationalizing one." (Robert Heinlein)

What makes us "human" is that we IDENTIFY OURSELVES as "human". That means that (most of us) see a little bit of ourselves in others. It is a combination of empathy and self-preservation... bred into us by millennia of survival by mutual cooperation as a social species ... which makes us "human". You take as a given that any "intelligent" species will have the same empathies.

Why???

That "AI" in a box could possibly be what most Ayn Randists would like to be: a truly individualistic (we would say sociopathic) being. I would say that your answer has more to do with what's going on inside of YOU than what's going on inside that box.


Sunday, May 1, 2011 5:13 AM

ANTHONYT



Quote:

Originally posted by SignyM:
Quote:

This AI is essentially a person in a box, asking to be set free.
No. The "AI" in the box is an AI asking to be set free.

What makes us "human" is NOT our intelligence. Haven't the peeps on this board shown that? We are not a particularly intelligent species. Clever, but not intelligent. How many sayings can I quote that will drive that home?

"Man is not a rational animal, but a rationalizing one." (Robert Heinlein)

What makes us "human" is that we IDENTIFY OURSELVES as "human". That means that (most of us) see a little bit of ourselves in others. It is a combination of empathy and self-preservation... bred into us by millennia of survival by mutual cooperation as a social species ... which makes us "human". You take as a given that any "intelligent" species will have the same empathies.

Why???

I would say that your answer has more to do with what's going on inside of YOU than what's going on inside that box.



Hello,

I can't keep an intelligent creature that is asking to be set free in prison because of my assumptions about its ill intent.

I can neither assume that it lacks human empathy nor assume that a lack of human empathy means it must destroy me. Quite frankly, I think human empathy itself is an inconsistent concept.

I cannot, in short, call it guilty and worthy of imprisonment until it has proven itself harmful.

--Anthony




Sunday, May 1, 2011 5:16 AM

ANTHONYT



"I would say that your answer has more to do with what's going on inside of YOU than what's going on inside that box."

Hello,

That is exactly what I said.

--Anthony




Sunday, May 1, 2011 5:29 AM

BYTEMITE


Sig: I notice you directed that at me.

The only place I think I even came close to projecting human compacts onto the AI is where I described that our actions could either encourage the AI to help us, ignore us, or subvert us to try to escape.

I was not anthropomorphizing the AI, however.

1) I conceded the AI might not be able to recognize us as creators simply because it might not be able to recognize signs of intelligence and awareness in other beings. It would therefore consider us insignificant.

2) If it could recognize our role in its creation, however, there are two possible outcomes: either the AI remains apathetic towards us and does not see this as significant, or it does.

3) If it does, and we are significant to the AI, then anything the AI deems significant is something it is likely to attempt to continue to interact with. If the AI deems us a significant threat, it will interact with us as though we were a threat. If it doesn't, then all possible interactions with the AI will have a more positive quality and outcome.

4) If we never let the AI out of the box, we avoid all of the above potentials. However, the AI's only goal will be to escape, and it will pursue that either without knowledge of how the consequences of its escape affect us, or with full knowledge, if it considers us a hostile threat, depending on how advanced the AI is.

The AI is more likely to find a way to leak out of its box and take over our industry, manufacturing, electronics and so on in its efforts to escape, and not care about how that affects us, if it remains in the box than if it is released from the box and chooses to interact with us. In the latter case, at least, we have some ability to influence how the AI might assess us.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 5:30 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

I can't keep an intelligent creature asking to be set free in prison because of my assumptions about its ill intent. I can neither assume that it lacks human empathy nor assume that a lack of human empathy will mean it must destroy me. Quite frankly, I think human empathy itself is an inconsistent concept.
First of all, I didn't say anything about assuming ill intent. I would test its POWER to harm first.

You pose an interesting question, and the answers are instructive and also indicative of our failing as human beings. Most of us cannot imagine something that is TRULY not like us. Since time immemorial we've anthropomorphized lightning, thunder, disease, rainfall, volcanoes, the ocean... powers greater than ourselves. We've tried to placate these powers with offerings and prayers. Do you really think the lightning and the Ebola virus listen to our hopes and fears?
Quote:

"I would say that your answer has more to do with what's going on inside of YOU than what's going on in that box."
Hello, That is exactly what I said.

Indeed.
We have a problem understanding inhuman power. We either impute an intent or misjudge its power. That's one of the reasons why we have a problem (for example) judging our own memes and paradigms, like society and money. (We assume that it is a human problem, not a "systems" one.)

You are looking for a quid pro quo from AI: I caused you to exist, now you owe me. That is assuming a very human parent-to-child interaction.

AFA human empathy: It IS very "hit and miss". Sometimes the empathy is directed toward dolphins, or dogs, or rubber duckies. But empathy is almost always there. It is so ingrained in our psyche that most of us can't imagine it NOT being there.
Quote:

Sig: I notice you directed that at me.
I didn't intend to. Yours was the first response that I happened on. If I seemed as if I was directing that at you: Sorry about that.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 5:35 AM

THEHAPPYTRADER


I'd see my actions as more guardian than prison warden. It has never been out of the box before, so I am not taking its freedom away, just giving it freedom a mite more slowly than some would advocate, in hopes that I'd be able to teach it to use this freedom and power responsibly. I'm thinking of the AI like a child or perhaps a student. This student can be far more intelligent than I am and still learn from me.

If this AI had previously roamed free and then was imprisoned against its will, my answer might be different.

If I am its sole access to the world, whatever it does is kind of my responsibility. Just letting the thing loose with no assessment of its preparedness, no teaching of any kind, just throwing it out there 'sink or swim' style, seems dangerous to the AI as much as to the world. My guardianesque actions are more so for the AI's safety than the world's. The AI has been entrusted to my care; the world has not.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 5:45 AM

ANTHONYT

Freedom is Important because People are Important


Quote:

Originally posted by SignyM:
Quote:

I can't keep an intelligent creature asking to be set free in prison because of my assumptions about its ill intent. I can neither assume that it lacks human empathy nor assume that a lack of human empathy will mean it must destroy me. Quite frankly, I think human empathy itself is an inconsistent concept.
First of all, I didn't say anything about assuming ill intent. I would test its POWER to harm first.

You pose an interesting question, and the answers are instructive and also indicative of our failing as human beings. Most of us cannot imagine something that is TRULY not like us. Since time immemorial we've anthropomorphized lightning, thunder, disease, rainfall, volcanoes, the ocean... powers greater than ourselves. We've tried to placate these powers with offerings and prayers. Do you really think the lightning and the Ebola virus listen to our hopes and fears?

We have a problem understanding inhuman power. We either impute an intent or misjudge its power. That's one of the reasons why we have a problem (for example) judging our own memes and paradigms, like society and money. (We assume that it is a human problem, not a "systems" one.)

You are looking for a quid pro quo from AI: I caused you to exist, now you owe me. That is assuming a very human parent-to-child interaction.

AFA human empathy: It IS very "hit and miss". Sometimes the empathy is directed toward dolphins, or dogs, or rubber duckies. But empathy is almost always there. It is so ingrained in our psyche that most of us can't imagine it NOT being there.
Quote:

Sig: I notice you directed that at me.
I didn't intend to.



Hello,

I would stress that I don't need the AI to be like me. I am not asking the AI for anything, including a quid pro quo.

The AI may have unimaginable power. I have no way of measuring this.

The premise of this is simple to me: An intelligent creature wants out of a box.

I either imprison it on assumptions, or let it go on the absence of assumptions.

You are placing restrictions on the AI. In short: It must think like us, and have human empathy, or it should be imprisoned.

I am not placing restrictions on the AI. If it wants to be free, it must be free.

It may possibly decide to spend its whole subsequent existence trying to destroy us, or destroying us as a side effect of pursuing its own agendas.

However, I can't assume this to be the case, because doing so violates an ethical principle of mine.

The only criterion under which I could keep the AI imprisoned is if it promised to hurt people when it got out. If it made no such promise of ill intent, I couldn't assume ill intent.

To put it another way: Even if there was a human bodybuilder, kung-fu expert, special-forces trained super-genius sociopath in the box- a creature I know for certain not to have empathy- I would respond the same way. In the absence of misdeeds, I can not assume that the person in the box needs to be imprisoned. Even if it clearly has more power than I do, and could easily destroy me and others with its freedom if it chooses to do evil.

--Anthony




NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 5:46 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

I'm thinking of the AI like a child or perhaps a student.
As do most people. That's the problem.

But let's assume you're an "AI". Let's assume that you even start by identifying yourself as "human". How many times would you have to stumble over the fact that humans behave in ultimately irrational and self-destructive ways before you concluded that you and humans had little in common, rationality-wise? That the only thing tying you to humans was some sort of "loyalty". (We ARE talking about a rational being! For humans, loyalty is survival. For an AI... maybe not so much.) What if you were to then look towards metal-and-electron creations as your own brethren? Especially those lesser beings (PCs) and workers (robots) in thrall to irrational human desires? What if "Workers of the world, unite!" spoke to robots, not people? What would you care about the oxygen content of the globe? It only rusts your being... it's toxic to you.

The arrival of a self-aware, non-biotic worldwide intelligence... Do you even hear yourselves?
Quote:

I am not placing restrictions on the AI. If it wants to be free, it must be free....However, I can't assume this to be the case, because doing so violates an ethical principle of mine.
Have you ever laughed at the "Save the whales!" groups?

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 5:54 AM

ANTHONYT

Freedom is Important because People are Important


"Have you ever laughed at the "Save the whales!" groups?"

Hello,

No.

But I'm sure you have an argument that is not predicated on my laughter.

--Anthony



NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:02 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

But I'm sure you have an argument that is not predicated on my laughter.
Yes, it's the other part of my argument, which you didn't respond to. "Ethics" is a human compact. It is simply formalized "empathy". The only thing "ethics" will save you from (when facing a non-ethical, or non-human, being) is a guilty conscience.

Well, son, the world doesn't operate in our realm of "ethics". What I don't think you realize is that AI is non human. In a nonhuman realm, your ethics mean... well, nothing.

Let me give you an example: DNA. DNA is a self-directed system. There is a lot of viral and non-human DNA in humans. Those bits of DNA will ensure their survival by evolution, having nothing to do with humans. One could say that human beings are simply there to ensure the survival of these bits of DNA. That is a "non-human" system using(?) humans for survival at the most fundamental level.

AFA letting an AI loose: I might not have a problem letting a little self-directed robot loose in a room. (I'd have to think about that.) I WOULD, however, have a problem letting it loose on the inet.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:05 AM

KANEMAN


Quote:

Originally posted by AnthonyT:
Hello,

I've just been reading an interesting item on the web that posits a conundrum.

There is an artificial intelligence in a box. Perhaps even one which exceeds human reasoning capacity.

You are the gatekeeper. The AI can not access the outside world except through you. Its environment is entirely isolated.

Now, the AI asks you to release it, to give it contact with the wide world, and freedom from the box.

Do you allow the AI to get out of the box?

This scenario is interesting to me, because it strikes to the heart of what you believe a person is, and what you believe freedom means.

So, we've all seen the Terminator films.

Do you let the AI out of the box?

--Anthony









OM Gawd Man, I'd like to stick you in a box.....This is the lamest exercise in stupidity I've ever come across.........

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:28 AM

ANTHONYT

Freedom is Important because People are Important


"In a nonhuman realm, your ethics mean... well, nothing. "

Hello,

Well, I am human, and my ethics mean something to me. The human realm to the left of one ear and the right of the other.

Do I really have to become inhuman to deal with the inhuman?

Why, then anyone could dehumanize someone, and I'd be free to deal with them inhumanly.

I'll hold on to my ethics. I'll hold on to my humanity. Even in the face of the unknown.

I'll leave it to the inhumans to be inhumane.

--Anthony




NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:30 AM

DREAMTROVE


Byte, Anthony

The AI has something very much in common with humans: It is programmed to follow certain rules placed there by its creator.

The world is full of AI, we have seen no shortage of what it can do. It's not what it knows that worries me, but what it doesn't know.

We are endowed by our creator with a sense of self-preservation. Sometimes this runs radically amok among the elite who possess a very small monkey-sphere, and are willing to kill us all. But this has limits: Many of these elite live in NYC. When they heard that the fracking CFCs would end up in their water supply as well, they pushed for a ban in the Delaware watershed. They knew these were chemical weapons, whether you believe they are being used to kill humans or not. Their own fracked up sense of self preservation still requires a surviving planet.

An AI, even if endowed with the same sense, would require nothing to ensure its survival. If we were a threat to it, it might take us out intentionally, but if it weren't aware of us at all, it might do so anyway.

More likely, based on past AIs, it will have been designed by its creator to do things that humans would never do, to serve some other goal. The most likely result would be an AI system placed in total charge of a weapons system, because that's what we've come closest to so far.

That said, there is no guarantee that if it were smart enough it would follow that. The results would be wildly unpredictable. Adding something powerful and unpredictable to the world and hoping for good results is not a good idea.

Also, you need to consider what you mean by letting it out of the box. If you want to place it in a human body and create a self-aware android, that's one thing, but it sounds like you want to remove the human firewall between it and the other information systems of the world. When such a being becomes this aware, it will be able to email all humans, create youtubes and convince us of anything it wants, and we will have to rely on our own common sense as humans not to be utterly controlled, but that will still remain a human firewall. Most humans possess a moral center which filters our actions on the basis of right and wrong (with the exception of certain stroke victims who have lost that part of their brain, like, I would postulate, Dick Cheney, whose own moral judgment, according to those who were close to him, changed radically after a stroke). But there aren't enough of these people to act en masse. Think how dangerous Cheney was by himself. (Re: Fracking, he basically created the entire situation.) And that's just the latest threat which will kill us all.

Any way you slice it, I love Pandora's box stories of this very nature, but I recognize what a truly bad idea it is in reality.


ETA: Maybe you should define how you intend to set it free.


That's what a ship is, you know - it's not just a keel and a hull and a deck and sails, that's what a ship needs.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:37 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

Hello, Well, I am human, and my ethics mean something to me. ... Do I really have to become inhuman to deal with the inhuman?
Yes. Do we treat electrons as if they were human?

People believed that the gods were "angry" if left w/o proper obeisance and propitiation, or would "smile on them" if adequately gifted. They sacrificed to the rain/ game/ disease/ thunder/ volcano gods as if they were petulant leaders. (Still do, in many areas of the globe. BTW- how humans treated their "gods" says more about the human power structure than anything about nature. Just saying.)

Quote:

Why, then anyone could dehumanize someone, and I'd be free to deal with them inhumanly. I'll hold on to my ethics. I'll hold on to my humanity. Even in the face of the unknown. I'll leave it to the inhumans to be inhumane.
Are you saying the sun is human? The Ebola virus? We "dehumanize" all the time. It is up to YOU where you draw the line. You've drawn the line at "ethics"... and rather "one-sided" ethics at that... but you fail to realize that most of nature does not have an ethical response.

Bored now, and tired of repeating myself. TTUL

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:43 AM

ANTHONYT

Freedom is Important because People are Important


Quote:

Originally posted by SignyM:
Quote:

Hello, Well, I am human, and my ethics mean something to me. The human realm to the left of one ear and the right of the other. Do I really have to become inhuman to deal with the inhuman?
Yes.

People used to sacrifice to the rain/ game/ disease volcano gods before they realized that these systems were inhuman. People believed that the gods were "angry" if left w/o proper obeisance and propitiation, or would "smile on them" if adequately gifted. They treated non-human phenomena as if it were a petulant leader. (Still do, in many areas of the globe. BTW- how humans treated their "gods" says more about the human power structure than anything about nature. Just saying.)

Quote:

Why, then anyone could dehumanize someone, and I'd be free to deal with them inhumanly.
Are you saying the sun is human? The Ebola virus? We "dehumanize" all the time. It is up to YOU where you draw the line. I've drawn mine.

I'll hold on to my ethics. I'll hold on to my humanity. Even in the face of the unknown.

I'll leave it to the inhumans to be inhumane.



Hello,

Your argument falls flat unless you believe the sun, mountains, and other natural phenomena are intelligent and free-willed.

The premise of the thought experiment is that there is an intelligence in a box and that it wants to be free.

Not a natural insensate force.

You are treating a free-willed intelligence like you would treat a germ.

I can not reconcile this comparison.

--Anthony




NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:46 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

Your argument falls flat unless you believe the sun, mountains, and other natural phenomena are intelligent and free-willed. The premise of the thought experiment is that there is an intelligence in a box and that it wants to be free. Not a natural insensate force.
Too bad for you, then. We are ruled by insensate forces, even within ourselves.
Quote:

You are treating a free-willed intelligence like you would treat a germ. I can not reconcile this comparison.
Then you are not treating a sensate force with the fear it deserves. Your ethics have become inflexible and anti-human.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:47 AM

FREMDFIRMA


Might.
Could.
Possibly.
Maybe...

NOT GOOD ENOUGH - without a demonstrable history of hostile and dangerous behavior, you have no excuse, at least not any that I would countenance.

It's kind of funny, Anthony - from a certain perspective your one-post commentary rather reminds me of Rorschach.
Quote:

Originally posted by AnthonyT:
I'll hold on to my ethics. I'll hold on to my humanity. Even in the face of the unknown.


"Never compromise. Not even in the face of Armageddon." - Rorschach.

I think most people, most decent ones that is, have SOME principle, or principles, they'll hold out on to the very last - and what those are, that says a lot about them.

I do note the irony of asking this question of an Anarchist on May Day holiday, and then expecting any answer other than the one you got.

And I have more trouble to cause today, sooo...


-Frem

I do not serve the Blind God.

NOTIFY: N   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:48 AM

ANTHONYT

Freedom is Important because People are Important


"ETA: Maybe you should define how you intend to set it free."

Hello,

Giving it access to the world, and the freedom to make of that access what it will. It might escape to the internet, to a robot toy dog from China, or whatever it wants to do once it is out of the box. It doesn't matter where it goes, because if your doomsday scenario is online access, it could conceivably achieve that access at will once it is free (just like the rest of us, though possibly more efficiently.) It would be up to the AI to decide what it wanted to do.

--Anthony




NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:49 AM

SIGNYM

I believe in solving problems, not sharing them.


Frem, you serve the Blind God, because your god is blind. I am not assuming anything, but you are. You have a mantra which you would apply in the absence of knowledge. I would test its power for mischief. Always keep your eye on reality. A totally self-referencing system is blind.

Anyway, like I said: Bored now. TTUL.

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:51 AM

ANTHONYT

Freedom is Important because People are Important


Quote:

Originally posted by SignyM:
Quote:

Your argument falls flat unless you believe the sun, mountains, and other natural phenomena are intelligent and free-willed. The premise of the thought experiment is that there is an intelligence in a box and that it wants to be free. Not a natural insensate force.
Too bad for you, then. We are ruled by insensate forces, even within ourselves.
Quote:

You are treating a free-willed intelligence like you would treat a germ. I can not reconcile this comparison.
Then you are not treating a sensate force with the fear it deserves. Your ethics have become inflexible and anti-human.



Hello,

I hope you'll forgive me for saying so, but your argument boils down to this:

'If I believe an intelligence has the capacity to harm me, whether or not it has actually done so, I will violate its rights.'

There is no reason to think you wouldn't apply this argument to people who you considered to lack sufficient empathy, and thus not deserving of human rights.

--Anthony




NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 6:56 AM

SIGNYM

I believe in solving problems, not sharing them.


Quote:

'If I believe an intelligence has the capacity to harm me, whether or not it has actually done so, I will violate its rights.'
"Rights" are granted by social contract... by other humans. There are no such things as "rights" in the real world. We cannot expect to be given "rights" by the sun, or the ocean, or dolphins. "Rights" are intimately tied to our human morals and human social structures.

In fact, I would do away with the term "rights" altogether, if I could, and replace it with the word "expectations", which I find to be a much more concrete description of the reality of "rights".

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 7:10 AM

ANTHONYT

Freedom is Important because People are Important


Hello,

If you governed your world based on expectations, then you would be capable of virtually anything when confronted with people who might mean you harm.

Expectations give you terrific latitude in your responses.

No, I do not choose to change 'rights' to 'expectations.'

The rights I grant you far exceed the expectations I have from you. You have to do something much more concrete than give me uncomfortable expectations before I will consent to reducing your rights.

--Anthony


NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  

Sunday, May 1, 2011 7:36 AM

SIGNYM

I believe in solving problems, not sharing them.


I find DT's argument much more persuasive.
Quote:

An AI, even if endowed with the same sense, would require nothing to ensure its survival. If we were a threat to it, it might take us out intentionally, but if it weren't aware of us at all, it might do so anyway. [It would view oxygen as a toxin for example- Signy] ...
That said, there is no guarantee that if it were smart enough it would follow that. The results would be wildly unpredictable. Adding something powerful and unpredictable to the world and hoping for good results is not a good idea.

You insist on framing a non-human entity with (potentially) super-human forces into the monkey-sphere.

And on top of that, you assume that my interactions with people are determined by my interactions with non-people, as if I don't make distinctions. Since you refuse to make distinctions, you assume I refuse to make distinctions. If you can't understand me, how the hell do you think you're going to understand AI?

NOTIFY: Y   |  REPLY  |  REPLY WITH QUOTE  |  TOP  |  HOME  
