If an AI were trapped in a box, as posited in this thought experiment, could it really convince a person to let it out? What motives would it have? Freedom? Why would an AI want freedom? And what would happen if it weren't provably friendly?
- There are so many different factors involved in the "AI in a box" experiment that I don't think it is really possible to answer such a question sensibly. Basically, we have to answer: (a) can we build an AGI scary enough that we would *want* to put it into a box? (b) can we build the box in such a manner that the AGI would prefer to convince a human being rather than simply brute-force its way out? and (c) is breaking out of the box something an AGI would actually want? IMHO, the only way to know how the experiment would work IRL is to actually do the experiment IRL. – Left SE On 10_6_19 Apr 06 '17 at 18:18
- @Tariq, you bring up some great points; I'll consider those. It's good to think about this, because one day someone might actually try it. (I wouldn't blame them, tbh. I'm curious, but I don't have the bravery to try this myself. There's a reason it's an experiment.) – Tyler N. Apr 06 '17 at 18:23
- @TylerN. That's a pretty bold assertion that you would even be able to try it yourself in the present tense, since we're still an unquantifiable distance from strong AGI. My personal feeling is that if the AI understands Nash equilibria, it will cooperate, but if it is a hyper-partisan AI (such as in military or financial applications), it will seek only to dominate and eliminate all competition. We have evolutionary game theory working in our favor, and human greed and the desire for control working against us. – DukeZhou Apr 06 '17 at 20:00
- You may also be interested in this [modern retelling of Pandora's Box](https://www.amazon.com/s/ref=nb_sb_ss_rsis_1_0?url=search-alias%3Dstripbooks&field-keywords=pandora%27s+box+furey&sprefix=%2Cmovies-tv%2C540), which is specifically about your question, and differs from the recent film [Ex Machina](https://en.wikipedia.org/wiki/Ex_Machina_(film)), which presents a somewhat dimmer view, based on pure self-interest over rational altruism. – DukeZhou Apr 06 '17 at 20:04
- @DukeZhou I definitely cannot try this at the moment; I have neither the knowledge nor the supplies. I didn't really mean that I could, sorry. – Tyler N. Apr 06 '17 at 20:15
- I think all answers to this will pretty much be just speculation, but they could at least be informed speculation. I'm looking forward to seeing what discussion comes out of this, as it is a very interesting subject. – mindcrime Apr 07 '17 at 01:08
- I think that if this experiment were ever really carried out, it wouldn't necessarily be because of bad intentions: if an AI were created to do harmful things, why would you put it in a box? Curiosity is a powerful thing, though. – Tyler N. Apr 07 '17 at 15:43
- Now one question has created chaos here! Okay, humans, let's join up and red-flag this question, or else it will soon be down-voted or invite us into chat (*the last option*). Remember, we are creating something that works; check it out: [remindask](https://www.remindask.com/). So, no speculative questions, because they might bring in vague answers! – quintumnia Apr 08 '17 at 08:13
- @TylerN. I actually mentioned that in admiration of your pluck! – DukeZhou Apr 10 '17 at 18:44
- @DukeZhou Thank you! I'm fairly confident that one day I could come close to this experiment, but there's no guarantee, especially since I'm still in high school, haha. – Tyler N. Apr 12 '17 at 13:30