Will our kids be immortal or extinct?
-
<blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566861" data-time="1458781305">
<div>
<p>Perhaps, but I don't think so. I didn't find some of the author's analysis particularly convincing. e.g.</p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">"So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal."</span></p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">We haven't really established any such thing. We simply don't know what would happen. </span></p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">I didn't find his Turry analogy particularly convincing, because it seems much more like an Artificial Narrow Intelligence that's got out of control than an Artificial Super Intelligence that is vast dimensions more intelligent than us. An ASI that's still trapped in a programmed box we made for it of making little handwritten notes? I'd think it's much more likely that it's going to be able to re-programme itself to do whatever it wants. And that's entirely unpredictable, but eventually presumably will encompass anything and everything that is possible. Seems like a more logical endpoint. </span></p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">Much like a nest of ants, we might get wiped out along the way, but we might not as well. I'd tend to think we would just be a bit irrelevant to whatever purpose the ASI would develop for itself.</span></p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>You are quite correct, we don't know. However, I am unaware of any serious research or advancement that does not involve the AI being amoral... or amoral as far as self-determination goes. So I don't think the author's conclusion unreasonable. In fact, I think it is far more of a stretch to project a moral compass onto an AI. You still seem to be basing your understanding on your own definition of what AI is. According to the research track that is currently progressing, and the end goal of AI research, the author's conclusion is valid. What you seem to be describing is not really AI, but something else entirely, and therefore your conclusion is accurate... what you are describing would be very difficult to imagine being created given where we currently stand.</p>
<p> </p>
<p>Indeed, if you are talking about morality... then there is a strong argument that you are no longer talking about AI, but something else entirely.</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566862" data-time="1458781926">
<div>
<p>You are quite correct, we don't know. However, I am unaware of any serious research or advancement that does not involve the AI being amoral... or amoral as far as self-determination goes. So I don't think the author's conclusion unreasonable. In fact, I think it is far more of a stretch to project a moral compass onto an AI. You still seem to be basing your understanding on your own definition of what AI is. According to the research track that is currently progressing, and the end goal of AI research, the author's conclusion is valid. What you seem to be describing is not really AI, but something else entirely, and therefore your conclusion is accurate... what you are describing would be very difficult to imagine being created given where we currently stand.</p>
<p> </p>
<p>Indeed, if you are talking about morality... then there is a strong argument that you are no longer talking about AI, but something else entirely.</p>
</div>
</blockquote>
<p> </p>
<p>I think any outcome is possible. But, if you assume that one of the first things the super-intelligence would do is to assimilate all of human learning, then that's going to include all sorts of ethical and moral works and ideas as well.</p>
<p> </p>
<p>Who's to say whether it would regard these as relevant or irrelevant? Even if programmed to regard them as relevant, if it's able to move as far up the ladder of intelligence away from us as depicted, then it's likely going to be able to override anything we try to build into it.</p>
<p> </p>
<p>Is it possible to be that intelligent, but not to consider moral questions?</p> -
<blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566866" data-time="1458783223">
<div>
<p> </p>
<p> </p>
<p>Is it possible to be that intelligent, but not to consider moral questions?</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Not only do I think it is possible, in my opinion (and that of most researchers into AI) it is very, VERY likely.</p>
<p> </p>
<p>Well, I guess it could consider moral questions, just not make decisions based on human morality, which would be an abstract term to it. If it gets so far up the intelligence ladder from us... then why would it take a humanistic view of morality? Any more than we look at the ethical code of ants?</p> -
<div> </div>
<div>
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566872" data-time="1458784045">
<div>
<p>Not only do I think it is possible, in my opinion (and that of most researchers into AI) it is very, VERY likely.</p>
<p> </p>
<p>Well, I guess it could consider moral questions, just not make decisions based on human morality, which would be an abstract term to it. If it gets so far up the intelligence ladder from us... then why would it take a humanistic view of morality? Any more than we look at the ethical code of ants?</p>
</div>
</blockquote>
<p> </p>
<p>I'm not sure whether the first is necessarily a good assumption and it will likely make a significant difference in outcomes.</p>
<p> </p>
<p>In the second, I largely agree - one major difference to the ants is that at least the ASI will be able to read our codes of ethics and decide which bits - if any - might be relevant to it. </p>
<p> </p>
<p>On the whole, Henry, Sam and I agree that it would be good to try to interest the ASI in ethics. </p>
</div> -
<blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566861" data-time="1458781305">
<div>
<p>I didn't find some of the author's analysis particularly convincing. e.g.</p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">"So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal."</span></p>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;">We haven't really established any such thing. </span></p>
</div>
</blockquote>
<p> </p>
<p>Agreed. I don't mind that you've constructed a case to make it plausible and, having done so, continue with your line of argument, but let's not say you've established anything other than a presumption.</p> -
<p class="" style="font-family:Arial, Helvetica, sans-serif;font-size:12px;color:rgb(119,119,119);background-color:rgb(255,241,224);"><span>March 24, 2016 3:09 pm</span></p>
<div style="color:rgb(0,0,0);font-family:Georgia, 'Times New Roman', serif;font-size:12px;background-color:rgb(255,241,224);">Microsoft pulls Twitter bot Tay after racist tweets
</div>
<p> </p>
<div>Microsoft has been forced to take down an artificially intelligent “chatbot” it had set loose on Twitter after its interactions with humans led it to start tweeting racist, sexist and xenophobic commentary.</div>
<div> </div>
<div>The chatbot, named Tay, is a computer designed by Microsoft to respond to questions and conversations on Twitter in an attempt to engage the millennials market in the US.</div>
<div> </div>
<div>
<div>However, the tech group’s attempts spectacularly backfired after the chatbot was encouraged to use racist slurs, troll a female games developer and to endorse Hitler and conspiracy theories over the 9/11 terrorist attack. A combination of Twitter users, online pranksters and insufficiently sensitive filters led it to go rogue and forced Microsoft to shut it down within hours of setting it live.</div>
<div> </div>
<div>Tweets reported to be from Tay, which have since been deleted, included: “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got”, and “Ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”. It appeared to endorse genocide, deny the Holocaust and refer to one woman as a “stupid whore”.</div>
<div> </div>
<div>Given that it was designed to learn from the humans it encountered, Tay’s conversion to extreme racism and genocide may not be the best advertisement for the Twitter community in the week the site celebrated its 10th anniversary.</div>
<div> </div>
<div>Tay was developed by Microsoft to experiment with conversational understanding using its artificial intelligence technology. It is aimed at 18 to 24 year olds, according to Microsoft’s online introduction, “through casual and playful conversation”.</div>
<div> </div>
<div>Tay is described as a “fam from the internet that’s got zero chill! The more you talk the smarter Tay gets”, with people encouraged to ask it to play games and tell stories and jokes. Instead, many people took to asking controversial questions that were repeated by Tay.</div>
<div> </div>
<div>The chatbot has since been stood down, signing off with a jaunty: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”</div>
<div> </div>
<div>The controversial tweets have been removed from Tay’s timeline.</div>
<div> </div>
<div>Microsoft said it would make “some adjustments to Tay”.</div>
<div> </div>
<div>“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it,” Microsoft said.</div>
<div> </div>
<div>Tay uses data provided in conversations to search for responses and create simple personalised profiles. Microsoft said responses were generated from relevant public data and by using AI and editorial developed by a staff including improvisational comedians. “That data has been modelled, cleaned and filtered by the team developing Tay,” it said.</div>
<div> </div>
<div>Interactions between companies and the public on Twitter have a habit of spinning out of control, such as with the misuse of corporate hashtags to highlight bad practices by the company.</div>
<div> </div>
<div>Automated feeds have also become a problem in the past. Habitat, the furniture retailer, attempted to use trending topics to boost traffic to its website but inadvertently tweeted about Iranian politics.</div>
<div> </div>
<div>Similarly, the New England Patriots celebrated reaching 1m followers by allowing people to auto-generate images of jerseys featuring their Twitter handles, including very offensive ones.</div>
<div> </div>
<div>Google has had to tweak its search engine after its auto complete feature generated racist suggestions.</div>
<div> </div>
<div>From FT.com</div>
<div> </div>
<div><a data-ipb='nomediaparse' href='http://www.ft.com/intl/cms/s/0/8ba60bc4-f1c0-11e5-aff5-19b4e253664a.html#axzz43qyDSQha'>http://www.ft.com/intl/cms/s/0/8ba60bc4-f1c0-11e5-aff5-19b4e253664a.html#axzz43qyDSQha</a></div>
</div> -
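<p>The failure mode in the Tay story (a bot that learns from whatever users send it, with the filtering step missing or too weak) can be sketched in a few lines. This is a toy illustration only, not Microsoft's actual system; the class and phrases are made up:</p>

```python
import random

class NaiveLearningBot:
    """Toy chatbot that 'learns' by storing every phrase users send it."""

    def __init__(self):
        self.learned = []

    def listen(self, message: str) -> None:
        # No moderation step: everything goes straight into the "model".
        self.learned.append(message)

    def reply(self) -> str:
        if not self.learned:
            return "hello! teach me something."
        # Replies are parroted back from previously learned phrases.
        return random.choice(self.learned)

bot = NaiveLearningBot()
bot.listen("puppies are great")
bot.listen("coordinated troll slogan")
# The bot's output is drawn entirely from user input, good or bad,
# so coordinated input can dominate what it says.
print(bot.reply())
```

<p>The point of the sketch: without a filter between <code>listen</code> and <code>reply</code>, the bot's behaviour is whatever its loudest users make it.</p>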
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566616" data-time="1458673359">
<div>
<p>I genuinely do not know what I think the end result will be, but I have concerns about the sheer broadness of the possible outcomes, and one thing I am 100% convinced of is that the range is incredibly broad. To specify a predicted outcome is fine, if guesswork (like everyone else's), but to set a worst-case scenario is foolish. </p>
<p> </p>
<p>I took issue with your statement that the very very VERY worst case scenario was AI thinking of us as great apes and flying off into space. That is nonsensical and demonstrably wrong. </p>
<p> </p>
<p>Your position that AI will not get out of control and present a threat is perfectly valid, as like everyone else, it is a guess at the unknown.</p>
</div>
</blockquote>
<p>aah of course. so it is okay for someone to speculate an opinion on a precise outcome, but not on a range of outcomes? fuck me, thank christ you've appointed yourself arbiter of the things people are allowed to have a guess at. this is why they don't give the short angry people with the small brains the keys to the city. </p>
<p> </p>
<p>but back to the topic. the interest seems to centre around what a super-intelligent AI would do, how it would regard humanity, and what its existence would mean for us. and the question of morality is an interesting one - if you predict a purely logical, amoral AI then you are effectively saying that the AI is still in the box that we made for it. but why would that be the case? if it is orders of magnitude smarter than us, then surely it can get out of that box? or if it isn't, then, logically speaking, why would it care about anything it wasn't told to? our hollywood concept of logic over emotion is pretty flawed. our logic says 'do something for all humanity / the earth / utilitarianism / whatever' while emotion says 'but i love this individual' - but those are both totally emotive when it comes down to it. in pure logic terms, who gives a fuck if anything happens? without a driving motivating force - will to live, survival of species, curiosity, whatever - it is irrelevant. if the AI's motivations are what we gave it in the first place, then that's fine, but if it is creating its own aims then how can that even be speculated about - what is a logical aim for a super-intelligent computer? survival? learning? why either of those? nihilism? what really matters to an AI? </p>
<p>to me at least, super-intelligent means more than a logical amoral super-computer. because if you don't care about anything, then you have no aims, no motivation - unless you're doing things because you've been told to - and if you're doing things because you've been told to, then you're not so smart.</p> -
<blockquote class="ipsBlockquote" data-author="reprobate" data-cid="567222" data-time="1458907051">
<div>
<p>aah of course. so it is okay for someone to speculate an opinion on a precise outcome, but not on a range of outcomes? fuck me, thank christ you've appointed yourself arbiter of the things people are allowed to have a guess at. this is why they don't give the short angry people with the small brains the keys to the city. </p>
<p> </p>
<p>but back to the topic. the interest seems to centre around what a super-intelligent AI would do, how it would regard humanity, and what its existence would mean for us. and the question of morality is an interesting one - if you predict a purely logical, amoral AI then you are effectively saying that the AI is still in the box that we made for it. but why would that be the case? if it is orders of magnitude smarter than us, then surely it can get out of that box? or if it isn't, then, logically speaking, why would it care about anything it wasn't told to? our hollywood concept of logic over emotion is pretty flawed. our logic says 'do something for all humanity / the earth / utilitarianism / whatever' while emotion says 'but i love this individual' - but those are both totally emotive when it comes down to it. in pure logic terms, who gives a fuck if anything happens? without a driving motivating force - will to live, survival of species, curiosity, whatever - it is irrelevant. if the AI's motivations are what we gave it in the first place, then that's fine, but if it is creating its own aims then how can that even be speculated about - what is a logical aim for a super-intelligent computer? survival? learning? why either of those? nihilism? what really matters to an AI? </p>
<p>to me at least, super-intelligent means more than a logical amoral super-computer. because if you don't care about anything, then you have no aims, no motivation - unless you're doing things because you've been told to - and if you're doing things because you've been told to, then you're not so smart.</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Thanks for giving the opinions of someone who knows fuck all about the topic and has clearly done the research of a 10 year old. You are just asking the same set of questions again and again. You then refuse to educate yourself on the answers that people have come up with.</p>
<p> </p>
<p>golf clap</p>
<p> </p>
<p>Go ahead ask the same questions again. </p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="567245" data-time="1458949279">
<div>
<p>Thanks for giving the opinions of someone who knows fuck all about the topic and has clearly done the research of a 10 year old. You are just asking the same set of questions again and again. You then refuse to educate yourself on the answers that people have come up with.</p>
<p> </p>
<p>golf clap</p>
<p> </p>
<p>Go ahead ask the same questions again. </p>
</div>
</blockquote>
<p>no, thank you - for your always wonderful rebuttal - it's kind of magnificent. if i ask the same questions again will i get another one?</p> -
<p>Do some basic research yourself, the links have been provided. All you have to do is click and read, yet you refuse. Your ignorance is your own fault.</p>
-
Looking for a career as a personality-devoid newsreader? Because that's gone.
-
@antipodean said in Will our kids be immortal or extinct?:
Looking for a career as a personality devoid newsreader? Because that's gone.
You could say that was years ago.