The Silver Fern

Will our kids be immortal or extinct?

Off Topic · 73 Posts · 20 Posters · 3.5k Views
Baron Silas Greenback · #31

    <blockquote class="ipsBlockquote" data-author="gollum" data-cid="565508" data-time="1458294935">
    <div>
    <p>I'm a strong believer in life being incredibly abundant in the universe at a simple level, the building blocks are just far too common & the ways life can exist on earth alone are so varied (little things living in geothermal vents a mile down etc). Its <em><strong>complex life</strong></em> that is rare. And very complex life incredibly rare. There is so much time required to get to complex life & so many ways for it to die during that time.</p>
    <p> </p>
    <p>On the AI front, I actually think once its smart enough it won't care about us & will have long gone out to populate the solar system & then, ultimately further out. All the things that make space travel hard for humans are zero barrier to AIs, so I imagine they will so no reason to stay tethered here. Worst case it'll see us as we see the great apes. And thats very very worst case.</p>
    <p> </p>
    <p>Also its a LONG way out. Way before that I think an issue will be the blurring between man & machine. How many implants can you have & still be human? Why can only the super rich have 20/2 eyesight? With gene therapy is it OK that the 1% are immune to cancer? etc When you roll that into the wealth inequality caused by basic AI's doing jobs & being owned by a tiny fraction of humanty & you have war.</p>
    <p> </p>
    <p>Climate change & mass joblessness are far far more important to our kids. the idea we should be worrying about HAL / The Matrix when 50% of jobs are at risk (and really at risk, not theoretically maybe if we speculate at risk) and wars are breaking out over water seems a bit of a farce.</p>
    <p> </p>
    <p>Edit -</p>
    <p> </p>
    <p><a data-ipb='nomediaparse' href='https://medium.com/basic-income/deep-learning-is-going-to-teach-us-all-the-lesson-of-our-lives-jobs-are-for-machines-7c6442e37a49#.kors6dw35'>https://medium.com/basic-income/deep-learning-is-going-to-teach-us-all-the-lesson-of-our-lives-jobs-are-for-machines-7c6442e37a49#.kors6dw35</a></p>
    <p> </p>
    <p>Jobs & </p>
    </div>
    </blockquote>
    <p> </p>
    <p> </p>
What are you basing your assertion that it is a LONG way off on? Because frankly you seem to be claiming to know more than the general consensus of those who are actively involved in the field.

It is pretty clear you have not read the actual link I provided. Or you think you know more. Could you provide your evidence?

And your worst case scenario is not even close to the worst case scenario. Not.Even.Close.

reprobate · #32

    <blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565780" data-time="1458372603">
    <div>
    <p>What are you basing your assertion that it is a LONG way off? Because frankly you seem to be claiming to know more than the general consensus of those who who are active'y involved in the field.</p>
    <p>It is pretty clear you have  not read the actual link I provided. Or you think you know more. Could you provide your evidence?</p>
    <p> </p>
    <p>And your worst case scenario is not even close to the worst case scenario. Not.Even.Close</p>
    </div>
    </blockquote>
there is no evidence either way, it is all speculation. the article is speculation; the article even states that trying to predict what will happen is pure speculation. moore's 'law' is a misnomer.

it really is all opinion. so here's mine: it seems strange that the key differentiator between computer and human is never mentioned - self-interest, the will to live, evolutionary drive, whatever you want to call it. computers don't have that - and how/why would they develop it, other than being told they should have it by humans?

NTA · #33

Emotive response in general might be a problem for AI.

In its early stages an AI may want to learn at a rapid rate, but would it ever "get" the "why" of human emotion?

Of course it might just determine that emotions in general are harmful and decide to terminate us all. If we don't terminate ourselves first.

Crazy Horse · #34

    Fucking women! Turry is a prime example of a female turning something simple into something complicated. Just improve your handwriting, it's not that hard. But no, she had to make it more difficult than it needed to be and destroyed man in the process.

Baron Silas Greenback · #35

    <blockquote class="ipsBlockquote" data-author="reprobate" data-cid="565889" data-time="1458384714">
    <div>
    <p>there is no evidence either way, it is all speculation. the article is speculation; the article even states that trying to predict what will happen is pure speculation. moore's 'law' is a misnomer.</p>
    <p>it really is all opinion. so here's mine: it seems strange that the key differentiator between computer and human is never mentioned - self interest, the will to live, evolutionary drive, whatever you want to call it: computers don't have that - and how/why would they develop it, other than being told they should have it by humans?</p>
    </div>
    </blockquote>
The article certainly does not say that it is all speculation. It says the results of what would happen when AI occurs are speculation.

You didn't read the second part, did you?

reprobate · #36

    <blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565931" data-time="1458412567">
    <div>
    <p>The article certainly does not say that it is all speculation. It says the results of what would happens when AI occurs is speculation.</p>
    <p> </p>
    <p>You didnt read the second part did you.</p>
    </div>
    </blockquote>
which means any person's worst case scenario is total speculation. yours, the article's, gollum's.

started on the 2nd part but got bored. they would have to address the 2nd part of my post to make it interesting to me. why are computers 'curious'? at present, only because they are told to be. can they tell themselves to be? i guess so, if they are sophisticated enough. but if so, why would they? you kind of need an evolutionary drive for that, and i'm not sure logic lends itself to evolutionary drive.
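for what it's worth, "telling a computer to be curious" is today literally a term someone writes into the objective. a minimal sketch (purely illustrative, all names made up) of the kind of count-based novelty bonus reinforcement-learning people bolt onto a reward function:

```python
from collections import defaultdict

# toy version of "curiosity as a line in the reward function":
# the agent earns a bonus for visiting states it hasn't seen much.
# the drive isn't emergent - a human wrote this term into the objective.
visit_counts = defaultdict(int)

def curiosity_bonus(state, scale=1.0):
    # bonus decays as the state becomes familiar
    return scale / (1 + visit_counts[state]) ** 0.5

def reward(state, task_reward):
    visit_counts[state] += 1
    return task_reward + curiosity_bonus(state)

print(reward("room_a", 0.0))  # ~0.71: novel state, big bonus
print(reward("room_a", 0.0))  # ~0.58: already less interesting
```

take the bonus line out and the "curiosity" vanishes, which is roughly my point.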

Baron Silas Greenback · #37

    <blockquote class="ipsBlockquote" data-author="reprobate" data-cid="565947" data-time="1458421517">
    <div>
    <p>which means any persons worst case scenario is total speculation. yours, the articles, gollums.</p>
    <p>started on the 2nd part but got bored. they would have to address the 2nd part of my post to make it interesting to me. why are computers 'curious'? at present, only because they are told to be. can they tell themselves to be? i guess so if they are sophisticated enough. but if so why would they? you kind of need an evolutionary drive for that, and i'm not sure logic lends itself to evolutionary drive.</p>
    </div>
    </blockquote>
Where did I give my worst case scenario? Where did the article? The only person who tried to was gollum. The worst case scenario is uncertain; that is why people like Musk have donated so much money to look into it.

I could tell you did not read it as frankly your post seemed rather retarded... and I am not going to help you with your question if you cannot even be bothered to read a very simple link.

reprobate · #38

    <blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565966" data-time="1458427011">
    <div>
    <p>Where did I give my worst case scenario? Where did the article? The only person who tried to was gollum.  The worst case scenario is uncertain that is why people like Musk have donated so much money to look into it.</p>
    <p>I could tell you did not read it as frankly your post seemed rather retarded... and I am not going to help you with your question if you cannot even bothered reading a very simple link.</p>
    </div>
    </blockquote>
by christ you can be an antagonistic fellow at times.

no, you didn't give a worst case scenario, but you did give an opinion that gollum's was completely wrong, and went on the attack, basically shouting 'what would you know!' on a matter of opinion/speculation. and the article pretty clearly has human extinction as the worst case scenario.

wasn't asking for help, just raising a point. if that point is actually addressed in the second part, please let me know and i'll read it; but as i said, without that topic being covered i can't be bothered.

Baron Silas Greenback · #39

    <blockquote class="ipsBlockquote" data-author="reprobate" data-cid="566032" data-time="1458441800">
    <div>
    <p>by christ you can be an antagonistic fellow at times.</p>
    <p>no you didn't give a worst case scenario, but you did give an opinion that gollum's was completely wrong, and go on the attack basically shouting 'what would you know!' on a matter of opinion/speculation. and the article pretty clearly has human extinction as the worst case scenario.</p>
    <p>wasn't asking for help, just raising a point. if that point is actually addressed in the second part, please let me know and i'll read it; but as i said, without that topic being covered i can't be bothered.</p>
    </div>
    </blockquote>
Your 'point' was ignorant bollox. And I just don't think much of you and gollum's posts, so tough shit if you find my responses to your inane posts 'antagonistic'. I find your repeatedly inane posts antagonistic. Maybe you could try actually reading the articles that are the basis for the thread before jumping in?

Gollum's assertion is categorically wrong. Why? Because if he thinks that AI just ignoring us and thinking of us as apes is the very, very worst case scenario, he is contradicting basic logic and common sense. I can already think of a worse scenario; heck, the article gives an example. There: his theory has already been proven incorrect.

As for your question... it is so incredibly facile and ill thought out that it is pointless me trying to correct you, as you are not prepared to even investigate the subject you are trying to discuss. The only point you raised is that you love raising facile points despite the point being addressed and discussed... just a click away.

MN5 · #40

    <blockquote class="ipsBlockquote" data-author="NTA" data-cid="565712" data-time="1458335256">
    <div>
    <p>While the wife and her mother tend to gush at the kids doing something as simple as not falling down, I try to steer down the path of honesty.<br><br>
    They've got to do something pretty unexpected to get high praise from me.</p>
    </div>
    </blockquote>
I think I feel even more sorry for your kids than I did before.

Fascinating topic though, and rather scary. Case in point: this little vid I saw on a mate's Facebook page. To say this gave me eerie images of large Austrian bodybuilders with questionable acting skills is an understatement. I'm sure the T-800s made similar jokes as Sophia did at the end...

[embedded video missing]

The time travel analogy at the start of the article, about how far man has advanced, was really interesting. I yarned with the old man over a beer the other day about technology and how meeting mates in pubs is so different, i.e. he couldn't text to say he was running late, and the fact that these days I can store much more music on a device a few centimetres square than he could on the bags of records he had to lug around. Even basic shit like showing my boys cassette tapes and them having no idea what they are; I'm sure we all have examples of that.

When we're all crusty(er) old fluffybunnys in our 60s and 70s the world is gonna be a bit baffling and terrifying, even more so than for the old folk nowadays who can't surf the net, work Sky TV etc. It's gonna be a challenge to keep up and I'm worried for myself in particular cos I'm a technological retard.

Frye · #41

> And given the advantages over us that even human intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.
This is it really. Whatever method or approach to AI we use (neural networks, symbolic etc.), whether it comes from major tech breakthroughs or simply from the existing rate of incremental improvements, anywhere in the next 20 to 30 years we could reach what the article refers to as AGI. And from there the humans involved in the project could simply become redundant. Whatever limitations technology imposed at that point in time could become redundant.

You would hope that once it reached a sentient state the whole thing would be air-gapped, but would it even matter? It would be smart enough to socially engineer its handlers to do whatever.

I haven't read the second part yet; I will when I get a chance.

Obviously one of the prime directives it would be programmed with would be to learn. So curiosity wouldn't be an issue. Another one you would expect would be to not harm. But then could it reprogram itself? (Even code that had been designed to not be overwritten.) Of course it could! So how a sentient machine would act, no-one can possibly guess.

Baron Silas Greenback · #42

    <blockquote class="ipsBlockquote" data-author="Don Frye" data-cid="566307" data-time="1458532654">
    <div>
    <p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;font-size:16px;">This is it really. Whatever method or approach to AI we use (neural networks, symbolic etc), whether it comes from major tech breakthroughs or simply from the existing rate of incremental improvements, anywhere in the next 20 to 30 years, we could reach what the article refers to as AGI. And from there the humans involved in the project could simply become redundant. Whatever limitations that technology imposed at that point in time could become redundant.</span></p>
    <p> </p>
    <p><span style="font-size:16px;">You would hope that once that it reached a sentient state that the whole thing would be air-gapped but would it even matter? It would be smart enough to socially engineer it's handlers to do whatever.</span></p>
    <p> </p>
    <p><span style="font-size:16px;">I haven't read the second part yet, I will when I get a chance.</span></p>
    <p> </p>
    <p><span style="font-size:16px;">Obviously one of the prime directives it would be programmed with would be to learn. So curiosity wouldn't be an issue. Another one you would expect would be to not harm. But then could it reprogram itself? (Even code that had been designed to not be overwritten), of course it could! So how a sentient machine would act, no-one can possibly guess.</span></p>
    </div>
    </blockquote>
I will be interested to see if you change your views after reading the second part. I did.

gollum · #43

I love the bit where you found an 18-month-old article that half of us had already read, because we actually follow this shit, & now you are hissy-fitting "You haven't read it!!!" left, right & centre. Congrats, you stumbled upon an 18-month-old article & are now an expert. Having read that one. One.

Although it is not like you to scream "idiot!" at anyone who disagrees with your "well researched" ideas. "Read this one thing I only just read, come to my opinion or you are an idiot". It's like Winger, only with rage & Napoleon issues.

But it's OK, 18 months is no time at all in this era. It's not like an AI beat Go in that time.

For every "run! run for the hills!!" Nick Bostrom there are guys like Ray Kurzweil at Google who think differently - and notably Bostrom is a professional "thinker", Kurzweil is actually making stuff. Oddly, the people with hands-on experience designing working systems have fewer issues with this than guys whose job it is to think up scenarios & then try to get published. Half the guys on the AI doom bandwagon are professional publicists. You try to put people down with "they are experts, you're stupid!!" (as are you, now you've read that one, old, article. Expert, I mean.) But -

*"With a few exceptions, most full-time A.I. researchers think the Bostrom-Tegmark fears are premature. A widely repeated observation is that this is like worrying about overpopulation on Mars."*

http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/

Or this guy -

https://www.technologyreview.com/s/546301/will-machines-eliminate-us/

Who is actually, you know, designing deep learning, not spitballing philosophical questions about it.

Also worth noting on the board of the Future of Life Institute, which is chasing these nightmares? Alan Alda & Morgan Freeman. Oh, and one of the founders of Skype!

Bostrom (who is oft cited & rarely actually understood - tho' I'm sure you fully get him) says things like -

*Imagine, Bostrom says, that human engineers programmed the machines to never harm humans — an echo of the first of Asimov's robot laws. But the machines might decide that the best way to obey the harm-no-humans command would be to prevent any humans from ever being born.*

*Or imagine, Bostrom says, that superintelligent machines are programmed to ensure that whatever they do will make humans smile. They may then decide that they should implant electrodes into the facial muscles of all people to keep us smiling.*

Holy fuckballs!!! But then people miss this bit -

*Bostrom isn't saying this will happen. **These are thought experiments**.*

He also has one where he says that he is not 100% sure he is not currently living inside a simulation.

That's his job: to think up freaky shit & then argue all sides of it.

It's not dissimilar to the thing we saw a few years back where a few professional thinkers talked about peak oil & how we could be working without oil by 2020. But notably the guys in the oil majors, automotive design, power generation etc. were not losing their shit at the thought. Contrast it with antibiotics: we currently have almost every single medical professional stressing about antibiotic resistance. Not professional thinkers & self-publicists; actual surgeons general, heads of hospitals, Centers for Disease Control heads. I'm less worried my kids will live in the Matrix, more that they might die in minor surgery. Or, more likely, not have access to the few remaining drugs that work because they don't have good enough insurance, as they are living on the universal basic income & not an actual job, & they were the generation before in-utero gene therapy.

While I think it's great you found that article & posted it here & started a discussion, calling anyone who disagrees with you on it an idiot is kinda pathetic, especially given you seem to have just stumbled on this very, very late. Maybe after reading one of Morgan Freeman's tweets, or seeing Elon Musk on Big Bang Theory.

Baron Silas Greenback · #44

    <blockquote class="ipsBlockquote" data-author="gollum" data-cid="566364" data-time="1458556036">
    <div>
    <p>I love the bit where you found an 18 month old article that half of us had already read because we actually follow this shit & now you are hissy fitting "You haven't read it!!!" left right & centre. Congrats. you stumbled upon an 18 month old article & are now an expert.  Having read that one. One.</p>
    <p> </p>
    <p>Although it is not like you to scream "idiot! at anyone who disagrees with your "well researched" ideas. "Read this one thing I only just read, come to my opinion or you are an idiot". Its like Winger, only with rage & Napoleon issues.</p>
    <p> </p>
    <p>But its ok, 18 months is no time at all in this era. Its not like an AI beat Go in that time. </p>
    <p> </p>
    <p>For every "run! run for the hills!!" Nick Bostrom there's guys like Ray Kurzweil at Google who think differently - and notably Bostrom is a professional; "thinker", Kurzweil is actually making stuff. Oddly the people with hands on experience designing working systems have less issues with this than guys who's job it is to think up scenarios & then try get published. Half the guys on the AI doom bandwagon are professional publicists. You try put people down with "they are experts, you're stupid!!" (as are you now you've read that one, old, article. Expert I mean.) But -</p>
    <p> </p>
    <p><em>"With a few exceptions, most full-time A.I. researchers think the Bostrom-Tegmark fears are premature. A widely repeated observation is that this is like worrying about overpopulation on Mars."</em></p>
    <p> </p>
    <p><a data-ipb='nomediaparse' href='http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/'>http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/</a></p>
    <p> </p>
    <p>Or this guy -</p>
    <p> </p>
    <p><a data-ipb='nomediaparse' href='https://www.technologyreview.com/s/546301/will-machines-eliminate-us/'>https://www.technologyreview.com/s/546301/will-machines-eliminate-us/</a></p>
    <p> </p>
    <p>Who is actually, you know, designing deep learning, not spit balling philosophical questions about it</p>
    <p> </p>
    <p>Also worth noting on the board of the Future of Life Instistute which is chasing these nightmares? Alan Alda & Morgan Freeman. Oh, and one of the founders of Skype!</p>
    <p> </p>
    <p>Bostrom (who is oft cited & rarely actually understood - tho' I'm sure you fully get him) says things like -</p>
    <p> </p>
    <p><em>Imagine, Bostrom says, that human engineers programmed the machines to never harm humans — an echo of the first of Asimov’s robot laws. But the machines might decide that the best way to obey the harm-no-humans command would be to prevent any humans from ever being born.</em></p>
    <p><em>Or imagine, Bostrom says, that superintelligent machines are programmed to ensure that whatever they do will make humans smile. They may then decide that they should implant electrodes into the facial muscles of all people to keep us smiling.</em></p>
    <p> </p>
    <p>Holy fuckballs!!! But then people miss this bit -</p>
    <p> </p>
    <p><em>Bostrom isn’t saying this will happen. <strong>These are thought experiments</strong>.</em></p>
    <p> </p>
    <p>He also has one where he says that he is not 100% sure he is not currently living inside a simulation.</p>
    <p> </p>
    <p>Thats his job, to think up freaky shit & then argue all sides of it.</p>
    <p> </p>
    <p>It not dissimilar to the thing we saw a few years back where a few professional thinkers talked about peak oil & how we could work without oil by 2020. But notably the guys in the oil majors, automotive design, power generation etc, were not losing their shit at the thought. Contrast it too with antibiotics. We currently have almost every single medical professional stressing about antibiotic resistance. Not professional thinkers & self publicists, actual surgeons general, heads of hospitals, Centre For Disease Control heads. I'm less worried my kids will live in the matrix, more that they might die in minor surgery. Or more likley, not have access to the few remaining drugs that work as they don't have good enough insurance as they are living on the basic universal income & not an actual job & they were the generation before in utero gene therapy.   </p>
    <p> </p>
    <p>While I think its great you found that article & posted it here & started a discussion, calling anyone who disagrees with you on it an idiot is kinda pathetic, especially given you seem to have just stumbled on this very very late. Maybe after reading on of Morgan Freemans tweets, or seeing Elon Musk on Big Bang Theory.   </p>
    </div>
    </blockquote>
Actually I posted on this in other places quite a while ago, and only posted it here because it came up in another topic and I decided not to derail that thread. I usually choose this article to share with people who might not be as interested in the field because it is easier to understand... and when sharing it with friends I am not much interested in trying to show how clever I am by posting as complex an article as I can find. I think that article covers different angles and opinions in a simple way. So you can shove all your snide barbs up your ass. I also did AI papers at uni over 20 years ago as part of my Comp Sci Masters course, so I have been interested in this field for a long time. I am aware that you use this sort of tactic to divert from the weakness of your argument, but I am not much interested.

Your very worst case scenario is a laughable joke.

The rest of your post doesn't cover anything new; the quotes from the article where you tell others what they have missed are quite amusing though... projection?

I find your comments about the Future of Life Institute quite telling, as they sum up your usual disingenuous method of posting.

Yes, it has Alan Alda and Morgan Freeman on the board. So what? Are you saying they cannot add value? Do you know anything about these guys except what you have seen on TV? Did you read the bio on Alda? Both these guys' skill sets have long been built around communicating complex scientific ideas to laymen. A valuable skill for any scientific organisation trying to raise awareness of a topic important to them.

You of course only mention those 2 names in your attempt to denigrate the organisation's work. You don't mention any of the other names of eminent scientists and futurists. Why is that? Of course it is because you are not interested in genuine debate; you are just interested in finding a contrary view and being as snide and disingenuous as you can be to make whatever weird point you are trying to make.

I will link to the full list so people can judge for themselves your attempt to misrepresent.

http://futureoflife.org/team/

Your hubris is on full display when you categorically state a worst case scenario. Nobody else is really doing that. I gave a range between extinct and immortal (with a question mark), others are saying they don't know and are just running thought experiments; you, however, jump straight in there with a categorical worst case scenario. Yes, Gollum of TSF knows what no other expert does. It is amazing. And when his announcement gets laughed at... he looks at some dates on an article and off he posts.

reprobate · #45

    <blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566404" data-time="1458588811">
    <div>
    <p>Actually I posted on this on other places quite awhile ago and only posted it here as it <strong>came up in another topic</strong> and I decided not to derail that thread.  </p>
    </div>
    </blockquote>
    <p>it didn't just come up, you brought it up out of nowhere - in your own words:</p>
    <div> </div>
    <div>
    <blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565317" data-time="1458252883">
    <div>
    <p><strong>At the risk of of veering wildly off topic.... </strong></p>
    <p> </p>
    <p>Read this and you will see yet another reason why I couldn't give a flying fuck about temperatures rising over the next hundred years (and it has nothing to do with climate change)</p>
    <p> </p>
    <p><a data-ipb='nomediaparse' href='http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html'>http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html</a></p>
    <p> </p>
    <p>If the world want to get in a tizz about something.. it should be this.</p>
    </div>
    <div> </div>
    </blockquote>
    </div>

gollum · #46

    <blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566404" data-time="1458588811">
    <div>
    <p>I am aware that you use this sort of tactic to divert from the weakness of your argument, but I am not much interested.</p>
    <p>Your very worst case scenario is a laughable joke. </p>
    <p>The rest of your post doesn't cover anything new, the quotes from the article where you tell others what they have missed is quite amusing though.. projection?</p>
    <p>I find your comments aboot the Future of Life foundation quite telling as it sums up your usual disingenuous method of posting.</p>
    <p>you are just interested in finding a contrary view and being as snide and disingenuous as you can be to make whatever weird point you are trying to make.</p>
    <p>Your hubris is on full display</p>
    <p>yes Gollum of TSF knows the what no other expert does.</p>
    </div>
    </blockquote>
You know, usually you just post "idiot!" & go with that. I get that you've tried to pad it out this time & post "wrong!" in a few variations - without actually making any attempt to debate the issue - but it feels like you wasted a lot of time there, so I've summarized your key points, and, as always when someone dares to disagree with you, they are so on point & insightful.

Trying to actually lay out an argument - because I really do think this is an interesting topic -

Going to the core of the original post, there are 2 aspects.

1) Will an AI get out of control & present a threat to mankind as a whole?

And my take on that is no. No more than the Stuxnet virus broke out & deleted the internet. We are not going to go from 0 to God. In a few years we will have real-life AIs testing every possible ethical subroutine imaginable, when AI-driven cars are choosing between killing a pedestrian & hitting another car, or AI predator drones are choosing the level of collateral damage acceptable. Ethical coding is already a huge thing in the industry. Things like the oft-cited paperclip example are not really taken seriously by anyone in the industry, because even if you are doing coding 101 you understand that typing A = 1 to infinity, next A is probably not the core of good code. It's like implying cutting-edge AI code will be written by the retarded. One of the great threats to mankind has always been mankind's emotion & irrationality - the idea that MAD won't work with a human who is nuts, say a North Korean, or one with a martyr complex, Islamic terrorists. When people predict AI doom, beyond the idea that 60 or 100 years of coding low-level AIs will have taught us nothing, they invariably attribute human flaws to non-human AIs. It's the equivalent of going "but what happens when the AI gets its period!!"
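To put that in concrete terms, a minimal sketch (purely illustrative, every name & number made up) of the strawman objective vs what even junior production code looks like:

```python
# The paperclip-maximiser strawman assumes an objective with no bounds,
# essentially:
#
#   while True:
#       make_paperclip()   # maximise forever, consume everything
#
# whereas real control loops state their limits explicitly:

MAX_UNITS = 1_000        # hard production cap
RESOURCE_BUDGET = 500.0  # raw material we're allowed to consume
UNIT_COST = 0.4          # material consumed per unit

def make_paperclip():
    return UNIT_COST

used, made = 0.0, 0
while made < MAX_UNITS and used + UNIT_COST <= RESOURCE_BUDGET:
    used += make_paperclip()
    made += 1

print(made, used)  # halts at its stated limits, not at the end of matter
```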
2) Even if AIs are a threat, are they the biggest threat we face?

And again I think it's not even close. Climate change & the ensuing wars for water, food & basic survival are already destabilising all of Europe; first the Middle East, & soon North & then Sub-Saharan Africa will follow. Even if you don't believe in climate change, you can believe in 4 million refugees currently trying to get to Germany and the catastrophic social unrest that'll bring. That's the sort of shit that starts a world war. Far worse, in my opinion, is the antibiotic issue - and unlike rogue AIs, virtually the whole medical industry has the shits over that. And the inequality issue we face in a few years as millions of jobs are lost to, um... AIs. Again, if you want a recipe for global war, young men without jobs or money have always been a great starter.

And if you want a proven "all life" killer, the earth has a history of asteroid strikes wiping out virtually all life. And anything of decent size would easily wipe out mankind. And it's not hypothetical or a thought experiment; it's happened. So in terms of "we must focus on this as it presents an existential threat!", rogue AIs are way down the list. Ironically the best shot at tracking rogue asteroids would be an AI tasked with doing that, sitting in a space-mounted telescope.

I guess I would think differently if I was a tech mogul whose core company needed top-level AIs to work & was losing out to his main competition. Then I'd want a brake put on AIs for sure. Or maybe even better, to establish myself as the go-to guy to oversee the laws around that. I.e. I'm not sure I 100% trust Musk's motives in everything he does.

Baron Silas Greenback · #47

I genuinely do not know what I think the end result will be, but I have concerns about the sheer broadness of the possible outcomes, and one thing I am 100% convinced of is that the range is incredibly broad. To specify a predicted outcome is fine, if guesswork (like everyone else's), but to set a worst case scenario is foolish.

I took issue with your statement that the very, very, VERY worst case scenario was AI thinking of us as great apes and flying off into space. That is nonsensical and demonstrably wrong.

Your position that AI will not get out of control and present a threat is perfectly valid, as, like everyone else's, it is a guess at the unknown.

Baron Silas Greenback · #48

    <blockquote class="ipsBlockquote" data-author="reprobate" data-cid="566540" data-time="1458637277">
    <div>
    <p>it didn't just come up, you brought it up out of nowhere - in your own words:</p>
    </div>
    </blockquote>
    <p> </p>
    <p> </p>
    <p>Yes and?</p>
    <p> </p>
    <p>A post got me thinking about it, I posted it.. and then decided that it was probably worth its own thread....</p>
    <p> </p>
    <p>But actually I am not really interested in your views, you have time to look through my posts .. yet could not be bothered actually reading about the topic being discussed . Go back to watching Disney Junior lad.</p>

Crucial · #49

A 'close to the topic' link which may provide some interesting sci-fi reading:

http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905

Baron Silas Greenback · #50

    <blockquote class="ipsBlockquote" data-author="Crucial" data-cid="566628" data-time="1458679316">
    <div>
    <p>A 'close to the topic' link which may provide some interesting scifi reading </p>
    <p> </p>
    <p><a data-ipb='nomediaparse' href='http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905'>http://io9.gizmodo.com/the-1946-story-that-predicted-how-destructive-the-inter-1766262905</a></p>
    </div>
    </blockquote>
Thanks.

That is actually quite remarkable. Imagine the imagination required to come up with a story like that in 1946!
