Will our kids be immortal or extinct?
-
<blockquote class="ipsBlockquote" data-author="gollum" data-cid="565508" data-time="1458294935">
<div>
<p>I'm a strong believer in life being incredibly abundant in the universe at a simple level; the building blocks are just far too common & the ways life can exist on Earth alone are so varied (little things living in geothermal vents a mile down etc). It's <em><strong>complex life</strong></em> that is rare. And very complex life incredibly rare. There is so much time required to get to complex life & so many ways for it to die during that time.</p>
<p> </p>
<p>On the AI front, I actually think once it's smart enough it won't care about us & will have long gone out to populate the solar system & then, ultimately, further out. All the things that make space travel hard for humans are zero barrier to AIs, so I imagine they will see no reason to stay tethered here. Worst case it'll see us as we see the great apes. And that's the very, very worst case.</p>
<p> </p>
<p>Also it's a LONG way out. Way before that I think an issue will be the blurring between man & machine. How many implants can you have & still be human? Why can only the super rich have 20/2 eyesight? With gene therapy, is it OK that the 1% are immune to cancer? etc. Roll that into the wealth inequality caused by basic AIs doing jobs & being owned by a tiny fraction of humanity & you have war.</p>
<p> </p>
<p>Climate change & mass joblessness are far, far more important to our kids. The idea we should be worrying about HAL / The Matrix when 50% of jobs are at risk (and really at risk, not theoretically-maybe-if-we-speculate at risk) and wars are breaking out over water seems a bit of a farce.</p>
<p> </p>
<p>Edit -</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='https://medium.com/basic-income/deep-learning-is-going-to-teach-us-all-the-lesson-of-our-lives-jobs-are-for-machines-7c6442e37a49#.kors6dw35'>https://medium.com/basic-income/deep-learning-is-going-to-teach-us-all-the-lesson-of-our-lives-jobs-are-for-machines-7c6442e37a49#.kors6dw35</a></p>
<p> </p>
<p>Jobs & AI</p>
</div>
</blockquote>
<p>Great post. In terms of the man/machine boundary, it's already blurred. Elon Musk said recently that we are already becoming cyborgs, pointing out how long people can go without their phones (basically nil). Having no phone is like phantom limb syndrome.<br><br>
Case in point: I cracked the screen on my phone, but have it insured. The insurance place said I'd have to POST it to them and they'd repair it and send it back. That would take about five biz days. Not going to happen. So I just bought a new phone and gave the cracked one to my missus (I had about two months left on contract, so rolling into a new phone was inexpensive). </p> -
<p>An interesting thing about the end of Moore's law is that it's probably a good thing for AI research. Cognition doesn't come from one or two threads of processing in any known life form. It seems to come from massively parallel processing; at least that is what was being taught when I studied cognitive science. Not being able to just do stuff faster or brute-force problems is forcing scientists to look into solving problems in more interesting ways.</p>
-
<blockquote class="ipsBlockquote" data-author="mooshld" data-cid="565690" data-time="1458319134">
<div>
<p>An interesting thing about the end of Moore's law is that it's probably a good thing for AI research. Cognition doesn't come from one or two threads of processing in any known life form. It seems to come from massively parallel processing; at least that is what was being taught when I studied cognitive science. Not being able to just do stuff faster or brute-force problems is forcing scientists to look into solving problems in more interesting ways.</p>
</div>
</blockquote>
<p> </p>
<p>That's the interesting thing re AlphaGo. Draughts, noughts & crosses & even chess (sort of) could all be brute-forced. Go is impossible to brute-force, hence most thought the program had zero chance of winning.</p>
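<p>A rough back-of-envelope sketch (the branching factors and game lengths below are commonly cited approximations, not exact figures) shows why Go's game tree defies exhaustive search:</p>

```python
# Rough game-tree size estimate: branching_factor ** typical_game_length.
# These figures are approximate averages, not exact values.
games = {
    "noughts & crosses": (4, 9),    # ~4 moves available on average, 9 plies max
    "chess": (35, 80),              # ~35 legal moves, games around 80 plies
    "go": (250, 150),               # ~250 legal moves, games around 150 plies
}

for name, (branching, depth) in games.items():
    size = branching ** depth
    # Order of magnitude = number of decimal digits minus one.
    print(f"{name}: roughly 10^{len(str(size)) - 1} positions")
```

<p>Even checking a billion positions per second would never dent a search space of roughly 10^359, which is why AlphaGo combined learned position evaluation with Monte Carlo tree search rather than brute force.</p>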
<p> <br>
</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565360" data-time="1458272122">
<p>I liked that article. It is a hard balance as a parent, though; it is natural to tell your kids how awesome you think they are.. as I do. But at the same time you need to show them that hard work gets them places, not parental approval.</p>
</blockquote>
<p> </p>
<p>While the wife and her mother tend to gush at the kids doing something as simple as not falling down, I try to steer down the path of honesty.<br><br>
They've got to do something pretty unexpected to get high praise from me.</p> -
<blockquote class="ipsBlockquote" data-author="mooshld" data-cid="565690" data-time="1458319134"><p>An interesting thing about the end of Moore's law is that it's probably a good thing for AI research. Cognition doesn't come from one or two threads of processing in any known life form. It seems to come from massively parallel processing; at least that is what was being taught when I studied cognitive science. Not being able to just do stuff faster or brute-force problems is forcing scientists to look into solving problems in more interesting ways.</p></blockquote>
<br>
There is some truth there, but it is much cheaper to buy massively parallel GPUs when the cost per transistor is exponentially decreasing. -
<blockquote class="ipsBlockquote" data-author="gollum" data-cid="565508" data-time="1458294935">
<div>
<p>I'm a strong believer in life being incredibly abundant in the universe at a simple level; the building blocks are just far too common & the ways life can exist on Earth alone are so varied (little things living in geothermal vents a mile down etc). It's <em><strong>complex life</strong></em> that is rare. And very complex life incredibly rare. There is so much time required to get to complex life & so many ways for it to die during that time.</p>
<p> </p>
<p>On the AI front, I actually think once it's smart enough it won't care about us & will have long gone out to populate the solar system & then, ultimately, further out. All the things that make space travel hard for humans are zero barrier to AIs, so I imagine they will see no reason to stay tethered here. Worst case it'll see us as we see the great apes. And that's the very, very worst case.</p>
<p> </p>
<p>Also it's a LONG way out. Way before that I think an issue will be the blurring between man & machine. How many implants can you have & still be human? Why can only the super rich have 20/2 eyesight? With gene therapy, is it OK that the 1% are immune to cancer? etc. Roll that into the wealth inequality caused by basic AIs doing jobs & being owned by a tiny fraction of humanity & you have war.</p>
<p> </p>
<p>Climate change & mass joblessness are far, far more important to our kids. The idea we should be worrying about HAL / The Matrix when 50% of jobs are at risk (and really at risk, not theoretically-maybe-if-we-speculate at risk) and wars are breaking out over water seems a bit of a farce.</p>
<p> </p>
<p>Edit -</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='https://medium.com/basic-income/deep-learning-is-going-to-teach-us-all-the-lesson-of-our-lives-jobs-are-for-machines-7c6442e37a49#.kors6dw35'>https://medium.com/basic-income/deep-learning-is-going-to-teach-us-all-the-lesson-of-our-lives-jobs-are-for-machines-7c6442e37a49#.kors6dw35</a></p>
<p> </p>
<p>Jobs & AI</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>What are you basing your assertion that it is a LONG way off on? Because frankly you seem to be claiming to know more than the general consensus of those who are actively involved in the field.</p>
<p>It is pretty clear you have not read the actual link I provided. Or you think you know more. Could you provide your evidence?</p>
<p> </p>
<p>And your worst case scenario is not even close to the worst case scenario. Not.Even.Close</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565780" data-time="1458372603">
<div>
<p>What are you basing your assertion that it is a LONG way off on? Because frankly you seem to be claiming to know more than the general consensus of those who are actively involved in the field.</p>
<p>It is pretty clear you have not read the actual link I provided. Or you think you know more. Could you provide your evidence?</p>
<p> </p>
<p>And your worst case scenario is not even close to the worst case scenario. Not.Even.Close</p>
</div>
</blockquote>
<p>there is no evidence either way, it is all speculation. the article is speculation; the article even states that trying to predict what will happen is pure speculation. moore's 'law' is a misnomer.</p>
<p>it really is all opinion. so here's mine: it seems strange that the key differentiator between computer and human is never mentioned - self interest, the will to live, evolutionary drive, whatever you want to call it: computers don't have that - and how/why would they develop it, other than being told they should have it by humans?</p> -
<p>Emotive response in general might be a problem for AI.</p>
<p> </p>
<p>In its early stages an AI may want to learn at a rapid rate, but would it ever "get" the "why" of human emotion?</p>
<p> </p>
<p>Of course it might just determine that emotions in general are harmful and decide to terminate us all. If we don't terminate ourselves first.</p> -
Fucking women! Turry is a prime example of a female turning something simple into something complicated. Just improve your handwriting, it's not that hard. But no, she had to make it more difficult than it needed to be and destroyed man in the process.
-
<blockquote class="ipsBlockquote" data-author="reprobate" data-cid="565889" data-time="1458384714">
<div>
<p>there is no evidence either way, it is all speculation. the article is speculation; the article even states that trying to predict what will happen is pure speculation. moore's 'law' is a misnomer.</p>
<p>it really is all opinion. so here's mine: it seems strange that the key differentiator between computer and human is never mentioned - self interest, the will to live, evolutionary drive, whatever you want to call it: computers don't have that - and how/why would they develop it, other than being told they should have it by humans?</p>
</div>
</blockquote>
<p> </p>
<p>The article certainly does not say that it is all speculation. It says the results of what would happen when AI occurs are speculation.</p>
<p> </p>
<p>You didn't read the second part, did you?</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565931" data-time="1458412567">
<div>
<p>The article certainly does not say that it is all speculation. It says the results of what would happen when AI occurs are speculation.</p>
<p> </p>
<p>You didn't read the second part, did you?</p>
</div>
</blockquote>
<p>which means any persons worst case scenario is total speculation. yours, the articles, gollums.</p>
<p>started on the 2nd part but got bored. they would have to address the 2nd part of my post to make it interesting to me. why are computers 'curious'? at present, only because they are told to be. can they tell themselves to be? i guess so if they are sophisticated enough. but if so why would they? you kind of need an evolutionary drive for that, and i'm not sure logic lends itself to evolutionary drive.</p> -
<blockquote class="ipsBlockquote" data-author="reprobate" data-cid="565947" data-time="1458421517">
<div>
<p>which means any persons worst case scenario is total speculation. yours, the articles, gollums.</p>
<p>started on the 2nd part but got bored. they would have to address the 2nd part of my post to make it interesting to me. why are computers 'curious'? at present, only because they are told to be. can they tell themselves to be? i guess so if they are sophisticated enough. but if so why would they? you kind of need an evolutionary drive for that, and i'm not sure logic lends itself to evolutionary drive.</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Where did I give my worst case scenario? Where did the article? The only person who tried to was gollum. The worst case scenario is uncertain; that is why people like Musk have donated so much money to look into it.</p>
<p>I could tell you did not read it as frankly your post seemed rather retarded... and I am not going to help you with your question if you cannot even be bothered reading a very simple link.</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565966" data-time="1458427011">
<div>
<p>Where did I give my worst case scenario? Where did the article? The only person who tried to was gollum. The worst case scenario is uncertain; that is why people like Musk have donated so much money to look into it.</p>
<p>I could tell you did not read it as frankly your post seemed rather retarded... and I am not going to help you with your question if you cannot even be bothered reading a very simple link.</p>
</div>
</blockquote>
<p>by christ you can be an antagonistic fellow at times.</p>
<p>no you didn't give a worst case scenario, but you did give an opinion that gollum's was completely wrong, and went on the attack basically shouting 'what would you know!' on a matter of opinion/speculation. and the article pretty clearly has human extinction as the worst case scenario.</p>
<p>wasn't asking for help, just raising a point. if that point is actually addressed in the second part, please let me know and i'll read it; but as i said, without that topic being covered i can't be bothered.</p> -
<blockquote class="ipsBlockquote" data-author="reprobate" data-cid="566032" data-time="1458441800">
<div>
<p>by christ you can be an antagonistic fellow at times.</p>
<p>no you didn't give a worst case scenario, but you did give an opinion that gollum's was completely wrong, and went on the attack basically shouting 'what would you know!' on a matter of opinion/speculation. and the article pretty clearly has human extinction as the worst case scenario.</p>
<p>wasn't asking for help, just raising a point. if that point is actually addressed in the second part, please let me know and i'll read it; but as i said, without that topic being covered i can't be bothered.</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Your 'point' was ignorant bollox. And I just don't think much of you and gollum's posts, so tough shit if you find my responses to your inane posts 'antagonistic'. I find your repeatedly inane posts antagonistic. Maybe you could try actually reading the article that is the basis for the thread before jumping in?</p>
<p>Gollum's assertion is categorically wrong. Why? Because if he thinks that AI just ignoring us and thinking of us as apes is the very, very worst case scenario, he is contradicting basic logic and common sense. I can already think of a worse scenario; heck, the article gives an example. There... his theory has already been proven incorrect.</p>
<p>As for your question... it is so incredibly facile and ill thought out that it is pointless me trying to correct you, as you are not prepared to even investigate the subject you are trying to discuss. The only point you raised is that you love raising facile points despite the point being addressed and discussed... just a click away.</p> -
<blockquote class="ipsBlockquote" data-author="NTA" data-cid="565712" data-time="1458335256">
<div>
<p>While the wife and her mother tend to gush at the kids doing something as simple as not falling down, I try to steer down the path of honesty.<br><br>
They've got to do something pretty unexpected to get high praise from me.</p>
</div>
</blockquote>
<p> </p>
<p>I think I feel even more sorry for your kids than I did before.</p>
<p> </p>
<p>Fascinating topic though, and rather scary. Case in point: this little vid I saw on a mate's Facebook page. To say this gave me eerie images of large Austrian bodybuilders with questionable acting skills is an understatement. I'm sure the T-800s made similar jokes as Sophia did at the end......</p>
<p> </p>
<p> </p>
<p>The time travel analogy about how far man has advanced, at the start of the article, was really interesting. I yarned with the old man over a beer the other day about technology and how meeting mates in pubs is so different, ie he couldn't text to say he was running late, and the fact that these days I can store much more music on a device a few centimetres square than he could on the bags of records he had to lug around. Even basic shit like showing my boys cassette tapes and them having no idea what they are; I'm sure we all have examples of that.</p>
<p> </p>
<p>When we're all crusty ( er ) old fluffybunnys in our 60s and 70s, the world is gonna be a bit baffling and terrifying, even more so than for the old folk nowadays who can't surf the net, work Sky TV etc. It's gonna be a challenge to keep up and I'm worried for myself in particular cos I'm a technological retard.</p> -
<blockquote class="ipsBlockquote"><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;font-size:16px;">And given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.</span></blockquote>
<p> </p>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;font-size:16px;">This is it really. Whatever method or approach to AI we use (neural networks, symbolic etc.), whether it comes from major tech breakthroughs or simply from the existing rate of incremental improvements, anywhere in the next 20 to 30 years we could reach what the article refers to as AGI. And from there the humans involved in the project could simply become redundant. Whatever limitations the technology imposed at that point in time could fall away.</span></p>
<p> </p>
<p><span style="font-size:16px;">You would hope that once it reached a sentient state the whole thing would be air-gapped, but would it even matter? It would be smart enough to socially engineer its handlers to do whatever.</span></p>
<p> </p>
<p><span style="font-size:16px;">I haven't read the second part yet, I will when I get a chance.</span></p>
<p> </p>
<p><span style="font-size:16px;">Obviously one of the prime directives it would be programmed with would be to learn, so curiosity wouldn't be an issue. Another one you would expect would be to not harm. But then could it reprogram itself (even code that had been designed not to be overwritten)? Of course it could! So how a sentient machine would act, no-one can possibly guess.</span></p> -
<blockquote class="ipsBlockquote" data-author="Don Frye" data-cid="566307" data-time="1458532654">
<div>
<p><span style="color:rgb(51,51,51);font-family:'Noto Sans', Helvetica, Arial, sans-serif;font-size:16px;">This is it really. Whatever method or approach to AI we use (neural networks, symbolic etc.), whether it comes from major tech breakthroughs or simply from the existing rate of incremental improvements, anywhere in the next 20 to 30 years we could reach what the article refers to as AGI. And from there the humans involved in the project could simply become redundant. Whatever limitations the technology imposed at that point in time could fall away.</span></p>
<p> </p>
<p><span style="font-size:16px;">You would hope that once it reached a sentient state the whole thing would be air-gapped, but would it even matter? It would be smart enough to socially engineer its handlers to do whatever.</span></p>
<p> </p>
<p><span style="font-size:16px;">I haven't read the second part yet, I will when I get a chance.</span></p>
<p> </p>
<p><span style="font-size:16px;">Obviously one of the prime directives it would be programmed with would be to learn, so curiosity wouldn't be an issue. Another one you would expect would be to not harm. But then could it reprogram itself (even code that had been designed not to be overwritten)? Of course it could! So how a sentient machine would act, no-one can possibly guess.</span></p>
</div>
</blockquote>
<p>I will be interested to see if you change your views after reading the second part. I did.</p> -
<p>I love the bit where you found an 18-month-old article that half of us had already read because we actually follow this shit & now you are hissy-fitting "You haven't read it!!!" left, right & centre. Congrats, you stumbled upon an 18-month-old article & are now an expert. Having read that one. One.</p>
<p> </p>
<p>Although it is not unlike you to scream "idiot!" at anyone who disagrees with your "well researched" ideas. "Read this one thing I only just read, come to my opinion or you are an idiot". It's like Winger, only with rage & Napoleon issues.</p>
<p> </p>
<p>But it's ok, 18 months is no time at all in this era. It's not like an AI beat Go in that time.</p>
<p> </p>
<p>For every "run! run for the hills!!" Nick Bostrom there are guys like Ray Kurzweil at Google who think differently - and notably Bostrom is a professional "thinker", while Kurzweil is actually making stuff. Oddly, the people with hands-on experience designing working systems have fewer issues with this than the guys whose job it is to think up scenarios & then try to get published. Half the guys on the AI doom bandwagon are professional publicists. You try to put people down with "they are experts, you're stupid!!" (as are you, now you've read that one, old, article. Expert, I mean.) But -</p>
<p> </p>
<p><em>"With a few exceptions, most full-time A.I. researchers think the Bostrom-Tegmark fears are premature. A widely repeated observation is that this is like worrying about overpopulation on Mars."</em></p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/'>http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/</a></p>
<p> </p>
<p>Or this guy -</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='https://www.technologyreview.com/s/546301/will-machines-eliminate-us/'>https://www.technologyreview.com/s/546301/will-machines-eliminate-us/</a></p>
<p> </p>
<p>Who is actually, you know, designing deep learning, not spitballing philosophical questions about it.</p>
<p> </p>
<p>Also worth noting on the board of the Future of Life Institute, which is chasing these nightmares? Alan Alda & Morgan Freeman. Oh, and one of the founders of Skype!</p>
<p> </p>
<p>Bostrom (who is oft cited & rarely actually understood - tho' I'm sure you fully get him) says things like -</p>
<p> </p>
<p><em>Imagine, Bostrom says, that human engineers programmed the machines to never harm humans — an echo of the first of Asimov’s robot laws. But the machines might decide that the best way to obey the harm-no-humans command would be to prevent any humans from ever being born.</em></p>
<p><em>Or imagine, Bostrom says, that superintelligent machines are programmed to ensure that whatever they do will make humans smile. They may then decide that they should implant electrodes into the facial muscles of all people to keep us smiling.</em></p>
<p> </p>
<p>Holy fuckballs!!! But then people miss this bit -</p>
<p> </p>
<p><em>Bostrom isn’t saying this will happen. <strong>These are thought experiments</strong>.</em></p>
<p> </p>
<p>He also has one where he says that he is not 100% sure he is not currently living inside a simulation.</p>
<p> </p>
<p>That's his job, to think up freaky shit & then argue all sides of it.</p>
<p> </p>
<p>It's not dissimilar to the thing we saw a few years back where a few professional thinkers talked about peak oil & how we could work without oil by 2020. But notably the guys in the oil majors, automotive design, power generation etc. were not losing their shit at the thought. Contrast it too with antibiotics. We currently have almost every single medical professional stressing about antibiotic resistance. Not professional thinkers & self-publicists, but actual surgeons general, heads of hospitals, Centers for Disease Control heads. I'm less worried my kids will live in the Matrix, more that they might die in minor surgery. Or, more likely, not have access to the few remaining drugs that work because they don't have good enough insurance, as they are living on the basic universal income & not an actual job, & they were the generation before in utero gene therapy.</p>
<p> </p>
<p>While I think it's great you found that article & posted it here & started a discussion, calling anyone who disagrees with you on it an idiot is kinda pathetic, especially given you seem to have just stumbled on this very, very late. Maybe after reading one of Morgan Freeman's tweets, or seeing Elon Musk on Big Bang Theory.</p> -
<blockquote class="ipsBlockquote" data-author="gollum" data-cid="566364" data-time="1458556036">
<div>
<p>I love the bit where you found an 18-month-old article that half of us had already read because we actually follow this shit & now you are hissy-fitting "You haven't read it!!!" left, right & centre. Congrats, you stumbled upon an 18-month-old article & are now an expert. Having read that one. One.</p>
<p> </p>
<p>Although it is not unlike you to scream "idiot!" at anyone who disagrees with your "well researched" ideas. "Read this one thing I only just read, come to my opinion or you are an idiot". It's like Winger, only with rage & Napoleon issues.</p>
<p> </p>
<p>But it's ok, 18 months is no time at all in this era. It's not like an AI beat Go in that time.</p>
<p> </p>
<p>For every "run! run for the hills!!" Nick Bostrom there are guys like Ray Kurzweil at Google who think differently - and notably Bostrom is a professional "thinker", while Kurzweil is actually making stuff. Oddly, the people with hands-on experience designing working systems have fewer issues with this than the guys whose job it is to think up scenarios & then try to get published. Half the guys on the AI doom bandwagon are professional publicists. You try to put people down with "they are experts, you're stupid!!" (as are you, now you've read that one, old, article. Expert, I mean.) But -</p>
<p> </p>
<p><em>"With a few exceptions, most full-time A.I. researchers think the Bostrom-Tegmark fears are premature. A widely repeated observation is that this is like worrying about overpopulation on Mars."</em></p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/'>http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/</a></p>
<p> </p>
<p>Or this guy -</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='https://www.technologyreview.com/s/546301/will-machines-eliminate-us/'>https://www.technologyreview.com/s/546301/will-machines-eliminate-us/</a></p>
<p> </p>
<p>Who is actually, you know, designing deep learning, not spitballing philosophical questions about it.</p>
<p> </p>
<p>Also worth noting on the board of the Future of Life Institute, which is chasing these nightmares? Alan Alda & Morgan Freeman. Oh, and one of the founders of Skype!</p>
<p> </p>
<p>Bostrom (who is oft cited & rarely actually understood - tho' I'm sure you fully get him) says things like -</p>
<p> </p>
<p><em>Imagine, Bostrom says, that human engineers programmed the machines to never harm humans — an echo of the first of Asimov’s robot laws. But the machines might decide that the best way to obey the harm-no-humans command would be to prevent any humans from ever being born.</em></p>
<p><em>Or imagine, Bostrom says, that superintelligent machines are programmed to ensure that whatever they do will make humans smile. They may then decide that they should implant electrodes into the facial muscles of all people to keep us smiling.</em></p>
<p> </p>
<p>Holy fuckballs!!! But then people miss this bit -</p>
<p> </p>
<p><em>Bostrom isn’t saying this will happen. <strong>These are thought experiments</strong>.</em></p>
<p> </p>
<p>He also has one where he says that he is not 100% sure he is not currently living inside a simulation.</p>
<p> </p>
<p>That's his job, to think up freaky shit & then argue all sides of it.</p>
<p> </p>
<p>It's not dissimilar to the thing we saw a few years back where a few professional thinkers talked about peak oil & how we could work without oil by 2020. But notably the guys in the oil majors, automotive design, power generation etc. were not losing their shit at the thought. Contrast it too with antibiotics. We currently have almost every single medical professional stressing about antibiotic resistance. Not professional thinkers & self-publicists, but actual surgeons general, heads of hospitals, Centers for Disease Control heads. I'm less worried my kids will live in the Matrix, more that they might die in minor surgery. Or, more likely, not have access to the few remaining drugs that work because they don't have good enough insurance, as they are living on the basic universal income & not an actual job, & they were the generation before in utero gene therapy.</p>
<p> </p>
<p>While I think it's great you found that article & posted it here & started a discussion, calling anyone who disagrees with you on it an idiot is kinda pathetic, especially given you seem to have just stumbled on this very, very late. Maybe after reading one of Morgan Freeman's tweets, or seeing Elon Musk on Big Bang Theory.</p>
</div>
</blockquote>
<p> </p>
<p> </p>
<p>Actually I posted on this in other places quite a while ago and only posted it here as it came up in another topic and I decided not to derail that thread. I usually choose this article to share with people who might not be as interested in the field because it is easier to understand... and when sharing it with friends I am not much interested in trying to show how clever I am by posting as complex an article as I can discover. I think that article covers different angles and opinions in a simple way. So you can shove all your snide barbs up your ass. I also did AI papers at uni over 20 years ago as part of my Comp Sci Masters course, so I have been interested in this field for a long time. I am aware that you use this sort of tactic to divert from the weakness of your argument, but I am not much interested.</p>
<p> </p>
<p>Your very worst case scenario is a laughable joke. </p>
<p> </p>
<p>The rest of your post doesn't cover anything new; the quotes from the article where you tell others what they have missed are quite amusing though... projection?</p>
<p> </p>
<p>I find your comments about the Future of Life Institute quite telling, as it sums up your usual disingenuous method of posting.</p>
<p> </p>
<p>Yes, it has Alan Alda and Morgan Freeman on the board. So what? Are you saying they cannot add value? Do you know anything about these guys except what you have seen on TV? Did you read the bio on Alda? Both these guys' skill sets have long been about communicating complex scientific theories to laymen. A valuable skill for any scientific organisation trying to raise awareness of a topic important to them.</p>
<p> </p>
<p>You of course only mention those two names in your attempt to denigrate the organisation's work. You don't mention any of the other names of eminent scientists and futurists. Why is that? Of course it is because you are not interested in genuine debate; you are just interested in finding a contrary view and being as snide and disingenuous as you can be to make whatever weird point you are trying to make.</p>
<p> </p>
<p>I will link to the full list so people can judge for themselves your attempt to misrepresent.</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://futureoflife.org/team/'>http://futureoflife.org/team/</a></p>
<p> </p>
<p>Your hubris is on full display when you categorically state a worst case scenario. Nobody else is really doing that. I gave a range between extinct and immortal (with a question mark), others are saying they don't know and are just thought-experimenting; you, however, jump straight in there with a categorical worst case scenario. Yes, Gollum of TSF knows what no other expert does. It is amazing. And when his announcement gets laughed at... he looks at some dates on an article and off he posts.</p> -
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566404" data-time="1458588811">
<div>
<p>Actually I posted on this in other places quite a while ago and only posted it here as it <strong>came up in another topic</strong> and I decided not to derail that thread.</p>
</div>
</blockquote>
<p>it didn't just come up, you brought it up out of nowhere - in your own words:</p>
<div> </div>
<div>
<blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="565317" data-time="1458252883">
<div>
<p><strong>At the risk of veering wildly off topic.... </strong></p>
<p> </p>
<p>Read this and you will see yet another reason why I couldn't give a flying fuck about temperatures rising over the next hundred years (and it has nothing to do with climate change)</p>
<p> </p>
<p><a data-ipb='nomediaparse' href='http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html'>http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html</a></p>
<p> </p>
<p>If the world want to get in a tizz about something.. it should be this.</p>
</div>
<div> </div>
</blockquote>
</div>