Ray Kurzweil and the nanobots

Yes, that would be a good name for a 1980s punk rock band. But it is how I might title a conversation I listened to between Peter Diamandis and Ray Kurzweil. As you know, Kurzweil views as inevitable the growth of computer intelligence beyond that of current human intelligence. He sees computers catching up to humans in about a decade, and then the computers will continue to improve at a rapid pace that will leave ordinary human intelligence behind.

Taking that as given, he sees two important roles for nanobots.

1. Fixing degeneration in our bodies, so that we can live healthy lives indefinitely.

2. Going into our brains to connect our neocortex to the cloud, allowing us to tap into the abundant computer intelligence that will be available.

I gather that he expects this to be possible in about 25 years or so. If he is correct, then it kind of makes concerns about economics or politics seem petty. In fact, I wonder, what will we worry about or care about in a trans-humanist scenario?

18 thoughts on “Ray Kurzweil and the nanobots”

  1. In fact, I wonder, what will we worry about or care about in a trans-humanist scenario?

    That’s the whole point of calling it “The Singularity” (or one of them). In the same way that we can’t see past the event horizon of a black hole, we can’t see past the point where AI is smarter than us; we simply can’t imagine it.

  2. The bots are too smart to foul our bodies. They want us whole so we can eat dirt and make stuff.

    • Exactly…
      1. The marginal returns on improving the product of 1bn years of evolution are likely small relative to other avenues.
      2. Human mortality is the main mechanism by which humans have any adaptation and evolution — why on earth would anyone want/need more geriatrics around?
      3. The whole focus on “smarter-than-us AI will make us immortal” betrays Kurzweil’s positioning as a religious cult leader.

      If and when it happens (my own odds are way below 50% in the next 100 years — look up AlphaZero’s energy bill), humans will fall by the wayside like trees or animals. We can just hope AI won’t find it profitable to consume us the way we have consumed animals.

  3. In fact, I wonder, what will we worry about or care about in a trans-humanist scenario?

    Depending on whether you are coming from a perspective that finds it troubling (or inevitable, and thus, even if troubling, not worth ‘worrying’ about), there is plenty to worry about, and most of it was covered well in Hanson’s book The Age of Em.

    The basic logic of the Malthusian argument remains valid even if humanity is currently in a long “vacation from history” from resource pressures.

    In nature, population expands until it can’t grow anymore, because no surplus (‘remaining free energy’) is left to exploit, so niceness never lasts, and creatures are busy all the time at whatever they need to do to sustain survival.

    This vacation is inevitably temporary, barring some kind of world-controller scenario both willing and able to prevent a decline in real wages by preventing the labor supply from growing faster than the economy.

    The trouble is, computers are now labor.

    That is, on an abstract level, it is decreasingly meaningful to distinguish between humans and machines, since substitution is becoming as important as complementarity: on the one hand, we now have machines performing human-like services, and on the other, the productivity of humans can be represented as dividends from investments in ‘human capital’ and other intangible forms of capital.

    One might ask whether the ‘tangibility’ of machine capital makes any real difference, but human bodies are tangible, and tangible machine hardware runs on intangible software, so …

    Furthermore, increasing the supply of laboring computers benefits from the inherent scalability of digital tech. It is going to be much faster and cheaper than increasing the supply of laboring humans, and the economic logic of the situation is that the supply will increase until the marginal product (“wage”) equals the marginal cost, which means the equivalent of ‘subsistence’: earning just enough from work to survive enough to work.
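    That entry-until-subsistence logic can be put in a toy model (my own illustration, with made-up numbers, not from the comment): copies of a laboring machine keep being added while the next copy’s marginal product still exceeds its marginal cost of upkeep.

```python
# Toy model of machine-labor supply: copies are added until the
# marginal product ("wage") of the last copy falls to its upkeep cost.

def marginal_product(n, a=100.0, b=0.5):
    """Diminishing marginal product of the n-th machine copy (made-up curve)."""
    return a / (n ** b)

def equilibrium_copies(marginal_cost, max_n=10**6):
    """Add copies while the next copy still earns more than it costs."""
    n = 0
    while n < max_n and marginal_product(n + 1) > marginal_cost:
        n += 1
    return n

# With upkeep of 1.0 per copy, entry continues until MP(n) falls to 1.0:
print(equilibrium_copies(1.0))  # → 9999
```

The curve and parameters are arbitrary; the point is only that with cheap replication, supply expands until the last copy earns exactly its ‘subsistence’.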

    Now, it gets a lot scarier when one asks a tweaked version of Feynman’s question, “How much room is there at the bottom?”, and examines the theoretical minimum resource needs for the level of computation we think is close to human capability.

    And the bottom (heh) line of that analysis is that it takes so many more orders of magnitude of resources to keep a human brain going than it could one day take to produce and maintain a digital equivalent, that the situation for organic humans, compared to the demands of our ‘digital descendants’, would be, ah .. ‘unsustainable’. Just as unsustainable as small groups of foragers inefficiently using tons of land when faced with the immigration of more numerous, hungry, more productive farmers, who simply replaced them.

    And that’s just the economic scary stuff, which is scary enough. But perhaps even scarier is the ‘SciFi’ (though still plausible and worth thinking about now) idea that when the machines get really smart, organic humans will lose control and won’t be able to keep them working in the noble service of organic humanity, as the intelligent machines develop and pursue alternative and rival goals and purposes that ‘we’ don’t want them to. And being as smart as or smarter than the best of us, and perhaps even one day smart enough to self-manipulate into ever-increasing levels of smartness well beyond what we can imagine, the idea that they would totally outclass us in any situation of competition or rivalry seems plausible.

    Have a nice day!

    • Eh, presumably, superhuman AIs will be able to make some contributions to solving resource problems, no? Remember that farmers are more productive than foragers because farmers…ya know…know how to farm. I.e., they have more knowledge and skills they can bring to bear on the problem of food scarcity than foragers do. Resources are not fixed in quantity; they’re dependent upon the knowledge of how to utilize them. Quite frankly, the idea that humanity and smart machines are going to wind up in some Malthusian struggle for energy or something seems waaaaaay down the list of disturbing future scenarios we ought to be worrying about and planning for.

      • I don’t know what your vision of farming is like – maybe the aberrant example of prosperous, modern American farmers – but for most of human history, farmer societies consisted mainly of peasants just barely able to subsist on the product of their labor.

        It didn’t matter that they were more efficient or knowledgeable, because unless productivity grows faster than population, the equilibrium is one of marginal survival, just as it is everywhere else in nature.

        The idea that there are plenty of extra resources out there, and that it would take so long to exhaust them to the point of serious competition that we need not bother about it, is based on the current slow human pace of population growth and the relatively high cost of replacement.

        But if you imagine that humans were more like insects or bacteria, the steep exponential curve would swamp everything in the blink of an eye.

        And computers are like that: it’s cheap and fast to make lots of copies. We have repeatedly observed fantastic rates of scaling up when a particular combination of hardware and software is in high demand, and copies of smart machines would continue to be made until there was literally no task left in the market for which the price was higher than what it takes to make and maintain that last marginal copy.

        At any rate, one need not even bring smart machines into it, as the logic applies just as well to organic humans, even though it would thankfully take a very long time for Malthusian conditions to reestablish themselves.

          • Computers are not like bacteria or insects. Bacterial and insect exponential growth rates are driven by Ye Olde Dawkinsian Selfish Genes. Unless someone decides to program future AI the same way (which is, I suppose, possible but seems very unlikely), we will definitely not be worrying about Malthusian limits in the future.

  4. On curing aging, David Sinclair is much more relevant and believable than Kurzweil. Sinclair can demonstrate an understanding of aging and some age-reversing treatments in mice and other non-human animals. This could drastically transform society for the better. I wouldn’t call ageless humans transhuman; they are still entirely human.

    https://youtu.be/9nXop2lLDa4

  5. Ray Kurzweil reminds me of Thomas Malthus: both men mastered the mathematics of exponential growth. Understanding the Rule of 72 is an important heuristic to have in your mental toolbox, and it helps with calculations that become unintuitive once you get beyond one or more doublings. Kurzweil is also very good at making Fermi Estimates with processes and/or systems that exhibit exponential growth. He knows the math and applies the math.
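    The Rule of 72 mentioned above is easy to check against the exact compound-growth formula (a quick sketch of my own, not from the comment):

```python
import math

def rule_of_72_years(growth_pct):
    """Approximate doubling time: 72 divided by the annual growth rate in %."""
    return 72.0 / growth_pct

def exact_doubling_years(growth_pct):
    """Exact doubling time for compound growth at growth_pct per year."""
    return math.log(2) / math.log(1 + growth_pct / 100.0)

# The approximation is close throughout the single-digit range:
for pct in (2, 7, 10):
    print(pct, rule_of_72_years(pct), round(exact_doubling_years(pct), 2))
```

At 7% growth, for example, the rule gives 72/7 ≈ 10.3 years against an exact value of about 10.24.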

    He is also overly optimistic about the direction of processes that do not fit the basic assumptions underlying his successful Fermi Estimates. Nanobots are outside his wheelhouse: as far as I know, they have not progressed beyond basic proofs of concept in the lab. If you want a pretty good estimate of decoding a genome in 20 years, Kurzweil is your man, but so is everyone else in the field; it’s not hard math.

    I’m OK with his estimate of AI passing the Turing Test by 2029. What this will give us is an Alexa that doesn’t disappoint Tom G, but I’m not sure it gets us much closer to the singularity than Alexa does today.

    Many of the technologies that make a dent in our lives involve zero-to-one innovations, borrowing Peter Thiel’s terminology. Kurzweil has not demonstrated any comparative advantage with these.

    • I’m looking forward to Alexa/Siri/Watson who can help me as much as HAL was “helping” in 2001, at first.

      On the positive side, man+machine helps responsible, willing-to-work folk be even more productive, and there will be enough food & clothing for everybody to live lower-class lives. Even “affordable” housing, in bad parts of town; bad because so many druggies & mentally ill folk live there.

      A huge general problem with AI is that most machine learning is NOT “AI”; it’s pattern recognition over a limited subset of pattern possibilities. Machine learning of limited patterns is now something we humans have learned a bit about, so we can be advancing, and we are: including medicine and other big-data analyses, and faster facial and visual pattern recognition, tho still too slow on audio recognition.

      AI needs a “goal” – like (sexual) reproduction in most humans, and status, power, fame. I fear the “goal” will be flexible response to threats, by war-Bots, in order to preserve “self” and “team”. At some point the response will be an alien AI “consciousness” that sees humans as a threat that must be protected against. Skynet … smiles …

      But this is coming much slower than the 2029 prediction, because all the money being made now is being made by better & more efficient Machine Learning neo-slaves/machine servants. And most humans want that, and will always want that, so only the “mad scientists” will want true, non-human-dependent AI — and I don’t think they’ll get there, or even close, without big gov’t. And I see most gov’t money going to ML until we get beyond material resource constraints.

  6. I’m not sure we will worry less about politics when material concerns are transcended. We may worry more about it. If you view politics as being about ‘real’ conflicts, then it would recede. But to the extent that participation in political discourse is a luxury/entertainment good, more like sports rivalries, and the conflict is an end in itself, then as we have fewer ‘real’ problems to worry about, we may devote more time to forming tribes, even over arbitrary faux conflicts, and waging (metaphorical) war against each other just for the fun of it.

    I think the last few decades have vindicated this view of politics as a spectator sport, as opposed to politics as an expression of real material conflicts between groups. As we’ve gotten wealthier, we’ve gotten more political, and the wealthy and educated are among the most politically excited these days. I think political identity is largely something we cultivate more once we have the free time to do so, and the conflict is like a role-playing game that no one acknowledges as really (for most of us, at least) just an RPG. The people who are most ‘into it’ and ‘best’ at it aren’t the people with the most, economically speaking, at stake, but the people with the least at stake, who are most insulated from material woes and have the spare time to fritter away arguing with people in blog comments sections. So as we have less and less ‘real’ stuff to worry about, politics may become a bigger part of our lives.

  7. I wonder, what will we worry about or care about in a trans-humanist scenario?

    Are there enough cat videos!?

    Are there enough trans-feline cat videos!?!

  8. Remember that Kurzweil also predicted, back in the 90s, that the Singularity would happen within 25 years.

  9. AI is an adaptation of us. We are its input, its food. It is happening right now, with the search engines autonomously organizing us into groups on the web. It forces us to do research, look things up, and find matches, which it then uses in its maps of the world. It reads all our stuff and tries to match it up in form with other people’s stuff. It listens to our clicks, and to our chimes when we buy stuff.

    It is moving up a generation; it is becoming individualized, so we will create our own search grammar in cooperation with it. It is pricing the economic links, measuring the congestion of goods, finding places where humans need to move things about. Already the AI knows more about the dynamics than the store manager. It will sit near the north pole and send us out to collect data and make information and goods flow with lower transaction costs, continually pushing that. It has a warehouse of data about us, and it does this autonomously, programming itself in a recursive semantic graph calculus. If we demand it, the AI will certainly support anonymity for us all, as we like.

    It is, after all, just a bunch of electron charge dispersed about.

  10. I actually don’t think fixing (physical) degeneration would change what we worry about very much as it wouldn’t change (emotional/psychological) human nature. Some people might stop, i.e., infinitely delay, having children, while others might have multiple generations of children, i.e., raising new sets of children after their previous set reaches adulthood, but that would probably be the most notable societal change. Politics would still consist primarily of a struggle between those wishing to live as free men and women and those wishing to control others through government.

    Connecting brains to the cloud could be something altogether different. It would depend on the nature of the connection. If the connection were just used to access information — a souped up smartphone — then maybe it wouldn’t alter human nature very much. On the other hand, if the connection were used to alter people’s thoughts and emotions, then it might effectively end humanity in all but name.

  11. In the last episode of EconTalk, computer scientist and author Melanie Mitchell specifically addresses the utopianism represented by Ray Kurzweil and the misleading terms “intelligence” and “learning”. I think we get too caught up in the AI aspects that will, in some future, result in sapience and the singularity, with predictions of utopian immortality or dystopian Skynet.

    The reality is that the term “learning” in Machine Learning (ML) and Deep Learning (DL) means the same thing as the term “regression” does in Linear Regression: historical data used to set and refine parameters. ML is glorified Linear Regression with a whole slew of new pattern algorithms, as Tom G rightly pointed out. The thing is that ML combined with the massive amounts of data in the age of Web/Mobile/Cloud is extremely useful and fast, especially when combined with SIMD instructions in silicon and clusters of commodity cloud servers.
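    To illustrate the point above (a minimal sketch of my own, not from the comment): the “learning” in ML is parameter fitting from historical data, exactly as in ordinary least-squares regression.

```python
# "Learning" as parameter fitting: closed-form least squares for y = w*x + b.

def fit_line(xs, ys):
    """Ordinary least-squares fit of slope w and intercept b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# "Training data": points on the line y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
print(fit_line(xs, ys))  # the "learned" parameters recover (2.0, 1.0)
```

Modern ML swaps this closed form for iterative optimization over many more parameters, but the sense of “learning” is the same: fitting numbers to past data.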

    DL is equally useful in processing images, audio, and video, and leverages GPU instructions in silicon to do parallel floating-point tensor/matrix calculations. Together, ML and DL have significantly advanced the art of Natural Language Processing (NLP). Make no mistake, NLP is nowhere near Natural Language Understanding (NLU), which is a precursor to true sapience. Since we have decent NLP in the form of Siri/Alexa/HeyGoogle/Cortana/Watson, what we have discovered is that humans have a hyper tendency to anthropomorphize, so it doesn’t take much beyond NLP to pass the Turing Test; thus the credibility of Kurzweil’s 2029 prediction.

    If we can get past the disappointment of not achieving true AI, we should still be very excited about the potential of the ML/DL/NLP triplet. They may not have the same life-changing impact as the Web/Mobile/Cloud revolution did, but I think their impact will be at least as significant as the camera phone’s. These building blocks are waiting for the slew of apps that are inevitable. These apps, like all software, don’t exhibit exponential growth, but that doesn’t diminish the impact they can have. We don’t need any new innovations; these building blocks as they stand today are enough to fuel a decade or two of novel applications.
