have you heard the Good News?

People get on me for being a curmudgeon about techno-utopianism. And I don’t doubt that there are lots of great benefits to technology that I enjoy every day. But so many people who write about technology make it so, so hard.

To wit, “The Dawn of the Age of Artificial Intelligence.”

The advances we’ve seen in the past few years—cars that drive themselves, useful humanoid robots, speech recognition and synthesis systems, 3D printers, Jeopardy!-champion computers—are not the crowning achievements of the computer era. They’re the warm-up acts.

Humanoid robots, speech recognition and synthesis, and 3D printers are not examples of artificial intelligence by any definition. Self-driving cars and Watson (the Jeopardy-playing computer) are not artificial intelligence under the original intent of the term, which was to approximate human cognitive function, on the sensible theory that human cognitive function has been massively successful at changing the world.

How can we be so sure? Because the exponential, digital, and recombinant powers of the second machine age have made it possible for humanity to create two of the most important one-time events in our history: the emergence of real, useful artificial intelligence (AI) and the connection of most of the people on the planet via a common digital network.

I am not sure what the words “exponential,” “digital,” or “recombinant” explain here. Processing power has increased exponentially over its history, but its growth has actually slowed as we’ve run into thermal limits, and this is likely to slow growth in mobile chips just as it already has with CPUs. Alternatives like quantum computers appear years away from practical implementation. Current technologies are digital, but then, so is the abacus. Recombinant, I suppose, refers to the dream that computers will become self-teaching, despite the lack of a clear explanation of how this process will work beyond the very specific and limited ways in which computers are already self-teaching. As for the greater claim that AI will lead the way to the next great stage of human evolution, well, I have aggregated links so that you can read people who know much better than me that AI has been one of the great scientific failures of the last century.

Trivial uses of AI include recognizing our friends’ faces in photos and recommending products. More substantive ones include automatically driving cars on the road, guiding robots in warehouses, and better matching jobs and job seekers. But these remarkable advances pale against the life-changing potential of artificial intelligence.

Again, I’m not sure why some of these are considered AI at all. Facial recognition, automated product recommendations a la Amazon, and matching jobs and job seekers are all impressive but based upon the Bayesian probability techniques that computers have been using successfully for decades. Self-driving cars can be considered a form of very limited artificial intelligence, but you’re talking about a domain which is remarkably rule-bound and which involves very little need for the kind of “learning” that is typically imagined in these discussions.
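
Just to make the point concrete, here is a minimal sketch, in plain Python, of the kind of naive Bayes arithmetic that has powered product recommendations and matching for decades. The products and purchase history are invented for illustration; the technique, not the data, is the point.

```python
# A minimal naive Bayes sketch of the decades-old probabilistic technique
# behind "product recommendations." The purchase data below is invented.

history = [
    ({"laptop", "mouse"}, True),      # basket, and whether the customer
    ({"laptop", "keyboard"}, True),   # also bought the recommended item
    ({"blender", "toaster"}, False),
    ({"laptop", "blender"}, True),
    ({"toaster", "kettle"}, False),
]

def p_also_buys(basket, history, smoothing=1.0):
    """Estimate P(buys the recommended item | basket) with naive Bayes."""
    pos = [items for items, bought in history if bought]
    neg = [items for items, bought in history if not bought]
    p_pos = len(pos) / len(history)   # class priors
    p_neg = len(neg) / len(history)
    for item in basket:               # "naive" independence assumption
        pos_count = sum(item in items for items in pos)
        neg_count = sum(item in items for items in neg)
        p_pos *= (pos_count + smoothing) / (len(pos) + 2 * smoothing)
        p_neg *= (neg_count + smoothing) / (len(neg) + 2 * smoothing)
    return p_pos / (p_pos + p_neg)

print(p_also_buys({"laptop"}, history))   # ~0.83: recommend away
print(p_also_buys({"toaster"}, history))  # ~0.29: probably don't bother
```

Useful, yes. But counting and multiplying conditional frequencies is not a machine that thinks.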

To take just one recent example, innovators at the Israeli company OrCam have combined a small but powerful computer, digital sensors, and excellent algorithms to give key aspects of sight to the visually impaired (a population numbering more than twenty million in the United States alone). A user of the OrCam system, which was introduced in 2013, clips onto her glasses a combination of a tiny digital camera and speaker that works by conducting sound waves through the bones of the head. If she points her finger at a source of text such as a billboard, package of food, or newspaper article, the computer immediately analyzes the images the camera sends to it, then reads the text to her via the speaker.

This does sound somewhat impressive – if it works as advertised and is vetted by impartial and external experts who give it a rigorous and skeptical review. I would recommend that anyone remember examples like Siri or Google Translate, both of which have attracted breathless praise from the media but eventual, grudging admission from the people who actually use them day-to-day that they are remarkably limited tools, even while acknowledging how impressive they are technologically.
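
For what it’s worth, the basic pipeline the OrCam passage describes (point a camera at some text, run optical character recognition on the frame, speak the result) can be roughed out with off-the-shelf open source parts. This is emphatically not OrCam’s implementation, just a sketch that assumes the Tesseract OCR engine (via pytesseract), Pillow, and the pyttsx3 text-to-speech library are installed.

```python
# A rough sketch of the camera-to-speech pipeline described above.
# Not OrCam's implementation: it assumes the open source Tesseract engine
# (wrapped by pytesseract), Pillow for image loading, and pyttsx3 for speech.
from PIL import Image
import pytesseract
import pyttsx3

def read_aloud(image_path: str) -> str:
    """Run OCR on a photo and speak whatever text the engine finds."""
    image = Image.open(image_path)
    text = pytesseract.image_to_string(image)   # image -> (hopefully) text
    if text.strip():
        engine = pyttsx3.init()                 # text -> synthesized speech
        engine.say(text)
        engine.runAndWait()
    return text

# read_aloud("cereal_box.jpg")  # hypothetical photo of a food package
```

The gluing-together is trivial; the hard part, and the part that deserves skeptical outside review, is whether the recognition step actually works on billboards and cereal boxes in bad light.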

Reading text ‘in the wild’—in a variety of fonts, sizes, surfaces, and lighting conditions—has historically been yet another area where humans outpaced even the most advanced hardware and software. OrCam and similar innovations show that this is no longer the case, and that here again technology is racing ahead.

They don’t actually show that, because you haven’t shown it yet.

Digital technologies are also restoring hearing to the deaf via cochlear implants and will probably bring sight back to the fully blind; the FDA recently approved a first-generation retinal implant. AI’s benefits extend even to quadriplegics, since wheelchairs can now be controlled by thoughts. Considered objectively, these advances are something close to miracles—and they’re still in their infancy.

Not to be repetitive, but cochlear implants are not AI, and saying “digital technologies… will probably bring sight back to the fully blind” is pure assertion and entirely lacking in specificity. Controlling wheelchairs with thoughts, even if we accept the unproven premise that they are currently viable technologies for most quadriplegics, is in fact the opposite of AI.

There is no better resource for improving the world and bettering the state of humanity than the world’s humans—all 7.1 billion of us.

This is a statement without content.

Our good ideas and innovations will address the challenges that arise, improve the quality of our lives, allow us to live more lightly on the planet, and help us take better care of one another.

Of course, “we” are also responsible for the challenges that have arisen, the inequality of our lives, our heavy footprint on this planet, and our own failure to take care of one another.

It is a remarkable and unmistakable fact that, with the exception of climate change, virtually all environmental, social, and individual indicators of health have improved over time, even as human population has increased.

I tip my hat to this deployment of “unmistakable” for its sheer audacity. First, the notion that virtually all environmental indicators have improved over time, even excepting climate change, is so ludicrous that I don’t know how to process it. We’re poisoning the oceans with carbon dioxide, threatening to leave vast expanses of water incapable of sustaining life. We’re staring down another mass extinction, caused by human beings. An incredibly lax regulatory culture, across a large variety of countries, leaves people questioning whether they can breathe the air or drink the water. And that’s all before pointing out that excepting climate change from a list of environmental ills is like excepting a knife between your ribs in a review of your current health.

As far as the social and individual indicators go, well, I’ve long since given up convincing the teleologists out there that progress is neither constant nor inevitable. But I will insist on saying that, just to pick a few, the fires that are burning Kiev and Caracas and Cairo and Thailand and Syria, etc etc etc, probably should count as indicators of social health, even if you insist that things are good on balance. Or perhaps the fact that the advantage of family wealth lasts 10 to 15 generations might be a negative indicator. Individual indicators of health? You mean like physical health? Well, we have an obesity epidemic you might have heard of. I could go on.

The economist Julian Simon was one of the first to make this optimistic argument, and he advanced it repeatedly and forcefully throughout his career. He wrote, “It is your mind that matters economically, as much or more than your mouth or hands. In the long run, the most important economic effect of population size and growth is the contribution of additional people to our stock of useful knowledge. And this contribution is large enough in the long run to overcome all the costs of population growth.”

Sounds great. Where’s the proof? How is this a responsible statement?

We do have one quibble with Simon, however. He wrote that, “The main fuel to speed the world’s progress is our stock of knowledge, and the brake is our lack of imagination.” We agree about the fuel but disagree about the brake. The main impediment to progress has been that, until quite recently, a sizable portion of the world’s people had no effective way to access the world’s stock of knowledge or to add to it.

In the industrialized West we have long been accustomed to having libraries, telephones, and computers at our disposal, but these have been unimaginable luxuries to the people of the developing world. That situation is rapidly changing. In 2000, for example, there were approximately seven hundred million mobile phone subscriptions in the world, fewer than 30 percent of which were in developing countries.

By 2012 there were more than six billion subscriptions, over 75 percent of which were in the developing world. The World Bank estimates that three-quarters of the people on the planet now have access to a mobile phone, and that in some countries mobile telephony is more widespread than electricity or clean water.

So let’s look at the industrialized West, where everybody with a library card has been able to get access to troves of information for decades. Do all of our people contribute meaningfully to the stock of useful information? No, of course not. Even with those libraries and the free public education and access to the internet. A tiny sliver of the population, actually, is contributing to the stock of useful information. You could argue – I am arguing – that a big part of the reason that more people aren’t is because of social and economic inequalities that prevent them from doing so. But the idea thrown around here, that increasing access to the internet is going to suddenly result in this vast explosion of people powering an all-candy-themed-app economy and in so doing deliver utopia, is worse than unscientific. It’s an impediment to the incredibly hard work, tough choices, and inter-class conflict that real progress will require.

Today, people with connected smartphones or tablets anywhere in the world have access to many (if not most) of the same communication resources and information that we do while sitting in our offices at MIT. They can search the Web and browse Wikipedia. They can follow online courses, some of them taught by the best in the academic world. They can share their insights on blogs, Facebook, Twitter, and many other services, most of which are free. They can even conduct sophisticated data analyses using cloud resources such as Amazon Web Services and R, an open source application for statistics. In short, they can be full contributors in the work of innovation and knowledge creation, taking advantage of what Autodesk CEO Carl Bass calls “infinite computing.”

They can do those things. Most of them don’t do those things, and the vast preponderance of history – you know, history, that stuffy old subject we should throw out so that we can teach kids to code – tells us that increasing access to knowledge is woefully insufficient to result in actual productive use of that knowledge by those who struggle under economic exploitation.

Until quite recently rapid communication, information acquisition, and knowledge sharing, especially over long distances, were essentially limited to the planet’s elite. Now they’re much more democratic and egalitarian, and getting more so all the time. The journalist A. J. Liebling famously remarked that, “Freedom of the press is limited to those who own one.” It is no exaggeration to say that billions of people will soon have a printing press, reference library, school, and computer all at their fingertips.

Millions of people now have a printing press, reference library, school, and computer all at their fingertips. The number of them who do the things that the authors of this piece want people to do fits comfortably in a rounding error. When books ceased to be hand-copied by monks in monasteries, they stopped being the purview of the aristocratic and clerical elite, and yet most of the world’s population remained illiterate for hundreds of years. When televisions became ubiquitous, people wrote breathless pieces arguing that they would give all students the same skills as those taught in the most expensive schools, and yet the vast inequalities in our education system only deepened. And since I was in high school, we’ve been cramming internet-enabled computers into our places of learning, arguing, against the evidence, that the inevitable result would be the rapid democratization of knowledge and creation. We have seen nothing of the kind, and we have no satisfactory reason to believe that this will change in the days ahead. It’s great that you can go on Wikipedia and use it to write your own blog. Most people are not doing even that, and that is a far cry from the absurd revolutionary utopianism of this piece and hundreds more like it.

We believe that this development will boost human progress. We can’t predict exactly what new insights, products, and solutions will arrive in the coming years, but we are fully confident that they’ll be impressive. The second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world. It will make mockery out of all that came before.

I encourage you to read this paragraph out loud, and understand it as encompassing the spirit of the piece. The sound you hear as you read will be the sound of nothing at all.

6 Comments

  1. The gains they’ve gotten out of machine learning and its variant “deep learning” are pretty impressive, but I agree that they’re not really “artificial intelligence”. There’s scant money going into real AI attempts, because most people and companies don’t really want that. People would probably just like a Siri That Actually Works Good and doubles as a personal assistant/fact-checker/system security program. Companies want automation that saves them money.

    Good point about the gains slowing down. We’re still on the Moore’s Law schedule, but not for the reasons that we used to be. The gains these days are coming from more and more parallelism, getting better output from networking programs among computer cores (which is also why we’re seeing the rise of cloud services – cloud services can run programs on thousands of cores linked together, instead of just the 4-6 you have inside your PC). Predicting that that will continue forever is... tricky – we may run into annoying problems trying to manage those programs as they get ever more complex.

    Not sure about the quantum computer stuff, or even if it will be an improvement over the best regular stuff.

    1. The main reason nobody has been pumping money into AI was that it has been an epic fail now for 50 years. Plus business doesn’t really do long-term investments.

      Even most of the things people talk about are less impressive when you look at them closely. The Bayesian stuff is a useful tool, but don’t believe the hype; it still needs a lot of human monitoring and tuning. And while the driverless cars are impressive (essentially they are modelling the brain of an insect), it remains to be seen whether they could handle a city such as London.

  2. Title typo alert.

    Aside from that, great blog post, Freddie!

    “It is a remarkable and unmistakable fact that, with the exception of climate change, virtually all environmental, social, and individual indicators of health have improved over time…”

    Someone send a memo to all those animals in the Gulf of Mexico who are only *faking* being killed by an oil spill. After all, we’ve got a fact here that’s remarkable *and* unmistakable! How often does that happen?

  3. This is the first time I have seen your blog. I basically agree with most everything you are saying. What bothers me the most is how few people seem willing or able to give your point of view any credence whatsoever. When the reality of stagnant growth finally cannot be denied (say around 2030), the resulting social disruption is going to be almost impossible to manage in any civilized way.

    By the way, I think that one of the biggest problems these days is that people walk only in their own narrow worlds. There are very few people who bridge the scientific world and the world of real people and limited possibilities.
