some links for AI skeptics

It’s not unusual for me to feel exasperated by things I read online, but reading this piece on “sentient code” from VentureBeat had me throwing my hands in the air. It’s everything wrong with popular writing about artificial intelligence: filled with hype, vague where specificity is needed most, and generally credulous in a breathless, gee-whiz style that should not appear in professional journalism. You’d hardly know, from reading the piece, just how much of a failure AI has been throughout its 60-year history.

As a corrective, here are some resources that display an appropriate skepticism towards AI, amidst all the hype.

  • This piece by Peter Kassan is a really valuable overview of the basic dynamics of AI, what a thorough failure the field has been thus far, and why so many people underestimate the scope of the project.
  • David Deutsch, the father of quantum computing, has another broad overview, one which is more philosophically and culturally oriented than Kassan’s.
  • This long interview with Noam Chomsky demonstrates the symbiosis between linguistics and AI, and how both have been ill served by the rise of so-called “Big Data.” In particular, the interview is a good overview of a very broad argument about not just AI but the nature of knowledge: are probabilistic models a sufficient substitute for explanatory models? Chomsky says no, and I suspect he’s right.
  • This interview with C. Randy Gallistel, a psychologist and cognitive scientist at Rutgers, is an entertaining and informative breakdown of the fundamental problem with AI: we still have essentially no idea whatsoever how the brain encodes, stores, and decodes information.
  • Gallistel’s book Memory and the Computational Brain is fairly challenging, but it can definitely be appreciated by a hardworking amateur (like me). It really lays out the need to understand cognition, as distinct from the anatomy and physiology of the brain, before we can ever unlock intelligence and, through it, AI.
  • This piece on Douglas Hofstadter and his attempts to return the study of AI to the study of how human minds think, published in The Atlantic, is just fantastic, and really drills into the differences between projects like Watson and Deep Blue and what was originally meant by artificial intelligence.

7 responses

  1. “are probabilistic models a sufficient substitute for explanatory models? Chomsky says no, and I suspect he’s right.”

    I agree. I see running a Monte Carlo simulation as a substitute for actually thinking a problem through.

    (For example, the classic “calculate π by throwing darts at a square board”; a quick sketch of that estimator follows below.)
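
    To make the dart-board example concrete, here is a minimal sketch of such an estimator in Python. The sample count and seed are arbitrary illustrative choices, not anything specified in the comment:

        import random

        def estimate_pi(samples: int = 1_000_000, seed: int = 0) -> float:
            """Estimate pi by throwing random 'darts' at the unit square.

            A dart lands inside the quarter circle of radius 1 when
            x**2 + y**2 <= 1, which happens with probability pi/4, so the
            hit ratio times 4 approximates pi.
            """
            rng = random.Random(seed)
            hits = 0
            for _ in range(samples):
                x, y = rng.random(), rng.random()
                if x * x + y * y <= 1.0:
                    hits += 1
            return 4.0 * hits / samples

        print(estimate_pi())  # roughly 3.14; the estimate converges slowly as samples grow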

  2. You may be interested to learn of Doug Lenat’s Cyc project. Whether or not you judge it a success, or even a good idea, I don’t think anyone can deny that the project is at least trying to model the way people actually think, by assembling a complete, explicit compendium of all the common-sense rules that people use to reason about the world, such as “when you sweat, you become wet.” (A toy sketch of that kind of rule base follows below.)

    One of your later blog posts discusses the problem of translating the sentences “The committee denied the group a permit because they feared violence” and “…they advocated violence.” You observed that to translate these sentences correctly you have to resolve “they” correctly, even though its antecedent differs between the two cases, and that to do so you have to understand something about the power relationships between groups that seek parade permits and the authorities that grant or withhold them. Unlike most AI projects, the Cyc project is not trying to sidestep the need for that understanding, but to model it explicitly.
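
    To give a flavor of what an explicit, hand-written rule base looks like, here is a toy sketch in Python. This is not Cyc’s actual representation (Cyc uses its own language, CycL, and a far richer ontology); the facts and rules here are made-up illustrations of forward chaining over common-sense statements like “when you sweat, you become wet”:

        # Toy forward chaining over hand-written common-sense rules.
        # Each rule is (set of premises, conclusion); facts are plain strings.
        RULES = [
            ({"person sweats"}, "person is wet"),  # "when you sweat, you become wet"
            ({"person is wet", "the air is cold"}, "person feels cold"),
        ]

        def infer(facts):
            """Apply the rules repeatedly until no new facts can be derived."""
            known = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in RULES:
                    if premises <= known and conclusion not in known:
                        known.add(conclusion)
                        changed = True
            return known

        print(sorted(infer({"person sweats", "the air is cold"})))
        # ['person feels cold', 'person is wet', 'person sweats', 'the air is cold']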
