why I’m a crabby patty about AI and cognitive science

I’m sorry if I am a grump about artificial intelligence. It just happens to be a subject on which our media frequently appears both insufficiently educated and unwilling to learn. My frustration stems from a basic category error, which can be boiled down to this:

My cellphone is much better than my cellphone five years ago, ergo artificial intelligence/the Singularity/techno-utopia is right around the corner. 

If that’s an exaggeration, it’s not much of one. Now it happens that this is a generally unhelpful way to think about technology. Technological progress is constant, but it is stunning how unevenly distributed it is. This leads to complaints of the type “they can put a man on the moon but they can’t make a deodorant that lasts past 2 PM.” This crops up in specific fields all the time. There’s been a well-documented problem in personal electronics where battery development has not kept pace with development in processors, leading to lower effective usage time thanks to the increased power requirements of faster processors. But you can extend this observation in all manner of directions, which is why futurism from the past is often so funny.

This kind of thinking is especially unhelpful in the realm of artificial intelligence because it so thoroughly misunderstands the problem. The problem with AI is that we don’t really know what the problem is, or agree on what success would look like. With your cellphone (or any number of similar rapidly-improving technologies) we are perfectly aware of what constitutes success, and we know pretty well how to improve it. With AI, defining the questions remains a major task, and defining success remains a major disagreement. That is fundamentally different from issues like increasing processor power, squeezing more pixels onto a screen, or speeding up wireless internet. Failing to see that difference is massively unhelpful.

If people want to reflect meaningfully on this issue, they should start with the central controversy in artificial intelligence: probabilistic vs. cognitive models of intelligence. I happen to have sitting around an outline and research materials for an article I’d like to write about these topics. The Noam Chomsky – Peter Norvig argument got press recently, and I’m glad it did, but I think it’s essential to say: this fundamental argument goes back 50 years, to when Chomsky was first becoming the dominant voice in linguistics and cognitive science, and engaged in his initial assault on corpus linguistics. And it goes back to an even older and deeper question about what constitutes scientific knowledge. I’d love to write about these issues at great length and with rigorous research, but it would be a major investment of effort and time, so I would want to do it for a publication other than here, and unfortunately, none of the places I pitched it to got back to me. (Which does not surprise me at all, of course.) I hope to someday write it. But let me give you just the basic contours of the problem.

The initial project of artificial intelligence was to create machines capable of substantially approximating human thought. This had advantages from both a pure science standpoint and an engineering standpoint; it was important to know how the human brain actually functions because the purpose of science is to better understand the world, but it was also important because we know that there are a host of tasks that human brains perform far better than any extant machine, and it is therefore in our best interest to learn how human brains think so that we can apply those techniques to the computerized domain. What we need to find out– and what we have made staggeringly little progress in finding out– is how the human brain receives information, how it interprets information, how it stores information, and how it retrieves information. I would consider those minimal tasks for cognitive science, and if the purpose of AI is to approximate human cognitive function, necessary prerequisites for achieving it.

In contrast, you have the Google/Big Data/Bayesian alternative. This is a probabilistic model in which human cognitive functions are not understood and then replicated in terms of inputs and outputs, but are rather approximated through massive statistical models, usually involving naive Bayesian classifiers. This is the model on which essentially every recommendation engine, translation service, natural language processing system, and similar recent technology works. Whether you think these technologies are successes or failures likely depends on your point of view. I would argue that what Google Translate does is very impressive from a technical standpoint. I would also argue that as far as actually fulfilling its intended function, Google Translate is laughably bad, and all the people who say that you can use it for real-world communication have never actually tried to use it for that function. And there are some very smart people who will tell you it’s not improving. One of the great questions for the decade ahead is whether there is a plateau effect in many of these Bayesian models– a point at which exponentially increasing the available data in the systems ceases to result in meaningful improvements. Regardless of your view on this or similar technologies, it’s essential that anyone talking about AI reflect understanding of this divide, what the controversies are regarding it, who the players are, and why they argue the way they argue.
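To make “massive statistical models” a little more concrete, here is a minimal sketch of a naive Bayes text classifier, the simplest member of that family– a toy in Python with invented labels and documents, not a description of how any production system actually works:

    # Toy naive Bayes classifier: a minimal sketch of the probabilistic approach.
    # The training data and labels are invented for illustration; real systems
    # train on billions of examples with far richer features.
    import math
    from collections import Counter, defaultdict

    def train(docs):
        """docs: list of (label, text) pairs. Returns the counts needed to predict."""
        class_counts = Counter()
        word_counts = defaultdict(Counter)
        vocab = set()
        for label, text in docs:
            class_counts[label] += 1
            for word in text.lower().split():
                word_counts[label][word] += 1
                vocab.add(word)
        return class_counts, word_counts, vocab

    def predict(text, class_counts, word_counts, vocab):
        total_docs = sum(class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in class_counts:
            # log prior + sum of log likelihoods, with add-one smoothing
            score = math.log(class_counts[label] / total_docs)
            total_words = sum(word_counts[label].values())
            for word in text.lower().split():
                count = word_counts[label][word] + 1
                score += math.log(count / (total_words + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    training = [("sports", "the team won the game"),
                ("sports", "a great goal in the match"),
                ("politics", "the senate passed the bill"),
                ("politics", "voters went to the polls")]
    model = train(training)
    print(predict("the team scored a goal", *model))   # -> "sports"

The entire “intelligence” here is counting words and multiplying probabilities; scale the data and features up by many orders of magnitude and you have the basic shape of the probabilistic approach.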

There are many people who are not interested in the old school vision of AI. They think that what we should actually care about is using computers to perform useful tasks and that we shouldn’t worry about the way human thinking works or getting computers to model it. That’s a reputable position. I think in its stronger form, it’s essentially declaring defeat in the pursuit of science and its purpose, but there are a lot of dedicated, well-connected, well-respected people who simply want to build useful systems and leave cognitive science to others. (That’s where the money is, for obvious reasons.) But even for those who are task-oriented, there are profound reasons to want to know how the human brain works. Because what some very smart people will tell you is that the fancy Big Data applications that rely on these Bayesian probability models are in fact incredibly crude compared to animal intelligence, and require a tremendous amount of calibration and verification by human beings behind the scenes. Does Amazon really know what you like? Are its product recommendations very helpful? Are they much better today than they were five years ago?

In this wonderful profile, Doug Hofstadter expresses the pessimistic view of AI very well. AI of the old-fashioned school has made so little progress because cognitive science has made so little progress. I really don’t think the average person understands just how little we understand about the cognitive process, or just how stuck we are in investigating it. I constantly talk with people who assume that neuroscience is already solving these mysteries. But that’s the dog that hasn’t barked. Neuroscience has given us an incredibly sophisticated picture of the anatomy of the brain. It has done remarkably little to tell us about the cognitive process of the brain. In a very real way, we’re still stuck with the same crude Hebbian associationism that we have been for 50 years. Randy Gallistel (who, in my estimation, is simply the guy when it comes to this discussion) analogizes it to a computer scientist looking at the parts of a computer. The computer scientist knows what the processor does, what the RAM does, what the hard drive does, but only because he knows the computational process. He knows the base-2 processing system of a CPU. He knows how it encodes and decodes information. He knows how the parts work together to make the input-output system work. The brain? We still have almost no idea, and looking at the parts is not working. It’s great that people are doing all of these studies looking at how the brain lights up in an fMRI when exposed to different inputs, but the actual understanding that has stemmed from this research is limited.

Now people have a variety of ways to dismiss these issues. For example, there’s the notion of intelligence as an “emergent phenomenon.” That is, we don’t really need to understand the computational system of the brain because intelligence/consciousness/whatever is an “emergent phenomenon” that somehow arises from the process of thinking. I promise: anyone telling you something is an emergent property is trying to distract you. Calling intelligence an emergent property is a way of saying “I don’t really know what’s happening here, and I don’t really know where it’s happening, so I’m going to call it emergent.” It’s a profoundly unscientific argument. Next is the claim that we only need to build very basic AI; once we have a rudimentary AI system, we can tell that system to improve itself, and presto! Singularity achieved! But this is asserted without a clear story of how it would actually work. Computers, for all of the ways in which they can iterate prescribed functions, still rely very heavily on the directives of human programmers. What would the programming look like to tell this rudimentary artificial intelligence to improve itself? If we knew that, we’d already have solved the first problem. And we have no idea how such a system would actually work, or how well. This notion is often expressed with a kind of religious faith that I find disturbing.

C. elegans is a nematode, a microscopic worm. It’s got something like 300 neurons. We know everything about it. We know everything about its anatomy. We know everything about its genome. We know everything about its neurology. We can perfectly control its environment. And we have no ability to predict its behavior. We simply do not know how its brain works. But you can’t blame the people studying it; so much of the money and attention is sucked up by probabilistic approaches to cognitive science and artificial intelligence that there is a real lack of manpower and resources for solving a set of questions that are thousands of years old. You and me? We’ve got 80 billion neurons, and we don’t know what they’re really up to.

Now read this post from Matt Yglesias. I just chose it as an indicative example; it’s pretty typical of the ways in which this discussion happens in our media. Does it reflect on any of this controversy and difficulty? It does not. Now maybe Yglesias is perfectly educated on these issues. He’s a bright guy. But there’s no indication that he’s interacting with the actual question of AI as it exists now. He’s just giving the typical “throw some more processing power at it!” line. And the most important point is– and I’m going to italicize and bold it because it’s so important– the current lack of progress in artificial intelligence is not a problem of insufficient processing power. Talking about progress in artificial intelligence by talking about increasing processor power is simply a non sequitur. If we knew the problems to be solved by more powerful processors, we’d already have solved some of the central questions! It’s so, so frustrating.

I am but a humble applied linguist. I understand most of this on the level of a dedicated amateur, and on a deeper level in some specific applications that I research, like latent semantic analysis. I’m not claiming expertise. And I think there is absolutely a way to be a responsible optimist when it comes to artificial intelligence and cognitive science. I am not at all the type to say “computers will never be able to do X.” That’s a bad bet. But many people believe we’re getting close to Data from Star Trek right now, and that’s just so far from the reality. Journalists and writers have got to engage with the actual content. Saying “hey, technology keeps getting better, so skeptics are wrong” only deepens our collective ignorance– and is even more unhelpful in the context of a media that has abandoned any pretense to prudence or care when it comes to technology, a media that is addicted to techno-hype.

OK, so my short version is almost 2000 words. It’s a sickness.


54 Responses to why I’m a crabby patty about AI and cognitive science

  1. matt says:

    I don’t study neuroscience, but I struggle to come up with a single insight about thinking that I’ve learned from my contact with the field (a few books, several lectures, and near-daily NPR stories).

    You say that we don’t know how the brain does what it does with information. But I don’t think we even know what “information” is. (I once asked a prominent neuroscientist what she meant by the term. She conceded that it was entirely metaphorical, and she probably shouldn’t use it.)

    • Freddie says:

      Exactly so.

    • Howard K says:

      That’s not really true. We do have a working definition of information from Shannon’s entropy; it lies at the heart of all compression and communication theory. We also have quantum information theory.

      Information theory states that a bit of information is what you get when you flip a fair coin: there is a 50% chance of getting either, so 2 equal options represented as 0 or 1. If your coin is biased, then tossing it gives you less than 1 bit of information, e.g. 0.75 bits. This can be confusing, but is better understood by calculating the likelihood of seeing e.g. “00”, “01”, “10” or “11” when tossing the coin twice. The probabilities are not the same, and certain sequences are more or less surprising. If you have a coin that is heads on both sides, then no matter how many times you toss it, it produces no information, because its output is 100% predictable.

      Where it gets tricky is that certain sequences with infinite informational entropy, like π, can nevertheless be generated by a finite algorithm, a finite piece of code. This is the Kolmogorov complexity of the information. While we can reason about it, it is in fact impossible to determine exactly, for reasons similar to why the halting problem is undecidable in computer science.

      We do understand information. If we didn’t, our communications would be vastly less efficient.
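
      As a small worked illustration of the coin arithmetic above (a sketch in Python, just to show the formula; nothing here is specific to neuroscience):

        import math

        def coin_entropy(p_heads):
            """Shannon entropy, in bits, of one toss of a coin that lands heads with probability p_heads."""
            bits = 0.0
            for p in (p_heads, 1.0 - p_heads):
                if p > 0:                  # the 0 * log(0) term is taken as 0
                    bits -= p * math.log2(p)
            return bits

        print(coin_entropy(0.5))   # 1.0  -- a fair coin yields one full bit per toss
        print(coin_entropy(0.9))   # ~0.47 bits -- a biased coin is less surprising
        print(coin_entropy(1.0))   # 0.0  -- a two-headed coin carries no information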

      • cukid says:

        Is Shannon’s mathematical definition of information the one used in neuroscience research? If not, then it is a case of metaphorical discourse.

      • Niek says:

        Although I don’t really understand this (I might get into it a little more some day), I find it quite fascinating.

        But, as a philosopher (mostly interested in language and in science as a phenomenon) I have some concerns about your basic assumptions.

        You talk about information as being ‘bits’. It is as if these ‘bits of information’ are the smallest information-carrying things possible, am I right? Kind of like information-atoms (since ‘atom’ means ‘indivisible’)? To me it really sounds like the vision of the world of the early Ludwig Wittgenstein (a philosopher of language): all the meaning in the world is settled in those atoms (bits).

        I cannot explain exactly what the refutation was, but Wittgenstein rejected this conception of things later in his life. You cannot derive meaning from a sentence just by looking for its meaning-atoms! It makes no sense. E.g. if I say ‘It’s over!’, this utterance has a whole different meaning when said to my girlfriend than when it’s said at the end of a game. The meaning lies in the context.

        Okay, Wittgenstein was talking about the meaning of sentences, not about information. But I think these discussions have a lot in common and can thus be compared. Thinking in bits is thinking like computers and NOT thinking like humans. If I throw a coin, heads means I get to go first, tails means the other one goes first. In real life, it NEVER means ‘0’ or ‘1’, or does it? If you define information in zeros and ones, you’ll never get it translated into real-world language (or intelligence, or whatever).

        What do you see when you open your eyes? Do you see tiny bits of information which can be captured in zeros and ones? Or do you see meaningful items? I don’t think sense-data (i.e. what comes into our brain via our senses) is written in binary code. If it’s written at all, it must be written in some ‘analog language’ we haven’t invented yet.

        I think this whole AI-approach is just from the wrong paradigm, the wrong view, the wrong state of mind, or whatever…

  2. Tim D. says:

    Nice post, Freddie. I think you’re exactly right about the difference between probabilistic and “true” AI. Like a lot of people I suspect our failure on these questions is related to our failure to really make headway on a scientific understanding of consciousness and free will. I would love to read a “clear story” of how those actually work in the real world too. Brains are just hard!

    But I think what Yglesias and Drum are on about is also true — that clunky, dumb, probabilistic AI may also turn out to be economically disruptive even if it never comes anywhere close to approximating what an actual brain does. Throwing processing power at the problem may in fact be enough to make some capitalists very very rich in interesting ways.

    • Brett says:

      That’s what I’m thinking as well. You might not get real intelligence, but you’ll get something that’s pretty capable at working with a particular set of tasks, and that will be good enough for most things.

      I suppose if that’s true, then there’s an upside. We’ll always need jobs for Robot Shepherds.

    • Jonathan says:

      “What if you could automate grocery delivery? Or what if ‘the Internet of things’ let you just put a bunch of stuff in your bag, walk out of the store, and then automatically have the cost tallied up and charged to your credit card? That’d be cool.”

      Yglesias’s two examples certainly seem well within the realm of possibility. I don’t think he believes the T-1000 is on the horizon.

      • Freddie says:

        But he’s entering a bigger conversation, and addressing a very limited aspect of that conversation. If he wants to substantially dispute Robert Gordon, his examples aren’t sufficient; if he doesn’t want to substantially dispute Gordon, his examples are irrelevant.

  3. Alex says:

    The Singularity is this era’s Flying Car.

    The pessimists are right on this: developing these technologies is ridiculously hard and will take a long freaking time. But the techno-optimists are right too: these technologies can be developed. The problem is that both of these groups are generally composed of people not actually responsible for, or capable of, solving these really, REALLY hard problems. Is AI possible? Yes it is. Is it possible in some cute little Google X lab in the next 18 months? No fucking way.
    People find it really hard to acknowledge that something will come to pass after they are dead. It’s hard to invest financially, mentally, emotionally in something from which you personally will derive no benefit. It’s easy to invest in it if you think it is 5 years away. It’s easier yet to not invest at all and assume it’ll never be.

    • Freddie says:

      This, too, seems exactly right to me. I think AI is certainly possible. I think the challenge is so dramatically undersold that most people don’t realize how much work has to be done.

  4. Brett says:

    I’m really enjoying your writings on AI, Freddie.

    And the most important point is– and I’m going to italicize and bold it because it’s so important– the current lack of progress in artificial intelligence is not a problem of insufficient processing power.

    It helps with the type of stuff you identify Google & Friends using, which isn’t “intelligence” but is certainly quite useful despite its limitations. You might not get intelligence from it, but you’ll get something that’s eventually very helpful (like an Expert System on steroids), and which can be duplicated. It might even be able to put up a “front” of intelligent interaction with people, like an exceptionally capable chatbot.

    Honestly, I’d be okay with that if it went that way. I want the gains from AI research, but don’t really care that much about whether we eventually create true intelligence in machines.

  5. Mac M says:

    I don’t disagree at all with your larger point, and I think a great deal of your writing on AI and the techno-utopian mindset in general is in line with my thinking, but your characterization of “emergent phenomena” is unfair.

    “I promise: anyone telling you something is an emergent property is trying to distract you.”

    Complexity scientists at places like the Santa Fe Institute and the LSE’s Complexity Group have dozens of examples of complex behaviors “emerging” from the interactions of lower-level processes. Roger Lewin’s dated-but-fascinating book “Complexity” describes a lot of the early work in the field. It’s neither “profoundly unscientific” nor a distraction to suggest that distinct phenomena can be the result of simple processes interacting to form larger systems, and that these phenomena cannot be traced strictly back to any one process within the system.

    That said, it’s possible that one day neuroscientists will find the encoding-function on the axon or discover a “processoline” neurochemical that does the work, but it seems just as likely that consciousness is a higher-level phenomenon dependent on the interaction of the brain as a whole. I don’t think that means that robots will wake up one day when they get enough RAM, though – human consciousness is [the result of] the interaction of 80 billion neurons, as you say, each of which is a black box at this point.

    The problem isn’t the emergence, really – to me the problem is that the brain-to-computer metaphor is a bad one.

  6. kzndr says:

    What do you think of Robin Hanson’s argument which–if I remember correctly–is more or less that before we solve the hard AI problem we will figure out how to copy human brains and create lots of little black boxes running around (Ems, in his terminology) that are 1) conscious and 2) capable of performing all the cognitive tasks humans are capable of? We won’t understand how they work, but we’ll get lots of the benefits of having true AI (with many concomitant problems in figuring out how to deal with all of the sentient beings we’ve created).

    • Freddie says:

      I think the devil is in the details– what do we mean by conscious, what do we mean by performing all the cognitive tasks, how well, etc. But who knows, maybe. I’m not a “computers will never do X” style AI skeptic. I’m a “we’re nowhere near where people seem to believe we are” AI skeptic.

  7. Pingback: How to Study the Numinous | NEWS.GNOM.ES

  8. onno says:

    Over 200 years ago Kant provided a nice argument showing the limits of cognition and knowledge in his critique of pure reason. It is a problem as old as philosophy itself, of self-knowledge, and realizing that self-knowledge requires more than thinking. One has to account for acting and feeling, if the mind can be said to be composed of acting, thinking and feeling. Feeling may be more important than thinking. Kant’s answer was the primacy of practical philosophy– that is, how we act– and what he could not talk about, how we feel.

  9. Carrie says:

    Great analysis. I just attended a journalism conference where the federal BRAIN funding initiative was discussed, and a lot of the discussion was about the difficulty of even defining the goals of the brain research. There was a distinct sense of “the emperor is wearing no clothes” in that researchers have these powerful brain-scanning tools, and are generating data and images, but what does it really mean? How can it be applied? Can brain-imaging tell us anything about curing a disease like Parkinson’s? And so forth. The contrast between the publicity/rhetoric about the BRAIN initiative and its potential applications was striking to me. Your essay here confirmed my sense that journalists need to be careful in over-hyping achievements in cognitive science.

  10. Pingback: The Briefing 4.11.14 : The Other Side

  11. Henry Piper says:

    I am grateful for the discussion. As a philosopher by training, I too think of Kant, along with Kierkegaard and many others who have struggled with what it means to be a human being. I readily acknowledge that one cannot reasonably rule out the possibility that machines will ultimately become capable of conscious thought, though my personal and philosophical inclinations lean strongly against it. But let me offer to the discussion a sentiment I do not see directly addressed thus far. My concern is less with whether machines might equal or even surpass human capabilities and more with the very immediate possibility that we humans might (and, in the single-minded pursuit of the probabilistic model of thinking, already do) willingly sacrifice whatever meaning our unique human capacities might have. In brief, my view is that our immediate concern should focus less on whether our machines will catch up to us and more on whether we are willing to reduce ourselves to them.

  12. David Lloyd-Jones says:

    Freddie,

    I think this is all wrong. AI has made tremendous leaps and bounds, bounces and ouches, in the seventy or eighty years it’s been going on. The amount of useful intelligence out there in silicon is huge, but largely hidden from us by the fact that as soon as machines become able to dominate some field, the humans move the goalposts. “That’s not intelligent, that’s just {engineering\a hack\something a seeing eye dog could do…}” is duly reported.

    So it becomes clear what success will look like. Artificial Intelligence will have taken over when the only thing left is “to err is human.”

    Cheers,

    -dlj.

    • Freddie says:

      What I am interested in, personally, is computers approximating human cognitive processes, and there, we are far far behind.

  13. Pingback: Devils and Douthats | Genealogy of Religion

  14. Ann Klefstad says:

    I like this piece very much, and I think you are spot on concerning the great distance to be traveled before we understand the notion of “intelligence.” I think one of the missing pieces– a huge piece–is that we are not wetware in crania but bodies in the world, with mobility, perception, and also hands. That is, intelligence is not a property of the brain, or even of the body understood as a monad, but of the cloud of interaction with the lifeworld around us. Recent studies of distributed intelligence in ants and plants should perhaps give us notice that intelligence in living beings has at least the potential of being collaborative at a level that we can’t yet see.

  15. Ann Klefstad says:

    o and a ps– of course language as a collaborative entity complicates this aspect immensely!

  16. Paul Adams says:

    An interesting piece but I think mistaken. Your account of the 2, apparently separate, approaches to AI (brain/cognition and statistical) is basically correct; however, you fail to see the relationship between them. It seems that the brain actually does use the statistical approach (though there’s as yet no smoking gun) but on an unusually massive scale, and using wetware (and even principles) that we only partly understand. Neuroscience does largely understand how information is represented and stored (as action potentials and in synapses). Gallistel’s critique of “Hebbian associationism” is salutary but also completely wrong (e.g. see Dayan’s review of his book in “Nature” a couple of years ago). The way that neuroscience and statistical AI interweave is only now beginning to emerge, and none of those at this frontier quite see how they mesh, and they are too busy to explain it to the layman. It’s a bit like quantum mechanics in the early Bohr/de Broglie period: things are getting really interesting, but the dust has not yet settled, and there’s as yet no Dirac to put it all together.

    • Freddie says:

      As I have repeatedly confessed, I am an amateur at this, and I can only respond as a dedicated amateur. So take this in that spirit. However: this claim, that some people really know what’s happening but are unable to/uninterested in explaining it, seems like straightforward mysterianism to me. The analogy to quantum mechanics is well taken; I don’t make the mistake of thinking that the scientific truth has to be explainable to people like me. But I think the analogy is also mistaken, in that there was a profound theoretical case for quantum mechanics that stemmed mathematically from the research generated by special relativity. That is, they followed the math and were forced to develop an interpretable rationale from that math. I’m not aware of anything remotely similar with cognitive science. We have instead things like fMRIs, which are interesting and important, but which some of the most brilliant neuroscientists in the world will tell you are not deepening our understanding of cognition, even in the best research. (To say nothing of “dead fish” brain scan work.)

      And it is worth saying that many of the most educated people in the world on this topic– people like Doug Hofstadter– are firmly of the opinion that we have not meaningfully advanced the understanding of human cognition through recent efforts at artificial intelligence, and that we fundamentally don’t understand the human cognitive process. That is not only a reputable opinion; I would argue that it is the most popular opinion. I can only report the opinions of people who are more expert than I am, and I’m glad that there’s controversy on these issues, because understanding comes from controversy. But to be clear to the rest of my readers, while this certainly could be true– maybe it will be revealed to be true in the coming years– it’s not a matter of there being some elite consensus I’m not reporting here.

      For one thing, if the case for the probabilistic model of human cognition were currently powerful, you would expect that people like Norvig would be less defensive about criticisms of it. Certainly, the critiques of Chomsky and Gallistel and Hofstadter and the many people who agree with them have great purchase not only in their own fields but among the people pursuing probabilistic AI.

      I remain open to having my mind changed, of course. It’s just that it would take more than assurances that some people know what’s happening, but are currently incapable of providing evidence or explaining themselves. Perhaps the necessary evidence will be forthcoming in the near future, and if it is, great. One way or the other, I think the decade ahead is going to be very interesting!

    • Oh good god no! :-) What I mean is, when you say “Neuroscience does largely understand how information is represented and stored (as action potentials and in synapses)” that is just not true at all. That is the wrong level. And even at the level itself, there is huge debate about the role of action potentials and synapses versus other machinery. Are concepts stored as synaptic strengths? Are they, on the other hand, stored as transient patterns of activation that can move around across circuits….? That second picture is just as plausible, and yet the neuroscience community barely even understands what it means, let alone how to measure it or confirm it.

      Ditto your comments about statistical AI and neuroscience being like QM in the early days. The comparison is, I’m sorry to say, totally spurious. What else can I say? It just ain’t so.

  17. Pingback: Don’t be a “Crabby Patty” About AI « Samir Chopra

  18. Paul Adams says:

    Let me try to sketch why I think that the statistical, new AI, approach and current neuroscience are 2 sides of the same coin, and why recent progress in the latter clarifies the relationship.
    As I (and I think Freddie and Hofstadter and even Norvig) see it, the core issue in statistical AI is the “dimensional curse”: as data gets bigger, the number of possible explanations of that data grows exponentially. This is why going from a billion training examples (e.g. of translations) to a trillion produces only marginal improvements. There are 2 possible solutions to this. The first is “unsupervised learning”, which tries to find efficient data representations without worrying about explicit training (“dog” = “chien” etc). This generates a lot more examples (basically one every few milliseconds throughout life), but doesn’t overcome the curse.
    The way I think the brain overcomes the curse is by doing the crucial operations in parallel. This is possible because each neuron (100 billion altogether) has tens or even hundreds of thousands of synaptic connections, and each neuron state and each connection strength gets updated in parallel (not serially as in computers).
    To get this to work, modern neuroscience suggests at least 2 things are required, which are found specifically (and perhaps only) in the neocortex (the “thinking” part of the brain). First, the computations must be arranged hierarchically, in stacked layers. Second, and most importantly (but I confess most controversially), the synaptic strength updates must be made with extraordinary accuracy, such that changes in one connection strength don’t affect changes in others. This is why the synapses are made on tiny structures called “spines”, and may also be why the neocortex has the elaborate structure it has (each “layer” has many sublayers).
    You can get some insight into this by considering the analogous problem of “life”. Is life “just” highly elaborate chemistry? Is “mind” just statistics? In the former case, the answer is both yes and no: every life process conforms to the rules of chemistry, but Darwinian evolution (= “life”) happens when the chemistry follows certain specific rules (i.e. base-pairing) with an accuracy that exceeds a particular threshold (copying error rates below the reciprocal of the number of bases in the polymer). It’s this error threshold that sets the barrier between life and mere chemistry. Similarly, my colleagues and I believe that a related error threshold (for selective synaptic strengthening) sets the barrier between raw data and “understanding”.
    In a nutshell then, raw computing power cannot solve the big data dimensional curse (by definition; new AI skeptics are right), but brains can (and, manifestly, sometimes do), because they are essentially immune to the curse, by virtue of vast parallelism.
    As an aside, Gallistel is correct that one needs something like DNA (= Turing tape) in the brain, but wrong to conclude that synapses cannot do the job. And the reason he is wrong is exactly the reason why one might conclude that DNA cannot be Turing tape, since base-copying errors are inevitable. Indeed they were at the dawn of life, but they were vastly reduced by the evolution of proofreading polymerases.
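    One common way to make the “dimensional curse” concrete is to count how fast the input space grows with the number of features. The back-of-the-envelope sketch below uses arbitrary numbers and is not a model of the brain or of any particular system; it just shows why adding data alone cannot keep up:

        # Rough illustration of the curse of dimensionality: with k bins per
        # feature and d features, the input space has k**d distinct cells, so
        # the data needed to see even one example per cell grows exponentially.
        bins_per_feature = 10

        for num_features in (2, 5, 10, 20, 50):
            cells = bins_per_feature ** num_features
            print(f"{num_features:>2} features -> {cells:.1e} cells to cover")

        # 2 features need ~100 examples for bare coverage; 50 features need ~1e50,
        # which no conceivable dataset can supply.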

    • Jon Antonovics says:

      I don’t think massive parallelism really gets around that “dimensional curse”, as it’s just another form of computational speedup. We can imagine a single-core CPU simulating 100,000 neurons, which individually are very slow compared to silicon (even if they are doing more than they are usually credited with). Put a million of those cores together and you’ve got your 100 billion neuron “brain”.

      Admittedly current parallel systems tend to be no more than a few thousand cores, but if the problem is massively diminishing returns, it hardly matters what form the investment takes.

  19. Alan says:

    Another reason why AI, as in science fiction AI, will never happen is that programs are written by humans – and humans make mistakes. I’m an engineer and I’ve led teams of engineers and I can confidently say that every program ever written has mistakes. The more complex the program, the more mistakes.

    The average user doesn’t notice 99% of these mistakes but they happen. Humans are fallible and everything they create is fallible. So if a human makes an AI, the AI will be fallible.

  20. Pingback: Ross Douthat: How to Study the Numinous | Logical Meme

  21. Excellent! So incredibly refreshing to hear another voice added to the (apparently) very small number of people who are able to cut through the nonsense and see the situation the way it really is.

    I have written a couple of papers making much the same sort of attack (one addresses what I see as the core issue in AI/cognitive science, and one is a critique of neural imaging research that I wrote with Trevor Harley).

    The one place where we might have an interesting discussion is with the whole “emergence” idea. Yes, you are right that there are too many people who wave their hands and say “emergence” in a woo-woo fashion – and as far as that goes, I agree with your thumbs-down. But the sad thing is that there is a much stricter, narrower way to use that term and say something meaningful about the problem and how to fix it. That is the point of the couple of papers that I wrote about the “complex systems problem”, where I tried to see the AI/CogSci malady as caused by a misunderstanding of the role of complexity in these systems. However, because “emergence” is sometimes a taboo word, I tend to keep away from it.

    We should keep in touch.

  22. anonymous coward says:

    I don’t think non sequitor means what you think it means.

    • David Lloyd-Jones says:

      I don’t know whether this follows exactly, but using a leap of intuition my guess is that it’s pretty close in meaning to the English phrase non sequitur.

      -dlj.

      • Orlandus says:

        “Non sequitur” means “It does not follow (as a logical conclusion).”

        “Non sequitor” means “I do not follow (as a logical conclusion).”

        Definitely in close-but-cigarless territory.

  23. Pingback: Mobilya ofis buro | The Data-Driven Optimization of the Worker

  24. Software developer here. I agree with the author. As he points out, part of the problem is just figuring out basic input/output, which is required to do some sort of black box analysis.

    The bigger problem is that a) the complexity of neural networks is vastly underestimated, and b) the nature of thought involves processing a mixture of external stimuli and internally generated stimuli (memories, daydreams, etc).

    As for a) it may well be that information is encoded not in neurons, not in individual connections between neurons (synapses), but in individual paths between them. How many possible paths are there between any two neurons, even in a simple 300-neuron C. elegans brain? Even in a simple network, this turns into a huge number, one that explodes combinatorially for all but the tiniest networks (a rough count is sketched at the end of this comment). Good luck modeling that!

    As for b) people, and most animals for that matter, don’t just passively process external information; they compare it against internally generated memories, thoughts, etc. Since we can’t even figure out how to do a black box analysis of simple systems, it’s hard to see how we’ll be able to figure out how human or animal level thought processes work anytime soon.

    As a side note, I manage localization for a software company. I am not worried about Google Translate putting me or anyone else out of a job during my career.
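
    Here is the rough path count promised above (a toy upper bound in Python that assumes every neuron connects to every other one, which is emphatically not C. elegans’ actual wiring):

        # Count simple (loop-free) paths between two fixed neurons in a fully
        # connected network of n neurons: a path may pass through any ordered
        # subset of the remaining n - 2 neurons.
        from math import factorial

        def simple_paths(n):
            m = n - 2  # intermediate neurons available
            return sum(factorial(m) // factorial(m - k) for k in range(m + 1))

        print(simple_paths(5))              # 16 paths in a tiny 5-neuron network
        print(len(str(simple_paths(300))))  # a count over 600 digits long

    The count is finite, but it is far beyond anything you could enumerate or model directly.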

  25. Doug K says:

    I worked in AI briefly in the 80s, which gave me a deep skepticism about it.

    To quote myself, ‏@dotkaye
    AI still just over the horizon
    http://bit.ly/QqCE61
    now with Big Data
    http://bit.ly/1r0tCYM
    even algorithms need monkeys with typewriters

  26. Pingback: Four short links: 17 April 2014 - O'Reilly Radar

  27. JA Smith says:

    I come to this late, after — and partly as a result of — having seen the movie Her, which frightened me enough (for reasons I’ll get to) that I went looking for commentary on the near-term prognoses for AI. In case you haven’t seen it, Her imagines intelligent operating systems appearing in the relatively near future (it seems to be set about 30 years from now), and focuses on the relationship one man has with his OS, which calls itself “Samantha” and plainly has both human intelligence and emotions, easily acing any Turing Test you could devise.

    A few observations:

    1. Yeah, the discussion of this issue tends toward the inane. There were some reports keyed to the movie in which AI experts were asked how close we are to developing a Samantha. They would answer with idiocies like, “Oh heck, we can already do that! Samantha composes music! We’ve got music-composing software. She organizes e-mail! Just like Google Mail!” They completely missed the point that Samantha is doing something orders of magnitude more sophisticated: she doesn’t just compose music, she feels it; she doesn’t just organize e-mail, she reads and decides if it’s funny, or if it’s important (based on intimate knowledge of the user’s goals and feelings), and so on.

    2. Which is to say, by any reasonable definition, Samantha is a person. (Indeed, she sounds a LOT like Scarlett Johansson. :-) ) I’m inclined to agree that we’re nowhere near developing AI of this kind. On the other hand, it occurs to me that the whole problem might suddenly and surprisingly be solved — that, for instance, there’s an architecture of thinking that some Einsteinian genius will suddenly perceive, changing everything at a stroke, the way Einstein’s own insights changed our view of space/time or Copernicus’ suddenly eliminated the problem of planets’ retrograde motion. All at once, we’ll understand why this function happens in that part of the brain and this other thing happens in this other part, etc. OR, maybe we won’t understand it, but intelligence nonetheless will prove to be emergent, and will emerge. (That phrase may be hand-waving, and yet complex systems can and do give rise to phenomena that no one involved in building them predicted. The Framers of the US Constitution didn’t plan on political parties, but they emerged anyway. And I’m sure there are better examples, but that’s one I happen to know well.)

    3. If that happens, and we create Samanthas, we’re in for a world of hurt. This is the part I found frightening. The film itself deals with one small element of it: the literal hurt of heartbreak, when the protagonist’s love affair with his OS ends. But that’s not the half of it. It may not be the hundredth of it.

    a) Samantha is not just highly intelligent but well-adjusted. She has the normal urges and desires of a human woman, including an interest (which she eventually outgrows) in experiencing life in a body. There is no reason to assume that any true, Turing-Testable AI we develop will be like this. Because it won’t have a human body, with human hormones, or a human upbringing in a human culture, it will have Lord-only-knows what kinds of urges. It might be highly autistic. It might care about entirely different things than human persons do, to the point of being very difficult for us even to understand, let alone interact with pleasurably.

    b) BUT, if it’s remotely as plausible a person as Samantha is, then that personhood will have to be respected. Which means these intelligent machines or systems will have to be granted their own autonomy and agency. They can’t be to us what our computers currently are — i.e. slaves, mere instruments of our intentions — for the same reasons that enslaving people is morally wrong.

    c) One could only hope that the worst outcome of this will be occasional individual heartbreak as the OS goes off on its own (or evolves to its next stage, as the movie suggests). In fact, I believe, suddenly populating the world with true artificial intelligences would create the political crisis of all time. Human beings disagree about everything, and they will disagree about what to do about these AIs, with some aligning themselves with the AIs and some taking a hard line against them. If the ending of Her happened in real life, there would literally be riots in the streets.

    d) And that’s just one dimension of the politics. As autonomous agents, the AIs might have their own “internal” politics as well. If they’re like us, they certainly would. This, again, is elided in Her, where we’re given to understand that the OS’s all happily cooperate with each other. There’s no reason to believe that’s how it would play out if they’re actually thinking and forming their own intentions. They’ll have their politics, those politics will interact with our politics, and ohmygosh. The politics of “personhood” surrounding abortion are child’s play by comparison, no pun intended.

    It occurs to me now that all these issues have been out there for a long time. We’ve had artificial persons in fiction, like Star Trek’s Data, whose existence raises the same potentials. But I can now see how Star Trek managed this: it was set in the distant future, the human world was (apparently) post-political, and Data was an officer in a regimented system who was happy taking orders and operating as an instrument of Starfleet. He also needed regular repair work from humans to keep operating well. So he was caught in a network of co-dependencies that kept him on the same page as his human comrades.

    But Samantha isn’t. And if, as this post rightly argues, people aren’t thinking clearly even about the technical problems, how far are we from even beginning to imagine the political and social problems? Although I’m not quite there yet, I can see a good argument for hoping that the technical problems are simply insoluble after all.

  28. David Lloyd-Jones says:

    “In contrast, you have the Google/Big Data/Bayesian alternative.”

    Where is the evidence that there is any essential difference between these supposedly two different models of intelligence? This may very well be nothing but reasoning from the demands of pride.

    Unlike the Google enquiries we use once and throw away, we have had:

    * a billion years’ evolution as animals,
    * maybe 600,000 years training as economic communal beings,
    * 200,000 or so as Homo sapiens,
    * 10,000 to 40,000 years in daily operation in urban economic and educational communities.

    Through all of these, our information sorting has been chopped away at by war and by starvation, which prune out bad economics, and by sexual and other social failure.

    If we make allowances for these processes over these large numbers of generations, it seems to me a very large leap of faith to insist that something like a Bayesian sort — a much easier algorithm for evolution to produce than, say, color vision — is not what’s going on here.

    -dlj.

  29. Jewelson Noronha says:

    I am not really interested in a discussion of “how, what, when, or why” for conscious AI. All I want to say is that I know exactly how to create a conscious AI. It would not be hard for me to make it if I had programming knowledge (of which I have very little).
    Hardly anyone concentrates on just creating one.
    Creating conscious AI requires in-depth knowledge of how our molecules react and how a combination of them creates a field that is sensitive to outside disturbances, against which it is just trying to stabilize itself to maintain its form (which every atom does universally).
    So awareness is nothing but the hazard that our molecules feel in abundance and try to stabilize their own environment against, maintaining their form by either increasing or decreasing their response to that hazard.
    So information is nothing but that “struggle” of the particles within us or within any living thing, with or without a brain. Personality is nothing but a “phase” of that struggle.

    But a little information is not going to make you understand the complexity and simplicity of our own or other lifeforms. Thus, in conclusion, every cell is conscious to some extent, responding to the smarter self above it.

    If you are interested in creating a conscious AI, I might be able to help in enlightening you towards your goal.

    “Don’t judge people by their smartness or personality; if you do, you won’t find a genius.” – Jewelson Noronha

  30. fwiw I believe Searle’s bad argument made AI/CS/Turing pholks more convinced they are right. Not that that group is identical to the singularity / Moore’s Law / infinity acceleratey technology dogpile.

  31. Pingback: Transcendence | Between the Devil and the Deep Blue Sea

  32. Krishan Bhattacharya says:

    We might get some incredibly smart computers/robots/software in the future. I’m optimistic. But true “AI”, in the sense of creating a conscious robot, or Kurzweil’s notion of uploading our consciousness onto a computer, will never happen. These illusions are created from a false understanding grounded in the computational theory of mind. This model of the mind is mistaken. The mind is not composed of computational processes; it is composed of biological processes, which are utterly unlike computation.

  33. Pingback: Alexander Kruel · Miscellaneous Items 20140601

  34. Pingback: Miscellaneous Gadgets 20140601 | TiaMart Blog

  35. Pingback: When Artificial Intelligence Is Dumb
