Voices in AI – Episode 82: A Conversation with Max Welling

Thursday, March 14, 2019, 13:00, by The Apple Blog
About this Episode
Episode 82 of Voices in AI features host Byron Reese and Max Welling discussing the nature of intelligence and its relationship with intuition, evolution, and need.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today my guest is Max Welling. He is the Vice President, Technologies at Qualcomm. He holds a Ph.D. in theoretical physics from Utrecht University and he’s done postdoc work at Caltech, University of Toronto and other places as well. Welcome to the show Max!
Max Welling: Thank you very much.
I always like to start with a question on first principles, which is: what is intelligence, and why is artificial intelligence artificial? Is it not really intelligent, or is it? I'll start with that: what is intelligence, and why is AI artificial?
Okay. So, intelligence is not something that's easily defined in a single sentence. I think there is a whole broad spectrum of possible intelligence, and in fact in artificial systems we are starting to see very different kinds of intelligence. For instance, you can think of a search engine as being intelligent in some way, but it's obviously a very different kind of intelligence than a human being's, right?
So there’s human intelligence and I guess that’s the ability to plan ahead and to analyze the world, to organize information—these kinds of things. But artificial intelligence is artificial because it’s sort of in machines not in human brains. That’s the only reason why we call it ‘artificial.’ I don’t think there is any reason why artificial intelligence couldn’t be the same or very similar to human intelligence. I just think that that’s a very restricted set of intelligence. And we could imagine having a whole broad spectrum of intelligence in machines.
I'm with you on all of that, but maybe because human intelligence is organizing information and planning ahead, and machines are doing something different, like search engines and all that, maybe I should ask the question: what isn't intelligence? I mean, at some point, doesn't it lose all its meaning if it covers that much? What are we really talking about when we come to intelligence? Are we talking about problem solving? Are we talking about adaptation? Or is it so broad that it has no definition?
Well, yeah, it depends on how broadly you want to define it. I think it's not a very well-defined term per se. I mean, you could ask yourself whether a fish is intelligent, and I think a fish is intelligent to some degree, because it has a brain, it processes information, it adapts perhaps a little bit to its environment. So even a fish is intelligent, but clearly it's a lot less intelligent than a human.
So I would say: anything that has the purpose of sensing, of acquiring information from its environment, and of computing on that information to its own benefit. In other words, to survive better, or ultimately to reproduce. And so basically, once you've taken in information and you compute, you can act on that information. You can then act on the world in order to bring the world into a state that's more beneficial for you, right? So that you can survive better, reproduce better. So: anything that processes information in order to reach a particular goal, which in evolution is reproducing or surviving.
But… in artificial systems it could be something very different. In an artificial system, you could still sense information, you could still compute and process information in order to satisfy your customers—which is like providing them with better search results or something like that. So that’s a different goal, but the same phenomenon is underlying it, which is processing information to reach that goal.
Now, you mentioned adaptation and learning, and I think those are super important parts of being intelligent. A system that can adapt and learn from its environment and from experiences is a system that can keep improving itself, and therefore become more intelligent or better at its task, or adapt when the environment is changing.
So these are really important parts of being intelligent, but not necessary ones, because you could imagine a self-driving car that is completely pre-programmed. It doesn't adapt, but it still behaves intelligently in the sense that it knows when things are happening, it knows when to overtake other cars, it knows how to avoid collisions, et cetera.
So in short, I think intelligence is actually a very broad spectrum of things. It's not super well defined, and of course you can define narrower things, like human intelligence for instance, or fish intelligence, or search engine intelligence, and then it would mean something slightly different.
How far down in simplicity would you extend that? So if you have a pet cat and you have a food bowl that refills itself when it gets empty…it’s got a weight sensor, and when the weight sensor shows nothing in there, it opens something up and then fills it. It has a goal which is: keep the cat happy. Is that a primitive kind of artificial intelligence?
It would be a very, very primitive kind of artificial intelligence. Yes.
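To make that concrete, here is a minimal Python sketch of the sense-compute-act loop such a feeder would run. The sensor and actuator functions are hypothetical stand-ins, simulated here so the loop runs on its own:

```python
import random
import time

EMPTY_THRESHOLD_GRAMS = 5.0  # below this weight, the bowl counts as empty

def read_weight_sensor() -> float:
    # Hypothetical sensor: a real feeder would query a load cell here.
    # Simulated with a random reading so the sketch runs on its own.
    return random.uniform(0.0, 50.0)

def open_dispenser_valve() -> None:
    # Hypothetical actuator: a real feeder would drive a motor or valve.
    print("dispensing food...")

def feeder_loop(cycles: int = 5) -> None:
    for _ in range(cycles):
        grams = read_weight_sensor()         # sense the environment
        if grams < EMPTY_THRESHOLD_GRAMS:    # compute against the goal state
            open_dispenser_valve()           # act on the world
        time.sleep(1)                        # then wait and sense again

feeder_loop()
```

The "intelligence" here amounts to a single threshold comparison, which is why it sits at the very bottom of the spectrum described above.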
Fair enough. And then going back centuries before that, I read that the first vending machines, the first coin-operated machines, were built to dispense holy water. You would drop a coin in a slot, and the weight of the coin would weigh down a mechanism that opened a valve and dispensed some water; as the water was dispensed, the coin would fall out and the valve would close again. Is that a really, really primitive artificial intelligence?

Yeah, I don't know. You can drive many of these definitions to an extreme. Clearly this is some kind of mechanism, but I guess there is a bit of sensing, because it's sensing the weight of a coin, and then it has a response to that, which is opening something. It's a completely automatic response, and humans actually have many of these reflexes. If the doctor taps your knee with a reflex hammer, your knee jerks up, and that's actually done through a nervous pathway that doesn't even reach your brain; it's handled somewhere in the back of your spine. So it's very, very primitive, but still, you could argue it senses something, it computes something, and it acts. It's the very most fundamental, simple form of intelligence. Yeah.
So the technique we're using to make a lot of advances in artificial intelligence with computers now is machine learning, and I guess it's really a simple idea: let's study data about the past, look for patterns, and make projections into the future. How powerful is that technique, and what do you think are the inherent limits of that particular way of gaining knowledge and building intelligence?
Well, I think it's kind of interesting if you look at the history of AI. In the old days, a lot of AI was hard-coding rules. You would think about all the eventualities you could encounter, and for each one of those you would program an automatic response. Those systems did not necessarily look at data in large amounts, from which they would learn patterns and learn to respond.
In other words, it was all up to humans to figure out what the relevant things to look at and sense were, and how to respond to them. If you make enough of those rules, a system like that actually looks like it's behaving quite intelligently, and I think even nowadays a large component of self-driving cars is made of lots and lots of these rules, hard-coded into the system. So if you put many, many of these really primitive pieces of intelligence together, they might look like they act quite intelligently.
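As a toy illustration of that rule-based paradigm, here is a short Python sketch; the driving scenario, thresholds, and action names are invented for illustration, not taken from any real driving stack:

```python
# Every situation a human anticipated gets an explicit, hand-coded response.

def rule_based_response(distance_to_car_ahead_m: float,
                        own_speed_kmh: float,
                        obstacle_detected: bool) -> str:
    if obstacle_detected:                  # rule 1: emergency case first
        return "emergency_brake"
    if distance_to_car_ahead_m < 10.0:     # rule 2: too close, slow down
        return "brake"
    if distance_to_car_ahead_m > 50.0 and own_speed_kmh < 100.0:
        return "overtake"                  # rule 3: clear road ahead
    return "keep_lane"                     # default when no rule fires

print(rule_based_response(8.0, 80.0, False))   # -> brake
print(rule_based_response(60.0, 90.0, False))  # -> overtake
```

Each rule covers only what its author imagined in advance; anything outside that list falls through to the default behavior.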
Now there is a new paradigm. It has always been there, but it has basically become the dominant mainstream in AI. The new paradigm, I would say, is: 'Well, why are we actually trying to hand-code all of these things we should sense, when basically you can only do that to the level of what the human imagination is able to come up with, right?'
So think about detecting, let's say, whether somebody is suffering from Alzheimer's, from a brain MRI. Well, you can look at the size of the hippocampus, and it's known that that organ shrinks if you are starting to suffer from the memory issues that are correlated with Alzheimer's. A human can think of that and put it in as a rule, but it turns out that there are many, many far more subtle patterns in that MRI scan, and if you sum all of those up, you can actually get a much better prediction.
But humans wouldn't even be able to see those subtle patterns, because it's like: if this brain region and this brain region and this brain region, but not that brain region, show this particular pattern, then that's a little bit of evidence in favor of Alzheimer's, and there are hundreds and hundreds of those things. Humans lack the imagination, or the capacity, to come up with all of these rules. And we basically discovered that we can just provide a large data set and let the machine itself figure out what these rules are, instead of trying to hand-code them in. This is the big change with deep learning as well, in computer vision and speech recognition.
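Here is a minimal sketch of that "sum up many subtle pieces of evidence" idea, assuming scikit-learn is available; the synthetic features below merely stand in for the hundreds of MRI-derived measurements described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 500, 200

# Each feature carries only a tiny amount of signal on its own.
true_weights = rng.normal(0.0, 0.1, n_features)
X = rng.normal(size=(n_samples, n_features))
y = (X @ true_weights + rng.normal(0.0, 0.5, n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No feature is hand-picked: the model learns which combinations matter.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

No single feature is decisive; the learned weights combine hundreds of weak signals into one prediction, which is exactly what a hand-written rule list cannot do at scale.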
Let's first take computer vision. People had many hand-coded features that they would try to identify in the image, and from there they would make predictions, say, whether there was a person in the image or something like that. But then we basically said: 'Well, let's just throw all the raw pixels at a neural net, a convolutional neural net, and let the neural net figure out what the right features are, what it should attend to when it needs to do a certain task.' And it works a lot better, again because there are many very subtle patterns that it now learns to look at, which humans simply didn't think to look at.
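Here is a minimal sketch of that "raw pixels in, learned features out" setup, assuming PyTorch; the architecture, layer sizes, and random batch are illustrative only:

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # The convolutional filters are learned from pixels during training;
        # nothing here is a hand-designed feature detector.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One training step on a random batch standing in for real photos.
model = TinyConvNet()
images = torch.randn(8, 3, 64, 64)          # raw pixels, no hand-coded features
labels = torch.randint(0, 2, (8,))          # e.g. "person" / "no person"
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                             # gradients reshape the learned filters
print(f"training loss: {loss.item():.3f}")
```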
Another example is maybe AlphaGo. In AlphaGo, something similar happened. Humans have analyzed the game of Go and come up with all sorts of rules of thumb for how to play it. But then AlphaGo figured out things that humans can't comprehend; it's too complex. But still, those things made the algorithm win the game.
So I would say it's a new paradigm that goes well beyond trying to hand-code human-invented features into a system, and therefore it's a lot more powerful. In fact, this is of course also the way humans work. And I don't see a real limit to this, right? If you pump more data through it, in principle you can learn a lot of things, basically everything you need to learn in order to become intelligent.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
https://gigaom.com/2019/03/14/voices-in-ai-episode-82-a-conversation-with-max-welling/