Andrew Smart, AI, Consciousness, Turing Test |519|


Andrew Smart is helping Google figure out whether AI robots can/should take over.

[box]

Listen Now:

[/box]

Subscribe:

[one_third]Subscribe to Skeptiko with iTunes[/one_third] [one_third]email-subscribe[/one_third] [one_third_last]Subscribe to Skeptiko with YouTube[/one_third_last]

Click here for Andrew Smart’s website

Click here for forum Discussion


[00:00:00] Speaker 1: Copenhagen interpretation, which states that the act of measurement affects the system’s [unclear 00:05]. The Penrose interpretation, which suggests that the wave function collapses.

[00:00:12] Alex Tsakiris: Just to confirm, yes, you’re in the right place. This is Skeptiko, and this is yet another movie clip I’m playing for you. This is from Devs, a series on Netflix, like, this is really super-duper mainstream. And right off the bat, they are talking about the most fundamental issues of consciousness, AI, how robots are going to take over the world, which of course is true, but also how robots may be creating a simulation that we’re living in, and all the rest of that shit that, if you’re into that stuff, you better be into. It’s the biggest thing. And it’s directly, directly relevant to this amazing interview I have coming up with Andrew Smart, who’s written a book on all this. So, back to the clip.

[00:00:58] Speaker 1: I want to talk to you about the Von Neumann–Wigner interpretation, which states that human consciousness is the key modifier in decoherence. Are you fucking kidding? Katie, you’re offering a lecture on Von Neumann–Wigner. Are you serious? Half the people in this room are undergrads; they might actually believe what you’re saying. There are many interesting conjectures within the theory. It’s dualist bullshit, which is the worst kind of bullshit.

[00:01:31] Alex Tsakiris: Okay, so there are like five different clips I could play for you from this upcoming interview with Andrew Smart, but I’m gonna confine it very narrowly in this introduction, even though we talk about really important shit like, is Google the belly of the beast that has an unrelenting chokehold on our freedom and way of life as we know it? We address that quite directly, as we do demonetization, but also social justice, and search, and responsibility and scale, which is really the issue. I mean, Google does have a problem, trillions of searches, so they have a certain responsibility there that they’re going to need some flexibility on handling, even if they have to use Gestapo tactics to do it. Flavoring this, it’s really a good interview. It’s a fair, balanced interview, and I super appreciate and respect this guy, Andrew Smart, who you’re going to hear from. But here’s a clip, and then we’ll roll right into the interview.

[00:02:33] Andrew Smart: The idea is that these AGIs will be just like human minds; they’ll literally be minds. And so the thought about LSD was, well, if it’s basically indistinguishable from the human mind, it’s like, how would you engineer a system that could be perturbed by LSD?

[00:02:56] Alex Tsakiris: What I think has been the core question here a lot of times, and this question is skirted for the most part by the technology community, is: is it the Turing test, or is it the metaphysical test? The question about consciousness is, is it somehow fundamental? Welcome to Skeptiko, where we explore controversial science and spirituality with leading researchers, thinkers and their critics. I’m your host, Alex Tsakiris, and I have a terrific guest for you today. His name is Andrew Smart. He’s the author of a book from a few years ago, but certainly an important, relevant book, Beyond Zero and One: Machines, Psychedelics and Consciousness. Let me read for you a little bit of background on Andrew, you’re gonna be blown away. “I have a broad yet deep background in science, engineering and technology. I specialize in the areas of brain imaging, neurotechnology, human-computer interaction, human factors, data visualization, and statistical analysis. I’m primarily interested in the disciplinary intersections among cognitive science, computer science, artificial intelligence, anthropology, philosophy, and human factors.” Now, as impressive as that sounds, I would suggest that Andrew is being somewhat modest. I mean, when you look him up, let me pull this up on Google Scholar, man, a lot of peer-reviewed, very important papers. And then certainly, if you look him up on LinkedIn, I’m not going to go there, but he is very, very well positioned at Google, worked at Twitter before that, really belly-of-the-beast kind of jobs, Honeywell as well, but we’re going to talk about all of that. So I’m very, very grateful that Andrew has joined us today to talk about this book and other stuff. Andrew, thanks for being here.

[00:04:21] Andrew Smart: No, thanks for having me on. I’m a little embarrassed by the introduction, but thanks.

[00:05:15] Alex Tsakiris: There is absolutely no way you should be. I really wish you would do more interviews, I’d love to hear more, but hey, I got a shot here, so I can’t complain. So tell us, let’s start with the book, Beyond Zero and One. Tell folks what they’re gonna find.

[00:05:38] Andrew Smart: Yeah, well, like you mentioned, the book is maybe five or six years old now; it’s hard to remember in our weird time warp of COVID. But I’ve long had this interest in philosophy of mind and in AI, and I’d been following the debates about it ever since I was a student. And like you mentioned, I’ve worked in cognitive science, and then in different brain imaging labs. At the time when I started writing the book, I was working at Novartis, which is a big pharma company in Switzerland. But it’s actually where LSD was first synthesized, back when it used to be called Sandoz, where Albert Hofmann worked. And it turned out I was working for a time in the same building where Hofmann’s lab used to be in the 1940s.

[00:06:39] Alex Tsakiris: Let me interject something to set this up, because maybe I put you too much on the spot [unclear 06:43]. I mean, when I say belly of the beast, man, anyone who is into this stuff, like, the connections that are just in your life are kind of, like, a little bit … Because, like, one way everyone frames the controversy is, Andrew here is a minion of the robots who are going to take over the world, right? I mean, that is one way to couch this: the guy who’s developing the strong AI robots, which we know are going to have a greater and greater, okay, tone it down a bit, a greater and greater influence on everything, whether it’s law, or whether you go see a therapist, a doctor, or whether you trade stocks, or any of this stuff. We all understand that artificial intelligence is … So when you just mentioned two words, you said the controversy surrounding this, I just want to frame it up and make sure everyone understands that this is, for people who are into this, like, one of the most important issues, a lot bigger than a lot of others. Okay. So go ahead, please continue, because now we’re back to the valley. The LSD thing is super, super genius, the way you weave that into the whole book. So go ahead.

[00:08:02] Andrew Smart: Well, we can get to my job at Google, which is actually not that; I’m actually trying to constrain or do some sort of risk assessment of those things that you just mentioned, but we can. Anyway, people may be familiar with the debates around AGI, or what they call artificial general intelligence, and superintelligence. And it’s still definitely a debate in the field whether you can have a general-purpose model that can do anything, really. Because the way machine learning is developed now, models have specific tasks, and they’re not especially good outside those areas: if you develop a thing that’s supposed to detect a certain type of image and then you use it in the real world, it often fails if it sees something it’s not used to, outside its distribution. But anyway, there’s been a long strand of work on this question of superintelligence and AGI and human-level artificial intelligence. And there’s a niche there of philosophers, like David Chalmers, for example, who have been thinking about consciousness a lot, and whether this AGI would be conscious. And that was the interesting part to me, that in the debates about AGI, consciousness is still quite taboo, like people … It’s almost like a side issue that …

[00:09:36] Alex Tsakiris: Well, as you point out in the book, and I forget the exact quote, but you said a lot of AI engineers question whether it’s even relevant. So it’s not even, philosophically okay, but who cares. And just a tiny bit on my background: I do have an AI background. I was a research associate at Arizona, and then I started an AI company, My Path Technologies, and we built these expert systems for DuPont and Texas Instruments, with the idea that we would somehow capture all the knowledge that these gold badgers had and preserve it. And then it didn’t, none of it worked very well, but I’m familiar with the thing, I’m familiar with the issues. And it was really, I don’t want to say crappy, but look where AI is at now, like AlphaGo. It’s almost like you have to have seen the movie AlphaGo and know the whole thing to even enter into this conversation, because what it’s doing has kind of catapulted us to another level. I mean, before that, it was Deep Blue, and can AI beat the best chess player in the world. And now we’re just way, way past that, folks, and those issues are way in the distance. And what Andrew is at the cutting edge of is, okay, these things are really, really smart, how smart could they possibly get? And you pose this question that kind of catapults us five steps beyond that: can we build a robot that trips on acid? So tell us how you jump three steps ahead and say, “Screw the Turing test. This is the acid Turing test.”

[00:11:16] Andrew Smart: Well, so this is, like you were mentioning, a weird confluence of things in my life, where I had come from this sort of cognitive science, AI background where I was studying and working. And then I had moved to Basel in Switzerland to work at this company, to work on medical tech. And then, on the tram one day, actually, I had been reading stuff and I just had that thought, like you have in the shower: could an AI trip on acid? I had been reading a lot about psychedelics, I’ve explored psychedelics in my past, and I had read Albert Hofmann’s book, and I found the whole history of LSD super fascinating. And it’s kind of a lost history, especially in mainstream culture. It’s a very underappreciated story.

[00:12:11] Alex Tsakiris: Can you talk about that a little bit? Because I think your insights into that, your telling of that history, is super important. And it’s brilliant, because it goes beyond the psychedelic experience. It changes the way we think about cognitive science and neuroscience, and that’s really important. Can you share that?

[00:12:30] Andrew Smart: Well, yeah. Albert Hofmann was this Swiss chemist working at what was then Sandoz in Basel. He was a chemist at a pharma company working on drug discovery, synthesizing new compounds, trying to find things that were medically relevant and active. He had been working with a series of different derivatives of a fungus, which is actually kind of a toxic thing if you get too much of it, and I think originally they thought this was good for respiratory issues. And so he had been working on this. The story is that, at some point, he had synthesized a bunch of these different compounds, and with the 25th one, LSD-25 as it was called, somehow in the lab one day he accidentally got some in his system. Nobody quite knows how; it got on his hand or in his eye. But he had about an hour or two where he saw these vivid colors, he felt really strange, he had this strange episode. He initially thought maybe it was chloroform that he had accidentally ingested, because they used that as a solvent. But it was really odd. So he went home. And then he’s like, well, maybe it is this stuff, this lysergic acid diethylamide. So he planned a self-experiment, and he synthesized some more, and he thought he was just making the tiniest dose that would do anything; he didn’t even think it would do anything. And it turned out this was actually a pretty good dose. So he ate it, and then he had the famous trip: he tried to ride his bike home from the lab because he really thought he was gonna die, he was seeing devils, and he writes that he had no memory of moving on his bike, but suddenly he was at his house. Anyway, he had this first acid trip, and that’s why we call it an acid trip, that famous bicycle trip. And actually, every year in Basel, on the date when this happened, there’s a bicycle tour around Basel.

[00:15:01] Alex Tsakiris: I didn’t know that.

[00:15:03] Andrew Smart: Yeah, it’s kind of a … But so then he reported this to his superiors, and they were all skeptical. And he was like, “Well, try some.” And everyone realized this is a very potent thing, because it was such a tiny amount of active stuff that gave you a trip for 12 hours. And that started a whole, real, serious drug development: they did animal studies, they did human studies, it looked really promising for treating alcoholism, for treating depression; there were thousands and thousands of studies done and papers written. And he really intended it to be a serious psychiatric treatment, and he wanted to develop it that way, so he was actually really dismayed by, and very much resented, the hippie movement in the United States, and he resented Leary for undermining the seriousness of it. He didn’t want it to be a party drug, which is kind of how it entered the culture. And then it got banned, and then research on it just completely stopped. And so anyway, that’s …

[00:16:20] Alex Tsakiris: It’s terrific, because I tell you, there’s one other aspect of this that I learned from you, and I thought it was really just a great insight. What you point out is, it opened up all these vistas in terms of even thinking about all the stuff that we now consider relevant.

[00:16:38] Andrew Smart: Sure. Yeah. And as far as reading the history of this, I think prior to LSD, people didn’t really think that what’s going on chemically in your brain had anything to do with how you feel or how you behave, and then LSD was really like, “Oh, okay, we just introduced this molecule into your synapses, and all this crazy stuff happens.” And maybe then you had Prozac and Ritalin. I think LSD was really kind of the entryway into more serious work; I mean, people had known about stimulants and things, of course, but there really wasn’t a very advanced understanding of exactly what these different kinds of molecules are doing at the neural level. I mean, we still don’t; I think our understanding is quite crude. But LSD was really the gateway to, “Oh, we can work on compounds that target specific brain neurotransmitters and change how people feel, change how they behave.” And that’s kind of a lost piece of medical history.

[00:17:53] Alex Tsakiris: Absolutely.

[00:17:54] Andrew Smart: LSD became such a crazy taboo in the United States and around the world.

[00:17:59] Alex Tsakiris: And then what your book traces is like, “Okay, so that’s one gateway,” and then AI, and its power to do stuff that we think is pretty smart, play chess and Go. Don’t play online poker, people, you’re playing against robots, you’re just gonna get wiped out. But really, where you’re going then is to say, okay, in the way that these guys had their world blown apart, like, there’s this chemical neural interaction that I don’t even frickin’ understand, and I didn’t understand consciousness to begin with, but now I understand it even less. And then you come along, or AI comes along, and says, “Yeah, you really don’t understand consciousness.” And then what you pose here, which is such an important question, again, I’ll repeat it: can we build a robot that trips on acid? And in the book, you say, “the Turing acid test.” So talk a little bit about Alan Turing, because this really is still, for a lot of people, going to be a very central part of this dialogue we’re going to have today. What is the Turing test? And is it really reasonable for you to push the Turing test to the LSD level?

[00:19:20] Andrew Smart: I mean, yeah, the fun of writing a book like this is you can be very speculative and do thought experiments with it; I fully acknowledge it’s a speculative argument. But what I was interested in was, A, there’s a lack of discussion of consciousness in the AGI world. It’s like, we’re going to create these human-level intelligences, and people are really into that. And to me, coming from a bit more of a human cognitive science and philosophy of mind background, where consciousness is a central problem for a lot of thinkers, there’s this question: what is the relationship between intelligence and consciousness? And so I was questioning, “Can you have a human-level intelligence that’s not conscious?” I’m still very skeptical of that. Even in neuroscience, we don’t know what the relationship is between intelligence, whatever that is, and consciousness, whatever that is. But I did want to take the sort of strong AI thesis, the idea that these AGIs will be just like human minds, they’ll literally be minds. And so the thought about LSD was, well, if it’s basically indistinguishable from a human mind, it must have this other property, you must be able to alter it through these compounds. It’s a leap, and it’s like, how would you engineer a system that could be perturbed by LSD? But this gets back to these debates in philosophy of mind: there’s the functionalist thing, where the medium, whether it’s biological neurons or silicon-based artificial neurons, doesn’t matter; what’s important is the way they’re set up and the algorithms that run, that’s what is conscious. And then there are other thinkers who are like, “No, no, only biological systems can do this.” And AI people were like, “No, no, biology is irrelevant.” So I thought LSD was a very interesting way to probe that assumption that biology is irrelevant. It’s like, can you have all the properties of a human mind without this other property, that you can give it LSD and it’ll completely change the way your brain is operating for a while, or maybe for a longer time?

[00:22:01] Alex Tsakiris: It’s a very exciting, very imaginative way of framing the problem. If I can go back, I just want to make sure that we don’t lose everybody. The Turing test was basically, as you were kind of alluding to there, if I can fool you into thinking I’m intelligent, then I’m intelligent. And that then becomes the standard throughout: if we’re sitting across from each other and we’re doing a chat, and you think I’m a human, and I’m not, then I’ve passed the Turing test, and then we keep extending that beyond and beyond. And, as you just pointed out, the thought exercises go a lot of different directions, but where you took it with the acid thing, or the LSD thing, I think really gets to what I think has been the core question here a lot of times, and this question is skirted for the most part by the technology community: “Is it the Turing test? Or is it the metaphysical test?” The question about consciousness is, “Is it somehow fundamental?” So, back when quantum physics happened, if you will, you got some of the greatest scientists, physicists of the world, the Max Plancks, the Schrödingers, the Niels Bohrs, saying, “Fuck it, guys. You got it all wrong. Consciousness is fundamental, and matter, what you guys are so interested in, you AI guys, is somehow, merging is the wrong word, somehow coming out of consciousness.” And I have to add that this really needs to be something that we wrestle to the ground too, because on this show, I mean, I’ve got 200 shows exploring that. And I think the best way to explore that is through looking at some of the fundamental questions, the assumptions we’ve made about consciousness. Start with, does consciousness survive death? So go look at the near-death experience science, go look at the University of Virginia and the reincarnation science, go look at all that. And you come back and go, “Okay, pretty much case closed, consciousness seems to survive bodily death.” And that would definitely throw us in the metaphysical camp. So how do we even hold on to the materialist science, consciousness-is-an-illusion position? I mean, is that really viable from a philosophical standpoint? Not just philosophical, given the evidence we have?

[00:24:40] Andrew Smart: Yeah, that’s interesting, and I haven’t thought too much about that in terms of those questions. In the book, I was kind of trying to argue against the inherent dualist tone, I think, in a lot of the AI thinking, where consciousness can be embodied in any medium, even if it’s silicon. And I don’t know what I think about consciousness surviving death; I don’t have a strong view, you know what I mean?

[00:25:23] Alex Tsakiris: Let me ask you this question though, about that, I’m gonna persist on it a little bit further. Isn’t that the question? I mean, not the question like, “Oh my god, I have to know,” but from the standpoint of reducing it down to the most parsimonious way to determine one or another. I think that’s how I got there. I didn’t start with that question. I started with psi and parapsychology, and how do those guys at Princeton get that random number generator to swing a little bit off of 50/50. And then I got to some point and I go, screw that, here’s the simplest question: does consciousness survive death? If consciousness survives this bodily, biological thing, well, then that kind of answers the question. So philosophically, doesn’t that question kind of cut to the chase?

[00:26:18] Andrew Smart: Yeah, I mean, it’s true, I think it’s a great point, because it kind of has to. If you imagine that you can do it in a machine, you have to assume it does transcend the biological brain, if it can be instantiated in a different medium. So it’s a very good point, and it’s interesting, and I agree. I mean, Camus was like, the only serious philosophical question is suicide.

[00:26:50] Alex Tsakiris: Exactly. Right.

[00:26:52] Andrew Smart: If you think your life is worth living, everything else is kind of a game, he thought, you know? So, I have to say, philosophically, I do tend more toward this tradition of, I guess, realism, or, like, science, you know what I mean, the objects of scientific theories are … I would definitely lean toward this: there’s some mind-independent world, and science tells us something about that, in the vague, weakest form of that argument. But I completely acknowledge that that’s a metaphysics in itself, where you say, “Right, science is about trying to understand the some…” And I agree, like you were talking about the quantum thinkers, and quantum physics does throw all that into question, where you can’t really separate what you’re observing from the act of observing. Like, there’s no, so…

[00:28:00] Alex Tsakiris: But I got to return to it, because I think you’ve added a really, really important, like you said, thought experiment exercise with the acid thing. And I want to return to that, what is your final conclusion on that, in terms of playing that game? Is there some future where some artificial intelligence, could it pass the acid Turing test?

[00:28:26] Andrew Smart: Yeah. There are people who are interested in AGI and who are working on it and are really trying to create this complete system that has human-level intelligence, and again, there’s this question of, would that thing, whatever it is, have subjective experience? I definitely think a way to probe that would be to try to perturb it, the way LSD perturbs our subjective experience so completely. But there’s a very difficult part, and this is what’s very interesting to me: it’s really hard to objectively verify that you’re tripping on acid. They do studies now where they give people LSD or psilocybin and put them in the brain scanner, in the fMRI machine, and you lie there and report what you’re experiencing, like, “Oh, I’m seeing colors, I feel disembodied, my ego is dissolving,” and they can definitely track, okay, your anterior cingulate is shutting down, your prefrontal cortex is shutting down, you know what I mean? And then they can compare with a person who’s not on it, who’s sober, and you can get an idea of what’s going on in the brain. But how this relates to the subjective experience is still very hard; they do these studies where they try to reliably correlate, like, this is happening in this region, and this is what the subjects are reporting. And so in a computer system, again, you’d have the same problem: the computer system might be claiming, it might be saying, “Hey, I’m conscious, don’t hurt me.” How would you know, you know what I mean?

[00:30:21] Alex Tsakiris: Here’s where I thought you were going with that when you started talking about the imaging stuff, because this is a very interesting topic in the consciousness field. When they do that imaging stuff, that fMRI stuff, and Nutt, I think, is the guy’s name in England who really kind of started it, David Nutt, is that right? Well, what they found is very counterintuitive. I mean, the subjective experience of tripping on psilocybin is, boom, fireworks are going off. So the expectation was we’d see the brain lit up like a Christmas tree. And what they find is the opposite: they find reduced activity in all these areas. And it supports the notion, which I don’t know how this fits into your model, that the brain is a transceiver of consciousness; consciousness is vast, immense, beyond what we can describe, and we’re somehow filtering it through this. So when you talk about perturbing the system, the AI system that is now instantiated in this little bit of silicon, we still come back to the same question: if God is up there running consciousness and flowing it through all these things, what difference does it make?

[00:31:40] Andrew Smart: So I see what you mean, like, if consciousness is somehow a fundamental force, something that we are connecting to via whatever, and then the psilocybin is allowing more of that through, that would sort of explain the counterintuitive thing about the brain scans. I mean, I definitely think it’s a little more nuanced, because in some of those studies you see there are specific regions of the brain that reduce their activity according to these measures, but there are others that are actually increasing, and their connections are increasing, and these are more the perceptual, sensory regions. But once again, it doesn’t rule out, you know what I mean, that you’re picking up stuff. And again, this is a very old philosophical debate. I think within mainstream neuroscience it’s generally accepted or assumed that the brain, the identity theory, generates consciousness, but then people are like, what does it lose, you know? And this is a hard debate. But there definitely are philosophers who talk about the extended mind, and like…

[00:33:14] Alex Tsakiris: I don’t really think it’s much of a debate if you really look at the frontier science. We’re gonna hold on to this materialism because, as Richard Feynman famously said, “You know, shut up and calculate,” and he didn’t really say it, but why not attribute it to him, it fits him. And we’ve built this incredible, advanced, scientifically based culture that we love, and we should love, because it brings us a lot of things, that shut-up-and-calculate model. So for practical reasons we don’t want to throw that away. But if we’re going to be honest about the science, like I say, near-death experience science, let alone parapsychology, blows it away. And then if you want to get into the more human stuff, reincarnation research, after-death communication, I interviewed this woman, Dr. Julie Beischel, she’s still around at the Windbridge Institute, PhD in pharmacology, she knows how to do controlled studies. You do controlled studies on whether mediums can reliably deliver information to people who are grieving, and the data just comes through again and again: they can. We don’t understand the mechanism, we don’t understand what it means to be deceased and still have consciousness, but it does. And then the follow-on to her study that I find incredibly relevant is, “Hey, and if you lost a dog, you can communicate with that dog too,” in a way that, again, in every way we can measure it, is straight-up doing science. And science says, if we can observe it, we can measure it. There’s no taboo to say, “Well, you can’t measure after-death communication.” Sure you can: just set up the experiment, set up the controls, what are you observing, measure it, control it, control it, control it, and see where you are at the end. So we don’t really have an idea of how any of this interfaces with our world. It’s like, can silicon be conscious? Well, let’s start with, can your cat be conscious? Can your dog be conscious? Most people who’ve ever had an animal say, “Of course it is.” But there’s a whole bunch of questions that get tied in here. What do you think about shut up and calculate?

[00:35:20] Andrew Smart: In fact, there was an interesting survey among physicists a few years ago, where I think they tried to get at this question and asked them, “What about indeterminacy? What about randomness? What about entanglement, and all these different interpretations of quantum theory?” And I think a large percentage of physicists are reluctant to commit to an interpretation, because, like you said, most of them are just in this mode, they do just shut up and calculate, because that’s how you publish papers and get tenure and …

[00:36:01] Alex Tsakiris: Get paid, you get a paycheck.

[00:36:04] Andrew Smart: You get a paycheck, and that’s it. And like you said, we were making these advances, you know, maybe at this point incrementally. There’s a great book by Sabine Hossenfelder called Lost in Math, about how physics is so obsessed with beautiful equations that empirically we’ve made no progress since the ’70s, because they’re all obsessed with supersymmetry, but they can’t find it, and they keep smashing particles, and they’re like, this has to be true, you know what I mean, because the equations are so beautiful. But the empirical support for a lot of the stuff is just not there.

[00:36:44] Alex Tsakiris: What do you think about that? I’ve had some awesome interviews with physicists. Donald Hoffman is kind of one of my favorites, kind of saying there’s no reality, and here’s my mathematical model to prove it. I think it’s just kind of an oxymoron in a way. But your field, I think, to anyone who really is willing to go there, is the most in-your-face counterargument to a lot of that myth-of-progress kind of stuff, which I hear as, we’re not really progressing. It’s like, man. So tell people, in broad sketch, the AlphaGo story.

[00:37:29] Andrew Smart: Google bought DeepMind a few years ago for a lot of money. Like you mentioned Deep Blue, initially people thought chess would be kind of an insurmountable obstacle for an AI, and that proved not to be true. And there is this interesting dichotomy, I think, where there’s the famous story about Marvin Minsky, who was one of these early AI people from the ’60s, and he assigned the problem of computer vision to, like, a summer intern. Back then they were like, “Oh, computer vision, we’re gonna solve that in the summer with an intern,” and here we are, 56 years later, and, well, we’re closer. So the problems we thought were going to be really hard, like chess and Go, turn out to be easier than, say, getting a robot to open a car door. I don’t know if you follow these DARPA challenges, you have these Boston Dynamics things with the dancing robots, but we’re still very far away from having a robot that you could tell, “Oh, go to the fridge and get me a beer.” There’s no robot that could do that without destroying something, you know what I mean? It would have to be a very controlled circuit. So the stuff humans take for granted in terms of motion and moving around is insanely difficult from a robotics and, like, a physical standpoint. But anyway, AlphaGo. Go is this very old game that’s very complicated; it has a lot of dimensions and possible moves. And everyone’s like, okay, chess is relatively simple, actually, it’s a combinatorial problem over all the rules, and if you have this massive supercomputer, of course it can churn through all the possible moves in milliseconds while a human can’t. That was the explanation, and then they’re like, but Go, no way, because that’s way more complicated than chess, the rules are fuzzy. But then DeepMind used what they call reinforcement learning, which is based on theories about how humans learn through reward functions. It’s quite simple: you give someone a goal, and you reward them as they get close to the goal and punish them as they move away. This is operant conditioning, B.F. Skinner type stuff. And when you combine that reinforcement learning idea with deep neural networks, and you can get the machine to play millions and millions of games, it turns out you can engineer a system that will autonomously beat a human at Go. It’s still a bit of a brute-force approach; I think the chess case is just brute-forcing all the possibilities and arriving at the answer. A human can’t brute-force chess or Go, it’s much more of an intuitive sense, but with computer systems you can brute-force it to a degree and then use reinforcement learning to get the rest of the way. And it was really interesting, because AlphaGo came up with moves that, as far as anyone can tell, no human has ever tried. And everyone was like, “Oh my God, how did it do that?” And that’s still kind of an interesting problem: we don’t really understand how it arrived at the winning moves when it beat Lee Sedol. So these systems do have these interesting properties.
And like you mentioned, emergence: there are things that emerge out of these systems that even the engineers and the people who developed them can’t anticipate. They’re like, “Wow, that’s crazy, it did that,” even though they’re the ones who coded it, you know what I mean?
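Andrew’s description of reinforcement learning above, reward the system as it nears a goal, punish it as it moves away, and repeat over huge numbers of games, can be made concrete with a toy sketch. To be clear, this is not AlphaGo’s actual method (which pairs deep neural networks with Monte Carlo tree search and self-play); it is a minimal, hypothetical tabular Q-learning example on a made-up reach-the-goal game, just to show how a policy can emerge purely from reward signals.

```python
# Minimal Q-learning sketch: a toy "walk to the goal" game, not AlphaGo.
# The agent learns, purely from rewards and penalties over many episodes,
# that stepping right is the best action in every state.

import random
from collections import defaultdict

N_STATES = 10          # positions 0..9 on a line; position 9 is the goal
ACTIONS = [-1, +1]     # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = defaultdict(float)  # q[(state, action)] -> estimated future reward

def step(state, action):
    """Move along the line; reward +1 at the goal, a small penalty otherwise."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01
    return nxt, reward, nxt == N_STATES - 1

for episode in range(5000):            # "millions of games", scaled way down
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what we know, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy should step right (+1) in every state.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

The "millions and millions of games" part of the real story corresponds to the episode loop here; the difference in scale, and the use of neural networks instead of a lookup table, is what separates this toy from systems like AlphaGo.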

[00:41:41] Alex Tsakiris: And that’s the part that scares people, right? It scares people right away when you even mention B.F. Skinner as kind of a model for how you would program a self-learning thing. And then when you scale that, you force yourself to be comfortable with the idea that it’s doing reward and punishment, you go, “Yeah,” but then we just had it do it to itself, and it just ran millions and millions of trials on itself. “Oh, whoa!” And then the other thing, comment on this too: AlphaGo, like you say, generates thinking, for lack of a better term, that we couldn’t really have anticipated. And then the humans look at the problem differently and go, well, one of the things I learned from the computer is that I don’t have to smash everybody at Go, all I have to do is win by half a point.

[00:42:37] Andrew Smart: And this is an emerging area of research too, where there are a few studies, I think, on humans working with the machines. The machines alone have faults and weaknesses and blind spots; humans have faults and weaknesses and blind spots. And the idea would be, if you work together, a human-AlphaGo system would just be completely unbeatable. And this is the thinking with diagnostics: let’s not say radiologists are irrelevant now, or obsolete, because we have deep learning on CAT scans. There are a lot of studies showing a lot of weaknesses in automated diagnostic systems, so let’s have a human there, and let’s use the machine for the things the human can’t do, which is shuffle through gazillions of data points, process all these images, and look for patterns, and then complement that with the human ability to use intuition and judgment and social context and all the things we still don’t know how to get a computer to understand. So it’s this sort of hybrid. And people are like, “Oh, the machines are gonna take over,” and it’s like, “Well, no,” I mean, we should cooperate, basically. Because, yeah, like with Go, it’s, “Oh, the machine showed us something that never occurred to humans.”

[00:44:21] Alex Tsakiris: Although all those things kind of break down, because you say, well, clearly that’s an intermediate step, and we’ve seen those steps before. Which kind of gets to the belly-of-the-beast questions that we really do have to talk about. Because, folks, again, I’m telling you, you don’t know how awesome it is to have Andrew Smart with us today, because he is at the edge in his job at Google, and his job at Twitter before that, right? I mean, these are really the cutting, cutting edge of these kinds of questions, because you’re now looking at it from a social, ethical, legal kind of perspective. And we can really put the boxing gloves on about that and have completely different opinions about where it’s going, but we cannot disagree that that’s where the action is. So frame up what you do, what you have done in the past, I think you have a little bit of a different job at Google now, but those issues, the good and the bad: the good, let’s make sure our search is socially just; the bad, let’s censor the shit out of anyone who doesn’t agree with us. Those would be the two polar extremes.

[00:45:44] Andrew Smart: Yeah, I mean, that’s really what we’re working on. So I think that with any technology in history, you’ve had these rapid advancements, and then you typically have catastrophes and disasters, which lead to people going, oh, we need something better. Take flight: with aviation, within 100 years flying went from being this kind of fantastical, crazy dream to this mundane thing that everyone totally takes for granted. But it took decades of manufacturers, companies, governments and regulators working together to make commercial flying safe enough that people are like, “Oh yeah, I hopped on a plane, I don’t even worry about it.” Behind the scenes, it’s this very, very complicated, constant-vigilance type of safety practice that has been learned over decades. And with AI and machine learning, it’s a similar thing, where we’re in this rapid expansion of investment, and everyone is like, this is amazing stuff, and now we’re kind of coming to terms with: there are downsides, there are risks. And we need to understand better how these systems, especially at the scale of Google, how do these things interact? How are they influenced by society? How do they change society? And how do we prevent harm, basically, from these things? Or how do we govern the development of them responsibly? And again, these are terms that are hard to define and hard to …

[00:47:32] Alex Tsakiris: Let me frame it up and pin it down in one way, and it would be the freedom of speech, censorship, freedom of the press issue. So this is a fundamental, fundamental issue here. You’ve got some people on one side saying, you know, these are platforms that we would consider, throughout our lives, as being about freedom of the press, that somehow there is free speech being done there, and that should not be inhibited in some ways, and all this stuff gets very fuzzy. So if we just frame that as the problem, then what we should probably do further is have you help us understand your problem from an AI standpoint. I mean, you’re talking about lots and lots of data, content, at a level that’s unimaginable, and lots of legitimate concerns: you have illegal activity, you have activity that anyone would find morally reprehensible, people trafficking kids, or exploiting other people, and stuff like that. So, what am I missing in terms of framing, “Okay, here’s the issue, but then here’s what people maybe don’t appreciate about how hard that is for you, as a Google guy”?

[00:48:59] Andrew Smart: I agree. I mean, there are definitely very tightly regulated things where you are obligated, legally, to get rid of certain types of content. But then, I agree that the free speech question is very hard. And like you mentioned, I did work at Twitter for a year, right after the election, when misinformation was suddenly, like, what is this? That really blindsided the tech industry.

[00:49:31] Alex Tsakiris: That would be highly, highly controversial, right? I mean, that is the belly of the beast: misinformation, as it was and is the question. One guy’s misinformation is another guy’s information. So yeah.

[00:49:48] Andrew Smart: Right, exactly. And that’s where you get all these questions of, what’s our shared reality? Is there such a thing? And it overlaps with all these philosophical questions we were talking about before, where you go back into some kind of relativism, where, yeah, like you’re saying, each perspective is equally valid, or, “Is there such a thing as false belief, or can you be wrong?”

[01:02:35] Alex Tsakiris: Well, as we move towards wrapping it up, let me ask you a question more relevant to your job. Take this discussion we’re having: what parts of it are particularly relevant, pressing, and interesting to you? Because you’re such a thinker, and anyone who reads this book, and I do encourage you to pick up this book, it’s really fantastic. If you want to know how some of the thought leaders, I hate to use that term, but how some of the people who are really trying to wrestle these issues to the ground are thinking, in some of the novel ways you wouldn’t have anticipated, then pick up Beyond Zero and One: Machines, Psychedelics and Consciousness. But then, Andrew, what do you find interesting about this discussion we’ve been having relative to your job? Because you do, and I didn’t mean to suggest otherwise, you do care about social issues, and you have an interest in, “Hey, let’s make sure we’re being fair there, too.” Do you want to speak to how that relates to what you think you can do to make the world a better place with AI, and these kinds of things?

[01:03:52] Andrew Smart: Well, I mean, I’m really interested in what people talk about as sociotechnical systems. So, not looking narrowly at technical failure in Google’s machine learning systems, but taking a more holistic view of the interaction of our systems with society. The issues you’re talking about are super important too, but mainly what we’re trying to do is understand, if you look at inequality, for example, or the way society is structured and the way things run, there’s all this data that is sampled from that social structure, upon which these models are trained, and then they’re making predictions. People are sometimes taking that information and using it to do things, which generates more data, and it’s kind of this feedback loop. And what we’re trying to understand is, how do we intervene so that we’re not just blindly reproducing, or recapitulating, discrimination or unfair allocation of resources, or …
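As a rough illustration of the feedback loop Andrew describes, a model trained on data sampled from society, its predictions acted on, and those actions generating the next round of data, here is a hypothetical toy simulation. The scenario (allocating attention across two neighborhoods with identical underlying rates) and every number in it are invented for illustration; it does not describe any real Google system or dataset.

```python
# Toy feedback-loop simulation: a "model" allocates attention in proportion
# to historically recorded incidents, but new incidents only get recorded
# where attention is already directed. The true underlying rates are
# identical, yet the initial skew in the records reproduces itself.
# All names and numbers are hypothetical.

import random

random.seed(42)

true_rate = {"north": 0.3, "south": 0.3}   # identical underlying reality
recorded = {"north": 60, "south": 30}      # skewed historical records

for round_num in range(1, 6):
    total = sum(recorded.values())
    # "Model": estimate risk from the recorded (biased) data, allocate accordingly.
    allocation = {n: recorded[n] / total for n in recorded}
    # Feedback: new records are only generated where attention was allocated.
    for n in recorded:
        patrols = int(100 * allocation[n])
        recorded[n] += sum(random.random() < true_rate[n] for _ in range(patrols))
    print(round_num, {n: round(allocation[n], 2) for n in allocation})
```

The only point of the sketch is that, without an intervention, the allocation never drifts back toward the true 50/50 split even though both neighborhoods are identical; that self-reinforcing skew is the kind of recapitulation Andrew says his team tries to catch and interrupt.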

[01:05:12] Alex Tsakiris: Bring that down to a practical level that people might be able to kind of grasp?

[01:05:17] Andrew Smart: Well, I mean, I think a pretty clear example to me is facial recognition software, which is in your phone to unlock it sometimes, or it’s used in a lot of different domains, and there’s a lot of controversy about it. But one clear lesson from a lot of the work that my colleagues and I do is: you train these systems on a giant database of photographs, you feed these to the neural networks, they learn what a face looks like, and then they can pretty accurately predict, I see these pixels and I know that’s Alex, with 89% confidence, whatever, 90. The problem was that these databases, these training datasets, were mostly white people, mostly white men. So when you use these systems, and they tend to be used more often on darker-skinned people, for lots of reasons, either they don’t see the face, or it’s the wrong face, it’s misrecognized. And everyone goes, “Well, what’s going on here?” And it turns out nobody thought, oh, if we only have white faces in the database and then we use it on darker-skinned people, it’s probably not going to work as well. These kinds of things seem super obvious in retrospect, like, obviously you’ve got to train it on the diversity of faces that you’ll ultimately use the system on, but these are not things that people had thought about before. And so what we’re trying to develop is, for these systems, is the training data representative of the people who will be interacting with the system, and will it work for them? And then, in the same way, there’s this other question of whether you should be using this at all, because it’s inherently problematic.
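One concrete practice that follows from the facial recognition example above is disaggregated evaluation: reporting a model’s accuracy per demographic group rather than as a single average, so a model trained on an unrepresentative dataset can’t hide a failure on under-represented groups behind a decent overall number. The sketch below is a minimal, hypothetical illustration of that idea; the data, names, and group labels are made up, and it is not code from any real Google system.

```python
# Disaggregated evaluation sketch: compute accuracy overall and per group.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return overall accuracy plus accuracy for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

# Hypothetical evaluation set: the single overall number hides a large gap
# between the well-represented group "a" and the under-represented group "b".
y_true = ["alex", "sam", "alex", "sam", "alex", "sam", "alex", "sam"]
y_pred = ["alex", "sam", "alex", "sam", "alex", "kim", "sam", "kim"]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

overall, per_group = accuracy_by_group(y_true, y_pred, groups)
print(f"overall: {overall:.2f}")   # 0.62
print(per_group)                   # {'a': 1.0, 'b': 0.25}
```

In practice the same breakdown would be applied to whatever metric matters for the system (false match rate, false non-match rate, and so on); a large gap between groups is the signal that the training data was not representative of the people the system will actually be used on.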

[01:07:16] Alex Tsakiris: And now we can’t go backwards. Like, we can’t go backwards. It’s a beautiful, beautiful example.

[01:07:23] Andrew Smart: Yeah. And that kind of problem is rife through all of machine learning, where you’re training your models on a very narrow type of person and then using that to make predictions about different types of people, and it doesn’t work.

[01:07:40] Alex Tsakiris: Equally true in search, same problem in search. Everywhere, so everywhere.

[01:07:49] Andrew Smart: Yeah. And to me, it’s this problem of scale, and that’s hard. You know, the tech industry is predicated on scale.

[01:08:05] Alex Tsakiris: That’s a great example. But it reminds me of the stop-and-frisk cases in New York. There were a lot of people, particularly in some minority communities, who said, “Hey, the cops stop us a lot more than they stop other people on the street, and they can frisk us.” “No, no, no, that doesn’t happen.” Well, when they actually got the data, yeah, they did, something like twice as likely to be stopped, and less likely to actually find anything, like a gun or stuff like that. So it’s another form of data. It’s only when you get the data that you go, “Oh, wow, I guess the system has been systemically biased in a way.” And so it’s very, very excellent that you would point out face recognition as a part of that, because it’s perfect. So what’s next? You are an excellent author, your book is super well written, and you have another book as well. Do you have another one in the works, or are you gonna stick with your day job?

[01:09:07] Andrew Smart: I’m a researcher, so I write papers now on these kinds of issues and publish work, so that kind of takes up my writing. I don’t really have a book in the works. I’m interested in pursuing more of these questions around these technical systems and understanding complexity theory and things like that. But of course, I guess I’m a curious and easily distractible person, so I get interested in different things and then get kind of obsessive, as I’m sure you know. But yeah, my job keeps me very interested and busy, for sure.

[01:09:54] Alex Tsakiris: Well, let’s hope you remain just as curious as you’ve been. Our guest again has been Andrew Smart; the book you’re gonna want to check out is Beyond Zero and One. Terrific having him on. Thank you so much.

[01:10:06] Andrew Smart: Thanks a lot. Thanks for having me.

[01:10:08] Alex Tsakiris: Thanks again to Andrew Smart for joining me today on Skeptiko. One question I tee up from this interview: why does AI seem to be blind to the metaphysical question of consciousness, which, as I kind of put forth in this interview, is the only game in town? It’s the only question to ask. And the next level on that question is, well, the spiritual question. And you have to wonder if the reason they don’t answer part A is because they don’t really want to answer part B. So I need to break that down even further, I guess, so it isn’t totally inside baseball. If you think consciousness is an illusion, then you’re going to play the shut-up-and-calculate game, which is the only game in town for AI. And Andrew kind of threads the needle there a little bit, but at the end of the day, it’s the only game in town. Consciousness is an illusion, consciousness is an epiphenomenon of the brain: that’s where you’ve got to be if you’re doing AI, whether you’re doing AI at Google, or Facebook, or Twitter, or, more importantly, DARPA, or NSA, or wherever you’re doing it. Anyways! So that’s question number one. But then question number two is, if consciousness is not an illusion, then what is the hierarchy of consciousness? Like I always say, the fucking God question, the does-ET-have-an-NDE question, these are real questions, but you wouldn’t want to talk about any of that stuff if you’re doing AI, if your job was to just build a better AlphaGo, machine learning, and apply it to search and everything else that’ll make billions and billions of dollars and drive up the stock price on a lot of Google stock. Man, drive it up, drive it up. Let me know your thoughts: Skeptiko forum, come on over and play. Lots more to come. Stay with me for all of that. Until next time, take care. Bye for now. [box]

  • More From Skeptiko

  • [/box]

     
