Category Archives for Artificial Intelligence (AI)

Voices in AI – Episode 68: A Conversation with Suju Rajan

About this Episode

Episode 68 of Voices in AI features host Byron Reese and Suju Rajan discussing differences in machine and human learning, as well as where machines could take us in advertising, privacy, and medicine.

Suju has a Ph.D. in Machine Learning from the University of Texas. Dr. Rajan is also currently the head of research at Criteo.

Visit to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm and I’m Byron Reese. Today I’m so excited our guest is Suju Rajan. She is the head of research at Criteo, and she holds a Ph.D. in Machine Learning from the University of Texas. Welcome to the show.

Suju Rajan: Great to be here, Byron.

You know, we’re based in Austin, so I drive by your alma mater every day almost, so it’s kind of like a hometown interview.

That’s pretty cool. Go Longhorns!

We’re recording this in August and you picked a good time not to be here, you know?

I can imagine. I think when I graduated they actually were at the Rose Bowl (not that they actually won it), so I’m happy I was there at the right time.

There you go. So I always like to start with the simple question: what is artificial intelligence, or if you prefer, what is intelligence?

Let’s go with artificial intelligence because I don’t think I’m quite qualified to answer what is intelligence overall.

Let’s say the classical definition of artificial intelligence, what can I say, it’s more ‘textbook,’ right?

So this is where the whole field started off a few decades ago in fact, where the goal was to create intelligence in machines, which was comparable to human-level intelligence, and what does that mean?

What do we think when we say someone is intelligent, right?

So it is the ability for us to reason, to be able to extrapolate to situations that we hadn’t been in before and to come out of it relatively unscathed in some sense.

So, the ability to reason, to make sort of facile decisions, to be able to solve a longer-term problem than just the task at hand, and to gather the relevant information to do this, is what I think is the standard of artificial intelligence.


So that’s a really high bar because a simple definition is: ‘systems that respond to their environments.’

So let me take it a step down; that's, to your point, my high bar.

Today the way artificial intelligence is being used overall in media and maybe in some portions of even the community is the ability to perform really well at certain specific tasks at a level that is comparable to what a human would do.

Now nobody questions, ‘is it really human-like?’

Because it’s within a really constrained environment within the space of the data the thing has been trained on. If you look at some of the tests that they’ve done, it’s in a very narrow domain.

Now ‘do we all agree that that is artificial intelligence?’ becomes an interesting debate, but I want to say that the mainstream has focused a lot more on intelligence in very narrow specific tasks, but I wouldn’t call it artificial intelligence.

All right, so your particular area of study is a technique used in artificial intelligence, called machine learning. And machine learning, simply put, is: you take a lot of data about the past and you study it and you make predictions about the future, is that a fair oversimplification?

A fair oversimplification, yes.
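That oversimplification can be made concrete with a toy example. The sketch below (the data and the nearest-neighbour rule are illustrative choices, not anything from the episode) classifies a new observation purely by its resemblance to past ones:

```python
# A minimal illustration of "learn from the past, predict the future":
# a one-nearest-neighbour classifier built from scratch.

def nearest_neighbor_predict(past_examples, past_labels, new_example):
    """Predict the label of new_example by copying the label of the
    closest past example (Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best_index = min(range(len(past_examples)),
                     key=lambda i: distance(past_examples[i], new_example))
    return past_labels[best_index]

# "Past" data: made-up (weight_kg, ear_length_cm) pairs, labelled by animal.
past = [(4.0, 7.0), (5.0, 6.5), (30.0, 12.0), (35.0, 11.0)]
labels = ["cat", "cat", "dog", "dog"]

# A new, unseen animal is classified by its resemblance to the past.
print(nearest_neighbor_predict(past, labels, (4.5, 6.8)))  # prints "cat"
```

The prediction is only as good as the assumption that the future resembles the past, which is exactly the limitation discussed next.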

And so the philosophic implication is that the future behaves like the past, and in a lot of cases that's true: what a cat looks like tomorrow is probably what a cat looked like yesterday. But what a cellphone looks like tomorrow is not what a cellphone looked like 10 years ago, right? And how chess is played tomorrow is the same as it's been played for 400 years, so that's a really good application of it. What are some good applications of AI, and what things aren't so good?

Okay, the great question here again, so, I think you’ve sort of nailed the whole answer.

So imagine that your goal is somewhat fixed, right?

And we as humans know what that goal needs to be.

So imagine that all you needed the system to do was to recognize cats in a picture—and this is a very, very well defined problem—maybe we mess up how we train the model.

Or we are not careful about how it can be adapted, and so forth, but within the scope of these sorts of problems the goal is really well defined.

Chess, for all its beauty, is still a constrained problem, right?

There is a fixed space that you can explore and maybe I’m over trivializing this, but in some sense, it’s a constrained problem.

It’s here that we have made lots of good progress and at least the algorithms that we are inventing are enabling us to make lots of progress in that sphere.

Now what it is not good at is to be able to do a longer-term task.

So imagine that there was this interesting problem that someone was talking to me about… If you wanted to graduate from a school with a good GPA or if you wanted to land a specific good job, now what is the set of courses that you would have to take, how would you have to perform, and so on and so forth?

But the kind of data that we had for solving this particular problem through an AI system was so trivialized that the sorts of things that came out of it were almost laughable.

So in terms of a long-term projection where the path is pretty fuzzy, it really comes down to human experience: having to talk to bunches of people, constantly learning and readjusting, and so on and so forth.

For these sorts of longer-term goals, in which the end state is not as clear, we have a long, long way to go.

Listen to this one-hour episode or read the full transcript at


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


Voices in AI – Episode 67: A Conversation with Amir Khosrowshahi

About this Episode

Episode 67 of Voices in AI features host Byron Reese and Amir Khosrowshahi talking about the explainability, privacy, and other implications of using AI for business.

Amir Khosrowshahi is VP and CTO at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a Ph.D. in Computational Neuroscience from UC Berkeley.

Visit to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm.

I’m Byron Reese. Today I’m so excited that my guest is Amir Khosrowshahi. He is a VP and the CTO of AI products over at Intel.

He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a Ph.D. in Computational Neuroscience from UC Berkeley.

Welcome to the show, Amir.

Amir Khosrowshahi: Thank you, thanks for having me.

I can’t imagine someone better suited to talking about the kinds of things we talk about on this show, because you’ve got a Ph.D. in Computational Neuroscience, so, start off by just telling us what is Computational Neuroscience?

So neuroscience is a field, the study of the brain, and it is mostly a biologically minded field. Of course, there are aspects of the brain that are computational, and there are aspects of studying the brain that involve opening up the skull and peering inside and sticking needles into areas and doing all sorts of different kinds of experiments.

Computational neuroscience is a combination of these two threads, the thread that there [are] computer science statistics and machine learning and mathematical aspects to intelligence, and then there’s biology, where you are making an attempt to map equations from machine learning to what is actually going on in the brain.

I have a theory that I may not be qualified to have and you certainly are, and I would love to know your thoughts on it.

I think it’s very interesting that people are really good at getting trained with a sample size of one, like draw a made-up alien you’ve never seen before and then I can show you a series of photographs, and even if that alien’s upside down, underwater, behind a tree, whatever, you can spot it.

Further, I think it’s very interesting that people are so good at transfer learning, I could give you two objects like a trout swimming in a river, and that same trout in a jar of formaldehyde in a laboratory and I could ask you a series of questions: Do they weigh the same, are they the same color, do they smell the same, are they the same temperature?

And you would instantly know. And yet, likewise, if you were to ask me if hitting your thumb with a hammer hurts, I would say “yes,” and then somebody would say, “Well, have you ever done it?”

And I’m like, “yeah,” and they would say, “when?” And it’s like, I don’t really remember, I know I have.

Somehow we take data and throw it out, and remember metadata, and yet the fact that a hammer hurts your thumb is stored in some little part of your brain, such that you could cut it out and somehow forget it.

And so when I think of all of those things that seem so different than computers to me, I kind of have a sense that human intelligence doesn’t really tell us anything about how to build artificial intelligence. What do you say?

Okay, those are very deep questions, and actually, each one of those items is a separate thread in the field of machine learning and artificial intelligence.

There are lots of people working on these things. So the first thing you mentioned, I think, was one-shot learning, where you see something that's novel.

From the first time you see it, you recognize it as something that’s singular and you retain that knowledge to then identify if it occurs again—such as for a child it would be like a chair, for you, it’s potentially an alien.

So, how do you learn from single examples?


That's an open problem in machine learning and is very actively studied, because you want to have a parsimonious strategy for learning. Consider the current ways we're doing learning in, for example, online services that sort photos and recognize objects in images.

It’s very computationally wasteful and it’s actually wasteful in the usage of data.

You have to see many examples of chairs to have an understanding of a chair, and it's actually not clear if you have an understanding of a chair, because the models that we have today for chairs do make mistakes.

When you peer into where the mistakes were made, it seems like the machine learning model doesn't actually have an understanding of a chair; it doesn't have a semantic understanding of a scene, or of grammar, or of languages that are translated, and we're noticing these deficiencies and trying to address them.

You mentioned some other things, such as how do you transfer knowledge from one domain to the next.

Humans are very good at generalizing.

We see an example of something in one context, and it’s amazing that you can extrapolate or transfer it to a completely different context.

That’s also something that we’re working on quite actively, and we have some initial success in that we can take a statistical model that was trained on one set of data, and then we can then apply it to another set of data by using that previous experience as a warm start, and then moving away from that old domain to the new domain.
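The warm-start idea can be illustrated with a deliberately tiny model. In the sketch below, the one-parameter model, data, and learning rate are all invented for illustration: a weight fitted on a source dataset gives a head start when training shifts to a related target dataset.

```python
# A toy sketch of "warm start" transfer: fit y = w * x on a source
# dataset, then continue training on a target dataset starting from
# the learned w instead of from scratch.

def train(xs, ys, w=0.0, lr=0.01, steps=200):
    """Gradient descent on mean squared error for the model y = w * x."""
    n = len(xs)
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

# Source domain: roughly y = 2x.
source_x, source_y = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
w_source = train(source_x, source_y)

# Target domain: roughly y = 2.5x, with only a few training steps allowed.
target_x, target_y = [1.0, 2.0], [2.5, 5.1]
w_cold = train(target_x, target_y, w=0.0, steps=5)       # from scratch
w_warm = train(target_x, target_y, w=w_source, steps=5)  # warm start

print(w_cold, w_warm)
```

After the same small number of target-domain steps, the warm-started weight sits much closer to the target relationship than the cold-started one, which is the essence of using previous experience as a warm start.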

This is also possible to do in continuous time.

Many of the things we experience in the real world are not stationary; that is, their statistics change with time.

We need to have models that can also change.

For a human it's easy to do that; a human is very good at handling non-stationary statistics, so we need to build that into our models and be cognizant of it, and we're working on it. And then [for] other things you mentioned: that intuition is very difficult.

It's potentially one of the most difficult things for us to translate from human intelligence to machines. And remembering things, having a kind of hazy idea of having done something bad to yourself with a hammer, I'm not actually sure where that falls into the various subdomains of machine learning.

Listen to this one-hour episode or read the full transcript at


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


Voices in AI – Episode 73: A Conversation with Konstantinos Karachalios

Today’s leading minds talk AI with host Byron Reese

About this Episode

Episode 73 of Voices in AI features host Byron Reese and Konstantinos Karachalios discussing what it means to be human, how technology has changed us in the far and recent past, and how AI could shape our future.

Konstantinos holds a Ph.D. in Engineering and Physics from the University of Stuttgart and is the managing director of the IEEE Standards Association.

Visit to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Konstantinos Karachalios. He is the Managing Director at the IEEE Standards Association, and he holds a Ph.D. in Engineering and Physics from the University of Stuttgart. Welcome to the show.

Konstantinos Karachalios: Thank you for inviting me.


So we were just chatting before the show about ‘what does artificial intelligence mean to you?’ You asked me that and it’s interesting because that’s usually my first question: What is artificial intelligence, why is it artificial, and feel free to talk about what intelligence is.

Yes, and first of all we see really a kind of mega-wave around the ‘so-called’ artificial intelligence—it started two years ago.

There seems to be a hype around it, and it would be good to distinguish what is marketing, what is real, and what is propaganda; what are dreams, what are nightmares, and so on.

I’m a systems engineer, so I prefer to take a systems approach, and I prefer to talk about, let’s say, ‘intelligent systems,’ which can be autonomous or not, and so on.

That is a compromise, because the big question is: ‘what is intelligence?’ Nobody knows what intelligence is, and the definitions vary very widely.


I myself try to understand what human intelligence is, at least, or what some expressions of human intelligence are, and I gave a certain answer to this question when I was invited to testify before the House of Lords.

Just to make it brief, I'm not a supporter of the hype around artificial intelligence, and I don't even support the term itself.

I find it obfuscates more than it reveals, and it also takes away from human agency, so I think we need to re-frame this dialogue.

So, I can make a critique of this and also I have a certain proposal.


Well, start with your critique. If you think the term is either meaningless or bad, why? And what are you proposing as an alternative way of thinking?

Very briefly because we can talk really for one or two hours about this: My critique is that the whole of this terminology is associated also with a perception of humans and of our intelligence, which is quite mechanical.

That means there is a whole school of thinking, there are many supporters there, who believe that humans are just better data processing machines.

Well let’s explore that because I think that is the crux of the issue, so you believe that humans are not machines?

Apparently not. It's not only that we're not machines; evidently we're biological, and machines are mechanical, although now the boundary has blurred because of biological machines and so on.


You certainly know the thought experiment that says, if you take what a neuron does and build an artificial one and then you put enough of them together, you could eventually build something that functions like the brain. Then wouldn’t it have a mind and wouldn’t it be intelligent, and isn’t that what the human brain initiative in Europe is trying to do?

This is weird, all this you have said starts with a reductionist assumption about the human—that our brain is just a very good computer.

It ignores really the sources of our intelligence, which are really not all in our brain.

Our intelligence has really several other sources.

We cannot reduce it to just the synapses in the neurons and so on, and of course, nobody can prove this or another thing.

I just want to make clear here that the reductionist assumption about humanity is also a religious approach to humanity, but a reductionist religion.

And the problem is that people who support this believe it is scientific, and I do not accept that.

This is really a religion, and a reductionist one and this has consequences about how we treat humans, and this is serious.

So if we continue propagating a language that reduces humanity, it will have political and social consequences. I think we should resist this, and I think the best expression of this is an essay by Joichi Ito titled “Resist Reduction.”

And I would really suggest that people read this essay because it explains a lot that I’m not able to explain here because of time.


So you're maintaining that if you adopt this, what you're calling a “religious view,” a “reductionist view” of humanity, then in a way that can go on to undermine human rights and the idea that there is something different about humans, something beyond the purely mechanistic.

For instance, I was in an AI conference of a UN organization that brought all other UN organizations with technology together.

It was two years ago, and there they were celebrating a humanoid, which was pretending to be a human.

The people were celebrating this and somebody there asked this question to the inventor of this thing: “What do you intend to do with this?”

And this person spoke publicly for five minutes and could not answer the question and then he said, “You know, I think we’re doing it because if we don’t do it, others were going to do it, it is better we are the first.”


I find this a very cynical approach, a very dangerous one, and nihilistic.

These people with this mentality, we celebrate them as heroes. I think this is too much.

We should stop doing this; we should resist this mentality and this ideology.

I believe that if we make machines citizens, and treat our citizens like machines, then we're not going very far as humanity.

I think this is a very dangerous path.


Listen to this one-hour episode or read the full transcript at

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


Machine Learning

Speed and Scale: Advanced Analytics with Machine Learning

Artificial Intelligence and Machine Learning (ML) can turn massive amounts of data into deep insights that drive revenue and decrease costs. But ML’s not an island – in fact, it’s carried out most successfully when paired with advanced analytics.

To facilitate the best analytics work, enterprises need the right platforms and tools to load data, prepare it, ensure high quality and integrate with corporate data governance processes.

How can you get all that working harmoniously, especially in the cloud?

It takes the right tools, strategy, and workflow, but it can be done.

Join us for this free 1-hour webinar, from GigaOm Research, to find out how.

The webinar features GigaOm analyst Andrew Brust, Deepsha Menghani, Product Marketing Manager at Microsoft, and Mark Balkenende, Director of Technical Product Marketing at Talend.

In this 1-hour webinar, you will learn how:

  • Data analytics, data quality, and data governance can be tightly intertwined with data science
  • Technologies like Apache Spark can serve both your data engineering and machine learning needs
  • Cloud services can be combined with open-source software and analytics ecosystem tools for maximum benefit

Register now to join GigaOm Research, Microsoft, and Talend for this free expert webinar.

Who Should Attend:

  • CIOs
  • CTOs
  • Chief Data Officers
  • Data Scientists
  • Data Engineers
  • Data Stewards
  • Analytics professionals



Voices in AI – Episode 72: A Conversation with Irving Wladawsky-Berger

About this Episode

Episode 72 of Voices in AI features host Byron Reese and Irving Wladawsky-Berger discussing the complexity of the human brain, the possibility of AGI and its origins, the implications of AI in weapons, and where else AI has taken us and could take us.

Irving has a Ph.D. in Physics from the University of Chicago, is a research affiliate with the MIT Sloan School of Management, a guest columnist for the Wall Street Journal and CIO Journal, an adjunct professor at the Imperial College of London, and a fellow of the Center for Global Enterprise.

Visit to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese.

Today our guest is Irving Wladawsky-Berger.

He is a bunch of things. He is a research affiliate with the MIT Sloan School of Management.

He is a guest columnist for the Wall Street Journal and CIO Journal.

He is an adjunct professor at the Imperial College of London.

He is a fellow for the Center for Global Enterprise, and I think a whole lot more things. Welcome to the show, Irving.

Irving Wladawsky-Berger: Byron it’s a pleasure to be here with you.

So, that’s a lot of things you do. What do you spend most of your time doing?

Well, I spend most of my time these days either in MIT-oriented activities or writing my weekly columns, [which] take quite a bit of time.

So, those two are a combination, and then, of course, doing activities like this – talking to you about AI and related topics.

So, you have an M.S. and a Ph.D. in Physics from the University of Chicago.

Tell me… how does artificial intelligence play into the stuff you do on a regular basis?

Well, first of all, I got my Ph.D. in Physics in Chicago in 1970.

I then joined IBM research in Computer Science.

I switched fields from Physics to Computer Science because as I was getting my degree in the ‘60s, I spent most of my time computing.


And then you spent 37 years at IBM, right?

Yeah, then I spent 37 years at IBM working full time, and another three and a half years as a consultant.

So, I joined IBM research in 1970, and then about four years later my first management job was to organize an AI group.

Now, Byron, AI in 1974 was very very very different from AI in 2018.

I’m sure you’re familiar with the whole history of AI.

If not, I can just briefly tell you about its evolution.

I’ve seen it, having been involved with it in one way or another for all these years.

So, back then did you ever have occasion to meet [John] McCarthy or any of the people at the Dartmouth [Summer Research Project]?

Yeah, yeah.


So, tell me about that. Tell me about the early early days in AI, before we jump into today.

I knew people at the MIT AI lab… Marvin Minsky, McCarthy, and there were a number of other people.

You know, what’s interesting is at the time the approach to AI was to try to program intelligence, writing it in Lisp, which John McCarthy invented as a special programming language; writing in rules-based languages; writing in Prolog.

At the time – remember this was years ago – they all thought that you could get AI done that way and it was just a matter of time before computers got fast enough for this to work.

Clearly, that approach toward artificial intelligence didn’t work at all.

You couldn’t program something like intelligence when we didn’t understand at all how it worked…

Well, to pause right there for just a second…

The reason they believed that—and it was a reasonable assumption—is that they looked at things like Isaac Newton coming up with three laws that covered planetary motion, and Maxwell, and different physical systems that were governed by only two or three simple laws, and they hoped intelligence was too.

Do you think there’s any aspect of intelligence that’s really simple and we just haven’t stumbled across it, that you just iterate something over and over again?

Any aspect of intelligence that’s like that?

I don’t think so, and in fact, my analogy… and I’m glad you brought up Isaac Newton.

This goes back to physics, which is what I got my degrees in.

This is like comparing classical mechanics, which is deterministic.

You know, you can tell precisely, based on classical mechanics, the motion of planets.

Or, if you throw a baseball, where it is going to go, etc.

And as we know, classical mechanics does not work at the atomic and subatomic levels.

We have something called quantum mechanics, and in quantum mechanics, nothing is deterministic.

You can only tell what things are going to do based on something called a wave function, which gives you a probability.

I really believe that AI is like that: it is so complicated, so emergent, so chaotic, and so on, that the way to deal with AI is in a more probabilistic way.

That has worked extremely well, and the previous approach where we try to write things down in a sort of deterministic way like classical mechanics, that just didn’t work.

Byron, imagine if I asked you to write down specifically how you learned to ride a bicycle.

I bet you won’t be able to do it.

I mean, you can write a poem about it.

But if I say, “No, no, I want a computer program that tells me precisely…”

If I say, “Byron I know you know how to recognize a cat.

Tell me how you do it.”

I don’t think you’ll be able to tell me, and that’s why that approach didn’t work.

And then, lo and behold, in the ‘90s we discovered that there was a whole different approach to AI, based on getting lots and lots of data into very fast computers, analyzing the data, and then something like intelligence starts coming out of all that.

I don’t know if it’s intelligence, but it doesn’t matter.

I really think that for a lot of people the real point where that hit home was in the late ‘90s, when IBM's Deep Blue supercomputer beat Garry Kasparov in a very famous [chess] match.

I don’t know, Byron, if you remember that.

Listen to this one-hour episode or read the full transcript at


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


AI (ML/DL) Operations

AI Operations: It Can’t Be Just an Afterthought

In the worlds of machine learning (ML) and deep learning (DL), operations and deployment are subjects that often fall by the wayside.

And the split reality between everyday on-premises Artificial Intelligence (AI) work and the industry’s fascination with more aspirational cloud-based AI work only makes matters worse.

For the adoption of AI/ML/DL to be actionable for Enterprise customers, the full spectrum of on-premises and cloud-based work needs to be accommodated.

Deployment and operations across environments need to be consistent.

On-premises provisioning and deployment should feel cloud-like in ease-of-use, and hybrid scenarios need to be handled robustly.

Installation and management of frameworks and models need to be handled too.

Join us for this free 1-hour webinar, from GigaOm Research, to explore these matters.

The webinar features GigaOm analyst Andrew Brust and special guests Adnan Khaleel from Dell EMC and Professor Sambit Bhattacharya of Fayetteville State University, a customer of Bright Computing.

This webinar is sponsored by Dell EMC, NVIDIA, and Bright Computing.

In this 1-hour webinar, attendees discover:

  • How cross-premises AI deployment is both necessary and achievable
  • What “AI Ops” looks like today, and where it’s going
  • The sweet spot of ML/DL training workloads between the data center and cloud

Register now to join GigaOm Research and Dell EMC for this free expert webinar.

Who Should Attend:

  • CIOs
  • CTOs
  • Chief Data Officers
  • Data Scientists
  • IT/Data Center specialists
  • DevOps professionals


Voices in AI – Episode 71: A Conversation with Paul Daugherty

About this Episode

Episode 71 of Voices in AI features host Byron Reese and Paul Daugherty discussing transfer learning, consciousness, and Paul's book “Human + Machine: Reimagining Work in the Age of AI.” Paul Daugherty holds a degree in computer engineering from the University of Michigan and is currently the Chief Technology and Innovation Officer at Accenture.

Visit to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. Today my guest is Paul Daugherty. He is the Chief Technology and Innovation Officer at Accenture. He holds a computer engineering degree from the University of Michigan. Welcome to the show, Paul.

Paul Daugherty: It’s great to be here, Byron.


Looking at your dates on LinkedIn, it looks like you went to work for Accenture right out of college and that was a quarter of a century or more ago.

Having seen the company grow…

What has that journey been like?

Thanks for dating me. Yeah it’s actually been 32 years, so I guess I’m going on a third of a century, joined Accenture back in 1986, and the company’s evolved in many ways since then.

It’s been an amazing journey because the world has changed so much since then and a lot of what’s fueled the change in the world around us has been what’s happened with technology.

I think [in] 1986 the PC was brand new, and we went from that to networking and client-server and the Internet, cloud computing, mobility, the Internet of Things, artificial intelligence, and the things we're working on today.

So it’s been a really amazing journey fueled by the way the world’s changed, enabled by all this amazing technology.

So let’s talk about that, specifically artificial intelligence.

I always like to get our bearings by asking you to define either artificial intelligence or if you’re really feeling bold, define intelligence.

I'll start with artificial intelligence, which we define as technology that can sense, think, act and learn; that's the way we describe it.

And [it's] systems that can then do that. So sense: like vision in a self-driving car; think: making decisions on what the car does next; act: actually steering the car; and then learn: continuously improving behavior.

So that's the working definition that we use for artificial intelligence, and I describe it more simply to people sometimes as a fundamental technology that has more human-like capability, approximating the things that we're used to thinking only humans can do: speech, vision, predictive capability and some things like that.

So that’s the way I define artificial intelligence.
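The sense, think, act and learn cycle can be pictured as a simple agent loop. The thermostat scenario below is a made-up illustration of that cycle, not Accenture's definition rendered in code:

```python
# A toy agent loop for the sense / think / act / learn cycle.
# The "environment" is an invented thermostat task: sense a temperature,
# decide whether to heat, act on the environment, and learn by adjusting
# an internal setpoint from user feedback.

class SimpleAgent:
    def __init__(self, target=21.0):
        self.target = target  # learned estimate of a comfortable temperature

    def sense(self, environment):
        return environment["temperature"]

    def think(self, temperature):
        return "heat" if temperature < self.target else "idle"

    def act(self, environment, action):
        if action == "heat":
            environment["temperature"] += 1.0

    def learn(self, feedback):
        # Nudge the setpoint toward user feedback.
        if feedback == "too cold":
            self.target += 0.5
        elif feedback == "too warm":
            self.target -= 0.5

env = {"temperature": 18.0}
agent = SimpleAgent()
for _ in range(5):  # a few sense-think-act cycles
    action = agent.think(agent.sense(env))
    agent.act(env, action)
agent.learn("too cold")
print(env["temperature"], agent.target)  # prints: 21.0 21.5
```

The loop heats until the sensed temperature reaches the setpoint, then the learn step shifts the setpoint in response to feedback, so the next run behaves differently: a crude stand-in for "continuously improving behavior."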

Intelligence I would define differently.

Intelligence I would just define more broadly.

I’m not an expert in neuroscience or cognitive science or anything, but I define intelligence generally as the ability to both reason and comprehend and then extrapolate and generalize across many different domains of knowledge.

And that’s what differentiates human intelligence from artificial intelligence, which is something we can get a lot more into.

Because I think the fact that we call this body of work ‘artificial intelligence,’ both the word artificial and the word intelligence, leads to misleading perceptions of what we’re really doing.


So, expand on that a little bit. You said that’s the way you think human intelligence is different from artificial intelligence. Put a little flesh on those bones: in exactly what way do you think it is?

Well, you know the techniques we’re really using today for artificial intelligence, they’re generally from the branch of AI around machine learning, so machine learning, deep learning, neural nets, etc.

And it’s a technology that’s very good at using patterns and recognizing patterns in data to learn from observed behavior, so to speak.

Not necessarily intelligence in a broad sense, it’s an ability to learn from specific inputs.

And you can think about that almost as an idiot savant-like capability.

So yes, I can use that to develop AlphaGo to beat the world’s Go champion, but that same program wouldn’t know how to generalize and play me in tic-tac-toe.

And that ability, the ability to generalize and extrapolate rather than interpolate, is what differentiates human intelligence. The thing that would bridge that gap is artificial general intelligence, which we can get into a little bit, but we’re not at the point of having artificial general intelligence. We’re at a point where artificial intelligence can mimic very specific, very specialized, very narrow human capabilities, but it’s not yet anywhere close to human-level intelligence.

Listen to this one-hour episode or read the full transcript at

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


Voices in AI – Episode 70: A Conversation with Jakob Uszkoreit

About this Episode

Episode 70 of Voices in AI features host Byron Reese and Jakob Uszkoreit discuss machine learning, deep learning, AGI, and what this could mean for the future of humanity.

Jakob has a master’s degree in Computer Science and Mathematics from Technische Universität Berlin.

Jakob has also worked at Google for the past 10 years, currently in deep learning research with Google Brain.

Visit to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today our guest is Jakob Uszkoreit, he is a researcher at Google Brain, and that’s kind of all you have to say at this point. Welcome to the show, Jakob.


Let’s start with my standard question which is: What is artificial intelligence, and what is intelligence if you want to start there, and why is it artificial?


Jakob Uszkoreit: Hi, thanks for having me.

Let’s start with artificial intelligence specifically.

I don’t think I’m necessarily the best person to answer the question of what intelligence is in general, but I think for artificial intelligence, there are possibly two different kinds of ideas that we might be referring to with that phrase.

One is kind of the scientific or the group of directions of scientific research, including things like machine learning, but also other related disciplines that people commonly refer to with the term ‘artificial intelligence.’


But I think there’s this other, maybe more important, use of the phrase that has become much more common in this age of the rise of AI, if you want to call it that, and that is what society interprets the term to mean.

I think largely what society might think when they hear the term artificial intelligence is actually automation, in a very general way. Maybe more specifically, automation where the process of automating [something] requires the machine or machines doing so to make decisions that are highly dynamic in response to their environment, decisions that, in our conceptualization of those processes, require something like human intelligence.

So, I really think it’s actually something that doesn’t necessarily, in the eyes of the public, have that much to do with intelligence, per se.

It’s more the idea of automating things that at least so far, only humans could do, and the hypothesized reason for that is that only humans possess this ephemeral thing of intelligence.



Do you think it’s a problem that you could say a cat food dish that refills itself when it’s empty has a rudimentary AI, and you could say Westworld is populated with AIs, when those things are so vastly different? They’re not even really on a continuum, are they?

General intelligence isn’t just a better narrow intelligence, or is it?


So I think that’s a very interesting question.

Whether basically improving and slowly generalizing or expanding the capabilities of narrow bits of intelligence will eventually get us there; if I had to venture a guess, I would say that’s quite likely, actually.

That said, I’m definitely not the right person to answer that.

I do think that guesses about those aspects of things are today still in the realm of philosophy and extremely hypothetical.



But the one trick that we have gotten good at recently that’s given us things like AlphaZero, is machine learning, right?

And it is itself a very narrow thing.

It basically has one core assumption, which is the future is like the past.

And for many things, it is: what a dog looks like in the future is what a dog looked like yesterday.

But, one has to ask the question, “How much of life is actually like that?”

Do you have an opinion on that?


Yeah, so I think that machine learning is actually evolving rapidly beyond the initial classic idea of basically trying to predict the future from the past, and not even the past as such, but a kind of encapsulated version of the past.

So, it’s basically a snapshot captured in this fixed static data set.

You expose a machine to that, you allow it to learn from that, train on that, whatever you want to call it, and then you evaluate how the resulting model or machine or network does in the wild or on some evaluation tasks and tests that you’ve prepared for it.


It’s evolving from that classic definition towards something that is quite a bit more dynamic, that is starting to incorporate learning in situ, learning kind of “on the job,” learning from very different kinds of supervision, where some of it might be encapsulated by data sets, but some might be given to the machine through somewhat more high-level interactions, maybe even through language.

There are at least a bunch of lines of research attempting that.

Also quite importantly, we’re starting slowly but surely to employ machine learning in ways where the machine’s actions actually have an impact on the world, from which the machine then keeps learning.

I think that that’s actually something [for which] all of these parts are necessary ingredients if we ever want to have narrow bits of intelligence, that maybe have a chance of getting more general.

Maybe then in the more distant future, might even be bolted together into somewhat more general artificial intelligence.

Listen to this one-hour episode or read the full transcript at

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.



This Much I Know: Byron Reese on Conscious Computers and the Future of Humanity

Recently GigaOm publisher and CEO, Byron Reese, sat down for a chat with Seedcamp’s Carlos Espinal on their podcast ‘This Much I Know.’ It’s an illuminating 80-minute conversation about the future of technology, the future of humanity, Star Trek, and much, much more.

You can listen to the podcast at Seedcamp or Soundcloud, or read the full transcript here.


Carlos Espinal: Hi everyone, welcome to ‘This Much I Know,’ the Seedcamp podcast with me, your host Carlos Espinal bringing you the inside story from founders, investors, and leading tech voices.

Tune in to hear from the people who have built businesses and products, scaled globally, failed fantastically, and learned massively.

Welcome, everyone!

On today’s podcast, we have Byron Reese, the author of a new book called The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Not only is Byron an author, but he’s also the CEO of publisher GigaOm, and he’s also been a founder of several high-tech companies, but I won’t steal his thunder by saying every great thing he’s done.

I want to hear from the man himself. So welcome, Byron.


Byron Reese: Thank you so much for having me. I’m so glad to be here.



Excellent. Well, I think I mentioned this before: one of the key things that we like to do in this podcast is to get to the origins of the person; in this case, the origins of the author.

Where did you start your career and what did you study in college?


I grew up on a farm in east Texas, a small farm. When I left high school I went to Rice University, which is in Houston, and I studied Economics and Business, a pretty standard general thing to study. When I graduated, it seemed to me that every generation had something that was ‘it’ at that time, the Zeitgeist of that time, and I knew I wanted to get into technology. I’d always been a tinkerer, I built my first computer, blah, blah, blah, all of that normal kind of nerdy stuff.

But I knew I wanted to get into technology. So, I ended up moving out to the Bay Area in the early 90s, and I worked for a technology company; that one was successful, and we sold it and it was good. I worked for another technology company, got an idea and spun out a company, and raised the financing for that. And we sold that company. Then I started another one, and after 7 hard years, we sold that one to a company that went public, and so forth. So, from my mother’s perspective, I can’t seem to hold a job; but from another view, it’s kind of the thing of our time. We’re in an industry that changes so rapidly; there are always more opportunities coming along, and I find that whole feeling intoxicating.



That’s great. That’s a very illustrious career with that many companies having been built and sold. And now you’re running GigaOm. 

Do you want to share a little bit for people who may not be as familiar with GigaOm and what it is and what you do?


Certainly. And I hasten to add that I’ve been fortunate never to have had a failure in any of my companies, but they’ve all had hard times. They’ve all had these periods of, ‘Boy, I don’t know how we’re going to pull this through,’ and they always end up [okay]. I think tenacity is a great trait in the startup world because startups are all very hard. And I don’t feel like I’ve figured it all out or anything; every one is a struggle.

GigaOm is a technology research company. So, if you’re familiar with companies like Forrester or Gartner or those kinds of companies, what we are is a company that tries to help enterprises, help businesses deal with all of the rapidly changing technology that happens. So, you can imagine if you’re a CIO of a large company and there are so many technologies, and it all moves so quickly and how does anybody keep up with all of that? And so, what we have are a bunch of analysts who are each subject matter experts in some area, and we produce reports that try to orient somebody in this world we’re in and say ‘These kinds of solutions work here, and these work there’ and so forth.

And that’s GigaOm’s mission. It’s a big, big challenge because you can never rest. Almost every day I find big new companies that I’ve never even heard of, and I think, ‘How did I miss this?’ and you have to dive into that, so it’s a relentless, nonstop effort to stay current on these technologies.




On that note, one of the things that describes you on your LinkedIn page is the word ‘futurist.’

Do you want to walk us through what that means as a label, and how a futurist really looks at industries and how they change?


Well, it’s a lowercase ‘f’ futurist; anybody who seriously thinks about how the future might unfold is, to one degree or another, a futurist. I think what makes it into a discipline is trying to understand how change itself happens, how technology drives change, and to do that you almost by definition have to be a historian as well. And so, I think to be a futurist is to be deliberate and reflective about how it is that we came from where we were, in savagery and low tech and all of that, to the world we are in today, and whether you can in fact look forward.

The interesting thing about the future is that it always progresses very neatly and linearly until it doesn’t, until something comes along so profound that it changes everything. That’s why you hear things like the 19th-century prediction that, by some year in the future, London would be unnavigable because of all the horse manure from the number of horses needed to support the population, and that maybe would have happened, except you had the car. So, everything’s a straight line until one day it isn’t, and I think the challenge of the futurist is to figure out, ‘When does it move in a line and when is it a hockey stick?’



So, on that definition of line versus hockey stick, your background as having been CEO of various companies, a couple of which were media-centric, what is it that drew you to artificial intelligence specifically to futurize on?

Well, that is a fantastic question. Artificial intelligence is, first of all, a technology whose impact people widely differ on, and that’s usually a marker that something may be going on there.

There are people who think it’s just oversold hype.

It’s just data mining, big data renamed.

It’s just the tool for raising money better.

Then there are people who say this is going to be the end of humanity, as we know it.

And philosophically the idea that a machine can think, maybe, is a fantastically interesting one, because we know that when you can teach a machine to do something, you can usually double and double and double and double and double its ability to do that over time.

And if you could ever get it to reason, and then it could double and double and double and double, well that could potentially be very interesting.

Humans can only evolve at the speed of life, it takes generations, whereas computers are able to evolve kind of at the speed of light. And so, if a machine can think, a question famously posed by Alan Turing, then that could potentially be a game-changer. Likewise, I have a similar fascination with robots, because a robot is a machine that can act, that can move and interact physically with the world. And I got to thinking: what is a human in a world where machines can think better and act better? What are we? What is uniquely human at that point?

And so, when you start asking those kinds of questions about technology, it gets very interesting. You can take something like air conditioning and say, wow, think of the impact that had. It meant that in the evenings, in warm areas, people don’t go out on their front porch anymore; they close the house up and air-condition it, and therefore they have less interaction with their neighbors. You can take a technology as simple as that and see that it had all these ripples throughout the world.

The discovery of the New World effectively ended the Italian Renaissance because it turned the focus of Europe in a whole different direction. So, when those sorts of things have had those kinds of ripples through history, you can only imagine: what if a machine could think? That’s a big deal. Twenty-five years ago, we made the first browser, the Mosaic browser, and if somebody with an enormous amount of foresight had said to you, in 25 years, 2 billion people are going to be using this, what do you think’s going to happen?

If you had an enormous amount of foresight, you might’ve said, well, the Yellow Pages are going to have it rough, and the newspapers, and travel agents, and stockbrokers are going to have a hard time, and you would have been right about everything, but nobody would have guessed there would be Google, or eBay, or Etsy, or Airbnb, or Amazon, or $25 trillion worth of a million new companies. And all that was, was computers being able to talk to each other. Imagine if they could think. That is a big question.



You’re right, and I think that there is… I was joking and I said ‘Tinder’ in the background just because that’s a social transformation, not even a utility, but rather a shift in the social expectation of where certain things happen. So, you’re right, and we’re going to get into some of those [new platforms] as we review your book. In order to do that, let’s go through the table of contents. For those of you who don’t have the book yet, because hopefully you will after this chat: the book is broken up into five parts, and in some ways these parts are arguably chronological in their stage of development.

The first one I would label as the historical; it’s broken out into the four ages that we’ve had as humans: the first age being language and fire, the second agriculture and cities, the third writing and wheels, and the fourth being the one we’re currently in, which is robots and AI. And we’re left with three questions: what is the composition of the universe, what are we, and what is the self? Those are big, deep philosophical questions that will manifest themselves in the book a little bit later as we get into consciousness.

Part two of the book is about narrow AI and robots. Arguably, this is where we are today, and Seedcamp, as an investor in AI companies, has broadly invested in narrow AI through different companies. This is, I think, the cutting edge of AI as far as we understand it. Part three of the book covers artificial general intelligence, which is everything we’ve always wanted to see, and which science fiction represents quite well, everything from that movie AI, with the little robot boy, to Bicentennial Man with Robin Williams, and the ethical implications of that.

Then part four of the book is computer consciousness, which is a huge debate, because as Byron articulates in the book, there’s a whole debate on what consciousness is, and there’s a distinction between monists and dualists in how they experience consciousness and how they define it. Hopefully, Byron will walk us through that in more detail. And lastly, the road from here: the future as far as we can see it, the futurist portion of the book. I mean, parts three, four, and five are all futurist portions of the book, but this one is where I think, Byron, you go to the ‘nth’ degree possible, with a few exceptions. So maybe we can kick off with your commentary on why you have broken the book up into these five parts.


Well you’re right that they’re chronological, and you may have noticed each one opens with what you could call a parable, and the parables themselves are chronological as well. The first one is about Prometheus and it’s about technology, and about how the technology changed and all the rest. And like you said, that’s where you want to kind of lay the groundwork of the last 100,000 years and that’s why it’s named something like ‘the road to here,’ it’s like how we got to where we are today.

And then I think there are three big questions; everywhere I go I hear one variant of them or another. The first is around narrow AI, and like you said, it’s a real technology that’s going to impact us: what’s it going to do to jobs, what’s it going to do in warfare, what will it do to income? All of these things we are certainly going to deal with. And then we’re unfortunate with the term ‘artificial intelligence,’ because it can mean many different things. It can be narrow AI, a Nest thermostat that can adjust the temperature, but it can also be Commander Data of Star Trek, or C-3PO out of Star Wars, something as versatile as a human. Unfortunately those two very different technologies share the same name, so [AGI] has to be drawn out on its own, to ask, “Is this very different thing that shares the same name likely? Possible? What are its implications?”

Interestingly, the people who believe we’re going to build [an AGI] vary immensely on when: some say as soon as five years, and some say as far away as five hundred. It’s very telling that they have such wide-ranging viewpoints on when we’ll get it. And then for people who believe we’re going to build one, the question becomes, ‘Well, is it alive? Can it feel pain? Does it experience the world? And therefore, on that basis, does it have rights?’ And if it does, does that mean you can no longer order it to plunge your toilet when it gets stopped up? Because then all you’ve made is a sentient being that you control, and is that possible?

And why is it that we don’t even know this? The only real thing any of us knows is our own consciousness, and we don’t even know where that comes from. And then finally, the book starts 100,000 years ago, and I wanted to look 100,000 years out, or something like that. I wanted to start thinking about, no matter how these other issues shake out, what is the long trajectory of the human race? How did we get here, and what does that tell us about where we’re going? Is human history a story of things getting better or things getting worse, and how do they get better or worse, and all the rest. So that was the structure I made for the book before I wrote a single word.



Yeah, and it makes sense. Maybe for the sake of not stealing the thunder of those that want to read it, we’ll skip a few of those, but before we go straight into questions about the book itself, maybe you can explain who you want this book to be read by. Who is the customer?

There are two customers for the book. The first is people who are in the orbit of technology one way or another; it’s their job or their day-to-day, and these questions are things they deal with and think about constantly. The value proposition of the book is that it never actually tells you what I think on any of these issues. Now, let me clarify that ever so slightly, because the book isn’t just another guy with another opinion telling you what I think is going to happen. That isn’t what I was writing it for at all.

What I was really intrigued by is how people have so many different views on what’s going to happen. Like with the jobs question, which I’m sure we’ll come to: are we going to have universal unemployment, or are we going to have too few humans? These are very different outcomes, held by very technically minded, informed people. So what I’ve tried to write is a guidebook that says: I will help you get to the bottom of all the assumptions underlying these opinions, and do so in a way that lets you take your own values and beliefs, project them onto these issues, and have a lot of clarity. It’s a book about how to get organized and understand why the debate about these things exists.

And then the second group are people who, just see headlines every now and then where Elon Musk says, “Hey, I hope we’re not just the boot loaders for the AI, but it seems to be the case,” or “There’s very little chance we’re going to survive this.” And Stephen Hawking would say, “This may be the last invention we’re permitted to make.” Bill Gates says he’s worried about AI as well. And the people who see these headlines, they’re bound to think, “Wow, if Bill Gates and Elon Musk and Stephen Hawking are worried about this, then I guess I should be worried as well.” Just on the basis of that, there’s a lot of fear and angst about these technologies.

The book actually isn’t about technology. It’s about what you believe and what that means for your beliefs about technology. And so, after reading the book, you may still be afraid of AI, you may not, but you will be able to say, ‘I know why Elon Musk, or whoever, thinks what they think. It isn’t that they know something I don’t know; they don’t have some special knowledge I don’t have. It’s that they believe something, something very specific about what people are and what the brain is. They have a certain view of the world as completely mechanistic, and all these other things.’ You may agree with them, you may not, but I tried to get at all of the assumptions that live underneath those headlines you see. So why would Stephen Hawking say that? Well, there are certain assumptions you would have to hold to come to that same conclusion.



Do you believe that’s the main reason very intelligent people disagree with respect to how optimistic they are about what artificial intelligence will do? You mentioned Elon Musk, who is pretty pessimistic about what AI might do, whereas there are others, like Mark Zuckerberg of Facebook, who are pretty optimistic, comparatively speaking. Do you think it’s this different account of what we are that explains the difference?

Absolutely. Along with the basic rules that govern the universe, and what the self is: what is that voice you hear in your head?



The three big questions.

Exactly. I think the answers to all these questions boil down to those three questions, which, as I pointed out, are very old questions. They go back as far as we have writing, and presumably therefore they go back before that, way beyond that.



So we’ll try to answer some of those questions and maybe I can prod you. I know that you’ve mentioned in the past that you’re not necessarily expressing your specific views, you’re just laying out the groundwork for people to have a debate, but maybe we can tease some of your opinions.


I make no effort to hide them. I have beliefs about all those questions as well, and I’m happy to share them, but the reason they don’t have a place in the book is: it doesn’t matter whether I think I’m a machine or not. Who cares whether I think I’m a machine? The reader already has an opinion of whether a human being is a machine. The fact that I’m just one more person who says ‘yay’ or ‘nay,’ that doesn’t have any bearing on the book.



True. Although, in all fairness, you are a highly qualified person to give an opinion.

I know, but to your point, if Elon Musk says one thing and Mark Zuckerberg says another, and they’re diametrically opposed, they are both eminently qualified to have an opinion and so these people who are eminently qualified to have opinions have no consensus, and that means something.



That does mean something. So, one thing I would like to note about the general spirit of your book is that I generally felt it was built from a position of optimism. Even toward the very end of the book, toward the 100,000 years in the future, there was always this underlying tone of: we will be better off because of this entire revolution, no matter how it plays out. And I think maybe I can tease out of you the fact that you are telegraphing your view on ‘what are we?’ Effectively, are we a benevolent race in a benevolent existence, or are we something more destructive in nature? So, I don’t know if you would agree with that statement about the spirit of the book or whether…


Absolutely. I am unequivocally, undeniably optimistic about the future, for a very simple reason: there was a time in the past, maybe 70,000 years ago, when humans were down to something like maybe 1,000 breeding pairs. We were an endangered species; we were one epidemic, one famine away from total annihilation, and somehow we got past that. Then 10,000 years ago we got agriculture and learned to regularly produce food, but it took 90 percent of our people for 10,000 years to make our food.

But then we learned a trick, and the trick is technology, because what technology does is multiply what you are able to do. And what we saw is that all of a sudden it didn’t take 90 percent, then 80 percent, 70, 60, all the way down, in the West, to 2 percent. And furthermore, we learned all of these other tricks we could do with technology. It’s almost magic: it multiplies human ability. And we know of no upward limit on what technology can do, and therefore there is no end to how it can multiply what we can do.

And so, one has to ask the question, “Are we, on balance, going to use that for good or ill?” And the answer, obviously, is for good. I know maybe it doesn’t seem obvious if you caught the news this morning, but the simple fact of the matter is that by any standard you choose today, life is better than it was in the past, by that same standard, anywhere in the world. And so we have an unending story of 10,000 years of human progress.

And what has marred humanity for the longest time is scarcity. There was never enough good stuff for everybody: not enough food, not enough medicine, not enough education, not enough leisure, and technology lets us overcome scarcity. So I think if you keep at the core that, on balance, there have been more people who wanted to build than destroy, and we know that, because we have been building for 10,000 years, then on net we use technology for good, always, without fail.



I’d be interested to know the limits to your optimism there. Is your optimism probabilistic? Do you assign, say a 90 percent chance to the idea that technology and AI will be on balance, good for humans? Or do you think it’s pretty precarious, there’s maybe a 10 percent chance, 20 percent chance that that might be a point where if we fail to institute the right sort of arrangements, it might be bad? How would you sort of describe your optimism in that sense?


I find it hard to find historic cases where technology came along that magnified what people were able to do and that was bad for us. If in fact, artificial intelligence makes everybody effectively smarter, it’s really hard to spin that into a bad thing. If you think that’s a bad thing, then one would advocate that maybe it would be great if tomorrow everybody woke up with 10 fewer IQ points. I can’t construct that in my mind.

And what artificial intelligence is, is a collective memory of the planet. We take data from all these people’s life experiences and we learn from that data, and so to say that’s somehow going to end up badly is to say ignorance is better than knowledge. It’s to say that now that we have a collective memory of the planet, things are going to get worse. If you believe that, then it would be great if everybody forgot everything they know tomorrow. And so, to me, the antithetical position, that somehow making everybody smarter, remembering our mistakes better, and all of these other things can lead to a bad result… I shall politely say, is unproven in the extreme.

You see, I believe that people have evolved to be, by default, extremely cautious. Somebody said it’s much better to mistake a rock for a bear and run away from it than it is to mistake a bear for a rock and just stand there. So we are a skittish people, and our skittishness has served us well. But it means that when you’re born with some cognitive bias, and I think we’re born with one of fear, it does one well to be aware of it and to say, “I know I’m born this way. I know that for 10,000 years things have gotten better, but tomorrow they might just be worse.” We come by that honestly; it served us well in the past, but that doesn’t mean it’s not wrong.



All right, well if we take that and use that as a sort of a veneer for the rest of the conversation, let’s move into the narrow AI portion of your book. We can go into the whole variance of whether robots are going to take all of our jobs, some of our jobs, or none of our jobs and we can kind of explore that.

I know that you’ve covered that in other interviews, and one of the things that maybe we also should cover is how we train our AI systems in this narrow era. How we can inadvertently create issues for ourselves by training on old data sets that represent social norms that have since changed, and therefore skew things in the wrong way, creating momentum for machines to draw wrong conclusions about us, even though we as humans might recognize that the context is no longer relevant. Maybe you can just kick off that whole section with commentary on that.


So, that is certainly a real problem. You see when you take a data set and let’s say the data is 100 percent accurate and you come up with some conclusion about it, it takes on a halo of, ‘well that’s just the facts, that’s just how things are, that’s just the truth.’ And in a sense, it is just the truth, and AI is only going to come to conclusions based on like you said, the data that it’s trained on. You see, the interesting thing about artificial intelligence is it has a philosophical assumption behind it, and it is that the future is like the past, and for many things that is true. A cat tomorrow looks like a cat today and so you can take a bunch of cats from yesterday, or a week ago, or a month or a year and you can train it and it’s going to be correct. A cell phone tomorrow doesn’t look like a cell phone ten years ago though, and so if you took a bunch of photos of cell phones from 10 years ago, trained an AI, it’s going to be fabulously wrong.  And so, you hit the nail on the head.

The onus is on us to make sure that whatever we are teaching is a truth that will be true tomorrow, and that is a real concern. There is no machine that can kind of ‘sanity check’ that for you, that you tell the machine, “This is the truth, now, tell me about tomorrow,” but people have to get very good at that. Luckily there’s a lot of awareness around this issue: people who assemble large datasets are aware that data has a ‘best-by’ date that varies widely. For how to play a game of chess, it’s hundreds of years. That hasn’t changed. If it’s what a cell phone looks like, it’s a year. So the trick is to just be very cognizant of the data you’re using.

I find the people who are in this industry are very reflective about these kinds of things, and this gives me a lot of encouragement. There have been times in the past where people associated with new technology had a keen sense that it was something very serious, like the Manhattan project in the United States in World War II, or the computers that were built in the United Kingdom in that same period.  They realized they were doing something of import, and they were very reflective about it, even in that time. And I find that to be the case with people in AI today.



I think that generally speaking, a lot of the companies that we’ve invested in this sector and in this stage of effectively narrow-based AI, as you said, are going through and thinking through it. But what’s interesting is that I’ve noticed that there is a limit to what we can teach as metadata to data for machine learning algorithms to learn and evolve by themselves. So, the age-old argument is that you can’t build artificial general intelligence. You have to grow it. You have to nurture it. And it’s done over time. And part of the challenge of nurturing or growing something is knowing what pieces of input to give it. 

Now, if you use children as the best approximation of what we do, there are a lot of built-in features, including curiosity and a desire to self-preserve, and all these things then enable the acquisition of metadata, which then justifies and rewrites existing data as either valid or invalid, to use your cell phone example. How do you see us being able to tackle that when we’re inherently flawed in our ability to add metadata to existing data? Are we effectively never going to make it to artificial general intelligence because of our inability to add that color to data, leaving it of very tarnished and limited utility?


Well, yes, it could very easily be the case, and by the way, that’s an extreme minority view among people in AI. I will just say that upfront. I’m not representing a majority of people in AI, but I think that could very well be the case. Let me just dive into that a little bit: how do we know what we know? How is it that we are generally intelligent, have general intelligence? If I asked, “Does it hurt your thumb when you hit it with a hammer?” You would say “yes,” and then I would say, “Have you ever done it?” “Yes.” And then I would say, “Well, when?” And you likely can’t remember, and so you’re right, we have data that we somehow learn from, and we store it and we don’t know how we store it. There’s no place in your brain which is ‘hitting your thumb with a hammer hurts,’ such that if I somehow could cut that out, you would no longer know it. It doesn’t exist. We don’t know how we do that.

Then we do something really clever. We know how to take data we know in one area and apply it to another area.  I could draw a picture of a completely made-up alien that is weird beyond imagination. And I could show that picture to you and then I could give you a bunch of photographs and say find that alien in these. And if the alien is upside down or underwater or covered in peanut butter, or half behind a tree or whatever, you’re like, “There it is. There it is. There it is. There it is.” We don’t know how we do that. So, we don’t know how to make computers do it.

And then, if you think about it, if I were to ask you to imagine a trout swimming in a river, and then imagine the same trout in a jar of formaldehyde in a laboratory: “Do they weigh the same?” You would say, “yeah.” “Do they smell the same?” “Uh, no.” “Are they the same color?” “Probably not.” “Are they the same temperature?” “Definitely not.” And even though you have no experience with any of that, you instinctively know how to apply it. These are things that people do very naturally, and we don’t know how to make machines do them.

If you were to think of a question to ask a computer like, “Dr. Smith is having lunch at his favorite restaurant when he receives a phone call. Looking worried, he runs out the door, neglecting to pay his bill. Are the owners likely to call the police?” A human would say no. Clearly, he’s a doctor. It’s his favorite restaurant, he must eat there a lot, he must’ve gotten an emergency call. He ran out the door forgetting to pay. We’ll just ask him to pay the next night he comes in. The amount of knowledge you had to have just to answer that question is complex in the extreme.

I can’t even find a chatbot that can answer [the question:] “What’s bigger, a nickel or the sun?”  And so to try to answer a question that requires this nuance and all of this inference and understanding and all of that, I do not believe we know how to build that now. That would be, I believe, a statement within the consensus. I don’t believe we know how to build it, and even if you were to say, “Well, if you had enough data and enough computers, you could figure that out.” It may just literally be impossible, like every instantiation of every possibility. We don’t know how we do it. It’s a great mystery and it’s even hotly debated [around] even if we knew how we do it, could we build a machine to do it? I don’t even know that that’s the case.



I think that’s part of the thing that baffles me in your book. I’m jumping a little bit around here in your book now. You do talk about consciousness and you talk about sentience and how we know what we know, who we are, what we are. You talk about the dot on pets and how they identify themselves as themselves, and with any engineering problem, sometimes you can conceive of a solution before actually the method by which to get there is accomplished.  You can conceive the idea of flying. You just don’t know what combination of anything that you are copying from birds or copying from leaves, or whatever, will function in getting to that goal: flying.

The problem with this one is that from an engineering point of view, this idea of having another human or another human-like entity that not only has consciousness but has free will and sentience as far as we can perceive it, [doesn’t recognize that] there’s a lot of things that you described in your chapter on consciousness that we don’t even know how to qualify. Which is a huge catalyst in being able to create the metadata that structures data in a way that then gives the illusion and perception of consciousness. Maybe this is where you give me your personal opinion… do you think we’ll ever be able to create an answer to that engineering question, such that technology can be built around it? Because otherwise we might just be stuck on the formulation of the problem.


The logic that says we can build it is very straightforward and seemingly ironclad. The logic goes like this: if we figure out how a neuron works, we can build one, either physically or modeled on a computer. And if you can model that neuron in a computer, then you learn how it talks to other neurons, and then you model 100 billion of them in the computer, and all of a sudden you have a human mind. So that says we don’t have to know how it all works, we just have to understand the physics. The position just says whatever a neuron does, it obeys the laws of physics, and if we can understand how those laws are interacting, then we will be able to build it. Case closed. There’s no question at all that it can be done.

So I would say that’s the majority viewpoint. The other viewpoint says, “Well wait a minute, we have this brain that we don’t understand how it works. And then we have this mind, and a mind is a concept everybody uses and if you want a definition, it’s kind of everything your brain can do that an organ doesn’t seem like it would be able to. You have a sense of humor; your liver may not have a sense of humor.  You have emotions, your stomach may not have emotions, and so forth.” So somehow, we have a mind that we don’t know how it comes about. And then to your point, we are conscious and what that means is we experience the world. I feel warmth, [whereas] a computer measures temperature. Those are very different things and we not only don’t know how it is that we are conscious, but we also don’t even know how to ask the question in a scientific method, nor what the answer looks like.

And so, I would say my position to be perfectly clear is, we have brains we don’t understand, minds we don’t understand and consciousness we don’t understand.  And therefore, I am unconvinced that we can ever build something like this. And so I see no evidence that we can build it because the only example that we have is something that we don’t understand. I don’t think you have to appeal to spiritualism or anything like that, to come to that conclusion, although many people would disagree with me.



Yeah, it’s interesting. I think one thing underlying the pessimistic view is this belief that, while we may not have the technology now or an idea of how we’re going to get there, the kinetics of an AI explosion, which is what I think Nick Bostrom, the philosopher, has called it, may be pretty rapid: once there is material success in developing these AI models, that will encourage researchers to pile on, bringing in more people to produce those models; and then, secondly, there may be advancements in self-improving AI models. So there’s a belief that we may get superintelligence pretty quickly, and that underlies this pessimism and the belief that we sort of have to act now. What would be your thoughts on that?


Oh, well I don’t agree. I think that’s the “Is that a bear or a rock?” kind of thing. The only evidence we really have for that scenario is movies, and they’re very compelling and I’m not conspiratorial, and they’re entertaining. But what happens is you see that enough, and you do something that has a name, it’s called ‘reasoning from fictional evidence’ and that’s what we do. Where you say, “Well, that could happen, and when you see it again, and yeah, that could happen. That really could again.”  Again, and again and again.

To put it in perspective, when I say we don’t understand how the brain works, let me be really clear about that. Your brain has 100 billion neurons, roughly the same number as there are stars in the Milky Way. You might say, “Well, we don’t understand it because there are so many.” This is not true. There’s a creature called the nematode worm. He’s about as long as a hair is thick, and his brain has 302 neurons. These are the most successful creatures on the planet, by the way. Seventy percent of all animals are nematode worms, and they have 302 neurons. That’s it. [That’s about] the number of pieces of cereal in a bowl of cereal. So, for 20 years a group of people in something called the ‘OpenWorm’ project have been trying to model those 302 neurons in a computer to get it to display some of the complex behavior that a nematode worm does. And not only have they not done it, there’s even a debate among them about whether it is even possible to do that. So that’s the reality of the situation. We haven’t even gotten to the mind.

Again, how is it that we’re creative? And we haven’t even gotten to how it is that we experience the world. We’re just talking about how a brain works when it only has 302 neurons, and with a bunch of smart people working on it for 20 years, it may not even be possible. So to spin a narrative that, well, yeah, that all may be true, but what if there was a breakthrough, and then it sped up on itself, and then it got smarter, and then it got so smart it had an IQ of 100, then a thousand, then a million, then 100 million, and then it doesn’t even see us anymore. That’s as speculative as any other kind of scenario you want to come up with. It’s so removed from the facts on the ground that you can’t rebut it, because it is not based on any evidence that you can refute.



You know, the fun thing about chatting with you, Byron, is that the temptation is to sort of jump into all these theories and which ones are your favorites. So because I have the microphone, I will. Let me just jump into one: the best science fiction theory that you like. I think we’ve touched on a few of these things, but what is the best unified theory of everything from science fiction that you feel like, ‘you know what, this might just explain it all’?


Star Trek.



Okay. Which variant of it?  Because there’s not…


Oh, I would take either…I’ll take ‘The Next Generation.’ So, what is that narrative? We use technology to overcome scarcity. We have bumps all along the way. We are insatiably curious, and we go out to explore the stars. As Captain Picard told the guy they thawed out from the 20th century, the challenge in our time is to better yourself, to discover who you are. And what we found, interestingly, with the Internet, and sure, you can list all the nefarious uses you want, what we found is the minute you make blogs, 100 million people want to tell you what they think. The minute you make YouTube, millions of people want to upload video; the minute you make iTunes, music flourishes.

I think in my father’s generation, they didn’t write anything after they left college. We wake up in the morning, and we write all day long. You send emails constantly, and so what we have found is that it isn’t that, as in the Italian Renaissance, there were only a few people who wanted to paint or cared to paint. Probably everybody did; only there wasn’t enough of the good stuff to go around, and so only if you had extreme talent or extreme wealth did you get to paint.

Well, in the future, in the Star Trek variant of it, we’ve eliminated scarcity through technology, and everybody is empowered: every Dante to write their Inferno, every Marie Curie to discover radium, and all the rest. And so that vision of the future, you know, Gene Roddenberry said in the future there will be no hunger and there will be no greed and all the children will know how to read. That variant of the future is the one that’s most consistent with the past. That’s the one where you can say, “Yeah, to somebody in the 1400s looking at our life today, that would look like Star Trek. These people just push a button and the temperature in the room gets cooler, and they have leisure time. They have hobbies.” That would’ve seemed like science fiction.



I think there are a couple of things that I want to tackle with the Star Trek analogy to get us sort of warmed up on this, and I think Kyran’s waiting here at the top to ask some of them, but I think the most obvious one to ask, if we use that as a parable of the future, is about Lieutenant Commander Data. Lieutenant Commander Data is one of the characters starring in The Next Generation and is the closest attempt at artificial general intelligence, and yet he’s prevented from fully comprehending the human condition because he’s got an emotion chip that has to be turned off, because when it’s turned on, he goes nuts; and his brother is also nuts because he was overly emotional. And then he ends up representing every negative quality of humanity. So to some extent, not only have I just shown off my knowledge of the Star Trek era…


Lore wasn’t overly emotional. He got the chip that was meant for Data, and it wasn’t designed for him. That was his backstory.



Oh, that’s right. I stand corrected, but maybe you can explore that. In that future, walk us through why you think Gene built in that level of limitation for Data, and whether or not that’s an indication of the ultimate limits of what we can expect from robots.

Well, obviously that story is about…that whole setup is just not hard science, right? That whole setup is, like you said, embodying us; it’s the Pinocchio story of Data wanting to be a boy and all the rest. So it’s just storytelling as far as I’m concerned. You know, it’s convenient that he has a positronic brain, and when part of his scalp is removed you just see all this light coursing through, but that’s not something that science is behind, like warp 10 or the tricorder. You know, Uhura in the original series, she had a Bluetooth device in her ear all the time, right?



Yeah, but with the Data metaphor, I guess what I’m asking is: the limitations that prevented Data from being able to do some of the things that humans do, and therefore ultimately come full circle into being a fully independent, conscious, free-willed, sentient being, were entirely because of some human elements he was lacking. I guess the question, and you brought it up in your book, is whether or not we need those human elements to really drive that final conversion of a machine into some sort of entity that we can respect as an equivalent peer to us.


Yeah. Data is a tricky one because he could not feel pain, so you would say he’s not sentient. And to be clear, ‘sentient’ is often misused to mean ‘smart’; that’s ‘sapient.’ Sentient means you can experience pain. He didn’t, but as you said, at some point in the show he experienced emotional pain through that chip, and therefore he is sentient. They had a whole episode about, “Does Data have a soul?” And you’re right, I think there are things that humans do that it’s hard to…unless you start with the assumption that everything in a human being is mechanistic, rooted in physics, and that you’re a bag of chemicals with electrical impulses going through you.

If you start with that, then everything has to be mechanical, but most people don’t see themselves that way, I have found, and so if there is something else, something emergent or otherwise going on, then yeah, I believe that has to be wrapped up in our intelligence. That being said, everybody, I think, has had the experience of driving along when you kind of space [out], and then you kind of ‘come to’ and you’re like, “Holy cow, I’m three miles along. I don’t remember driving there.” Yet you behaved very intelligently. You navigated traffic and did all of that, but you weren’t really conscious. You weren’t experiencing the world, at least not that much. That may be the limit of what we can do: what a person does during those three minutes when they’re kind of spaced out, because that person also didn’t write a new poem or do anything creative. They just mechanically went through the motions of driving. That may be the limit. That may be that last little bit that makes us human.



The Star Trek view has two pieces to it. It has a technological optimism, which I don’t contest. I think I’m aligned with you and agree with that. There’s also an economic or social optimism there, and that’s about how that technology is owned: who owns the means of production, who owns the replicators. When it comes to that, how precarious do you think the Star Trek universe is, in the sense that if the replicators are only in the hands of a certain group of people, if they’re so expensive that only a few people own them, or only a few people own the robots, then it’s no longer such an optimistic scenario that we have? I’d just be interested in hearing your views there.


You’re right that the replicator is a little bit convenient…I don’t want to say it’s a cheat, but it’s a convenient way to get around scarcity, and they never really go into, well, how is it that anybody could go to the library and replicate whatever they wanted? Like, how did they get that? I understand those arguments. We have [a world where] the ability of a person using technology to affect a lot of lives goes up, and that’s why we have more billionaires. We have more self-made billionaires now; a higher percentage of billionaires are self-made than ever before. You know, Google and Facebook together made 12 billionaires. The ability to make a billion dollars gets easier and easier, at least for some people (not me), because technology allows them to multiply their efforts and affect more lives, and you’re right, that does tend to make more super, super, super-rich people. But I think the income inequality debate maybe needs a bit more focus.

To my mind, it doesn’t matter all that much how many super-rich people there are. The question is: how many poor people are there? How many people have a good life? How many people can have medical care? If I could get everybody to that state, but I had to make a bunch of super-rich people along the way, it’s like, absolutely, we’ll take that. So I think income inequality by itself is a distraction.

I think the question is how do you raise the lot of everybody else and what we know about technology is that it gets better over time and the prices fall over time. And that goes on ad infinitum. Who could have afforded an iPhone 20 years ago?  Nobody. Who could have afforded the cell phone 30 years ago? Rich people. Who could have afforded any of this stuff all these years ago?  Nobody but the very rich, and yet now because they get rich, all the prices of all that continue to fall and everybody else benefits from it.

I don’t deny there are all kinds of issues. You have your Hepatitis C treatment, which costs $100,000, and there are a lot of people who need it and only a few people are going to [get it]. There are all kinds of things like that, but I would just take some degree of comfort in that, if history has taught us anything, it is that the price of anything related to technology falls over time. You probably have 100 computers in your house. You certainly have dozens of them, and who from 1960 would have ever thought that? Yet here they are. Here we are in that future.

So, I think you almost have to be conspiratorial to say, yeah, we’re going to get these great new technologies, and only a few people are going to control them and they’re just going to use them to increase their wealth ad infinitum. And everybody else is just going to get the short end of the stick. Again, I think that’s playing on fear. I think that’s playing on all of that, because if you just say, “What are the facts on the ground? Are we better off than we were 50 years ago, 100 years ago, 200 years ago?” I think you can only say “yes.”



Those are all very good points and I’m actually tempted to jump around a little bit in your book and maybe revisit a couple of ideas from the narrow AI section, but maybe what we can do is merge the question about robot-proofing jobs with some of the stuff that you’ve talked about in the last part, which is the road from here.

One of the things that you mentioned before is the general idea that the world is getting better, no matter what. The things that we just discussed, iPhones and computers being more and more accessible, are an example of it. You talked about the section on ‘murderous meerkats’ where, you know, even things like crime are improving over time, and therefore there is no real reason for us to fear the future. But at the same time, I’m curious as to whether or not you think that there is a decline in certain elements of society which we aren’t factoring into the dataset of positivity.

For example, do we feel that there is a decline in social values in the current era? That things like helping each other out, and looking out for the collective versus the individual, have come and gone, and we’re now starting to see the manifestations of that through some of the social media and how it represents itself? I just wanted to get your ideas on the road from here, and whether or not you would revisit them if somebody were to show you sociologists’ research regarding the decline of social values, and how that might affect the kinds of jobs humans will have in the future versus robots.


So I’m an optimist about the future. I’m clear about that. Everything is hard. It’s like me talking about my companies. Everything’s a struggle to get from here to there. I’m not going to try to spin every single thing. I think these technologies have real implications for people’s privacy and they’re going to affect warfare, and there are all these things that are real problems that we’re really going to have to think about. The idea that somehow these technologies make us less empathetic, I don’t agree with. And you can just run through a list of examples: everybody kind of has a cause now. Everybody has some charity or thing that they support. Volunteerism and GoFundMes are up… People can do something as simple as post a problem they have online, and some stranger who will get nothing in return is going to give them a big, long answer.

People toil on a free encyclopedia and they toil in anonymity. They get no credit whatsoever. We had the open-source movement. Nobody saw that coming. Nobody said, “Yeah, programmers are going to work really hard and write really good stuff and give it away.” Nobody said we’re going to have Creative Commons, where people are going to create things that are digital and give them away. Nobody said, “Oh yeah, people are going to upload videos on YouTube and just let other people watch them for free.” Everywhere you look, technology empowers us and our benevolence.

To take the other view is like a “Kids these days!” shaking your cane, “Get off my grass!” kind of view that things are bad now. They’re getting worse. This is what people have said for as long as people have been reflecting on age.  And so, I don’t buy any of that.  In terms of specifically about jobs, I’ve tried hard to figure out what the half-life of a job is.  And I think every 40 years, every 50 years, half of all the jobs vanish. Because what does technology do? It makes great new high-paying jobs, like a geneticist. And it destroys low-paying tedious jobs, like an order taker at a fast-food restaurant.

And what people sometimes say is, “You really think that order taker is going to become a geneticist? They’re not trained for these new jobs.” And the answer is, “Well, no.” What’ll happen is a college professor will become a geneticist and a high school biology teacher gets the college job and the substitute teacher gets hired into the high school job, all the way down. The question isn’t, “Can that person who lost their job to automation get one of these great new jobs?” The question is, “Can everybody on the planet do a job a little harder than the job they have today?” And if the answer to that is yes, then what happens is, every time technology creates great new jobs, everybody down the line gets a promotion. And that is why, for 250 years, we have had full employment in the West: unemployment, other than during the Depression, has always been 5 to 10 percent… for 250 years.

Why have we had full employment for 250 years and rising wages? Even when something like the assembly line came out, or something like we replaced all the animal power with steam, you never had bumps in unemployment because people just used those technologies to do more. So yes, in 40 or 50 years, half the jobs are going to be gone, that’s just how the economy works. The good news is though when I think back to my K-12 education, and I think if I knew the whole future, what would I have taken then that would help me today.  And I can only think of one thing that I really just missed out on. And can you guess by the way?



Computer education?


No, because anything they taught me would no longer be useful. Typing. I should’ve taken typing. Who would have thought that that would be the skill I need most every day? But I didn’t know that. So you have to say, “Wow, everything that I do in my job today is not stuff I learned in school.” What we all do now is you hear a new term or concept and you google it, and you click on that and you go to Wikipedia and you follow the link, and then it’s 3:00 AM and you wake up the next morning, and you know something about it. And that’s what every single one of us does, what every single one of us has always done, what every single one of us will continue to do. And that’s how the workforce morphs. It isn’t that we’re facing this kind of cataclysmic disconnect between our education system and our job market. It’s that people are going to learn to do the new things, as they learned to be web designers, and as they learned every other thing that they didn’t learn in school.



Yeah, we’d love to dive into the economic arguments in a second, but just to bring it back to your point that technology is always empowering: I’m going to play devil’s advocate here and mention someone we had on the podcast about a year ago, Tristan Harris, who’s the leader of an initiative called ‘Time Well Spent,’ and his argument was that the effects of technology can be nefarious. Two days ago, there was a New York Times article referring to a statistical analysis of anti-refugee violence in Germany, and one of the biggest correlating factors was time spent on social media, suggesting that it isn’t always beneficial or benign for humans. Just to play devil’s advocate here, what is your take on that?


So, is your point that social media causes people to be violent, or is the interpretation that people prone to violence are also prone to using social media?



Maybe one variant of that, and Kyran can provide his own, is that the good is getting better with technology and the bad is getting worse with technology. You just hope that no one detonates something that is irreversible.


Well, I will not uniformly defend every application of technology to every single situation. I could rattle off all the nefarious uses of the Internet, right? I mean bilking people; you know them all, you don't need me to list them. The question isn't, "Do any of those things happen?" The question is, "On balance, are more people using the Internet for good than for evil?" And we know the answer is 'good.'

It has to be, because if we were more evil than good as a species, we never would have survived this long. We're highly communal. We've only survived because we like to support each other. Granted all the wars, all the problems, all the social strife, in the end you're left with the question, "How did we make progress to begin with?" And we made progress because there are more people working for progress than there are people carrying torches and doing all the rest. It's just that simple.



I guess I'm not qualified to make this statement, but I'm going to go ahead and do it anyway. Humans have those attributes because we're inherently social animals, and as a consequence we're driven to survive and to forgo being right at times, because we value the social structure, and its success, more than we do ourselves. There will always be deviations from that, but on average it manifests itself in the way that you have articulated.

And that's a theory that I have. If you accept that theory, and you can let me know or not, but for the sake of the question let's just assume it's correct, then how do you impart that onto a collection of artificial intelligences such that they mirror it? And as we start delegating more and more to that collective artificial intelligence, can we rely on them to have that same drive when they're not socially dependent on each other the way that humans are for reproduction and defense and emotional validation?


That could well be the case, yes. I mean, we have to make sure that we program them to reflect an ethical code, and that's an inherently hard thing to do, because people aren't great at articulating ethical codes, and even when they do, the codes are full of provisos and exceptions, and everybody's is different. But luckily, there are certain broad concepts that almost everybody agrees with: that life is better than death, that building is better than destroying. There are these very high-level concepts, and we will need to take great pains in how we build them into our AIs. This is an old debate, even in AI.

There was a man named Joseph Weizenbaum, who made a chatbot called ELIZA in the sixties. It was simple. You would say, "I'm having a bad day today," and it would say, "Why are you having a bad day?" "I'm having a bad day because of my mother." "Why are you having a bad day because of your mother?" Back and forth. Super simple. Everybody knew it was a chatbot, and yet he saw people getting emotionally attached to it, and he kind of turned on it.

He came to believe that when the computer says 'I understand,' it's just a lie: there is no 'I,' and there is no understanding. And so he argued we should never let computers do those kinds of things. They should never be recipients of our emotions. We should never make them caregivers and all of these other things, because, in the end, they don't have any moral capacity at all. They have no empathy; they have faked empathy, simulated empathy. And I think there is something to that: there will simply be jobs we're not going to want them to do because, in the end, those jobs are going to require a person, I think.

You see, any job a computer could do, a robot could do. If you make a person do that job, there's a word for that: dehumanizing. If a machine can, in theory, do a job, and you make a person do it, that's dehumanizing. You're not using anything about them that makes them a human being; you're using them as a stand-in for a machine, and those are the jobs machines should do.

But then there are all the other jobs that only people can do, and those are what I think people should do. I think there are going to be a lot of things like that, things we are going to be uncomfortable with and still have no idea how to handle. Like, when you're on a chatbot, do you need to be told it's a chatbot? Should robotic voices on the phone actually sound somewhat robotic, so you know it's not a person? Think about R2-D2 or C-3PO, and then imagine if their names were Jack and Larry. That's a subtle difference in how we regard them, and we don't yet know how we're going to handle it. But you're entirely right: machines don't have any empathy, they can only fake it, and there are real questions about whether that's good or not.



Well, that's a great way of looking at it, and one of the things that has been really great during this chat is understanding the origin of some of these views and how, on average, you end up at this positive outcome at the end of the day. And the book does a really good job of leaving the reader with that thought in mind while arming them to have these kinds of engaging conversations. So thanks for sharing the book with us and for providing your opinion on different elements of it.

But, you know, it'd be great to get some thoughts about things that inspired you or that you left out of the book. For example, which movies have most affected you in the vein of this particular book? What are your thoughts on a TV show like Westworld and how it illustrates the development of the mind of an artificial intelligence? Maybe just share a little bit about how your thoughts have evolved.

Certainly, and I would also like to add: I do think there's one way it can all go south. I think there is one pessimistic future, and I think it will come about if people stop believing in a better tomorrow. I think pessimism is what will get us all killed. The reason optimism has been so successful is that there have been a number of people who get up and say, "Somebody needs to invent the blank. Somebody needs to find a cure for this. Somebody needs to do it. I will do it." And you have enough people who believe, in one form or another, in a better tomorrow.

There's a mentality of 'don't polish brass on a sinking ship,' where you just say, "Well, what's the point? Why bother?" And if enough people said "Why bother?" then that better world never gets built. We are going to have to build it, and just like I said earlier with my companies, it's going to be hard. Everybody's got to work hard at it. It's not a gift; it's not free. We've clawed our way from savagery to civilization, and we've got to keep clawing. But the interesting thing is, finally, I think there is enough of the good stuff for everybody. And you're right, there are big distribution problems around that, and there are a lot of people who aren't getting any of the good stuff, and those are all real things we're going to have to deal with.

When it comes to movies and TV, I have to see them all, because everybody asks me about them on shows. And I used to loathe going to all the pessimistic movies that have far and away dominated… In fact, thinking of Black Mirror, I started writing out story ideas in my head for a show I call 'White Mirror.' Who's telling the stories about how everything can be good in the future? That doesn't mean those stories are bereft of drama; it just means a different setting in which to explore these issues.

I used to be so annoyed at having to go to all of these movies. I would go see some movie like Elysium and be like: yeah, there's the 99 percent, yeah, they're poor and beaten down, yeah, they're covered in dirt. And the 1 percent, I bet they live someplace high up in the sky, pretty and clean. Yeah, there it is. And then you see Metropolis, the most expensive movie ever made, adjusted for inflation, from almost a century ago. And yeah, there's the 99 percent. They're dirty, they're covered in dirt; everybody forgets to bathe in the future. I wonder where the…oh yeah, the one percent, they live in that tower up there. Everything up there is white and clean. Wow. Isn't that something? And I have to sit through these things.

And then I read a quote by Frank Herbert, who said that sometimes the purpose of science fiction is to keep the future from happening. And I said, okay, these are cautionary tales. These are warnings, and now I view them all like that. So I think there are a lot of cautionary tales out there and very few things like Star Trek. You heard me answer that so quickly because there aren't a lot of positive views of the future in science fiction. It just doesn't seem to be as rich a ground for telling stories, and even in that world, you had to have the Ferengi, and the Klingons, and the Romulans, and so forth.

So, I've watched them all, and you know, I enjoy Westworld as much as the next person. But I also realize those are people playing those androids, and that nobody can build a machine that does any of that. So it's fiction. It's not speculative, in my mind; it's pure fiction. That's what those shows are, and that doesn't mean they're any less enjoyable. When I ask people on my AI podcast what science fiction influenced them, almost all of them say Star Trek. That was a show that inspired people, and so I really gravitate towards things that inspire me with a vision of a better tomorrow.



For me, if I had to answer that question, I would say The Matrix. It brings up a lot of philosophical questions, even questions about reality. And it's dystopian in some ways, I guess, but in other ways it illustrates how we got there and how we can get out of it. And it has a utopian conclusion, I guess, because it ultimately ends in liberation. But it is an interesting point you make.

And it actually makes me reflect back on all the movies that I've seen, and it also brings up another question, which is whether or not this is just representative of the times. Because if you look at art and literature over the years, in many ways they are inspired by what's going on during that era. You can see bouts of optimism after the resolution of some conflict, and then you can see the brewing of social upheaval, which ends up in some sort of conflict, and you see that across the decades. And I guess that brings up a moral responsibility for us not to generate the most intense set of innovations around artificial intelligence at a point where society is quite split, because we might inject unfortunate conclusions into AI systems just because of the state of where we are in our geopolitical evolution.


Yeah. I call my airline of choice once a week to do something, and it asks me to state my member number, which unfortunately has an A, an H, and an 8 in it. And it never gets it right. So that's what people are trying to do with AI today: make a lot of really tedious stuff less tedious. And use caller ID, by the way; I always call from the same number. But that's a different subject.

And so most of the problems that we try to solve with it are relatively mundane, and the bigger ones are about how do we stop disease, and all of these very worthwhile things. It's not a scary technology. It studies the past, looks for patterns in the data, and projects them into the future. That's it. Anything around that that tries to make it terrifying is, I think, sensationalism. I think the responsibility is to tell the story of AI like that, without the fear, emphasizing all the good that can come out of this technology.



What do you think we'll look back on 50 years from now and think, "Wow, why were we doing that? How did we get away with that?" the way that we look back today on slavery and think, "Why the hell did that happen?"


Well, I will give an answer to that, and it's not my own personal ax to grind. To be clear, I live in Austin, Texas; we have barbecue joints here in abundance. But I believe that we will learn to grow meat in a laboratory, and it will not only be massively better environmentally, it will taste better and be cheaper and healthier and everything. So I think we're going to grow all of our meat, and maybe even all of our vegetables, by the way. Why do you need sunlight and rain and all of that? But put that aside for a minute. I think we're going to grow all of our meat in the future, and I don't know, if you grow it from a cell, whether it's still veganism to eat it. Maybe it is, strictly speaking, I don't know. But I think once the best steak you've ever had in your life is 99 cents, everybody's just going to have that.

And then we'll look back at how we treated animals with a sense of collective shame, because the question is, "Can they feel?" In the United States, up until the mid-90s, veterinarians were taught that animals couldn't feel pain, and so they didn't anesthetize them. Surgeons also operated on babies without anesthesia at the time, because it was believed they couldn't feel pain. Now I think people care whether the chicken that they're eating was raised humanely. And so I think that expansion of empathy to animals, who most people now believe do feel pain and do experience sadness, or something that must feel like that, together with the fact that we essentially keep them in abhorrent conditions, will bring that shame about.

And again, I'm not grinding my own ax here. I don't think this is going to come from people changing overnight. I think what's going to happen is there will be an alternative, the alternative will be so much better that everybody will use it, and then we'll look back and think, how in the world did we do that?



No, I agree with that. As a matter of fact, we've invested in a company that's trying to solve that problem. I can't name them in the show notes just yet because they're in stealth right now, but by the time this interview goes to print, hopefully we'll be able to talk about them. But yes, I agree with you entirely, and we've put our money behind it, so I'm looking forward to that being one of the issues that gets solved. Now another question: what's something that you used to strongly believe, that you now think you were fundamentally misguided about?


Oh, that happens all the time. I didn't write this book by starting off saying, "I will write a book that doesn't really say what I think; it'll just be this framework." I wrote the book to try to figure out what I think, because I would hear all of these proclamations about these technologies and what they could do. I think I used to be much more in the AGI camp, believing this is something we're going to build and we're going to have those things, like on Westworld, though this was before Westworld. I was much more in that camp until I wrote the book, which changed me. I can't say I disbelieve it, that would be the wrong way to say it, but I see no evidence for it. I used to buy that narrative a lot more, and I didn't realize it was less a technological opinion and more a metaphysical opinion. Working through all of that, and understanding all of the biases and all of the debate, is very humbling, because these are big issues, and what I wanted to do, as I said, is make a book that helps other people work through them.



Well, it is a great book. I've really enjoyed reading it. Thank you very much for writing it, and congratulations! This is also the longest podcast we've ever recorded, but it's a subject that is very dear to me, and one that is endlessly fascinating, and we could continue on. But we're going to be respectful of your time, so thank you for joining us and for your thoughts.


Well, thank you. Anytime you want me back, I would love to continue the conversation.

Well, until next time guys. Bye. Thanks for listening. If you enjoyed the podcast, don’t forget to subscribe on iTunes and SoundCloud and leave us a review with your thoughts on our show.


Voices in AI – Episode 63: A Conversation with Hillery Hunter

About this Episode

Episode 63 of Voices in AI features host Byron Reese and Hillery Hunter discuss AI, deep learning, power efficiency, and understanding the complexity of what AI does with the data it is fed.

Hillery Hunter is an IBM Fellow and holds an MS and a Ph.D. in electrical engineering from the University of Illinois Urbana-Champaign.

Visit to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today, our guest is Hillery Hunter. She is an IBM Fellow, and she holds an MS and a Ph.D. in electrical engineering from the University of Illinois Urbana-Champaign. Welcome to the show, Hillery.

Thank you, it's such a pleasure to be here today. Looking forward to this discussion, Byron.


So, I always like to start off with my Rorschach test question, which is: what is artificial intelligence, and why is it artificial?

You know, that's a great question. My background is in hardware and in systems, the actual compute substrate for AI, so one of the things I like to do is demystify what AI is. There are certainly a lot of definitions out there, but I like to take people to the math that's actually happening in the background.

So when we talk about AI today, especially in the popular press, and people talk about the things that AI is doing, be it reading medical scans, labeling people's pictures on a social media platform, understanding speech, or translating language, all those things that are considered core functions of AI today are actually deep learning, which means using many-layered neural networks to solve a problem. There are also other parts of AI, though, that are much less discussed in the popular press, which include knowledge and reasoning and creativity and all these other aspects.

And you know, the reality of where we are today with AI is that we're seeing a lot of productivity from the deep learning space, and ultimately those are big math equations that are solved with lots of matrix math; we're basically creating a big equation that fits its parameters to the set of data it was fed.
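To make that "big equation solved with lots of matrix math" concrete, here is a minimal sketch of a two-layer network's forward pass. Everything in it (the layer sizes, the values, the use of NumPy) is an illustrative assumption, not anything from the episode:

```python
import numpy as np

# A tiny two-layer neural network, purely to illustrate that each
# "layer" is a matrix multiply followed by a simple nonlinearity.
rng = np.random.default_rng(0)

# Parameters: the numbers a training procedure would fit to data.
W1 = rng.normal(size=(4, 3))   # layer 1: 3 inputs -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.normal(size=(2, 4))   # layer 2: 4 hidden units -> 2 outputs
b2 = np.zeros(2)

def forward(x):
    """Evaluate the 'big equation' for one input vector."""
    h = np.maximum(0, W1 @ x + b1)   # ReLU hidden layer
    return W2 @ h + b2               # linear output layer

x = np.array([0.5, -1.0, 2.0])       # one made-up input example
y = forward(x)
print(y.shape)  # (2,)
```

Training, in this framing, is just adjusting `W1`, `b1`, `W2`, `b2` until the equation's outputs match the data it was fed.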



So, would you say though that it is actually intelligent, or that it is emulating intelligence, or would you say there’s no difference between those two things?

Yeah, so I'm really quite pragmatic, as you just heard from me saying, "Okay, let's go talk about what the math is that's happening," and right now where we're at with AI is relatively narrow capabilities. AI is good at doing things like classification, or answering yes-or-no kinds of questions on the data that it was fed, and so in some sense it's mimicking intelligence, in that it is taking in the sort of sensory data a human would process. What I mean by that is it can take in visual data or auditory data, and people are even working on other kinds of sensory data and things like that. But basically, a computer can now take in things that we would consider human-processed data, visual things and auditory things, and make determinations as to what it thinks they are; that is certainly far from something that's actually thinking and reasoning and showing intelligence.



Well, staying squarely in the practical realm: that approach, which is basically 'look at the past and make guesses about the future,' what is the limit of what it can do? For instance, is that approach going to master natural language? Can you just feed a machine enough printed material and have it be able to converse? What are some things the model may not actually be able to do?

Yeah, you know, it's interesting, because there's a lot of debate: what are we doing today that's different from analytics? We had the big data era, and we talked about doing analytics on the data. What's new, what's different, and why are we calling it AI now?

To come at your question from that direction, one of the things that AI models do, be it anything from a deep learning model to something that's more in the knowledge-reasoning area, is that they're much better interpolators; they're much better able to predict on things that they've never seen before. Classical rigid models that people programmed into computers could only answer for things they had seen: "Oh, I've seen that thing before." With deep learning and with more modern AI techniques, we are pushing forward into computers and models being able to guess on things that they haven't exactly seen before.

So in that sense, there's a good amount of interpolation at play. Whether and how AI pushes into forecasting on things well outside the bounds of what it has seen before, and moving AI models to be effective on types of data that are very different from what they've seen before, is the type of advancement that people are really pushing for at this point.
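The interpolation-versus-extrapolation distinction above can be sketched with a toy model. Everything here (the sine data, the polynomial fit, the chosen test points) is an illustrative assumption, not something from the episode:

```python
import numpy as np

# Fit a simple model to noisy samples of sin(x) on [0, 3].
rng = np.random.default_rng(1)
x_train = np.linspace(0, 3, 40)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=5)   # least-squares polynomial
model = np.poly1d(coeffs)

# Interpolation: a point inside the training range is predicted well.
print(abs(model(1.5) - np.sin(1.5)))    # small error

# Extrapolation: far outside the training range the polynomial diverges.
print(abs(model(10.0) - np.sin(10.0)))  # very large error
```

The model "predicts on things it has never seen" inside the training range quite well, but forecasting well outside the bounds of its data is exactly where it breaks down, which is the advancement being described.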

Listen to this one-hour episode or read the full transcript at


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.