Category Archives for Artificial Intelligence (AI)

Voices in AI – Episode 80: A Conversation with Charlie Burgoyne

Today’s leading minds talk AI with host Byron Reese

About this Episode

Episode 80 of Voices in AI features host Byron Reese and Charlie Burgoyne discussing the difficulty of defining AI and how computer intelligence and human intelligence intersect and differ.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese:

This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese.

Today my guest is Charlie Burgoyne.

He is the founder and CEO of Valkyrie Intelligence, a consulting firm with domain expertise in applied science and strategy.

He’s also a general partner for Valkyrie Signals, an AI-driven hedge fund based in Austin, as well as the managing partner for Valkyrie Labs, an AI credit company.

Charlie holds a master’s degree in theoretical physics from Georgetown University and a bachelor’s in nuclear physics from George Washington University.

I had the occasion to meet Charlie when we shared a stage talking about AI, and about 30 seconds into my conversation with him I said, ‘We gotta get this guy on the show.’

So strap in; this should be a fun episode. Welcome to the show, Charlie.

Charlie Burgoyne:

Thanks so much for having me, Byron. I’m excited to talk to you today.

Let’s start with [this]:

maybe re-enact a little bit of our conversation when we first met.

Tell me how you think of artificial intelligence, like what is it?

What is artificial about it and what is intelligent about it?


Sure. The further I get into this field, the more I think about AI with two different definitions.

Definitions of AI

It’s a servant with two masters.

It has its private sector, applied narrowband applications where AI is really all about understanding patterns that we perform and that we capitalize on every day and automating those — things like approving time cards and making selections within a retail environment.

And that’s really where the real value of AI is right now in the market, and there are a lot of people in that space developing really cool algorithms that capitalize on the potential patterns that exist and largely lie dormant in data.

In that definition, intelligence is really about the cycles that we use within a cognitive capability to instrument our life and it’s artificial in that we don’t need an organic brain to do it.

Now the AI that I’m obsessed with from a research standpoint (a lot of academics are, and I know you are as well, Byron) — that AI definition is actually much more about the nature of intelligence itself, because in order to artificially create something, we must first understand it in its primitive, unadulterated state.

And I think that’s where the bulk of the really fascinating research in this domain is going, is just understanding what intelligence is, in and of itself.

Now I’ll come kind of straight to the interesting part of this conversation, which is I’ve had not quite a hundred guests on the show.

I can count on one hand the number who think it may not be possible to build a general intelligence.

According to our conversation, you are convinced that we cannot do it. Is that true?

And if so why?

Yes… The short answer is I am not convinced we can create a generalized intelligence, and that’s become more and more solidified the deeper and deeper I go into research and familiarity with the field.


Decision-making with AI

If you really unpack intelligent decision-making, it’s actually much more complicated than a simple collection of gates, a simple collection of empirically driven singular decisions, right?

A lot of the neural network scientists would have us believe that all decisions are really the right permutation of weighted neurons interacting with other layers of weighted neurons.

From what I’ve been able to tell so far with our research, either that is not getting us towards the goal of creating a truly intelligent entity or it’s doing the best within the confines of the mechanics we have at our disposal now.

In other words, I’m not sure whether or not the lack of progress towards a true generalized intelligence is due to the fact that

  • (a) the digital environment that we have tried to create said artificial intelligence in is unamenable to that objective or
  • (b) the nuances that are inherent to intelligence are things I’m not positive we understand well enough to model, nor that we would ever be able to create a way of modeling.
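To picture the ‘layers of weighted neurons’ view Charlie is questioning, here is a minimal sketch in Python; the network shape, weights, and threshold are illustrative assumptions, not anything from the episode:

    import numpy as np

    def sigmoid(x):
        # Squash each neuron's weighted sum into the range (0, 1)
        return 1.0 / (1.0 + np.exp(-x))

    # Illustrative random weights: 3 inputs -> 4 hidden neurons -> 1 output
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 4))
    W2 = rng.normal(size=(4, 1))

    def decide(features):
        # One layer of weighted neurons feeding another layer of weighted neurons
        hidden = sigmoid(features @ W1)
        output = sigmoid(hidden @ W2)
        return bool(output[0] > 0.5)  # a single empirical "decision"

    print(decide(np.array([0.2, -1.0, 0.7])))

On this view, a ‘decision’ is nothing more than the right permutation of weights; the open question above is whether that picture scales to general intelligence.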

I’ll give you a quick example. Think of any science fiction movie that encapsulates the nature of what AI will eventually be, whether it’s Her, Ex Machina, or Skynet, you name it.

There are a couple of big leaps that get glossed over in all science fiction literature and film, and those leaps are really around things like motivation.

  • What motivates an AI, like what truly at its core motivates AI like the one in Ex Machina to leave her creator and to enter into the world and explore?
  • How is that intelligence derived from innate creativity?
  • How are they designing things?

How are they thinking about drawings and how are they identifying clothing that they need to put on?

All these different nuances are intelligently derived from that behavior.

We really don’t have a good understanding of that, and we’re not really making progress towards an understanding of that, because we’ve been distracted for the last 20 years with research in fields of computer science that aren’t really that closely related to understanding those core drivers.

So when you say a sentence like ‘I don’t know if we’ll ever be able to make a general intelligence,’ ever is a long time.

So do you mean that literally?

Tell me a scenario in which it is literally impossible — like it can’t be done, even if you came across a genie that could grant your wish.

It just can’t be done.

Like maybe time travel, you know — back in time, it just may not be possible.

Do you mean that ‘may not be possible’?

Or do you just mean on a time horizon that is meaningful to humans?

I think it’s on the spectrum between the two.

But I think it leans closer towards ‘not ever possible under any condition.’

I was at a conference recently and I made this claim, which admittedly, like any claim on this particular question, is based on intuition and experience, which are totally fungible assets.

But I made this claim that I didn’t think it was ever possible, and somebody in the audience asked me, ‘Well, have you considered meditating to create a synthetic AI?’


And the audience laughed and I stopped and I said: “You know that’s actually not the worst idea I’ve been exposed to.”

Trying to reverse engineer my own brain, with as few distractions from its normal working mechanics as possible, is not the worst potential path to understanding intelligence.

That may very easily be a credible aid to understanding how the brain works.

What is behind Gravity?

If we think about gravity, gravity is not a bad analog.

Gravity is this force that everybody and their mother past fifth grade understands: you drop an apple, you know which direction it’s going to go.

Not only that, but as you gain experience you can predict how fast it will fall, right?

If you were to see a simulation drop an apple and it takes twelve seconds to hit the ground, you’d know that was wrong; even if the rest of the vector was correct, the scalar is off a little bit. Right?

The reality is that we can’t create an artificial gravity environment, right?

We can create forces that simulate gravity.

Centrifugal force is not a bad way of replicating gravity but we don’t actually know enough about the underlying mechanics that guide gravity such that we could create an artificial gravity using the same techniques, relatively the same mechanics that are used in organic gravity.
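As a quick back-of-the-envelope aside on the centrifuge workaround: the centripetal acceleration of a rotating habitat is a = omega^2 * r, so the spin rate needed to mimic Earth gravity follows directly (the radius below is an illustrative choice):

    import math

    g = 9.81        # target acceleration in m/s^2 (Earth gravity)
    radius = 100.0  # habitat radius in meters (illustrative value)

    # Centripetal acceleration a = omega^2 * r, so omega = sqrt(a / r)
    omega = math.sqrt(g / radius)         # angular speed in rad/s
    rpm = omega * 60.0 / (2.0 * math.pi)  # revolutions per minute

    print(f"Spin at about {rpm:.1f} rpm to simulate 1 g at r = {radius:.0f} m")

This replicates the felt force, but, as Charlie notes, it tells us nothing about the mechanics that produce organic gravity.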

In fact, it was only a year and a half or two years ago that the Nobel Prize in Physics was awarded to the individuals who detected gravitational waves, putting to rest an argument that had been going on since Einstein.

So I guess my point is that we haven’t really made progress in understanding the underlying mechanics, and every step we’ve taken has proven to be extremely valuable in the industrial sector but actually opened up more and more unknowns in the actual inner workings of intelligence.

If I had to bet today, not only is the time horizon on a true artificial intelligence extremely long-tailed, but I actually think it’s not impossible that it’s impossible altogether.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Visit VoicesInAI.com to access the podcast, or subscribe now:

  • iTunes
  • Play
  • Stitcher
  • RSS

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com

The AI Talent Gap: Locating Global Data Science Centers

Good AI talent is hard to find. The talent pool for anyone with deep expertise in modern artificial intelligence techniques is terribly thin. More and more companies are committing to data and artificial intelligence as their differentiator.


The early adopters will quickly find difficulties in determining which data science expertise meets their needs.

And the AI talent?

If you are not Google, Facebook, Netflix, Amazon, or Apple, good luck.

With the popularity of AI, pockets of expertise are emerging around the world.

For a firm that needs AI expertise to advance its digital strategy, finding these data science hubs becomes increasingly important.

In this article we look at the initiatives different countries are pushing in the race to become AI leaders and we examine existing and potential data science centers.

 

Race to Be a Global AI Power

It seems as though every country wants to become a global AI power.

  • With the Chinese government pledging billions of dollars in AI funding, other countries don’t want to be left behind.
  • In Europe, France plans to invest €1.5 billion in AI research over the next 4 years while Germany has universities joining forces with corporations such as Porsche, Bosch, and Daimler to collaborate on AI research.
  • Even Amazon, with a contribution of €1.25 million, is collaborating in the AI efforts in Germany’s Cyber Valley around the city of Stuttgart.
  • Not one to be left behind, the UK pledged £300 million for AI research as well.
  • Other countries to commit money to AI are Singapore, which committed $150 million, and Canada, which not only committed $125 million but also has large data science hubs in Toronto and Montreal.

Yoshua Bengio, one of the fathers of deep learning, is based in Montreal, often described as home to the biggest concentration of AI researchers in the world.

Toronto has a booming tech industry that naturally attracts AI money.

 

Data Scientists Worldwide

Examining a variety of sources, we find data science professionals spread across the regions where we would expect them.

The graphic below shows the number of members of the site Data Science Central.

Since the site is in English, we expect most of its members to come from English-speaking countries; however, it still gives us some insight as to which countries have higher representation.

[Graphic: Data Scientists Worldwide (Source)]

Global AI Hubs

It becomes difficult, then, to determine AI hubs without classifying talent by skill level.

One example of this is India; despite its large number of data science professionals, many of them are employed in lower-skilled roles such as data labeling and processing.

So what would be considered a data science hub?

The graphic below defines a hub by the number of advanced AI professionals in the country.

The countries shown here have AI talent working at companies such as Google, Baidu, Apple, and Amazon.

However, this omits a large group of talent that is not hired by these types of companies.

[Graphic: Global AI Hubs (Source)]

Global AI Talent Pool

Matching the previous graph with a study conducted by Element AI, we see some commonalities, but also see some new hubs emerge.

The same talent centers remain, but more countries are highlighted on the map.

Element AI’s approach consisted of analyzing LinkedIn profiles, factoring in participation in conferences and publications, and weighting skills heavily.

[Graphic: Global AI Talent Pool (Source)]

Search for AI Talents

As you search for AI talent, we recommend basing your search on four factors:

  1. workforce availability,
  2. cost of labor,
  3. English proficiency,
  4. and skill level.

Kaggle, one of the most popular data science websites, conducted a salary survey with respondents from 171 countries. The results can be seen below.

[Graphic: AI Talent Search (Kaggle salary survey results) (Source)]

Salaries of AI Talent

Salaries are as expected, but show high variability.

By aggregating salary data and the talent pool map, you can decide which countries suit your goals better.
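One hedged way to act on that advice is a simple weighted score per country across the four factors; the weights and numbers below are invented placeholders, not figures from the Kaggle or Element AI studies:

    # Illustrative weighted scoring of candidate countries on the four factors.
    # All weights and scores are made-up placeholders, not survey data.
    weights = {"workforce": 0.3, "cost": 0.2, "english": 0.2, "skill": 0.3}

    countries = {
        "Country A": {"workforce": 0.8, "cost": 0.4, "english": 0.9, "skill": 0.7},
        "Country B": {"workforce": 0.6, "cost": 0.9, "english": 0.5, "skill": 0.6},
    }

    for name, scores in countries.items():
        total = sum(weights[f] * scores[f] for f in weights)
        print(f"{name}: {total:.2f}")

Adjust the weights to match your own priorities; a team optimizing for cost will rank countries very differently from one optimizing for skill.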

EF English Proficiency Index

The EF English Proficiency Index shows which countries have the highest proficiency in English and can further weed out those that may have a strong AI presence or low cost of labor, but low English proficiency.

 

In the end, you want to hire professionals who understand the problems you are facing and can tailor their work to your specific needs.

With a global mindset, companies can mitigate talent scarcity.

If you are considering sourcing talent globally, we recommend hiring strong local leadership to act as AI product managers who can manage a team.

Hire production managers located on-site with your global talent; they can oversee any data science or AI development and report back to the product manager.

KUNGFU.AI will continue to study these global trends and help ensure companies are equipped with access to the best talent to meet their needs.

Source: gigaom.com


Voices in AI – Episode 79: A Conversation with Naveen Rao

About this Episode

Episode 79 of Voices in AI features host Byron Reese and Naveen Rao discussing intelligence, the mind, consciousness, AI, and what the day-to-day looks like at Intel.

Byron and Naveen also delve into the implications of an AI future.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese:

This is Voices in AI brought to you by GigaOm, and I’m Byron Reese.

Today I’m excited that our guest is Naveen Rao.

He is the Corporate VP and General Manager of the Artificial Intelligence Products Group at Intel.

He holds a Bachelor of Science in Electrical Engineering from Duke and a Ph.D. in Neuroscience from Brown University.

Welcome to the show, Naveen.

Naveen Rao:

Thank you. Glad to be here.

Q-1:

You’re going to give me a great answer to my standard opening question, which is: What is intelligence?

A-1:

That is a great question. It really doesn’t have an agreed-upon answer.

My version of this is about potential and capability.

What I see as an intelligent system is a system that is capable of decomposing structure within data.

By my definition, I would call a newborn human baby intelligent, because the potential is there, but the system is not yet trained with real experience.

I think that’s different than other definitions, where we talk about the phenomenology of intelligence, where you can categorize things, and all of this.

I think that capacity for categorization is the outcropping of having actually learned the inherent structure of the world.


Q-2:

So, in what sense by that definition is artificial intelligence actually artificial?

Is it artificial because we built it, or is it artificial because it’s not real intelligence?

It’s like artificial turf; it just looks like intelligence.

A-2:

No. I think it’s artificial because we built it. That’s all.

There’s nothing artificial about it.

Intelligence doesn’t have to run on biological mush; it can be implemented on any kind of substrate.

In fact, there’s even research on how slime mold, actually…


Q-3:

Right. It can work mazes…

A-3:

… can solve computational problems, yeah.

 

Q-4:

How does it do that, by the way? That’s really a pretty staggering thing.

A-4:

There’s a concept that we call gradients. Gradients are just how information gets more crystallized.

If I feel like I’m going to learn something by going in one direction, that direction is the gradient.

It’s sort of a pointer in the way I should go.

That can exist in the chemical world as well, and things like slime mold actually use chemical gradients that translate into information processing and actually learn the dynamics of a system.

Our neurons do that. Deep neural networks do that in a computer system.

They’re all based on something similar at one level.
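To make the ‘gradient as a pointer’ idea concrete, here is a minimal gradient-descent sketch; the function and step size are illustrative:

    # Minimal gradient descent: repeatedly move in the direction that
    # reduces error -- the "pointer" toward where there is more to learn.
    def loss(x):
        return (x - 3.0) ** 2      # error is smallest at x = 3

    def grad(x):
        return 2.0 * (x - 3.0)     # slope of the loss at x

    x = 0.0                        # arbitrary starting guess
    for _ in range(50):
        x -= 0.1 * grad(x)         # step against the gradient

    print(round(x, 4))             # converges near 3.0

Deep neural networks do essentially this across millions of weights at once; the slime mold’s chemical gradients play the same directional role.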


Q-5:

So, let’s talk about the nematode worm for a minute.

A-5:

Okay.

 

Q-6:

You’ve got this worm, the most successful creature on the planet.

Seventy percent of all animals are nematode worms.

It’s got 302 neurons and exhibits certain kinds of complex behavior.

There have been a bunch of people in the OpenWorm Project who have spent 20 years trying to model those 302 neurons in a computer, just to get it to duplicate what the nematode does.

Even among them, they say: “We’re not even sure if this is possible.”

So, why are we having such a hard time with such a simple thing as a nematode worm?


A-6:

Well, I think this is a bit of a fallacy of reductive thinking here, that, “Hey, if I can understand the 302 neurons, then I can understand the 86 billion neurons in the human brain.”

I think that fallacy falls apart because there are different emergent properties that happen when we go from one size system to another.

It’s like running a company of 50 people is not the same as running a company of 50,000. It’s very different.

 

Q-7:

But, to jump in there… my question wasn’t, “Why doesn’t the nematode worm tell us something about human intelligence?”

My question was simply, “Why don’t we understand how a nematode worm works?”

 

A-7:

Right. I was going to get to that. I think there are a few reasons for that.

One is, the interaction of any complex system – hundreds of elements – is extremely complicated.

There’s a concept in physics called the three-body problem, where if I have two pool balls on a pool table, I can actually 100 percent predict where the balls will end up if I know the initial state and I know how much energy I’m injecting when I hit one of the balls in one direction with a certain force.

If you make that three, I cannot do that in closed form.

I have to simulate steps along the way.

That is called a three-body problem, and it’s computationally intractable to compute that.

So, you can imagine when it gets to 302, it gets even more difficult.
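Naveen’s point that such systems must be simulated step by step shows up directly in code: there is no closed-form trajectory, so a toy three-body sketch like the one below just advances the state in small time increments (all masses, positions, and units are illustrative):

    import numpy as np

    # Toy three-body simulation: with no closed-form solution, we can only
    # step the system forward in time and watch what it does.
    G = 1.0                                                  # toy units
    pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # positions
    vel = np.array([[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]])   # velocities
    mass = np.array([1.0, 1.0, 1.0])
    dt = 0.001

    for _ in range(10_000):
        acc = np.zeros_like(pos)
        for i in range(3):
            for j in range(3):
                if i != j:
                    r = pos[j] - pos[i]
                    # Newtonian attraction; small epsilon guards against r -> 0
                    acc[i] += G * mass[j] * r / (np.linalg.norm(r) ** 3 + 1e-9)
        vel = vel + acc * dt   # Euler step: update velocities...
        pos = pos + vel * dt   # ...then positions

    print(pos)  # only the simulation itself can tell us where the bodies end up

With 302 interacting neurons instead of three bodies, the same irreducibility applies, only worse.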

And what we see in big systems like mammalian brains, where we have billions of neurons rather than 300, is that you actually have pockets of closely interacting pieces in a big brain that interact at a higher level.

That’s what I was getting at when I talked about these emergent properties.

So, you still have that 302-body problem, if you will, in a big brain as you do in a small brain.

That complexity hasn’t gone away, even though it seemingly is a much simpler system.

The interaction between 302 different things, even when you know precisely how each one of them is connected, is just a very complex matter.

If you try to model all the interactions and you’re off by just a little bit on any one of those things, the entire system may not work.

That’s why we don’t understand it, because you can’t characterize every piece of this, like every synapse… you can’t mathematically characterize it.

And if you don’t get it perfect, you won’t get a system that functions properly.


Q-8:

So, are you suggesting by extension that the Human Brain Project in Europe, which really is… You’re laughing and nodding.

What’s your take on that?

A-8:

I am not a fan of the Human Brain Project for this exact reason.

The complexity of the system is just incredibly high, and if you’re off by one tiny parameter, by a tiny little amount, it’s sort of like the butterfly effect.

It can have huge consequences on the operation of the system, and you really haven’t learned anything.

All you’ve learned how to do is model some micro dynamics of a system.

You haven’t really gotten any true understanding of how the system really works.


Q-9:

You know, I had a guest on the show, Nova Spivack, who said that a single neuron may turn out to be as complicated as a supercomputer, and it may even operate down at the Planck level.

It’s an incredibly complex thing.

A-9:

Yeah.

 

Q-10:

Is that possible?

A-10:

It is a physical system – a physical device.

One could argue the same thing about a single transistor as well.

We engineer these things to act within certain bounds… and I believe the brain actually takes advantage of that as well.

So, a neuron… to completely, accurately describe everything a neuron is doing, you’re absolutely right.

It could take a supercomputer to do so, but we don’t necessarily need to abstract a supercomputer’s worth of value from each neuron.

I think that’s a fallacy.

There are lots of nonlinear effects and all this kind of crazy stuff that are happening that really aren’t useful to the overall function of the brain.

Just like an individual neuron can do very complicated things, when we put a whole bunch of [transistors] together to build a processor, we’re exploiting one piece of the way that transistor behaves to make that processor work.

We’re not exploiting everything in the realm of possibility that the transistor can do.

Q-11:

We’re going to get to artificial intelligence in a minute.

It’s always great to have a neuroscientist on the show.

So, we have these brains, and you said they exhibit emergent properties.

Emergence is of course the phenomenon where the whole of something takes on characteristics that none of the components have. And it’s often thought of in two variants.

One is weak emergence, where once you see the emergent behavior, with enough study you can kind of reverse engineer… “Ah, I see why that happened.”

And one is a much more controversial idea of strong emergence that may not be discernible.

The emergent property may not be derivable from the components.

Do you think human intelligence is a weak emergent property, or do you believe in strong emergence?

 

A-11:

I do in some ways believe in strong emergence.

Let me give you the subtlety of that.

I don’t necessarily think it can be analytically solved because the system is so complex.

What I do believe is that you can characterize the system within certain bounds.

It’s much like how a human may solve a problem like playing chess.

We don’t actually pre-compute every possibility.

We don’t do that sort of a brute force kind of thing.

But we do come up with heuristics that are accurate most of the time.

And I think the same thing is true with the bounds of a very complex system like the brain.

We can come up with bounds of these emergent properties that are accurate 95 percent of the time, but we won’t be accurate 100 percent of the time.

It’s not going to be as beautiful as some of the physics we have that can describe the world.

In fact, even physics might fall into this category as well.

So, I guess the short answer to your question is: I do believe in strong emergence that will never actually 100 percent describe…


Q-12:

But, do you think fundamentally intelligence could, given an infinitely large computer, be understood in a reductionist format?

Or is there some break in cause and effect along the way, where it would be literally impossible?

Are you saying it’s practically impossible or literally impossible?

 

A-12:

…To understand the whole system top to bottom, from the emerging…?

 

Q-13:

Well, to start with, this is a neuron.

A-13:

Yeah.

 

Q-14:

And it does this, and you put 86 billion together and voilà, you have Naveen Rao.

A-14:

I think it’s literally impossible.

 

Q-15:

Okay, I’ll go with that. That’s interesting. Why is it literally impossible?

A-15:

Because the complexity is just too high, and the amount of energy and effort required to get to that level of understanding is many orders of magnitude more complicated than what you’re trying to understand.

 

Q-16:

So now, let’s talk about the mind for a minute.

We talked about the brain, which is physics. To use a definition that most people I think wouldn’t have trouble with, I’m going to call the mind all the capabilities of the brain that seem a little beyond what three pounds of goo should be able to do… like creativity and a sense of humor.

Your liver presumably doesn’t have a sense of humor, but your brain does.

So where do you think the mind comes from?

Or are you going to just say it’s an emergent property?

A-16:

I do kind of say it’s an emergent property, but it’s not just an emergent property.

It’s an emergent property that is actually the coordination of the physics of our brain – the way the brain itself works – and the environment.

I don’t believe that a mind exists without the world.

You know, a newborn baby, I called intelligent because it has the potential to decompose the world and find meaningful structure within it in which it can act.

But if it doesn’t actually do that, it doesn’t have a mind. You can see that if you’ve had kids yourself.

I actually had a newborn while I was studying neuroscience, and it was actually quite interesting to see.

I don’t think a newborn baby is really quite sentient yet.

That sort of emerges over time as the system interacts with the real world.

So, I think the mind is an emergent property of the brain plus environments interacting.

 

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com


AI at the Edge: A GigaOm Research Byte

Artificial intelligence (AI), primarily in the form of machine learning (ML), is making increasing inroads into our lives.


There are several primary reasons for this:

  1. The rapidly increasing capability of computers used to build and train ML models.
  2. Greater data-capturing ability across the compute environment, often in the form of inexpensive sensors embedded in everyday consumer, business, and industrial products.
  3. The development of new algorithms and approaches that improve the accuracy of ML applications.
  4. The creation of software toolkits that make building and training ML applications substantially easier, and therefore less expensive.

In addition to these four ‘truths’, there are two other, often overlooked factors that are equally important in bringing AI into our lives.


These factors are not about where AIs are built and trained, but where they are deployed and used:

  • A reduction in cost, and increase in performance, of chips doing AI inference “at the edge.”
  • The development of middleware allowing a broader range of applications to run seamlessly on a wider variety of chips.

It is these final two developments that will allow AI to enhance our lives in countless new ways and enable AI in our pockets, cars, houses, and a host of other places.

This report explores these latter two factors, ignoring how AI is built and trained while focusing on the methods by which AI impacts our lives.


It explores the natural architectural migration of AI from central, powerful computers where an AI algorithm or application may have historically been built, trained, and used, to an edge model.

In the edge model, the AI compute happens either on a user device or somewhere in the network stack beneath the traditional cloud, perhaps on an edge server.


This leads to a new AI model that is match-fit for what is to come: building and training will mainly continue on ever-more-powerful (and power-hungry) cloud-based computers, while inference will be performed at or near the device edge, running on ever-more-capable (but less power-hungry) chips.
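As a concrete sketch of this cloud-train, edge-infer split, the snippet below runs a pre-trained model locally with ONNX Runtime, one common path for on-device inference; the model file name and input shape are assumptions for illustration:

    import numpy as np
    import onnxruntime as ort  # lightweight runtime often used at the edge

    # Hypothetical model file: trained in the cloud, shipped to the device.
    session = ort.InferenceSession("model.onnx")

    # Build an input matching the model's expected shape (assumed here).
    input_name = session.get_inputs()[0].name
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Inference happens locally: no round trip to a data center.
    outputs = session.run(None, {input_name: x})
    print(outputs[0].shape)

The design choice is exactly the one the report describes: heavy training stays in the cloud, while the latency-sensitive inference step moves next to the data.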

This foundational change in the AI architecture will be the single biggest driver in the advance of AI at scale.


This new architecture has several advantages over a highly centralized or cloud model, specifically:

  • More scalable
  • Faster
  • Lower cost
  • More secure
  • Lower power

There are tradeoffs in this approach, including the fundamental constraints of the chipset and future upgradability.

Further, there are still several outstanding questions about this shift that only time will answer:

  • How far to the edge will AI compute finally be pushed?
  • Which chip design price/performance combinations will prove to be the most popular?
  • How disposable will the chips of the future be?

It should be noted that there are use cases where this model of centralized training and edge inference will not be appropriate; cases where decision latency and power consumption are not factors.


One can imagine, for instance, that a large and expensive medical device might ship data back to a central location to be processed and analyzed on a time scale (perhaps measured in seconds or minutes) that would be unacceptable in another application, such as a self-driving car.

We discuss these exceptions as well.

The final part of this report briefly explores the societal impact of this change in architecture.

As the saying goes, often attributed to Marshall McLuhan: “We shape our tools and then the tools shape us.”

We are the generation that is shaping the digital tools of tomorrow, and it is worth reflecting on how they might shape us in return.

Source: gigaom.com

Voices in AI – Episode 78: A Conversation with Alessandro Vinciarelli

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese.

Today our guest is Alessandro Vinciarelli.

He is a full professor at the University of Glasgow.

He holds a Ph.D. in applied mathematics from the University of Bern. Welcome to the show, Alessandro.

A-1: Alessandro Vinciarelli: Welcome. Good morning.

 

Work in Artificial Intelligence

Q-1: Tell me a little bit about the kind of work you do in artificial intelligence. 

I work in a particular domain called social signal processing, which is the branch of artificial intelligence that deals with social and psychological phenomena.

We can think of the goal of this particular part of the domain as trying to read the mind of people, and through this to interact with people in the same way as people do with one another.


Subtle Social Signals

Q-2: That is like picking up on subtle social cues that people naturally do, teaching machines to do that?

Exactly. At the core of this domain, there are what we call social signals that are nonverbal behavioral cues that people naturally exchange during their social interactions.

We talk here about, for example, facial expressions, spontaneous gestures, posture, and, in the broadest sense, the way of speaking – not what people say, but how they say it.

The core idea is that basically, we can see facial expressions with our eyes, can hear the way people speak with our ears… and so it is also possible to sense these nonverbal behavioral cues with common sensors – like cameras, microphones, and so on.

Through automatic analysis of these signals and the application of artificial intelligence approaches, we can map the data we extract from images, audio recordings, and so on into social cues and their meaning for the people involved in an interaction.

Commonness of Social Cues

Q-3: I guess implicit in that is an assumption that there’s a commonness of social cues across the whole human race? Is that the case?

Yes. Let’s say social signals are the point where nature meets nurture.

What does it mean?

It means that in the end, it’s something that is intimately related to our body, to our evolution, to our very natural being.

And in this sense, we all have the same expressive means at our disposal: we all have the same way of speaking, the same voice, the same phonetic apparatus.

The face is the same for everybody.

We have the same disposition of muscles with which to produce a facial expression.

The body is the same for everybody. So, everything from the way we talk to the way we use our bodies is the same for all people around the world.

However, at the same time as we are a part of society, part of a context, we somewhat learn from others to express specific meaning, like for example a friendly attitude or a hostile attitude or happiness and so on, in a way that somewhat matches the others.

To give an example of how this can work: when I moved to the U.K. (I’m originally from Italy), I started to teach at this university.

A teaching inspector came to see me and told me, “Well, Alessandro, you have to move your arms a little bit less, because you sound very aggressive.

You look very aggressive to the students.”

You see, in Italy, it is quite normal to move hands a lot, especially when we communicate in front of an audience.

However, here in the U.K., when people use their arms – because everybody around the world does it – I have to do it in a more moderate way, in a more, let’s say, British way, in order to not sound aggressive.

So, you see, gestures communicate all over the world.

However, the accepted intensity you use changes from one place to the other.


Practical Applications of AI

Q-4: What are some of the practical applications of what you’re working on?

Well, it is quite an exciting time for the community working on these types of topics.

After the very pioneering years, if we look at the history of this particular branch of artificial intelligence, we can see that roughly the early 2000s was a very pioneering time.

Then the community was established more or less between the late 2000s and three or four years ago when the technology started to work pretty well.

And now we are at the point where we start seeing applications of these technologies initially developed at the research level in the laboratories in the real world.

To give an idea, think of today’s personal assistants that can not only understand what we say and what we ask but also how we express our requests.

Think of the many animated characters that can interact with us – virtual agents, social robots, and so on.

They are slowly entering into reality and interacting with people like people do – through gestures, through facial expressions, and so on.

We see more and more companies that are involved and active in these types of domains.

For example, we have systems that manage to recognize the emotions of people through sensors that can be carried like a watch on the wrist.

We have very interesting systems.

I collaborate in particular with a company called Neurodata Lab that analyzes the content of multimedia material, trying to get an idea of its emotional content.

That can be useful in any type of video-on-demand service.

There is a major push toward human-computer interfaces, or more generally human-machine interfaces, that can figure out how we feel in order to intervene and interact with us appropriately.

These are a few major examples.


Non-verbal Communication

Q-5: So, there’s voice, which I guess you could use over a telephone to determine some emotional state. And there are facial expressions. And there are other physical expressions. Are there other categories beyond those three that bifurcate or break up the world when you’re thinking of different kinds of signals?

Yes, somewhat.

The very fact that we are alive and have a body somewhat forces us to have nonverbal behavioral cues, as they are called, and to communicate through our body.

And even if you try not to communicate, that becomes somewhat of a cue and becomes a form of communication.

And there are so many nonverbal behavioral cues that psychologists group them into five fundamental classes.

One is whatever happens with the head.

Facial expressions, we’ve mentioned, but there are also movements of the head, shaking, nodding, and so on.

Then we have the posture.

Now at this moment, we are talking into a microphone.

But, for example, when you talk to people, you tend to face them. You can talk to them by not facing them, but the type of impression would be totally different.

Then we have gestures. When we talk about gestures, we talk about the spontaneous movements we make.

So, it’s not like the OK gesture with the thumb. It’s not like pointing to something.

These have a pretty specific meaning.

For example, self-touching… that typically communicates some kind of discomfort.

Or the movements we make when we speak: from a cognitive point of view, speaking and gesturing form a bimodal unit, so they get lumped together.

Then we have the way of speaking, as I mentioned.

Not what we say, but how we say it.

So, the sound of the voice, and so on.

Then there is appearance, everything we can do in order to change our appearance.

So, for example, the attractiveness of the person, but also the kind of clothes you wear, the type of ornaments you have, and so on.

And the last one is the organization of space.

For example, in a company, the more important you are, the bigger your office is.

So space from that point of view communicates a form of social verticality.

Similarly, we modulate our distances with respect to other people, not only in physical terms but also in social terms.

The closer a person is to us from a social point of view, the closer we let them come from a physical point of view.

So, these are the five wide categories of social signals that psychologists fundamentally recognize as the most important.


Training Artificial Intelligence

Q-6: Well, as you go through them, I guess I can see how AI would be used. They’re all forms of data that could be measured. So, presumably, you can train an artificial intelligence on them. 

That is exactly the core idea of the domain and of the application of artificial intelligence in these types of problems.

So, the point is that to communicate with others, to interact with others, we have to manifest our inner state through our behavior – through what we do.

Because we cannot imagine communicating something that is not observable… Whatever is observable, meaning it is accessible to our senses, is something that is accessible to artificial sensors.

Once you can measure, once you can extract data about something, that is where artificial intelligence comes into play.

Once you can extract the data and analyze it automatically, you can automatically infer information about the social and psychological phenomena taking place from the data you managed to capture.
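The pipeline Alessandro describes (sense, extract features, infer a social state) can be sketched in a few lines; the features and labels below are invented stand-ins for real annotated recordings:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented training data: each row holds nonverbal features extracted
    # from a recording (mean pitch in Hz, speaking rate, gesture energy),
    # with an annotated social cue as the label (0 = calm, 1 = agitated).
    X_train = np.array([
        [120.0, 3.1, 0.2],
        [180.0, 5.4, 0.9],
        [130.0, 3.5, 0.3],
        [175.0, 5.0, 0.8],
    ])
    y_train = np.array([0, 1, 0, 1])

    model = LogisticRegression().fit(X_train, y_train)

    # A new observation captured by the same sensors:
    x_new = np.array([[170.0, 4.8, 0.7]])
    print(model.predict(x_new))  # inferred state from observable behavior

Real systems extract far richer features from cameras and microphones, but the sense-measure-infer shape of the pipeline is the same.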

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com

Voices in AI – Episode 77: A Conversation with Nicholas Thompson

About this Episode

Episode 77 of Voices in AI features host Byron Reese and Nicholas Thompson discussing AI, humanity, social credit, as well as information bubbles.

Nicholas Thompson is the editor in chief of WIRED magazine, contributing editor at CBS, co-founder of The Atavist, and also worked at The New Yorker and authored a Cold War-era biography.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

 

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today my guest is Nicholas Thompson. He is the editor-in-chief of WIRED magazine. He’s also a contributing editor at CBS which means you’ve probably seen him on the air talking about tech stories and trends. He also co-founded The Atavist, a digital magazine publishing platform. Prior to being at WIRED, he was a senior editor at The New Yorker and editor of NewYorker.com. He also published a book called The Hawk and the Dove, which is about the history of the Cold War. Welcome to the show Nicholas.

Nicholas Thompson: Thanks, Byron.

How are you doing?

I’m doing great. So… artificial intelligence, what’s that all about?

(Laughs) It’s one of the most important things happening in technology right now.

So do you think it really is intelligent, or is it just faking it?

What is it like from your viewpoint?

Is it actually smart or not?

Oh, I think it’s definitely smart.

I think that artificial intelligence, if you define it as machines making independent decisions, is very smart right now and soon to get even smarter.


Well, it always sounds like I’m just playing what they call semantic gymnastics or something.

But does the machine actually make a decision, or does it decide no more than your clock decides to advance the minute hand one minute?

The computer is as deterministic as that clock. It doesn’t really decide anything; it just is a giant clockwork, isn’t it?

Right.

I mean that gets you into about 19 layers of a really complicated discussion.

I would say ‘yes’ in a way it is like a clock.


But in other ways, machines are making decisions that are totally independent of the instructions or the data that were initially fed to them; they are finding patterns that humans won’t see and couldn’t have coded in.

So in that way, it becomes quite different from a clock.

I’m intrigued by that.

I mean the compass points to the north.

It doesn’t know which way north is.

That would be giving it too much credit.

But it does something that we can’t do: it finds magnetic north. So really, is the compass intelligent, by the way you see the world?

Is the compass intelligent by the way I see the world?

Well, the compass is…


I mean, one of the issues here is that ‘artificial intelligence’ uses two words that have very complicated meanings, and their definitions evolve as we learn more about artificial intelligence.

And not only that, but the definition of artificial intelligence and the way it’s used change constantly, both as our technology evolves, as it learns to do new things, and as it develops its brand value.

So back to your initial question, “Is a compass that points to the north intelligent?”

It is intelligent in the sense that it’s adding information to our world, but it’s not doing anything independent of the person who created it, who built the tools, and who imagined what it would do.

You build a compass you know that it’s going to point north, you put the pieces inside of it, [and] you know it will do that.

It’s not breaking outside of the box of the initial rules that were given to it and the promise of artificial intelligence is that it is breaking out of that box.

So, I’d like to really understand that a little more.

Like if I buy a Nest learning thermostat and over time I’m like, ‘oh I’m too hot, I’m too cold, I’m too cold,’ and it “figures it out,” but how is it breaking out of what it knows?

Well, what would be interesting about a Nest thermostat (I don’t know the details of how a Nest thermostat works, but) is that it is looking at all the patterns of when you turn on your heat and when you don’t…


If you program a Nest thermostat and you say, please make the house hotter between 6:00 in the morning and 10:00 at night, that’s relatively simple.

If you just install a Nest thermostat and then it watches you and follows your patterns and then reaches the same conclusion, it has ended up at the same output, but it’s done it in a different way, which is more intelligent, right?
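A hedged sketch of the contrast Nicholas draws: the first function below is the explicitly programmed rule, while the second learns a similar schedule just by watching when the occupant turned the heat on (the observations are invented):

    from collections import Counter

    # Programmed rule: heat between 6:00 and 22:00, by explicit instruction.
    def programmed_heat(hour: int) -> bool:
        return 6 <= hour < 22

    # Learned rule: count the hours at which the occupant manually turned
    # the heat on, then heat during any hour seen often enough.
    observations = [6, 7, 7, 8, 18, 19, 19, 20, 21, 6, 7, 18]  # invented
    counts = Counter(observations)

    def learned_heat(hour: int, threshold: int = 2) -> bool:
        return counts[hour] >= threshold

    # Same output, reached two different ways:
    print(programmed_heat(19), learned_heat(19))

The outputs can coincide, but only the second version was never told the schedule, which is the sense in which it is ‘more intelligent.’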

Well that’s really the question isn’t it?

The reason I dwell on these things is not to count angels dancing on the heads of pins.

But to me this kind of speaks to the ultimate limit of what this technology can do.

Like if it is just a giant clockwork, then you have to come to the question, ‘Is that what we are?

Are we just a giant clockwork?’ If we’re not and it is, then there are limits to what it can do.

If we are and it is or we’re not and it’s not, then maybe someday it can do everything we can do.

Do you think that someday it can do everything we can do?

Yes. I thought this might be where you were going and this is where it gets so interesting.

And that was where in my initial answer I was starting to head in this direction, but my instinct is that we are like a giant clock, an extremely complex clock and a clock that’s built on rules that we don’t understand and won’t understand for a long time,


and that is built on rules that defy the way we normally program rules into clocks and calculators, but that essentially we are reducible to some form of math,

and with infinite wisdom we could reach that: that there isn’t a special, spiritual, unknowable element in the box…

Let me pause right there.

Let’s put a pin in that word ‘spiritual’ for a minute, but I want to draw attention to when I asked you if AI is just a clockwork, you said “No it’s more than that,” and if I ask you if a human’s a clockwork, you say “yeah I think so.”

Well that’s because I was taking your definition of the clock, right?

So I think what you said a minute ago is really where it’s at — which is: either we are clocks and the machines are clocks, or we are and they’re not, or we’re not and they are, or neither of us is. There are four possibilities there.


And my instinct is that if we’re going to define it that way, I’m going to define clocks in an incredibly broad sense, meaning mathematical reasoning, including mathematics we don’t understand today, and I’ll make the argument that both humans and the machines we’re creating are clocks.

If we’re thinking of clocks in a much narrower sense, which is just a set of simple instructions input/output, then machines can go beyond that and humans can go beyond that too.
But no matter how we define the clocks, I’m putting the humans and the machines in the same category.

So, depending on what your base definitions are, either humans and machines are both category A or they’re both not category A; either way, there isn’t any fundamental difference between humans and machines.

 

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com

Voices in AI – Episode 76: A Conversation with Rudy Rucker

About this Episode

Episode 76 of Voices in AI features host Byron Reese and Rudy Rucker discussing the future of AGI, the metaphysics involved in AGI, and whether the future will be for humanity’s good or ill.

Rudy Rucker is a mathematician, a computer scientist, and a writer of fiction and nonfiction, with awards for the first two books in his Ware Tetralogy.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese.

Today my guest is Rudy Rucker.

He is a mathematician, a computer scientist, and a science fiction author.

He has written books of fiction and nonfiction, and he’s probably best known for his novels in the Ware Tetralogy, which consists of Software, Wetware, Freeware, and Realware.

The first two of those won Philip K. Dick awards. Welcome to the show, Rudy.

Rudy Rucker: It’s nice to be here Byron.

This seems like a very interesting series you have and I’m glad to hold forth on my thoughts about AI.

Wonderful. I always like to start with my Rorschach question which is: What is artificial intelligence? And why is it artificial?

Well, a good working definition has always been the Turing test.

If you have a device or program that can convince you that it’s a person, then that’s pretty close to being intelligent.


So it has to master conversation?

It can do everything else, it can paint the Mona Lisa, it could do a million other things, but if it can’t converse, it’s not AI?

No those other things are also a big part of it.

You’d want it to be able to write a novel, ideally, or to develop scientific theories—to do the kinds of things that we do, in an interesting way.

Well, let me try a different tack, what do you think intelligence is?

I think intelligence is to have a sort of complex interplay with what’s happening around you.

You don’t want the old cliché of the robotic voice, or the screen with capital letters on it, not even able to use contractions: “do not help me.”


You want something that’s flexible and playful in intelligence.

I mean even in movies when you look at the actors, you often will get a sense that this person is deeply unintelligent or this person has an interesting mind.

It’s a richness of behavior, a sort of complexity that engages your imagination.

And do you think it’s artificial?

Is artificial intelligence actual intelligence or is it something that can mimic intelligence and look like intelligence, but it doesn’t actually have any, there’s no one actually home?

Right, well I think the word artificial is misleading.

You asked me before the interview about my being friends with Stephen Wolfram, and one of Wolfram’s points has been that any natural process can embody universal computation.

Once you have universal computation, it seems like, in principle, you might be able to get intelligent behavior emerging even if it’s not programmed.

So then, it’s not clear that there’s some bright line that separates human intelligence from the rest of the intelligence.

I think when we say “artificial intelligence,” what we’re getting at is the idea that it would be something that we could bring into being, either by designing or probably more likely by evolving it in a laboratory setting.

So, on the Stephen Wolfram thread, his view is that everything’s computation, and that you can’t really say there’s much difference between a human brain and a hurricane, because what’s going on in each is essentially a giant clockwork running its program; it’s all computational equivalence, all kind of the same in the end. Do you subscribe to that?

Yeah, I’m a convert.

I wouldn’t use the word ‘clockwork’ that you use because that already slips in an assumption that computation is in some way clunky and with gears and teeth, because we can have things—

But it’s deterministic, isn’t it?

It’s deterministic, yes, so I guess in that sense it’s like clockwork.

So Stephen believes, and you hate to paraphrase something as big as like his view on science, but he believes that everything is—not a clockwork, I won’t use that word—but everything is deterministic.

But, even the most deterministic things, when you iterate them, become unpredictable, and they’re not unpredictable inherently, like from a universal standpoint.

They’re unpredictable because of how finite our minds are.

They’re in practice unpredictable?

Correct.

So, take a lot of natural processes. When you take Physics I, you say: oh, if I fire an artillery shot I can predict where it’s going to land, because it’s going to travel along a perfect parabola, and I can just work it out on the back of an envelope in a few seconds.

And then when you get into reality, well, they don’t actually travel on perfect parabolas; they follow an odd-shaped curve due to air friction, which is not linear: it depends on how fast they’re going.

And then you slip into saying, “Well, I really would have to simulate this.”

And then when you get into saying you have to predict something by simulating the process, then the event itself is simulating itself already, and in practice, the simulation is not going to run appreciably faster than just waiting for the event to unfold, and that’s the catch.
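Rudy’s artillery example in code: the drag-free parabola has a closed form, but with nonlinear air resistance the practical recourse is to step the flight forward in time, that is, to simulate it (all parameters are illustrative):

    import math

    # Projectile with quadratic air drag: no simple closed-form trajectory,
    # so we integrate the motion step by step.
    g, k, dt = 9.81, 0.02, 0.01      # gravity, drag coefficient, time step
    angle, speed0 = 0.8, 50.0        # launch angle (rad) and speed (m/s)
    vx, vy = speed0 * math.cos(angle), speed0 * math.sin(angle)
    x, y = 0.0, 0.0

    while y >= 0.0:
        v = math.hypot(vx, vy)       # drag scales with speed squared
        vx -= k * v * vx * dt
        vy -= (g + k * v * vy) * dt
        x += vx * dt
        y += vy * dt

    print(f"Landed near x = {x:.1f} m; the ideal parabola predicts farther")

The simulation is doing nothing cleverer than the shell itself; it is the event replayed in arithmetic, which is exactly Rudy’s catch.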


We can take a natural process and it’s computational in the sense that it’s deterministic, so you think well, cool, I’ll just find out the rule it’s using and then I’ll use some math tricks and I’ll predict what it’s going to do.

For most processes, it turns out, there aren’t any quick shortcuts at all.

Alan Turing worked on this way back when he proved that you can’t effectively get extreme speed-ups of universal processes.

So then we’re stuck with saying: maybe it’s deterministic, but we can’t predict it. And going slightly off on a side thread here, this question of free will always comes up, because we say, “Well, we’re not like deterministic processes, because nobody can predict what we do.”

And the thing is if you get a really good AI program that’s running at its top level, then you’re not going to be able to predict that either.

So, we kind of confuse free will with unpredictability, but actually, unpredictability’s enough.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com

Voices in AI – Episode 75: A Conversation with Kevin Kelly

About this Episode

Episode 75 of Voices in AI features host Byron Reese and Kevin Kelly discussing the brain, the mind, what it takes to make AI, and Kevin’s thoughts on its inevitability.

Kevin has written books such as ‘New Rules for the New Economy’, ‘What Technology Wants’, and ‘The Inevitable’. Kevin also co-founded Wired, an internet and print magazine of tech and culture.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

 

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese.

Today I am so excited we have as our guest, Kevin Kelly.

You know when I was writing the biography for Kevin, I didn’t even know where to start or where to end.

He’s perhaps best known for starting Wired magazine a quarter of a century ago, but that is just one of many, many things in an amazing career.

He has written a number of books: New Rules for the New Economy, What Technology Wants, and most recently The Inevitable, where he talks about the immediate future.

I’m super excited to have him on the show, welcome Kevin.

 

Kevin Kelly: It’s a real delight to be here, thanks for inviting me.


So what is inevitable?

There’s a hard version and a soft version, and I kind of adhere to the soft version.

The hard version is a kind of totally deterministic world in which, if we rewound the tape of life, it all unfolds exactly as it has, and we still have Facebook and Twitter, and we have the same president and so forth.

The soft version is to say that there are biases in the world, in biology as well as in its extension into technology, and that these biases tend to shape some of the large forms that we see in the world, while leaving the particulars, the specifics, the species, completely unpredictable, stochastic, and random.

So that would say things like: on any planet that has water and life, you’ll find fish; if you rewound the tape of life, you’d probably get flying animals again and again; but a specific bird, a robin, is not inevitable.

And the same thing with technology. Any planet that discovers electricity and makes wires will have telephones.

So telephones are inevitable, but the iPhone is not. And the internet’s inevitable, but Google’s not.

AI’s inevitable, but the particular variety or character, the specific species of AI, is not.

That’s what I mean by inevitable—that there are these biases that are built by the very nature of chemistry and physics, that will bend things in certain directions.


And what are some examples of those that you discuss in your book?

So, technology is basically an extension of the same forces that drive life; a kind of accelerated evolution is what technology is.

So if you ask what the larger forces in evolution are: we have this movement towards complexity.

We have a movement towards diversity; we have a movement towards specialization; we have a movement towards mutualism.

Those also are happening in technology, which means that all things being equal, technology will tend to become more and more complex.

The idea that there’s any kind of simplification going on in technology is completely erroneous; there isn’t.

It’s not that the iPhone is any simpler.


There’s a simple interface.

It’s like an egg: you have a very simple interface, but inside it’s very complex.

The inside of an iPhone continues to get more and more complicated, so there is a drive whereby, all things being equal, technology will become more complex, and year by year more and more specialized.

Take the history of technology in photography: there was one camera, one kind of camera.

Then there was a special kind of camera for high speed; maybe another kind of camera that could work underwater; maybe a kind that could do infrared; and then eventually we would have a high-speed, underwater, infrared camera.

So, all these things become more and more specialized and that’s also going to be true about AI, we will have more and more specialized varieties of AI.


So let’s talk a little bit about [AI].

Normally the question I launch this with—and I heard your discourse on it—is: What is intelligence?

And in what sense is AI artificial?

Yes. So the big hairy challenge with that question is that we humans, collectively as a species at this point in time, have no idea what intelligence really is.

We think we know when we see it, but we don’t really, and as we try to make artificial synthetic versions of it, we are, again and again, coming up to the realization that we don’t really know how it works and what it is.

Our best guess right now is that there are many different subtypes of cognition that interact with each other, are codependent on each other, and collectively form the total output of our minds, and of course other animal minds.

I think the best way to think of this is that we have a ‘zoo’ of different types of cognition, different types of solving things, of learning, of being smart, and that collection varies a little bit from person to person and a lot between different animals in the natural world…


That collection is still being mapped, and we know that there’s something like symbolic reasoning.

We know that there’s a kind of deductive logic, that there’s something about spatial navigation as a kind of intelligence.

We know that there’s mathematical type thinking; we know that there’s emotional intelligence; we know that there’s perception; and so far, all the AI that we have been ‘wowed’ by in the last 5 years is really all a synthesis of only one of those types of cognition, which is perception.

So all the deep learning neural net stuff that we’re doing is really just varieties of perception, of perceiving patterns, whether they’re audio patterns or image patterns; that’s really as far as we’ve gotten.

But there are all these other types, and in fact, we don’t even know what all the varieties of types [are].
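As a rough sketch of what perception-as-pattern-finding means at the smallest scale (an illustrative toy, not anything from the episode), the operation below is the one deep networks stack by the thousands: slide a small filter over a signal and respond wherever the pattern it encodes appears. The signal and filter values are invented for the example.

```python
import numpy as np

# A toy version of the pattern-matching at the heart of deep learning:
# convolve a tiny edge-detecting filter over a 1-D signal. Deep nets
# stack thousands of *learned* filters like this; here the filter is fixed.

signal = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0], dtype=float)
edge_filter = np.array([-1.0, 1.0])  # responds to upward steps

# Reverse the filter so np.convolve computes a sliding correlation.
response = np.convolve(signal, edge_filter[::-1], mode="valid")
print(response)  # positive where the signal rises, negative where it falls
```

Audio and image perception are, at bottom, many layers of exactly this kind of filtering, with the filters learned from data rather than written by hand.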


We don’t know how we think, and I think one of the consequences of AI, trying to make AI, is that AI is going to be the microscope that we need to look into our minds to figure out how they work.

So it’s not just that we’re creating artificial minds, it’s the fact that that creation—that process—is the scope that we’re going to use to discover what our minds are made of.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com

Voices in AI – Episode 69: A Conversation with Raj Minhas

About this Episode

Episode 69 of Voices in AI features host Byron Reese and Dr. Raj Minhas talking about AI, AGI, and machine learning.

They also delve into explainability and other quandaries AI is presenting. Raj Minhas has a Ph.D. and an MS in Electrical and Computer Engineering from the University of Toronto, and a BE from Delhi University.

Raj is also the Vice President and Director of the Interactive and Analytics Laboratory at PARC.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today I’m excited that our guest is Raj Minhas, who is Vice President and Director of the Interactive and Analytics Laboratory at PARC, which we used to call Xerox PARC. Raj earned his Ph.D. and MS in Electrical and Computer Engineering from the University of Toronto, and his BE from Delhi University. He has eight patents and six patent-pending applications. Welcome to the show, Raj!

Raj Minhas: Thank you for having me.

I like to start off just by asking a really simple question, or what seems like a very simple question: what is artificial intelligence?

Okay, I’ll try to give you two answers.

One is a flip response, which is: if you tell me what intelligence is, I’ll tell you what artificial intelligence is. But that’s not very useful, so I’ll try to give you my functional definition.

I think of artificial intelligence as the ability to automate cognitive tasks that we humans do, so, at a high level, that includes the ability to process information, make decisions based on it, and learn from it.

That functional definition is useful enough for me.
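That functional definition scales down to a toy example. The sketch below, with entirely hypothetical numbers and labels, ‘learns from information’ by fitting a threshold to labeled examples and then ‘makes decisions’ from new information automatically.

```python
# The functional definition in miniature: learn a decision rule from
# information, then use it to automate a (very small) cognitive task.
# The values and labels are hypothetical, chosen only for illustration.

examples = [(2.0, "reject"), (3.5, "reject"), (6.0, "accept"), (8.0, "accept")]

# "Learning": place the threshold midway between the two classes.
rejects = [x for x, label in examples if label == "reject"]
accepts = [x for x, label in examples if label == "accept"]
threshold = (max(rejects) + min(accepts)) / 2  # 4.75

# "Deciding": apply the learned rule to new inputs, no human in the loop.
def decide(x):
    return "accept" if x >= threshold else "reject"

print(decide(5.1))  # accept
print(decide(1.2))  # reject
```

Nothing here is deep learning, but it already satisfies the definition: information in, a rule learned, decisions automated.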

 

Well, I’ll engage on each of those, if you’ll just permit me. I think even given a definition of intelligence that everyone agreed on, which doesn’t exist, artificial is still ambiguous. Do you think of it as artificial in the sense that artificial turf really isn’t grass, so it’s not really intelligence, it just looks like intelligence? Or, is it simply artificial because we made it, but it really is intelligent?

It’s the latter.

So if we can agree on what intelligence is, then artificial intelligence to me would be the classical definition, which is re-creating that outside the human body.

So, re-creating that by ourselves: it may not be re-created in the way it is created in our minds, in the way humans or other animals do it, but it’s re-created in that it achieves the same purpose; it’s able to reason in the same way, to perceive the world, to do problem-solving in that way.

So, without getting bogged down in what the mechanism behind our intelligence is, and whether that mechanism needs to be the same, artificial intelligence to me would be re-creating that ability.

 

Fair enough, so I’ll just ask you one more question along these lines. Using your definition, the ability to automate cognitive tasks, let me give you four or five things, and you tell me if they’re AI. AlphaGo?

Yes.

And then a step down from that, a calculator?

Sure, a primitive form of AI.

A step down from that: an abacus?

An abacus, sure, though it involves a human in its operation; maybe it’s on that boundary where it’s only partially automated, but yes.

What about an assembly line?

Sure, so I think…

And then I would say my last one, which is a cat food dish that refills itself when it’s empty? And if you say yes to that…

All of those things are intelligent to me, but some of them are very rudimentary. So, for example, look at animals.

On one end of the scale are humans, who can do a variety of tasks that other animals cannot, and on the other end of the spectrum you may have very simple, single-celled organisms; they may do things that I would find intelligent, but they may simply be responding to stimuli, and that intelligence may be very much encoded.

They may not have the ability to learn, so they may not have all the aspects of intelligence, but I think this is where it gets really hard to say what intelligence is.

Which is my flip response.

If you say: what is intelligence?

I can say I’m trying to automate that with artificial intelligence. So, if you include in your definition of intelligence, which I do, that the ability to do math implies intelligence, then automating that with an abacus is a way of doing it artificially, right?

You had been doing it in your head using whatever mechanism is in there; now you’re trying to do that artificially.

So it is a very hard question that seems so simple, but at some point, in order to be logically consistent, you have to say yes: if that’s what I mean, that’s what I mean, even though the examples can get very trivial.
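That bottom rung, the cat food dish, is worth seeing in code because it shows how little machinery ‘encoded’, stimulus-response intelligence needs. This is a hypothetical sketch (the dish, thresholds, and readings are invented): one fixed rule, no learning at all.

```python
# The self-refilling cat food dish as pure stimulus-response: the
# "intelligence" is entirely encoded in one fixed rule. There is no
# learning, which is exactly why it sits at the boundary of the
# definition. All values here are hypothetical.

REFILL_BELOW = 20   # grams remaining that trigger a refill
REFILL_TO = 100     # grams in the dish after refilling

def dish_controller(current_grams):
    """Fixed rule: if the level drops below the threshold, refill."""
    if current_grams < REFILL_BELOW:
        return REFILL_TO       # actuator dispenses food
    return current_grams       # stimulus absent, do nothing

level = 100
for eaten in [30, 30, 25, 10]:  # the cat eats through the day
    level = dish_controller(level - eaten)
    print(level)                # 70, 40, 100 (refilled), 90
```

By the functional definition above this automates a cognitive task, which is exactly why Raj accepts the trivial cases rather than give up logical consistency.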

 

Well, I guess then, and this really is the last question along those lines: if everything falls under your definition, then what’s different now? What’s changed? I mean, a word that means everything means nothing, right?

That is part of the problem, but I think what is becoming more and more different is the kinds of things you’re able to do, right?

So we are able to reason now artificially in ways that we were not able to before.

Even if you take the narrower definition that people tend to use, which is around machine learning, machines are able to use that to perceive the world in ways we were not able to before, and so what is changing is that ability to do more and more of those things without necessarily relying on a person at the point of doing them.

We still rely on people to build those systems and teach them how to do those things, but we are able to automate a lot of that.

Obviously, artificial intelligence to me is more than machine learning, where you show a system a lot of data and it just learns a function, because AI includes the ability to reason about things, to be able to say, “I want to create a system that does X, and how do I do it?”

So can you reason about models, and come to some way of putting them together and composing them to achieve that task?
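One mechanical way to read ‘composing models’, sketched with hypothetical stage names: treat each model as a function, and a system that ‘does X’ as a pipeline that chains them so the output of each feeds the next.

```python
# "Reasoning about models and composing them," in the simplest mechanical
# reading: each model is a function, and a system is a chain of models
# whose inputs and outputs line up. Stage names are hypothetical.

def denoise(signal):             # model 1: clean the raw input
    return [x for x in signal if x is not None]

def extract_features(signal):    # model 2: summarize it
    return {"mean": sum(signal) / len(signal), "n": len(signal)}

def classify(features):          # model 3: decide
    return "high" if features["mean"] > 5 else "low"

def compose(*models):
    """Chain models so the output of each feeds the next."""
    def pipeline(x):
        for model in models:
            x = model(x)
        return x
    return pipeline

system = compose(denoise, extract_features, classify)
print(system([9, None, 8, 7]))  # "high"
```

The chaining itself is the easy part; the open problem Raj points at is automating the reasoning that chooses and orders the stages for a given goal.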

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Source: Voices in AI – Episode 69: A Conversation with Raj Minhas – Gigaom

Voices in AI – Episode 74: A Conversation with Dr. Kai-Fu Lee

About this Episode

Episode 74 of Voices in AI features host Byron Reese and Dr. Kai-Fu Lee discussing the potential of AI to disrupt job markets, the comparison of AI research and implementation in the U.S. and China, as well as other facets of Dr. Lee’s book “AI Superpowers”.

Dr. Kai-Fu Lee, previously president of Google China, is now the CEO of Sinovation Ventures.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

 

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today I am so excited my guest is Dr. Kai-Fu Lee. He is, of course, an AI expert. He is the CEO of Sinovation Ventures. He is the former President of Google China. And he is the author of a fantastic new book called “AI Superpowers.” Welcome to the show, Dr. Lee.

Kai-Fu Lee: Thank you Byron.

 

I love to begin by noting that AI is one of those terms that can mean so many things. And so, for the purpose of this conversation, what are we talking about when we talk about AI?

We’re talking about the advances in machine learning, in particular deep learning and related technologies, as they apply to artificial narrow intelligence, with a lot of opportunities for implementation, application and value extraction.

We’re not talking about artificial general intelligence, which I think is still a long way out.

 

So, confining ourselves to narrow intelligence, if someone were to ask you worldwide, not even getting into all the political issues, what is the state of the art right now? How would you describe where we are as a planet with narrow artificial intelligence?

I think we’re at the point of readiness for application. I think the greatest opportunity is the application of what’s already known.

If we look around us, we see very few of the companies, enterprises and industries using AI when they all really should be.

Internet companies use AI a lot, but it’s really just beginning to enter financial, manufacturing, retail, hospitals, healthcare, schools, education and so on.

It should impact everything, and it has not.

So, I think taking what’s been invented and getting it applied, implemented and monetized, that value creation, is a very clear, 100% certain opportunity we should embrace.

Now, there can be more innovations, inventions, breakthroughs… but even without those, I think we’ve got so much on our hands that’s not yet been fully valued and implemented into industry.

 

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com
