Category Archives for Artificial Intelligence (AI)

Voices in AI – Episode 62: A Conversation with Atif Kureishy

About this Episode

Episode 62 of Voices in AI features host Byron Reese and Atif Kureishy discussing AI, deep learning, and the practical examples and implications in the business market and beyond.

Atif Kureishy is the Global VP of Emerging Practices at Think Big, a Teradata company.

He also holds a B.S. in physics and math from the University of Maryland and an M.S. in distributed computing from Johns Hopkins University.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today my guest is Atif Kureishy.

He is the Global VP of Emerging Practices, covering AI and deep learning, at Think Big, a Teradata company.

He holds a BS in Physics and Math from the University of Maryland, Baltimore County, and an MS in distributed computing from Johns Hopkins University.

Welcome to the show, Atif.

Atif Kureishy: Welcome, thank you, appreciate it.

Q: 1

So I always like to start off by just asking you to define artificial intelligence.

Yeah, definitely an important definition, one that unfortunately is overused and stretched in many different ways.

Here at Think Big we actually have a very specific definition within the enterprise.

But before I give that, for me in particular, when I think of intelligence, that conjures up the ability to understand, the ability to reason, the ability to learn, and we usually equate that to biological systems or living entities.

And now, with the rise of what is probably more appropriately called machine intelligence, we’re applying the term ‘artificial’ to it, and the rationale is probably that machines aren’t living and they’re not biological systems.

So with that, the way we’ve defined AI, in particular, is: leveraging machine and deep learning to drive towards a specific business outcome.

And it’s about giving leverage to human workers, to enable higher degrees of assistance and higher degrees of automation.

And when we define AI in that way, we actually give it three characteristics.

The first of those characteristics is the ability to sense and learn: being able to understand massive amounts of data, demonstrate continuous learning, and detect patterns and signals within the noise, if you will.

And the second is being able to reason and infer, and that is driving intuition and inference with increasing accuracy, again to maximize a business outcome or a business decision.

And then ultimately it’s about deciding and acting, so actioning or automating a decision based on everything that’s understood, to drive towards more informed activities that are based on corporate intelligence.

So that’s kind of how we view AI in particular.

Q: 2

Well, I applaud you for having given it so much thought, and there’s a lot there to unpack.

You talked about intelligence being about understanding and reasoning and learning, and that was even in your three areas.

Do you believe machines can reason?

You know, over time, we’re going to start to apply algorithms and specific models to the concept of reasoning.

And so the ability to understand, the ability to learn, are things that we’re going to express in mathematical terms no doubt.

Does it give it human lifelike characteristics? That’s still something to be determined.

Q: 3

Well, I don’t mean to be difficult with the definition because, as you point out, most people aren’t particularly rigorous when it comes to it.

But if it’s about driving an outcome, take a cat food dish that refills itself when it’s low: it can sense that the food is low, and it can reason that it should put more food in.

And then it can act, releasing a mechanism that refills the food dish. Is that AI, in your understanding, and if not, why isn’t that AI?

Yeah, I mean, I think in some sense it checks a lot of the boxes, but the reality is about being able to adapt and understand what’s occurring.

For instance, if that cat comes out at certain times of the day, ensuring that meals are prepared in the right way and that they don’t sit out and become stale or spoiled in any way.

Those are signs of a more intelligent kind of capability, one that learns behaviors and anticipates how best to respond given the specific outcome it’s driving towards.

Q: 4

Got you. So now, to take that definition, your company is Think Big.

What do you think big about? What is Think Big and what do you do?

So looking back in history a little bit, Think Big was actually an acquisition that Teradata made several years ago, in the big data space, particularly around open source and consulting.

And over time, Teradata made several more acquisitions, and now we’ve brought all of those acquisitions together into a single group called Think Big Analytics.

And so what we’re particularly focused on is how do we drive business outcomes using advanced analytics and data science.

And we do that through a blend of approaches and techniques and technology frankly.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com

Voices in AI – Episode 61: A Conversation with Dr. Louis Rosenberg

About this Episode

Episode 61 of Voices in AI features host Byron Reese and Dr. Louis Rosenberg talking about AI and swarm intelligence. Dr. Rosenberg is the CEO of Unanimous AI. He also holds a B.S., M.S., and a Ph.D. in Engineering from Stanford.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese and today I’m excited that our guest is Louis Rosenberg.

He is the CEO at Unanimous A.I. He holds a B.S. in Engineering, an M.S. in Engineering, and a Ph.D. in Engineering all from Stanford. Welcome to the show, Louis.

Dr. Louis Rosenberg: Yeah, thanks for having me.

 

Q: 1

So tell me a little bit about why do you have a company? Why are you CEO of a company called Unanimous A.I.?

What is the unanimous aspect of it?

Sure. So, what we do at Unanimous A.I. is we use artificial intelligence to amplify the intelligence of groups rather than using A.I. to replace people.

And so instead of replacing human intelligence, we are amplifying human intelligence by connecting people together using A.I. algorithms.

So in layman’s terms, you would say we build hive minds. In scientific terms, we would say we build artificial swarm intelligence by connecting people together into systems.

Q: 2

What is swarm intelligence?

So swarm intelligence is a biological phenomenon that people have been studying, or biologists have been studying, since the 1950s.

And it is basically the reason why birds flock and fish school and bees swarm—they are smarter together than they would be on their own.

And the way they become smarter together is not the way people do it. They don’t take calls, they don’t conduct surveys, there’s no SurveyMonkey in nature.

The way that groups of organisms get smarter together is by forming systems, real-time systems with feedback loops so that they can essentially think together as an emergent intelligence that is smarter as a uniform system than the individual participants would be on their own.

And so the way I like to think of an artificial swarm intelligence or a hive mind is as a brain of brains.

And that’s essentially what we focus on at Unanimous A.I.: figuring out how to do that among people, even though nature has figured out how to do it among birds and bees and fish, and has demonstrated over hundreds of millions of years how powerful it can be.

Q: 3

So before we talk about artificial swarm intelligence, let’s just spend a little time really trying to understand what it is that the animals are doing.

So the thesis is, your average ant isn’t very smart, and even the smartest ant isn’t very smart, and yet collectively they exhibit behavior that’s quite intelligent.

They can do all kinds of things and forage and do this and that, and build a home and protect themselves from a flood and all of that. So how does that happen?

Yeah, so it’s an amazing process, and it’s worth taking one little step back and just asking ourselves, how do we define the term intelligence?

And then we can talk about how we can build a swarm intelligence.

And so, in my mind, the word intelligence could be defined as a system that takes in noisy input about the world and it processes that input and it uses it to make decisions, to have opinions, to solve problems and, ideally, it does it creatively and by learning over time.

And so if that’s intelligence, then there are lots of ways we can think about building artificial intelligence, which I would say is basically creating a technological system that does some or all of those things: takes in noisy input and uses it to make decisions, have opinions, and solve problems, ideally creatively and by learning over time.

Now, in nature, there have really been two paths by which nature has figured out how to do these things, how to create intelligence.

One path is the path we’re very, very familiar with, which is by building up systems of neurons.

And so, over hundreds of millions and billions of years, nature figured out that if you build these systems of neurons, which we call brains, you can take in information about the world and you can use it to make decisions and have opinions and solve problems and do it creatively and learn over time.

But what nature has also shown is that in many organisms—particularly social organisms—once they’ve built that brain and they have an individual organism that can do this on their own, many social organisms then evolve the ability to connect the brains together into systems.

So if a brain is a network of neurons where intelligence emerges, a swarm in nature is a network of brains that are connected deeply enough that a superintelligence emerges.

And by superintelligence, we mean that the brain of brains is smarter together than those individual brains would be on their own.

And as you described, it happens in ants, it happens in bees, it happens in birds, and fish.

And let me talk about bees because that happens to be the type of swarm intelligence that’s been studied the longest in nature.

And so, if you think about the evolution of bees, they first developed their individual brains, which allowed them to process information, but at some point their brains could not get any larger, presumably because they fly; a bee’s brain has to stay very small for it to keep flying.

In fact, a honeybee has a brain with fewer than a million neurons, and it’s smaller than a grain of sand.

And I know a million neurons sounds like a lot, but a human has 85 billion neurons. So however smart you are, divide that by 85,000 and that’s a honeybee.

So a single honeybee, very, very simple organism, and yet they have very difficult problems that they need to solve, just like humans have difficult problems.

And so the type of problem that is actually studied the most in honeybees is picking a new home to move into.

And by a new home, I mean, you have a colony of 10,000 bees and every year they need to find a new home because they’ve outgrown their previous home and that home could be a hole in a hollow log, it could be a hole at the side of a building, it could be a hole—if you’re unlucky—in your garage, which happened to me.

And so a swarm of bees is going to need to find a new home to move into. And, again, it sounds like a pretty simple decision, but actually, it’s a life-or-death decision for honeybees.

And so for the evolution of bees, the better decision that they can make when picking a new home, the better the survival of their species.

And so, to solve this problem, what colonies of honeybees do is they form a hive mind or a swarm intelligence and the first step is that they need to collect information about their world.

And so they send hundreds of scout bees out into the world to search 30 square miles to find potential sites, candidate sites that they can move into.

So that’s data collection. They’re out there sending hundreds of bees into the world searching for different potential homes; then they bring that information back to the colony, and now they have the difficult part: they need to make a decision, they need to pick the best possible site of the dozens of possible sites they have discovered.

Now, again, this sounds simple but honeybees are very discriminating house-hunters. They need to find a new home that satisfies a whole bunch of competing constraints.

That new home has to be large enough to store the honey they need for the winter. It needs to be ventilated well enough so they can keep it cool in the summer.

It needs to be insulated well enough so it can stay warm on cold nights. It needs to be protected from the rain, but also near good sources of water.

And also, of course, it needs to be well-located, near good sources of pollen.

And so it’s a complex multi-variable problem. This is a problem that a single honeybee with a brain smaller than a grain of sand could not possibly solve.

In fact, a human that was looking at that data would find it very difficult to use a human brain to find the best possible solution to this multi-variable optimization problem.

Or a human faced with a similar challenge, like finding the perfect location for a new factory, the perfect features of a new product, or the perfect location for a new store, would find it very difficult to arrive at a perfect solution.

And yet, rigorous studies by biologists have shown that honeybees pick the best solution from all the available options about 80% of the time.

And when they don’t pick the best possible solution, they pick the next best possible solution. And so it’s remarkable.

By working together as a swarm intelligence, they are able to make a decision that is optimized in a way that a human brain, which is 85,000 times more powerful, would struggle to match.

And so how do they do this? Well, they form a real-time system where they can process the data together and converge together on the optimal solution.

Now, they’re honeybees, so how do they process the data? Well, nature came up with an amazing way. They do it by vibrating their bodies.

And so biologists call this a “waggle dance,” because when people first started looking into hives, they saw these bees doing something that looked like dancing, because they were vibrating their bodies.

It looked like they were dancing, but really they were generating these vibrations, these signals that represent their support for the various home sites under consideration.

By having hundreds and hundreds of bees vibrating their bodies at the same time, they’re basically engaging in this multi-directional tug of war.

They’re pushing and pulling on a decision, exploring all the different options until they converge together in real-time on the one solution that they can best agree upon and it’s almost always the optimal solution.

And when it’s not the optimal solution, it’s the next best solution. So basically they’re forming this real-time system, this brain of brains that can converge together on an optimal solution and can solve problems that they couldn’t do on their own.

And so that’s the most well-known example of what a swarm intelligence is and we see it in honeybees, but we also see the same process happening in flocks of birds, in schools of fish, which allow them to be smarter together than alone.
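To make the “tug of war” idea concrete, here is a small Python sketch of swarm-style convergence. It is only a toy illustration of the general mechanism described above; the candidate-site qualities, the blending weights, and the update rule are all invented for the example and are not Unanimous A.I.’s actual algorithm.

```python
# Toy sketch of swarm-style convergence (illustrative only).
# Each "agent" holds a preference over candidate sites; every round it
# blends its own noisy evidence with the group's current overall support
# (the "tug of war"), until the group converges on one option.
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_sites = 200, 5

# Hypothetical ground-truth quality of each candidate site, seen only noisily.
quality = np.array([0.55, 0.70, 0.90, 0.60, 0.65])
prefs = rng.random((n_agents, n_sites))
prefs /= prefs.sum(axis=1, keepdims=True)        # each row is a preference distribution

for _ in range(50):
    noisy_obs = quality + rng.normal(scale=0.05, size=(n_agents, n_sites))
    group_pull = prefs.mean(axis=0)              # current collective support
    # Blend individual evidence with the group's pull, then renormalize.
    prefs = 0.6 * prefs + 0.2 * noisy_obs + 0.2 * group_pull
    prefs = np.clip(prefs, 1e-9, None)
    prefs /= prefs.sum(axis=1, keepdims=True)

print("Converged choice:", int(prefs.mean(axis=0).argmax()))  # usually site 2, the best one
```

Run repeatedly, this toy group almost always settles on the highest-quality site even though each individual agent only ever sees noisy evidence.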

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com

5 Common Misconceptions about AI

In recent years I have run into a number of misconceptions regarding AI, and sometimes when discussing AI with people from outside the field, I feel like we are talking about two different topics. This article is an attempt at clarifying what AI practitioners mean by AI, and where it currently stands.

The first misconception has to do with Artificial General Intelligence or AGI:

 

1. Applied AI systems are just limited versions of AGI

Despite what many think, the state of the art in AI is still far behind human intelligence. Artificial General Intelligence, i.e. AGI, has been the motivating fuel for all AI scientists from Turing to today.

Somewhat analogous to Alchemy, the eternal quest for AGI that replicates and exceeds human intelligence has resulted in the creation of many techniques and scientific breakthroughs.

This quest has also helped us understand facets of human and natural intelligence, and as a result, we’ve built effective algorithms inspired by our understanding and models of them.

However, when it comes to practical applications of AI, AI practitioners do not necessarily restrict themselves to pure models of human decision-making, learning, and problem-solving.

Rather, in the interest of solving the problem and achieving acceptable performance, AI practitioners often do what it takes to build practical systems.

At the heart of the algorithmic breakthroughs that resulted in Deep Learning systems, for instance, is a technique called back-propagation.
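For readers who want to see the mechanism rather than the math, here is a minimal back-propagation sketch in plain NumPy: a two-layer network trained on XOR. It is an illustration of the general idea only, with invented layer sizes and learning rate, and is not how production Deep Learning frameworks implement the technique.

```python
# Minimal back-propagation illustration: a tiny two-layer network on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back layer by layer
    d_out = (out - y) * out * (1 - out)        # gradient at the output layer (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)         # gradient at the hidden layer

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```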

This technique, however, is not how the brain builds models of the world. This brings us to the next misconception:

 

2. There is a one-size-fits-all AI solution.

A common misconception is that AI can be used to solve every problem out there, i.e., that the state of the art in AI has reached a level such that minor configurations of ‘the AI’ allow us to tackle different problems.

I’ve even heard people assume that moving from one problem to the next makes the AI system smarter, as if the same AI system were now solving both problems at the same time.

The reality is much different: AI systems need to be engineered, sometimes heavily, and require specifically trained models in order to be applied to a problem.

And while similar tasks, especially those involving sensing the world (e.g., speech recognition, image or video processing) now have a library of available reference models, these models need to be specifically engineered to meet deployment requirements and may not be useful out of the box.

Furthermore, AI systems are seldom the only component of AI-based solutions. It often takes many tailor-made, classically programmed components coming together to augment one or more AI techniques used within a system.

And yes, there is a multitude of different AI techniques out there, used alone or in hybrid solutions in conjunction with others, so it is incorrect to say:

 

3. AI is the same as Deep Learning

Back in the day, we thought the term artificial neural networks (ANNs) was really cool. Until, that is, the initial euphoria around their potential backfired, due to their poor scaling and their propensity for over-fitting.

Now that those problems have, for the most part, been resolved, we’ve avoided the stigma of the old name by “rebranding” artificial neural networks as “Deep Learning”.

Deep Learning or Deep Networks are ANNs at scale, and the ‘deep’ refers not to ‘deep thinking’ but to the number of hidden layers we can now afford within our ANNs (previously it was a handful at most; now they can number in the hundreds).

Deep Learning is used to generate models off of labeled data sets. The ‘learning’ in Deep Learning methods refers to the generation of the models, not to the models being able to learn in real-time as new data becomes available.

The ‘learning’ phase of Deep Learning models actually happens offline, needs many iterations, is time- and process-intensive, and is difficult to parallelize.
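As a concrete, purely illustrative sketch of what ‘deep’ and ‘offline learning’ mean in practice, the following Python snippet (assuming TensorFlow/Keras is installed) stacks many hidden layers and trains them offline on a labeled dataset; the layer count, width, and MNIST example are arbitrary choices for the illustration, not recommendations.

```python
# Illustrative sketch: "deep" = many hidden layers, "learning" = offline training.
import tensorflow as tf

def build_deep_net(n_hidden_layers=20, width=128):
    model = tf.keras.Sequential()
    # First hidden layer declares the input shape (flattened 28x28 images).
    model.add(tf.keras.layers.Dense(width, activation="relu", input_shape=(784,)))
    for _ in range(n_hidden_layers - 1):          # many hidden layers = "deep"
        model.add(tf.keras.layers.Dense(width, activation="relu"))
    model.add(tf.keras.layers.Dense(10, activation="softmax"))
    return model

model = build_deep_net()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# The "learning" happens here, offline, over many passes through a labeled
# dataset; once training ends, the model is static until it is retrained.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
model.fit(x_train, y_train, epochs=3, batch_size=128)
```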

Recently, Deep Learning models have also been used in online learning applications. Online learning in such systems is achieved using different AI techniques, such as Reinforcement Learning or online neuro-evolution.

A limitation of such systems is the fact that the contribution from the Deep Learning model can only be achieved if the domain of use can be mostly experienced during the offline learning period.

Once the model is generated, it remains static and not entirely robust to changes in the application domain.

A good example of this is in ecommerce applications: seasonal changes or short sale periods on ecommerce websites would require a Deep Learning model to be taken offline and retrained on sale items or new stock.

However, with platforms like Sentient Ascend that use evolutionary algorithms to power website optimization, large amounts of historical data are no longer needed to be effective; instead, neuro-evolution shifts and adjusts the website in real time based on the site’s current environment.
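To give a flavor of what an evolutionary approach to optimization looks like in this context, here is a toy Python sketch. The page elements, the simulated conversion rates, and the selection scheme are all invented for illustration and say nothing about how Sentient Ascend actually works.

```python
# Toy evolutionary optimization of a web page design (illustrative only).
import random

HEADLINES = ["Buy now", "Limited offer", "Free shipping"]
BUTTON_COLORS = ["red", "green", "blue"]

def simulated_conversion_rate(candidate):
    # Hypothetical stand-in for noisy live traffic measurements.
    rate = 0.02
    rate += 0.010 if candidate["headline"] == "Free shipping" else 0.0
    rate += 0.005 if candidate["button"] == "green" else 0.0
    return rate + random.gauss(0, 0.002)

def random_candidate():
    return {"headline": random.choice(HEADLINES), "button": random.choice(BUTTON_COLORS)}

def mutate(candidate):
    child = dict(candidate)
    key = random.choice(["headline", "button"])
    child[key] = random.choice(HEADLINES if key == "headline" else BUTTON_COLORS)
    return child

population = [random_candidate() for _ in range(10)]
for generation in range(20):
    # Evaluate (noisily), keep the fittest designs, and mutate them.
    scored = sorted(population, key=simulated_conversion_rate, reverse=True)
    parents = scored[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]

print("Best page design found:", max(population, key=simulated_conversion_rate))
```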

For the most part, though, Deep Learning systems are fueled by large data sets, and so the prospect of new and useful models being generated from large and unique datasets has fueled the misconception that…

 

4. It’s all about BIG data

It’s not. It’s actually about good data. Large, imbalanced datasets can be deceptive, especially if they only partially capture the data most relevant to the domain.

Furthermore, in many domains, historical data can become irrelevant quickly.

In high-frequency trading on the New York Stock Exchange, for instance, recent data is of much more relevance and value than, for example, data from before 2001, when the exchange had not yet adopted decimalization.

Finally, a general misconception I run into quite often:

 

5. If a system solves a problem that we think requires intelligence, that means it is using AI

This one is a bit philosophical in nature, and it does depend on your definition of intelligence. Indeed, Turing’s definition would not refute this.

However, as far as mainstream AI is concerned, a fully engineered system, say to enable self-driving cars, which does not use any AI techniques, is not considered an AI system.

If the behavior of the system is not the result of the emergent behavior of AI techniques used under the hood, if programmers write the code from start to finish, in a deterministic and engineered fashion, then the system is not considered an AI-based system, even if it seems so.

 

AI paves the way for a better future

Despite the common misconceptions around AI, the one correct assumption is that AI is here to stay and is indeed the window to the future.

AI still has a long way to go before it can be used to solve every problem out there and to be industrialized for wide-scale use.

Deep Learning models, for instance, take many expert PhD-hours to design effectively, often requiring elaborately engineered parameter settings and architectural choices depending on the use case.

Currently, AI scientists are hard at work on simplifying this task and are even using other AI techniques such as reinforcement learning and population-based or evolutionary architecture search to reduce this effort.

The next big step for AI is to make it creative and adaptive, while at the same time powerful enough to exceed human capacity in building models.

by Babak Hodjat, co-founder & CEO of Sentient Technologies

Source: gigaom.com

Voices in AI – Episode 60: A Conversation with Robin Hanson

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today my guest is Robin Hanson.

He is an author, and he is also the Chief Scientist over at Consensus Point. He’s an associate professor of economics at George Mason University.

He holds a BS in Physics, an MS in Physics, and he’s got an MA in conceptual foundations of science from the University of Chicago, he’s got a Ph.D. in Social Science from Caltech, and I’m sure there are other ones as well.

Welcome to the show, Robin.

Robin Hanson: It’s great to be here.

Q: 1

I’m really fascinated by your books. Let’s start there. Tell me about the new book, what is it called?

My latest book is co-authored with Kevin Simler, and it’s called “The Elephant in the Brain: Hidden Motives in Everyday Life,” and that subtitle is the key. We are just wrong about why we do lots of things. For most everything we do, we have a story. If I were to stop you at any one moment and ask you, “Why are you doing that?,” you’ll almost always have a story and you’ll be pretty confident about it, and you don’t realize how often that story is just wrong. Your stories about why you do things are not that accurate.

Q: 2

So is it the case that we do everything, essentially unconsciously, and then the conscious mind follows along behind it and tries to rationalize, “Oh, I did that because of ‘blank,’” and then the brain fools us by switching the order of those two things, is that kind of what you’re getting at?

That’s part of it, yes: your conscious mind is not the king or president of your mind, it’s the secretary. It’s the creepy guy who stands behind the king saying, “a judicious choice, sir.”

Your job isn’t to know why you do things or to make decisions; your job is to make up good explanations for them.

Q: 3

And there’s some really interesting research that bears that out, with split-brain patients and the like. How do we know that about the brain? Tell me a little bit about that.

Well, we know that in many circumstances when people don’t actually know why they do things, they still make up confident explanations.

So, we know that you’re just the sort of creature who will always have a confident story about why you do things, even when you’re wrong.

Now that by itself doesn’t say that you’re wrong, it just says that you might well be wrong.

In order to show that you are wrong a lot in specific situations, there’s really no substitute for looking at the things you do and trying to come up with a theory about why you do them.

And that’s what most of our book is about.

So the first third of the book reviews all the literature we have on why people might plausibly not be aware of their motives, and why it might make sense for evolution to create a creature who isn’t aware and who wants to make up another story; but we really can’t convince you that you are wrong in detail unless we go to specific things.

So that’s why the last two-thirds of the book goes over 10 particular areas of life, and for each area of life it says, “Here is your standard story about why you do things, and here are all these details of people’s behavior that just don’t make much sense from the usual story’s point of view.”

And then we say: “Here’s another theory that makes a lot more sense in the details, that’s a better story about why you do things.” And isn’t it interesting that you’re not aware of that, you’re not saying that’s why you’re doing things, you’re doing the other thing?

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com

Voices in AI – Episode 55: A Conversation with Rob High

About this Episode

Episode 55 of Voices in AI features host Byron Reese and Rob High talking about IBM Watson and the history and future of AI. Rob High is an IBM fellow, VP, and Chief Technical Officer at IBM Watson.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Q: 1

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. August 12th, 1981.

That was the day IBM released the IBM PC and who could have imagined what that would lead to? Who would’ve ever thought, from that vantage point, of our world today?

Who could’ve imagined that eventually you would have one on every desktop and then they would all be connected?

Who would have guessed that through those connections, trillions of dollars of wealth would be created?

All the companies, you know, that you see in the news every day from eBay to Amazon to Google to Baidu to Alibaba, all of them have, in one way or the other, as the seed of their genesis, that moment on August 12th, 1981.

Now the interesting thing about that date, August of ‘81, that’s kind of getting ready to begin the school year, the end of the summer.

And it so happens that our guest, Rob High graduated from UC Santa Cruz in 1981, so he graduated about the same time, just a few months before this PC device was released.

And he went and joined up with IBM. And for the last 36 or 37 years, he has been involved in that organization, affecting what they’re doing, watching it all happen, and if you think about it, what a journey that must be.

If you ever pay your respects to Elvis Presley and see his tombstone, you’ll see it says, “He became a living legend in his own time.” Now, I’ll be the first to say that’s a little redundant, right?

He was either a living legend or a legend in his own time. That being said, if there’s anybody who can be said to be a living legend in his own time, it’s our guest today.

It’s Rob High. He is an IBM fellow, he is a VP at IBM, he is the Chief Technical Officer at IBM Watson and he is with us today. Welcome to the show, Rob!

Rob High: Yeah, thank you very much. I appreciate the references but somehow I think my kids would consider those accolades to be a little, probably, you know, not accurate.

Q: 2

Well, but from a factual standpoint, you joined IBM in 1981 when the PC was brand new.

Yeah – I’ve really been honored with having the opportunity to work on some really interesting problems over the years.

And with that honor has come the responsibility to bring value to those problems, to the solutions we have for those problems.

And for that, I’ve always been well-recognized. So I do appreciate you bringing that up. In fact, it really is more than just any one person in this world that makes changes meaningful.

Q: 3

Well, so walk me back to that. Don’t worry, this isn’t going to be a stroll down memory lane, but I’m curious.

In 1981, IBM was of course immense, as immense as it is now and the PC had to be a kind of tiny part of that at that moment in time.

It was new. When did your personal trajectory intersect with that, or did it ever?

Had you always been on the bigger system side of IBM?

No, actually. It was almost immediate.

I probably was. I don’t know the exact number, but I was probably among the first one or two hundred people who ordered a PC when it was announced.

In fact, the first thing I did at IBM was to take the PC into work and show my colleagues what the potential was.

I was just doing simple, silly things at the time, but I wanted to make an impression that this really was going to change the way that we were thinking about our roles at work and what technology was going to do to help change our trajectory there.

So, no, I actually had the privilege of being there at the very beginning.

I won’t say that I had the foresight to recognize its utility, but I certainly appreciated it, and I think that to some extent my own career has followed a trajectory of change similar to what PCs did to us back then.

In other areas as well, including web computing, service orientation, now cloud computing, and of course cognitive computing.

And so, walk me through that, and then let’s jump into Watson. So, walk me through the path you went through as this whole drama of the computer age unfolded around you.

Where did you go from point to point to point through that and end up where you are now?

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

 

Source: gigaom.com

The Modern Data Warehouse – Enterprise Data Curation for the Artificial Intelligence Future

This free 1-hour webinar from GigaOm Research brings together experts in AI and data analytics, featuring GigaOm analyst William McKnight and a special guest from Microsoft.

The discussion will focus on the promise AI holds for organizations of every industry and size, how to overcome some of today’s challenges in preparing the organization for AI, and how to plan AI applications.

The foundation for AI is data. You must have enough data to analyze to build models.

Your data determines the depth of AI you can achieve — for example, statistical modeling, machine learning, or deep learning — and its accuracy.

The increased availability of data is the single biggest contributor to the uptake of AI where it is thriving.

Indeed, data’s highest use in the organization soon will be training algorithms.

AI is providing a powerful foundation for impending competitive advantage and business disruption.

In this 1-hour webinar, you will discover:

  • AI’s impending effect on the world
  • Data’s new highest use: training AI algorithms
  • Know & change behavior
  • Data collection
  • Corporate Skill Requirements

You’ll learn how organizations need to be thinking about AI and the data for AI.

Register now to join GigaOm and Microsoft for this free expert webinar.

Who Should Attend:

  • CIOs
  • CTOs
  • CDOs
  • Business Analysts
  • Data Analysts
  • Data Engineers
  • Data Scientists

Source: gigaom.com

Voices in AI – Episode 54: A Conversation with Ahmad Abdulkader

Byron Reese: This is Voices in AI brought to you by GigaOm. I am Byron Reese. Today our guest is Ahmad Abdulkader. He is the CTO of Voicera.

Before that, he was the lead architect for Facebook’s applied AI efforts, producing DeepText, a text-understanding engine.

Prior to that, he worked at Google building OCR engines, machine learning systems, and computer vision systems.

He holds a Bachelor of Science in Electrical Engineering from Cairo University and a Master’s in Computer Science from the University of Washington. Welcome to the show.

Ahmad Abdulkader: Thank you, thanks Byron, thanks for having me.

Q: 1

I always like to start out by just asking people to define artificial intelligence because I have never had two people define it the same way before.

Yeah, I can imagine. I am not aware of a formal definition.

So, to me, AI is the ability of machines to perform cognitive tasks that humans can do, or rather learn to do, and eventually to learn to do them in a seamless way.

Q: 2

Is the calculator therefore artificial intelligence?

No, the calculator is not performing a cognitive task. By a cognitive task I mean vision, speech understanding, understanding text, and such.

In fact, the brain is actually lousy at multiplying two six-digit numbers, which is what the calculator is good at.

But the calculator is really bad at doing a cognitive task.

Q: 3

I see, well actually, that is a really interesting definition because you’re defining it not by some kind of an abstract notion of what it means to be intelligent, but you’ve got a really kind of narrow set of skills that once something can do those, it’s an AI. Do I understand you correctly?

Right, right. I have a sort of yardstick, a set of tasks a human can do in a seamless, easy way without even knowing how they do it, and we want to actually have machines mimic that to some degree.

And there is a very specific set of tasks, some of them more important than others, and so far we haven’t been able to build machines that get even close to human beings on these tasks.

Q: 4

Help me understand how you are seeing the world that way, and I don’t want to get caught up on definitions, but this is really interesting.

Right.

Q: 5

So, if a computer couldn’t read, couldn’t recognize objects, and couldn’t do all those things you just said, but let’s say it was creative and it could write novels. Is that an AI?

First of all, this is hypothetical, so I wouldn’t know. I wouldn’t call it AI; it goes back to the definition of intelligence. There is the natural intelligence that humans exhibit, and then there is the artificial intelligence that machines attempt to exhibit.

So, the most important of these, the ones we actually use almost every second of the day, are vision, speech understanding or language understanding, and creativity is one of them.

So if a machine were to do that, I would say it performed a subset of AI, but it hasn’t exhibited the behavior to show that it’s good at the most important ones, being vision, speech, and such.

Q: 6

When you say vision and speech are the most important ones, nobody’s ever really looked at the problem this way, so I really want to understand how you’re saying that, because it would seem to me those aren’t really the most important by a long shot.

I mean, if I had an AI that could diagnose any disease, tell us how to generate unlimited energy, fix all the environmental woes, tell us how to do faster than light travel, all of those things, like, feed the hungry, and alleviate poverty and all of those things, but they couldn’t tell a tuna fish from a Land Rover.

I would say that’s pretty important, I would take that hands down over what you’re calling to be more important stuff.

I think ‘really important’ is an overloaded term. I think you’re talking about utility, right? So, you’re imagining a hypothetical situation where we’re able to build computers that will do the diagnosis or solve poverty and things like that.

These would be way more useful for us, or that’s what we think, or that’s the hypothesis. But actually, to do these tasks that you’re talking about most probably implies that you have, to a great degree, solved vision.

It’s hard to imagine that you would be doing diagnosis without actually solving vision. So, these are sort of the basic tasks that actually humans can do, and babies learn, and we see babies or children learn this as they grow up.

So, perhaps the utility of what you talked about would be much more useful for us, but if you were to define importance as sort of the basic skills that you could build upon, I would say vision would be the most important one.

Language understanding perhaps would be the second most important one. And I think doing well in these basic cognitive skills would enable us to solve the problems that you’re talking about.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

 

Source: gigaom.com

Making AI Work in Production, Not in Isolation

There’s a ton of momentum around machine learning and AI today, but there are important logistics to be worked out.

Despite fears and bold proclamations that AI will replace humans, its best application today is in serving people, to make them more productive.

Today’s AI platforms need to support that use case, but how well do they do so?

Next is the overwhelming fragmentation of AI tools and technologies. There are a range of machine learning and deep learning frameworks and libraries with which to build models.

The result is that companies are getting distracted by these disparate technologies, diluting their focus on the pragmatic adoption of AI.

There is also the decision point around using trained models from the public cloud providers: which platforms should you be on, and is there any way to mix, match and compare them?

Abstraction layers help here, not just across libraries or cloud-based cognitive services, but for using them in combination, and testing which is most effective.

Plus, once that’s done, and the models are built and/or selected, there’s the issue of deploying them to, and using them in, production.

What’s the best way to achieve that operationalization?

There are a lot of questions here. Join us for this free 1-hour webinar from GigaOm Research to get to some of the answers.

The Webinar features GigaOm analyst Andrew Brust and our special guest, Jon Richter from CognitiveScale, a company specializing in augmented intelligence.

In this 1-hour webinar, you will discover:

  • What’s involved in building AI that makes all your people more productive
  • How to experiment with models from different libraries and cloud platforms, efficiently and efficaciously
  • Why production deployment and use of machine learning models is no mere detail – it’s the critical link in making AI work at scale, beyond the scope of mere proof-of-concept projects
  • How to maximize sharing and unification, across programming languages, tools, frameworks, libraries, and clouds

Register now to join GigaOm Research and CognitiveScale for this free expert webinar.

Who Should Attend:

  • CTOs
  • CIOs
  • Chief Data Officers
  • Data Scientists
  • Data Engineers
  • DevOps professionals
  • Cloud architects

Source: gigaom.com

Voices in AI – Episode 52: A Conversation with Rao Kambhampati

About this Episode

Sponsored by Dell and Intel, Episode 52 of Voices in AI features host Byron Reese and Rao Kambhampati discussing creativity, military AI, jobs, and more.

Subbarao Kambhampati is a professor at ASU with teaching and research interests in Artificial Intelligence.

He also serves as the president of AAAI, the Association for the Advancement of Artificial Intelligence.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Rao Kambhampati.

He has spent the last quarter-century at Arizona State University, where he researches AI. In fact, he’s been involved in artificial intelligence research for thirty years.

He’s also the President of the AAAI, the Association for the Advancement of Artificial Intelligence.

He holds a Ph.D. in computer science from the University of Maryland, College Park.

Welcome to the show, Rao.

Rao Kambhampati: Thank you, thank you for having me.

Q: 1

I always like to start with the same basic question, which is, what is artificial intelligence?

And so far, no two people have given me the same answer.

So you’ve been in this for a long time, so what is artificial intelligence?

Well, I guess the textbook definition is that artificial intelligence is the quest to make machines show behavior that, when shown by humans, would be considered a sign of intelligence.

Intelligent behavior, of course, right away begs the question: what is intelligence?

And you know, one of the reasons we don’t agree on the definitions of AI is partly because we all have very different notions of what intelligence is.

This much is for sure: intelligence is quite multi-faceted.

You know we have the perceptual intelligence—the ability to see the world, you know the ability to manipulate the world physically—and then we have social, emotional intelligence, and of course, you have cognitive intelligence.

And when a computer can show pretty much any of these aspects of intelligent behavior, we would consider that it is showing artificial intelligence.

So that’s basically the practical definition I use.

 

Q: 2

But to say, “while there are different kinds of intelligence, therefore, you can’t define it,” is akin to saying there are different kinds of cars, therefore, we can’t define what a car is.

I mean that’s very unsatisfying. I mean, isn’t there, this word ‘intelligent’ has to mean something?

I guess there are very formal definitions.

For example, you can essentially consider an artificial agent, working in some sort of environment, and the real question is, how does it improve the long-term reward that it gets from the environment, while it’s behaving in that environment?

And whatever it does to increase its long-term reward is seen, essentially, as its intelligence—I mean, the more reward it’s able to get in the environment, the more intelligent it is.

I think that is the sort of definition that we use in introductory AI sorts of courses, and we talk about these notions of rational agency, and how rational agents try to optimize their long-term reward.
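One common way to write that idea down, assuming the standard discounted-reward formalization used in introductory reinforcement-learning courses rather than anything specific to this conversation, is:

```latex
% A rational agent chooses the policy \pi that maximizes its expected
% discounted long-term reward; r_t is the reward received at time t and
% \gamma controls how heavily future reward is discounted.
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t}\right], \qquad 0 \le \gamma < 1.
```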

But that sort of gets into more technical definitions. So when I talk to people, especially outside of computer science, I appeal to their intuitions of what intelligence is, and to the extent we have disagreements there, that sort of seeps into the definitions of AI.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com

Voices in AI – Episode 51: A Conversation with Tim O’Reilly

About this Episode

Sponsored by Dell and Intel, Episode 51 of the Voices in AI podcast features host Byron Reese and Tim O’Reilly discussing autonomous vehicles, capitalism, the Internet, and the economy.

Tim is the founder of O’Reilly Media. He popularized the terms open source and Web 2.0.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today our guest is Tim O’Reilly.

He is, of course, the founder and CEO of O’Reilly Media, Inc. In addition to his role at O’Reilly, he is a partner at an early-stage venture firm, O’Reilly AlphaTech Ventures, and he is on the board of Maker Media, which was spun out from O’Reilly back in 2012.

He’s on the board of Code for America, PeerJ, Civis Analytics, and POPVOX. He is the person who popularized the terms “open source” and “web 2.0.”

He holds an undergraduate degree from Harvard in the classics. Welcome to the show, Tim.

Tim O’Reilly: Hi, thanks very much, I’m glad to be on it. I should add one other thing to my bio, which is that I’m also the author of a forthcoming book about technology and the economy, called WTF: What’s The Future, and Why It’s Up to Us, which in a lot of ways is a memoir of what I’ve learned from studying computer platforms over the last 30 years, and reflections on the lessons of technology platforms for the broader economy, and the choices that we have to make as a society.

Well, I’ll start there. What is the future then? If you know, I want to know that right away.

Well, the point is not that there is one future. There are many possible futures, and we actually have a great role.

There’s a very scary narrative in which technology is seen as an inevitability. For example, “technology wants to eliminate jobs, that’s what it’s for.”

And I go through, for example, looking at algorithms, at Google, at Facebook, and the like and say, “Okay, what you really learn when you study it is, all of these algorithms have a fitness function that they’re being managed towards,” and this doesn’t actually change in the world of AI.

AI simply provides new techniques that are still directed towards human goals. The thing we have to be afraid of is not AI becoming independent and going after its own goals.

It’s what I refer to as “the Mickey and the broomsticks problem,” which is, we’re creating these machines, we’re turning them loose, and we’re telling them to do the wrong things.

They do exactly what we tell them to do, but we haven’t thought through the consequences and a lot of what’s happening in the world today is the result of bad instructions to the machines that we have built.

In a lot of ways, our financial markets are a lot like Google and Facebook: they are increasingly automated, but they also have a fitness function.

If you look at Google, their fitness function on both the search and the advertising side is relevance. If you look at Facebook, loosely it could be described as engagement.

We have increasingly, for the last 40 years, been managing our economy around, “make money for the stock market,” and we’ve seen, as a result, the hollowing out of the economy.

And to apply this very concretely to AI, I’ll bring up a conversation I had with an AI pioneer recently, where he told me he was investing in a company that would, by his estimate, get rid of 30% of call center jobs.

And I said, “Have you used a call center?

Were you happy with the service?

Why are you talking about using AI to get rid of these jobs, rather than to make the service better?”

You know I wrote a piece—actually I wrote it after the book, so it’s not in the book—that’s an analysis of Amazon.

In the same 3 years in which they added 45,000 robots to their facilities, they’ve added hundreds of thousands of human workers.

The reason is that they’re saying “Oh, our master design pattern isn’t ‘cut costs and reap greater profits,’ it’s ‘keep upping the ante, keep doing more.’”

I actually started off the article by talking about my broken tea kettle and how I got a new one the same day, so I could have my tea the next morning, with no interruption.

And it used to be that Amazon would give you free 2-day shipping, and then it was free 1-day shipping, and then in many cases it’s free same-day shipping, and this is why they have this incredible, fanatical customer focus, and they’re using the technology to actually do more.

My case has been that if we actually shift the fitness function from efficiency and shareholder value through driving increased profits to instead actually creating value in society—which is something that we can quite easily do—we’re going to have a very different economy and a very, very different political conversation than we’re having right now.

 

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: gigaom.com