Category Archives for Artificial Intelligence (AI)


Are Low-skilled Jobs More Vulnerable to Automation?

The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.

The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices.

Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”

One of those deep questions of our time:

The technology’s potential impact on jobs

When the topic of automation and AI comes up, one of the chief concerns is always technology’s potential impact on jobs. There is a common assumption that it will be low-skilled jobs that are first automated, but is that really how automation will change the job market?

In this excerpt from The Fourth Age, Byron Reese explores which sorts of jobs are most vulnerable to automation.

The assumptions that low-skilled workers will be the first to go and that there won’t be enough jobs for them undoubtedly have some truth to them, but they require some qualification.

Generally speaking, when scoring jobs for how likely they are to be replaced by automation, the lower the wage a job pays, the higher the chance it will be automated. The inference usually drawn from this phenomenon is that a low-wage job is a low-skill job.


Myth: a low-wage job is a low-skill job

This is not always the case. From a robot’s point of view, which of these jobs requires more skill: the waiter’s, or that of a highly trained radiologist who interprets CT scans? The waiter’s, hands down. Waiting tables requires hundreds of skills, from spotting rancid meat to cleaning up baby vomit.

But because we take all those things for granted, we don’t think they are all that hard. To a robot, the radiologist’s job, by comparison, is a cakewalk. It is just data in, probabilities out.

This phenomenon is so well documented that it has a name, the Moravec paradox. Hans Moravec was among those who noted that it is easier to do hard, brainy things with computers than “easy” things.

It is easier to get a computer to beat a grandmaster at chess than it is to get one to tell the difference between a photo of a dog and a cat.
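The “easy” side of the paradox yielded only recently, and only after networks were trained on more than a million labeled photos. As a rough illustration (a minimal sketch, assuming PyTorch and torchvision are installed; pet.jpg is a hypothetical input file, not something from the text), here is how the once-elusive dog-versus-cat call looks with a pretrained model:

```python
# Minimal sketch: dog vs. cat with a pretrained ImageNet classifier.
# "pet.jpg" is a hypothetical local photo.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

batch = preprocess(Image.open("pet.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

# In ImageNet's 1,000 classes, indices 151-268 are dog breeds and
# 281-285 are domestic cats; sum each group into a single score.
dog_score = probs[0, 151:269].sum().item()
cat_score = probs[0, 281:286].sum().item()
print("dog" if dog_score > cat_score else "cat")
```

Notice what the sketch hides: those few lines lean on millions of learned parameters distilled from over a million example photos, while the chess side of the paradox was cracked decades earlier with hand-written search rules.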

Skill availability

Waiters’ jobs pay less than radiologists’ jobs not because they require fewer skills, but because the skills needed to be a waiter are widely available, whereas comparatively few people have the uncommon ability to interpret CT scans.

What this means is that the effects of automation are not going to be overwhelmingly borne by low-wage earners. Order takers at fast-food places may be replaced by machines, but the people who clean up the restaurant at night won’t be.

The jobs that automation affects will be spread throughout the wage spectrum.

 

Build Top & Destroy Bottom

All that being said, there is a widespread concern that automation is destroying jobs at the “bottom” and creating new jobs at the “top.”

Automation, this logic goes, may be making new jobs at the top, like geneticists, but is destroying jobs at the bottom, like warehouse workers.

Doesn’t this situation lead to a giant impoverished underclass locked out of gainful employment?

Often, the analysis you hear goes along these lines: “The new jobs are too complex for less-skilled workers.


For instance, if a new robot replaces a warehouse worker, tomorrow the world will need one less warehouse worker. Even if the world also happened to need an additional geneticist, what are you going to do?

Will the warehouse worker have the time, money, and aptitude to train for the geneticist’s job?”

No. The warehouse worker doesn’t become the geneticist. What actually happens is this: A college biology professor becomes the new geneticist; a high-school biology teacher takes the college job; a substitute elementary teacher takes the high school job; the unemployed warehouse worker becomes a substitute teacher.

This is the story of progress. When a new job is created at the top, everyone gets a promotion. The question is not “Can a warehouse worker become a geneticist?” but “Can everyone do a job a little harder than the one they currently do?”

If the answer to that is yes, which I emphatically believe, then we want all new jobs to be created at the top, so that everyone gets a chance to move up a rung on the ladder of success.

To read more of GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Source: gigaom.com


Can Computers Be Implanted in Human Brains?

The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.

The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices.

Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”

One of those deep questions of our time:

Instead of building an external general intelligence unit, would we instead augment our own brains with computing power? In this excerpt from The Fourth Age, Byron Reese considers the implications of direct interaction between computers and our brains.


Concept of Implanted Computers on Our Brains

Instead of building conscious computers, can we perhaps augment our brains with implanted computers? This doesn’t require us ever to crack the code of consciousness.

We just take our consciousness as a given and try to add appendages to our existing intellect. This feels substantially less alien than uploading ourselves to the machine.


You can imagine a prosthetic arm, for instance, that you control with your mind. In fact, you don’t really have to imagine it: it already exists.

Building more and more things that interact directly with the brain—say, an artificial eye—seems plausible.

Eventually, could entire computers be fitted into the brain?

Elon Musk advocates a solution like this. He wants to create a neural lace for our brains, a way to directly sync our brains to the digital world. He explains:

The solution that seems the best one is to have an AI layer [added to your brain] that can work well and symbiotically with you. . . . Just as your cortex works symbiotically with your limbic system, your third digital layer could work symbiotically with you.

What Musk proposes is way beyond the brain-controlled prosthetic described at the start of this chapter. He is talking about your thoughts and memories commingling with digital ones.

This would be where you think a thought like, “How long is the Nile River?” and that query is fed into Google Neuro (wirelessly, of course), and a quarter of a second later, you know the answer. If this ever happens, expect the ratings of Jeopardy! to fall off a cliff.

In addition, the historian Yuval Noah Harari speculates on what else to expect:

When brains and computers can interact directly, that’s it, that’s the end of history, that’s the end of biology as we know it. Nobody has a clue what will happen once you solve this. . . . We have no way of even starting to imagine what’s happening beyond that.

There are many who say this can’t be done. Steven Pinker sums up some of the difficulties:

Brains are oatmeal-soft, float around in skulls, react poorly to being invaded, and suffer from inflammation around foreign objects. Neurobiologists haven’t the slightest idea how to decode the billions of synapses that underlie a coherent thought, to say nothing of manipulating them.

 

Breakthroughs to Merge People & Machines

Three breakthroughs would be needed to accomplish a meaningful merger of people and machines, and they may not be possible.

  1. First, a computer must be able to read a human thought.
  2. Second, a computer must be able to write a thought back to the brain.
  3. And third, a computer must do both of those things at speeds substantially faster than what we are presently accustomed to.

If we get all three of these, then we can join with computers in a cosmically significant way.

The first one, a machine reading a human thought, is the only one we can even do a little. There are several companies working on devices, often prosthetics, that can be controlled with the mind.

For instance, Johns Hopkins recently had a success creating a prosthetic hand whose individual fingers could be moved with thought. A male subject, who still had both of his hands, was set to undergo a brain-mapping procedure for his epilepsy.


The researchers built a glove with electronics in it that could buzz each finger. Then they placed a sensor over the part of the subject’s brain that controls finger movement. By buzzing each finger, they could specifically measure the exact part of the subject’s brain that corresponded to each finger.

It worked! He could later move the fingers of the prosthetic with his mind. However, this would work only for his brain. For you or me to accomplish the same feat would require a similar procedure.
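To make the mapping idea concrete, here is a toy sketch of the calibrate-then-decode loop described above. The data is synthetic and the channel counts are invented; this is not the Johns Hopkins code, just an illustration of training a decoder during a “buzzing” calibration phase and then using it to turn recordings into finger commands.

```python
# Toy sketch of brain-to-finger decoding on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_channels, trials_per_finger = 32, 40  # invented numbers

# Synthetic "recordings": buzzing each finger excites its own
# small group of sensor channels, plus background noise.
X, y = [], []
for finger in range(5):
    for _ in range(trials_per_finger):
        trial = rng.normal(0.0, 1.0, n_channels)
        trial[finger * 6:(finger + 1) * 6] += 2.0  # finger-specific activity
        X.append(trial)
        y.append(finger)
X, y = np.array(X), np.array(y)

# Calibration phase (the glove buzzing each finger) = training.
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Later, a fresh recording is decoded into a finger command.
new_trial = rng.normal(0.0, 1.0, n_channels)
new_trial[12:18] += 2.0  # the activity pattern of finger 2
print("decoded finger:", decoder.predict(new_trial.reshape(1, -1))[0])
```

The limitation noted in the text shows up here too: the decoder is fitted to one subject’s channel-to-finger mapping, so a different brain would need its own calibration pass.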

Another Johns Hopkins project involves making an entire artificial arm that can be controlled by the brain. Already, about a dozen of them are in active use, but again, they involve surgeries, and the limbs currently cost half a million dollars each.

 

A Dexterous Robotic Device

However, Robert Armiger, the project manager for amputee research at Johns Hopkins, said, “The long-term goal for all of this work is to have noninvasive—no extra surgeries, no extra implants—ways to control a dexterous robotic device.”

These technologies are amazing and obviously life-changing for those who need them. But even if all the bugs were worked out and the fidelity was amped way up, as a consumer product used to interface with the real world, they are of limited value compared with, say, a voice interface.

It’s cool, to be sure, to be able to think “Lights on” and have them come on, but practically speaking it is only a bit better than speaking “Lights on.” And of course, we are not anywhere near being able to read a simple thought like that.


Moving a finger is a distinct action from a distinct part of the brain. Thinking “Lights on” is completely different. We don’t even know how “Lights on” is coded into the brain.

But say we got all the bugs worked out, and, in addition, we learned how to write thoughts to the brain. Again, this is out in science fiction land. No one knows how a thought like, “Man, these new shoes are awesome” is encoded to the brain.

Think about that. There isn’t a “these shoes are [blank]” section of the brain where you store your thoughts on each pair of shoes you own. But let’s say for a moment that we figure this out and understand it so well that we can write thoughts to the brain at the same speed and accuracy as reading something.

This too is nice, but only a little better than what we have now. I can Google “chicken and dumpling recipe” and then read the recipe right now. There is already a mechanism for data from the eyes to be written to the brain.

We mastered that eons ago. Even if the entire Internet could be accessed by my brain, that’s little better than the smartphone I already own.

 

Speed

However, let’s consider the third proposition, of speed. If all this could be done at fast speeds, that is something different. If I could think, “How do you speak French?” and suddenly all that data is imprinted on my mind, or is accessible by my brain at great speed, then that is something really big.

Ray Kurzweil thinks something like this will happen, that our thinking will become a hybrid of biological and nonbiological processes, and he even puts a date on it:

In the 2030s we’re going to connect directly from the neocortex to the cloud. When I need a few thousand computers, I can access that wirelessly.

It goes without saying that we don’t know if this is possible. Clearly, your brain can hold the information required for proficiency in French, but can it handle it being burned in seconds or even minutes?


There are some biological limits that even technology cannot expand. No matter how advanced we get, no unaided human body can be made to lift a freight train. Perhaps the knowledge won’t have to be written to our brain at all; perhaps our brain could simply access a larger, outer brain.

But even then, there is a fundamental mismatch between the speed and manner in which computers and brains operate.

 

Augmenting the Cognitive Ability

There is also a fourth thing, which, if possible, is beyond a “big deal.” If we were able to achieve all three of the things just discussed and in addition were able to implant a conscious computer or an AGI in our brains, or otherwise connect to such a machine, and then utilize it to augment our cognitive abilities, then, well, the question of where the human ends and the machine begins won’t really matter all that much.

If we can, in fact, upgrade our reasoning ability, the very attribute that many believe makes us human, and improve it by orders of magnitude, then we would truly be superhuman.


Or maybe it is better to say that something will be superhuman and that thing will own and control your body. There may no longer be a “you” in any meaningful sense.

It is hard to contemplate any of this given where we are now. The brain is a wonderful thing, but it is neither a hard drive nor a CPU.

It is organic and analog. Turning the lights on with your brain is not just a simpler thing than learning French in three minutes, it is a completely different thing.

Those who believe you will be able to learn French that way do so not because they have special knowledge about the brain that the rest of us don’t have.

They believe it because they believe that minds are purely mechanistic and that technology knows no upper limits at all. If both of these propositions are true, then, well, even the sky is no longer the limit.

 

The US Defense Advanced Research Projects Agency (DARPA) Project

Despite the evident difficulty in merging computers and people, there are numerous projects underway to try to do some of the things we have just covered.

The US Defense Advanced Research Projects Agency (DARPA) is working on a project that its program manager describes as attempting to “open the channel between the human brain and modern electronics” by implanting a device in the human brain that can convert brain activity into meaningful electronic signals.


The agency is dedicating $62 million to the effort as part of its Neural Engineering System Design program.

And it is in no way the only one working on such a project. Several other groups, both public and private, are probing the limits of what is possible.

 

To read more of GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Source: gigaom.com


Expectation Versus Reality: Are boardrooms blocking digital revolutions?

There’s a heck of a lot of new technology available in the market. From artificial intelligence to blockchain, companies are inundated with new tools and solutions that promise to revolutionize aspects of their business they didn’t even know could be (or needed to be) improved.


For executives facing intense pressure to keep up with the latest technology trends to remain competitive, separating true value from mere noise is a daunting task.

And unfortunately, this is leading many companies to stick with the status quo – so they’re falling behind.

A new survey by Gartner found that 91 percent of companies still haven’t reached a “transformational” level of maturity in data and analytics, despite this having been the number one priority for CIOs in recent years.

As most businesses have not yet been able to fully implement data analytics and reap ROI from it – and data analytics is the foundation of popular technologies like AI and machine learning – it’s clear these new tools still have a long way to go before they exit the hype cycle and enter operational reality.

But while the board may evangelize these major technology initiatives, what they need to realize is that major digital disruptions demand a long-term strategy of ongoing thought, planning, and incremental tech investments.

Simply having an end goal of making AI a reality in your business to reap its many benefits won’t necessarily get you there. Today, there are smaller tech trends that are fully operational and can serve as a bridge to that future. One such example is automation.

While not as sexy or headline-grabbing as AI, software robots today can perform many of the repetitive and time-consuming business tasks across departments with greater speed, accuracy, and ROI – directly benefiting the bottom line.

But any business automation rollout has to start from the top. It requires careful planning and backing from the board in order for the c-suite to correctly navigate the changes it brings – operationally, culturally and technologically.

Here are three ways that the boardroom can break out of old habits and bring on the digital revolution.

 

Remove the bottlenecks

It’s clear that automation is at or near the top of the priority list, and the C-suite is beginning to reflect this.  According to a survey by KPMG, 25 percent of enterprises worldwide now have a Chief Digital Officer to lead this change.

However, the CDO has a long road ahead of them. A recent survey revealed that in 74 percent of organizations, automation is only being implemented by the IT department. Unfortunately, that’s a recipe for failure.

On average, 25 percent of technology projects fail, and many more show little return on investment or need significant alteration to be successful. Often, it’s because IT projects are simply that: IT projects.


Automation isn’t just an IT function; it’s a function of the entire business, which means that a top-down leadership approach is critical to success. For IT leaders, getting C-suite buy-in from the very beginning not only establishes overarching business goals, it cements the project scope and removes potential bottlenecks or silos.

 

Be a champion

As the technology revolution continues, more and more business leaders are finding themselves boasting a new title: digital champion. A recent survey found that 68 percent of executives believe their CEOs are “digital champions,” up from 33 percent just ten years ago.

It is clear organizations have come a long way, but there’s still a ways to go.


Today, those in senior positions must take the lead in the robotic revolution, and not just on the project scope. To spur true change, leaders must foster a culture that not only understands automation technology but openly accepts it as necessary to carry out business functions.

When business leaders evangelize the benefits on both an executive and employee level from the very beginning, it removes the fear of the unknown, allowing for open dialogue and communication across all departments.

 

Fan it out

With recent news reporting that one-third of jobs will be automated by 2030, a common concern for the human workforce is that robots are coming to steal their jobs. However, that’s simply not the case.


Automation isn’t a threat; it’s an enabler. And, for employees who are mired in manual work, it will be a breath of fresh air. With effective leadership, employees can recognize the opportunity and shift attitudes towards incoming technology.

As the need for automation increases, business leaders can’t make decisions in a vacuum. Instead of simply swapping humans for robots, the C-suite must solicit feedback from the employees who will be affected by automation and look for ways to retrain or repurpose roles and duties.

By focusing employees on the high-level strategic activities that require empathy and communication, and by giving them a say in designing new responsibilities, the C-suite enables employees to bring real value to the business while feeling safe and secure in the midst of change.

The business world is transforming, and technology is driving business objectives faster than ever before. There are a number of benefits to implementing automation, but it’s up to the C-suite to design a plan that allows the business to maximize return on investment.

As with any new deployment, success starts in the boardroom.

by Dennis Walsh, President, Americas & APAC, Redwood Software

Dennis Walsh is responsible for operations of Redwood Software in North America, LATAM, South America as well as the Asia Pacific. Walsh combines his business background and years in the software and services industry to successfully solve some of the most challenging IT and business automation issues.

Source: gigaom.com


Will We Really Lose Half our Jobs to Automation?

The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.

The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices.

Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”

One of those deep questions of our time:

When the topic of automation and AI comes up, one of the chief concerns is always technology’s potential impact on jobs. Many fear that with the introduction of wide-scale automation, there will be no more jobs left for humans. But is it really that dire?

In this excerpt from The Fourth Age, Byron Reese explores the prospect of massive job loss due to automation.

 

Technological Unemployment:

The “jobs will be destroyed too quickly” argument is an old one as well. In 1930, the economist John Maynard Keynes voiced it by saying, “We are being afflicted with a new disease . . . technological unemployment.


This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.”

In 1978, New Scientist repeated the concern:

The relationship between technology and employment opportunities most commonly considered and discussed is, of course, the tendency for technology to be labour-saving and thus eliminate employment opportunities—if not actual jobs.

In 1995, the refrain was still the same. David F. Noble wrote in Progress without People:

Computer-aided manufacturing, robotics, computer inventories, automated switchboards and tellers, telecommunication technologies—all have been used to displace and replace people, to enable employers to reduce labour costs, contract-out, relocate operations.

But is it true now? Will new technology destroy the current jobs too quickly?

A number of studies have tried to answer this question directly. One of the very finest and certainly the most quoted was published in 2013 by Carl Benedikt Frey and Michael A. Osborne, both of Oxford University.

The report, titled The Future of Employment, is seventy-two pages long, but what has been referenced most frequently in the media is a single ten-word phrase: “about 47 per cent of total US employment is at risk.”

Hey, who needs more than that? It made for juicy and salacious headlines, to be sure. It seemed as if every news source screamed a variant of “Half of US Jobs Will Be Taken by Computers in Twenty Years.”

 

MEN WALK ON MOON Report

If we really are going to lose half our jobs in twenty years, well, then the New York Times should dust off the giant type it used back in 1969 when it printed “MEN WALK ON MOON” and report the story on the front page with equal emphasis.


But that is not actually what Frey and Osborne wrote. Toward the end of the report, they provide a four-hundred-word description of some of the limitations of the study’s methodology.

They state that “we make no attempt to estimate how many jobs will actually be automated. The actual extent and pace of computerisation will depend on several additional factors which were left unaccounted for.”

So what’s with the 47 per cent figure? What they said is that some tasks within 47 per cent of jobs will be automated. Well, there is nothing terribly shocking about that at all. Pretty much every job there is has had tasks within it automated. But the job remains. It is just different.

For instance, Frey and Osborne give the following jobs a 65 per cent or better chance of being computerized: social science research assistants, atmospheric and space scientists, and pharmacy aides. So what does this mean?

Social science professors will no longer have research assistants? Of course, they will. They will just do different things because much of what they do today will be automated. There won’t be any more space scientists? Pharmacists will no longer have anyone helping them?

Frey and Osborne say that the tasks of a barber have an 80 per cent chance of being taken over by AI or robots. In their category of jobs with a 90 per cent or higher chance of certain tasks being computerized are tour guides and carpenters’ helpers.

 

Job Morphing

The disconnect is clear: some of what a carpenter’s helper does will get automated, but the carpenter’s helper job won’t vanish; it will morph, as almost everyone else’s job will, from architect to zoologist. Sure, your iPhone can be a tour guide, but that won’t make tour guides vanish.


Anyone who took the time to read past the introduction to The Future of Employment saw this. And to be clear, Frey and Osborne were very up-front. They stated, in scholar-speak, the following:

We do not capture any within-occupation variation resulting from the computerisation of tasks that simply free up time for human labour to perform other tasks.

In response to the Frey and Osborne paper, the Organization for Economic Cooperation and Development (OECD), an intergovernmental economic organization made up of nations committed to free markets and democracy, released a report in 2016 that directly counters it.

In this report, entitled The Risk of Automation for Jobs in OECD Countries, the authors apply a “whole job” methodology and put the share of jobs potentially lost to computerization at just 9 per cent. That is a pretty normal churn for the economy.
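To see why a task-level headline and a whole-job count diverge so sharply, here is a toy calculation with invented numbers (not data from Frey and Osborne or the OECD). Most jobs contain some highly automatable tasks, but far fewer jobs are automatable end to end:

```python
# Toy illustration: task-level vs. whole-job automation estimates.
# All probabilities below are invented for the example.
occupations = {
    "order taker":      [0.9, 0.8, 0.9, 0.7],
    "waiter":           [0.9, 0.2, 0.1, 0.3],
    "radiologist":      [0.8, 0.9, 0.3, 0.2],
    "warehouse worker": [0.9, 0.9, 0.8, 0.4],
    "tour guide":       [0.7, 0.2, 0.1, 0.1],
}

# Headline 1: share of all tasks that look automatable (> 0.5).
all_tasks = [p for tasks in occupations.values() for p in tasks]
task_share = sum(p > 0.5 for p in all_tasks) / len(all_tasks)

# Headline 2: share of whole jobs in which *every* task is automatable.
job_share = sum(
    all(p > 0.5 for p in tasks) for tasks in occupations.values()
) / len(occupations)

print(f"tasks automatable:  {task_share:.0%}")  # 55% -- big headline
print(f"whole jobs at risk: {job_share:.0%}")   # 20% -- much smaller
```

The same underlying numbers support a scary figure or a modest one, depending entirely on whether you count tasks or whole jobs.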

At the end of 2015, McKinsey & Company published a report entitled Four Fundamentals of Workplace Automation that came to conclusions similar to the OECD’s. But again, it had a number too provocative for the media to resist sensationalizing.

The report said, “The bottom line is that 45 per cent of work activities could be automated using already demonstrated technology,” which was predictably reported as variants of “45% of Jobs to Be Eliminated with Existing Technology.”

Often overlooked was the fuller explanation of the report’s conclusion:

Our results to date suggest, first and foremost, that a focus on occupations is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined, much like the bank teller’s job was redefined with the advent of ATMs.

The “47 per cent [or 45 per cent] of jobs will vanish” interpretation doesn’t even come close to passing the sniff test. Humans, even ones with little or no professional training, have incredible skills we hardly ever think about.

Let’s look closely at two of the jobs at the very top of Frey and Osborne’s list: short-order cook and waiter. Both have a 94 per cent chance of being computerized.

 

Robot at Pizza Shop Scenario

Imagine you own a pizza restaurant that employs one cook and one waiter. A fast-talking door-to-door robot salesman manages to sell you two robots: one designed to make pizzas and one designed to take orders and deliver pizzas to tables.

All you have to do is preload the food containers with the appropriate ingredients, and head off to Bermuda. The robot waiter, who understands twenty languages, takes orders with amazing accuracy, and flawlessly handles special requests like “I want half this, half that” and “light on the sauce.”

The orders are sent to the pizza robot, who makes the pizza with speed and consistency.

Let’s check in on these two robots on their first day of work and see how things are going:

  • A patron spills his drink. The robots haven’t been taught to clean up spills, since this is a surprisingly complicated task. The programmers knew this could happen, but the permutations of what could be spilled and where were too hard to deal with. They promised to include it in a future release, and in the meantime, to program the robot to show the customers where the cleaning supplies are kept.
  • A little dog, one of those yip-yips, comes yipping in and the waiter robot trips and falls down. Having no mechanism to right itself, it invokes the “I have fallen and cannot get up” protocol, which repeats that phrase over and over with an escalating tone of desperation until someone helps it up. When asked about this problem, the programmers reply, snappishly, that “it’s on the list.”
  • Maggots get in the shredded cheese. Maggoty pizza is served to the patrons. All the robot is trained to do with customers unhappy with their orders is to remake their pizzas. More maggots. The robots don’t even know what maggots are.
  • A well-meaning pair of Boy Scouts pop in to ask if the pipe jutting out of the roof should be emitting smoke. They say they hadn’t noticed it before. Should it be? How would the robot know?
  • A not-well-meaning pair of boys come in and order a “pizza with no crust” to see if the robots would try to make it and ruin the oven. After that, they order a pizza with double crust and another one with twenty times the normal amount of sauce. Given that they are both wearing Richard Nixon masks, the usual protocol of taking photographs of troublesome patrons doesn’t work and results only in a franchise-wide ban of Richard Nixon at affiliated restaurants.
  • A patron begins choking on a pepperoni. Thinking he must be trying to order something, the robot keeps asking him to restate his request. The patron ends up dying right there at his table. After seeing no motion from him for half an hour, the robot repeatedly runs its “Sleeping Patron” protocol, which involves poking the customer and saying, “Excuse me, sir, please wake up” repeatedly.
  • The fire marshal shows up, seeing the odd smoke from the pipe in the roof, which he hadn’t noticed before. Upon discovering maggot-infested pizza and a dead patron being repeatedly poked by a robot, he shuts the whole place down. Meanwhile, you haven’t even boarded your flight to Bermuda.

This scenario is, of course, just the beginning. The range of things the robot waiter and cook can’t do is enough to provide sitcom material for ten seasons, with a couple of Christmas specials thrown in.


The point is that those who think so-called low-skilled humans are easy targets for robot replacement haven’t fully realized what a magnificently versatile thing any human being is and how our most advanced electronics are little more than glorified toaster ovens.

While it is clear that we will see ever-faster technological advances, it is unlikely that they will be different enough in nature to buck our two-hundred-year run of plenty of jobs and rising wages.

In one sense, no technology really compares to mechanization, electricity, or steam engines in impact on labour. And those were a huge win for both workers and the overall economy, even though they were incredibly disruptive.

To read more of GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Source: gigaom.com


Benefits & Risks of Artificial Intelligence (AI) – Are we Ready for it?

We’ve all heard the scary stories: artificial intelligence is here to take our jobs and control the world, and robots can turn against us. But what should we believe at this point? I think the best we can do is research the subject and form our own opinions. That’s what I’ve done for you in this post: I have researched articles about artificial intelligence from well-known and trusted sources and curated the content for you. And it all starts when you ask:

Is Artificial Intelligence (AI) a Good or a Bad thing for our Future?

When you type “Is artificial intelligence…” into Google to see what it autosuggests as possible searches, the number one suggestion you get is “Is artificial intelligence a threat?”, followed closely by “Is artificial intelligence dangerous?” Why all of this fear around a technology which to date hasn’t harmed anyone? I think there are two reasons.

One, we have a long tradition of fearing the things that we create, that they’re going to rise up and destroy us. You can see this in stories like Frankenstein. You can even see this further back in Greek plays, like Oedipus, that had children killing their parents.

The second source for it must be popular media and entertainment, which shows all kinds of scenarios where artificial intelligence goes berserk or decides to take over. We all know HAL in 2001, and all of the rest of the examples. Of course, the truth is that that does not in any way indicate what is going to happen. That’s a logical fallacy known as generalizing from fictional evidence. Interestingly, one of the autofill suggestions is, “Is artificial intelligence capitalized?” Even on that, there isn’t widespread agreement, which just goes to show where we are in the science.
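Anyone can rerun this little autosuggest experiment programmatically. Below is a minimal sketch that queries Google’s unofficial autocomplete endpoint; the URL and the response shape are undocumented assumptions and could change at any time.

```python
# Fetch Google's autocomplete suggestions for a query prefix.
# Uses an unofficial, undocumented endpoint -- treat as an assumption.
import json
import urllib.parse
import urllib.request

query = "Is artificial intelligence"
url = ("https://suggestqueries.google.com/complete/search"
       "?client=firefox&q=" + urllib.parse.quote(query))

with urllib.request.urlopen(url) as response:
    # The endpoint returns a JSON array: [query, [suggestion, ...]]
    data = json.loads(response.read().decode("utf-8"))

for suggestion in data[1]:
    print(suggestion)
```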

Source: gigaom.com

True Benefits & Risks of Artificial Intelligence

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.

This implies that AI per se, since it does not possess an evolved innate drive (Will), cannot ‘attempt’ to replace humankind. It becomes dangerous only if humans, for example, engage in foolish biological engineering experiments to combine an evolved biological entity with an AI.

The philosophy of Arthur Schopenhauer convincingly shows that the ‘Will’ (in his terminology), i.e. an innate drive, is at the basis of human behavior. Our cognitive apparatus has evolved as a ‘servant’ of that ‘Will’. Any attempt to interpret human behavior as primarily a system of computing mechanisms and our brain as a sort of computing apparatus is therefore doomed to failure.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?

Curated from Benefits & Risks of Artificial Intelligence – Future of Life Institute
