Tag Archives: AI technology

AI is here, but what do everyday people think of it?

Now that AI is making its way into our everyday lives, much attention has been paid to the various ways it will affect society. We’ve heard official statements from the White House, requests for more information, and statements from AI developers and companies, but we haven’t heard much from everyday people about their thoughts on artificial intelligence. One research company decided to conduct a survey of 2,100 consumers from five different countries to gauge their feelings about AI. Here are some of the results:

  • 45% of consumers reported that they felt AI’s impact on society would be positive compared to just 7% who felt it would be negative.

  • 52% of consumers said that they felt AI would impact their personal life positively compared to just 7% who felt it would affect their personal life negatively.

  • Though two-thirds of respondents said they knew something about artificial intelligence, just 18% said they knew a lot. 22% of people said their first impression of AI was “robots,” suggesting that they knew very little.

  • 92% of people believe that general artificial intelligence will eventually arrive, compared to just 8% who say AI is science fiction and will never materialize. Understanding and acceptance of AI is, not surprisingly, correlated with age: millennials are the most likely to desire faster development of AI, and baby boomers are the least likely.

  • The number one concern that consumers have regarding AI is the potential for job loss. Only 18% said they thought the development of AI would lead to job creation. Other fears include an increase in cyber-attacks, stolen data, and invasion of privacy.

Though these fears aren’t completely unjustified, history and current data tell us that these fears may be blown out of proportion. Perhaps President Obama said it best in a recent interview: “I tend to be optimistic—historically we’ve absorbed new technologies, new jobs are created, and standards of living go up.”

It’s been a huge year for AI startups, which have raised a record amount of funding for AI technology development. This suggests that the job market may see net gains rather than losses in the long run.

Source: HBR

AI technology must be available to all

The tremendous growth of the artificial intelligence industry in such a short period of time has prompted more than 8,000 AI researchers and scientists, including prominent names such as Elon Musk and Stephen Hawking, to sign an open letter warning the public about the danger of AI. To clarify, these researchers and scientists aren’t so worried about the technology itself—fears that machines will overthrow humanity fall into the realm of sci-fi and aren’t supported by the research. What they are worried about is how the technology can be abused.

The letter

One of the points raised in the letter is that AI represents a third revolution in weaponry, after gunpowder and nuclear technology. They urge policymakers and developers to look beyond profits and to think about uses for AI that will benefit humanity as a whole. Of special concern is that a handful of the most powerful companies could hold all the technology and sell it to the highest bidders.

Open-sourced AI

AI researchers are pushing for open-sourced AI software that opens up the technology for everyone to use and benefit from.

OpenAI is a nonprofit artificial intelligence company with a goal of making AI safe and equally available to all people.

Google’s TensorFlow is open-source software, released under the Apache 2.0 license, that anyone can download for free to begin experimenting with AI.

Source: TechCrunch

The most exciting advancement in AI ever: kiss prediction algorithm

Almost everyone has had that awkward experience of leaning in for a first kiss with someone. All you can do is hope that the other person is really leaning in for a kiss, and not just leaning in for a high five or something. As anyone who has ever played the dating game can attest, body language can be extremely difficult to interpret. There are thousands of little things we do with our bodies that imply so much about what we’re thinking and feeling. Now researchers at MIT are working on computer systems that can analyze and interpret body language to hopefully make that first kiss a little less awkward.

Learning through television

Our parents always yelled at us for sitting in front of the television all day because it would stunt our brains. But television is exactly how researchers at MIT are teaching their neural networks to understand body language. So far, the system has watched over 600 hours of shows like “Desperate Housewives” and “The Office” (talk about a Netflix binge).

Moderate success

Next, researchers gave their algorithm new videos to watch, pausing each video one second before a hug, kiss, high-five, or handshake and asking it to predict which behavior was about to take place. Incredibly, the deep learning program predicted the correct action 43% of the time. It wasn’t as good as humans (who were correct 71% of the time), but it’s a promising step in the right direction.
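
At its core, the evaluation described above is a matter of comparing predicted actions against what actually happened in each clip. Here is a minimal sketch of that accuracy calculation in Python; the labels and data are hypothetical stand-ins, not MIT’s actual model or dataset:

```python
# Hypothetical sketch: score a model's predicted next actions against
# the actions that actually occurred, and report prediction accuracy.
ACTIONS = ["hug", "kiss", "high-five", "handshake"]

def accuracy(predicted, actual):
    """Fraction of clips where the predicted action matched reality."""
    assert len(predicted) == len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Toy data standing in for the model's guesses on held-out clips.
predicted = ["kiss", "hug", "handshake", "kiss", "high-five", "hug", "kiss"]
actual    = ["kiss", "kiss", "handshake", "hug", "high-five", "hug", "handshake"]

print(f"accuracy: {accuracy(predicted, actual):.0%}")
```

A classifier that guessed randomly among the four actions would land near 25%, which is why the reported 43% counts as meaningful progress.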

What’s next

Currently, MIT’s algorithm isn’t accurate enough for real world application, but we’re all waiting anxiously for the day when a device can be inconspicuously attached to us and whisper in our ear when our romantic interest is ready for that first kiss.

Source: The Motley Fool

Ten artificial intelligence stats that will blow you away

Bill Gates recently declared artificial intelligence “the holy grail of computer science.” The industry has made massive strides in recent years and there are even more exciting things ahead. Here are ten incredible statistics about artificial intelligence:

  • The AI market will grow from $420 million in 2014 to over $5 billion by the year 2020.
  • By 2018, an estimated 6 billion things from appliances to cars to wearable tech will depend on AI technology.
  • There are currently more than 1,000 AI start-up companies and a total of $5.4 billion has been invested into them.
  • A study by an AI language company found that 80% of executives believed that AI solutions improved worker performance and created new jobs.
  • Apple has Siri, Microsoft has Cortana, and Amazon has Alexa, but until recently, the majority of people never used the personal assistants available to them. Now, only 2% of iPhone users have never used Siri.
  • By the year 2020, 40% of all mobile interactions between users and personal assistants will be powered by data. That means that AI will enable personal assistants to make decisions for us and not just carry out requests.
  • By 2020, 85% of all customer interactions with companies won’t require a human customer service representative as chatbots will be able to use artificial intelligence to solve customers’ problems.
  • Over the next decade, artificial intelligence will take over 16% of all U.S. jobs; however, much of that loss will be offset by the many new jobs created to build and maintain AI platforms and machines.
  • By 2018, the fastest-growing companies will “employ” more smart machines and virtual assistants than humans.
  • Artificial intelligence will be powered by GPUs rather than CPUs. Currently, Nvidia’s best GPU, the Tesla K80, is 2-5 times faster than Intel’s leading CPU, the Xeon Phi 7120.

Source: Fool.com

The next big thing for artificial intelligence

The Dartmouth Conference of 1956 is considered by many to be the birth of artificial intelligence. AI researchers went forward from there confident that machines that could think like humans were just around the corner. That was 60 years ago, and while artificial intelligence has come a long way since then, we’re still not seeing machines that can truly think like humans do. Today, researchers are once again hopeful that true artificial intelligence (an oxymoron if ever there was one) is within reach. But are we any closer than researchers were in the ’50s? Here’s a look at some of the recent accomplishments and setbacks that AI researchers are experiencing.

AI has mastered certain tasks

Where we’re seeing the biggest advancements in AI is computers that can do one thing extremely well. In the near future, we could see some jobs completely disappear as they are outsourced to machines that can do those same jobs much more efficiently and safely. We could be facing a labor displacement of a magnitude that hasn’t been seen since the industrial revolution.

Teaching AI to learn

While AI can be programmed to do certain tasks very well, a major hang-up researchers face is that they can’t teach AI to learn to do other things. All “learning” requires some kind of input from researchers. A human, by contrast, could be placed in a room alone and learn all on their own. This is called predictive learning or unsupervised learning, and it’s an important key to solving the riddle of true artificial intelligence. For now, the big hurdle standing in the way of truly intelligent machines is teaching them common sense, something humans are simply born with.
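
One classic example of unsupervised learning is clustering, where an algorithm finds structure in data without any human-provided labels. The sketch below is a deliberately simplified, stdlib-only k-means on one-dimensional points, meant only to illustrate the idea of learning without labels:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Group unlabeled 1-D points into k clusters; no labels are given."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups hidden in unlabeled data; the algorithm discovers them.
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]
print(kmeans(data, 2))  # centers settle near 1.0 and 10.1
```

No one told the algorithm which points belong together; it inferred the grouping on its own, which is the essence of what researchers mean by unsupervised learning.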

Source: Wall Street Journal

Artificial intelligence vs. human intelligence: how do they measure up?

There’s no denying that artificial intelligence is light-years ahead of where it was just a few years ago. The technology continues to advance at an ever-increasing rate. But the ultimate goal of artificial intelligence researchers is to replicate human intelligence. So how do artificial intelligence and human intelligence measure up? You be the judge.

  1. The so-called “deep learning” that artificial intelligence is capable of isn’t really the type of profound learning that humans are capable of. Rather, deep learning refers to an interconnected neural network. That means artificial intelligence has immediate access to a wider body of knowledge, but humans are still capable of more profound thought.
  2. Artificial intelligence systems are able to beat the greatest chess masters in the world but they need millions of pictures (labeled by humans) to be able to learn to correctly identify a cat. Even a toddler can learn to differentiate between cats, dogs, and other animals after just a few instances of exposure to them.
  3. Intel’s latest processor, the i7, is one of the best CPUs the average person can go out and buy. With four cores, it can perform four separate tasks simultaneously. But that’s no match for human biology. Even supercomputers are no match for the human brain’s roughly 80 billion cells.
  4. The human brain does what it does with just 10 watts of power. Artificial intelligence would need 10 terawatts to imitate the human brain. That’s a trillion times more energy to do what the human brain already does.
  5. Artificial intelligence runs off of algorithms programmed by humans. The algorithms themselves haven’t actually changed much; what’s changed is the ability of artificial intelligence to run them much faster than ever before. Certain tasks, like mastering chess, depend on algorithms, which is why artificial intelligence has the upper hand when it comes to playing chess.
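
The “interconnected neural network” mentioned in point 1 is, at its core, just layers of weighted sums passed through simple nonlinear functions. Here is a toy forward pass with hand-picked, purely illustrative weights (real networks learn theirs from data):

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums squashed by a sigmoid."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

# Tiny two-layer network with made-up weights (illustrative only).
hidden = layer([0.5, -1.0],                # two input features
               [[0.8, 0.2], [-0.4, 0.9]],  # hidden-layer weights
               [0.1, 0.0])                 # hidden-layer biases
output = layer(hidden, [[1.5, -1.2]], [0.3])
print(output)  # a single value between 0 and 1
```

“Deep” learning simply stacks many such layers, which is why its power depends so heavily on the raw speed of running these sums, exactly the advantage point 5 describes.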

Why even try

If after all these years human intelligence is still vastly superior to artificial intelligence, why do artificial intelligence researchers even bother? Because despite its weaknesses on certain cognitive tasks, AI can still do some things exceptionally better than humans. It can sort through vast amounts of data in seconds, a task that would take humans days, weeks, or even years. It is much better than humans at recognizing patterns hidden among large amounts of data, and it is far superior when it comes to mathematical reasoning and computing as well.

In the end, humans need artificial intelligence. It can automate some of the simpler cognitive tasks for us, such as pulling up our favorite song or performing a quick mathematical calculation, and it will make our lives easier. But artificial intelligence still needs us as well. If true artificial intelligence capable of rivaling human intelligence ever becomes a reality, it will only be because very intelligent humans created it.

Source: Big Think

Six ways AI will make your life better

The last few years have seen huge advancements in the world of artificial intelligence. As the technology continues to improve, we’ll see more and more real world applications of artificial intelligence. That means artificial intelligence is going to start playing a more obvious role in our lives. Here are six ways that AI will be making our lives better in the coming years.

Your personal concierge

Already our smartphones come equipped with personal assistants that can follow basic commands and do certain tasks for us. But in the next few years, you can expect to see huge advancements in what these personal assistants are able to do. They will be able to do more than just follow simple commands. They will be able to make recommendations based on our preferences and even help us with decision making.

Crisis management

One major advantage to artificial intelligence is that it’s able to process vast amounts of data very quickly. During a crisis, this will be crucial as artificial intelligence can help us in sorting through incoming data and devising the best plan to deal with the disaster as it unfolds.

Search and rescue

Something relatively new in the world of artificial intelligence is the ability for artificial intelligence systems to work together to solve problems or perform tasks. Already there is a RoboCup World Championship where robots have to learn to work together in order to win. This ability for robots to work together can allow them to assist us with situational problems like search and rescue that require collaboration.

Public health

Public health is a major concern. At any time, there could be a major outbreak of a deadly illness. Health professionals are always on the lookout for these kinds of outbreaks by watching for patterns of symptoms. But artificial intelligence will be able to sort through medical data and identify worrying patterns much sooner than people can, alerting us to public health concerns before an outbreak.

Driverless cars

Not too long ago, the idea of a driverless car would have seemed like science fiction. Today they’re already a reality, with numerous companies perfecting the technology. In the near future, you can expect to see driverless cars become available to the public. These cars will allow us to make more efficient use of our time, since we will be able to focus on other tasks while our cars drive for us. They’ll also be a lot safer, resulting in fewer accidents and traffic fatalities.

No terminator robots

One thing we don’t have to worry about, at least in the near future, is a robot uprising. We’re still a long way off from creating artificial intelligence that can replicate the way the human brain works. For the time being, artificial intelligence will be completely at the mercy of human programming, which means it can’t harm us unless we program it to.

Source: Tech Insider

Artificial Intelligence is almost all grown up

Mary Shelley’s Frankenstein was the first pop culture example of artificial intelligence. Since then, the media, and especially Hollywood, have depicted artificial intelligence numerous times. The depictions we see in movies like Terminator or I, Robot are still nothing but science fiction, but the possibility of super smart artificial intelligence capable of matching human intelligence is no longer a far-fetched idea. In fact, given the exponential growth of the artificial intelligence industry, we could see that level of artificial intelligence in a matter of years.

According to analysts who track the costs of computing and related technologies, in just four years, $4,000 would be enough to buy a computer that could rival the human brain. Such a computer would be able to perform twenty quadrillion calculations per second.

AI in the workplace

Artificial intelligence just might be the biggest game changer in the world of technology this century. One of the reasons it has grown so quickly in such a short period of time is that businesses are beginning to see the useful real-world applications it has to offer. Already, dozens of industries, including healthcare diagnostics, automated trading, business processing, advertising, and social media, are using artificial intelligence to operate more efficiently and be more successful. It’s predicted that spending on artificial intelligence will increase from just over $200 million in 2015 to over $11 billion in 2024 as more businesses invest in the technology.

Although artificial intelligence is relatively new to the workplace, it’s already shaking up the way we do business. Some of the top companies in the world are changing their business models to integrate artificial intelligence as opposed to sticking with a humans-only approach. These businesses have a distinct advantage when it comes to gathering vast amounts of data in a very short period of time and then using that data to make decisions. This enables companies to be quicker on their feet as they evaluate various analytics and adjust their business strategy.

AI and the future

In Mary Shelley’s Frankenstein, and in many subsequent science fiction novels and films, the artificial intelligence humans create inevitably comes back to haunt them. Artificial intelligence isn’t without its risks. Some fear that if artificial intelligence could surpass human intelligence, it could overthrow us. Another fear is that artificial intelligence could leave many people out of a job as they’re replaced by machines. But experts in the field of artificial intelligence believe these fears are unfounded. It all comes down to human programming and how much authority we grant these machines.

As for the work force, most analysts predict that an artificial intelligence boom will create many more jobs than it eliminates.

Source: Information Age

Asimov’s three laws of robotics in action


Isaac Asimov, the famed science fiction author, did more than write novels. He is also credited with coming up with the three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

According to Asimov, laws like these would need to govern the use of robotics to keep both robots and humanity safe. A major fear is that artificially intelligent robots could eventually pose a threat to humans, either by actively seeking to harm them or by failing to act in a manner that would preserve human life. Because humans are beginning to give robots control over essential infrastructure, the latter is an especially big concern.

Recently, an employee at a Volkswagen plant in Germany was crushed when he became trapped in a robot arm. The machine was only doing what it was programmed to do and wasn’t able to alter its programming even when a human’s life was in danger. To make robots safer for humans, robotics researchers at Tufts University are working on developing artificial intelligence that can deviate from its programming if the circumstances warrant it. The technology is still primitive but it’s an important step if artificially intelligent robots are going to be coexisting with humans someday.

How it works

Researchers at Tufts University’s Human-Robot Interaction lab designed a robot that recognizes when it is allowed to disobey orders for a good reason. For example, when facing a ledge, the robot will refuse to walk forward even when ordered to. Not only will the robot refuse, it is programmed to state the reason: that it would fall if it obeyed. To understand how the robot is able to do this, we first have to understand the concept of “felicity conditions.” Felicity conditions refer to the distinction between understanding the command being given and understanding the implications of following that command. To design a robot that could refuse to obey certain orders, the researchers programmed it to go through five logical steps when given a command:

  1. Do I know how to do X?
  2. Am I physically able to do X now? Am I normally physically able to do X?
  3. Am I able to do X right now?
  4. Am I obligated based on my social role to do X?
  5. Does it violate any normative principle to do X?
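
The five checks above can be sketched as a simple gate a command must pass before execution. The predicates below are hypothetical stand-ins; the actual Tufts system reasons in far richer terms, but the control flow, checking each condition in order and stating a reason on refusal, is the idea:

```python
def should_obey(command, robot):
    """Run a command through five felicity-condition checks in order.
    Any failing check yields a refusal with a stated reason, mirroring
    how the Tufts robot explains why it won't comply."""
    checks = [
        (robot.knows_how,        "I do not know how to do that"),
        (robot.normally_capable, "I am not normally able to do that"),
        (robot.capable_now,      "I cannot do that right now"),
        (robot.role_permits,     "My social role does not oblige me to do that"),
        (robot.norm_permits,     "Doing that would violate a normative principle"),
    ]
    for check, reason in checks:
        if not check(command):
            return (False, reason)
    return (True, "OK")

class ToyRobot:
    # Toy predicates: refuse to walk forward when facing a ledge.
    facing_ledge = True
    def knows_how(self, c):        return c in {"walk forward", "sit down"}
    def normally_capable(self, c): return True
    def capable_now(self, c):      return True
    def role_permits(self, c):     return True
    def norm_permits(self, c):     return c != "walk forward" or not self.facing_ledge

obey, reason = should_obey("walk forward", ToyRobot())
print(obey, "-", reason)  # refuses: the normative check fails (the ledge)
```

Ordering matters here: the robot only reasons about norms for commands it already knows how to carry out, which matches the progression from capability to obligation to permissibility in the five steps.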

This five step logical process enables the robot to determine whether or not a command would cause harm to itself or a human before following an order. The researchers recently presented their work at the AI for Human-Robot Interaction Symposium in Washington DC.

Source: IBTimes

A brief history of artificial intelligence

The concept of artificial intelligence began as pure fiction, something to be imagined but never actually existing. Today, we know that that’s no longer the case. Artificial Intelligence is real and there are already real-world applications where artificial intelligence is helping us solve some of the biggest problems facing humanity. We’re still a ways off from creating true artificial intelligence but we’re getting closer every day. Here’s a look at how artificial intelligence has developed through the years.

Greek myths

The earliest known reference to something we could call artificial intelligence dates back to the ancient Greeks. According to their mythology, Hephaestus, the blacksmith of Olympus, forged lifelike metal automatons to carry out certain functions.

The birth of science fiction

The concept of artificial intelligence stayed relatively quiet until the early 19th century, when Mary Shelley wrote Frankenstein, considered by many to be the first true science fiction novel because of its emphasis on scientific methods and equipment to create a semi-intelligent monster. The novel gave rise to more science fiction, some of which deals with robots and the theme of machines taking over humanity.

The first computer that was never made

Charles Babbage, a Victorian-era inventor, designed the first computer, on paper anyway, in 1822. It was designed to carry out mathematical calculations. He died before he could build the device, which he called the Difference Engine, but based on his designs, the machine could have worked had it been built, and it would have been the first computer.

The Turing Machine and the Turing Test

Alan Turing was a brilliant mathematician who helped bring World War II to an end by breaking the German Enigma code. He is considered the father of computer science and artificial intelligence. He is also famous for devising the Turing Test, which is designed to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human.

Dartmouth Conference

In the summer of 1956, the scientific field of artificial intelligence was born over the course of a month-long conference held at Dartmouth College. The boundaries of the field were set, and plans were made to recreate human intelligence in a machine.

Hot and cold seasons

Researchers left the Dartmouth Conference with plenty of research money and optimism, but early AI researchers quickly realized that creating artificial intelligence was going to be much harder than they had thought. This led to discouragement, a lack of funding, and very little progress in the field of AI in the early ’70s and ’80s, though there were resurgences as well.

AI in Hollywood and the real world

This brings us to today. Artificial Intelligence is now a part of our pop culture thanks to dozens of Hollywood movies that deal with the concept of artificial intelligence. Many of these portrayals are negative and depict robots overthrowing humanity, but some are more nuanced and treat the subject in a very thoughtful way. Artificial intelligence is now a part of our daily lives thanks to personal assistants like Siri and Cortana.

Source: History Extra