Tag Archives: AI science

The most exciting advancement in AI ever: kiss prediction algorithm

Almost everyone has had that awkward experience of leaning in for a first kiss with someone. All you can do is hope that the other person is really leaning in for a kiss, and not just leaning in for a high five or something. As anyone who has ever played the dating game can attest, body language can be extremely difficult to interpret. There are thousands of little things we do with our bodies that imply so much about what we’re thinking and feeling. Now researchers at MIT are working on computer systems that can analyze and interpret body language to hopefully make that first kiss a little less awkward.

Learning through television

Our parents always yelled at us for sitting in front of the television all day because it stunted our brains. But television is how researchers at MIT are teaching their neural networks to understand body language. So far, the system has watched over 600 hours of shows like “Desperate Housewives” and “The Office” (talk about a Netflix binge).

Moderate success

Next, researchers gave their algorithm new videos to watch. They would pause the video one second before a hug, kiss, high-five, or handshake and ask it to predict which human behavior was about to take place. Incredibly, the deep learning program predicted the correct action 43% of the time. It wasn’t as good as humans (who were correct 71% of the time), but it’s a promising step in the right direction.
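Under the hood, this is a four-way classification task scored by simple accuracy. A minimal sketch of the evaluation protocol (the function names are invented for illustration; MIT’s actual model is a deep neural network trained on raw video):

```python
from collections import Counter

ACTIONS = ["hug", "kiss", "high-five", "handshake"]

def accuracy(predictions, ground_truth):
    """Fraction of clips whose upcoming action was predicted correctly."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

def majority_baseline(train_labels, n_test):
    """A trivial baseline: always guess the action seen most often in training."""
    most_common = Counter(train_labels).most_common(1)[0][0]
    return [most_common] * n_test
```

Any model that can’t beat a baseline like this has learned nothing about body language; the program’s 43% sits well above chance (25% if the four actions were evenly distributed) but below the 71% human score.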

What’s next

Currently, MIT’s algorithm isn’t accurate enough for real world application, but we’re all waiting anxiously for the day when a device can be inconspicuously attached to us and whisper in our ear when our romantic interest is ready for that first kiss.

Source: The Motley Fool

Ten artificial intelligence stats that will blow you away

Bill Gates recently declared artificial intelligence “the holy grail of computer science.” The industry has made massive strides in recent years and there are even more exciting things ahead. Here are ten incredible statistics about artificial intelligence:

  • The AI market will grow from $420 million in 2014 to over $5 billion by the year 2020.
  • By 2018, an estimated 6 billion things from appliances to cars to wearable tech will depend on AI technology.
  • There are currently more than 1,000 AI start-up companies and a total of $5.4 billion has been invested into them.
  • A study by an AI language company found that 80% of executives believed that AI solutions improved worker performance and created new jobs.
  • Apple has Siri, Microsoft has Cortana, and Amazon has Alexa, but until recently, most people rarely used the personal assistants available to them. Now, only 2% of iPhone users haven’t used Siri.
  • By the year 2020, 40% of all mobile interactions between users and personal assistants will be powered by AI. That means personal assistants will be able to make decisions for us and not just carry out requests.
  • By 2020, 85% of all customer interactions with companies won’t require a human customer service representative as chatbots will be able to use artificial intelligence to solve customers’ problems.
  • Over the next decade, artificial intelligence will take over 16% of all U.S. jobs; however, much of that loss will be offset by the new jobs created to build and maintain AI platforms and machines.
  • By 2018, the fastest-growing companies will “employ” more smart machines and virtual assistants than humans.
  • Artificial intelligence will be powered by GPUs rather than CPUs. Currently, Nvidia’s best GPU, the Tesla K80, is 2-5 times faster than Intel’s leading CPU, the Xeon Phi 7120.

Source: Fool.com

The next big thing for artificial intelligence

The Dartmouth Conference of 1956 is considered by many to be the birth of artificial intelligence. AI researchers went forward from there confident that machines that could think like humans were just around the corner. That was 60 years ago, and while artificial intelligence has come a long way since then, we’re still not seeing machines that can truly think like humans do. Today researchers are once again hopeful that true artificial intelligence (an oxymoron if ever there was one) is within reach. But are we any closer than researchers were in the 50s? Here’s a look at some of the recent accomplishments and setbacks that AI researchers are experiencing.

AI has mastered certain tasks

The biggest advancements in AI are in computers that can do one thing extremely well. In the near future, we could see some jobs completely disappear as they are outsourced to machines that can do those same jobs much more efficiently and safely. We could be facing labor displacement of a magnitude that hasn’t been seen since the industrial revolution.

Teaching AI to learn

While AI can be programmed to do certain tasks very well, a major hang-up researchers face is that they can’t teach AI to learn to do other things. All “learning” requires some kind of input from researchers. But a human can be placed in a room alone and learn all on their own. This is called predictive learning or unsupervised learning, and it’s an important key to solving the riddle of true artificial intelligence. For now, the big hurdle standing in the way of truly intelligent machines is the ability to teach them common sense, something humans are simply born with.
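To make the distinction concrete: supervised learning needs a human to label every example, while unsupervised learning finds structure in raw data on its own. A toy illustration of the latter (a tiny 1-D two-cluster k-means, invented for this example and not drawn from the research described above):

```python
def two_means(values, iters=10):
    """Tiny 1-D k-means with k=2: the algorithm discovers two groups
    in the data without any labels -- the essence of unsupervised learning."""
    lo, hi = min(values), max(values)  # initial guesses for the two cluster centers
    for _ in range(iters):
        near_lo = [v for v in values if abs(v - lo) <= abs(v - hi)]
        near_hi = [v for v in values if abs(v - lo) > abs(v - hi)]
        if not near_hi:  # all points collapsed into one cluster; stop early
            break
        lo = sum(near_lo) / len(near_lo)
        hi = sum(near_hi) / len(near_hi)
    return sorted((lo, hi))
```

Given `[1, 2, 3, 10, 11, 12]`, the function settles on centers near 2 and 11 with no one telling it which numbers belong together. That is trivial here, but learning useful structure from unlabeled video or text at human level remains an open problem.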

Source: Wall Street Journal

Artificial intelligence vs. human intelligence: how do they measure up?

There’s no denying that artificial intelligence is light-years ahead of what it was just a few years ago. The technology continues to advance at an ever-increasing rate. But the ultimate goal of artificial intelligence researchers is to replicate human intelligence. So how do artificial intelligence and human intelligence measure up? You be the judge.

  1. The so-called “deep learning” that artificial intelligence is capable of isn’t really the profound learning that humans are capable of. Rather, deep learning refers to an interconnected neural network. That means that artificial intelligence has immediate access to a wider body of knowledge, but humans are still capable of more profound thought.
  2. Artificial intelligence systems are able to beat the greatest chess masters in the world but they need millions of pictures (labeled by humans) to be able to learn to correctly identify a cat. Even a toddler can learn to differentiate between cats, dogs, and other animals after just a few instances of exposure to them.
  3. Intel’s latest processor, the i7, is one of the best CPUs that the average person can go out and buy. With four cores, it can perform four separate tasks simultaneously. But that’s no match for human biology. Even supercomputers are no match for the human brain’s roughly 80 billion neurons.
  4. The human brain can do what it does with just 10 watts of power. Artificial intelligence would need 10 terawatts to imitate the human brain. That’s a trillion times more energy to do what the human brain is already capable of.
  5. Artificial intelligence runs off of algorithms programmed by humans. The algorithms themselves haven’t actually changed much. What’s changed is the ability of artificial intelligence to run algorithms much faster than ever before. Certain tasks, like mastering chess, depend on algorithms, which is why artificial intelligence has the upper hand when it comes to playing chess.

Why even try

If after all these years, human intelligence is still vastly superior to artificial intelligence, why do artificial intelligence researchers even bother? Because despite its weaknesses when it comes to certain cognitive tasks, it can still do some things exceptionally better than humans. Artificial intelligence can sort through vast amounts of data in seconds, a task that would take humans days, weeks, or even years. Artificial intelligence is much better than humans when it comes to recognizing patterns hidden amongst large amounts of data. Artificial intelligence is far superior to humans when it comes to mathematical reasoning and computing as well.

In the end, humans need artificial intelligence. It can automate some of the simpler cognitive tasks for us, such as pulling up our favorite song or performing a quick mathematical calculation. It will make our lives easier. But artificial intelligence still needs us as well. If true artificial intelligence capable of rivaling human intelligence ever becomes a reality, it will only be because very intelligent humans created it.

Source: Big Think

Six ways AI will make your life better

The last few years have seen huge advancements in the world of artificial intelligence. As the technology continues to improve, we’ll see more and more real world applications of artificial intelligence. That means artificial intelligence is going to start playing a more obvious role in our lives. Here are six ways that AI will be making our lives better in the coming years.

Your personal concierge

Already our smartphones come equipped with personal assistants that can follow basic commands and do certain tasks for us. But in the next few years, you can expect to see huge advancements in what these personal assistants are able to do. They will be able to do more than just follow simple commands. They will be able to make recommendations based on our preferences and even help us with decision making.

Crisis management

One major advantage to artificial intelligence is that it’s able to process vast amounts of data very quickly. During a crisis, this will be crucial as artificial intelligence can help us in sorting through incoming data and devising the best plan to deal with the disaster as it unfolds.

Search and rescue

Something relatively new in the world of artificial intelligence is the ability for artificial intelligence systems to work together to solve problems or perform tasks. Already there is a RoboCup World Championship where robots have to learn to work together in order to win. This ability for robots to work together can allow them to assist us with situational problems like search and rescue that require collaboration.

Public health

Public health is a major concern. At any time, there could be a major outbreak of a deadly illness. Health professionals are always on the lookout for these kinds of outbreaks by watching for patterns of symptoms. But artificial intelligence will be able to sort through medical data and identify worrying patterns much sooner than people can, alerting us to public health concerns before an outbreak occurs.

Driverless cars

Not too long ago, the idea of a driverless car would have seemed like science fiction. Today they’re already a reality with numerous companies perfecting the technology. In the near future, you can expect to see driverless cars become available to the public. These cars will allow us to make more efficient use of our time since we will be able to focus on other tasks while our cars drive for us. They’ll also be a lot safer resulting in fewer accidents and traffic fatalities.

No terminator robots

One thing we don’t have to worry about, at least in the near future, is a robot uprising. We’re still a long way off from creating artificial intelligence that can replicate the way the human brain works. At least for the time being, artificial intelligence will be completely at the mercy of human programming, which means it can’t harm us unless we program it to.

Source: Tech Insider

Artificial Intelligence is almost all grown up

Mary Shelley’s Frankenstein was the first pop culture example of artificial intelligence. Since then, the media, and especially Hollywood, have depicted artificial intelligence numerous times. The depictions of artificial intelligence that we see in movies like Terminator or I, Robot are still nothing but science fiction, but the possibility of super smart artificial intelligence capable of matching human intelligence is no longer a far-fetched idea. In fact, due to the exponential growth of the artificial intelligence industry, we could see that level of artificial intelligence in a matter of years.

According to analysts who track the growth of computing costs and the costs for such technologies, in just four years, $4,000 would be enough to buy a computer that could rival the human brain. Such a computer would be able to perform twenty quadrillion calculations per second.

AI in the workplace

Artificial intelligence just might be the biggest game changer in the world of technology this century. One of the reasons it has grown so quickly in such a short period of time is that businesses are beginning to see the useful real-world applications it has to offer. Already, dozens of industries such as healthcare diagnostics, automated trading, business processing, advertising, and social media are using artificial intelligence to operate more efficiently and be more successful. It’s predicted that spending on artificial intelligence will increase from just over $200 million in 2015 to over $11 billion in 2024 as more businesses begin to invest in the technology.

Although artificial intelligence is relatively new to the workplace, it’s already shaking up the way we do business. Some of the top companies in the world are changing their business models to integrate artificial intelligence as opposed to sticking with a humans-only approach. These businesses have a distinct advantage when it comes to gathering vast amounts of data in a very short period of time and then using that data to make decisions. This enables companies to be quicker on their feet as they evaluate various analytics and make adjustments to their business strategy.

AI and the future

In Mary Shelley’s Frankenstein, and in many subsequent science fiction novels and films, the artificial intelligence humans create inevitably comes back to haunt them. Artificial intelligence isn’t without its risks. Some fear that if artificial intelligence surpassed human intelligence, it could overthrow us. Another fear is that artificial intelligence could leave many people out of a job as they’re replaced by machines. But experts in the field of artificial intelligence believe these fears are unfounded. It all comes down to human programming and how much authority we grant these machines.

As for the work force, most analysts predict that an artificial intelligence boom will create many more jobs than it eliminates.

Source: Information Age

Asimov’s three laws of robotics in action


Isaac Asimov, the famed science fiction author, did more than write novels. He is also credited with coming up with the three laws of robotics:

  1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

According to Asimov, these laws must govern the use of robotics to keep both robots and humanity safe. A major fear is that artificially intelligent robots could eventually pose a threat to humans, either by actively seeking to harm them or by failing to act in a manner that would preserve human life. Because humans are beginning to give robots control over essential infrastructure, the latter is an especially big concern.

Recently, an employee at a Volkswagen plant in Germany was crushed when he became trapped in a robot arm. The machine was only doing what it was programmed to do and wasn’t able to alter its programming even when a human’s life was in danger. To make robots safer for humans, robotics researchers at Tufts University are working on developing artificial intelligence that can deviate from its programming if the circumstances warrant it. The technology is still primitive but it’s an important step if artificially intelligent robots are going to be coexisting with humans someday.

How it works

Researchers at Tufts University’s Human-Robot Interaction lab designed a robot that recognizes that it is allowed to disobey orders when there is a good reason. For example, when facing a ledge, the robot will refuse to walk forward even when ordered to. Not only will the robot refuse, but it is programmed to state the reason—that it would fall if it were to obey. To understand how the robot is able to do this, we have to first understand the concept of “felicity conditions.” Felicity conditions refer to the distinction between understanding the command being given and the implications of following that command. To design a robot that could refuse to obey certain orders, the researchers programmed the robot to go through five logical steps when given a command:

  1. Do I know how to do X?
  2. Am I physically able to do X now? Am I normally physically able to do X?
  3. Am I able to do X right now?
  4. Am I obligated based on my social role to do X?
  5. Does it violate any normative principle to do X?

This five-step logical process enables the robot to determine whether or not a command would cause harm to itself or a human before following an order. The researchers recently presented their work at the AI for Human-Robot Interaction Symposium in Washington, DC.
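The five checks lend themselves to a simple guard function that runs before any command is executed. A sketch in Python (the class and method names are hypothetical; the actual Tufts system reasons over natural language rather than hard-coded predicates):

```python
class ToyRobot:
    """A stand-in with hard-coded answers; a ledge makes 'walk forward' unsafe."""
    def knows_how(self, cmd): return True           # 1. Do I know how to do X?
    def physically_able(self, cmd): return True     # 2. Am I normally able to do X?
    def able_right_now(self, cmd):                  # 3. Am I able to do X right now?
        return cmd != "walk forward"                #    (a ledge is directly ahead)
    def obligated(self, cmd): return True           # 4. Does my social role oblige me?
    def violates_norms(self, cmd): return False     # 5. Would X violate a principle?

def should_obey(robot, cmd):
    """Run the five felicity checks; refuse with a stated reason if any fails."""
    checks = [
        (robot.knows_how(cmd), "I don't know how to do that"),
        (robot.physically_able(cmd), "I'm not normally able to do that"),
        (robot.able_right_now(cmd), "I can't do that right now: I would fall"),
        (robot.obligated(cmd), "My role doesn't oblige me to do that"),
        (not robot.violates_norms(cmd), "That would violate a principle"),
    ]
    for passed, reason in checks:
        if not passed:
            return False, reason
    return True, None
```

When facing the ledge, `should_obey(ToyRobot(), "walk forward")` refuses and states its reason, mirroring the behavior the researchers describe.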

Source: IBTimes

A brief history of artificial intelligence

The concept of artificial intelligence began as pure fiction, something to be imagined but never actually existing. Today, we know that that’s no longer the case. Artificial Intelligence is real and there are already real-world applications where artificial intelligence is helping us solve some of the biggest problems facing humanity. We’re still a ways off from creating true artificial intelligence but we’re getting closer every day. Here’s a look at how artificial intelligence has developed through the years.

Greek myths

The earliest known reference to something we could term artificial intelligence dates back to the ancient Greeks. According to their mythology, Hephaestus, the blacksmith of Olympus, forged lifelike metal automatons to carry out certain functions.

The birth of science fiction

The concept of artificial intelligence stayed relatively quiet until the early 19th century, when Mary Shelley wrote Frankenstein, considered by many to be the first true science fiction novel because of its emphasis on the use of scientific methods and equipment to create a semi-intelligent monster. This novel gave way to more science fiction, some of which deals with the theme of robots taking over humanity.

The first computer that was never made

Charles Babbage, a Victorian-era inventor, designed the first computer (on paper, anyway) in 1822. It was designed to carry out mathematical calculations. He died before he could build his device, which he called the Difference Engine, but based on his designs, the machine could have worked had it been built, and it would have been the first computer.

The Turing Machine and the Turing Test

Alan Turing was a brilliant mathematician who helped bring World War II to an end by helping break the Germans’ codes. He is considered the father of computer science and artificial intelligence. He is also famous for coming up with the Turing Test, which is designed to determine whether a machine can exhibit behavior indistinguishable from a human’s.

Dartmouth Conference

In the summer of 1956, the scientific field of artificial intelligence was born over the course of a month-long conference held at Dartmouth College. The boundaries of the field were set and plans were made to recreate human intelligence in a machine.

Hot and cold seasons

Researchers left the Dartmouth Conference with a lot of research money and optimism. But early AI researchers quickly realized that creating artificial intelligence was going to be a lot harder than they had thought. This led to discouragement, a lack of funding, and very little progress in the field of AI in the early 70s and 80s, though there were resurgences as well.

AI in Hollywood and the real world

This brings us to today. Artificial Intelligence is now a part of our pop culture thanks to dozens of Hollywood movies that deal with the concept of artificial intelligence. Many of these portrayals are negative and depict robots overthrowing humanity, but some are more nuanced and treat the subject in a very thoughtful way. Artificial intelligence is now a part of our daily lives thanks to personal assistants like Siri and Cortana.

Source: History Extra

Does AI spell doom for the workforce?


For approximately a hundred years, humans have theorized about the possibility of artificial intelligence and the implications that such an invention would bring about. In literature and on the big screen, we read or see stories where artificial intelligence overthrows humanity and tries to wipe us off the face of the earth. Authors and movie producers aren’t the only ones spreading fear of AI. Some of the most brilliant minds of today, like Stephen Hawking and Elon Musk, have also issued warnings about the potential for AI to turn against humanity if given too much independence.

But another fear, and perhaps one that’s a little more understandable, is that AI could one day leave millions of people jobless as their work is outsourced to artificially intelligent machines. This thought isn’t a new one. It was first brought up in the 70s and 80s when computers were becoming more mainstream. The discussion has been rekindled due to recent advancements in artificial intelligence. The question then is, do we really need to fear? The answer isn’t so simple.

The computer revolution scare

When computers were starting to become more affordable and mainstream in the late 70s and through the 80s, many people feared that companies would fire humans and replace them with computers that could do the work that people did. Because computers were cheaper than paying employees in the long run, it wasn’t hard to believe that companies might favor computers over human workers.

But of course, we never saw the widespread job loss that many predicted. Though computers did have a huge impact on the workforce, this impact was largely positive. Though many jobs were replaced by computers, the existence of computers in the workplace led to many more jobs than it eliminated.

Will the robot revolution be different?

Though similar fears have been put to rest in the past, those warning of an impending job crisis due to the advancement of artificial intelligence and robotics say this time will be different. Already artificial intelligence is demonstrating its ability to perform tasks that could in theory render many human professions obsolete. For instance, driverless cars are already a reality that will eventually become mainstream. Who needs bus or taxi drivers when buses and taxis can be outfitted with a computer that can follow a specified route or take people to a specified destination?

Of course, just as with the computer revolution, a robot revolution would certainly create jobs as well, since people would be needed to design and maintain the machines.

Blue collar vs. white collar jobs

Though the robot revolution will certainly have a large impact on the workforce, it’s still unclear what that impact will be. Some argue that blue collar jobs face the largest risk as the work is menial and can more easily be replaced. But others have been quick to point out that as artificial intelligence becomes more intelligent, white collar jobs might also be at stake, perhaps even more so than blue collar jobs. In the end, people in the work force will just have to wait and see what the robot revolution brings.

Source: Inquisitr

 

Super Mario Bros: The ultimate artificial intelligence test


If you were to make a list of the greatest video games ever made, the original Super Mario Bros. made for the Nintendo Entertainment System would have to be towards the top of that list. Thanks to original game mechanics and challenging level designs, Super Mario Bros. was an instant classic and remains popular today—30 years later. But retro-gaming enthusiasts aren’t the only ones still playing Super Mario Bros. As it turns out, the classic video game is a huge hit with artificial intelligence developers. That’s because they often use the video game to test the intelligence of their systems.

How the test works

Artificial intelligence researchers use Super Mario Bros. to test AI in a couple of different ways. The first is that they attempt to program an AI system to successfully beat the game from beginning to end. To do this, the AI system needs to learn the nuances of the game in order to time jumps perfectly, avoid enemies, and successfully reach the end of each level before time expires. From 2009 to 2012, AI researchers even held a Mario AI competition where competitors tried to design AI that could complete the game in the fastest time.

AI researchers are also programming artificial intelligence to design playable levels that will provide players with a challenge without being too difficult. Researchers believe that artificial intelligence may soon be used to assist humans with level design.

Why Super Mario Bros.

You may be wondering, out of all the ways to test an AI system, why use a video game? As it turns out, video games are an ideal way to test artificial intelligence because they require the use of logic, creativity, situational awareness, and decision-making skills. All of these skills are necessary in the quest for true artificial intelligence. But of the tens of thousands of video games to choose from, why Super Mario Bros.? AI researchers cite two primary reasons: first, they just love to play the game like the rest of us; and second, the classic platformer’s perfect mix of complexity and simplicity, combined with finely-tuned mechanics, makes it an ideal game for artificial intelligence testing.

While early AI systems were tested on Atari games, those games are now too simplistic thanks to advancements in artificial intelligence. Side-scrolling games like Super Mario Bros. present more of a challenge because a good percentage of the level design can’t be observed by the AI at any given time.
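That partial-observability point can be made concrete: the agent sees only a small window of tiles ahead of Mario, never the whole level. A toy sketch (the tile encoding and the trivial reactive policy are invented for illustration; real competition entries used learned controllers and search):

```python
def visible_window(level, mario_x, width=10):
    """A side-scroller reveals only the tiles just ahead of the player."""
    return level[mario_x : mario_x + width]

def choose_action(window):
    """A trivial reactive policy: jump if a gap or enemy is in the next few tiles."""
    return "jump" if any(t in ("gap", "enemy") for t in window[:3]) else "run"
```

Because the agent must commit to actions before the rest of the level scrolls into view, it has to plan under uncertainty, which is exactly what makes the game a harder benchmark than a fully visible Atari screen.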

Source: Motherboard.com