Facebook a serious contender in the artificial intelligence race

The last several years have been huge for artificial intelligence. Rapid advances in the technology have drawn more and more attention to the field. As artificial intelligence begins to seem more like a reality and becomes part of our day-to-day lives, more companies are joining the race to be the first to create true artificial intelligence. Google and IBM are two of the most notable companies working on AI, while Apple and Microsoft have been developing their AI personal assistants, Siri and Cortana, respectively. But behind the scenes, and unknown to most, Facebook has also joined the artificial intelligence race, and it’s becoming a serious contender.

While most think of Facebook as a social media company, it has actually grown into one of the most advanced technology research companies in the world. It has turned its attention to creating computers that are less like linear, logical machines and more like humans.

How it all started

When Facebook founder Mark Zuckerberg got together with company leadership to discuss what Facebook could do to stay relevant for the next 10 to 20 years, the idea of artificial intelligence came up. Facebook already uses some simple artificial intelligence to predict what people will want to see in their News Feeds. The company decided to consult Yann LeCun, a notable AI researcher, to help it step up its game, and has since asked him to build the best artificial intelligence lab in the world.

Currently, LeCun has put together a team of 30 AI research scientists and 15 engineers, though that number is expected to grow dramatically in the coming years.

The Goal

According to LeCun, even the most advanced artificial intelligence systems in existence today are dumb compared to humans. Granted, they have access to vast stores of knowledge, but they lack the common sense that humans come by naturally. One of LeCun’s primary goals, and perhaps his biggest obstacle, is creating an artificial intelligence system that can learn unsupervised. Currently, artificial intelligence systems learn only from the input humans give them, whereas humans learn simply by existing in the world and going about their day. To create true artificial intelligence, Facebook’s AI system will have to learn the way humans learn.

Facebook M

In addition to working toward true artificial intelligence, Facebook is also building a personal assistant much like Apple’s Siri and Microsoft’s Cortana, called Facebook M. The idea is to put more practical uses of artificial intelligence into people’s hands while the longer-term research continues. According to LeCun and his team, Facebook M will be able to do far more than Siri or Cortana. They hope that within a couple of years, Facebook M will be able to make calls and stay on hold until a person comes on the line. Practical indeed.

Source: Popular Science

Say “hello” to the first artificially intelligent Barbie

After receiving widespread criticism for its Teen Talk Barbie, which lamented, “Math class is tough,” Mattel is stepping up its game by releasing Hello Barbie (full name Barbara Millicent Roberts), the first Barbie with artificial intelligence. The goal is a toy that seems more lifelike because it can carry on a conversation with kids. Whereas Teen Talk Barbie and other previous talking Barbies simply selected a phrase at random from a small database of possible phrases, Hello Barbie knows 8,000 lines of dialogue. Even more impressive, she selects phrases based on what kids are saying to her or asking her.

How it works

The secret is in Barbie’s belt buckle, which doubles as a button that activates speech recognition software. When a child holds down the belt-buckle button and speaks to Barbie, the doll records the audio and transmits it to a ToyTalk server (ToyTalk is a third-party service, not owned by Mattel, that manages the databases of phrases for various toys). The ToyTalk server runs something called a decision engine to select an appropriate response to what the child said. Oren Jacob, the CEO of ToyTalk, describes the decision engine as a kind of map with forks in the road. It uses natural language processing to analyze what the child is saying or asking and arrives at an optimal response, which is transmitted back to the Barbie doll. This entire process takes only seconds.
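
To make the “forks in the road” idea concrete, below is a minimal, hypothetical sketch of how a server-side decision engine might route a child’s already-transcribed utterance to one of a set of pre-written lines. The keywords, replies, and function names are illustrative assumptions, not ToyTalk’s actual system.

```python
# Hypothetical sketch of a server-side "decision engine": a set of forks
# keyed on words in the child's transcribed utterance. This is NOT
# ToyTalk's actual system; it only illustrates routing an utterance to
# one of many pre-written lines of dialogue.

FORKS = {
    "school": "What's your favorite thing to learn about at school?",
    "dog": "I love puppies! What's your dog's name?",
    "sad": "I'm sorry to hear that. Do you want to tell me about it?",
}

FALLBACK = "That's so interesting! Tell me more."


def choose_response(transcript: str) -> str:
    """Return the first pre-written line whose keyword appears in the transcript."""
    text = transcript.lower()
    for keyword, line in FORKS.items():
        if keyword in text:
            return line      # take this fork in the road
    return FALLBACK          # no fork matched; fall back to a generic prompt


if __name__ == "__main__":
    # Simulated round trip: the doll uploads audio, the server transcribes it,
    # the decision engine picks a line, and the line is sent back to the doll.
    print(choose_response("I went to school today"))      # school fork
    print(choose_response("My dog learned a new trick"))  # dog fork
```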

It keeps getting better

One of the best things about Hello Barbie is that she can keep improving at both speech recognition and response selection. Because Hello Barbie’s 8,000 lines of dialogue are stored on ToyTalk’s servers and not on a chip inside the doll itself, a team of ToyTalk employees has access to that database of dialogue and can continually improve it. As more children talk to Hello Barbie, ToyTalk can study patterns, tweak its decision engine to be more accurate, and add or remove lines of dialogue as needed.

Because the audio recordings are stored on ToyTalk’s servers, parents can go online to listen to or delete them. They also have the option to share recordings of their child interacting with Barbie.

According to Mattel, Hello Barbie will hit the shelves in November just in time for the holidays.

Source: Popular Science

Autonomy: the real artificial intelligence threat

Artificial intelligence has always had a bad rap in television, film, and literature. It seems humans have a tendency to expect the worst possible outcome when they visualize a future where artificial intelligence is part of our day-to-day lives. In most depictions, artificial intelligence quickly surpasses human intelligence and uses that advantage to try to destroy the entire species. While nearly everyone recognizes that these films and books are science fiction, plenty of people do believe in the real possibility of an artificial intelligence takeover. And though many of these opinions might be written off as the delusions of conspiracy theorists, other voices are harder to ignore. Some of the most brilliant minds in science and technology, including Stephen Hawking, Bill Gates, and Tesla Motors CEO Elon Musk, as well as many AI researchers, have warned about the dangers of artificial intelligence. Hawking even went so far as to say that full artificial intelligence could mean the end of the human race.

Is artificial intelligence really dangerous?

At a recent conference, Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence, stressed that artificial intelligence in itself isn’t really dangerous. What the general public, those who haven’t dedicated years to studying artificial intelligence, doesn’t understand is that an AI system can’t suddenly alter its programming and turn against humans, as it so often does in Hollywood depictions. The real danger lies in the programming it is given to begin with.

Humans still humanity’s greatest threat

What Stephen Hawking, Bill Gates, and Elon Musk really fear isn’t artificial intelligence itself, but the humans tasked with creating and programming it. AI is nothing but software that enables machines to imitate more complex human behaviors. Full artificial intelligence, which we have so far been unable to create, is software that perfectly imitates human behavior and can surpass human intelligence. Either way, artificial intelligence is still limited to its programming.

The real threat of artificial intelligence is the autonomy, or freedom, that humans give it. According to AI researchers, autonomy isn’t something that occurs naturally in artificial intelligence. An AI system couldn’t simply take over our weapons systems, as so often happens in the movies, in order to wipe out humanity; it only handles the tasks we give it. If humans grant it autonomy over weapons systems, then there is danger. What Hawking, Gates, and Musk are urging is that humans be careful about which tasks they assign to artificial intelligence and which they leave to humans. Even more important, they urge that humans develop methods of control so that they retain control over artificial intelligence.

Source: Tech Insider

What does “artificial intelligence” even mean?

Artificial intelligence used to be something we only read about in science fiction or saw on the big screen. Today, everyone is at least familiar with the concept thanks to media attention. Though we have a pretty good grasp of what “artificial” means, coming up with a concrete definition of exactly what constitutes artificial intelligence is easier said than done. As artificial intelligence becomes a reality, a usable, universally acceptable definition of artificial intelligence will be necessary in order to regulate its use in various circumstances. Any laws and policies designed to regulate artificial intelligence will be worthless without a widely accepted definition of the term.

Defining the terms

The word “artificial” is by far the easier of the two to define for legal purposes. It simply means “not occurring in nature, or not occurring in the same form in nature”: in short, anything man-made that imitates something naturally occurring. This definition of “artificial” even covers the possibility of using modified biological materials in the creation of artificially intelligent “machines”.

The word “intelligence” is where it gets difficult, and the difficulty in defining the term is nothing new. In philosophy, the meanings of “intelligence” and of related words such as “consciousness”, “thought”, “free will”, and “mind” have been debated for centuries, going back to the time of Aristotle.

Currently, researchers in the field contrast artificial intelligence with human intelligence, which merely shifts the burden of definition onto psychologists. There is no solid definition of artificial intelligence that doesn’t depend on a comparison to human intelligence.

While it is easy for even the average person to tell the difference between a programmable machine and a true AI system, the difficulty lies in differentiating between AI systems that merely give the appearance of intelligence and those systems that can be said to truly imitate human intelligence.

Sidestepping the question

Though a philosophical definition of “intelligence” that we can all agree on may still be centuries away, the need for a working definition of “artificial intelligence” is immediate. Would-be regulators of artificial intelligence outside the laboratory need to ask themselves, “What risks does artificial intelligence pose?”

Artificial intelligence is already making its way into our day-to-day lives, and as it becomes more mainstream, there will be important societal implications. Corporations can use AI to take jobs away from humans. AI systems may be used to commit crimes. Unless there is an acceptable definition of artificial intelligence that can be used to regulate its use, and soon, we may face the future unprepared.

Source: Popular Science

Can artificial intelligence feel empathy?

For centuries, humans have given a lot of thought to what separates them from the animals. Though there are many differences between us and animals, many argue that it is our superior reasoning ability that truly sets us apart. In more recent years, we’ve turned our attention to what distinguishes humans from machines. In a short amount of time, artificial intelligence research has advanced so quickly that computers now seem more human than ever. The greatest obstacle in creating artificial intelligence is not creating something intelligent; the challenge is creating something that seems human. Throughout the short history of the field, a number of tests have been proposed to differentiate true artificial intelligence from a wannabe.

The Turing Test

Alan Turing, the British codebreaker who inspired the movie The Imitation Game, was one of the pioneers of artificial intelligence. He proposed a test he called the imitation game, later renamed the Turing Test, to determine whether a machine could be said to be artificially intelligent. The test is simple in concept but extremely difficult to actually pass. It works by having a machine carry on a conversation with a human acting as judge. If the human cannot distinguish the machine from a human through conversation alone (the judge doesn’t get to see the machine, so it doesn’t have to look human), the machine is said to have passed the Turing Test.
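
As a rough illustration of the test’s structure only (not a real evaluation), the sketch below pairs a judge with a hidden respondent that is randomly either a stand-in “human” or a canned chatbot. The respondent functions, replies, and questions are hypothetical placeholders.

```python
import random

def machine_respondent(question: str) -> str:
    """Placeholder chatbot that deflects every question with a canned reply."""
    return "That's a good question. What do you think?"

def human_respondent(question: str) -> str:
    """Stand-in for a human participant typing an answer at the keyboard."""
    return input(f"(answer as the human) {question} > ")

def imitation_game(questions) -> bool:
    """Judge questions a hidden respondent via text only, then guesses its nature."""
    respondent = random.choice([machine_respondent, human_respondent])
    for question in questions:
        print(f"Judge: {question}")
        print(f"Respondent: {respondent(question)}")
    guess = input("Judge's guess - 'human' or 'machine'? ").strip().lower()
    actually_machine = respondent is machine_respondent
    # The machine "passes" this round if it answered and was judged human.
    return actually_machine and guess == "human"

if __name__ == "__main__":
    passed = imitation_game(["What did you have for breakfast?",
                             "Why do people laugh at puns?"])
    print("Machine fooled the judge this round:", passed)
```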

Tricking the Turing Test

What Alan Turing couldn’t have predicted is the devious nature of computer programmers who would set out to devise machines that could trick his test rather than embody true artificial intelligence. Only in the last couple of years have machines been able to trick human judges into believing they’re carrying on a conversation with another human. The problem with these machines is that they can do one thing, and one thing only.

The Lovelace Test

Now that the Turing Test has been bested, a test proposed in 2001 by Selmer Bringsjord, Paul Bello, and David Ferrucci, called the Lovelace Test, is being used to distinguish man from machine. For the Lovelace Test, a human judge asks an artificially intelligent machine to create a piece of art: a poem, a story, or a picture. Next, the judge gives a criterion, for example, “write a poem about a cat.” If the machine can follow this direction, it is said to be sufficiently human.

What about empathy?

Recently, there has been some thought about what society really wants out of an artificially intelligent machine. It’s great if a machine can carry on a conversation or produce art, but what about the human emotions that set us apart from machines? In a recent panel discussion at Robotronica 2015, panelists discussed which human emotions it would be most important for artificial intelligence to acquire. Empathy was first on the list. If a machine could feel empathy, humans wouldn’t need to fear artificial intelligence the way we tend to.

Source: phys.org/news/2015-08-human-emotions-artificial-intelligence.html