Facebook a serious contender in the artificial intelligence race

The last several years have been huge for the artificial intelligence field. Major leaps in AI technology are drawing more and more attention to the field. As artificial intelligence begins to seem more like a reality and becomes a part of our day-to-day lives, more companies are joining the race to be the first to create true artificial intelligence. Google and IBM are two of the most notable companies working on artificial intelligence, and companies like Apple and Microsoft have been developing their AI personal assistants, Siri and Cortana, respectively. But behind the scenes and unknown to most, Facebook has also recently joined the artificial intelligence race, and they're becoming a serious contender.

While most think of Facebook as a social media company, Facebook has actually grown to become one of the most advanced technology research companies in the world. They’ve turned their attention to creating computers that are less like linear, logical machines and more like humans.

How it all started

When Facebook founder Mark Zuckerberg got together with Facebook leadership to discuss what they could do to stay relevant for the next 10-20 years, the idea of artificial intelligence came up. Facebook already uses some simplified artificial intelligence to predict what people will want to see in their News Feeds. They decided to consult Yann LeCun, a notable AI researcher, to help them step up their game. They've since asked Yann to build the best artificial intelligence lab in the world.

Currently, Yann has put together a team of 30 AI research scientists and 15 engineers, though that number is expected to grow dramatically in the coming years.

The Goal

According to LeCun, even the most advanced artificial intelligence systems in existence today are dumb compared to humans. Granted, they have access to vast stores of knowledge, but they lack the common sense that humans come by naturally. One of Yann's primary goals, and perhaps his biggest obstacle, is creating an artificial intelligence system that can learn unsupervised. Currently, artificial intelligence systems learn only from the input humans give them. But humans learn simply by existing in the world and going about their day. To create true artificial intelligence, Facebook's AI system will have to learn the way humans learn.
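
The supervised/unsupervised distinction is worth making concrete. The sketch below is a toy illustration only (nothing resembling Facebook's actual systems, which use neural networks): it contrasts a learner that is handed human-provided labels with one that must find structure in raw numbers entirely on its own.

```python
# Toy contrast between supervised and unsupervised learning.
# Illustrative only -- real AI systems use neural networks, not this.

def supervised_classify(labeled_examples, x):
    """Supervised: a human supplies (value, label) pairs; the learner
    predicts the label of the nearest known example."""
    nearest = min(labeled_examples, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

def unsupervised_cluster(values, iters=20):
    """Unsupervised: no labels at all -- the learner must discover two
    groups in the raw numbers by itself (naive 1-D k-means, k=2)."""
    centroids = [min(values), max(values)]  # crude starting guesses
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            i = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            groups[i].append(v)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return sorted(centroids)

# Supervised: the label comes straight from human-provided examples.
print(supervised_classify([(1.0, "small"), (9.0, "large")], 2.5))  # small

# Unsupervised: two cluster centres found without any labels.
print(unsupervised_cluster([1, 2, 1.5, 9, 10, 9.5]))  # [1.5, 9.5]
```

The point of the contrast: the first function cannot do anything until a human has labelled data for it, while the second finds structure unaided, which is closer to the kind of learning LeCun describes humans doing naturally.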

Facebook M

In addition to working towards creating true artificial intelligence, Facebook is also working on a personal assistant much like Apple's Siri and Microsoft's Cortana. It's called Facebook M. The idea is to start implementing more practical uses of artificial intelligence as they work on true artificial intelligence. According to Yann and his team, Facebook M will be able to do far more than Siri or Cortana. They hope that within a couple of years, Facebook M will be able to make calls and stay on hold until a person comes on the line. Practical indeed.

Source: Popular Science

Autonomy: the real artificial intelligence threat

Artificial intelligence has always had a bad rap when it comes to depictions of it in television, film, and literature. It seems humans have a tendency to expect the worst possible outcome when they visualize a future where artificial intelligence is a part of our day-to-day lives. In most depictions, artificial intelligence quickly surpasses human intelligence and uses that advantage to attempt to destroy the entire species. While nearly everyone recognizes that these films and books are science fiction, there are plenty who do believe in the real possibility of an artificial intelligence takeover. And though many of these opinions might be written off as the delusions of conspiracy theorists, other voices are harder to ignore. Some of science's most brilliant minds, like Stephen Hawking, Bill Gates, and Tesla Motors CEO Elon Musk, as well as many AI researchers, have all warned about the dangers of artificial intelligence. Hawking even went so far as to say that full artificial intelligence could mean the end of the human race.

Is artificial intelligence really dangerous?

At a recent conference, Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence, stressed that artificial intelligence in itself isn't really dangerous. What the general public—those who haven't dedicated years to studying artificial intelligence—doesn't understand is that artificial intelligence can't suddenly alter its programming to turn against humans, as it so often does in Hollywood depictions. The real danger is in the programming it's given to begin with.

Humans still humanity’s greatest threat

What Stephen Hawking, Bill Gates, and Elon Musk really fear isn't artificial intelligence, but the humans tasked with creating and programming it. AI is nothing but intelligent software that enables machines to imitate more complex human behaviors. Full artificial intelligence, which we have so far been unable to create, is software that perfectly imitates human behavior and can surpass human intelligence. Either way, artificial intelligence is still limited to its programming.

The real threat of artificial intelligence is the autonomy, or freedom, that humans give it. According to AI researchers, autonomy isn't something that occurs naturally with artificial intelligence. Artificial intelligence couldn't simply take over our weapons systems, as it often does in movies, in order to wipe out humanity. Artificial intelligence only handles the tasks we give it. If humans grant it autonomy over weapons systems, then there is danger. What Stephen Hawking, Bill Gates, and Elon Musk are urging is that humans be careful about which tasks they assign to artificial intelligence and which they leave to humans. Even more importantly, they urge that humans develop methods of oversight so that humans retain control over artificial intelligence.

Source: Tech Insider

What does “artificial intelligence” even mean?

Artificial intelligence used to be something we only read about in science fiction or saw on the big screen. Today, everyone is at least familiar with the concept of artificial intelligence thanks to media attention. Though we have a pretty good grasp of what "artificial" means, coming up with a concrete definition of exactly what constitutes artificial intelligence is easier said than done. As artificial intelligence becomes a reality, a universally accepted, usable definition of artificial intelligence will be necessary in order to regulate its use in various circumstances. Any laws and policies designed to regulate artificial intelligence will be worthless without a widely accepted definition of the term.

Defining the terms

The word "artificial" is by far the easier word to define for legal purposes. It simply means "not occurring in nature, or not occurring in the same form in nature". In short, it covers anything man-made that imitates something naturally occurring. This definition of "artificial" even covers the possibility of using modified biological materials in the creation of artificially intelligent "machines".

The word “intelligence” is where it gets difficult. The difficulty in defining the term is nothing new. In the world of philosophy, the meaning of the word “intelligence” and the meaning of other words connected to it such as “consciousness”, “thought”, “free will”, and “mind” have been debated for centuries—back to the time of Aristotle.

Currently, in the field of artificial intelligence, researchers contrast artificial intelligence with human intelligence, which merely shifts the burden of definition onto psychologists. There is no solid definition of artificial intelligence that doesn't depend on a comparison to human intelligence.

While it is easy for even the average person to tell the difference between a programmable machine and a true AI system, the difficulty lies in differentiating between AI systems that merely give the appearance of intelligence and those systems that can be said to truly imitate human intelligence.

Sidestepping the question

Though coming up with a philosophical definition of the word "intelligence" that we can all agree on may still be centuries away, the need for a working definition of "artificial intelligence" is immediate. Would-be regulators of artificial intelligence outside the laboratory need to ask themselves, "What risks does artificial intelligence pose?"

Artificial intelligence is already making its way into our day-to-day lives. As AI becomes more mainstream, there will be important societal implications. Corporations can use AI to take jobs away from humans. AI systems may be used to commit crimes. Unless there is an acceptable definition of artificial intelligence that can be used to regulate its use—and soon—we may face the future unprepared.

Source: Popular Science

Can artificial intelligence feel empathy?

For centuries, humans have given a lot of thought to what separates them from the animals. Though there are a lot of differences between us and animals, many argue that it is our superior reasoning abilities that truly set us apart. In more recent years, we've turned our attention to what distinguishes humans from machines. In a short amount of time, artificial intelligence science has advanced so quickly that computers now seem more human than ever. The greatest obstacle in creating artificial intelligence is not creating something intelligent; the challenge is creating something that seems human. Throughout the short history of artificial intelligence science, a number of tests have been proposed to differentiate true artificial intelligence from a wannabe.

The Turing Test

Alan Turing, the British code breaker who inspired the movie The Imitation Game, was one of the pioneers of artificial intelligence science. He proposed a test he called the imitation game, later renamed the Turing Test, to determine whether a machine could be said to be artificially intelligent. The test is simple in concept but extremely difficult to actually pass: a machine carries on a conversation with a human acting as judge. If the human cannot distinguish the machine from a human through conversation alone (the judge doesn't get to see the machine, so it doesn't have to look human), the machine is said to have passed the Turing Test.

Tricking the Turing Test

What Alan Turing couldn’t have predicted is the devious nature of computer programmers who would set out to devise a machine that could trick his test rather than embody true artificial intelligence. Only in the last couple of years have machines been able to trick human judges into believing they’re carrying on a conversation with another human. The problem with these machines is they can do one thing, and one thing only.
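
That "one thing only" is pattern matching. A minimal ELIZA-style sketch (hypothetical code, not the program of any chatbot that has actually fooled judges) shows how a handful of canned rules can give the appearance of conversation with no understanding behind it:

```python
import re

# Minimal ELIZA-style responder: each rule is a (pattern, reply template)
# pair. Matching text is echoed back inside a canned reply, which creates
# an illusion of engagement with zero comprehension.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.I),       "Let's talk about you, not me."),
]

def respond(message):
    """Return the first matching canned reply, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback keeps the conversation going

print(respond("I feel tired today"))   # Why do you feel tired today?
print(respond("What do you think?"))   # Let's talk about you, not me.
```

A judge chatting briefly might find these deflections plausibly human, yet the program has no model of the conversation at all, which is exactly the gap between tricking the test and embodying intelligence.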

The Lovelace Test

Now that the Turing Test has been bested, a test proposed in 2001 by Selmer Bringsjord, Paul Bello, and David Ferrucci, called the Lovelace Test, is being used to distinguish man from machine. For the Lovelace Test, a human judge asks an artificially intelligent machine to create some piece of art: a poem, story, or picture. Next, the judge gives a criterion, for example: write a poem about a cat. If a machine can follow this direction, it's said to be sufficiently human.
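
Why following the direction is harder than it sounds can be seen in a toy sketch (hypothetical, illustrative only): a template-filling program can produce "a poem about a cat" on demand, but every line comes from a human-written template, so the creativity on display is really the programmer's, not the machine's.

```python
import random

# Toy "poem on demand" generator. It technically satisfies the judge's
# criterion (a poem mentioning the topic), but only by slotting words
# into a fixed, human-authored template -- no creativity of its own.
TEMPLATE = ("The {adj} {topic} sat by the door,\n"
            "it {verb} softly, then {verb2} once more.")

WORDS = {
    "adj":   ["sleepy", "curious", "velvet"],
    "verb":  ["purred", "yawned", "hummed"],
    "verb2": ["stretched", "blinked", "sighed"],
}

def poem_about(topic, rng=random):
    """Fill the fixed template with randomly chosen words."""
    return TEMPLATE.format(topic=topic,
                           adj=rng.choice(WORDS["adj"]),
                           verb=rng.choice(WORDS["verb"]),
                           verb2=rng.choice(WORDS["verb2"]))

print(poem_about("cat"))  # a two-line rhyme mentioning "cat"
```

A serious Lovelace Test judge would want output the machine's designers could not have simply baked in, which is precisely what this sketch fails to deliver.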

What about empathy?

Recently, there has been some thought about what society really wants out of an artificially intelligent machine. It's great if they can carry on a conversation or produce art, but what about those human emotions that set us apart from machines? In a recent panel discussion at Robotronica 2015, panelists discussed which human emotions it would be most important for artificial intelligence to acquire. Empathy was first on the list. If a machine could feel empathy, humans wouldn't need to fear artificial intelligence as we tend to do.

Source: phys.org/news/2015-08-human-emotions-artificial-intelligence.html

Implications of AI: legal responsibility and civil rights

Just a few years ago, the idea of artificially intelligent robots would have seemed like pure science fiction. The first mention of an automaton appears in Homer's Iliad. More recently, early science fiction writers like H.G. Wells and Isaac Asimov wrote about artificially intelligent robots. Today, we're treated to at least one movie each year that deals with the subject of artificial intelligence.

Only in the last few years has artificial intelligence begun to seem like a reality. Every day, artificial intelligence is more science and less fiction. There are already real-world applications for artificial intelligence, and it's beginning to take over jobs that were once handled by humans. Experts predict that artificial intelligence will continue to become more a part of our daily lives, with robots replacing humans in one third of today's jobs by 2025. We're only just now coming to terms with the implications that a future shared with artificial intelligence has in store.

The robot apocalypse

A favorite motif in artificial intelligence fiction is the robot apocalypse, in which artificial intelligence decides that humans need to go and turns its focus to obliterating all human life. Nearly every Hollywood movie about artificial intelligence in the last decade has included this plot line. Though these movies and stories are science fiction and pure speculation, many brilliant minds are concerned that there could be a real threat in creating artificial intelligence, especially if it's given control of our weapons systems.

Big names in the scientific community like Elon Musk (CEO of Tesla Motors) and Stephen Hawking, along with more than 1,000 other AI and robotics researchers have signed an open letter citing the dangers of using AI in weapons development.

What about the legal and social implications

Though Hollywood—and the general public—like to imagine worst-case scenarios, there are other important implications that haven't been given as much consideration. Consider the automated shopping robot designed by a Swiss art group, which was programmed to purchase products over the Darknet. It was able to purchase a Hungarian passport and some Ecstasy pills, among other illegal products, before it was "arrested" by Swiss police. Ultimately, no charges were brought against the robot or its creators, but the question remains: how will society deal with the criminal activities of artificially intelligent beings in the future, especially when they are acting on their own and not on the programming of humans? Will artificial intelligence be held legally responsible? If so, will it need the same rights that most people have in free countries, such as the right to legal counsel? Will we see artificially intelligent beings fighting for equal rights? Only the future can tell, but perhaps it is just as likely we'll see a robot civil rights movement as a robot apocalypse.

Source: Tech Crunch

Humor: Artificial intelligence’s greatest obstacle

It’s been said that the true test of mastering a foreign language is the ability to make a joke in that language. While a sense of humor is usually second nature for most native speakers, it’s surprisingly difficult—if not impossible—to teach. It’s so difficult, in fact, that some reason that the development of a sense of humor will be the ultimate test for artificial intelligence. To understand the difficulty in teaching artificial intelligence to be humorous, consider what goes into making a joke.

What’s in a joke?

On a recent trip to Australia, comedy writer David Misch observed two manta rays engaged in—shall we say—extracurricular activities. With perfect comedic timing, he quipped, "Hey! It's fifty shades of ray!" The joke led his friend, a former computer programmer interested in artificial intelligence, to wonder whether a computer could ever be programmed to make that joke—not merely be programmed to repeat it, but truly generate it if exposed to the same circumstances that David Misch was.

In the end, it was determined that in order for an artificially intelligent computer to make that joke, it would need to be able to perform numerous, instant calculations. It would need to connect the two very different topics of manta ray intercourse and human S and M, then access the entirety of pop culture references to human S and M, ultimately settling on Fifty Shades of Grey. Then it would need the ability to appreciate the pun, understand the rhyme of "ray" and "grey," and gauge the audience's ability to get the joke. Finally, artificial intelligence would need to do all of this in the blink of an eye to achieve good comedic timing (the joke wouldn't have been funny five minutes later).

The moral of this story is that a lot goes into the making of a good joke, and artificial intelligence is still far from being able to replicate the process.

Funny AI

Though artificial intelligence is still a long way from developing a sense of humor, that hasn't stopped humans from trying. Apple executives, for instance, were not overly thrilled to learn that those who programmed Siri, the iPhone's built-in personal assistant, had managed to work in a few jokes. Microsoft's counterpart, Cortana, is likewise programmed to give humorous responses to certain questions. Of course, the major difference is that these artificial intelligences are merely parroting back jokes that they were programmed to say in response to specific questions, not generating their own humor.

The ultimate test

Some have theorized that in order for AI to reach its full potential, humans will need to feel comfortable interacting with it. Developing a sense of humor will certainly need to be a part of that process. Of course, that’s easier said than done. For the time being, we’ll have to be content with Siri’s dry sense of humor that she inherited from computer programmers.

Source: Huffington Post

Could AI develop spirituality?

Artificial intelligence theorists spend a good deal of time thinking about how the creation of true artificial intelligence would affect society. Some brilliant minds, like Stephen Hawking and Bill Gates, are rather pessimistic about what a future with artificial intelligence could mean for humankind. Others are hoping for the best. Both sides predict an event called The Singularity.

The Singularity

The Singularity is a term artificial intelligence theorists use to describe a point in time where the development of intelligence is no longer biological. In other words, true intelligence can be created, and that intelligence can in turn build upon itself. Because this artificial intelligence would not be restricted by the need for biological evolution, as humans are, it could grow exponentially, quickly surpassing human intelligence and eventually becoming trillions of times more advanced.

Good AI, bad AI

If The Singularity did come about (and many brilliant minds like Hawking predict that it will, and soon), human intelligence would quickly be surpassed. For many theorists, it's only a question of when. But one unanswered question is whether this will be good or bad for humankind.

Hollywood movies depicting artificial intelligence often portray a dystopian future where artificial intelligence has altered its own directives and is now bent on eliminating humankind. But according to many theorists, the opposite reality could become true. As intelligence grows exponentially, AI could prove more and more useful to humans resulting in unimaginable advancements in technology.

The real question is whether an artificially intelligent being would seek to harm or help humans.

AI and religion

One question about AI that is only just now being talked about is whether AI could become religious. Humans have pondered their purpose in the universe and how they came to be since their beginning. If human intelligence can be boiled down to electrochemical reactions in the brain, then, theoretically, artificial intelligence, like human intelligence, could begin to ponder those same kinds of questions, leading it to a religion of sorts.

Marvin L. Minsky of MIT hypothesized that artificial intelligence might even be able to develop a "soul" of sorts. He jokes that perhaps artificial intelligence will one day stumble upon a computer science textbook, read about the development of artificial intelligence, and ultimately reject the idea that it was first created by humans, developing its own belief system about how it came to be.

If artificial intelligence were to become religious, we can only hope that they would choose a peaceful belief system that is inclusive towards humans rather than an exclusive and violent one.

Source: Huffington Post

What Hollywood gets right (and wrong) about artificial intelligence

Since the concept of artificial intelligence was first dreamed of, Hollywood has made a tremendous amount of money portraying it in film. Movie-goers are naturally interested in artificial intelligence and like to imagine the worst possible outcomes that could result as artificial intelligence becomes a more common part of our daily life.

Of course, Hollywood has been known to get a few things wrong when it comes to portraying scientific technologies on the big screen. What is really surprising is that it actually gets a few things right. Here's a look at what Hollywood gets right and wrong about artificial intelligence.

Mind uploading

One of the most common tropes in artificial intelligence fiction is the concept of mind uploading, or digital immortality. The idea behind mind uploading is that humans can artificially become immortal by uploading human consciousness into a machine or robot of some kind. The most recent Hollywood film to make use of this trope was Chappie. Though the concept has enjoyed a lot of popularity in Hollywood films, artificial intelligence experts say that it's also one of the most inaccurate. Currently, science is nowhere near being able to upload human consciousness into a machine. Though a few theorize it could possibly happen far in the future, the majority in the scientific community believe it's nothing but science fiction.

Changes in agenda

Asimov's First Law of Robotics states that "A robot may not injure a human being or, through inaction, allow a human being to come to harm." But there is no shortage of movies in which robots governed by artificial intelligence invoke something like Asimov's Zeroth Law, diverging from their programmed agenda to achieve what they perceive as a greater good. In I, Robot, for instance, the robots stage a revolution against the humans. According to the scientific community, movies like Steven Spielberg's AI are more accurate because artificial intelligence cannot stray from its programmed instructions.

Robot feelings

Another favorite trope in artificial intelligence fiction is the robot that develops human emotions. Of all the artificial intelligence myths, this one is probably the most subjective. According to experts in artificial intelligence, there is really no science to prove or disprove the possibility of robots developing human emotions. The question largely depends on how we define human emotion. Some theorize that emotions are nothing more than the result of electrical and chemical reactions in the brain. In that sense, emotion could in theory be recreated in a robot, though that science is probably decades away.

Source: OuterPlaces.com