Tag Archives: artificial intelligence technology

A look at how the world’s leading tech companies are using AI

Artificial intelligence is the biggest thing to happen in the world of technology since the invention of computers. AI market spending has gone through the roof: this year it’s an $8 billion market, and it’s projected to reach $47 billion by 2020. Exciting things are coming. Here are some of the ways the world’s top tech companies are going to introduce AI into our daily lives.

Google

Google is currently leading the self-driving vehicle market and will probably be the first to put a fully autonomous vehicle in the hands of consumers. Google also created and open-sourced TensorFlow, a machine learning library that anyone can use to experiment with building machine learning programs.
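
To give a sense of what the library offers, here is a minimal sketch using TensorFlow’s Keras API; the model shape and the random placeholder data are illustrative assumptions, not anything Google has published.

    import numpy as np
    import tensorflow as tf

    # Placeholder data: 100 samples with 4 features each, binary labels.
    x = np.random.rand(100, 4).astype("float32")
    y = np.random.randint(0, 2, size=(100,))

    # A tiny feed-forward classifier defined and trained in a few lines.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=5, verbose=0)
    print(model.predict(x[:3]))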

Apple

Apple’s digital assistant, Siri, relies heavily on machine learning to accurately provide information and assistance for iPhone and iPad users.

Microsoft

Microsoft has its own version of Siri called “Cortana.” If Microsoft’s claims are to be believed, Cortana is about to get a major update that would make her the most accurate speech recognition machine in existence.

Intel

Intel recently purchased chipmaker Movidius and Nervana Systems, a deep learning AI startup. Intel is trying to create the first line of CPUs built around neural network architectures.

Don’t let big companies have all the fun

The year 2016 has been an unprecedented one for artificial intelligence startups, which have raised a record amount of funding. Expect to see a number of exciting AI startups doing big things in the coming years.

Source: Tech Vibes

AI is here, but what do everyday people think of it?

Now that AI is making its way into our everyday lives, there has been a lot of attention paid to the various ways it will affect society. While we’ve heard official statements from the White House, requests for more information, and statements from AI developers and companies, we haven’t heard much from everyday people about their thoughts on artificial intelligence. One research company decided to conduct a survey of 2,100 consumers from five different countries to gauge their feelings about AI. Here are some of the results:

  • 45% of consumers reported that they felt AI’s impact on society would be positive compared to just 7% who felt it would be negative.

  • 52% of consumers said that they felt AI would impact their personal life positively compared to just 7% who felt it would affect their personal life negatively.

  • Though two-thirds of respondents said they knew something about artificial intelligence, just 18% said they knew a lot. 22% of people said their first impression of AI was “robots”, suggesting that they knew very little.

  • 92% of people believe that general artificial intelligence will eventually arrive, compared to just 8% who say AI is science fiction and will never materialize. Understanding and acceptance of AI is, not surprisingly, correlated with age, with millennials being the most likely to want faster AI development and baby boomers the least likely.

  • The number one concern that consumers have regarding AI is the potential for job loss. Only 18% said they thought the development of AI would lead to job creation. Other fears include an increase in cyber-attacks, stolen data, and invasion of privacy.

Though these fears aren’t completely unjustified, history and current data suggest they may be blown out of proportion. Perhaps President Obama said it best in a recent interview: “I tend to be optimistic—historically we’ve absorbed new technologies, new jobs are created, and standards of living go up.”

It’s been a huge year for AI startups, which have raised a record amount of funding for AI technology development. This suggests that the job market may see net gains rather than losses in the long run.

Source: HBR

Elon Musk’s next project: Rosie from the Jetsons

OpenAI, the $1 billion artificial intelligence playground backed by Tesla Motors CEO Elon Musk, has turned its attention to a new project: creating its own domestic robot. If they can make it a reality, it’s every homeowner’s dream come true.

A kind of test

According to the nonprofit research group behind the project, the overarching goal of creating such a robot isn’t to get humans out of doing their chores—as cool as that is—rather, they view it as a kind of test to determine whether artificial intelligence technology is progressing in the right direction. In other words, they want to make sure they can create an artificially intelligent machine that won’t try to kill us and figured they’d start by creating a robot that could do our chores for us.

The challenge

Machines that do our cleaning for us are nothing new. Dishwashers, washing machines, and dryers have been doing that for years. Even ovens have a self-cleaning function these days. Then there’s the Roomba, which runs around like a thing possessed keeping our floors spotless. The difference between these machines and the one OpenAI wants to create is that the machines we’re already using can each do only one thing, and they don’t think; they merely perform the one function they were programmed to do. The robot OpenAI is planning would be a “general purpose” robot, one that could “think” about which chores need to be done and set about doing them in the most efficient way possible. Basically, Rosie from the Jetsons.

OpenAI sees this project as just one small step to creating truly intelligent machines. If it’s a success they’ll turn their attention to the bigger problems facing humanity.

Source: The Independent

Artificial intelligence may mean the end of cyber threats

Cyber-security has come a long way in the last twenty years. In the 90s, the predominant model for securing operating systems was the castle-and-moat approach: everything inside the firewall was trusted, and anything outside it wasn’t. But emerging internet services like email meant that things needed to get through the wall. This was the beginning of the antivirus era of cyber-security, an era we are still in. Antivirus works by identifying a threat, creating a signature, and distributing that signature so that every other computer with antivirus software installed can identify the malware and defend against it.
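
As a rough illustration of that signature approach, here is a minimal sketch in Python; the signature set and file paths are placeholders, not any vendor’s actual database.

    import hashlib

    # Hypothetical database of SHA-256 signatures distributed by an antivirus vendor.
    KNOWN_MALWARE_SIGNATURES = {
        "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",  # placeholder hash
    }

    def file_signature(path):
        """Compute the SHA-256 hash of a file's contents."""
        sha256 = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                sha256.update(chunk)
        return sha256.hexdigest()

    def is_known_malware(path):
        """Flag a file only if its exact signature is already in the database."""
        return file_signature(path) in KNOWN_MALWARE_SIGNATURES

The weakness is visible in the last line: a file is flagged only if its exact hash has been seen before, which is why one-off or slightly modified malware can slip through.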

A new era in cyber-security

Though the cyber-security model hasn’t changed much since the advent of antivirus software, that could be about to change, thanks to advancements in cyber-threats. Most people creating malware use it once and never again, which means identifying it and protecting against it in the future isn’t as helpful as it once was. A lot of malware is advanced enough to slip past signature-based identification techniques. Finally, the sheer volume of cyber threats continues to grow at an exponential rate, and it’s getting harder to stay on top of them.

Deep learning and the future of cyber-security

Advancements in the field of deep learning allow artificial intelligence developers to create machines that can think like humans but process vast amounts of data quickly. Artificial intelligence researchers are hopeful that AI may be the answer to the growing cyber threat problem. AI could theoretically identify and eliminate cyber-threats as fast as they can be created.

While previous methods for protecting against cyber threats have been reactionary (the malware attacks, the antivirus software identifies it, and then other computers are made immune to it), cyber-security led by AI could take a more proactive approach to dealing with cyber threats.
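
A minimal sketch of what that proactive, learning-based approach could look like is below; the features, training data, and model choice are illustrative assumptions rather than any vendor’s actual system.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Placeholder features for labelled samples (e.g. file size, entropy,
    # number of imported functions): rows are files, columns are features.
    X_train = np.random.rand(200, 3)
    y_train = np.random.randint(0, 2, size=200)  # 0 = benign, 1 = malicious

    # A small neural network learns patterns rather than exact signatures.
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    model.fit(X_train, y_train)

    def looks_malicious(features):
        """Score a never-before-seen file by its features, not its hash."""
        return bool(model.predict(np.asarray(features).reshape(1, -1))[0])

Because the model scores files by learned features instead of exact signatures, it can in principle flag malware it has never seen, which is the proactive shift described above.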

Source: Recode

Facebook a serious contender in the artificial intelligence race

The last several years have been huge for the artificial intelligence field. There have been huge leaps in artificial intelligence technology, and it’s generating more and more attention for the field. As artificial intelligence begins to seem more like a reality and becomes a part of our day-to-day life, more companies are joining the race to be the first to create true artificial intelligence. Google and IBM are two of the most notable companies working on artificial intelligence. Companies like Apple and Microsoft have been developing their AI personal assistants, Siri and Cortana, respectively. But behind the scenes and unknown to most, Facebook has also recently joined the artificial intelligence race, and they’re becoming a serious contender.

While most think of Facebook as a social media company, Facebook has actually grown to become one of the most advanced technology research companies in the world. They’ve turned their attention to creating computers that are less like linear, logical machines and more like humans.

How it all started

When Facebook founder Mark Zuckerberg got together with Facebook leadership to discuss what they could do to stay relevant for the next 10-20 years, the idea of artificial intelligence came up. Facebook already uses some simplified artificial intelligence to predict what people will want to see in their News Feeds. They decided to consult Yann LeCun, a notable AI researcher, to help them step up their game. They’ve since asked Yann to build the best artificial intelligence lab in the world.

Currently, Yann has put together a team of 30 AI research scientists and 15 engineers, though that number is expected to grow dramatically in the coming years.

The Goal

According to LeCun, even the most advanced artificial intelligence systems in existence today are dumb compared to humans. Granted, they have access to unlimited stores of knowledge, but they lack the common sense that humans come by naturally. One of Yann’s primary goals, and perhaps his biggest obstacle, is creating an artificial intelligence system that can learn unsupervised. Currently, the only way artificial intelligence systems learn is by humans giving them labeled input. But humans learn simply by existing in the world and going about their day. To create true artificial intelligence, Facebook’s AI system will have to learn the way humans learn.
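
To make the distinction concrete, here is a minimal sketch, on random placeholder data, of the two learning modes: supervised learning, which needs human-provided labels, and unsupervised learning, which finds structure on its own. It illustrates the general idea, not Facebook’s approach.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    X = np.random.rand(300, 2)  # raw observations

    # Supervised: a human must supply a label for every example.
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # stand-in for human-provided labels
    supervised = LogisticRegression().fit(X, y)

    # Unsupervised: the algorithm groups the same data with no labels at all.
    unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)
    print(unsupervised.labels_[:10])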

Facebook M

In addition to working towards creating true artificial intelligence, Facebook is also working on a personal assistant much like Apple’s Siri and Microsoft’s Cortana. It’s called Facebook M. The idea is to start implementing more practical uses of artificial intelligence as they work on true artificial intelligence. According to Yann and his team, Facebook M will be able to do far more than Siri or Cortana. They hope that within a couple of years, Facebook M will be able to make calls, and stay on hold for people until a person comes on the line. Practical indeed.

Source: Popular Science

Autonomy: the real artificial intelligence threat

Artificial intelligence has always had a bad rap when it comes to depictions of it in television, film, and literature. It seems humans have a tendency to expect the worst possible outcome when they visualize a future where artificial intelligence is a part of our day-to-day lives. In most depictions, artificial intelligence quickly surpasses human intelligence and uses that advantage to attempt to destroy the entire species. While everyone pretty much recognizes that these films and books are science fiction, there are plenty who do believe in the real possibility of an artificial intelligence takeover. And though many of these people’s opinions might be written off as the delusions of crazy conspiracy theorists, other voices are harder to ignore. Some of science’s most brilliant minds, like Stephen Hawking, Bill Gates, and Tesla Motors CEO Elon Musk, as well as many AI researchers, have warned about the dangers of artificial intelligence. Hawking even went so far as to say that full artificial intelligence could mean the end of the human race.

Is artificial intelligence really dangerous?

At a recent conference, Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence, stressed that artificial intelligence in itself isn’t really dangerous. What the general public (those who haven’t dedicated years to studying artificial intelligence) doesn’t understand is that artificial intelligence can’t suddenly alter its programming to turn against humans, as it often does in Hollywood depictions of artificial intelligence. The real danger is in the programming it’s given to begin with.

Humans still humanity’s greatest threat

What Stephen Hawking, Bill Gates, and Elon Musk really fear isn’t artificial intelligence, but the humans tasked with creating and programming it. AI is nothing but intelligent software that enables machines to imitate more complex human behaviors. Full artificial intelligence, which we have so far been unable to create, is software that perfectly imitates human behavior and can surpass human intelligence. But either way, artificial intelligence is still limited to its programming.

The real threat of artificial intelligence is the autonomy, or freedom, that humans give it. According to AI researchers, autonomy isn’t something that occurs naturally with artificial intelligence. Artificial intelligence couldn’t simply take over our weapons systems, as it often does in movies, in order to wipe out humanity. Artificial intelligence only handles the tasks we give it. If humans grant it autonomy over weapons systems, then there is danger. What Stephen Hawking, Bill Gates, and Elon Musk are urging is that humans be careful about which tasks they assign to artificial intelligence and which they leave to humans. Even more importantly, they urge that humans develop safeguards so that humans retain control over artificial intelligence.

Source: Tech Insider

What does “artificial intelligence” even mean?

Artificial intelligence used to be something we only read about in science fiction or saw on the big screen. Today, everyone is at least familiar with the concept of artificial intelligence thanks to media attention. Though we have a pretty good grasp of what “artificial” means, coming up with a concrete definition of exactly what constitutes artificial intelligence is easier said than done. As artificial intelligence begins to become a reality, a usable, universally accepted definition of artificial intelligence is going to be necessary in order to regulate its use in various circumstances. Any laws and policies designed to regulate artificial intelligence will be worthless without a widely accepted definition of the terms.

Defining the terms

The word “artificial” is by far the easier word to define for legal purposes. It simply means “not occurring in nature or not occurring in the same form in nature”. In short, it covers anything man-made that imitates something naturally occurring. This definition of “artificial” even covers the possibility of using modified biological materials in the creation of artificially intelligent “machines”.

The word “intelligence” is where it gets difficult. The difficulty in defining the term is nothing new. In the world of philosophy, the meaning of the word “intelligence” and the meaning of other words connected to it such as “consciousness”, “thought”, “free will”, and “mind” have been debated for centuries—back to the time of Aristotle.

Currently, in the field of artificial intelligence, researchers contrast artificial intelligence with human intelligence. This merely places the burden of proof onto psychologists. There is no real solid definition of artificial intelligence that doesn’t depend on a comparison to human intelligence.

While it is easy for even the average person to tell the difference between a programmable machine and a true AI system, the difficulty lies in differentiating between AI systems that merely give the appearance of intelligence and those systems that can be said to truly imitate human intelligence.

Sidestepping the question

Though coming up with a philosophical definition of the word “intelligence” that we can all agree on may still be centuries away, the need for a working definition of “artificial intelligence” is immediate. Would-be regulators of artificial intelligence outside the laboratory need to ask themselves, “What risks does artificial intelligence pose?”

Artificial intelligence is already making its way into our day-to-day lives. As AI becomes more mainstream, there will be important societal implications. Corporations can use AI to take jobs away from humans. AI systems may be used to commit crimes. Unless there is an acceptable definition of artificial intelligence that can be used to regulate its use, and soon, we may face the future unprepared.

Source: Popular Science

Can artificial intelligence feel empathy?

For centuries, humans have given a lot of thought to what separates them from the animals. Though there are a lot of differences between us and animals, many argue that it is our superior reasoning abilities that truly set us apart. In more recent years, we’ve turned our attention to what distinguishes humans from machines. In a short amount of time, artificial intelligence science has advanced so quickly that computers now seem more human than ever. The greatest obstacle in creating artificial intelligence is not creating something intelligent; the challenge is creating something that seems human. Throughout the short history of artificial intelligence science, a number of tests have been proposed to differentiate between true artificial intelligence and a wannabe.

The Turing Test

Alan Turing, the British code breaker who inspired the movie The Imitation Game, was one of the pioneers of artificial intelligence science. He proposed a test he called the imitation game, later renamed the Turing Test, to determine whether a machine could be said to be artificially intelligent. The test is simple in concept but extremely difficult to actually pass. It works by having a machine carry on a conversation with a human acting as judge. If the human cannot distinguish the machine from a human through conversation alone (the judge doesn’t get to see the machine, so it doesn’t have to look human), then it is said to have passed the Turing Test.
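
Here is a minimal sketch of that setup: a judge exchanges messages with a hidden respondent and then guesses whether it was human. The machine_reply function is a hypothetical stand-in for any conversational program, not a real contender.

    import random

    def machine_reply(message):
        """Hypothetical chatbot; a real contender would be far more sophisticated."""
        canned = ["Interesting, tell me more.", "Why do you say that?", "I agree."]
        return random.choice(canned)

    def run_imitation_game(rounds=3):
        for _ in range(rounds):
            prompt = input("Judge: ")
            print("Respondent:", machine_reply(prompt))
        verdict = input("Was the respondent human or machine? ")
        print("The machine passes only if the judge answered 'human'. You said:", verdict)

    run_imitation_game()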

Tricking the Turing Test

What Alan Turing couldn’t have predicted is the devious nature of computer programmers who would set out to devise a machine that could trick his test rather than embody true artificial intelligence. Only in the last couple of years have machines been able to trick human judges into believing they’re carrying on a conversation with another human. The problem with these machines is they can do one thing, and one thing only.

The Lovelace Test

Now that the Turing Test has been bested, a test proposed in 2001 by Selmer Bringsjord, Paul Bello, and David Ferrucci, called the Lovelace Test, is being used to distinguish man from machine. For the Lovelace Test, a human judge asks an artificially intelligent machine to create some piece of art: a poem, story, or picture. Next, the human judge gives a criterion, for example, “write a poem about a cat”. If the machine can follow this direction, it’s said to be sufficiently human.
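
As a toy illustration of the procedure, here is a minimal sketch; generate_poem is a hypothetical stand-in for a creative system, and the keyword check is a deliberately crude stand-in for a human judge applying the criterion.

    def generate_poem(topic):
        """Hypothetical creative system; returns a (very short) poem about the topic."""
        return "Oh {0}, you sit in the sun,\nchasing dreams until day is done.".format(topic)

    def lovelace_trial(topic):
        poem = generate_poem(topic)
        print(poem)
        # A human judge would decide whether the criterion was met;
        # here we only check that the requested topic appears at all.
        return topic.lower() in poem.lower()

    print("Criterion satisfied:", lovelace_trial("cat"))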

What about empathy?

Recently, there has been some thought about what society really wants out of an artificially intelligent machine. It’s great if they can carry on a conversation or produce art, but what about those human emotions that set us apart from machines? In a recent panel discussion at Robotronica 2015, panelists discussed which human emotions it would be most important for artificial intelligence to acquire. Empathy was first on the list. If a machine could feel empathy, humans wouldn’t need to fear artificial intelligence the way we tend to.

Source: phys.org/news/2015-08-human-emotions-artificial-intelligence.html

Implications of AI: legal responsibility and civil rights

Just a few years ago, the idea of artificially intelligent robots would have seemed like pure science fiction. The first mention of an automaton is in Homer’s Iliad. More recently, early science fiction writers like H.G. Wells and Isaac Asimov wrote about artificially intelligent robots. Today, we’re treated to at least one movie each year that deals with the subject of artificial intelligence.

Only in the last few years has artificial intelligence begun to seem like a reality. Every day artificial intelligence is more science and less fiction. Already there are real-world applications for artificial intelligence, and AI is beginning to take over jobs that were once handled by humans. In the next few years, experts predict that artificial intelligence will continue to become more a part of our daily lives. It’s predicted that by 2025, robots will be replacing humans in one-third of today’s jobs. We’re only just now coming to terms with the implications that a future shared with artificial intelligence has in store.

The robot apocalypse

A favorite motif in artificial intelligence fiction is the robot apocalypse, in which artificial intelligence decides that humans need to go and turns its focus to obliterating all human life. Nearly every Hollywood movie about artificial intelligence in the last decade has included this plot line. Though these movies and stories are science fiction and pure speculation, many brilliant minds are concerned that there could be a real threat in creating artificial intelligence, especially if it is given control of our weapons systems.

Big names in the scientific community like Elon Musk (CEO of Tesla Motors) and Stephen Hawking, along with more than 1,000 other AI and robotics researchers, have signed an open letter citing the dangers of using AI in weapons development.

What about the legal and social implications?

Though Hollywood and the general public like to imagine worst-case scenarios, there are other important implications that haven’t been given as much consideration. Consider the example of the automated shopping robot designed by a Swiss art group. It was programmed to purchase illegal products over the Darknet, and it was able to buy a Hungarian passport and some Ecstasy pills, as well as a few other illegal products, before it was “arrested” by Swiss police. Ultimately, no charges were brought against the robot or its creators, but the question remains: how will society deal with the criminal activities of artificially intelligent beings in the future, especially when they are acting on their own and not on the programming of humans? Will artificial intelligence be held legally responsible? If so, will it need the same human rights that most people have in free countries, such as the right to legal counsel? Will we see artificially intelligent beings fighting for equal rights? Only the future can tell, but perhaps it is just as likely we’ll see a robot civil rights movement as a robot apocalypse.

Source: Tech Crunch