A brief history of artificial intelligence

The concept of artificial intelligence began as pure fiction, something to be imagined but never actually built. Today, that’s no longer the case. Artificial intelligence is real, and there are already real-world applications where it is helping us solve some of the biggest problems facing humanity. We’re still a ways off from creating true artificial intelligence, but we’re getting closer every day. Here’s a look at how artificial intelligence has developed through the years.

Greek myths

The earliest known reference to something we could call artificial intelligence dates back to the ancient Greeks. According to their mythology, Hephaestus, the blacksmith of Olympus, forged lifelike metal automatons built to carry out specific tasks.

The birth of science fiction

The concept of artificial intelligence stayed relatively quiet until the early 19th century, when Mary Shelley wrote Frankenstein, considered by many to be the first true science fiction novel because of its emphasis on the use of scientific methods and equipment to create a semi-intelligent monster. The novel paved the way for more science fiction, some of it dealing with the theme of robots taking over humanity.

The first computer that was never made

Charles Babbage, a Victorian-era inventor, designed an early computing machine (on paper, anyway) in 1822. He called it the Difference Engine, and it was designed to carry out mathematical calculations automatically. He died before he could build the device, but modern reconstructions based on his designs have shown that the machine would have worked had it been built in his lifetime.

The Turing Machine and the Turing Test

Alan Turing was a brilliant mathematician who helped bring World War II to an end by designing machines that broke German codes. He is considered a father of computer science and artificial intelligence. He is also famous for proposing the Turing Test, which asks whether a computer’s responses in conversation can be distinguished from a human’s; a machine that can’t be told apart is said to pass the test.

Dartmouth Conference

In the summer of 1956, the scientific field of artificial intelligence was born over the course of a month-long conference held at Dartmouth College. The boundaries of the field were set and plans were made to recreate human intelligence in a machine.

Hot and cold seasons

Researchers left the Dartmouth Conference with a lot of research money and optimism, but they quickly realized that creating artificial intelligence was going to be much harder than they had thought. This led to discouragement, funding cuts, and slow progress in the 1970s and 1980s (periods now known as “AI winters”), though each downturn was followed by a resurgence of interest.

AI in Hollywood and the real world

This brings us to today. Artificial intelligence is now a part of our pop culture thanks to dozens of Hollywood movies that deal with the concept. Many of these portrayals are negative, depicting robots overthrowing humanity, but some are more nuanced and treat the subject in a very thoughtful way. Artificial intelligence is also part of our daily lives thanks to personal assistants like Siri and Cortana.

Source: History Extra

Do computers still need us?

One of the most exciting things going on in the world of tech as we approach 2016 is the advancement of artificial intelligence technology. One of the biggest steps taken in the industry is the improvement of so-called “cognitive computing”: systems that can interpret the human meaning of questions posed in natural language. In other words, they don’t need to be given specific commands, and they aren’t programmed to give a set response to a predetermined question phrased in a particular way. They “hear” human speech, interpret what is being asked or said, and determine an appropriate response.

Does that make us obsolete?

As with any major scientific advancement, the idea of artificial intelligence is scary to a lot of people. One of those fears, often depicted in our literature and movies, is that artificial intelligence could grow smart enough to rise up and overthrow humanity. That level of artificial intelligence is probably still very far away.

A more legitimate fear, and a fear that could be realized very soon, is the idea that artificial intelligence can grow smart enough that it no longer needs us. If we create something autonomous, it can take our jobs, do them better than we can, and make us obsolete. For some, that may be a fate worse than death at the hands of a robot apocalypse.

According to developers, the answer is “no”

In a recent survey of 529 artificial intelligence developers, 47% said that machine learning software still requires human input some of the time, and just 2.6% reported that human input wasn’t required at all. In other words, roughly 97.4% of the developers building the most advanced systems in the world say those systems still need the human touch.

You can expect the cognitive computing industry to continue to grow in the coming years, and artificial intelligence to become even more advanced. But according to leading researchers in the field, who understand artificial intelligence better than anyone else, even as these computers and programs are designed and released into the world, they will still need a team of humans to keep them working properly. It seems that humans don’t need to worry about losing their jobs to artificial intelligence anytime soon.

But now that computers can interpret and respond to questions, it’d be interesting to see how they’d answer the question: “Do you still need us?”

Source: Forbes

Does AI spell doom for the workforce?

For approximately a hundred years, humans have theorized about the possibility of artificial intelligence and the implications such an invention would bring. In literature and on the big screen, we read or see stories in which artificial intelligence overthrows humanity and tries to wipe us off the face of the earth. Authors and movie producers aren’t the only ones spreading fear of AI: some of the most brilliant minds of today, like Stephen Hawking and Elon Musk, have also issued warnings about the potential for AI to turn against humanity if given too much independence.

But another fear, and perhaps one that’s a little more understandable, is that AI could one day leave millions of people jobless as their work is outsourced to artificially intelligent machines. This thought isn’t new; it was first raised in the 70s and 80s, when computers were becoming mainstream, and the discussion has been rekindled by recent advancements in artificial intelligence. The question, then, is: do we really need to be afraid? The answer isn’t so simple.

The computer revolution scare

When computers were starting to become more affordable and mainstream in the late 70s and through the 80s, many people feared that companies would fire humans and replace them with computers that could do the work people did. Because computers were cheaper in the long run than paying employees, it wasn’t hard to believe that companies might favor computers over human workers.

But of course, we never saw the widespread job loss that many predicted. Computers did have a huge impact on the workforce, but that impact was largely positive: though computers replaced some jobs, their presence in the workplace created many more jobs than it eliminated.

Will the robot revolution be different?

Though similar fears have been put to rest in the past, those warning of an impending job crisis due to the advancement of artificial intelligence and robotics say this time will be different. Already artificial intelligence is demonstrating its ability to perform tasks that could in theory render many human professions obsolete. For instance, driverless cars are already a reality that will eventually become mainstream. Who needs bus or taxi drivers when buses and taxis can be outfitted with a computer that can follow a specified route or take people to a specified destination?

Of course, just as with the computer revolution, a robot revolution would certainly create jobs as well, since people would be needed to design, build, and maintain the machines.

Blue collar vs. white collar jobs

Though the robot revolution will certainly have a large impact on the workforce, it’s still unclear what that impact will be. Some argue that blue collar jobs face the largest risk because the work is more routine and more easily automated. But others are quick to point out that as artificial intelligence grows more capable, white collar jobs may also be at stake, perhaps even more so than blue collar jobs. In the end, people in the workforce will just have to wait and see what the robot revolution brings.

Source: Inquisitr

Super Mario Bros: The ultimate artificial intelligence test

If you were to make a list of the greatest video games ever made, the original Super Mario Bros. made for the Nintendo Entertainment System would have to be toward the top of that list. Thanks to original game mechanics and challenging level designs, Super Mario Bros. was an instant classic and remains popular today—30 years later. But retro-gaming enthusiasts aren’t the only ones still playing Super Mario Bros. As it turns out, the classic video game is a huge hit with artificial intelligence developers, who often use it to test the intelligence of their systems.

How the test works

Artificial intelligence researchers use Super Mario Bros. to test AI in a couple of different ways. The first is to program an AI system to beat the game from beginning to end. To do this, the AI system needs to learn the nuances of the game in order to time jumps perfectly, avoid enemies, and reach the end of each level before time expires. From 2009 to 2012, AI researchers even held a Mario AI competition in which competitors tried to design AI that could complete the game in the fastest time.

AI researchers are also programming artificial intelligence to design playable levels that will provide players with a challenge without being too difficult. Researchers believe that artificial intelligence may soon be used to assist humans with level design.

Why Super Mario Bros.

You may be wondering, out of all the ways to test an AI system, why use a video game? As it turns out, video games are an ideal test bed for artificial intelligence because beating them requires logic, creativity, situational awareness, and decision-making skills. All of these skills are necessary in the quest for true artificial intelligence. But of the tens of thousands of video games to choose from, why Super Mario Bros.? AI researchers cite two primary reasons: first, they just love playing the game like the rest of us, and second, the classic platformer’s perfect mix of complexity and simplicity, combined with finely tuned mechanics, makes it an ideal game for artificial intelligence testing.

While early AI systems were tested on Atari games, those have become too simple thanks to advancements in artificial intelligence. Side-scrolling games like Super Mario Bros. present more of a challenge because a good portion of the level design can’t be observed by the AI at any given time.

Source: Motherboard.com