What Comes After AI?

    The progress in technology’s ability to develop algorithms that can learn and adapt is staggering. Some say that because immense computational power is now available not only to programmers and data scientists but also to mathematicians, biologists, philosophers, teachers, and people outside academia, Machine Learning has been democratized and Artificial Intelligence has grown exponentially.

    The “Do It Yourself” movement has advanced into the realm of hardware. We find average people building devices to make their homes smart, just as we build self-driving cars, engineer rockets that can reach the limits of Earth’s stratosphere, create robots that can recognize their environment, and much more.

    Our goals for Artificial Intelligence have never seemed more within reach than they do right now; at the same time, AI still has a long way to go to be widely applicable and useful. This raises the question: what are we trying to do with AI? Why is AI needed, and what problem are we trying to solve?

    Key AI Challenge

    A key challenge for AI is that it continues to be very domain-centric. A self-driving car, for example, can do just that: drive itself. But it can’t pick up your children from school if you can’t get there yourself. It can’t pick up your groceries without you. The hardware and software are designed and built for only one thing: to drive you to the destination you request.

    Most robot manufacturers are focused on enabling a robot to accomplish specific tasks, such as moving across a variety of terrains and environments without falling, distinguishing a cat from a dog, or understanding human language, spoken or written.

    Similarly, in software development, learning algorithms applied to problems in big data, predictive analytics, and advanced statistical analysis, among others, are still very much confined to a particular function for a specific case. For example, a predictive algorithm developed for credit card fraud can’t be used for other types of fraud, and it will not be applicable at all to predictions in other sectors such as healthcare or manufacturing.
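
    To make that concrete, here is a minimal sketch, with invented features and data, of how tightly such a model is coupled to its domain. The schema (transaction amount, merchant category, hours since the last transaction) is hypothetical; the point is that everything the model learns refers to those columns and that label, and nothing else.

```python
# A minimal sketch (hypothetical features and data) of how tightly a fraud
# model is coupled to its domain. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical credit-card transactions: [amount, merchant_category_id,
# hours_since_last_txn]. Labels: 1 = fraud, 0 = legitimate.
X_fraud = rng.normal(size=(1000, 3))
y_fraud = (X_fraud[:, 0] + 0.5 * X_fraud[:, 2] > 1.0).astype(int)

model = LogisticRegression().fit(X_fraud, y_fraud)

# The coefficients only mean something relative to this schema: "weight per
# dollar of transaction amount", and so on. Feed the model a hospital record,
# say [patient_age, num_diagnoses, length_of_stay], and it will happily emit
# a "fraud probability" that is statistically meaningless, because nothing
# it learned refers to that domain.
X_hospital = rng.normal(size=(1, 3))
print(model.predict_proba(X_hospital))  # runs, but the output is nonsense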

    AlphaGo can play the game of Go very well, but it can’t play chess (a much simpler game), or poker, or any of the other games that human beings can learn to play within minutes of being exposed to them.

    Bottom line? We are building smarter machines that are still far from “intelligent.”

    Current AI Projects are Self-limiting

    Part of the concern is that, while we have an overwhelming need to develop AI applications, “AI” means different things to different people, companies, and research projects. Creators set out to design, build, and invest in solving a single problem, or a collection of related problems.

    A car that drives itself is not a sign of intelligence. It is a machine, a car with mechanical, electronic, and software components, that drives itself from A to B, hopefully without crashing and on time (although neither can be guaranteed). If we set out to develop an algorithm that detects credit card fraud for one company, we shouldn’t expect it to predict healthcare costs for a hospital.

    Nobody has ever started an AI research project by stating that they want to develop an algorithm that can solve all problems. In other words, no research project today is working on developing a robot capable of driving a car, picking up groceries, predicting credit card fraud, selecting the perfect gift for someone, and cleaning the house, all within the same scope, and on the same day.

    The day just described, however, is a typical day in the life of a person working for a credit card company. He or she is an intelligent person, able to make millions of decisions, starting with the most logical sequence in which to accomplish those activities, what tools to use, where to go, how fast to walk, and which route to drive.

    Still, there are numerous separate projects working to solve each of these tasks individually. Hopefully, they will perform as well as, or even better than, humans in each of those areas.

    But that is not intelligence. That is a learned skill.

    In reality, to develop AI is to ask the question: “Can I build a machine that can replace me?”

    And even if we could build such a machine, there is one kind of intelligence that is a much bigger challenge—emotional intelligence.

    Emotional Intelligence Needed

    As briefly described by Howard Gardner, emotional intelligence “is the level of your ability to understand other people, what motivates them and how to work cooperatively with them.”

    It is important to distinguish between emotional intelligence and ethics.

    In AI, ethics is a whole topic in itself. The fact is, it’s inevitable that a car’s AI software will at some point have to make a decision on behalf of humans: whether to kill a passenger or a pedestrian, for example. Women are purportedly given lower credit ratings than men, and judges assisted by technology may be applying serious biases against black defendants during sentencing, because biased historical data is used to predict outcomes.

    The problem is that we enlist humans, with all of their biases, to teach the machines. Or, instead of humans, we use data that no longer reflects the truth about today. And we have almost no data about the future.

    Here’s another example. When it won its match playing Go, AlphaGo didn’t adapt to the player; it adapted to the moves the player made. Would AlphaGo have won faster if it had also been able to understand the state of mind of its human opponent?

    If its passenger were stressed, would a car choose a calmer or more scenic route? Could it drive slower or faster based on the passenger’s mood at the moment?

    Would an online chatbot use a different language if it understood that the customer was angry? And if so, what language?

    Historical Data is Useless for AI

    AI’s biggest failures stem from its reliance on historical data to predict the future. For example, an intelligent vacuum cleaner robot should be able to optimize itself over time, learning as it goes, maximizing its coverage as it vacuums the house multiple times, keeping an “eye” out for whatever interrupts its path, such as a cat or a toddler. It should be able to figure out that things sometimes move and that some areas get dirtier than others, so it can spend more time in those areas.

    In other words, vacuum robots should adapt to their environment.
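
    As a toy illustration of what “adapting to the environment” could mean, the sketch below keeps a dirt-frequency map and allocates more cleaning passes to the cells that keep turning up dirty. This is an invented model, not how any commercial vacuum actually works.

```python
# Toy sketch of environment adaptation: a dirt-frequency map that biases
# how many passes the robot spends on each floor cell. Invented model.
import numpy as np

GRID = (4, 4)                # the floor, divided into cells
dirt_counts = np.ones(GRID)  # start with a uniform prior

def record_run(dirty_cells):
    """After each run, note which cells were found dirty."""
    for cell in dirty_cells:
        dirt_counts[cell] += 1

def passes_per_cell(total_passes=32):
    """Allocate the cleaning budget in proportion to observed dirt frequency."""
    weights = dirt_counts / dirt_counts.sum()
    return np.round(weights * total_passes).astype(int)

# The kitchen corner (0, 0) keeps getting dirty; over repeated runs the
# robot learns to spend more of its budget there.
for _ in range(10):
    record_run([(0, 0), (0, 1)])
print(passes_per_cell())
```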

    Supervised or reinforcement learning algorithms are, at best, statistical models that simply try to deduce the future from the past. The past does not always reflect the future, nor does it take the need for change into consideration. Basically, using machine learning algorithms with historical data is an attempt to repeat the past in the future, and that can be dangerous, especially given that our views of a century ago, or even ten years ago, are substantially different from today’s. One example, discussed previously, is white defendants being mislabeled as low risk more often than black defendants in the U.S. Sentencing Commission study.
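
    The danger is easy to reproduce. In the contrived sketch below, a model is fit on “historical” data, the relationship between feature and label then reverses, and the model, which can only repeat the past, scores close to zero on the present.

```python
# Contrived illustration of a model "repeating the past" after the world
# has changed. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# The past: positive feature values mean label 1.
X_past = rng.normal(size=(2000, 1))
y_past = (X_past[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X_past, y_past)

# The present: the relationship has reversed.
X_now = rng.normal(size=(2000, 1))
y_now = (X_now[:, 0] < 0).astype(int)

print("accuracy on the past:   ", model.score(X_past, y_past))  # ~1.0
print("accuracy on the present:", model.score(X_now, y_now))    # ~0.0
```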

    Even unsupervised learning is limited in its ability to overcome built-in biases, since it relies on similarity of properties to determine the correct result. But those properties still need to be defined by humans. If you don’t believe me, watch this amazingly funny YouTube video of someone using Google’s CAPTCHA and their interaction in defining what a car or a sign is.
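
    Even a “hands-off” method like k-means only groups items by whatever properties a human chose to measure. In the sketch below (the items and features are invented), the same four items cluster one way when described by size and another way when described by color.

```python
# Unsupervised learning still inherits human choices: the clusters you get
# depend entirely on which properties you chose to encode. Invented data.
import numpy as np
from sklearn.cluster import KMeans

# Four items described two ways: column 0 = size, column 1 = color code.
items = np.array([
    [1.0, 0.0],   # small, red
    [1.1, 9.0],   # small, blue
    [9.0, 0.2],   # large, red
    [9.2, 9.1],   # large, blue
])

by_size = KMeans(n_clusters=2, n_init=10).fit_predict(items[:, [0]])
by_color = KMeans(n_clusters=2, n_init=10).fit_predict(items[:, [1]])

print("grouped by size: ", by_size)   # smalls together, larges together
print("grouped by color:", by_color)  # reds together, blues together
```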

    The Future of AI is Another AI

    One interesting concept involves having the ability to create an overseer for all other AI projects. Imagine that, in your smart home, you have a vacuum robot, a robot lawnmower, a robot cooking machine, a smart fridge (interesting that fridges are “smart,” but a cooker is a “robot”). In fact, all of your lights, your heating, your security system, and your locks are “smart” or “robot” or “intelligent.”

    Can we create a machine or develop custom software that can learn from all of those algorithms? An average person can mow the lawn, vacuum, clean, cook, etc., so it is feasible that we can teach a piece of software to do those things.
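
    A crude sketch of what such an overseer might look like in software: a coordinator that knows nothing about mowing or vacuuming itself, but dispatches each household task to the specialized agent that can handle it. All of the names and the interface here are hypothetical.

```python
# Hypothetical sketch of an "overseer" that delegates household tasks to
# specialized, single-purpose agents. Every name here is invented.
from typing import Callable, Dict

class Overseer:
    def __init__(self):
        self.agents: Dict[str, Callable[[], str]] = {}

    def register(self, task: str, agent: Callable[[], str]) -> None:
        """Each smart device advertises the one task it can perform."""
        self.agents[task] = agent

    def handle(self, task: str) -> str:
        agent = self.agents.get(task)
        if agent is None:
            # The honest answer for today's AI: no single system covers
            # the gaps between the specialists.
            return f"no agent can do '{task}'"
        return agent()

home = Overseer()
home.register("mow", lambda: "lawnmower robot: lawn mowed")
home.register("vacuum", lambda: "vacuum robot: floor vacuumed")
home.register("cook", lambda: "robot cooker: dinner cooked")

for task in ["vacuum", "cook", "answer the door"]:
    print(home.handle(task))
```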

    What is missing, however, is mobility. Even though the robot cooker can cook and, with some additional intelligence, use the internet to order the ingredients it needs for dinner, can it answer the door to receive the groceries, prepare the ingredients (peel, thaw, unpack, etc.), and place them into itself?

    A vacuum robot can vacuum well, but, unfortunately, it still can’t clean itself and empty its own dust bin.

    What is needed here?

    Make it Alive! More Robotics are Needed

    We need more robotics: more arms, pulleys, and conveyors; more small and big things that turn, pull, push, and move. Is AI just about robotics? No, of course not. But there is little point in building an artificially intelligent system that does nothing more than put out words or numbers. That would be a single-purpose machine, and it hardly represents intelligence.

    Also, when I talk about robotics, I include devices such as cameras, pressure sensors, temperature sensors, microphones, and many other kinds of devices. These need to be improved to gather more data samples, so that the machine has a larger view of the data to feed its learning cycles.

    The Beginning of Self-programming

    One more step that is interesting to explore is the ability to teach a machine to program itself. Yes, we have heard that algorithms such as AlphaGo can program themselves, but the objective is to learn to play the game better, not to create something completely different. Can it create a game that is better than Go? Or create a new Space Invaders? Or a CRM system?

    Things could get very interesting if we could tell a machine that we want to play a game, and the machine created a game for us to play. Ring any bells for those born pre-1980?

    Summary

    We are a long way from creating real Artificial Intelligence. We can’t even figure out how to ensure that only humans can log in to a website. How will we be able to teach a machine what a human is if we can’t recognize one ourselves?

    At best, we are on the cusp of developing very smart – not intelligent – machines: machines with the most sophisticated algorithms, machines that are continuously improving, machines that can drive us around or vacuum our house in a smart way, but not intelligently.

    But that is all they are: machines. More specifically, they are machines that initially run software that humans have designed, coded, and given specific rules. We are as far from Artificial Intelligence as we were 50 years ago, but we are producing more algorithms and smarter machines, faster than ever before. That in itself will help us achieve the ultimate goal of a truly intelligent machine.