Ex Machina was a good movie. In the film (spoiler alert), an artificially intelligent humanoid robot is subjected to the Turing test, which examines whether a machine can exhibit behavior indistinguishable from a human's. To make a long story short, the robot, named Ava, ends up killing her creator and escaping, in theatrical fashion, the isolated compound she had been confined to. Yikes.
People assume that this is the worst-case scenario for artificial intelligence: that we will be rendered powerless by the very machines we developed, which have outsmarted us and realized they can operate of their own accord, creating dystopian societies in which humans live at the mercy of humanoids.
That’s a fun thought, but it’s not really accurate.
In reality, the more pressing ethical issues concern AI taking over jobs, operating transportation and being used for warfare – things that are happening now or under serious consideration. These uses raise many questions about how we should integrate such intelligent systems into our everyday lives and institutions.
One of the most reasonable worries over the proliferation of AI technologies is that they will inevitably make certain occupations obsolete. In a recent interview with Wired, President Obama brought up the dangers of AI eliminating jobs and what that could mean for the economy. While innovation and production will benefit from artificial intelligence, all that efficiency comes with a downside that must be addressed at some point, especially as these transitions from human to machine are happening faster and faster.
There are also questions regarding the weaponization of AI. Military robots can be game-changers, in that they can be used in combat for purposes that would otherwise require ground troops and risky human-led operations. Some argue that autonomous machines may even make decisions more effectively than humans. But robot power needs to be controlled in some capacity, especially in life-or-death situations such as warfare.
Broader concerns emerge over threats to privacy and human dignity. In terms of privacy, AI has the potential to listen in on virtually every human exchange once it learns to understand natural language, compromising the very idea of a private conversation. And there's the question of the extent to which AI should be used in certain jobs. For example, will AI be able to exhibit levels of empathy and compassion comparable to those of an actual person, and necessary for positions in nursing or therapy? Or would it actually be better to have an impartial computer take the place of real-life judges or police officers, who are notorious for bouts of biased judgment?
In late September, some of the top tech companies – namely Google, Amazon, Facebook, IBM and Microsoft – announced a collaborative effort to “study and formulate best practices on AI technologies,” among other things. Called the Partnership on AI to Benefit People and Society, the organization is the industry's first formal attempt to address the concerns that have emerged from AI's unexpectedly rapid rise.
It’s primarily a forum for research and discussion, meant to determine the best methods for governing the field of AI and to explore its intersection with topics such as safety, transparency, privacy, economics and bias. It’s not something that will “stop any rogue uses of AI”; it is intended to be advisory.
The creation of this organization makes sense. Few envisioned that AI would become as advanced as it is today. For example, when AlphaGo, the AI program developed by Google DeepMind, beat 18-time Go world champion Lee Sedol this past March in four out of five matches, the world was stunned. Go is a much more difficult game for computers than chess, and Elon Musk had said prior to the matches that AI was still 10 years away from beating a human Go player. Lee Sedol himself had predicted that he would beat AlphaGo in a “landslide.”
Furthermore, many applications of AI have now entered the hands of everyday consumers – think Amazon Echo and Siri. To most users, these tools are fun, add a measure of convenience and are hardly put to questionable or sinister purposes. But it’s never too early to start thinking about the future and what AI could eventually do in the homes of millions, if not billions, of people.
The initiative is a good step forward, and this was an opportune time to form it. AI is still essentially in its infancy – a concept with much potential but not a lot of practical use. Businesses that have already developed AI in numerous capacities, like the ones forming the Partnership, recognize that to use AI to their advantage, and to have other firms adopt the technology as well, there needs to be some sort of regulation or framework to work within. And by developing it themselves, they can show that they are proactive and committed to making sure the rollout of AI is smooth and worthwhile.
So for now, concerns about killer robots are on the back burner.