The Ethics of AI

Ex Machina was a good movie. In the film (spoiler alert), an artificially intelligent humanoid robot is subjected to a version of the Turing test, which gauges whether a machine can exhibit behavior indistinguishable from that of a human. To make a long story short, the robot, named Ava, ends up killing her creator and, in theatrical fashion, escaping the isolated compound she had been confined to. Yikes.

[Image: Ex Machina]

People assume that this is the worst-case scenario for artificial intelligence: that we will be rendered powerless by machines of our own making, which have outsmarted us and realized that they can act of their own accord, creating dangerous dystopian societies in which humans are at the mercy of humanoids.

That’s a fun thought, but it’s not really accurate.

In reality, the more pressing ethical issues involve AI taking over jobs, operating transportation, and being used for warfare, all things that are currently happening or being seriously considered. These uses raise many questions about how we should go about integrating such intelligent systems into our everyday lives and institutions.

One of the most reasonable worries over the proliferation of AI technologies is that they will inevitably make certain occupations obsolete. In a recent interview with Wired, President Obama brought up the dangers of AI eliminating jobs and what that could mean for the economy. While innovation and production will benefit from artificial intelligence, all that efficiency comes with a downside that must be addressed at some point, especially as these transitions from human to machine labor happen faster and faster.

[Image: President Obama’s guest-edited issue of Wired]

There are also questions regarding the weaponization of AI. Military robots can be game-changers in that they can take on combat roles that would otherwise require ground troops and risky manned operations. Some argue that autonomous machines may even make decisions more effectively than humans. But robot power needs to be controlled in some capacity, especially in life-or-death situations such as warfare.

Broader concerns emerge over threats to privacy and human dignity. In terms of privacy, an AI that learns to understand natural language could listen in on virtually every human exchange, compromising the very idea of a private conversation. There is also the question of how far AI should go in certain jobs. For example, will AI be able to exhibit levels of empathy and compassion comparable to those of an actual person, as nursing or therapy demands? Or would it actually be better to have an impartial computer take the place of real-life judges or police officers, who are notorious for bouts of biased judgment?

In late September, some of the top tech companies, namely Google, Amazon, Facebook, IBM, and Microsoft, announced a collaborative effort to “study and formulate best practices on AI technologies,” among other things. Called the Partnership on AI to Benefit People and Society, this organization is the first formal attempt by the industry to address the concerns that have emerged as a result of AI’s unexpected rise.

[Image: The Partnership on AI site, a brainchild of Google, Facebook, Microsoft, IBM, and Amazon]

It’s primarily a forum for research and discussion to determine the best methods for governing the field of AI and to explore the intersection of AI with issues such as safety, transparency, privacy, economics, and bias. It’s not something that will “stop any rogue uses of AI”; rather, it’s intended to be advisory.

The creation of this organization makes sense. Not many envisioned that AI would become as advanced as it is today. For example, when AlphaGo, the AI program developed by Google DeepMind, beat 18-time Go world champion Lee Sedol this past March, winning four out of five games, the world was stunned. Go is a far more complex game than chess, and Elon Musk had said prior to the match that AI was still 10 years away from achieving victory against a human Go player. Lee Sedol himself had said that he would beat AlphaGo in a “landslide.”

[Image: Lee Sedol struggling against AlphaGo. Poor guy.]
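To put “far more complex” in rough numbers, a standard back-of-envelope comparison raises each game’s average branching factor to the power of a typical game length. The sketch below uses the commonly cited approximations (about 35 legal moves per turn over 80 turns for chess, 250 over 150 for Go); the inputs are estimates, not exact figures.

```python
# Rough game-tree complexity estimate: b^d, where b is the average
# branching factor (legal moves per turn) and d is a typical game
# length. Inputs are commonly cited approximations, not exact values.
import math

games = {
    "chess": {"b": 35, "d": 80},
    "Go":    {"b": 250, "d": 150},
}

for name, g in games.items():
    # Work in log space: log10(b^d) = d * log10(b), avoiding overflow.
    exponent = g["d"] * math.log10(g["b"])
    print(f"{name}: roughly 10^{exponent:.0f} possible games")

# Prints exponents of about 124 for chess and 360 for Go. That gap is
# why brute-force lookahead was hopeless and AlphaGo needed learned
# position evaluation instead.
```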

Furthermore, many applications of AI have now entered the hands of everyday consumers: think Amazon Echo and Siri. To most users, these tools are fun and add a bit of convenience, and they are, for the most part, not put to questionable or sinister purposes. But it’s never too early to start thinking about the future and what AI could potentially do in the homes of millions, if not billions, of people.

The initiative is a good step forward, and this was an opportune time to form it. AI is still essentially in its infancy: a technology with much potential but, so far, not a lot of practical use. Businesses that have already developed AI in numerous capacities, like the ones forming the Partnership, have recognized that in order to use AI to their advantage, and to have other firms adopt this tech as well, there needs to be some sort of regulation or framework to work within. And by developing it themselves, they can show that they are proactive and ready to do what it takes to make sure that the roll-out of AI is smooth and worthwhile.

So for now, concerns about killer robots are on the back burner.

10 comments

  1. Austin Ellis · ·

    Great post! Ex Machina caught my eye as a film I have yet to see, but have heard a lot about. I have been following HBO’s series Westworld, which also leans heavily on the ethics of AI, as well as the robots’ ability to remember and understand injustices against them. I highly recommend it. Anyway, I think that decision making in AI is a crucial concern for future development. Warfare, as you mention, as well as accident scenarios for autonomous vehicles, represent difficulties in AI right now. The idea in general of a non-human intelligence being responsible for human life is a justifiable point of controversy. I am glad organizations like Google are looking into solutions.

  2. Aditya Murali · ·

    Awesome post! AI has always been a cool but scary concept to me, with its capabilities seemingly endless. I am definitely most concerned about losing jobs to AI, and what that will do to the economy as companies do whatever they can to cut costs and implement this kind of technology, creating higher rates of unemployment. But, like Austin mentioned in his comment, AI projects currently on the market, like autonomous cars, are not doing too well, and the technology does have some issues. I firmly believe that while AI makes great sense as a concept, rolling it out into the real world will cause a lot of difficulties and will prove too risky to be fully trusted.

  3. holdthemayo4653 · ·

    Your post instantly caught my eye. My company is always looking to achieve expense savings through process simplification and automation. I constantly fear that many jobs in this country will be lost to AI, with autonomous driving vehicles only being the beginning. You made a great point about jobs that require “soft” skills and how AI will not be able to replace them. I almost worry more about AI making decisions based on the 99% reality. Humans are allowed to make judgment calls and consider alternatives when something doesn’t “feel right”. I wonder how AI will be able to balance the science and the art of decision making.

  4. I had no idea these companies had come together to form this Partnership! I found this post interesting, and you address so many questions that not only these companies, but governments and us as a society, need to start asking about these technologies. From a straight utilitarian standpoint, will these AI machines produce more good for society overall than the loss of certain jobs? Does that even matter? And will new types of jobs be created due to the massive amounts of AI? I was really excited to hear about the Partnership, but I think as AI progresses and affects our society even more, these questions will go beyond big corporations, and governmental involvement will be required.

  5. Tyler O'Neill · ·

    Excellent post (and movie)! I love considering the possibilities of what AI will be in the future and how it will affect humans’ lives. Personally, I see AI as an opportunity for humans to further pursue our unique desires. In a world where AI-controlled robots take over all of our jobs, there is no longer a need to worry about having enough goods and services, and we can then focus our time and energy on the tasks and jobs that truly interest us the most (not necessarily the most profitable ones). It is amazing the strides that have been made with AI in recent years, and I’m incredibly excited to see what will come next!

  6. magicjohnshin1 · ·

    Hey Katie, thanks for sharing! I really loved your blog post because it delves into the important questions. Is this really morally acceptable? It really is scary to think how much AI can influence our lives in the future, and even now. The controversy over automating and integrating AI into weaponry and warfare has always been a hot topic, and there is never a right answer. I think that maybe some parts of it can be transformed by AI while others should not be. I wonder… Cheers!

  7. Great post! I think you open up a lot of controversial topics with your article that will definitely be important in the future. The idea of AI taking jobs is a problem, but then you get into the discussion of what’s more important: shareholder value or giving people a living. You also get into the debate about whether or not it’s a company’s duty to give “charity” and employ regular humans when AI would be much more efficient. The Partnership on AI is definitely the most interesting part of the article for me, since I had no knowledge of it. Do you think that the government will or has to step into this? As technology companies, many might speculate that the interests of the “greater good” do not necessarily align with the companies’. However, the government traditionally lags behind in terms of technology, so intervening might do more harm than good. This post has so many great tidbits that could turn into full-fledged problems in the future, and I wouldn’t be surprised to see this talked about in the presidential debates in 20 years.

  8. I think this was a great post. It did a great job of highlighting some key concerns people have when it comes to AI. I think the largest cause of this fear is mainly that people don’t understand what artificial intelligence is. Furthermore, you have very credible sources like Stephen Hawking and Elon Musk who have denounced AI. On the other hand, you have other credible sources like Zuckerberg who don’t feel so negatively about it. I think your point about AI being used today is great because it reminds people that this technology, at the end of the day, is just an automated computer. Thank you so much for the insight.

  9. Fun topic! I’m a big fan of movies like iRobot and Terminator. I’m just waiting for the day that Skynet goes live! In the meantime, before robots kill us all, they might be taking our jobs. This is an interesting predicament. If you run a company and have the ability to significantly reduce costs with the use of machines or AI, do you replace your human capital with those machines? It is a question of to whom management owes a fiduciary duty. Certainly in the case of a publicly traded company, management must make every effort to increase profits for shareholders. That is where their duty lies. In private companies, the question is not so clear. Management is able to make decisions to benefit stakeholders rather than shareholders. It remains unclear what is ethical and what is not. I just hope machines don’t take my job.

  10. AI is one of the technologies most poised to take off in the coming years. It will be interesting to see what it brings (and what it doesn’t).
