*Note: This is not my final reflection blog post, but you can still read and comment!
AI was one of the biggest topics we discussed in class, and there has been a constant debate over its pros and cons. Inventions ranging from autonomous vehicles to facial recognition are made possible by AI, and many are already here; they are just not yet readily available to everyone. Some are optimistic about the efficiency and economic boon, while others (at the extreme) are pessimistic about the possibility of AI dominating the human race. Whether one has a positive or negative outlook on AI, it has the potential to be used as a deadly weapon, which I will discuss today.
Earlier this month, KAIST, the top science and technology university in South Korea, drew serious concerns from AI experts around the globe over its collaboration with a leading South Korean manufacturer of cluster munitions (bombs that scatter smaller explosive bomblets). The two were partnering to open an AI weapons lab, which shocked many AI experts and led them to boycott KAIST until it abandoned the partnership. Within days, the President of KAIST, Shin Sung-Chul, responded and assured the experts that the institution is well aware of the ethical issues and has no intention of developing lethal weapons lacking meaningful human control. The boycott was called off once KAIST forwent its partnership with the cluster munitions manufacturer, Hanhwa Systems.
[Video, early April 2018: a robot that, although not fully autonomous, can walk without direct human control and mimic its user based on a sensor.]
Now, there are several reasons why this collaboration was problematic. First, the issue was not simply that AI was being applied in a military setting. Toby Walsh, the boycott's organizer, put it this way:
“No one, for instance, should risk a life or limb clearing a minefield – this is a perfect job for a robot. But we should not, however, hand over the decision of who lives or who dies to a machine – this crosses an ethical red-line and will result in new weapons of mass destruction.”
Walsh and the many other AI experts who supported the boycott want to use AI to minimize harm, not to create autonomous machines that find and kill targets on their own. I agree with this sentiment wholeheartedly, as such weapons would only trigger another arms race like the one we saw in the post-WWII Cold War period. However impressive a technology is, if it will be used this way, it is better off not existing. I can't imagine such machines in the hands of terrorists…
Second, KAIST intended to develop machines that are fully autonomous, i.e. requiring no human control. When we are already witnessing the power of machine learning and its ability to process data and perform tasks it was never explicitly instructed to do, no one knows how these machines will behave once fully developed. This is the scariest part, because it means they could turn on us at any time. Nuclear bombs, at least, can be launched only through a controlled chain of buttons and orders. Fully autonomous AI weapons? Not so much.
Lastly, continuing this collaboration could threaten North Korea and render recent efforts toward peace futile. This is more a political risk than a risk of the AI technology itself, but it is a significant one: it is not wise to project aggression when the leaders of North Korea and the U.S. have "unexpected" behavior patterns that could spark a serious war. KAIST may have thought this was a good way to protect South Korea and bolster its military in a subtle, academic way, but when the work has so much potential to produce lethal weapons, others may see it differently.
And one more interesting fact: Hanhwa Systems, the weapons manufacturer KAIST partnered with, has been shunned by more than 120 countries and the UN for making the cluster munitions I mentioned earlier. A convention on cluster munitions was adopted back in 2008, after the world recognized the impact of these bombs, yet this manufacturer has kept developing them. When a weapons manufacturer that flouts international norms collaborates with one of the brightest institutions in science and technology, I doubt the outcome of that partnership will be a positive one.
[Video: a one-minute lesson on cluster munitions and their deadliness.]
After reviewing this issue, I was struck that we now live in a world where we "have to" worry about lethal AI machines of the kind we see in movies like Terminator, and the fact that such attempts were made in my very own homeland scares me. A week after the boycott, 123 UN member countries gathered in Geneva to discuss the risks and challenges that can arise from AI in military settings, specifically "killer robots." I am glad that leading experts in this field care far more about AI's misuse than its effectiveness, because the ethical ground is hard to recover once it is compromised. Once everyone starts building these lethal weapons, the question becomes "who can build them better," not "how can we prevent their misuse." This post may add another layer to the pessimistic side of AI technology, but if it is used correctly, with constant checks and balances, it can serve us well even in the military realm.
Thanks for reading, and share your thoughts!