Lethal AI robots could become reality sooner than we thought

*Note: This is not my final reflection blog post, but you can still read & comment!

AI was one of the biggest topics we discussed in class, and there has been constant debate over its future pros and cons. Inventions ranging from autonomous vehicles to facial recognition are becoming possible with AI, and many are already here; they are just not yet readily available to everyone. Some are optimistic about the efficiency and economic gains, while others (the extremists) are pessimistic to the point of fearing AI domination of the human race. Whether one has a positive or negative outlook on AI, it has the potential to be used as a deadly weapon, which I will discuss today.


Earlier this month, KAIST, the top science and technology university in South Korea, drew serious concern from AI experts around the globe for its collaboration with a leading South Korean manufacturer of cluster munitions (bombs that scatter explosive bomblets). The two were partnering to open an AI weapons lab, which shocked many AI experts and led them to boycott KAIST over its AI weapons development until it abandoned the partnership. Within days, the president of KAIST, Shin Sung-chul, responded and assured the experts that the institution is well aware of the ethical issues and has no intention of developing a lethal weapon lacking meaningful human control. The boycott was called off once KAIST gave up its partnership with the cluster munitions manufacturer, Hanwha Systems.

Although not fully autonomous, the robot shown here can walk without direct physical human control and can mimic its user based on sensor input. This footage is from early April 2018.

Now, there are several reasons why this partnership is problematic. First, the issue was not simply that AI was being applied in a military setting. Toby Walsh, the boycott's organizer, put it this way:

“No one, for instance, should risk a life or limb clearing a minefield – this is a perfect job for a robot. But we should not, however, hand over the decision of who lives or who dies to a machine – this crosses an ethical red-line and will result in new weapons of mass destruction.”

Walsh and many other AI experts who supported the boycott want to use AI to minimize harm, not to create autonomous machines that find and kill targets on their own. I agree with this sentiment wholeheartedly, as such machines would only spark another arms race like the one we saw after WWII during the Cold War. However impressive a technology is, if it will be used destructively, it is better off not existing. I can't imagine such machines in the hands of terrorists…


Second, KAIST intended to develop machines that are fully autonomous, i.e., ones that need no human control. When we are already witnessing the power of machine learning and its ability to process data and perform tasks it was never explicitly instructed to do, no one knows how these machines will behave once fully developed. In fact, this is the scariest part: these machines could turn against us at any time. Nuclear bombs, at least, can be “fully” controlled; launching them requires deliberate orders and button presses. AI machines? Not so much.


Lastly, continuing this collaboration could threaten North Korea and render recent peace efforts futile. This is more a political risk than a risk of the AI technology itself, but it is a significant one: it is not a clever idea to project aggression when the leaders of North Korea and the U.S. both have “unpredictable” patterns of behavior that could trigger a serious war. KAIST may have thought this was a good way to protect South Korea and add value to its military in a subtle, academic way, but when the work has so much potential to become a lethal weapon, people may see it differently.

And one more interesting fact: Hanwha Systems, the weapons manufacturer KAIST partnered with, is blacklisted by more than 120 countries and the UN for making the cluster munitions I mentioned earlier. A convention on cluster munitions was adopted back in 2008 once the impact of those bombs was recognized, yet this manufacturer has continued to develop them. When a weapons manufacturer with that record collaborates with one of the brightest institutions in science and technology, I doubt the outcome of that partnership will be benign.

A one-minute lesson on cluster munitions and their deadliness

After reviewing this issue, I was struck that we live in a world where we “have to” worry about lethal AI machines like those in movies such as Terminator, and the fact that such attempts were made in my very own homeland scares me. A week after the boycott, 123 UN member countries gathered in Geneva to discuss the risks and challenges that could arise from AI in military settings, specifically as “killer robots.” I am glad that leading experts in this field care more about AI's misuse than its effectiveness, because the ethical line, once crossed, is hard to restore. When everyone starts building these lethal weapons, the question becomes “who will build them better,” not “how can we prevent their misuse.” This post may add another layer to the pessimistic side of AI technology, but if AI is used correctly, with constant checks and balances, it can serve us well even in the military realm.

Thanks for reading, and share your thoughts!

  1. RayCaglianone

    This article reminds me a lot of the debate over the use of drones in warfare: technology making it possible for unmanned devices to wage war on a precise, or grand, scale. But at the end of the day there is still a human on the other end operating a drone, whereas the possibilities suggested by AI are a lot more ominous. The moral questions of the AI era are already provocative enough; add warfare into the mix and you're due for comparisons to apocalyptic science fiction like Terminator. I do wonder how much different nations, including the U.S., have under wraps regarding AI. We can only hope that the people in charge of this technology ensure that cooler heads prevail. People living through the Cuban Missile Crisis and the Cold War probably imagined that nothing could be more threatening than nuclear weapons, but with AI getting better and better, those checks and balances will be more important than ever.

  2. NeroC1337

    My question with this topic is: if South Korea intended to bring this AI into warfare, why release the information before the product is fully finished? Wouldn't the normal logic be: BOOM, look at this thing we've got — finished, ready to go, and lethal? That doubt keeps me wondering whether the intention was really to build AI as a lethal weapon, or just to develop a robotic tool for the battlefield, like clearing minefields as you mentioned. Will it land in the hands of terrorists? I hope not. But it is definitely up to the government and the developers to ensure that.

  3. Tully Horne

    First, I think it is worth mentioning the recent good news about peace talks between North and South Korea facilitated by the United States. With that being said, I think it is ironic that, although war involves killing people, we have a hard time handing over ethical dilemmas to robots. The drones used by the US in war are a good example of AI used in war that people have had debates about. @raycaglianone makes a good point about checks and balances. If one single country is allowed to get out of control with their use of AI for military purposes, things may get out of hand. A greater hope would be for no war, but a more realistic one is to leave the fighting to people as bringing AI into the mix could have exponentially more dangerous consequences. Good post!

  4. DingnanZhou

    Nice post! I remember reading a Twitter discussion on lethal AI weapons. That should definitely be categorized as a dark side of technology. I agree with @raycaglianone's comment on checks and balances. Since these weapons are lethal and likely to take lives, restrictions have to be strict. Everyone wants world peace. If that ideal cannot be achieved, let's hope these weapons are only put to use for justice.

  5. realjakejordon

    Jo, great article! A lot of my thoughts are covered by previous comments, but this one makes me feel good about AI. I know you read my wrap-up, so you know all this tech scares me a little. I'm worried that capitalism and the race to be first don't always make way for those checks and balances, but at least when it comes to lethal weapons, people recognize a need for caution! It's good that that “red-line” exists, and hopefully it can extend beyond just weapons in the world of AI!
