The TSA Is Learning

Since we’ve started this class, I’ve been more aware of digital trends and applications, and I’ve noticed something interesting. If there is a problem in the world, chances are people are trying to solve it using some sort of digital solution.

Don’t want to get up to turn the lights out? The Clapper!

Can’t figure out how to vacuum your carpet? Roomba!

This brought me to think about some other problems in our lives (a little more seriously than a vacuum). I remember reading an article a few years ago about a security review the TSA conducted. Undercover federal agents tested airport security by running 70 tests to see if they could avoid detection. They carried weapons, contraband, and mock bombs while going through pat-downs and scans. Guess how many times they were successful. 67. That's right, 95% of the time, airport security failed to detect a threat.

So this got me thinking. With a massive budget ($40 billion for Homeland Security in total), this seems like a prime opportunity for a large investment in digital solutions. The article did say the failures were due to both human and technological error, but not to what extent or what the mix was. An advancement in technology will obviously help the error rate, but I think human error is the telling part of this story.

A great example of the impact of human error is plane crashes. Boeing estimates that 80% of plane crashes are caused by human error. As a former professional pilot, I can tell you these errors fall into two categories: tactical and operational. Tactical human errors point to poor decisions caused by fatigue or lack of experience, while operational human errors involve poor flight instruction or training.

The main points of human error: "poor decisions," "lack of experience," "poor instruction," "poor training." Sounds like this could apply to the TSA story above, doesn't it?

How do you fix human error? Easy, actually: take the human out of it. As it turns out, Homeland Security is already working on it. In June of this year, the department announced that it is working with Google to build computer algorithms that can automatically detect and identify concealed items in images captured by body scanners. It's a contest, actually, with a total of $1.5 million put up as prize money. Crowdsourcing, you say? Sounds familiar.

Neural networks seem to be the best solution for this problem. The basic idea of a neural network is that it can learn to do tasks without task-specific programming. If you feed images into a neural network that are labeled "gun" or "not gun," eventually the network will be able to identify pictures of a gun on its own. For this contest, Homeland Security has provided 1,000 three-dimensional body scan images for the data scientists to train their algorithms on. Down the road, if this goes live, it will have plenty to learn from: over 2 million people are scanned at checkpoints every day in the US. That's a lot of learning.
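
To make the "gun" / "not gun" idea concrete, here is a rough sketch of what a single training step looks like, assuming Python and PyTorch. The tiny network, the random stand-in "scans," and the hyperparameters are all illustrative assumptions on my part, not anything published by Homeland Security or the contest organizers.

```python
# A minimal sketch of "gun" / "not gun" classification, assuming PyTorch.
# The random tensors below are stand-ins for real scan images; the
# architecture and hyperparameters are illustrative, not the TSA's.
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1-channel (grayscale) scan
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)      # two classes: gun / not gun

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Stand-in "scans": a batch of 64x64 grayscale images with random labels.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 2, (32,))

model = TinyScanClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step: the network nudges its weights to reduce classification
# error on the labeled batch. Repeat this over many labeled images and it
# learns the task without any task-specific rules being programmed in.
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```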

The TSA has also commissioned a team from Duke University to work on this problem. They're using a neural network structure as well. An interesting point that they repeat several times is that they're not trying to totally replace humans in the security process, but rather to augment them and reduce their workload. Going back to tactical human errors, I know from experience that boredom and task saturation are huge causal factors when you find yourself in a bad situation.

The article describes the boredom portion: “A human today has to focus on the whole image—most of which is not a threat, but the human has to look at everything,” Carin said. “It can get quite monotonous to see the same basic luggage images one after the other, and that makes it very difficult for a human to focus and pay attention all the time for that rare event where a threat is present.”

On the flip side, I imagine there is an immense amount of pressure to get people through security quickly, especially at peak times. This would be the task saturation part. When you move quickly and have multiple items to look for, you’re bound to miss something.

So, baby steps. I think the best idea is to augment current TSA employees, and eventually maybe we'll get to the point of total automation.

In closing, I think about digital maturity. While Homeland Security is not a "company" per se, they need to have a digital culture if they want to improve operations. I would describe them as "Early," moving into "Developing," digital maturity. We're seeing some enterprise-wide efforts, but I'm not so sure that their organization is very cross-functional or that their internal adoption of digital technology is very high. On the other hand, they are making investments, and talking about making more. I think the roadblock here is that they have to contract out most or all of their projects, which makes for a very top-heavy decision model.

We’ll see. The government is large and bureaucratic. Let’s hope they’re nimble enough to stay ahead of the threats.

8 comments

  1. Human error is definitely a large aspect of TSA malfunctions, and I think neural networks offer a probable solution. When I think of my personal experience with the TSA, the agents are usually not looking at the computers as carry-on bags are scanned, and pay more attention to taking off your boots and jackets! I think it's the right move to start to hand over the reins to technology and digital equipment as threats have become more serious over the years.

  2. Great Blog. I really like how you tied in Digital Maturity as a goal of TSA. Obviously, they should be striving to leverage current technology to keep us safe, and usually it’s people that get in the way. Hopefully, in the future, TSA leadership can recognize failure points and attempt to apply the best tech to help keep our airlines safe.

  3. I actually remember when this report came out/was leaked very well. This blog connected so many ideas that we have been discussing in class, from crowdsourcing to the digital maturity of the TSA. Brilliant angle to look at this from. It also makes me feel better, just based on my own experiences with the TSA. Now that I've started my MBA, when I go through the line I think all about operations and creating the optimal line, and about the pressure that long lines and cranky "customers" put on line operators (in this case the TSA). It seems like this could be a problem that could be attacked from multiple angles, both decreasing human error in the scanning system and improving the line experience through optimization (though I think the recent TSA PreCheck is an attempt at this).

  4. Fascinating! While I’ve definitely seen pushback in general when talking about ways to automate stuff like this and replace jobs, the major safety improvements have to take precedence. It’s terrifying to think that they’re taking away my poorly “concealed” bottle of water I forgot to leave at home, but missing the actual threats.

  5. Great post! I've always wondered why TSA separates security into technology-controlled metal detectors and human-controlled bag scanning. It seems inevitable that a machine should be able to do both, since looking through a bag for well-known objects should be a feasible task considering we can have cars drive themselves. It'll be interesting to see how this advent of machine learning may be affected if people start to change the shape of items like guns, knives, etc. This reminds me of commercials for a flask that is shaped like a sunscreen tube: why would a sunscreen tube ever raise suspicions for alcohol? It's all about our existing biases and how they affect our training of the models. If somehow sharp objects or mock bombs can also change into different forms, all of the machine learning training will need to be redone, too.

  6. I thought you made some really interesting points. However, I remember when the more advanced body scanners came out and a lot of people felt these were an invasion of privacy. I would be curious if there would be a similar reaction to a technology taking the place of human screeners. Even though there wouldn't be the human element, there would likely be some sort of data (potentially even tracking) on each traveler. I do think it's amazing how they missed that many threats, yet they always manage to catch when I forget to empty my water bottle!

  7. Nice post. Although I’d be curious to see if any of the undergrads know what “The Clapper” is (let alone the “What about Bob?” reference). I do like the idea of using AI for security. Seems like one of those things that’s a no-brainer (pun intended).

  8. I remember reading that TSA report when it first came out and being in complete shock. Their response seems to me like they’re reading our course’s playbook, between leveraging AI to improve security and crowd-sourcing the technology to carry it out effectively. I hope a follow-up report is posted to see how their initiatives are panning out. Very interesting topic and great tie-ins to our class!
