I was inspired by this article as well as my experience before starting my MBA. Several years ago, I encountered the theme of “learn, unlearn, relearn” and made an effort to incorporate it into my work. Now, as a graduate student, I try to incorporate it into my education as I encounter new ideas that require me to unlearn old ways of thinking.
The first part, “learn,” is straightforward – we absorb information or learn processes. The third part, “relearn,” is also straightforward – sometimes we need to relearn a skill or a body of knowledge. Take riding a bicycle after many years or picking up a language again; it may take some time, but you can relearn and even strengthen the skill or knowledge.
The last part, “unlearn,” is complicated. Unlearning can refer to unlearning a fact, a way of thinking, or a way of acting. A simple example: growing up, we learned that Pluto was a planet, but it is no longer classified as one. We had to unlearn that fact and replace it with new knowledge. Or think of unlearning the shortcuts and tools of an old operating system when an update drops. A more complex example is unlearning certain work habits when transitioning to working from home. Unlearning can free up space and allow us to gain new insights and approach a challenge differently.
With digital disruption all around us, organizations need to unlearn old systems and approaches to make way for new processes and tactics. This HBR article goes into detail about how unlearning can help organizations adapt and grow in a rapidly changing environment. Our brains cannot keep adding information indefinitely; we need to unlearn certain processes to make room for new and improved ones.
What makes unlearning complex is not fully knowing how deeply a way of acting or thinking affects other habits or perspectives. How, then, do we know what to target in order to unlearn something? And how can we ever be sure we have fully unlearned a way of thinking or acting?
A more interesting application of “learn, unlearn, relearn” emerges with machine learning systems. An integral part of the machine learning process is learning from a set of evaluations: as the system examines more and more data and is repeatedly evaluated, it improves. And yet data scientists may not fully comprehend how the resulting model works. As algorithms grow more complex, it becomes harder to explain accurately how a machine learning system accomplishes its task. After the system has learned and relearned, it may be impossible to identify a specific subset of the data and trace how it directly shaped the system’s behavior.
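That entanglement can be seen even in a toy sketch (all data and numbers here are hypothetical, not from any real system): a one-weight model fit by gradient descent. Each pass makes a prediction, evaluates the error, and folds the correction into the single weight – after which no individual example’s contribution can be pointed to.

```python
# Toy learn/evaluate loop: fit y ~ w*x with gradient descent.
# Hypothetical (x, y) pairs, roughly following y = 2x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

w = 0.0                        # the model's single parameter
for epoch in range(200):       # learn: repeat over the data
    for x, y in data:
        error = w * x - y      # evaluate: compare prediction to target
        w -= 0.01 * error * x  # update: fold this example into w

print(round(w, 2))             # ends up close to 2.0
```

Every example nudged `w`, but the final number carries no record of which example nudged it where – which is exactly why tracing (or removing) one data point’s influence afterwards is so hard.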
This leads me to the first question: Can a machine learning system truly unlearn?
If data scientists cannot understand how these complex machine learning systems function, how can they know whether a system has truly unlearned something? They would have to define clearly what they want the system to unlearn and then be able to assess whether it has done so. It would be like asking students to unlearn a strategy for solving a math equation without being able to observe any of their future attempts. Take this article that discusses AI and machine learning in healthcare. A machine learning system trained on radiology images to assist with X-rays and mammograms was, in the process, able to accurately identify the race of patients, even though that was not its purpose. The data scientists could not determine which factors the program used to do this. Some data scientists are working on ways to allow machine learning programs to unlearn; currently, it may require building a new system from scratch.
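To make the “from scratch” point concrete, here is a toy sketch (hypothetical data, not a real medical model) of the only guaranteed form of unlearning available today: retraining on everything except the record to be forgotten, then comparing the two models to see the trace that record left.

```python
def train(data, epochs=200, lr=0.01):
    """Fit y ~ w*x by gradient descent; return the learned weight."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

# Hypothetical records; the last one is the record someone asks us to forget.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.8), (2.5, 9.0)]

w_all = train(data)            # model trained on everything
w_forgot = train(data[:-1])    # "unlearn" by retraining without the record

# The forgotten record's influence shows up as a shifted weight:
print(abs(w_all - w_forgot) > 0.05)  # True
```

Retraining works, but it means paying the full training cost for every deletion request – the time-and-money problem that motivates research into cheaper unlearning methods.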
Which brings me to the second question: Why would you want a machine learning system to unlearn?
First, new regulations and laws may give individuals more power over their data privacy. If these regulations let me compel an organization to stop using specific personal data, the organization can delete the raw records. But has the machine learning system retained some trace of my data? Is my data now an integral part of the system? Am I okay with that?
Second, enabling machine learning systems to unlearn can make them more adaptable and better aligned with ethical and moral standards. As these systems become more complex and expand further into our everyday lives, data scientists may discover inherent biases within them. Instead of starting from scratch, which takes time and money, the ability to unlearn certain patterns could allow a machine learning system to change while continuing to function.
While the applications of machine unlearning may not be fully realized yet, it may help engineers better understand their systems and help organizations adapt those systems to meet changing regulations and needs. I believe organizations need more tools to refine machine learning systems, and the process of unlearning can be a critical tool in that toolbox.
The complexity and challenges of unlearning for humans also apply to machine learning. Maybe data scientists need to unlearn certain ways of thinking about machine learning to address this challenge?
(I also enjoyed watching this video explaining how machines learn – https://youtu.be/R9OHn5ZF4Uo)