Paracelsus, the “father of toxicology,” said, “all things are poison and nothing is without poison; only the dose makes a thing not a poison.” The adage has been adopted as a caution against overindulgence in its many forms: from television to sugar to caffeine. Perhaps most relevant to today’s society, experts such as health economist Anusuya Chatterjee are studying how technology fits into this axiom.
Chatterjee remarks, “…we should rather say humans are using technology in such a way that it’s affecting their health. Humans are making the choice to be so obsessed with technology.” But beyond its metaphorical applications, the axiom’s biological basis should not be overlooked: central to Paracelsus’ original work was the belief that toxic substances, like diseases, target and affect specific areas of the body more than others.
If we combine these two applications of Paracelsus’ theory, the metaphorical and the biological, we arrive at a more holistic picture: varying doses of anything affect specific parts of an individual differently, and, as Chatterjee suggests, particularly in the case of technology, it falls to that individual to make the trade-offs and judgment calls in self-administering those doses so as to minimize self-harm (or self-poisoning, if you will).
For example, there are the evolutionary, physiological effects of technology: lengthened thumbs and necks from smartphones, obesity from sedentary lifestyles, disrupted sleep patterns, and myopia, to name a few. All of these affect individuals differently, depending on our usage of technology, that is, how and how much we use it, and usage varies according to culture, age, and socioeconomic status (although the “how much” is becoming ever more homogeneous as internet access and basic technology become more widespread).
Beyond these externally detectable changes, however, of particular interest to me is the very tool that brought us to these problems in the first place — our brains — which I think Paracelsus would agree is technology’s target organ. How is the brain changing with technological evolution? And as Columbia Law Professor Tim Wu puts it, “Will that type of evolution take us in desirable directions, as we usually assume biological evolution does?”
The jury is still out on this one. But what is important to remember in exploring this evolution, Wu argues, is the idea that technological evolution is self-evolution: unlike biological evolution, which is driven by adaptation, this other, now more pervasive type of evolution is driven by wants, by the desire for comfort. Are we getting smarter and stronger by adapting to challenges, or are we merely using technology to bury them?
Consider our shortened attention spans, for example. What is the evolutionary advantage to this? There is none, arguably. However, the societal implications have been remarkable: Disney’s intricate queue strategy, the debated (and often ineffective) use of technology in K-12 classrooms, and even the proposal to shorten Major League Baseball games. Are these changes improving society and human beings, or are they contributing to a spiraling of sorts, unchecked and misguided due to our inability to effectively manage the realized (and potential) power of technology?
This self-evolution has also perpetuated a culture of consumption in which any interaction necessitates nomenclature (“participatory culture”), and while Clay Shirky argues in his book Cognitive Surplus that this revitalized trend toward participation creates value and meaning in otherwise passively consumed media, I’d challenge that assertion. The curation of online identities and their alignment (or misalignment) with real-world, everyday identities (not to mention the accessibility of anonymity) highlights this idea of self-evolution and challenges his claim that real value is being created through online interaction. Ask any Tinder user how much value they’ve gotten from their online connections, for example, or consider the popularity and relatability of memes highlighting “Me On the Internet vs Me in Real Life.”
I would argue that the self-surrogacy hypothesis that Shirky illustrates extends beyond television, in part to these online communities themselves. And, returning to strictly biological effects, significant evidence shows that technology can foster addiction, overstimulate our nervous systems, and impair our memory, all despite (or because of) our participation.
Now, I’m not denouncing the positive power of technology and online social networks: their capacity for good is undeniable, and I believe that they can (and do) create value when used strategically and critically. After all, if we return to Paracelsus, we remember that it’s the dose that makes something poisonous, and in this course I’m hoping to explore that concept further: how do, can, and should we use technology, particularly in business? What are the managerial implications of such technology, and how will this self-evolution continue to shape society, for better and for worse? The advancement of technology, namely social media and digital business, isn’t slowing down anytime soon, yet the managerial and organizational tools critical to technology’s (and society’s) well-being aren’t advancing as rapidly as technologies like AI or the Internet of Things. In this class, I hope to learn what we, as business-minded people, can do to catch up.