Watch Me Quit

Marina Shifrin dancing in her quit video

In an era when U.S. unemployment figures typically range between five and eight percent (and were as high as 10 percent in 2009), it might seem oddly inappropriate to celebrate the mundane act of quitting a job, especially when the event is made public at significant risk to one’s reputation.

Yet, this is exactly what people are doing with their final hours at work. People are using company property and time to choreograph elaborate media happenings, recording them on video and posting them online for a global audience. Although we have no idea who these people are, what they do for a living or why they’re leaving, we are captivated by the spectacle.

Girl holding sign saying "I quit"

It’s not difficult to understand why “quit videos” are so magnetic. Nearly all of us who work for a living have been compelled, at some time or another, to seethe silently while enduring an employment scenario that left us wanting more. Perhaps it’s a stupid boss that sets us on edge, or a noisy cubicle placed near the break room. Or it could be the vapid nature of most work environments, a landscape overrun with micromanaging leadership, indecipherable buzzword jargon, and organizations that incessantly bang us over the head about their “commitment to culture.”

Most of us follow the rules of cordial professionalism when leaving our place of employment: we provide two weeks’ notice, complete the required paperwork, and hand off any projects currently in progress. We might even have a small party or a round of drinks with our coworkers. We promise to keep in touch, say nice things and move on. It’s all very benign.

Increasingly, though, departing employees are electing to tender their resignations in the form of elaborately staged events. The sheer orchestration required to pull off these stunts is impressive: there’s the coffee shop barista who hired a barbershop quartet, the Renaissance Hotel employee accompanied by a marching band, an insurance salesman dressed as a banana, and the GoDaddy engineer who announced her new puppeteering career via Super Bowl commercial.

And sometimes, the employers fight back. When Marina Shifrin quit her job by dancing to a song by Kanye West in front of 18 million viewers, her viral legacy was cemented (and inspired a number of copycats). However, her former company countered the attack with a video of its own, taking advantage of the opportunity to elevate its name in front of a newly expanded audience. Today, Ms. Shifrin writes for Glamour and tweets about orange juice, but one gets the sense that her fleeting celebrity will sustain the semblance of a career (for the time being, at least).

From the perspective of the departing employee, it’s difficult to pinpoint the intended benefit of quit videos. It could be the chance to become an Internet celebrity that’s so intoxicating, or it could be something more deeply psychological. Quitting a job is an inherently solitary exercise; everyone else is a part of something, and we’ve made a decision to extricate ourselves from it. Perhaps the mass exposure of fame provides a sense of immediacy, smoothing the transition from one social circle to another.

Factory workers sitting at tables doing menial labor

Still, one notable aspect is that most quit videos begin with a sincere explanation of what we’re about to see. Sometimes the tone is almost apologetic, as if the creators are aware that what they’re doing is borderline inappropriate but Darn it, I was mistreated and my story must be heard. Taken in this context, quit videos operate as a rejection of the flawed organizational dynamics found in many corporate ecosystems.

Smart, creative employees long ago stopped thinking of themselves merely as human capital. Perhaps this form of social media offers a greater good yet to be discovered: the actualization of the self, and a public opportunity to reclaim one’s dignity in the face of dehumanization. Whatever the rationale, Frederick Winslow Taylor is surely spinning in his grave.

Reading vs. Consuming

One can’t help but laugh when going through old magazines, especially those “best of” issues that boldly predict the most innovative trends to emerge in the near or distant future.

Cover of Smithsonian Magazine

In the August 2010 issue of Smithsonian, for example, the magazine celebrated its 40th anniversary by listing “40 Things You Need to Know About the Next 40 Years.” The list includes automobiles that run on salt water, organs and body parts made to order, and world peace finally manifesting as the result of our global population reaching old age en masse.

Some of these predicted events may, indeed, turn out to be prescient. In fact, the last item on the list is already taking place: new habits in personal literacy. The proliferation of mobile technology, with its smooth surfaces requiring swipes and taps to operate, has created a modality in which reading has become more dexterous. Meanwhile, smaller screens minimize the amount of content that can be consumed in one session.

Reading and writing, like all activities, are subject to dynamic influences that shape how written material is created, distributed and received. In 15th-century Europe, only 1 in 20 males could read, and writing was an even rarer skill. The advent of the printing press enabled content to be mass-produced in greater volumes, allowing less “scholarly” works to see publication (the first romance novel was published in 1740). When it comes to formulating a public aesthetic, technology is not neutral.

Words today have migrated from bound paper pulp to 4.5 billion tiny screens worldwide. We see them illuminated on darkened commuter trains. We gaze absentmindedly at them while standing in line at the supermarket. We sneak quick glances at them while stopped at a red light. We aren’t reading for knowledge or enrichment; we’re filling a moment until the next thing happens.

Illustration by Erik Carter for the New York Times Magazine. Screenshots from Vine users Alona Forsythe, Brandon Bowen, Dems, imanilindsay, MRose and Sionemaraschino.

But what are we reading that’s so riveting that it removes us from the present moment? As it turns out, nothing. There is a whole industry around the idea of “borecore,” a term used by Jenna Wortham in a New York Times Magazine article this month. She describes how video-sharing apps like Vine and Meerkat operate like a firehose, constantly releasing a voluminous stream of vapid, self-indulgent content better suited to filling time than enriching it:

“Rather than killing time at the mall, in a Spencer’s Gifts or the food court, young people are filming themselves doing the incredibly mundane: goofing around in a backyard pool, lounging on basement couches, whatever; in other words, recording the minutiae of their lives and uploading it for not very many to see.”

Kindle Fire device

A similar trend is taking place with non-video content. We don’t read the written word, as we would a book or periodical; we consume it. Screens are always on, and we never stop peeking at them. The digital material we consume is highly visual and interactive, requiring a series of finger gestures and precise tapping sequences. Pop-up windows with tiny “close” buttons and interstitial moving images compete for our attention, and every selection we make is recorded by someone, somewhere, for some data-driven purpose we’ll never understand.

Reading off a screen requires a user to rapidly formulate a pattern of behavior that rewards distraction. Sitting down with a long narrative, told in a singular voice, just isn’t as compelling when we need a quick blast of information (such as checking the customer reviews of a product while we’re standing in the store, deciding whether to make the purchase) or just want to waste a few minutes at the airport until our flight is called.

People standing in line, looking at their smart phones.
Photo by Greg Battin of Autodesk University.

Some might argue that technology has even influenced the quality of content we choose to consume. At its “best,” reading from a tablet screen is no different than turning the pages of a book, because good writing will always transcend whatever medium by which it’s delivered. At “worst,” though, we scan words off a screen between the moments of our lives, like sneaking an unhealthy snack into our mouths when we think no one is looking.

Whatever our opinion on the quality of digital material being consumed, one cannot deny that our retrieval capacity has enhanced our role as content aggregators. We reside in constant, curatorial flux regarding the world around us, and the screen is the first place we look when we need answers. It’s also the vehicle of choice when we want to put something out into the world, even if the end in mind is a book printed on paper. Otherwise, this blog wouldn’t exist.

He That Loseth His Life

Anyone who has studied American Southern literature will have read “Good Country People,” a short story written in 1955 by Flannery O’Connor. The piece describes how two convergent views of reality collide at an uncomfortable point of contact: one seemingly innocent romantic encounter that erupts into something horrific, cruel and bizarre.

Flannery O'Connor in her driveway in 1962
Flannery O’Connor in her driveway, in 1962. Photo by Joe McTyre for the Atlanta Constitution. Image reproduced from the New York Times.

The main point of the story is that we reside in a dual existence between two planes of understanding. One is seemingly docile and innocent, a patchwork of hopeful clichés operating as accepted truth. The other is more nihilistic and refutes any deep metaphysical parallels with the surface world, insisting that there is nothing behind or beyond that which we can interpret with our senses.

Last week, three media pieces appeared concerning the topic of cyberbullying. One is an article on TechDirt by Shawn DuBravac, Ph.D., chief economist at the Consumer Electronics Association and author of the upcoming book, Digital Destiny: How the New Age of Data Will Transform the Way We Work, Live, and Communicate.

In his TechDirt piece, DuBravac discusses how social media participants cultivate parallel identities, and how these profiles dictate wholly separate belief structures regarding acceptable engagement. He mentions a recent Supreme Court case of a man convicted of threatening on Facebook to kill his wife. The defense claimed that what he wrote online was never intended to be taken literally; the prosecution argued that what he meant when he wrote the posts is irrelevant to the act itself.

This case could set an interesting precedent, given a remark by Justice Sonia Sotomayor that “[the courts] have been loath to create more exceptions to the First Amendment,” which DuBravac suggests could increase the amount of legal latitude afforded to perpetrators of online abuse. He also senses, however, that the digital and offline realms are slowly converging:

“Since its foundation, the Internet has revealed its unique place in society – a place where people are free to be whoever they want. This freedom has found its purest expression in social network sites. Yet the nature of the Internet is changing. We hardly even talk about ‘being online’ anymore, because we’re always online through our smartphones and mobile devices.”

The first step towards resolving any problem is to define it. Last week, the Nova Scotia Supreme Court included “element(s) of malice” as part of its official definition of cyberbullying. The new tort describes the procedure by which a complainant can seek protection orders against any individual or group whose electronic communications are intended to cause “fear, intimidation, humiliation, distress or other damage or harm to another person’s health, emotional well-being, self-esteem or reputation.”

Such language would be a good start for many colleges and universities in the United States, many of which have policies about sexual assault, hazing and discrimination but hardly any mention of abuse that occurs online. This lack of clarity could reflect a difference in attitude, suggesting that cyberbullying behavior among adolescents evolves as the affected age groups grow older.

According to a study conducted by researchers in Seattle and Wisconsin, for example, name-calling attacks that might evoke suicidal thoughts among a preteen audience are statistically less likely to have the same effect on college students. In addition, there is little definition or agreement among authorities as to what actions constitute cyberbullying at the university level.

Cyberbullying illustration

On the one hand, we have people who actively minimize their responsibility when posting content online. It’s the digital equivalent of “I was angry and not thinking straight.” On the other hand, it’s clear that we need some sort of metric to properly qualify the spectrum of online aggression. It’s nearly impossible to determine the appropriate consequence when, for example, a simple Facebook comment goes too far. We can unfriend or unfollow those who transgress against us, but all that does is cut one stream from a single source.

It seems to be a rather grim business, this digital netherworld of anonymous nihilism slowly rising to greet us when we are most vulnerable. It brings to mind a revealing quote from “Good Country People,” spoken at a pivotal moment in the story when one of its central characters, a seemingly benign yet covertly despicable human, is compelled to defend his lack of personal accountability:

“I’ve gotten a lot of interesting things. And you needn’t think you’ll catch me because Pointer ain’t really my name. I use a different name at every house and don’t stay nowhere long. And I’ll tell you another thing, Hulga, you ain’t so smart. I been believing in nothing ever since I was born.”