The Dumbest Generation Grows Up

From Stupefied Youth to Dangerous Adults

Mark Bauerlein

About This Book

Back in 2008, Mark Bauerlein was a voice crying in the wilderness. As experts greeted the new generation of "Digital Natives" with extravagant hopes for their high-tech future, he pegged them as the "Dumbest Generation." Today, their future doesn't look so bright, and their present is pretty grim. The twenty-somethings who spent their childhoods staring into a screen are lonely and purposeless, unfulfilled at work and at home. Many of them are even suicidal.

The Dumbest Generation Grows Up is an urgently needed update on the Millennials, explaining their not-so-quiet desperation and, more important, the threat that their ignorance poses to the rest of us. Lacking skills, knowledge, religion, and a cultural frame of reference, Millennials are anxiously looking for something to fill the void. Their mentors have failed them. Unfortunately, they have turned to politics to plug the hole in their souls. Knowing nothing about history, they are convinced that it is merely a catalogue of oppression, inequality, and hatred. Why, they wonder, has the human race not ended all this injustice before now? And from the depths of their ignorance rises the answer: Because they are the first ones to care! All that is needed is to tear down our inherited civilization and replace it with their utopian aspirations. For a generation unacquainted with the constraints of human nature, anything seems possible.

Having diagnosed the malady before most people realized the patient was sick, Mark Bauerlein surveys the psychological and social wreckage and warns that we cannot afford to do this to another generation.


CHAPTER ONE
Making Unhappy—and Dangerous—Adults

What have we done to them?
“Them”—the Millennials, the first Americans to come of age in the Digital Age, the cutting edge of the tech revolution, competing like never before for college and grad school, ready to think globally and renounce prejudice and fashion their profiles to achieve, achieve, follow their passions and be all that they can be—but ending up behind the Starbucks counter or doing contract work, living with their parents or in a house with four friends, nonetheless lonely and mistrustful, with no thoughts of marriage and children, no weekly church attendance or civic memberships, more than half of them convinced that their country is racist and sexist. This is no longer the cohort that in 2010 was “Confident, Self-Expressive, Liberal, Upbeat, and Open to Change.”1 It is a generation with a different theme: “53 Reasons Life Sucks for Millennials.”2
And we—the educators, journalists, intellectuals, business and foundation leaders, consultants, psychologists, and other supervisors of the young—who flattered them as Millennials Rising: The Next Great Generation,3 cried, “Here Come the Millennials,”4 left them to their digital devices and video games and five hundred TV channels and three hundred photos in their pockets, fed them diverting apps and stupid movies and crass music, and stuck them with crushing student debt and frightful health-care costs, a coarse and vulgar public square, churches in retreat, and an economy of “creative destruction” and “disruptive innovation” (which the top 10 percent exploited, but the rest experienced as, precisely, destructive and disruptive), all the while giving them little education in history, art, literature, philosophy, political theory, comparative religion—a cultural framework that might have helped them manage the confusions.
No generation had had so many venues for self-realization and could explore them without the guidance of their seniors—Facebook, online role-playing, YouTube (whose original motto, remember, was “Broadcast Yourself”). After all, if Millennials were individuals who could “think and process information fundamentally differently from their predecessors,”5 their minds conditioned to operate in alternative ways by digital immersion in their developing years, then the opinions of Boomers and Generation Xers about what the kids proceeded to do weren’t altogether relevant. If an eleven-year-old “community volunteer and blogger” could blow away a prominent education consultant with her international network and organizational savvy (“She’s sharing and learning and collaborating in ways that were unheard of just a few years ago”6), then the rest of us were forever fated to play catch-up. “The Internet and the digital world was [sic] something that belonged to adults, and now it’s something that really is the province of teenagers,” a Berkeley researcher told the producers of “Growing Up Online,” a 2008 episode of PBS’s Frontline.7 So who are forty-five-year-olds to judge? As a distinguished academic put it in a keynote discussion at the 2008 South by Southwest festival (SXSW), “Kids are early adopters of all new technologies. And they do it outside the watchful eyes of their parents. So there’s a sense of fear among parents.”8 Lighten up, we were told. Instead of fearing these kids who were passing them by, said the most progressive admirers of this new generation gap, the elders had a better option: “What Old People Can Learn from Millennials.”9
A dozen years ago, those of us watching with a skeptical eye couldn’t decide which troubled us more: the fifteen-year-olds averaging eight hours of media per day or the adults marveling at them. How could the older and wiser ignore the dangers of adolescents’ reading fewer books and logging more screen hours? How could they not realize that social media would flood the kids with youth culture and peer pressure day and night, blocking the exposure to adult matters and fresh ideas and a little high art that used to happen all the time when authors and their new books appeared in a standard segment on Johnny Carson or when Milton Friedman appeared repeatedly on Donohue in the late ’70s, teenagers played Masterpiece and Trivial Pursuit, and even little kids heard Leonard Bernstein’s beloved children’s concerts or got their classical music on Bugs Bunny. In a 2010 speech, George Steiner warned, “Nothing frightens me more than the withdrawal of serious music from the lives of millions of young children”—Chopin and Wagner replaced by “the barbarism of organized noise.” That was the inevitable outcome once technology enabled youths to become independent consumers. But for every George Steiner, there were dozens of intellectuals and teachers willing to cheer the multi-tasking, hyper-social young. Maybe it was that those figures who surely knew better were unwilling to protest for fear of appearing to be grouches, fogeys. Steiner himself admitted, “I sound like a boring old reactionary.” Nobody wanted to be that—though Steiner added, “I don’t apologize.”10
There should have been many, many more critics. The evidence was voluminous. Even as the cheerleaders were hailing the advent of digital youth, signs of intellectual harm were multiplying. Instead of heeding the signs, people in positions of authority rationalized them away. Bill Gates and Margaret Spellings and Barack Obama told Millennials they had to go to college to acquire twenty-first-century skills to get by in the information economy, and the schools went on to jack up tuition, dangle loans, and leave them five years after graduation in the state of early-twentieth-century sharecroppers, the competence they had developed in college and the digital techniques they had learned on their own often proving to be no help in the job market. The solution? Be more flexible, mobile, adaptive! High school students bombed NAEP exams (“the Nation’s Report Card”) in U.S. history and civics,11 but many shrugged: Why worry, now that Google is around? The kids can always look it up! An August 2013 column in Scientific American featured an author recalling his father paying him five dollars to memorize the U.S. presidents in order and reflecting, “Maybe we’ll soon conclude that memorizing facts is no longer part of the modern student’s task. Maybe we should let the smartphone call up those facts as necessary.”12 As boys began stacking up heavy sessions of video games, Senator Charles Schumer worried that they might become desensitized to violence and death, prompting a columnist at Wired magazine to scoff, “But dire pronouncements about new forms of entertainment are old hat. It goes like this: Young people embrace an activity. Adults condemn it. The kids grow up, no better or worse than their elders, and the moral panic subsides.”13
Such “no big deal” comments didn’t jibe with the common characterization of the digital advent as on the order of Gutenberg, but few minds in that heady time of screen innovations bothered to quibble. Something historic, momentous, epochal was underway, a movement, a wave, fresh and hopeful—so don’t be a naysayer. In December 2011, Joichi Ito, then director of the MIT Media Lab, stated in the New York Times, “The Internet isn’t really a technology. It’s a belief system.”14 And Silicon Valley entrepreneur and critic Andrew Keen was right to call its advocates “evangelists.”15 John Perry Barlow, the renowned defender of the open internet who coined the term “electronic frontier,” imagined virtual reality as the Incarnation in reverse: “Now, I realized, would the Flesh be made Word.”16
Given how pedestrian Facebook, Twitter, and Wikipedia seem today, not to mention the oddball auras of their founders and CEOs, it is difficult to remember the masters-of-the-universe, march-of-time cachet they enjoyed in the Web 2.0 phase of the Revolution (the first decade of the twenty-first century). Change happens so fast that we forget the spectacular novelty of it all, the days when digiphiles had all the momentum, the cool. As a friend who’d gone into technical writing in the ’90s told me recently, “It was sooo much fun back then.” Nobody wanted to hear the downsides, especially when so much money was being made. SAT scores in reading and writing kept slipping, but with all the texting, chatting, blogging, and tweeting, it was easy to find the high schoolers expressive in so many other ways, writing more words than any generation in history. The class of 2012 did less homework than previous cohorts did—a lot less—but during the Q&A at an event at the Virginia Military Institute, after I noted their sliding diligence, a young political scientist explained why: they were spending less time on assignments because all the tools and programs they’d mastered let them work so much faster—they weren’t lazy; they were efficient!—at which point the twelve hundred cadets in attendance, tired of my berating them for their selfies, stopped booing and burst out in applause. A much-discussed 2004 survey by the National Endowment for the Arts (NEA), Reading at Risk: A Survey of Literary Reading in America, found an astonishing drop in young adults’ consumption of fiction, poetry, and drama, with only 43 percent of them reading any literature at all in leisure hours, 17 percentage points lower than in 1982,17 but in my presentation of the findings at dozens of scholarly meetings and on college campuses (I had worked on the NEA project), the professionals dismissed them as alarmist and reactionary, arising from a “moral panic” no different from the stuffy alarm about Elvis and comic books fifty years earlier.
Some public intellectuals defended the digitizing kids because they, too, loved Facebook and Wikipedia. “The early signs of a culture of civic activism among young people, joined by networked technologies, are cropping up around the world,” wrote two Harvard scholars in 2008, endorsing the networks for, among other things, helping organize resistance against authoritarian regimes—and thus putting opponents of the internet into the role of supporting repressive forces.18 Others wouldn’t criticize the trends because they didn’t much care about the tradition-heavy materials that dropped out as kids logged on and surfed and chatted—the better books, films, artworks, symphonies and jazz solos, discussion shows, and history no longer present. In an April 2001 story in the New York Times with the revealing title “More Ado (Yawn) about Great Books,” reporter Emily Eakin quoted a top professor: “You can conceive of a curriculum producing the same cognitive skills that doesn’t use literature at all but opts for connecting with the media tastes of the day—film, video, TV, etc. It’s no longer clear why we need to teach literature at all.” Such critical thinking skills are the key aim, Eakin wrote, and “those, some English professors are willing to admit, can be honed just as well through considerations of ‘Sex and the City’ as ‘Middlemarch.’ ”19 From the notion that Sex and the City serves to promote higher-order reflections, it’s only a small step to the satirical videos on collegehumor.com, founded by undergraduates in 1999 and a few years later pulling in $10 million annually. Still others defending digital youth had a personal reason for countenancing the turn to the screen in spite of its intellectual costs: they didn’t want to chide the kids. It made them uncomfortable. They didn’t want to embrace the authority that licensed criticism of others for their leisure choices, and they didn’t want anyone else to assume it, either, and especially not to direct it at the (putatively) powerless adolescents. It sounded too much like get-off-my-lawn bullying.
Whatever the motives, the outcome was a climate of acceptance. Even some of the most conscientious studies of digital youth chose to play it neutral, not to judge. Hanging Out, Messing Around, and Geeking Out: Kids Living and Learning with New Media was a large entry in a series on digital media funded by the MacArthur Foundation and published by MIT Press in 2010. Mizuko Ito of the University of California, Irvine, led a team of twenty-one researchers on a three-year ethnographic project, building case studies, collecting data and contextual information, and providing analytic insights in order to describe the role of digital media, devices, and communications in the ordinary hours of youth in the United States. It was a superb profile of adolescent behavior and the new media environment. The researchers enumerated “intimacy practices” that kept peers close to one another. They explored “fansubbing practices,” the rising status of kids as “technology experts” in their families, what went into profiles on MySpace, the interpretation of “feedback” on open sites such as YouTube, the widening category of “work,” and so forth.20
I skimmed the book when it came out and corresponded briefly with Professor Ito. I just looked back at it and found that the chapters hold up, though some of the technologies are dated, of course. At the beginning of the book, however, the authors briefly declared a certain suspension of judgment that pulled me up short. Stating that they proposed to approach media as “embodiments of social and cultural relationships,” Ito and her coauthors concluded, “It follows that we do not see the content of the media or media platform (TV, books, games, etc.) as the most important variables for determining social or cognitive outcomes.”21 That is, the specific stuff the youths consumed was not a primary influence on their development—not in the eyes of the observers. This was a crucial withholding of critical judgment, flattening the character of the actual subject matter passing through the screens. Whether text messages talked about Shakespeare homework or party gossip, whether an individual browsed the web for Civil War battles or for pets at play, shared photos of Modernist architecture or of party scenes… the researchers were determined to remain indifferent. The methodology demanded it: to document, not assess; to describe, not prescribe. The goal was to render habitats and habits, to show how a new tool produces new activities and alters the environs and beings within it. The content and quality of the materials consumed and created, their aesthetic, moral, and intellectual merits, were to come second or third or last, if at all. The inquirers wouldn’t evaluate the substance of a video game, only how it was situated in the home, how parents regulated it, how kids identified with the figures.… What the kids did with it, not what it was: that was the key. Not what, but how: that was the question.
This is the standard ethnographic posture, of course—disinterested, unbiased, and open-minded—but how much of themselves did the investigators have to suppress in order to stay true to the method? One profile of a young anime expert in the book noted that, though he was at the time a graduate student in electrical engineering at a top school, he spent “about eight hours a day keeping up with his hobby.” His own words: “I think pretty much all the time that’s not school, eating, or sleeping.”22 One might have called this an obsession or an addiction—every leisure moment devoted to a cartoon genre, a habit that disengaged the young man from people and things in his immediate surroundings. If that was too extreme a diagnosis, the authors could at least have pondered the opportunity costs: no exercise, no dating, no volunteering or churchgoing, no books or museums or concerts or other hobbies. I would have asked about, precisely, the content of anime. What was so appealing about it? Was there a particular character or storyline that grabbed him? What were his first feelings at the first viewing? That line of investigation would get to the heart of his case: Is this really how he wishes to spend his teens and twenties? How long does he plan to keep it up? Apart from the pleasure, what does anime do for him that other, more educational diversions might do just as well?
That wasn’t the tack taken by the investigator here, however. Instead, after the young man confessed his every-free-moment groove, the sole comment was, “Building a reputation as one of the most knowledgeable voices in the online anime fandom requires this kind of commitment as well as an advanced media ecology that is finely tailored to his interests.”23
True enough, but when I read that final remark now in 2021, I don’t think about anime, the young man’s extraordinary “commitment,” and his advanced media skills. Yes, his fixation is off the charts, and there is an etiology to trace. But I let it go because I don’t have the information. Instead, I consider the mindset of the observer, the researcher doing the project, an intelligent and caring academic who has somehow turned off her taste, who refuses to ask whether the young man’s lifestyle is healthy or whether anime is really worth so many precious hours of his formative years. What did the observer think about this habit? She must have had an opinion. Did she approve of what anime was doing to him? Would she be happy to see her own child diving into anime and shunning everything else in leisure time? Did she project forward five or ten years and envision this man heading into middle age still hooked, or perhaps no longer hooked and regretting the months and years that might have been?
She couldn’t say; this was a case study, and the pr...
