On July 11, 2000, in one of the more unlikely moments in the history of the Senate Judiciary Committee, Senator Orrin Hatch handed the microphone to Metallica’s drummer, Lars Ulrich, to hear his thoughts on art in the age of digital reproduction. Ulrich’s primary concern was a new online service called Napster, which had debuted a little more than a year before. As Ulrich explained in his statement, the band began investigating Napster after unreleased versions of one of their songs began playing on radio stations around the country. They discovered that their entire catalog of music was available there for free.
Ulrich’s trip to Washington coincided with a lawsuit that Metallica had just filed against Napster — a suit that would ultimately play a role in the company’s bankruptcy filing. But in retrospect, we can also see Ulrich’s appearance as an intellectual milestone of sorts, in that he articulated a critique of the Internet-era creative economy that became increasingly commonplace over time. ‘‘We typically employ a record producer, recording engineers, programmers, assistants and, occasionally, other musicians,’’ Ulrich told the Senate committee. ‘‘We rent time for months at recording studios, which are owned by small-business men who have risked their own capital to buy, maintain and constantly upgrade very expensive equipment and facilities. Our record releases are supported by hundreds of record companies’ employees and provide programming for numerous radio and television stations. … It’s clear, then, that if music is free for downloading, the music industry is not viable. All the jobs I just talked about will be lost, and the diverse voices of the artists will disappear.’’
The intersection between commerce, technology and culture has long been a place of anxiety and foreboding. Marxist critics in the 1940s denounced the assembly-line approach to filmmaking that Hollywood had pioneered; in the ’60s, we feared the rise of television’s ‘‘vast wasteland’’; the ’80s demonized the record executives who were making money off violent rap lyrics and ‘‘Darling Nikki’’; in the ’90s, critics accused bookstore chains and Walmart of undermining the subtle curations of independent bookshops and record stores.
But starting with Ulrich’s testimony, a new complaint has taken center stage, one that flips those older objections on their heads. The problem with the culture industry is no longer its rapacious pursuit of consumer dollars. The problem with the culture industry is that it’s not profitable enough. Thanks to its legal troubles, Napster itself ended up being much less important as a business than as an omen, a preview of coming destructions. Its short, troubled life signaled a fundamental rearrangement in the way we discover, consume and (most importantly) pay for creative work. In the 15 years since, many artists and commentators have come to believe that Ulrich’s promised apocalypse is now upon us — that the digital economy, in which information not only wants to be free but for all practical purposes is free, ultimately means that ‘‘the diverse voices of the artists will disappear,’’ because musicians and writers and filmmakers can no longer make a living.
Take a look at your own media consumption, and you can most likely see the logic of the argument. Just calculate for a second how many things you used to pay for that now arrive free of charge: all those Spotify playlists that were once $15 CDs; the countless hours of YouTube videos your kids watch each week; online articles that once required a magazine subscription or a few bucks at the newsstand. And even when you do manage to pull out a credit card, the amounts are shrinking: $9 for an e-book that used to be a $20 hardcover. If the prices of traditional media keep falling, then it seems logical to critics that we will end up in a world in which no one has an economic incentive to follow creative passions. The thrust of this argument is simple and bleak: the digital economy makes it structurally impossible for art to make money in the future. The world of professional creativity, the critics fear, will soon be swallowed by the profusion of amateurs, or the collapse of prices in an age of infinite and instant reproduction will cheapen art so that no one will be able to quit their day jobs to make it — or both.
The trouble with this argument is that it has been based largely on anecdote, on depressing stories about moderately successful bands that are still sharing an apartment or filmmakers who can’t get their pictures made because they refuse to pander to a teenage sensibility. When we do see hard data about the state of the culture business, it usually tracks broad industry trends or the successes and failures of individual entertainment companies. That data isn’t entirely irrelevant, of course; it’s useful to know whether the music industry is making more or less money than it did before Ulrich delivered his anti-Napster testimony. But ultimately, those statistics only hint at the most important question. The dystopian scenario, after all, isn’t about the death of the record business or Hollywood; it’s about the death of music or movies. As a society, what we most want to ensure is that the artists can prosper — not the record labels or studios or publishing conglomerates, but the writers, musicians, directors and actors themselves.
Their financial fate turns out to be much harder to measure, but I tried. Taking 1999 as my starting point — the year both Napster and Google took off — I plumbed as many data sources as I could to answer this one question: How is today’s creative class faring compared with its predecessor a decade and a half ago? The answer isn’t simple, and the data provides ammunition for conflicting points of view. It turns out that Ulrich was incontrovertibly correct on one point: Napster did pose a grave threat to the economic value that consumers placed on recorded music. And yet the creative apocalypse he warned of has failed to arrive. Writers, performers, directors and even musicians report their economic fortunes to be similar to those of their counterparts 15 years ago, and in many cases they have improved. Against all odds, the voices of the artists seem to be louder than ever.
The closest data set we have to a bird’s-eye view of the culture industry can be found in the Occupational Employment Statistics, an enormous compendium of data assembled by the Labor Department that provides employment and income estimates. Broken down by general sector and by specific professions, the O.E.S. lets you see both the forest and the trees: You can track employment data for the Farming, Fishing and Forestry Occupations (Group 45-0000), or you can zoom in all the way to the Fallers (Group 45-4021) who are actually cutting down the trees. The O.E.S. data goes back to the 1980s, though some of the category definitions have changed over time. This, and the way the agency collects its data, can make specific year-to-year comparisons less reliable. The best approximation of the creative-class group as a whole is Group 27-0000, or Arts, Design, Entertainment, Sports and Media Occupations. It’s a broader definition than we’re looking for — I think we can all agree that professional athletes are doing just fine, thank you very much — but it gives us a place to start.
The first thing that jumps out at you, looking at Group 27-0000, is how stable it has been over the past decade and a half. In 1999, the national economy supported 1.5 million jobs in that category; by 2014, the number had grown to nearly 1.8 million. This means the creative class modestly outperformed the rest of the economy, making up 1.2 percent of the job market in 2001 compared with 1.3 percent in 2014. Annual income for Group 27-0000 grew by 40 percent, slightly more than the O.E.S. average of 38 percent. From that macro viewpoint, it hardly seems as though the creative economy is in dust-bowl territory. If anything, the market looks as if it is rewarding creative work, not undermining it, compared with the pre-Napster era.
The problem with the O.E.S. data is that it doesn’t track self-employed workers, who are obviously a large part of the world of creative production. For that section of the culture industry, the best data sources are the United States Economic Census, which is conducted every five years, and a firm called Economic Modeling Specialists International, which tracks detailed job numbers for self-employed people in specific professions. If anything, the numbers from the self-employed world are even more promising. From 2002 to 2012, the number of businesses that identify as or employ ‘‘independent artists, writers and performers’’ (which also includes some athletes) grew by almost 40 percent, while the total revenue generated by this group grew by 60 percent, far exceeding the rate of inflation.
What do these data sets have to tell us about musicians in particular? According to the O.E.S., in 1999 there were nearly 53,000 Americans who considered their primary occupation to be that of a musician, a music director or a composer; in 2014, more than 60,000 people were employed writing, singing or playing music. That’s a rise of 15 percent, compared with overall job-market growth during that period of about 6 percent. The number of self-employed musicians grew at an even faster rate: There were 45 percent more independent musicians in 2014 than in 2001. (Self-employed writers, by contrast, grew by 20 percent over that period.)
Of course, Baudelaire would have filed his tax forms as self-employed, too; that doesn’t mean he wasn’t also destitute. Could the surge in musicians be accompanied by a parallel expansion in the number of broke musicians? The income data suggests that this just isn’t true. According to the O.E.S., songwriters and music directors saw their average income rise by nearly 60 percent since 1999. The census version of the story, which includes self-employed musicians, is less stellar: In 2012, musical groups and artists reported only 25 percent more in revenue than they did in 2002, which is basically treading water when you factor in inflation. And yet collectively, the figures seem to suggest that music, the creative field that has been most threatened by technological change, has become more profitable in the post-Napster era — not for the music industry, of course, but for musicians themselves. Somehow the turbulence of the last 15 years seems to have created an economy in which more people than ever are writing and performing songs for a living.
How can this be? The record industry’s collapse is real and well documented. Even after Napster shut down in 2002, music piracy continued to grow: According to the Recording Industry Association of America, 30 billion songs were illegally downloaded from 2004 to 2009. American consumers paid for only 37 percent of the music they acquired in 2009. Artists report that royalties from streaming services like Spotify or Pandora are a tiny fraction of what they used to see from traditional album sales. The global music industry peaked just before Napster’s debut, during the heyday of CD sales, when it reaped what would amount today to almost $60 billion in revenue. Now the industry worldwide reports roughly $15 billion in revenue from recorded music, a financial Armageddon even if you consider that CDs are much more expensive to produce and distribute than digital tracks. With such a steep decline, how can the average songwriter or musician be doing better in the post-Napster era? And why do there seem to be more musicians than ever?
Part of the answer is that the decline in recorded-music revenue has been accompanied by an increase in revenues from live music. In 1999, when Britney Spears ruled the airwaves, the music business took in around $10 billion in live-music revenue internationally; in 2014, live music generated almost $30 billion in revenue, according to data assembled from multiple sources by the live-music service Songkick. Starting in the early 1980s, average ticket prices for concerts closely followed the rise in overall consumer prices until the mid-1990s, when ticket prices suddenly took off: From 1997 to 2012, average ticket prices rose 150 percent, while consumer prices grew less than 100 percent. It’s elemental economics: As one good — recorded music — becomes ubiquitous, its price plummets, while another good that is by definition scarce (seeing a musician play a live performance) grows in value. Moreover, as file-sharing and iTunes and Spotify have driven down the price of music, they have also made it far easier to envelop your life with a kind of permanent soundtrack, all of which drives awareness of the musicians and encourages fans to check them out in concert. Recorded music, then, becomes a kind of marketing expense for the main event of live shows.
It’s true that most of that live-music revenue is captured by superstar acts like Taylor Swift or the Rolling Stones. In 1982, the musical 1-percenters took in only 26 percent of the total revenues generated by live music; in 2003, they captured 56 percent of the market, with the top 5 percent of musicians capturing almost 90 percent of live revenues. But this winner-takes-all trend seems to have preceded the digital revolution; most 1-percenters achieved their gains in the ’80s and early ’90s, as the concert business matured into a promotional machine oriented around marquee world tours. In the post-Napster era, there seems to have been a swing back in a more egalitarian direction. According to one source, the top 100 tours of 2000 captured 90 percent of all revenue, while today the top 100 capture only 43 percent.
The growth of live music isn’t great news for the Brian Wilsons of the world, artists who would prefer to cloister themselves in the studio, endlessly tinkering with the recording process in pursuit of a masterpiece. The new economics of the post-Napster era are certainly skewed toward artists who like to perform in public. But we should remember one other factor here that is often forgotten. The same technological forces that have driven down the price of recorded music have had a similar effect on the cost of making an album in the first place. We easily forget how expensive it was to produce and distribute albums in the pre-Napster era. In a 2014 keynote speech at an Australian music conference, the indie producer and musician Steve Albini observed: ‘‘When I started playing in bands in the ’70s and ’80s, most bands went through their entire life cycle without so much as a note of their music ever being recorded.’’ Today, musicians can have software that emulates the sound of Abbey Road Studios on their laptops for a few thousand dollars. Distributing music around the world — a process that once required an immense global corporation or complex regional distribution deals — can now be performed by the artist herself while sitting in a Starbucks, simply through the act of uploading a file.
The vast machinery of promoters and shippers and manufacturers and A&R executives that sprouted in the middle of the 20th century, fueled by the profits of those high-margin vinyl records and CDs, has largely withered away. What remains is a more direct relationship between the musicians and their fans. That new relationship has its own demands: the constant touring and self-promotion, the Kickstarter campaigns that have raised $153 million to date for music-related projects, the drudgery that inevitably accompanies a life without handlers. But the economic trends suggest that the benefits are outweighing the costs. More people are choosing to make a career as a musician or a songwriter than they did in the glory days of Tower Records.
Of the big four creative industries (music, television, movies and books), music turns out to be the business that has seen the most conspicuous turmoil: None of the other three has seen anywhere near the cratering of recorded-music revenues. The O.E.S. numbers show that writers and actors each saw their income increase by about 50 percent, well above the national average. According to the Association of American Publishers, total revenues in the fiction and nonfiction book industry were up 17 percent from 2008 to 2014, following the introduction of the Kindle in late 2007. Global television revenues have been projected to grow by 24 percent from 2012 to 2017. For actors and directors and screenwriters, the explosion of long-form television narratives has created a huge number of job opportunities. (Economic Modeling Specialists International reports that the number of self-employed actors has grown by 45 percent since 2001.) If you were a television actor looking for work on a multiseason drama or comedy in 2001, there were only a handful of potential employers: the big four networks and HBO and Showtime. Today there are Netflix, Amazon, AMC, Syfy, FX and many others.
What about the economics of quality? Perhaps there are more musicians than ever, and the writers have collectively gotten a raise, but if the market is only rewarding bubble-gum pop and ‘‘Fifty Shades of Grey’’ sequels, there’s a problem. I think we can take it as a given that television is exempt from this concern: Shows like ‘‘Game of Thrones,’’ ‘‘Orange Is the New Black,’’ ‘‘Breaking Bad’’ and so on confirm that we are living through a golden age of TV narrative. But are the other forms thriving artistically to the same degree?
Look at Hollywood, and at first blush the picture is deeply depressing. More than half of the highest grossing movies of 2014 were either superhero films or sequels; it’s clearly much harder to make a major-studio movie today that doesn’t involve vampires, wizards or Marvel characters. This has led a number of commentators and filmmakers to publish eulogies for the classic midbudget picture. ‘‘Back in the 1980s and 1990s,’’ Jason Bailey wrote on Flavorwire, ‘‘it was possible to finance — either independently or via the studio system — midbudget films (anywhere from $5 million to $60 million) with an adult sensibility. But slowly, quietly, over roughly the decade and a half since the turn of the century, the paradigm shifted.’’ Movies like ‘‘Blue Velvet,’’ ‘‘Do the Right Thing’’ or ‘‘Pulp Fiction’’ that succeeded two or three decades ago, the story goes, would have had a much harder time in the current climate. Steven Soderbergh apparently felt so strongly about the shifting environment that he abandoned theatrical moviemaking altogether last year.
Is Bailey’s criticism really correct? If you make a great midbudget film in 2015, is the marketplace less likely to reward your efforts than it was 15 years ago? And has it become harder to make such a film? Cinematic quality is obviously more difficult to measure than profits or employment levels, but we can attempt an estimate of artistic achievement through the Rotten Tomatoes rankings, which aggregate critics’ reviews for movies. Using data on box-office receipts and budgets from IMDb, I looked at films from 1999 and 2013 that met three criteria. First, they were original creations or adaptations, not based on existing franchises, and were intended largely for an adult audience; second, they had a budget below $80 million; and third, they were highly praised by the critics, as defined by their Rotten Tomatoes score — in other words, the best of the cinematic midlist. In 1999, the most highly rated films meeting these criteria included ‘‘Three Kings,’’ ‘‘Being John Malkovich,’’ ‘‘American Beauty’’ and ‘‘Election.’’ The 2013 list included ‘‘12 Years a Slave,’’ ‘‘Her,’’ ‘‘Zero Dark Thirty,’’ ‘‘American Hustle’’ and ‘‘Nebraska.’’ In adjusted dollars, the class of 1999 brought in roughly $430 million at the box office. But the 2013 group took in about $20 million more. True, individual years can be misleading: All it takes is one monster hit to skew the numbers. But if you look at the blended average over a three-year window, there is still no evidence of decline. The 30 most highly rated midbudget films of 1999 to 2001 took in $1.5 billion at the domestic box office, adjusted for inflation; the class of 2011 to 2013 took in the exact same amount. Then as now, if you make a small or midsize movie that rates on the Top 10 lists of most critics, you’ll average roughly $50 million at the box office.
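For readers curious about the mechanics of that calculation, the filter-and-sum procedure can be sketched in a few lines of Python. Everything below is a placeholder: the film records, the $80 million budget cap, the critics’-score threshold and the inflation multipliers are invented for illustration, not drawn from the actual IMDb and Rotten Tomatoes data set.

```python
# Sketch of the midbudget-film analysis described above.
# The film records here are illustrative placeholders, NOT the
# article's actual data; real figures came from IMDb and Rotten Tomatoes.
from dataclasses import dataclass

@dataclass
class Film:
    title: str
    year: int
    budget_musd: float        # production budget, millions of dollars
    rt_score: int             # Rotten Tomatoes critics' score, 0-100
    franchise: bool           # based on an existing franchise?
    box_office_musd: float    # domestic gross, millions (nominal dollars)

def qualifies(f: Film, budget_cap: float = 80.0, score_floor: int = 85) -> bool:
    """The three criteria: original work, midbudget, critically praised."""
    return (not f.franchise) and f.budget_musd < budget_cap and f.rt_score >= score_floor

def adjusted_total(films, year, cpi_ratio):
    """Sum inflation-adjusted box office for qualifying films from one year.
    cpi_ratio maps a release year to a multiplier into current dollars."""
    return sum(f.box_office_musd * cpi_ratio[f.year]
               for f in films if f.year == year and qualifies(f))

films = [
    Film("Hypothetical A", 1999, 15.0, 92, False, 130.0),
    Film("Hypothetical B", 1999, 60.0, 88, False, 75.0),
    Film("Franchise Sequel", 1999, 70.0, 90, True, 300.0),  # excluded: franchise
    Film("Hypothetical C", 2013, 25.0, 95, False, 180.0),
]

cpi_ratio = {1999: 1.42, 2013: 1.02}  # placeholder inflation multipliers

print(adjusted_total(films, 1999, cpi_ratio))  # (130 + 75) * 1.42, in millions
print(adjusted_total(films, 2013, cpi_ratio))
```

The only judgment calls the real analysis requires beyond this loop are the ones the article names: choosing the score threshold that defines ‘‘highly praised’’ and deciding which films count as franchise properties.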
The critics are right that big Hollywood studios have abandoned the production of artistically challenging films, part of a broader trend since the 1990s of producing fewer films over all. (From 2006 to 2011, the combined output of major Hollywood studios declined by 25 percent.) And yet the total number of pictures released in the United States — nearly 600 in 2011 — remains high. A recent entertainment research report, The Sky Is Rising, notes that most of that growth has come from independent production companies, often financed by wealthy individuals from outside the traditional studio system. ‘‘Her,’’ ‘‘12 Years a Slave,’’ ‘‘Dallas Buyers Club,’’ ‘‘American Hustle’’ and ‘‘The Wolf of Wall Street’’ were all funded by major indies, though they usually relied on distribution deals with Hollywood studios. At the same time, of course, some of the slack in adventurous filmmaking has been taken up by the television networks. If Francis Ford Coppola were making his ‘‘Godfather’’ trilogy today, he might well end up at HBO or AMC, with a hundred hours of narrative at his disposal, instead of 10.
How have high-quality books fared in the digital economy? If you write an exceptional novel or biography today, are you more or less likely to hit the best-seller list than you might have in the pre-Kindle age? Here the pessimists might have a case, based on my analysis. Every year, editors at The New York Times Book Review select the 100 notable books of the year. In 2004 and 2005, the years before the first Kindles were released, those books spent a combined 2,781 weeks on The Times’s best-seller list and the American Booksellers Association’s IndieBound list, which tracks sales in independent bookstores. In 2013 and 2014, the notable books spent 2,531 weeks on the best-seller lists — a decline of 9 percent. When you look at the two lists separately, the story becomes more complicated still. The critical successes of 2013 and 2014 actually spent 6 percent more weeks on the A.B.A. list, but 30 percent fewer weeks on the broader Times list. The numbers seem to suggest that the market for books may be evolving into two distinct systems. Critically successful works seem to be finding their audience more easily among indie-bookstore shoppers, even as the mainstream market has been trending toward a winner-takes-all sweepstakes.
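The arithmetic behind that comparison is simple aggregation plus a percentage change. Here is a minimal sketch, with invented titles and week counts standing in for the actual Times and A.B.A. list data:

```python
# Illustrative sketch of the best-seller-weeks comparison above.
# All book titles and counts are invented placeholders, not the
# actual New York Times / IndieBound list data.

def total_weeks(notables, weeks_on_list):
    """Sum the weeks each 'notable book' spent on a best-seller list.
    weeks_on_list maps title -> weeks; titles that never charted count as zero."""
    return sum(weeks_on_list.get(title, 0) for title in notables)

def pct_change(before, after):
    return (after - before) / before * 100

pre_kindle = ["Book A", "Book B", "Book C"]     # notable books, 2004-05
post_kindle = ["Book X", "Book Y", "Book Z"]    # notable books, 2013-14

weeks_2004_05 = {"Book A": 12, "Book B": 30, "Book C": 5}
weeks_2013_14 = {"Book X": 10, "Book Y": 25, "Book Z": 8}

before = total_weeks(pre_kindle, weeks_2004_05)   # 47
after = total_weeks(post_kindle, weeks_2013_14)   # 43
print(round(pct_change(before, after), 1))        # a decline, as in the article
```

Running the same tally separately against each list (the Times list and the A.B.A. IndieBound list) is what surfaces the split the article describes: gains on one list, losses on the other.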
This would be even more troubling if independent bookstores — traditional champions of the literary novel and thoughtful nonfiction — were on life support. But contrary to all expectations, these stores have been thriving. After hitting a low in 2007, decimated not only by the Internet but also by the rise of big-box chains like Borders and Barnes & Noble, indie bookstores have been growing at a steady clip, with their number up 35 percent (from 1,651 in 2009 to 2,227 in 2015); by many reports, 2014 was their most financially successful year in recent memory. Indie bookstores account for only about 10 percent of overall book sales, but they have a vastly disproportionate impact on the sale of the creative midlist books that are so vital to the health of the culture.
How do we explain the evolutionary niche that indie bookstores seem to have found in recent years? It may be as simple as the tactile appeal of books and bookstores themselves. After several years of huge growth, e-book sales have plateaued over the past two years at 25 to 30 percent of the market, telegraphing that a healthy consumer appetite for print remains. To many of us, buying music in physical form is now simply an inconvenience: schlepping those CDs home and burning them and downloading the tracks to our mobile devices. But many of the most ardent Kindle converts — and I count myself among them — still enjoy browsing shelves of physical books, picking them up and sitting back on the couch with them. The trend might also reflect the social dimension of book culture: If you’re looking for literary community, you head out to the weekly reading series at the indie bookstore and buy something while you’re there. (Arguably, it’s the same phenomenon that happened with music, only with a twist. If you’re looking for musical community, you don’t go out on a CD-buying binge. You go to a show instead.)
All these numbers, of course, only hint at whether our digital economy rewards quality. Or — even better than that milquetoast word ‘‘quality’’ — at whether it rewards experimentation, boundary-pushing, satire, the real drivers of new creative work. It could be that our smartphone distractions and Kardashian celebrity culture have slowly but steadily lowered our critical standards, the aesthetic version of inflation: The critics might like certain films and books today because they’re surrounded by such a vast wasteland of mediocrity, but if you had released them 15 years ago, they would have paled beside the masterpieces of that era. But if you scan the titles, it is hard to see an obvious decline. A marketplace that rewarded ‘‘American Beauty,’’ ‘‘The Corrections’’ or ‘‘In the Heart of the Sea’’ doesn’t seem glaringly more sophisticated than one that rewards ‘‘12 Years a Slave,’’ ‘‘The Flamethrowers’’ or ‘‘The Sixth Extinction.’’
If you believe the data, then one question remains. Why have the more pessimistic predictions not come to pass? One incontrovertible reason is that — contrary to the justifiable fears of a decade ago — people will still pay for creative works. The Napsterization of culture turned out to be less of a threat to prices than it initially appeared. Consumers spend less for recorded music, but more for live. Most American households pay for television content, a revenue stream that for all practical purposes didn’t exist 40 years ago. Average movie-ticket prices continue to rise. For interesting reasons, book piracy hasn’t taken off the way it did with music. And a whole new creative industry — video games — has arisen to become as lucrative as Hollywood. American households in 2013 spent 4.9 percent of their income on entertainment, the exact same percentage they spent in 2000.
At the same time, there are now more ways to buy creative work, thanks to the proliferation of content-delivery platforms. Practically every device consumers own is tempting them at all hours with new films or songs or shows to purchase. Virtually no one bought anything on their computer just 20 years ago; the idea of using a phone to buy and read a 700-page book about a blind girl in occupied France would have sounded like a joke even 10 years ago. But today, our phones sell us every form of media imaginable; our TVs charge us for video-on-demand products; our car stereos urge us to sign up for SiriusXM.
And just as there are more avenues for consumers to pay for creative work, there are more ways to be compensated for making that work. Think of that signature flourish of 2000s-era television artistry: the exquisitely curated (and usually obscure) song that signals the transition from final shot to the rolling credits. Having a track featured during the credits of ‘‘Girls’’ or ‘‘Breaking Bad’’ or ‘‘True Blood’’ can be worth hundreds of thousands of dollars to a songwriter. (Not long ago, the idea of licensing a popular song for the credits of a television series was almost unheard-of.) Video-game budgets pay for actors, composers, writers and song licenses. There are YouTube videos generating ad revenue and Amazon Kindle Singles earning royalties, not to mention those emerging studios (like Netflix and Yahoo) that are spending significant dollars on high-quality video. Filmmakers alone have raised more than $290 million on Kickstarter for their creations. Musicians are supplementing their income with instrument lessons on YouTube. All of these outlets are potential sources of revenue for the creative class, and all of them are creatures of the post-Napster era. The Future of Music Coalition recently published a list of all the revenue streams available to musicians today, everything from sheet-music sales at concerts to vinyl-album sales. They came up with 46 distinct sources, 13 of which — including YouTube partner revenue and ringtone royalties — were nonexistent 15 years ago, and six of which, including film and television licensing, have greatly expanded in the digital age.
The biggest change of all, perhaps, is the ease with which art can be made and distributed. The cost of consuming culture may have declined, though not as much as we feared. But the cost of producing it has dropped far more drastically. Authors are writing and publishing novels to a global audience without ever requiring the service of a printing press or an international distributor. For indie filmmakers, a helicopter aerial shot that could cost tens of thousands of dollars a few years ago can now be filmed with a GoPro and a drone for under $1,000; some directors are shooting entire HD-quality films on their iPhones. Apple’s editing software, Final Cut Pro X, costs $299 and has been used to edit Oscar-winning films. A musician running software from Native Instruments can recreate, with astonishing fidelity, the sound of a Steinway grand piano played in a Vienna concert hall, or hundreds of different guitar-amplifier sounds, or the Mellotron proto-synthesizer that the Beatles used on ‘‘Strawberry Fields Forever.’’ These sounds could have cost millions to assemble 15 years ago; today, you can have all of them for a few thousand dollars.
From the bird’s-eye perspective, it may not look as though all that much has changed in terms of the livelihoods of the creative class. On the whole, creators seem to be making slightly more money, while growing in number at a steady but not fast pace. I suspect the profound change lies at the boundaries of professionalism. It has never been easier to start making money from creative work, for your passion to undertake that critical leap from pure hobby to part-time income source. Write a novel or record an album, and you can get it online and available for purchase right away, without persuading an editor or an A&R executive that your work is commercially viable. From the consumer’s perspective, blurring the boundaries has an obvious benefit: It widens the pool of potential talent. But it also has an important social merit. Widening the pool means that more people are earning income by doing what they love.
These new careers — collaborating on an indie-movie soundtrack with a musician across the Atlantic, uploading a music video to YouTube that you shot yourself on a smartphone — require a kind of entrepreneurial energy that some creators may lack. The new environment may well select for artists who are particularly adept at inventing new career paths rather than single-mindedly focusing on their craft. There are certainly pockets of the creative world, like those critically acclaimed books dropping off the mainstream best-seller lists, where the story is discouraging. And even the positive trends shouldn’t be interpreted as a mindless defense of the status quo. Most full-time artists barely make enough money to pay the bills, and so if we have levers to pull that will send more income their way — whether these take the form of government grants, Kickstarter campaigns or higher fees for the music we stream — by all means we should pull those levers.
But just because creative workers deserve to make more money doesn’t mean that the economic or technological trends are undermining their livelihoods. If anything, the trends are making creative livelihoods more achievable. Contrary to Lars Ulrich’s fear in 2000, the ‘‘diverse voices of the artists’’ are still with us, and they seem to be multiplying. The song remains the same, and there are more of us singing it for a living.
Steven Johnson is the author of nine books, most recently “How We Got to Now: Six Innovations That Made the Modern World.”