Tagged: Technology

  • feedwordpress 23:02:06 on 2018/11/12 Permalink
    Tags: ideas, Technology

    When Tech Loves Its Fiercest Critics, Buyer Beware 

    Detail from the cover of Harari’s latest work, 21 Lessons for the 21st Century.

    A year and a half ago I reviewed Yuval Noah Harari’s Homo Deus, recommending it to the entire industry with this subhead: “No one in tech is talking about Homo Deus. We most certainly should be.”

    Eighteen months later, Harari is finally having his technology industry moment. The author of a trio of increasingly disturbing books – Sapiens, which made his name as a popular historian-philosopher, the aforementioned Homo Deus, which introduced a dark strain of tech futurism to his work, and the recent 21 Lessons for the 21st Century – Harari has cemented his place in the Valley as tech’s favorite self-flagellant. So it’s only fitting that this weekend Harari was the subject of a New York Times profile featuring this provocative title: Tech C.E.O.s Are in Love With Their Principal Doomsayer. The subhead continues: “The futurist philosopher Yuval Noah Harari thinks Silicon Valley is an engine of dystopian ruin. So why do the digital elite adore him so?”

    Well, I’m not sure if I qualify as one of those elites, but I have a theory, one that wasn’t quite raised in the Times’ otherwise compelling profile. I’ve been a student of Harari’s work, and if there’s one clear message, it’s this: We’re running headlong into a world controlled by a tiny elite of superhumans, masters of new technologies that the “useless class” will never understand. “Homo sapiens is an obsolete algorithm,” Harari writes in Homo Deus. A new religion of Dataism will transcend our current obsession with ourselves, and we will “dissolve within the data torrent like a clump of earth within a gushing river.” In other words, we humans are f*cked, save for a few of the lucky ones who manage to transcend their fate and become masters of the machines. “Silicon Valley is creating a tiny ruling class,” the Times writes, paraphrasing Harari’s work, “and a teeming, furious “useless class.””

    So here’s why I think the Valley loves Harari: We all believe we’ll be members of that tiny ruling class. It’s an indefensible, mathematically impossible belief, but as Harari reminds us in 21 Lessons, “never underestimate human stupidity.” Put another way, we are fooling ourselves, content to imagine we’ll somehow all earn a ticket into (or onto) whatever apocalypse-dodging exit plan Musk, Page or Bezos might dream up (they’re all obsessed with leaving the planet, after all). Believing that impossible fiction is certainly a lot easier than doing the quotidian work of actually fixing the problems which lie before us. Better to be one of the winners than to risk losing along with the rest of the useless class, no?

    But we can’t all be winners in the future Harari lays out, and he seems to understand this fact. “If you make people start thinking far more deeply and seriously about these issues,” he said to the Times, “some of the things they will think about might not be what you want them to think about.”

    Exactly, Professor. Now that I’ve departed the Valley, where I spent nearly three decades of my life, I’m starting to gain a bit of perspective on my own complicated relationship with the power structure of the place. I grew up with the (mostly) men who lead companies like Amazon, Google, Facebook and Apple, and early in the industry’s rise, it was heady to share the same stage with legends like Bezos, Jobs, or Page. But as the technology industry becomes the driving force of social rupture, I’m far more skeptical of its leaders’ abilities to, well, lead.

    Witness this nearly idea-free interview with Google CEO Sundar Pichai, also in the Times, where the meticulously media-prepped executive opines on whether his industry has a role to play in society’s ills: “Every generation is worried about the new technology, and feels like this time it’s different. Our parents worried about Elvis Presley’s influence on kids. So, I’m always asking the question, “Why would it be any different this time?” Having said that, I do realize the change that’s happening now is much faster than ever before. My son still doesn’t have a phone.”

    Pichai’s son may not have a phone, but he is earning money mining Ethereum (really, you can’t make this shit up). I’m not sure the son of a centi-millionaire needs to earn money – but it certainly is useful to master the algorithms that will soon control nearly every aspect of human life. So – no, son, no addictive phone for you (even though my company makes them, and makes their operating systems, and makes the apps which ensure their addictive qualities).

    But mining cryptocurrency? Absolutely!

    Should Harari be proven right and humanity become irrelevant, I’m pretty sure Pichai’s son will have a first class ticket out of whatever mess is left behind. But the rest of us? We should probably focus on making sure that kid never needs to use it.

    Cross posted from NewCo Shift. 


    By the way, the other current obsession of Valley folks is author Anand Giridharadas’ Winners Take All – The Elite Charade of Changing the World. Read them together for a one-two punch, if you dare…

     
  • feedwordpress 16:16:33 on 2018/09/24 Permalink
    Tags: Technology

    Governance, Technology, and Capitalism. 

    Or, Will Nature Just Shrug Its Shoulders?

    If you pull far enough back from the day to day debate over technology’s impact on society – far enough that Facebook’s destabilization of democracy, Amazon’s conquering of capitalism, and Google’s domination of our data flows start to blend into one broader, more cohesive picture – what does that picture communicate about the state of humanity today?

    Technology forces us to recalculate what it means to be human – what is essentially us, and whether technology represents us, or some emerging otherness which alienates or even terrifies us.  We have clothed ourselves in newly discovered data, we have yoked ourselves to new algorithmic harnesses, and we are waking to the human costs of this new practice. Who are we becoming?

    Nearly two years ago I predicted that the bloom would fade from the technology industry’s rose, and so far, so true. But as we begin to lose faith in the icons of our former narratives, a nagging and increasingly urgent question arises: In a world where we imagine merging with technology, what makes us uniquely human?

    Our lives are now driven in large part by data, code, and processing, and by the governance of algorithms. These determine how data flows, and what insights and decisions are taken as a result.

    So yes, software has, in a way, eaten the world. But software is not something being done to us. We have turned the physical world into data, we have translated our thoughts, actions, needs and desires into data, and we have submitted that data for algorithmic inspection and processing. What we now struggle with is the result of these new habits – the force of technology looping back upon the world, bending it to a new will.  What agency – and responsibility – do we have? Whose will? To what end?

    • • •

    Synonymous with progress, asking not for permission, fearless of breaking things – in particular stupid, worthy-of-being-broken things like government, sclerotic corporations, and fetid social norms – the technology industry reveled for decades in its role as a kind of anointed warrior for societal good. As one Senator told me during the Facebook hearings this past summer, “we purposefully didn’t regulate technology, and that was the right thing to do.” But now? He shrugged. Now, maybe it’s time.

    Because technology is already regulating us. I’ve always marveled at libertarians who think the best regulatory framework for government is none at all. Do they think that means there’s no governance?

    In our capitalized healthcare system, data, code and algorithms now drive diagnosis, costs, coverage and outcomes. What changes on the ground? People are being denied healthcare, and this equates to life or death in the real world. 

    In our public square, data, code and algorithms drive civil discourse. We no longer share one physical, common square, but instead struggle to comprehend a world comprised of a billion Truman Shows. What changes on the ground? The election results of the world’s most powerful country.

    Can you get credit to start a business? A loan to better yourself through education? Financial decisions are now determined by data, code, and algorithms. Job applications are turned to data, and run through cohorts of similarities, determining who gets hired, and who ultimately ends up leaving the workforce.

    And in perhaps the most human pursuit of all – connecting to other humans – we’ve turned our desires and our hopes to data, swapping centuries of cultural norms for faith in the governance of code and algorithms built – in necessary secrecy – by private corporations.

    • • •

    How does a human being make a decision? Individual decision making has always been opaque – who can query what happens inside someone’s head? We gather input, we weigh options and impacts, we test assumptions through conversations with others. And then we make a call – and we hope for the best.

    But when others are making decisions that impact us, well, those kinds of decisions require governance. Over thousands of years we’ve designed systems to ensure that our most important societal decisions can be queried and audited for fairness, that they are defensible against some shared logic, that they will benefit society at large.

    We call these systems government. It is imperfect but… it’s better than anarchy.

    For centuries, government regulations have constrained social decisions that impact health, job applications, credit – even our public square. Dating we’ve left to the governance of cultural norms, which share the power of government over much of the world.

    But in just the past decade, we’ve ceded much of this governance to private companies – companies motivated by market imperatives which demand their decision making processes be hidden. Our public government – and our culture – have not kept up.

    What happens when decisions are taken by algorithms of governance that no one understands? And what happens when those algorithms are themselves governed by a philosophy called capitalism?

    • • •

    We’ve begun a radical experiment combining technology and capitalism, one that most of us have scarcely considered. Our public commons – that which we held as owned by all, to the benefit of all – is increasingly becoming privatized.

    Thousands of companies are now dedicated to revenue extraction in the course of delivering what were once held as public goods. Public transportation is being hollowed out by Uber, Lyft, and their competitors (leveraging public goods like roadways, traffic infrastructure, and GPS).  Public education is losing funding to private schools, MOOCs, and for-profit universities. Public health, most disastrously in the United States, is driven by a capitalist philosophy tinged with technocratic regulatory capture. And in perhaps the greatest example of all, we’ve ceded our financial future to the almighty 401K – individuals can no longer count on pensions or social safety nets – they must instead secure their future by investing in “the markets” – markets which have become inhospitable to anyone lacking the technological acumen of the world’s most cutting-edge hedge funds.

    What’s remarkable and terrifying about all of this is the fact that the combinatorial nature of technology and capitalism outputs fantastic wealth for a very few, and increasing poverty for the very many. It’s all well and good to claim that everyone should have a 401K. It’s irresponsible to continue that claim when faced with the reality that 84 percent of the stock market is owned by the wealthiest ten percent of the population.

    This outcome is not sustainable. When a system of governance fails us, we must examine its fundamental inputs and processes, and seek to change them.

    • • •

    So what truly is governing us in the age of data, code, algorithms and processing? For nearly five decades, the singular true north of capitalism has been to enrich corporate shareholders. Other stakeholders – employees, impacted communities, partners, customers – do not directly determine the governance of most corporations.

    Corporations are motivated by incentives and available resources. When the incentive is extraction of capital to be placed in the pockets of shareholders, and a new resource becomes available which will aid that extraction, companies will invent fantastic new ways to leverage that resource so as to achieve their goal. If that resource allows corporations to skirt current regulatory frameworks, or bypass them altogether, so much the better.

    The new resource, of course, is the combination of data, code, algorithms and processing. Unbridled, replete with the human right of speech and its attendant purchasing of political power, corporations are quite literally becoming our governance model.

    Now the caveat: Allow me to state for the record that I am not a socialist. If you’ve never read my work, know I’ve started six companies, invested in scores more, and consider myself an advocate of transparently governed free markets. But we’ve leaned too far over our skis – the facts no longer support our current governance model.

    • • •

    We turn our world to data; leveraging that data, technocapitalism then terraforms our world. Nowhere is this more evident than with automation – the largest cost of nearly every corporation is human labor, and digital technologies are getting extraordinarily good at replacing that cost.

    Nearly everyone agrees this shift is not new – yes yes, a century or two ago, most of us were farmers. But this shift is coming far faster, and with far less considered governance. The last great transition came over generations. Technocapitalism has risen to its current heights in ten short years. Ten years. 

    If we are going to get this shift right, we urgently need to engage in a dialog about our core values. Can we perhaps rethink the purpose of work, given work no longer means labor? Can we reinvent our corporations and our regulatory frameworks to honor, celebrate and support our highest ideals? Can we prioritize what it means to be human even as we create and deploy tools that make redundant the way of life we’ve come to know these past few centuries?

    These questions beg a simpler one: What makes us human?

    I dusted off my old cultural anthropology texts, and consulted the scholars. The study of humankind teaches us that we are unique in that we are transcendent toolmakers – and digital technology is our most powerful  tool. We have nuanced language, which allows us both recollection of the past, and foresight into the future. We are wired – literally at the molecular level – to be social, to depend on one another, to share information and experience. Thanks to all of this, we have the capability to wonder, to understand our place in the world, to philosophize. The love of beauty,  philosophers will tell you, is the most human thing of all.

    Oh, but then again, we are uniquely capable of intentionally destroying ourselves. Plenty of species can do that by mistake. We’re unique in our ability to do it on purpose.

    But perhaps the thing that makes us most human is our love of storytelling, for narrative weaves nearly everything human into one grand experience. Our greatest philosophers even tell stories about telling stories! The best stories employ sublime language, advanced tools, deep community, profound wonder, and inescapable narrative tension. That ability to destroy ourselves? That’s the greatest narrative driver in the history of mankind.

    How will it turn out?

    • • •

    We are storytelling engines uniquely capable of understanding our place in the world. And it’s time to change our story, before we fail a grand test of our own making: Can we transition to a world inhabited by both ourselves, and the otherness of the technology we’ve created? Should we fail, nature will indifferently shrug its shoulders. It has billions of years to let the whole experiment play over again.

    We are the architects of this grand narrative. Let’s not miss our opportunity to get it right.

    Adapted from a speech presented at the Thrival Humans X Tech conference in Pittsburgh earlier this week. 

    Cross posted from NewCo Shift. 

     

     
  • feedwordpress 14:40:18 on 2018/09/18 Permalink
    Tags: Technology

    Dear Marc: Please, *Do* Get Involved 

    The Los Angeles Times was the first newspaper I ever read – I even attended a grammar school named for its founding family (the Chandlers). Later in life I worked at the Times for a summer – and found even back then, the great brand had begun to lose its way.

    I began reading The Atlantic as a high schooler in the early 1980s, and in college I dreamt of writing long form narratives for its editors. In graduate school, I even started a publication modeled on The Atlantic‘s brand – I called it The Pacific. My big idea: The west coast was a huge story in desperate need of high-quality narrative journalism. (Yes, this was before Wired.)

    I toured The Washington Post as a teenager, and saw the desks where Bernstein and Woodward brought down a corrupt president. I met Katharine Graham once, at a conference I hosted, and I remain star struck by the institution she built to this day.

    And every seven days, for more than five decades, Time magazine came to my parents’ home, defining the American zeitgeist and smartly summarizing what mattered in public discourse.

    Now all four of my childhood icons are owned by billionaires who made their fortunes in technology. History may not repeat, but it certainly rhymes. During the Gilded Age, our last great era of unbridled income inequality, many of America’s greatest journalistic institutions were owned by wealthy industrialists. William Randolph Hearst was heir to a mining fortune. Joseph Pulitzer came from a wealthy European merchant family, though he came to the US broke and epitomized the American “self made man.” Andrew Carnegie, Jay Gould, Cornelius Vanderbilt Jr., and Henry Flagler all dabbled in newspapers, with a healthy side of politics, which drove nearly all of American publishing during the Gilded Age.

    Which brings us to the Benioffs, and to Time. This week’s announcement struck all the expected notes – “The Benioffs will hold TIME as a family investment,” “TIME is a treasure trove of the world’s history and culture,” “Lynne and I will take on no operational responsibility for TIME, and look only to be stewards of this historic and iconic brand.”

    Well to that, I say poppycock. Time needs fixing, not benign stewardship. While it may be appropriate and politic to proclaim a hands-off approach, the flagship brand of the former Time Inc. empire could use a strong dose of what the Benioffs have to offer. Here’s my hot take on why and how:

    • Don’t play down the middle. What the United States needs right now is a voice of reason, of strength, of post-Enlightenment thinking. Not a safe, bland version of “on the one hand, on the other hand” journalism. As Benioff well knows, politics is now the biggest driver of attention in the land, and taking a principled stand matters more than ever.
    • Learn from Bezos. Sure, the richest man in the world didn’t mess with the editorial side of the house, but then again, he already had an extraordinary leader in Marty Baron at the helm. But Bezos did completely shift the business model at the Post, implementing entirely new approaches to, well, pretty much every operating model in the building. New revenue leadership, new software platforms and processes, even a new SaaS business line. He thoroughly modernized the place, and if ever a place needed the same, it’s Time.
    • Invest in the product – editorial. But thoughtfully. First and foremost, the Benioffs should force the Time team to answer the most important question facing any consumer brand: What differentiation demands a premium? Why should Time earn someone’s attention (and money)? What makes the publication unique? What does its brand stand for, beyond history and a red band around the cover? What mission is it on? If anyone understands these issues, it’s Marc and Lynne Benioff. Don’t hold back on forcing this difficult conversation – including on staffing and leadership (I’ve no bone to pick with anyone there, BTW). American journalism needs it, now. I can imagine a Time magazine where the most talented and elite commentators debate the issues of our day. And what issues they truly are! But to draw them, the product must sing, and it must also pay. Abolish the practice of paying a pittance for an argument well rendered. It’s time.
    • Related, rethink the print business. Print isn’t dead, but it needs a radical rethink. There isn’t a definitive weekly journal of sensible political and social discourse in America, and there really should be. The New Yorker is comfortably highbrow, US News is a college review site, Newsweek is rudderless. Time has a huge opportunity, but as it stands, it plays to the middle far too much, and online, it tries to be everything to nobody. Perhaps the hardest, but most important thing anyone can do at a struggling print magazine is to cut circulation (the base number of readers) and find its truly passionate brand advocates. The company already did this a year ago, but it may not have gone far enough. Junk circulation is rife in the magazine business. It’s also rampant online, which leads to…
    • Please, fix the website. A site whose copyright notice is nearly ten months out of date is not run like a lean product shop. Time online is a poster child for compromised business decisions driven entirely by acquiring junk audience (did you know that Time has 60mm uniques? Yeah, neither do they). Every single page on Time.com is littered with half a dozen or more competing display banners. The place stinks of desperate autoplay video, programmatic pharmaceutical come-ons, and tawdry link bait (there are literally THREE instances of Outbrain-like junk on each article page. THREE!). Fixing this economic and product mess requires deep pockets and strong product imagination. The Benioffs have both. Invent (and/or copy) new online models where the advertising adds value, where marketers would be proud to support the product. I’ve spoken to dozens of senior marketers looking to lean into high-quality news analysis. They’ve got very little to support at present. Time could change that.
    • Move out of Time Inc’s headquarters. Like, this week. The original Time Inc. HQ was stultifying and redolent with failure, but even the new digs downtown bear the albatross of past glories. It’s soul crushing. As an independent brand, Time needs a space that reclaims its pioneer spirit, and encourages its staff to rethink everything. Move to Nomad, the Flatiron, West Chelsea – anywhere but a skyscraper in the financial district.
    • Finally, leverage and rethink the cover. One of the largest single losses in the shift from analog to digital publishing was the loss of covers – the album cover (and its attendant liner notes), the book cover (and its attendant social signaling), and the magazine cover (and its attendant declarative power). The magazine cover is social artifact, editorial arbiter, cultural convener. The digital world still lacks the analog cover’s power. Time should make it a priority to invent its successor. Lock ten smart humans in a room full of whiteboards and don’t let them out till they have a dozen or more good ideas. Then test and learn – the answer is in there somewhere. The world needs editorial convening more than ever.

    There’s so much more, but I didn’t actually set out to write a post about how to fix Time – I was merely interested in the historical parallels of successful industrialists who turned to publishing as they consolidated their legacies. In an interview with the New York Times this week, Benioff claimed his purchase of Time was aligned with his mission of “impact investing,” and that he was not going to be operationally involved. Well, Marc, if you truly want to have an impact, I beg to differ: Please do get involved, and the sooner the better.

     

     
  • feedwordpress 13:43:08 on 2018/09/06 Permalink
    Tags: Technology

    Facebook, Twitter, and the Senate Hearings: It’s The Business Model, Period. 

    “We weren’t expecting any of this when we created Twitter over 12 years ago, and we acknowledge the real world negative consequences of what happened and we take the full responsibility to fix it.”

    That’s the most important line from Twitter CEO Jack Dorsey’s testimony yesterday – and in many ways it’s also the most frustrating. But I agree with Ben Thompson, who this morning points out (sub required) that Dorsey’s philosophy on how to “fix it” was strikingly different from that of Facebook COO Sheryl Sandberg (or Google, which failed to send a C-level executive to the hearings). To quote Dorsey (emphasis mine): “Today we’re committing to the people and this committee to do that work and do it openly. We’re here to contribute to a healthy public square, not compete to have the only one. We know that’s the only way our business thrives and helps us all defend against these new threats.”

    Ben points out that during yesterday’s hearings, Dorsey was willing to tie the problems of public discourse on Twitter directly to the company’s core business model, that of advertising. Sandberg? She ducked the issue and failed to make the link.

    You may recall my piece back in January, Facebook Can’t Be Fixed. In it I argue that the only way to address Facebook’s failings as a public square would be to totally rethink its core advertising model, a golden goose which has driven the company’s stock on a six-year march to the stratosphere. From the post:

    “[Facebook’s ad model is] the honeypot which drives the economics of spambots and fake news, it’s the at-scale algorithmic enabler which attracts information warriors from competing nation states, and it’s the reason the platform has become a dopamine-driven engagement trap where time is often not well spent.

    To put it in Clintonese: It’s the advertising model, stupid.

    We love to think our corporate heroes are somehow super human, capable of understanding what’s otherwise incomprehensible to mere mortals like the rest of us. But Facebook is simply too large an ecosystem for one person to fix.”

    That one person, of course, is Mark Zuckerberg, but what I really meant was one company – Facebook. It’s heartening to see Sandberg acknowledge, as she did in her written testimony, the scope and the import of the challenges Facebook presents to our democracy (and to civil society around the world). But regardless of sops to “working closely with law enforcement and industry peers” and “everyone working together to stay ahead,” it’s clear Facebook’s approach to “fixing” itself remains one of going it alone. A robust, multi-stakeholder approach would quickly identify Facebook’s core business model as a major contributor to the problem, and that’s an existential threat.

    Sandberg’s most chilling statement came at the end of her prepared remarks, in which she defined Facebook as engaged in an “arms race” against actors who co-opt the company’s platforms. Facebook is ready, Sandberg implied, to accept the challenge of lead arms producer in this race: “We are determined to meet this challenge,” she concludes.

    Well I’m sorry, I don’t want one private company in charge of protecting civil society. I prefer a more accountable social structure, thanks very much.

    I’ve heard this language of “arms races” before, in a far less consequential framework: Advertising fraud, in particular on Google’s search platforms. To combat this fraud, Google locked arms with a robust network of independent companies, researchers, and industry associations, eventually developing a solution that tamed the issue (it’s never going to go away entirely). That approach – an open and transparent process, subject to public checks and balances – is what is desperately needed now, and what Dorsey endorsed in his testimony. He’s right to do so. Unlike Google’s ad fraud issues of a decade ago, Facebook and Twitter’s problems extend to life or death, on-the-ground consequences – the rise of a dictator in the Philippines, genocide in Myanmar, hate crimes in Sri Lanka, and the loss of public trust (and possibly an entire presidential election) here in the United States. The list is terrifying, and it’s growing every week.

    These are not problems one company, or even a heterogeneous blue ribbon committee, can or should “fix.” Facebook does not bear full responsibility for these problems – any more than Trump is fully responsible for the economic, social, and cultural shifts which swept him into office. But just as Trump has become the face of what’s broken in American discourse today, Facebook – and tech companies more broadly – have become the face of what’s broken in capitalism. Despite its optimistic, purpose-driven, and ultimately naive founding principles, the technology industry has unleashed a mutated version of steroidal capitalism upon the world, failing along the way to first consider the potential damage its business models might wreak.

    In an OpEd introducing the ideas in his new book “Farsighted”, author Steven Johnson details how good decisions are made, paying particular attention to how important it is to have diverse voices at the table capable of imagining many different potential scenarios for how a decision might play out. “Homogeneous groups — whether they are united by ethnic background, gender or some other commonality like politics — tend to come to decisions too quickly,” Johnson writes.  “They settle early on a most-likely scenario and don’t question their assumptions, since everyone at the table seems to agree with the broad outline of the interpretation.”

    Sounds like the entire tech industry over the past decade, no?

    Johnson goes on to quote the economist and Nobel laureate Thomas Schelling: “One thing a person cannot do, no matter how rigorous his analysis or heroic his imagination, is to draw up a list of things that would never occur to him.”

    It’s clear that the consequences of Facebook’s platforms never occurred to Zuckerberg, Sandberg, Dorsey, or other leaders in the tech industry. But now that the damage is clear, they must be brave enough to consider new approaches.

    To my mind, that will require objective study of tech’s business models, and an open mind toward changing them. It seems Jack Dorsey has realized that. Sheryl Sandberg and her colleagues at Facebook? Not so much.

  • feedwordpress 17:07:06 on 2018/08/01 Permalink
    Tags: geopolitics, Technology

    Google and China: Flip, Flop, Flap 

    Google’s Beijing offices in 2010, when the company decided to stop censoring its results and exit the market.

    I’ve been covering Google’s rather tortured relationship with China for more than 15 years now. The company’s off again, on again approach to the Internet’s largest “untapped” market has proven vexing, but as today’s Intercept scoop informs us, it looks like Google has yielded to its own growth imperative, and will once again stand up its search services for the Chinese market. To wit:

    GOOGLE IS PLANNING to launch a censored version of its search engine in China that will blacklist websites and search terms about human rights, democracy, religion, and peaceful protest, The Intercept can reveal.

    The project – code-named Dragonfly – has been underway since spring of last year, and accelerated following a December 2017 meeting between Google’s CEO Sundar Pichai and a top Chinese government official, according to internal Google documents and people familiar with the plans.

    If I’m reading the story correctly, it looks like Google’s China plans, which were kept secret from nearly all of the company’s employees, were leaked to The Intercept by concerned members of Google’s internal “Dragonfly” team, one of whom was quoted:

    “I’m against large companies and governments collaborating in the oppression of their people, and feel like transparency around what’s being done is in the public interest,” the source said, adding that they feared “what is done in China will become a template for many other nations.”

    This news raises any number of issues – for Google, certainly, but given the US/China trade war, for anyone concerned with the future of free trade and open markets. And it revives an age-old question about where the line is between “respecting the rule of law in markets where we operate,” a standard tech company response to doing business on foreign soil, and “enabling authoritarian rule,” which is pretty much what Google will be doing should it actually launch the Dragonfly app.

    A bit of history. Google originally refused to play by China’s rules, and in my 2004 book, I reviewed the history, and gave the company props for taking a principled stand, and forsaking what could have been massive profits in the name of human rights. Then, in 2006, Google decided to enter the Chinese market, on government terms. Google took pains to explain its logic:

    We ultimately reached our decision by asking ourselves which course would most effectively further Google’s mission to organize the world’s information and make it universally useful and accessible. Or, put simply: how can we provide the greatest access to information to the greatest number of people?

    I didn’t buy that explanation then, and I don’t buy it now. Google is going into China for one reason, and one reason alone: Profits. As Google rolled out its service in 2006, I penned something of a rant, titled “Never Poke A Dragon While It’s Eating.” In it I wrote:

    The Chinese own a shitload of our debt, and are consuming a shitload of the world’s export base of oil. As they consolidate their power, do you really believe they’re also planning parades for us? I’m pretty sure they’ll be celebrating decades of US policy that looked the other way while the oligarchy used our technology (and that includes our routers, databases, and consulting services) to meticulously undermine the very values which allowed us to create companies like Google in the first place. But those are not the kind of celebrations I’m guessing we’d be invited to.

    So as I puzzle through this issue, understanding how in practical terms it’s really not sensible to expect that some GYMA pact is going to change the world (as much as I might wish it would), it really, honestly, comes down to one thing: The man in the White House.

    Until the person leading this country values human rights over appeasement, and decides to lead on this issue, we’re never going to make any progress. 

    Google pulled out of China in 2010, using a China-backed hacking incident as its main rationale (remember that?!).  The man in the White House was – well let’s just say he wasn’t Bush, nor Clinton, and he wasn’t Trump. In any case, the hacking incident inconveniently reminded Google that the Chinese government has no qualms about using data derived from Google services to target its own citizens.

    Has the company forgotten that fact? One wonders. Back in 2010, I praised the company for standing up to China:

    In this case, Google is again taking a leadership role, and the company is forcing China’s hand. While it’s a stretch to say the two things are directly connected, the seeming fact that China’s government was behind the intrusions has led Google to decide to stop censoring its results in China. This is politics at its finest, and it’s a very clear statement to China: We’re done playing the game your way.

    Seems Google’s not done after all. Which is both sad, and utterly predictable. Sad, because in today’s political environment, we need our companies to lead on moral and human rights issues. And predictable, because Android has a massive hold on China’s internet market, and Google’s lack of a strong search play there threatens not only the company’s future growth in its core market, but its ability to leverage Android across all its services, just as it has in Europe and the United States.

    Google so far has not made a statement on The Intercept’s story, though I imagine smoke is billowing out of some communications war room inside the company’s Mountain View headquarters.  Will the company attempt some modified version of its 2006 justifications? I certainly hope not. This time, I’d counsel, the company should just tell the truth: Google is a public company that feels compelled to grow, regardless of whether that growth comes at a price to its founding values. Period, end of story.

    I’ll end with another quote from that 2006 “Never Poke a Dragon” piece:

    …companies like Yahoo and Google don’t traffic in sneakers, they traffic in the most powerful forces in human culture – expression. Knowledge. Ideas. The freedom of which we take as fundamental in this country, yet somehow, we seem to have forgotten its importance in the digital age – in China, one protesting email can land you in jail for 8 years, folks.

    …Congress can call hearings, and beat up Yahoo, Google and the others for doing what everyone else is doing, but in the end, it’s not (Google’s) fault, nor, as much as I wish they’d take it on, is it even their problem. It’s our government’s problem….Since when is China policy somehow the job of private industry?

    Until that government gives (the tech industry) a China policy it can align behind, well, they’ll never align, and the very foundation of our culture – free expression and privacy, will be imperiled.

    After all, the Chinese leaders must be thinking, as they snack on our intellectual property, we’re only protecting our citizens in the name of national security.

    Just like they do in the US, right?

     