
  • feedwordpress 23:02:06 on 2018/11/12 Permalink
Tags: Google, ideas

    When Tech Loves Its Fiercest Critics, Buyer Beware 

Detail from the cover of Harari’s latest work, 21 Lessons for the 21st Century.

    A year and a half ago I reviewed Yuval Noah Harari’s Homo Deus, recommending it to the entire industry with this subhead: “No one in tech is talking about Homo Deus. We most certainly should be.”

Eighteen months later, Harari is finally having his technology industry moment. The author of a trio of increasingly disturbing books – Sapiens, which made his name as a popular historian-philosopher; the aforementioned Homo Deus, which introduced a dark strain of tech futurism to his work; and the recent 21 Lessons for the 21st Century – Harari has cemented his place in the Valley as tech’s favorite self-flagellant. So it’s only fitting that this weekend Harari was the subject of a New York Times profile featuring this provocative title: Tech C.E.O.s Are in Love With Their Principal Doomsayer. The subhead continues: “The futurist philosopher Yuval Noah Harari thinks Silicon Valley is an engine of dystopian ruin. So why do the digital elite adore him so?”

Well, I’m not sure if I qualify as one of those elites, but I have a theory, one that wasn’t quite raised in the Times’ otherwise compelling profile. I’ve been a student of Harari’s work, and if there’s one clear message, it’s this: We’re running headlong into a world controlled by a tiny elite of superhumans, masters of new technologies that the “useless class” will never understand. “Homo sapiens is an obsolete algorithm,” Harari writes in Homo Deus. A new religion of Dataism will transcend our current obsession with ourselves, and we will “dissolve within the data torrent like a clump of earth within a gushing river.” In other words, we humans are f*cked, save for a few of the lucky ones who manage to transcend their fate and become masters of the machines. “Silicon Valley is creating a tiny ruling class,” the Times writes, paraphrasing Harari’s work, “and a teeming, furious ‘useless class.’”

So here’s why I think the Valley loves Harari: We all believe we’ll be members of that tiny ruling class. It’s an indefensible, mathematically impossible belief, but as Harari reminds us in 21 Lessons, “never underestimate human stupidity.” Put another way, we are fooling ourselves, content to imagine we’ll somehow all earn a ticket into (or onto) whatever apocalypse-dodging exit plan Musk, Page or Bezos might dream up (they’re all obsessed with leaving the planet, after all). Believing that impossible fiction is certainly a lot easier than doing the quotidian work of actually fixing the problems that lie before us. Better to be one of the winners than to risk losing along with the rest of the useless class, no?

    But we can’t all be winners in the future Harari lays out, and he seems to understand this fact. “If you make people start thinking far more deeply and seriously about these issues,” he said to the Times, “some of the things they will think about might not be what you want them to think about.”

    Exactly, Professor. Now that I’ve departed the Valley, where I spent nearly three decades of my life, I’m starting to gain a bit of perspective on my own complicated relationship with the power structure of the place. I grew up with the (mostly) men who lead companies like Amazon, Google, Facebook and Apple, and early in the industry’s rise, it was heady to share the same stage with legends like Bezos, Jobs, or Page. But as the technology industry becomes the driving force of social rupture, I’m far more skeptical of its leaders’ abilities to, well, lead.

Witness this nearly idea-free interview with Google CEO Sundar Pichai, also in the Times, where the meticulously media-prepped executive opines on whether his industry has a role to play in society’s ills: “Every generation is worried about the new technology, and feels like this time it’s different. Our parents worried about Elvis Presley’s influence on kids. So, I’m always asking the question, ‘Why would it be any different this time?’ Having said that, I do realize the change that’s happening now is much faster than ever before. My son still doesn’t have a phone.”

Pichai’s son may not have a phone, but he is earning money mining Ethereum (really, you can’t make this shit up). I’m not sure the son of a centi-millionaire needs to earn money – but it certainly is useful to master the algorithms that will soon control nearly every aspect of human life. So – no, son, no addictive phone for you (even though my company makes them, and makes their operating systems, and makes the apps which ensure their addictive qualities).

But mining cryptocurrency? Absolutely!

Should Harari be proven right and humanity become irrelevant, I’m pretty sure Pichai’s son will have a first-class ticket out of whatever mess is left behind. But the rest of us? We should probably focus on making sure that kid never needs to use it.

    Cross posted from NewCo Shift. 


    By the way, the other current obsession of Valley folks is author Anand Giridharadas’ Winners Take All – The Elite Charade of Changing the World. Read them together for a one-two punch, if you dare…

     
  • feedwordpress 17:32:13 on 2018/08/28 Permalink
Tags: elections, fake news, free press, Google

    Hey Jack, Sheryl, and Sundar: It’s Time to Call Out Trump On Fake News. 

    Next week Sheryl Sandberg, COO of Facebook, and Jack Dorsey, CEO of Twitter, will testify in front of Congress. They must take this opportunity to directly and vigorously defend the role that real journalism plays not only on their platforms, but also in our society at large. They must declare that truth exists, that facts matter, and that while reasonable people can and certainly should disagree about how to respond to those facts, civil society depends on rational discourse driven by an informed electorate.

    Why am I on about this? I do my very best to ignore our current president’s daily doses of Twitriol, but I couldn’t whistle past today’s rant about how tech platforms are pushing an anti-Trump agenda.

    Seems the president took a look at himself in Google’s infinite mirror, and he apparently didn’t like what he saw. Of course, a more cynical reading would be that his advisors reminded him that senior executives from Twitter, Facebook, and Google* are set to testify in front of Congress next week, providing a perfect “blame others and deflect narrative from myself” moment for our Bully In Chief.

Trump’s hatred for journalism is legendary, and his disdain for any truth that doesn’t flatter is well established. As numerous actual news outlets have already established, there’s simply no evidence that Google’s search algorithms do anything other than reflect the reality of Trump news, which, in the world of *actual journalism*, where facts and truth matter, is fundamentally negative. This is not because of bias – this is because Trump creates fundamentally negative stories. You know, like failing to honor a war hero, failing to deliver on his North Korea promises, failing to fix his self-imposed policy of imprisoning children, failing to hire advisors who can avoid guilty verdicts…and all that was just in the last week or so.

    But the point of this post isn’t to go on a rant about our president. Instead, I want to make a point about the leaders of our largest technology platforms.

    It’s time Jack, Sheryl, Sundar, and others take a stand against this insanity.  Next week, at least two of them actually have just that chance.

I’ll lay out my biases for anyone reading who might suspect I’m an agent of the “Fake News Media.” I’m on the advisory board of NewsGuard, a startup that ranks news sites for accuracy and reliability. I’m running NewsGuard’s browser plug-in right now, and every single news site that comes up for a Google News search on “Trump News” is flagged as green – or reliable.

    NewsGuard is run by two highly respected members of the “real” media – one of whom is a longstanding conservative, the other a liberal.

I’m also an advisor and investor in RoBhat Labs, which recently released a plugin that identifies fake images in news articles. Beyond that, I’ve taught journalism at UC Berkeley, where I graduated with a master’s after two years of study and remain on the advisory board. I’m also a member of several ad-hoc efforts to address what I’ve come to call the “Real Fake News,” most of which peddles far-right-wing conspiracy theories, often driven by hostile state actors like Russia. I’ve testified in front of Congress on these issues, and I’ve spent thirty years of my life in the world of journalism and media. I’m tired of watching our president defame our industry, and I’m equally tired of watching the leaders of our tech industry fail to respond to his systematic dismantling of our civil discourse (or worse, pander to it).

    So Jack, Sheryl, and whoever ends up coming from Google, here’s my simple advice: Stand up to the Bully in Chief. Defend civil discourse and the role of truth telling and the free press in our society. A man who endlessly claims that the press is the enemy is a man to be called out. Heed these words:

    “It is the press, above all, which wages a positively fanatical and slanderous struggle, tearing down everything which can be regarded as a support of national independence, cultural elevation, and the economic independence of the nation.”

No one would claim these are Trump’s words; the prose is far too elegant. But the sentiment is utterly Trumpian. With apologies to Mike Godwin, those words belong to Adolf Hitler. Think about that, Jack, Sheryl, and Sundar. And speak from your values next week.

    *Google tried to send its general counsel, Kent Walker, but Congress is tired of hearing from lawyers. It’s uncertain if the company will step up and send an actual leader like Sundar or Susan. 

     

     
  • feedwordpress 17:07:06 on 2018/08/01 Permalink
Tags: geopolitics, Google

    Google and China: Flip, Flop, Flap 

    Google’s Beijing offices in 2010, when the company decided to stop censoring its results and exit the market.

I’ve been covering Google’s rather tortured relationship with China for more than 15 years now. The company’s off-again, on-again approach to the Internet’s largest “untapped” market has proven vexing, but as today’s Intercept scoop informs us, it looks like Google has yielded to its own growth imperative, and will once again stand up its search services for the Chinese market. To wit:

    GOOGLE IS PLANNING to launch a censored version of its search engine in China that will blacklist websites and search terms about human rights, democracy, religion, and peaceful protest, The Intercept can reveal.

    The project – code-named Dragonfly – has been underway since spring of last year, and accelerated following a December 2017 meeting between Google’s CEO Sundar Pichai and a top Chinese government official, according to internal Google documents and people familiar with the plans.

If I’m reading the story correctly, it looks like Google’s China plans, which were kept secret from nearly all of the company’s employees, were leaked to The Intercept by concerned members of Google’s internal “Dragonfly” team, one of whom was quoted:

    “I’m against large companies and governments collaborating in the oppression of their people, and feel like transparency around what’s being done is in the public interest,” the source said, adding that they feared “what is done in China will become a template for many other nations.”

This news raises any number of issues – for Google, certainly, but given the US/China trade war, for anyone concerned with the future of free trade and open markets. And it revives an age-old question about where the line is between “respecting the rule of law in markets where we operate,” a standard tech company response to doing business on foreign soil, and “enabling authoritarian rule,” which is pretty much what Google will be doing should it actually launch the Dragonfly app.

    A bit of history. Google originally refused to play by China’s rules, and in my 2004 book, I reviewed the history, and gave the company props for taking a principled stand, and forsaking what could have been massive profits in the name of human rights. Then, in 2006, Google decided to enter the Chinese market, on government terms. Google took pains to explain its logic:

    We ultimately reached our decision by asking ourselves which course would most effectively further Google’s mission to organize the world’s information and make it universally useful and accessible. Or, put simply: how can we provide the greatest access to information to the greatest number of people?

    I didn’t buy that explanation then, and I don’t buy it now. Google is going into China for one reason, and one reason alone: Profits. As Google rolled out its service in 2006, I penned something of a rant, titled “Never Poke A Dragon While It’s Eating.” In it I wrote:

    The Chinese own a shitload of our debt, and are consuming a shitload of the world’s export base of oil. As they consolidate their power, do you really believe they’re also planning parades for us? I’m pretty sure they’ll be celebrating decades of US policy that looked the other way while the oligarchy used our technology (and that includes our routers, databases, and consulting services) to meticulously undermine the very values which allowed us to create companies like Google in the first place. But those are not the kind of celebrations I’m guessing we’d be invited to.

    So as I puzzle through this issue, understanding how in practical terms it’s really not sensible to expect that some GYMA pact is going to change the world (as much as I might wish it would), it really, honestly, comes down to one thing: The man in the White House.

    Until the person leading this country values human rights over appeasement, and decides to lead on this issue, we’re never going to make any progress. 

    Google pulled out of China in 2010, using a China-backed hacking incident as its main rationale (remember that?!).  The man in the White House was – well let’s just say he wasn’t Bush, nor Clinton, and he wasn’t Trump. In any case, the hacking incident inconveniently reminded Google that the Chinese government has no qualms about using data derived from Google services to target its own citizens.

    Has the company forgotten that fact? One wonders. Back in 2010, I praised the company for standing up to China:

    In this case, Google is again taking a leadership role, and the company is forcing China’s hand. While it’s a stretch to say the two things are directly connected, the seeming fact that China’s government was behind the intrusions has led Google to decide to stop censoring its results in China. This is politics at its finest, and it’s a very clear statement to China: We’re done playing the game your way.

    Seems Google’s not done after all. Which is both sad, and utterly predictable. Sad, because in today’s political environment, we need our companies to lead on moral and human rights issues. And predictable, because Android has a massive hold on China’s internet market, and Google’s lack of a strong search play there threatens not only the company’s future growth in its core market, but its ability to leverage Android across all its services, just as it has in Europe and the United States.

    Google so far has not made a statement on The Intercept’s story, though I imagine smoke is billowing out of some communications war room inside the company’s Mountain View headquarters.  Will the company attempt some modified version of its 2006 justifications? I certainly hope not. This time, I’d counsel, the company should just tell the truth: Google is a public company that feels compelled to grow, regardless of whether that growth comes at a price to its founding values. Period, end of story.

    I’ll end with another quote from that 2006 “Don’t Poke a Dragon” piece:

    …companies like Yahoo and Google don’t traffic in sneakers, they traffic in the most powerful forces in human culture – expression. Knowledge. Ideas. The freedom of which we take as fundamental in this country, yet somehow, we seem to have forgotten its importance in the digital age – in China, one protesting email can land you in jail for 8 years, folks.

    …Congress can call hearings, and beat up Yahoo, Google and the others for doing what everyone else is doing, but in the end, it’s not (Google’s) fault, nor, as much as I wish they’d take it on, is it even their problem. It’s our government’s problem….Since when is China policy somehow the job of private industry?

    Until that government gives (the tech industry) a China policy it can align behind, well, they’ll never align, and the very foundation of our culture – free expression and privacy, will be imperiled.

    After all, the Chinese leaders must be thinking, as they snack on our intellectual property, we’re only protecting our citizens in the name of national security.

    Just like they do in the US, right?

     
  • feedwordpress 23:59:30 on 2018/06/01 Permalink
Tags: crypto, Google, world wide web

    Do We Want A Society Built On The Architecture of Dumb Terminals? 

    The post Do We Want A Society Built On The Architecture of Dumb Terminals? appeared first on John Battelle's Search Blog.

    God, “innovation.” First banalized by undereducated entrepreneurs in the oughts, then ground to pablum by corporate grammarians over the past decade, “innovation” – at least when applied to business – deserves an unheralded etymological death.

    But.

    This will be a post about innovation. However, whenever I feel the need to peck that insipid word into my keyboard, I’m going to use some variant of the verb “to flourish” instead. Blame Nobel laureate Edmond Phelps for this: I recently read his Mass Flourishing, which outlines the decline of western capitalism, and I find its titular terminology far less annoying.

    So flourishing it will be.

    In his 2013 work, Phelps (who received the 2006 Nobel in economics) credits mass participation in a process of innovation (sorry, there’s that word again) as central to mass flourishing, and further argues – with plenty of economic statistics to back him up – that it’s been more than a full generation since we’ve seen mass flourishing in any society. He writes:

    …prosperity on a national scale—mass flourishing—comes from broad involvement of people in the processes of innovation: the conception, development, and spread of new methods and products—indigenous innovation down to the grassroots. This dynamism may be narrowed or weakened by institutions arising from imperfect understanding or competing objectives. But institutions alone cannot create it. Broad dynamism must be fueled by the right values and not too diluted by other values.

    Phelps argues the last “mass flourishing” economy was the 1960s in the United States (with a brief but doomed resurgence during the first years of the open web…but that promise went unfulfilled). And he warns that “nations unaware of how their prosperity is generated may take steps that cost them much of their dynamism.” Phelps further warns of a new kind of corporatism, a “techno nationalism” that blends state actors with corporate interests eager to collude with the state to cement market advantage (think Double Irish with a Dutch Sandwich).

These warnings were proffered largely before our current debate about the role of the tech giants now so dominant in our society. But they set an interesting context and raise important questions. What happens, for instance, when large corporations capture the regulatory framework of a nation and lock in their current market dominance (and, in the case of Big Tech, their policies around data use)?

I began this post with Phelps to make a point: The rise of massive data monopolies in nearly every aspect of our society is not only choking off shared prosperity, it has also blinkered our shared vision for the kind of future we could possibly inhabit, if only we architect our society to enable it. But to imagine a different kind of future, we first have to examine the present we inhabit.

    The Social Architecture of Data 

I use the term “architecture” intentionally; it’s been front of mind for several reasons. Perhaps the most difficult thing for any society to do is to share a vision of the future, one that a majority might agree upon. Envisioning the future of a complex living system – a city, a corporation, a nation – is challenging work, work we usually outsource to trusted institutions like government, religions, or McKinsey (half joking…).

    But in the past few decades, something has changed when it comes to society’s future vision. Digital technology became synonymous with “the future,” and along the way, we outsourced that future to the most successful corporations creating digital technology. Everything of value in our society is being transformed into data, and extraordinary corporations have risen which refine that data into insight, knowledge, and ultimately economic power. Driven as they are by this core commodity of data, these companies have acted to cement their control over it.

This is not unusual economic behavior; in fact, it’s quite predictable. So predictable that it’s developed its own structure – an architecture, if you will, of how data is managed in today’s information society. I’ve a hypothesis about this architecture – unproven at this point (as all are) – but one I strongly suspect is accurate. Here’s how it might look on a whiteboard:

We “users” deliver raw data to a service provider, like Facebook or Google, which then captures, refines, processes, and delivers that data back as services to us. The social contract we make is captured in these services’ Terms of Service – we may “own” the data, but for all intents and purposes, the power over that information rests with the platform. The user doesn’t have a lot of creative license to do much with that data he or she “owns” – it lives on the platform, and the platform controls what can be done with it.

    Now, if this sounds familiar, you’re likely a student of early computing architectures. Back before the PC revolution, most data, refined or not, lived on a centralized platform known as a mainframe. Nearly all data storage and compute processing occurred on the mainframe. Applications and services were broadcast from the mainframe back to “dumb terminals,” in front of which early knowledge workers toiled. Here’s a graph of that early mainframe architecture:

     

This mainframe architecture had many drawbacks – a central point of failure chief among them – but perhaps its most damning characteristic was its hierarchical, top-down design. From a user’s point of view, all the power resided at the center. This was great if you ran IT at a large corporation, but suffice to say the mainframe architecture didn’t encourage creativity or a flourishing culture.

The mainframe architecture was supplanted over time with a “client-server” architecture, where processing power migrated from the center to the edge, or node. This was due in large part to the rise of the networked personal computer (servers were used for storing services or databases of information too large to fit on PCs). Because they put processing power and data storage into the hands of the user, PCs became synonymous with a massive increase in productivity and creativity (Steve Jobs called them “bicycles for the mind”). With the PC revolution, power transferred from the “platform” to the user – a major architectural shift.

    The rise of networked personal computers became the seedbed for the world wide web, which had its own revolutionary architecture. I won’t trace it here (many good books exist on the topic), but suffice to say the core principle of the early web’s architecture was its distributed nature. Data was packetized and distributed independent of where (or how) it might be processed. As more and more “web servers” came online, each capable of processing data as well as distributing it, the web became a tangled, hot mess of interoperable computing resources. What mattered wasn’t the pipes or the journey of the data, but the service created or experienced by the user at the point of that service delivery, which in the early days was of course a browser window (later on, those points of delivery became smartphone apps and more).

    If you were to attempt to map the social architecture of data in the early web, your map would look a lot like the night sky – hundreds of millions of dots scattered in various constellations across the sky, each representing a node where data might be shared, processed, and distributed. In those early days the ethos of the web was that data should be widely shared between consenting parties so it might be “mixed and mashed” so as to create new products and services. There was no “mainframe in the sky” anymore – it seemed everyone on the web had equal and open opportunities to create and exchange value.

    This is why the late 1990s through mid oughts were a heady time in the web world – nearly any idea could be tried out, and as the web evolved into a more robust set of standards, one could be forgiven for presuming that the open, distributed nature of the web would inform its essential social architecture.

    But as web-based companies began to understand the true value of controlling vast amounts of data, that dream began to fade. As we grew addicted to some of the most revelatory web services – first Google search, then Amazon commerce, then Facebook’s social dopamine – those companies began to centralize their data and processing policies, to the point where we are now: Fearing these giants’ power over us, even as we love their products and services.

    An Argument for Mass Flourishing

    So where does that leave us if we wish to heed the concerns of Professor Phelps? Well, let’s not forget his admonition: “nations unaware of how their prosperity is generated may take steps that cost them much of their dynamism.” My hypothesis is simply this: Adopting a mainframe architecture for our most important data – our intentions (Google), our purchases (Amazon), our communications and social relationships (Facebook) – is not only insane, it’s also massively deprecative of future innovation (damn, sorry, but sometimes the word fits). In Facebook, Tear Down This Wall, I argued:

    … it’s impossible for one company to fabricate reality for billions of individuals independent of the interconnected experiences and relationships that exist outside of that fabricated reality. It’s an utterly brittle product model, and it’s doomed to fail. Banning third party agents from engaging with Facebook’s platform insures that the only information that will inform Facebook will be derived from and/or controlled by Facebook itself. That kind of ecosystem will ultimately collapse on itself. No single entity can manage such complexity. It presumes a God complex.

    So what might be a better architecture? I hinted at it in the same post:

    Facebook should commit itself to being an open and neutral platform for the exchange of value across not only its own services, but every service in the world.

In other words, free the data, and let the user decide what to do with it. I know how utterly ridiculous this sounds, in particular to anyone reading from Facebook proper, but I am convinced that this is the only architecture for data that will allow a massively flourishing society.

Now this concept has its own terminology: Data portability. And this very concept is enshrined in the EU’s GDPR legislation, which took effect one week ago. However, there’s data portability, and then there’s flourishing data portability – and the difference between the two really matters. The GDPR applies only to data that a user *gives* to a service, not data *co-created* with that service. You also can’t gather any insights the service may have inferred about you based on the data you either gave or co-created with it. Not to mention, none of that data is exported in a machine-readable fashion, essentially limiting its utility.

    But imagine if that weren’t the case. Imagine instead you can download your own Facebook or Amazon “token,” a magic data coin containing not only all the useful data and insights about you, but a control panel that allows you to set and revoke permissions around that data for any context. You might pass your Amazon token to Walmart, set its permissions to “view purchase history” and ask Walmart to determine how much money it might have saved you had you purchased those items on Walmart’s service instead of Amazon. You might pass your Facebook token to Google, set the permissions to compare your social graph with others across Google’s network, and then ask Google to show you search results based on your social relationships. You might pass your Google token to a startup that already has your genome and your health history, and ask it to munge the two in case your 20-year history of searching might infer some insights into your health outcomes.

This might seem like a parlor game, but this is the kind of parlor game that could unleash an explosion of new use cases for data, new startups, new jobs, and new economic value. Tokens would (and must) have auditing, trust, value exchange, and the like built in (I tried to write this entire post without mentioning blockchain, but there, I just did it), but presuming they did, imagine what might be built if we truly set the data free, and instead of outsourcing its power and control to massive platforms, we took that power and control and, just like we did with the PC and the web, pushed it to the edge, to the node…to ourselves?

    I rather like the sound of that, and I suspect Mssr. Phelps would as well. Now, how might we get there? I’ve no idea, but exploring possible paths certainly sounds like an interesting project…


     
  • feedwordpress 02:42:37 on 2017/08/11 Permalink
Tags: Google

    No. Social Terrorists Will Not Win 

    The post No. Social Terrorists Will Not Win appeared first on John Battelle's Search Blog.

    Social Terrorist


A small group of social terrorists has hijacked the rational discourse led by society’s most accomplished, intelligent, and promising organizations.

    (cross posted from NewCo Shift)

    Let’s start with this: Google is not a perfect company. It’s easy to cast it as an omniscient and evil villain, the leader of a millennium-spanning illuminati hellbent on world subjugation. Google the oppressor. Google the silencer of debate. Google, satanic overlord predicted by the holy text!

    But that narrative is bullshit, and all rational humans know it. Yes, we have to pay close attention — and keep our powder dry — when a company with the power and reach of Google (or Facebook, or Amazon, or Apple…) finds itself a leader in the dominant cultural conversation of our times.

But when a legitimate and fundamentally important debate breaks out, and the company’s employees try to come together to understand its nuances, to find a path forward… to threaten those engaged in that conversation with physical violence? That’s fucking terrorism, period. And it’s damn well time we called it that.

    Have we lost all deference to the hard won lessons of the past few hundred years? Are we done with enlightenment, with scientific discourse, with fucking manners? Do we now believe progress can only be imposed? Have we abandoned debate? Can we no longer engage in rational discourse, or move forward by attempting to understand each other’s point of view?

    I’m so fucking angry that the asshat trolls managed to force Google’s CEO Sundar Pichai to cancel his planned all hands meeting today, one half hour before it started, I’m finding it hard to even write. Before I can continue, I just need to say this. To scream it, and then I’m sure I’ll come to my senses: FUCK YOU. FUCK YOU, asshats, for hijacking the conversation, for using physical threats, implied or otherwise, as a weapon to shut down legitimate rational discourse. FUCK YOU for paralyzing one of our society’s most admired, intelligent, and successful engines of capitalism, FUCK YOU for your bullying, FUCK YOU for your rage and your anger, FUCK YOU for making me feel just like I am sure you feel about me: I want to fucking kick your fucking ass.

    But now I will take a breath. And I will remember this: The emotions of that last paragraph never move us forward. Ever.

    Google was gathering today to have an honest, difficult, and most likely emotional conversation about the most important idea in our society at present: How to allow all of us the right to our points of view, while at the same time ensuring the application of those views doesn’t endanger or injure others. For its entire history, this company has had an open and transparent dialog about difficult issues. This is the first time I’ve ever heard of that dialog being cancelled because of threats of violence.

    This idea Google was preparing to debate is difficult. This idea, and the conflict it engenders, is not a finished product. It is a work in progress. It is not unique to Google. Nor is it unique to Apple, or Facebook, or Microsoft, or Amazon; it could have easily arisen and been leapt upon by social terrorists at any of those companies. That it happened at Google is not the point.

    Because this idea is far bigger than any of those companies. This idea is at the center of our very understanding of reality. At the center of our American idea. Painstakingly, and not without failure, we have developed social institutions — governments, corporations, churches, universities, the press — to help us navigate this conflict. We have developed an approach to cultural dialog that honors respect, abjures violence, accepts truth. We haven’t figured it out entirely. But we can’t abandon the core principles that have allowed us to move so far forward. And that is exactly what the social terrorists want: For us to give up, for us to abandon rational discourse.

    Google is a company composed of tens of thousands of our finest minds. From conversations I’ve had tonight, many, if not most, of those who work there are fearful for their safety and that of their loved ones. Two days ago, they were worried about their ability to speak freely and express their opinions. Today, because social terrorists have gone nuclear, those who disagree with those terrorists — the vast majority of Googlers, and by the way, the vast majority of the world — are fearful for their physical safety.

    And because of that, open and transparent debate has been shut down.

    What. The. Fuck.

    If because of physical threat we can no longer discuss the nuanced points of a difficult issue, then America dies, and so does our democracy.

    This cannot stand.

    Google has promised to have its dialog, but now it will happen behind closed doors, in secrecy and cloaked in security that social terrorists will claim proves collusion. Well done, asshats. You’ve created your own reality.

    It’s up to us to not let that reality become the world’s reality. It’s time to stand up to social terrorists. They cannot and must not win.

    The post No. Social Terrorists Will Not Win appeared first on John Battelle's Search Blog.

     