Tagged: Google

  • feedwordpress 23:59:30 on 2018/06/01 Permalink
    Tags: crypto, Google, world wide web

    Do We Want A Society Built On The Architecture of Dumb Terminals? 

    The post Do We Want A Society Built On The Architecture of Dumb Terminals? appeared first on John Battelle's Search Blog.

    God, “innovation.” First banalized by undereducated entrepreneurs in the oughts, then ground to pablum by corporate grammarians over the past decade, “innovation” – at least when applied to business – deserves an unheralded etymological death.

    But.

    This will be a post about innovation. However, whenever I feel the need to peck that insipid word into my keyboard, I’m going to use some variant of the verb “to flourish” instead. Blame Nobel laureate Edmund Phelps for this: I recently read his Mass Flourishing, which outlines the decline of western capitalism, and I find its titular terminology far less annoying.

    So flourishing it will be.

    In his 2013 work, Phelps (who received the 2006 Nobel in economics) credits mass participation in a process of innovation (sorry, there’s that word again) as central to mass flourishing, and further argues – with plenty of economic statistics to back him up – that it’s been more than a full generation since we’ve seen mass flourishing in any society. He writes:

    …prosperity on a national scale—mass flourishing—comes from broad involvement of people in the processes of innovation: the conception, development, and spread of new methods and products—indigenous innovation down to the grassroots. This dynamism may be narrowed or weakened by institutions arising from imperfect understanding or competing objectives. But institutions alone cannot create it. Broad dynamism must be fueled by the right values and not too diluted by other values.

    Phelps argues the last “mass flourishing” economy was the 1960s in the United States (with a brief but doomed resurgence during the first years of the open web…but that promise went unfulfilled). And he warns that “nations unaware of how their prosperity is generated may take steps that cost them much of their dynamism.” Phelps further warns of a new kind of corporatism, a “techno nationalism” that blends state actors with corporate interests eager to collude with the state to cement market advantage (think Double Irish with a Dutch Sandwich).

    These warnings were proffered largely before our current debate about the role of the tech giants now so dominant in our society. But they set an interesting context and raise important questions. What happens, for instance, when large corporations capture the regulatory framework of a nation and lock in their current market dominance (and, in the case of Big Tech, their policies around data use)?

    I began this post with Phelps to make a point: The rise of massive data monopolies in nearly every aspect of our society is not only choking off shared prosperity, it has also blinkered our shared vision for the kind of future we could possibly inhabit, if only we architect our society to enable it. But to imagine a different kind of future, we first have to examine the present we inhabit.

    The Social Architecture of Data 

    I use the term “architecture” intentionally; it’s been front of mind for several reasons. Perhaps the most difficult thing for any society to do is to share a vision of the future, one that a majority might agree upon. Envisioning the future of a complex living system – a city, a corporation, a nation – is challenging work, work we usually outsource to trusted institutions like governments, religions, or McKinsey (half joking…).

    But in the past few decades, something has changed when it comes to society’s future vision. Digital technology became synonymous with “the future,” and along the way, we outsourced that future to the most successful corporations creating digital technology. Everything of value in our society is being transformed into data, and extraordinary corporations have risen which refine that data into insight, knowledge, and ultimately economic power. Driven as they are by this core commodity of data, these companies have acted to cement their control over it.

    This is not unusual economic behavior; in fact, it’s quite predictable. So predictable, in fact, that it’s developed its own structure – an architecture, if you will, of how data is managed in today’s information society. I’ve a hypothesis about this architecture – unproven at this point (as all are) – but one I strongly suspect is accurate. Here’s how it might look on a whiteboard:

    We “users” deliver raw data to a service provider, like Facebook or Google, which then captures, refines, processes, and delivers that data back as services to us. The social contract we make is captured in these services’ Terms of Service – we may “own” the data, but for all intents and purposes, the power over that information rests with the platform. The user doesn’t have much creative license to do anything with that data he or she “owns” – it lives on the platform, and the platform controls what can be done with it.
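This one-way contract can be sketched in a few lines of code. Everything below (the `Platform` class and its method names) is a hypothetical illustration of the architecture described above, not any real service’s API:

```python
# A toy model of the platform-centric contract: users hand raw data to the
# platform, and the platform alone decides what comes back.

class Platform:
    def __init__(self):
        self._raw = {}  # user -> raw data the platform has captured

    def upload(self, user: str, data: list) -> None:
        """Users 'give' data; the platform stores and refines it."""
        self._raw.setdefault(user, []).extend(data)

    def service(self, user: str) -> str:
        """What users get back: a finished service, never the refined data."""
        return f"personalized feed built from {len(self._raw[user])} signals"

    # Note what is missing: no export(), no way for the user to take the
    # captured or refined data elsewhere. The Terms of Service may say you
    # "own" it, but the architecture says otherwise.


p = Platform()
p.upload("alice", ["click", "like", "search"])
print(p.service("alice"))  # prints "personalized feed built from 3 signals"
```

The asymmetry lives in the interface itself: data flows in through `upload`, but only services flow back out.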

    Now, if this sounds familiar, you’re likely a student of early computing architectures. Back before the PC revolution, most data, refined or not, lived on a centralized platform known as a mainframe. Nearly all data storage and compute processing occurred on the mainframe. Applications and services were broadcast from the mainframe back to “dumb terminals,” in front of which early knowledge workers toiled. Here’s a graph of that early mainframe architecture:

     

    This mainframe architecture had many drawbacks – a central point of failure chief among them – but perhaps its most damning characteristic was its hierarchical, top-down design. From a user’s point of view, all the power resided at the center. This was great if you ran IT at a large corporation, but suffice to say the mainframe architecture didn’t encourage creativity or a flourishing culture.

    The mainframe architecture was supplanted over time by a “client-server” architecture, in which processing power migrated from the center to the edge, or node. This was due in large part to the rise of the networked personal computer (servers were used for storing services or databases of information too large to fit on PCs). Because they put processing power and data storage into the hands of the user, PCs became synonymous with a massive increase in productivity and creativity (Steve Jobs called them “bicycles for the mind”). With the PC revolution, power transferred from the “platform” to the user – a major architectural shift.

    The rise of networked personal computers became the seedbed for the world wide web, which had its own revolutionary architecture. I won’t trace it here (many good books exist on the topic), but suffice to say the core principle of the early web’s architecture was its distributed nature. Data was packetized and distributed independent of where (or how) it might be processed. As more and more “web servers” came online, each capable of processing data as well as distributing it, the web became a tangled, hot mess of interoperable computing resources. What mattered wasn’t the pipes or the journey of the data, but the service created or experienced by the user at the point of that service delivery, which in the early days was of course a browser window (later on, those points of delivery became smartphone apps and more).

    If you were to attempt to map the social architecture of data in the early web, your map would look a lot like the night sky – hundreds of millions of dots scattered in various constellations across the sky, each representing a node where data might be shared, processed, and distributed. In those early days the ethos of the web was that data should be widely shared between consenting parties so it might be “mixed and mashed” so as to create new products and services. There was no “mainframe in the sky” anymore – it seemed everyone on the web had equal and open opportunities to create and exchange value.

    This is why the late 1990s through mid oughts were a heady time in the web world – nearly any idea could be tried out, and as the web evolved into a more robust set of standards, one could be forgiven for presuming that the open, distributed nature of the web would inform its essential social architecture.

    But as web-based companies began to understand the true value of controlling vast amounts of data, that dream began to fade. As we grew addicted to some of the most revelatory web services – first Google search, then Amazon commerce, then Facebook’s social dopamine – those companies began to centralize their data and processing policies, to the point where we are now: Fearing these giants’ power over us, even as we love their products and services.

    An Argument for Mass Flourishing

    So where does that leave us if we wish to heed the concerns of Professor Phelps? Well, let’s not forget his admonition: “nations unaware of how their prosperity is generated may take steps that cost them much of their dynamism.” My hypothesis is simply this: Adopting a mainframe architecture for our most important data – our intentions (Google), our purchases (Amazon), our communications and social relationships (Facebook) – is not only insane, it’s also massively deprecative of future innovation (damn, sorry, but sometimes the word fits). In Facebook, Tear Down This Wall, I argued:

    … it’s impossible for one company to fabricate reality for billions of individuals independent of the interconnected experiences and relationships that exist outside of that fabricated reality. It’s an utterly brittle product model, and it’s doomed to fail. Banning third party agents from engaging with Facebook’s platform insures that the only information that will inform Facebook will be derived from and/or controlled by Facebook itself. That kind of ecosystem will ultimately collapse on itself. No single entity can manage such complexity. It presumes a God complex.

    So what might be a better architecture? I hinted at it in the same post:

    Facebook should commit itself to being an open and neutral platform for the exchange of value across not only its own services, but every service in the world.

    In other words, free the data, and let the user decide what to do with it. I know how utterly ridiculous this sounds, in particular to anyone reading from Facebook proper, but I am convinced that this is the only architecture for data that will allow a massively flourishing society.

    Now this concept has its own terminology: Data portability. And this very concept is enshrined in the EU’s GDPR legislation, which took effect one week ago. However, there’s data portability, and then there’s flourishing data portability – and the difference between the two really matters. The GDPR applies only to data that a user *gives* to a service, not data *co-created* with that service. You also can’t gather any insights the service may have inferred about you based on the data you either gave or co-created with it. Not to mention, none of that data is exported in a machine-readable fashion, essentially limiting its utility.

    But imagine if that weren’t the case. Imagine instead you could download your own Facebook or Amazon “token,” a magic data coin containing not only all the useful data and insights about you, but a control panel that allows you to set and revoke permissions around that data for any context. You might pass your Amazon token to Walmart, set its permissions to “view purchase history,” and ask Walmart to determine how much money it might have saved you had you purchased those items on Walmart’s service instead of Amazon. You might pass your Facebook token to Google, set the permissions to compare your social graph with others across Google’s network, and then ask Google to show you search results based on your social relationships. You might pass your Google token to a startup that already has your genome and your health history, and ask it to munge the two to see whether your 20-year history of searching might yield some insights into your health outcomes.
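The “magic data coin” is of course speculative, but its core mechanics – user-held data plus per-party, per-scope permissions that can be granted and revoked – can be sketched concretely. Everything here (the `DataToken` class, its methods, the party names) is a hypothetical illustration, not an existing platform’s API:

```python
# A hypothetical "data token": a user-held bundle of personal data plus a
# control panel for granting and revoking scoped access to third parties.

from dataclasses import dataclass, field


@dataclass
class DataToken:
    """User-owned data with per-party, per-scope permissions."""
    owner: str
    data: dict                                  # e.g. {"purchase_history": [...]}
    grants: dict = field(default_factory=dict)  # party -> set of allowed scopes

    def grant(self, party: str, scope: str) -> None:
        """Allow `party` to use the slice of data named by `scope`."""
        self.grants.setdefault(party, set()).add(scope)

    def revoke(self, party: str, scope: str) -> None:
        """Withdraw a previously granted permission."""
        self.grants.get(party, set()).discard(scope)

    def read(self, party: str, scope: str):
        """Return the requested slice only if the party holds that grant."""
        if scope not in self.grants.get(party, set()):
            raise PermissionError(f"{party} lacks '{scope}' permission")
        return self.data[scope]


# The Walmart thought experiment from the text, in miniature:
token = DataToken(owner="me", data={"purchase_history": [("widget", 9.99)]})
token.grant("walmart", "purchase_history")
token.read("walmart", "purchase_history")   # Walmart may now price-compare...
token.revoke("walmart", "purchase_history")  # ...until the user says otherwise
```

The inversion matters: the permission table lives with the user’s token, not in a platform’s Terms of Service, so control travels with the data rather than remaining at the center.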

    This might seem like a parlor game, but it’s the kind of parlor game that could unleash an explosion of new use cases for data, new startups, new jobs, and new economic value. Tokens would (and must) have auditing, trust, value exchange, and the like built in (I tried to write this entire post without mentioning blockchain, but there, I just did it), but presuming they did, imagine what might be built if we truly set the data free, and instead of outsourcing its power and control to massive platforms, we took that power and control and, just like we did with the PC and the web, pushed it to the edge, to the node…to ourselves?

    I rather like the sound of that, and I suspect Mssr. Phelps would as well. Now, how might we get there? I’ve no idea, but exploring possible paths certainly sounds like an interesting project…


     
  • feedwordpress 02:42:37 on 2017/08/11 Permalink
    Tags: Google

    No. Social Terrorists Will Not Win 

    The post No. Social Terrorists Will Not Win appeared first on John Battelle's Search Blog.

    Social Terrorist


    A small group of social terrorists has hijacked the rational discourse led by society’s most accomplished, intelligent, and promising organizations.

    (cross posted from NewCo Shift)

    Let’s start with this: Google is not a perfect company. It’s easy to cast it as an omniscient and evil villain, the leader of a millennium-spanning illuminati hellbent on world subjugation. Google the oppressor. Google the silencer of debate. Google, satanic overlord predicted by the holy text!

    But that narrative is bullshit, and all rational humans know it. Yes, we have to pay close attention — and keep our powder dry — when a company with the power and reach of Google (or Facebook, or Amazon, or Apple…) finds itself a leader in the dominant cultural conversation of our times.

    But when a legitimate and fundamentally important debate breaks out, and the company’s employees try to come together to understand its nuances, to find a path forward… To threaten those engaged in that conversation with physical violence? That’s fucking terrorism, period. And it’s damn well time we called it that.

    Have we lost all deference to the hard won lessons of the past few hundred years? Are we done with enlightenment, with scientific discourse, with fucking manners? Do we now believe progress can only be imposed? Have we abandoned debate? Can we no longer engage in rational discourse, or move forward by attempting to understand each other’s point of view?

    I’m so fucking angry that the asshat trolls managed to force Google’s CEO Sundar Pichai to cancel his planned all-hands meeting today, one half hour before it started, that I’m finding it hard to even write. Before I can continue, I just need to say this. To scream it, and then I’m sure I’ll come to my senses: FUCK YOU. FUCK YOU, asshats, for hijacking the conversation, for using physical threats, implied or otherwise, as a weapon to shut down legitimate rational discourse. FUCK YOU for paralyzing one of our society’s most admired, intelligent, and successful engines of capitalism, FUCK YOU for your bullying, FUCK YOU for your rage and your anger, FUCK YOU for making me feel just like I am sure you feel about me: I want to fucking kick your fucking ass.

    But now I will take a breath. And I will remember this: The emotions of that last paragraph never move us forward. Ever.

    Google was gathering today to have an honest, difficult, and most likely emotional conversation about the most important idea in our society at present: How to allow all of us to have the right to our points of view, while at the same time ensuring the application of those views doesn’t endanger or injure others. For its entire history, this company has had an open and transparent dialog about difficult issues. This is the first time I’ve ever heard of that dialog being cancelled because of threats of violence.

    This idea Google was preparing to debate is difficult. This idea, and the conflict it engenders, is not a finished product. It is a work in progress. It is not unique to Google. Nor is it unique to Apple, Facebook, or Microsoft – it could have easily arisen and been leapt upon by social terrorists at any of those companies. That it happened at Google is not the point.

    Because this idea is far bigger than any of those companies. This idea is at the center of our very understanding of reality. At the center of our American idea. Painstakingly, and not without failure, we have developed social institutions — governments, corporations, churches, universities, the press — to help us navigate this conflict. We have developed an approach to cultural dialog that honors respect, abjures violence, accepts truth. We haven’t figured it out entirely. But we can’t abandon the core principles that have allowed us to move so far forward. And that is exactly what the social terrorists want: For us to give up, for us to abandon rational discourse.

    Google is a company comprised of tens of thousands of our finest minds. From conversations I’ve had tonight, many, if not most of those who work there are fearful for their safety and that of their loved ones. Two days ago, they were worried about their ability to speak freely and express their opinions. Today, because social terrorists have gone nuclear, those who disagree with those terrorists — the vast majority of Googlers, and by the way, the vast majority of the world — are fearful for their physical safety.

    And because of that, open and transparent debate has been shut down.

    What. The. Fuck.

    If because of physical threat we can no longer discuss the nuanced points of a difficult issue, then America dies, and so does our democracy.

    This cannot stand.

    Google has promised to have its dialog, but now it will happen behind closed doors, in secrecy and cloaked in security that social terrorists will claim proves collusion. Well done, asshats. You’ve created your own reality.

    It’s up to us to not let that reality become the world’s reality. It’s time to stand up to social terrorists. They cannot and must not win.


     
  • feedwordpress 22:11:05 on 2017/05/17 Permalink
    Tags: Google

    The Internet Big Five Is Now The World’s Big Five 

    The post The Internet Big Five Is Now The World’s Big Five appeared first on John Battelle's Search Blog.

    Back in December of 2011, I wrote a piece I called “The Internet Big Five,” in which I noted what seemed a significant trend: Apple, Microsoft, Google, Amazon, and Facebook were becoming the most important companies not only in the technology world, but in the world at large. At that point, Facebook had not yet gone public, but I thought it would be interesting to compare each of them by various metrics, including market cap (Facebook’s was private at the time, but widely reported). Here’s the original chart:

    I called it “Draft 1” because I had a sense there was a franchise of sorts brewing. I had no idea. I started to chart out the various strengths and relative weaknesses of the Big Five, but work on NewCo shifted my focus for a spell.

    Three years later, in 2014, I updated the chart. The growth in market cap was staggering:

    Nearly a trillion dollars in net market cap growth in less than three years! My goodness!

    But since 2014, the Big Five have rapidly accelerated their growth. Let’s look at the same chart, updated to today:

    Ummm..HOLY SHIT! Almost two trillion dollars of market cap added in less than seven years. And the “Big Five” have become, with a few limited incursions by Berkshire Hathaway, the five largest public companies in the US. This has been noted by just about everyone lately, including The Atlantic, which just employed the very talented Alexis Madrigal to pay attention to them on a regular basis. In his maiden piece, Madrigal notes that the open, utopian world of the web just ten years ago (Web 2, remember that? I certainly do…) has lost, bigly, to a world of walled-garden market cap monsters.

    I agree and disagree. Peter Thiel is fond of saying that the best companies are monopolists by nature, and his predictions seem to be coming true. But monopolies grow old, fray, and usually fail to benefit society over time. There’s a crisis of social responsibility and leadership looming for the Big Five — they’ve got all the power, now it’s time for them to face their responsibility. I’ll be writing much more about that in coming weeks and months. As I’ve said elsewhere, in a world where our politics has devolved to bomb throwing and sideshows, we must expect our businesses — in particular our most valuable ones — to lead.


     
  • nmw 17:34:54 on 2016/10/23 Permalink
    Tags: ad, ads, Google

    If we can get to the point where advertisers can actually know who they are communicating with, perhaps our advertising ecosystem will evolve to a place where it adds value to consumers’ lives on a regular basis, as opposed to interrupting and annoying us all day long… 

    When that happens, Facebook’s implicit advantage – that it knows who we are – will become commodified, and perhaps – just perhaps – the open web will once again thrive.

    http://battellemedia.com/archives/2016/10/google-capitulates-to-facebooks-identity-machine.php

     
  • feedwordpress 22:31:56 on 2016/02/03 Permalink
    Tags: Google, traffic, waze

    The Waze Effect: Flocking, AI, and Private Regulatory Capture 

    The post The Waze Effect: Flocking, AI, and Private Regulatory Capture appeared first on John Battelle's Search Blog.


    A couple of weeks ago my wife and I were heading across the San Rafael bridge to downtown Oakland for a show at the Fox Theatre. As all Bay Area drivers know, there’s a historically awful stretch of Interstate 80 along that route – a permanent traffic sh*t show. I considered taking San Pablo road, a major thoroughfare which parallels the freeway. But my wife fired up Waze instead, and we proceeded to follow an intricate set of instructions which took us onto frontage roads, side streets, and counter-intuitive detours. Despite our shared unease (unfamiliar streets through some blighted neighborhoods), we trusted the Waze algorithms – and we weren’t alone. In fact, a continuous stream of automobiles snaked along the very same improbable route – and inside the cars ahead of and behind me, I saw glowing blue screens delivering similar instructions to the drivers within.

    About a year or so ago I started regularly using the Waze app  – which is to say, I started using it on familiar routes: to and from work, going to the ballpark, maneuvering across San Francisco for a meeting. Prior to that I only used the navigation app as an occasional replacement for Google Maps –  when I wasn’t sure how to get from point A to point B.

    Of course, Waze is a revelation for the uninitiated. It essentially turns your car into an autonomous vehicle, with you as a simple robot executing the commands of an extraordinarily sophisticated and crowd-sourced AI.

    But as I’m sure you’ve noticed if you’re a regular “Wazer,” the app is driving a tangible “flocking” behavior in a significant percentage of drivers on the road. In essence, Waze has built a real time layer of data and commands over our current traffic infrastructure. This new layer is owned and operated by a for-profit company (Google, which owns Waze), its algorithms necessarily protected as intellectual property. And because it’s so much better than what we had before, nearly everyone is thrilled with the deal (there are some upset homeowners tired of those new traffic flows, for instance).

    Since the rise of the automobile, we’ve managed traffic flows through a public commons – a slow moving but accountable ecosystem of local and national ordinances (speed limits, stop signs, traffic lights, etc) that were more or less consistent across all publicly owned road ways.

    Information-first tech platforms like Waze, Uber, and Airbnb are delivering innovative solutions to real world problems that were simply impossible for governments to address (or even imagine). At what point will Waze or something like it integrate with the traffic grid, and start to control the lights?

    I’ve written before about how we’re slowly replacing our public commons with corporate, for-profit solutions – but I sense a quickening afoot. There’s an inevitable collision between the public’s right to know, and a corporation’s need for profit (predicated on establishing competitive moats and protecting core intellectual property).  How exactly do these algorithms choose how best to guide us around? Is it fair to route traffic past people’s homes and/or away from roadside businesses? Should we just throw up our hands and “trust the tech?”

    We’ve already been practicing solutions to these questions, first with the Web, then with Google search and the Facebook Newsfeed, and now with Waze. But absent a more robust dialog addressing these issues, we run a real risk of creating a new kind of regulatory capture – not in the classic sense, where corrupt public officials preference one company over another, but rather a more private kind, where a for-profit corporation literally becomes the regulatory framework itself – not through malicious intent or greed, but simply by offering a better way.


     