21 February 2020 (Brussels, Belgium) – Getting to “obviously” is so very, very hard when it comes to scientific discovery and technology. It takes a lot of work. In many ways the world is becoming so dynamic and complex that technological capabilities are overwhelming human capabilities to optimally interact with and leverage those technologies. I think as we are hurled headlong into this frenetic pace (made worse by all this development in artificial intelligence) we suffer from illusions of understanding, a false sense of comprehension, failing to see the looming chasm between what our brain knows and what our mind is capable of accessing.
So for the last three years in order to write my magnum opus (working title: The Murder of Data Privacy; catchy) I have engaged in a “consumption regimen” and plowed through (at last count) 130+ books and 700+ magazine articles/white papers, coupling that with numerous conversations and interviews at 45+ multi-faceted conferences and trade shows which cover artificial intelligence, computer science, cyber security, journalism/media, legal technology, and mobile technology.
And every month I devote one weekend (adding 4 weeks every summer) to make a concerted effort to step back and do a technology “big think” – take all those pieces, shards, ostraca, palimpsests, the barrage of social media clips screaming by me, and determine what I think is key to know. Much of it I have been writing about in my way too long blog posts. But that’s how I internalize everything. We have created an environment which rewards simplicity and shortness, and punishes complexity and depth. I hate it. And what I hate more is the commentary I read, much of it like listening to your maiden aunt who knows so much about marriage. Much of the technology I read about or see at conferences I also force myself “to do”, taking the old Spanish proverb to heart.
Because writing about technology is not “technical writing.” It is about framing a concept, about creating a narrative. Technology affects people both positively and negatively. You need to provide perspective. You need to actually “do” the technology. But the model I try to follow in my posts is more like the British magazine tradition of a weekly diary – on the issue, but a little distant from it, personal as well as political, conversational more than formal.
So this past week, with its concatenation of events … the Munich Security Conference, Mark Zuckerberg’s visit to Brussels, and the unveiling of the EU’s digital agenda … it took even more work. These three events were just more waves in the continuing tsunami of rapid technological change accompanied by globalisation – all created by innovators such as Amazon, Apple, Facebook, Google, and Microsoft. The more common media reference, though, is GAFA, the acronym for Google, Apple, Facebook and Amazon. The acronym serves to identify the dominant companies as a single entity — effectively an oligopoly that controls much of the tech industry market. By introducing technology so very rapidly into every aspect of human existence, in such a preposterously short historical period of time, GAFA/Big Tech has thrown tech as a metaphorical “person” into the room with us at all times.
Technology has allowed the extraordinary to become quotidian.
In this post I want to begin by addressing the hoary beast, regulation. In a follow-up piece I will continue that discussion and address the EU’s digital agenda. Yes, yes, yes. Don’t start. I know. It is all related. But let’s do this in pieces. The overall theme is the death of data privacy which the EU regulators valiantly cry “is not true!” It is true. And data privacy did not die of natural causes. It was poisoned with a lethal cocktail of incompetence, arrogance, greed, short-sightedness and sociopathic delusion.
This past week the discussions on regulation of online platforms were not focused on economics, or competition, or dominance. They were focused on the breathtaking advance of scientific discovery and technology that seems destined to force us into an age of hate. Read the sociologists, and their findings all seem to converge on one point: we’re defined much more by what we reject than what we adore. In the U.S., extreme polarization has guaranteed that nearly 50 percent of the country can hate the other nearly 50 percent of the country - and the feeling is mutual. Our technological tools make it ever easier to weaponize that antipathy. Facebook posts that generate intense emotion are much more likely to be shared than those that appeal to the cooler ends of our psyches. And we know: intense emotion for a human being usually means hate rather than love, though it shouldn’t. We have come to realize that it is difficult to remove by logic an idea not placed there by logic in the first place.
So Mr Zuckerberg sauntered into town. Well, two towns. Brussels and Munich. And as they say, timing is everything. During his meetings in Brussels this past Monday, EU officials … as part of their arguments with him … outlined the EU’s new digital agenda before it was officially unveiled later in the week. Wow. The fax machine in his hotel room Monday night must have been humming. But let me take a few steps back.
It was a wacky week but a familiar debate broke out over a Facebook policy decision. The company announced that ads made by influencers, on behalf of politicians, would be allowed on the platform so long as they were labeled as ads. The company will not, however, put those ads in its Ads Library, where they can be reviewed by the public. It’s not clear that anyone will review those ads outside of Facebook, as the Federal Elections Commission, which regulates political advertising, currently has no policy on influencer marketing. The influencer posts can be fact-checked, unless they contain the speech of a politician, in which case they cannot.
Got all that? Great. All pretty clear, yes? No? Huh. Maybe Casey Newton is correct:
I don’t know why you would build a whole public ads library, require a certain subset of posts to be labeled as ads, and then exempt those ads from your ads library. I also don’t know why you would invite a fresh nine months’ worth of news cycles over unlabeled viral political ads from influencers, false political ads from influencers that are not fact-checked due to the presence of candidate political speech, and so on. The situation would seem to pit the company’s integrity teams against their advertising teams, with the advertising teams winning all the most important battles.
But set all that aside for a moment. Who should be setting all these policies in the first place? Should it be Facebook, or should it be someone else? Someone like, oh, say, the government?
Well, that’s what Facebook says it wants. The Zuck said as much during his visits over last weekend and this week to Munich and Brussels. On Saturday at the Munich Security Conference:
“Even if I’m not going to agree with every regulation in the near term, I do think it’s going to be the kind of thing that builds trust and better governance of the internet, and will benefit everyone, including us, over the long term”.
He followed up with an op-ed in the Financial Times on Sunday, asserting that Facebook needs “more oversight and accountability.” Facebook also released a white paper outlining the approach it would like to see regulators take to creating legal standards for content moderation. The approach it would like to see, you may not be surprised to learn, is one that largely follows the avenues Facebook has already taken. That includes:
• requiring public reporting on policy enforcement actions;
• reducing the visibility of content that violates standards; and
• blocking attempts to regulate speech based on the content of that speech.
The white paper does not address how countries might regulate political ads, though Zuckerberg’s statement that posts on Facebook ought to be regulated like something in between a telecom company and a newspaper suggests the answer is … “very lightly.”
European regulators, for their part, dismissed Facebook’s white paper so quickly that you wondered if they had even bothered to read it. From the Wall Street Journal:
Thierry Breton, the EU commissioner for internal market and services, who met with Mr. Zuckerberg on Monday, told reporters afterward that the Facebook white paper “is too low in terms of responsibility. There are interesting things, but it’s not enough.”
He said the commission will decide by the end of the year what kind of liability to impose on online platforms. “I told him the comparison with telecoms is not relevant. A message [on Facebook] reaches hundreds of millions. On telcos you have one-on-one communications.”
But to be fair, even if you find Facebook’s suggested regulations self-serving, they do highlight important trade-offs that governments will have to make as they consider new laws. Consider, for example, the increasingly popular idea of legally requiring platforms to remove bad posts within 24 hours. Facebook points out, rightly I think, that this creates the wrong incentives:
A requirement that companies “remove all hate speech within 24 hours of receiving a report from a user or government” may incentivize platforms to cease any proactive searches for such content, and to instead use those resources to more quickly review user and government reports on a first-in-first-out basis. In terms of preventing harm, this shift would have serious costs. […] Companies focused on average speed of assessment would end up prioritizing review of posts unlikely to violate or unlikely to reach many viewers, simply because those posts are closer to the 24-hour deadline, even while other posts are going viral and reaching millions.
Here Facebook’s preferred solution – requiring companies to take down bad posts that hit a certain threshold of virality – strikes me as more likely to create a positive effect.
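The incentive argument here is really a classic scheduling problem: if a reviewer can only clear posts one at a time, the order of review determines how many harmful views accrue before takedown. A toy sketch (entirely invented numbers, not Facebook’s actual system or queue sizes) makes the point — reviewing the fastest-spreading posts first always beats first-in-first-out when growth rates differ:

```python
import heapq

def total_views_before_removal(rates, order):
    """Sum the views each post accrues before removal, assuming one
    post is reviewed per hour and a post with growth rate g accrues
    g views per hour until it is taken down."""
    harm = 0
    for hour, idx in enumerate(order, start=1):
        harm += rates[idx] * hour
    return harm

# Hypothetical growth rates (views/hour) for five reported posts,
# listed in the order the reports arrived.
rates = [10, 5000, 20, 800, 1]

# FIFO: review in report order -- the behaviour a flat 24-hour
# deadline incentivizes.
fifo_order = list(range(len(rates)))

# Virality-first: review the fastest-spreading posts first,
# via a max-heap keyed on (negated) growth rate.
heap = [(-g, i) for i, g in enumerate(rates)]
heapq.heapify(heap)
viral_order = [heapq.heappop(heap)[1] for _ in range(len(rates))]

print("FIFO harm:", total_views_before_removal(rates, fifo_order))
print("Virality-first harm:", total_views_before_removal(rates, viral_order))
```

With these invented rates, virality-first roughly halves the total views harmful posts accumulate, which is the intuition behind a virality-threshold rule rather than a flat deadline.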
Everyone who posts on the internet, and lives in the world that the internet creates, has a rooting interest in both platforms and nation states finding a good balance. And even as we watch Facebook struggle to articulate a coherent position on political ads, we see nation states adopting awful regulations that serve only to censor their citizens. Over the weekend I saw this on ZDNet:
Singapore’s Ministry of Communications and Information (MCI) on Monday instructed Facebook to block access to the States Times Review (STR) page after the latter repeatedly refused to comply with previous directives issued under POFMA. The “disabling” order, outlined under Section 34 of the Act, requires Facebook to disable access for local users. A Facebook spokesperson said: “We believe orders like this are disproportionate and contradict the government’s claim that POFMA would not be used as a censorship tool. We’ve repeatedly highlighted this law’s potential for overreach and we’re deeply concerned about the precedent this sets for the stifling of freedom of expression in Singapore.”
Among the stories that had outraged Singapore’s government was a story about two critics of the government being arrested. And it’s not just Singapore. There are brand-new rules for social media in Pakistan. It’s easy to root for tech platforms to be regulated. It’s harder to accept that those regulations, when they finally do appear, are so often terrible.
Or look at the U.S. The U.S. Justice Department (DOJ) is hosting a workshop/study to examine the scope of a law known as Section 230 of the Communications Decency Act. The law protects online platforms from liability for their users’ posts and allows them to moderate users’ content without being treated as publishers. As tech companies have grown in size and power, the DOJ (and Congress) have questioned whether Section 230 needs an update. U.S. Attorney General William Barr aligned himself with the skeptics, telling a gathering of the National Association of Attorneys General when he announced the study that
“Section 230 has been interpreted quite broadly by the courts. Today, many are concerned that Section 230 immunity has been extended far beyond what Congress originally intended. Ironically, Section 230 has enabled platforms to absolve themselves completely of responsibility for policing their platforms, while blocking or removing third-party speech – including political speech – selectively, and with impunity.”
Is this really a move to curtail abuse by tech giants and protect the people? Or the beginning of a power grab to curtail free speech and increase “the right kind” of censorship? Conservatives are piling on, believing Section 230 has aided tech companies’ ability to censor speech they don’t agree with. U.S. Senator Jon Kyl, a Republican from Arizona, led a team of lawyers who interviewed conservatives who use and study Facebook and other sites, and said tech firms “systematically discriminate against certain ideologies”. Such claims of bias inspired Missouri Republican Senator Josh Hawley to propose a revision to Section 230 that would tie the law’s promise of immunity to a regular audit proving tech companies’ algorithms and content-removal practices are “politically neutral.”
The tech giants have had a remarkable bull run. The combined value of the five biggest American tech firms has risen by almost $2trn in the past 12 months: that is roughly equivalent to Germany’s entire stockmarket. Four of the five – Alphabet, Amazon, Apple and Microsoft – are each now worth over $1trn. The surge has confounded predictions of an imminent “techlash”. Consumers say they care about privacy but act as if they care much more about getting stuff, preferably without having to pay for it. The fines and penalties imposed by regulators to date amount to less than 1% of the big five’s market value. Yes, a bigger backlash may well materialise. As their economic and political power grows ever greater, the probability lessens that the world will simply stand by and watch. I will continue my thoughts on regulation in the next instalment.
Do not think for a moment that these tech giants are oblivious. They “get it”. Tech executives who initially resisted regulation now call it desirable. Well, companies support regulations so long as those regulations serve their interests. Large tech companies have found that regulation can help consolidate their market position while smaller enterprises struggle to comply. Apple supports global privacy regulations, Microsoft pushes restrictions on the use of facial recognition technologies, and Facebook looks to governments to regulate content online.
The General Data Protection Regulation (GDPR) and the recently passed California Consumer Privacy Act (CCPA)? The tech giants were all actively involved in shaping both of them through their lobbyists and lawyers, and have now figured out work-arounds. Do not think for one moment that regulations are the main drivers of corporate governance. That notion distracts from the power tech companies have in setting norms and standards themselves. Through their business models and innovations, they develop the rules on speech, access to information and competition.
Reality? If tech executives want change, there is no need to wait for government regulation to guide them in the right direction. They can start in their own “republics” today. As regulators of the domains they govern, nothing stops them proactively aligning their terms of use with human rights, democratic principles and the rule of law. They are massive enough to play government, yet cleverly shy away from taking responsibility for their own territories. I say cleverly because they know that parsing “terms of use” and “standards” and “rule of law” is mind-bending for researchers, regulators and democratic representatives alike.
So in the next instalment, let’s dig a little deeper. The EU has realized it cannot be a player. It lost its opportunity to affect our digital architecture. So it wants to set the rules for the world of technology on data protection, artificial intelligence, competition and so much more.
Because Europe is both gnome and giant in the tech world. The continent has lots of cutting-edge technology but no significant digital platforms. It accounts for less than 4% of the market capitalisation of the world’s 70 largest platforms (America boasts 73% and China 18%). At the same time, the EU is a huge market, with a population of more than 500m, which no tech titan can ignore. It contributes about a quarter of the revenues of Facebook and Google.
This combination has given rise to what Anu Bradford of Columbia Law School calls, in a new book of the same name, the “Brussels effect”. Digital services are, in her words, often “indivisible”. It would be too expensive for big tech firms to offer substantially different services outside the EU. As a result, most have adopted the GDPR as a global standard. Governments, too, have taken more than a page from the EU’s data-protection book. About 120 countries have now passed privacy laws, most of which resemble the GDPR and its predecessors.
So … the European Commission wants to repeat the “Brussels effect” trick in other areas. The main documents presented this past week are a grab bag of measures to foster the use of technology in Europe … and to limit its perceived dangers. The commission has released a “strategy” to promote the use of data, the idea being to create a “single European data space” in which digital information flows freely and securely.
But the Brussels effect may be less effective than in the past. The ground on which the debate over privacy legislation was conducted had long been established before the GDPR, but regulation in areas such as artificial intelligence is nascent. And the push to “extensively regulate” the tech giants could simply lead these giants to differentiate their regional offerings after all … and stymie Europe’s startups. Worse, the data strategy could easily turn protectionist, limiting Europe’s ability to set global rules that might give its firms a much-needed leg up.
At risk is Europe’s role as a third “techno-sphere” – one that is not controlled by a handful of tech titans, as it is in America, or by the Chinese state. So in my next episode, let’s try and unpack all of this.