The struggle (futility?) of controlling or regulating technology in the modern world: an introduction

The regulatory state must be examined through the lens of the reconstruction of the political economy: the ongoing shift from an industrial mode of development to an informational one. Regulatory institutions will continue to struggle in an era of informational capitalism that they simply cannot understand.

 

 

29 October 2020 (Chania, Crete) – Amid the very real devastations of already-vulnerable lives and livelihoods caused by the COVID-19 pandemic, there has been a tsunami of commentary. Academic and policy experts of every stripe are already asserting the lessons and proposing competing agendas that the crisis seems to dictate. But in fact, there really seems only one clear truth so far. Incongruously neglected in the many confident pronouncements and predictions, this truth is that nobody knows the historic implications of this moment. A radical diversity of futures is possible. In each of these futures, a diversity of competing views will likely clash as much as they do now.

Yes, the pandemic will accelerate, and continue to accelerate, a wide range of trends already underway, such as remote work, distributed work and forms of “digital quarantine” – which most of the people reading this were already doing. It has forced those who were not early adopters of these technologies and techniques to adapt. But we are still looking at an unrelenting, horizonless world – a fragmented, fractal world.

And the early commentary – oh, how we thought our advanced technology would save us, a now familiar story. All it did was emphasise how poorly resourced we let our societies become. What the pandemic seems to show is that all our technical and technological tools for acting in the face of massive uncertainties – the dispassionately assured experts, the precise scientific metrics, the rigorous technical models, the all-seeing, intrusive monitoring – have added up to much less than the promised quality of control. It has become clear, indeed, that in the face of this well-foreseen challenge to our well-being, human capacities to steer our world based on our understanding of it exist largely in our imaginations.

Yep. The universalization of all those highly efficient, technologically intensive “just-in-time” supply chains resulted in critical shortages of basic goods and medical equipment. The frictionless flow of “ideas” has produced a glut of disinformation and propaganda without improving public understanding of current events. And there is a growing awareness that the COVID-19 pandemic may be merely a harbinger for catastrophes that are yet to come, following decades of growth fueled by the most efficient energy sources available.

Because we are a civilization that measures itself by its technological achievements – but now confronted with the limits of its power. Well, sort of. While our technology tools were shown to be wanting, we seemed more dependent on them than ever before. The masses … both the nonessential kind and the essential kind … hurled themselves at luminescent screens, like so many moths to the flame. “THANK GOODNESS FOR TWITTER AND NETFLIX AND TIKTOK AND ZOOM AND AMAZON DELIVERIES … AND ALL THAT OTHER STUFF!” we cried. Never before had it been easier to zone out to high-quality entertainment on demand. We had at our fingertips the means to ease anxiety and cure boredom ad infinitum. As Nolen Gertz suggested two years ago in his seminal book Nihilism and Technology, we regard these “tech feelings” not as important elements of the human experience but rather as problems to be solved, precisely because we now have the means to alleviate them. It is a reversal of that old Bible quote in the Book of Job: what technology hath taketh away, so shall technology giveth.

But many said “Hold on! We must wrest back a semblance of control. We need new models and methods!” And so a cry is heard across the lands: “OBVIOUSLY WE MUST REGULATE CERTAIN TECHNOLOGY!!” 

But getting to “obviously” takes a lot of work and time. Among U.S. legal scholars and pundits in telecommunications, information privacy, and finance, there is fairly widespread consensus that administrative law … antitrust, consumer protection, finance, etc. … is in crisis but substantially less agreement on how we got there and how (or whether) it can be fixed. The reason? Regulatory processes are befuddled by the issues and problems created by information markets and networked information and communications technologies.

It is the goal of this monograph to examine the new regulatory state through the lens of a changing, evolving political economy that has undergone a significant reconstruction – a shift from an industrial mode of development to an informational one. Economic and political power has realigned. Regulatory institutions are unprepared for this era of informational capitalism.

These are large issues so I will begin with some table-setting observations on the relationship between industrialism and informationalism. I have been aided by an extensive “reading consumption regimen” that started with some foundational texts which were then sprinkled with more recent readings (all noted throughout the text), plus numerous conversations and interviews at multi-faceted technology and sociology conferences and technology trade shows over the past three years.

So let’s begin.

SETTING THE TABLE

 

Some earlier technology 

 

I became more mindful of all of this while writing a separate series of posts about the recent U.S. Department of Justice (DOJ) suit against Google (you can scan my archive for my musings), plus this week’s news about Photoshop’s “Deep Fake” edition which brought (again) a clamor of “REGULATION!!”  

Photoshop just added a set of ML-based tools that allow you to manipulate images in new ways – to change the expression of a face in a photograph, for example. Technically impressive, but now everyone can fake any image or video even more easily than they could before. Obviously this is enormous in the context of legal and journalistic investigations. Up to now, deepfake detection has relied on semantic and contextual understanding of the content, plus the basic forensic methods available. But as I have noted in previous posts, it will get so much harder to detect these images. The more advanced developments in image manipulation currently underway via GAN models, plus synthesized video technology that combines video and audio synthesis in one tool to create realistic voices, will make these images undetectable.

So the AI to create photorealistic faces, objects, landscapes, and video isn’t that far behind GPT-3, which can already write dialogue and articles (and movie plots) almost indistinguishable from ones written by humans. When OpenAI released GPT-3 this past summer, it marked the first time a general-purpose AI saw the light of day. But it also marked just the latest in a long history of knowledge technologies, the subject of my second monograph due next year.

And the level of abstraction? There is a running joke in Silicon Valley that says with GPT-3 and GAN models you can write a Taylor Swift song about Harry Potter. But soon, you’ll be able to make a new Harry Potter movie starring Taylor Swift. 

It is precisely at such moments of technological dependency that one might consider interrogating one’s relationship with technology more broadly. Yes, I’ve heard it: “This too shall pass,” because technology always holds the key to our salvation. It’s not the technology: it’s our implementation of it. Or so the story goes. But, alas, we stand guilty of so much original sin. As I dove further and further into technology regulation and our culture, I developed these “givens”:

1. Data privacy and data sovereignty did not die of natural causes. They were poisoned with a lethal cocktail of incompetence, arrogance, greed, short-sightedness and sociopathic delusion. Technology has become the person in the room with us at all times, introduced into every aspect of human existence in a preposterously short historical period. As Julia Hobsbawm says at her TEDx events, “Big Tech has thrown a metaphorical person into all our lives”.

2. When a technology is introduced into society, unimpeded, through private enterprise, and then quickly adapted and adopted, the game is over. We (they?) have built a world, a system, in which physical and social technologies co-evolve. How can we shape a process we don’t control?

3. And all those lawyers who say “we must do something about privacy!” are … pardon my French … full of merde. Well, most of them (not all) are actually outliers not involved in the game. It’s their legal brethren and cohorts who are guilty of forming the logic of informational capitalism in comprehensive detail. Look at the commentary on the legal foundations of platforms, privacy, data science, etc., which has focused on the scientific and technical accomplishments undergirding the new information economy. But these institutions, platforms, “data refineries” are first and foremost legal and economic institutions. They exist as businesses; they are designed to “extract surplus”.

4. And when a technology is introduced into society, unimpeded, nobody can catch up. With the continuing development of computer vision and video analytics, facial recognition may ultimately turn out to be one of the more benign applications of camera surveillance. Put another way, facial recognition is just part of a much larger program of surveillance – equipped with greater scale and granularity – associated with cameras, microphones, and the multitude of sensors in modern smart phones. As these systems become more sophisticated and more ubiquitous, we are likely to experience (actually are experiencing) what Big Tech calls a “phase transition” – the transition from collection-and-storage surveillance to mass automated real-time monitoring. At scale.

5. What sets the new digital superstars (the “4 Horsemen of the Digital Apocalypse”?) apart from other firms is not their market dominance; many traditional companies have reached similarly commanding market shares in the past. What’s new about these companies is that they themselves ARE the markets. They have introduced technology into every aspect of human existence in such a preposterously short historical period that the “regulators” cannot cope.

6. And blame the regulators themselves. Corporate lawyers have pushed themes of “innovative flexibility” and “privatized oversight”, playing on the anti-regulatory narrative that has gained increasing traction as the shift to informationalism has gathered speed. And regulators have caved in (as well-noted this month in the U.S. House Committee report on antitrust). For the last several decades, advocacy emanating from Wall Street and Silicon Valley has pushed for deregulation and devolution of governance to the private sector, invoking asserted imperatives relating not only to “market liberty” but also and more fundamentally to “innovation and economic growth”. This has proved extraordinarily powerful in structuring public debate about what regulatory goals should be, as well as their methods and sanctions.

Those 4 Horsemen … Google, Amazon, Apple and Facebook … have left the barn. Examining our technological practices – practices ranging from Netflix and Chill to Fitbit and Move to Twitter and TikTok – reveals how our nihilism and our technologies have become intertwined, creating a world of techno-hypnosis, data-driven activity, pleasure economics, herd networking, and orgies of clicking.

There is also a 7th point but it applies more specifically to the EU. Europe is still in thrall to the analogue mindset. The five-year digital policy blueprint the EU Commission unveiled this year was written as if technology will develop no further. You see it in the GDPR, both in the wording of the Regulation and in its implementation.

As I have written before, and European regulation will be a separate chapter in this monograph, the GDPR will fail because it is based on three fallacies: (1) the delusion that data protection law can give individuals control over their data; (2) the misconception that the reform simplifies compliance, while in fact it makes compliance even more complex; and (3) the assumption that data protection law should be comprehensive, stretching data protection to the breaking point, which EU regulators are now recognising. Data protection specialists and information governance mavens do not want to talk about this because they cannot make any money from it. Their task is to sell data protection service and product “solutions”.

The EU is understandably protective – but culpably complacent – about the challenges of the digital age. It was left behind. Reflect on the relative decline of Nokia (it may need to merge or sell itself) and Ericsson. Europe seems desperate to hang on to what it has, what it understands – at the expense of future prosperity. Jack Mabry, a telecom analyst with IDC, reminded me of the story about a leading German telecom manufacturer who in the late 1990s produced the most sophisticated analogue telephone exchange ever seen – when digitalization was already the wave of the future and advancing exponentially.

And a fascinating side note: in 1909, following a watershed era of technological progress (unfortunately preceding the industrialized massacres of the Somme and Verdun), E.M. Forster imagined, in “The Machine Stops,” a future British society in which the entirety of lived experience is administered by a kind of mechanical demiurge. The story is the perfect allegory for the moment, owing not least to its account of a society-wide sudden stop and its eerily prescient description of isolated lives experienced wholly through screens.

The denizens of Forster’s world while away their days in a world where all of their basic needs are made available on demand: “The Machine feeds us and clothes us and houses us,” they exclaim, “through it we speak to one another, through it we see one another, in it we have our being.”

But then The Machine runs amok “rendering disorder and foaming an unmanageable glut of information”. Calls to “alter The Machine” ring out. Alas, too late.

Read it if you can. One of my favorite lines: “Everyone simply accepts that although the machine’s video feeds do not convey the nuances of one’s facial expressions, they’re good enough for all practical purposes to know what we feel”.

Jacques Ellul, writing in the 1950s and early 1960s, certainly “got it”. His book “The Technological Society” (originally published in 1954 and one of my foundational texts that I recommend to everybody) was one of the first books to offer an analysis of our technological civilization, showing how technology would begin innocuously enough as a servant of humankind but then become our master. He wrote that technology is “no mere means” but rather its own realm of existence, one that legislates the perspective with which we experience the world. While we use technology to exert our will over nature, “our technologies exert their own will back on us, nudging us toward those modes of production and ways of thinking that are most conducive to the available technologies themselves”. Modern life would be subordinated to “technique,” meaning technology would “transform almost everything it touched into a machine” and technology would become “the main preoccupation of our time”.

Getting a grip on the regulatory state in the Information Age

 

Before I retired, I had the good fortune to work in (what once were) three discrete technology areas: cyber security, digital/mobile media, and legal technology. In years gone by these were discrete areas. No longer. Though most of the practitioners in each seem stuck in their respective silos, rarely venturing out to learn about or understand the other two. Few make it out, and if so, only to make superficial contact with their “neighbors”. But the wise ones have merged the three.

But as I have noted in many previous posts, the modern human (and especially the modern technologist) moves through all three of those myriad, overlapping spheres. They are forever entangled. It is because we live in a world of exponential companies, fundamentally different in characteristics to industrial age ones. To borrow from hydraulics, it’s the compression ripple writ large. Traditional sector classifications (developed to categorise the industrial economy) make no sense. What is included in the bucket of “information technology sector” are companies serving the modern economy across food, power, transport, health, security, logistics, etc., etc. by harnessing modern capabilities (i.e. “information technology”). What the “IT” sector has in its hands is exponential, rather than extractive, technology. And while old-economy companies are trying to integrate technology into their business, for new-age companies technology is their business.

These are complex systems, with numerous unknown unknowns and numerous unintended consequences. And not only unintended consequences, but un-modelable consequences. All of this lies on top of an interconnected and educated/uneducated society with unlimited/limited resources, as computation gets cheaper and cheaper, and easier to distribute.

I say all this because when we discuss the disintegration of the legal process paradigm that animates the regulatory state today, we need to bring in experts and scholars from a variety of other fields – cyberlaw, telecommunications, information privacy, and finance – most of whom have argued that regulatory processes have failed to respond across all fields because regulatory frameworks were not developed for information markets and networked information and communications technologies.

Because the regulatory state … and herein my focus is antitrust and competition law, plus consumer protection law … is confounded by two things:

1. platform power: the power to link facially separate markets and/or to constrain participation in markets by using technical protocols, and

2. infoglut: unmanageably voluminous, mediated information flows that create information overload

And there is the problem. As Julie Cohen notes in her book Between Truth and Power: The Legal Constructions of Informational Capitalism:

Generally speaking, industrial-era regulatory mandates rely on concepts of market power and market distortion that presume well-defined industries and ascertainable markets and choices and/or posit discrete harms amenable to targeted regulatory responses. The ongoing shift to an information economy has disrupted traditional approaches to defining both markets and harms, making it more difficult to articulate compelling accounts of what precisely should trigger compliance obligations, enforcement actions, and other forms of regulatory oversight.

There are other issues besides platform power and infoglut which I will address in due course, but those two are the dominant agenda items for this monograph. And the reason regulators are confounded is that the law and rules with which they work were designed around the regulatory problems and competencies of an era in which industrialism was the principal mode of development, so they fail to comprehend the harms of informationalized development.

Although, on the bright side, we are seeing glimmers of regulatory activity that follows nontraditional institutional models. Such activity may blend policymaking and enforcement, involve public-private partnerships in rulemaking and standard setting, and/or enlist expert auditors in evaluating compliance. Nontraditional regulatory models are being seen in areas such as privacy, telecommunications, health, plus food and drug regulation – all of which are information intensive. I’ll elaborate as I proceed.

This is (or should be) unsurprising. As I have noted in previous articles, auditing a credit rating algorithm, interrogating the health implications of a new food additive, or evaluating the competitive implications of a dominant software firm’s acquisition of an information aggregator is a different and more difficult task than evaluating a proposed merger between two grocery chains or inspecting a factory assembly line.

And a note to my economist readers before they object: yes, I know. The relationship between industrialism and informationalism is not sequential, but rather cumulative, and the emergence of informationalism as a mode of economic development is powerfully shaped by its articulation within capitalist modes of production. My guide has been Manuel Castells’ brilliant trilogy The Information Age.

And, no, traditional industrial activity does not go away. It’s just that information technology assumes an increasingly prominent role in the control of industrial production and in the management of all kinds of enterprises. I strongly recommend the classic text on how this developed: James Beniger’s The Control Revolution: Technological and Economic Origins of the Information Society (1986) plus Susan Landau’s most recent book Listening In (2017).

To pull this all together, I think there is no better example than the Volkswagen emissions control software case.

The Volkswagen scandal neatly encapsulates the tensions and contradictions in the shift to informationalism described above.

 

In September 2015, the public learned that European automotive giant Volkswagen had designed the emissions control software for its diesel engines to comply with prescribed emissions limits only when the software detected that a vehicle was being subjected to emissions testing. At all other times, the software employed a “defeat device” to disable emissions-control functionality, resulting in emissions that vastly exceeded applicable regulatory limits. The scandal resulted in the resignation of Volkswagen’s CEO, a precipitous drop in the company’s stock value, and a wave of fines and recalls spanning three continents.
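
To make the mechanics concrete, here is a deliberately simplified, hypothetical sketch of defeat-device logic – a toy in Python, not Volkswagen’s actual ECU code. The test-detection signals (a stationary steering wheel while the wheels spin, a speed trace matching a standardized cycle) are illustrative assumptions drawn from public reporting on how such devices infer a dynamometer test:

# Hypothetical sketch of defeat-device logic. Signal names and thresholds
# are illustrative assumptions, not Volkswagen's actual code.

def looks_like_emissions_test(steering_movement_deg: float,
                              speed_matches_test_cycle: bool) -> bool:
    # On a dynamometer the drive wheels turn but the steering wheel
    # barely moves, and the speed trace follows a standardized cycle.
    return steering_movement_deg < 1.0 and speed_matches_test_cycle

def select_calibration(on_test: bool) -> str:
    # Full NOx controls only when a test is suspected; a higher-emissions
    # performance calibration the rest of the time.
    return "full_emissions_control" if on_test else "road_calibration"

print(select_calibration(looks_like_emissions_test(0.2, True)))    # test bench
print(select_calibration(looks_like_emissions_test(35.0, False)))  # real road

The point of the toy is the regulatory asymmetry: the compliant branch is exactly what the regulator’s test procedure elicits, so the test procedure by itself can never see the other branch.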

From one perspective, the automobile industry is a paradigmatic industrial-era formation. But in fact, computer software now resides at the core of the modern automobile and regulates nearly everything about its performance.

Modern regimes of emissions regulation, meanwhile, are themselves the product of an information-era realignment in societal understanding of the harms flowing from economic development. That realignment began in the mid-twentieth century with the recognition of toxic torts and systemic environmental degradation and continued in the 1980s and 1990s as new methods of financial trading and new derivative financial instruments introduced unprecedented volatility into financial markets.

Even so, the story of the defeat device revealed a regulatory apparatus pushed beyond its capabilities. The striking success of Volkswagen’s defeat device — which escaped detection for six years and ultimately was discovered not by regulators but by independent researchers — illustrates a large and troubling mismatch between regulatory goals and regulatory methods. Traditionally, emissions regulators have been concerned with setting and enforcing performance targets, not with conducting software audits.

The now-undeniable need to move into the software audit business in turn raises unfamiliar methodological and procedural problems. If regulation of automotive emissions — and thousands of other activities ranging from loan pricing to derivatives trading to gene therapy to insurance risk pooling to electronic voting — is to be effective, policymakers must devise ways of enabling regulators to evaluate algorithmically-embedded controls that may themselves have been designed to detect and evade oversight.
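
What might “evaluating algorithmically-embedded controls” look like in practice? One plausible approach – and essentially how the independent researchers caught Volkswagen – is differential testing: measure the system under the certified test cycle and again under road-like conditions, then flag large divergences. A minimal sketch, with measure_nox standing in (as a stub) for real portable emissions instrumentation, and an audit threshold I chose purely for illustration:

import random

def measure_nox(on_certified_cycle: bool) -> float:
    # Stub standing in for instrumented, on-road emissions measurement.
    # (For the demo it simulates a vehicle that behaves only on the bench.)
    return 0.04 if on_certified_cycle else 0.04 * random.uniform(5, 40)

cycle_reading = measure_nox(on_certified_cycle=True)
road_readings = [measure_nox(on_certified_cycle=False) for _ in range(50)]

ratio = max(road_readings) / cycle_reading
if ratio > 5:  # audit threshold: an illustrative assumption
    print(f"Road emissions up to {ratio:.0f}x the certified result - audit flag")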

The Volkswagen scandal also illustrates the pervasive institutional influence of economic power — and shows that influence operating on levels that are both political and ideological. In the weeks after the news broke, press coverage documented Volkswagen’s systematic efforts to stave off more intrusive regulation in the European Union and probed its close ties with the private European emissions testing laboratories that act as regulatory surrogates.

Such efforts and ties are not unusual, however. We have gigabytes of analysis by scholars and policymakers and pundits who have long recognized that regulated industries are intensely interested in matters of regulatory capacity and institutional design. More noteworthy are Volkswagen’s apparent justifications for designing and installing the defeat device: it was deemed necessary to enable improved engine performance, which in turn enabled Volkswagen to maintain and burnish its glowing reputation as an innovator in the field of automotive design.

Also noteworthy is European regulators’ choice to devolve primary responsibility for emissions testing to private entities that certify compliance. Yep: those themes of “innovative flexibility” and “privatized oversight” I noted above.

Let’s get further into the weeds a bit to unpack the challenges of information-era regulation, identifying some of the disconnects between information-era activities and industrial-era regulatory constructs that make regulation of Big Tech rather futile.

PLATFORM POWER

 

The balance is all wrong: it’s the companies, not the legislators/regulators, that hold more power.

 

As I finished writing this earlier today, I was reviewing the results of yesterday’s U.S. Senate hearing which ostensibly was about whether to revise or undo a bedrock law of the internet that made possible sites like Facebook and YouTube by providing a limited legal shield for what users post. It is in principle a worthwhile debate about how U.S. laws should balance protecting people from online horrors with providing room for expression online.

The hearing concerned Section 230, a piece of Internet legislation in the United States, passed into law as part of the Communications Decency Act of 1996 and formally codified as Section 230 of the Communications Act of 1934 at 47 U.S.C. § 230. It is the provision that allows these companies to keep immunity over user-generated content while moderating it. Contrary to common misunderstandings, it doesn’t mean they can’t moderate or take down content, or that they have to be neutral. If anything, it frees them to be less neutral while retaining immunity from liability. For a very good “explainer” on all the Section 230 issues click here.

But the hearing was a pointless circus. As they all are. But we know that. And if you had scanned Twitter on Tuesday night you’d have seen the congressional hearing positioned as a “free speech showdown” — essentially a verbal WrestleMania match.

 

This is not the hallmark of a serious exercise in policymaking. It’s always the same problem. Lawmakers never show that they’re grappling with the law. Instead, they’re mostly just shouting.

And, oh, the irony. One of the goals of legislators in these hearings is to generate short videos that they can use in their ads placed on broadcast media plus the ads they will place on these platforms themselves, to the tune of many millions of dollars. 

Reality? The hearing was merely a last-minute ploy for electioneering.

The other reality? Legislators should stop asking questions of these men and start giving them answers. The only way to do this is to make these questions into what they actually are: political questions. That’s where you need to debate everything from the dominance of a few social media companies over the public sphere to the problem of regulating attention in an age of information glut. It always turns into the legislators NOT playing referee, but handing off to the executives themselves. The balance is all wrong: it’s the companies, not the legislators, that hold all the power.

 

And I read the Big Tech earnings reports. One of the issues I will explore in this monograph is how platforms have used the pandemic period to consolidate their power, even as incompetent (if newly skeptical) government regulators begin to make-believe they will take action against them. Due to a quirk of timing, the platforms I will cover in this monograph all reported their third-quarter earnings today. And it has never been clearer that the pandemic has been very good for the tech industry so far.

Apple reported record third-quarter revenue of $64.7 billion. The pandemic pushed back the planned launch of the iPhone 12 by a month, so we won’t know how many of the new phones the company has sold until January. But in the meantime, Apple set an all-time record for sales of its Mac computers, with $9 billion in revenue.

Amazon reported record third-quarter earnings as well, with revenue of $96.1 billion. It was the company’s second record-setting quarter in a row, as the pandemic pushed more people to online shopping. Notably, the company hit the revenue milestone even though its big Prime Day sales event got delayed to the fourth quarter.

Alphabet beat expectations with $46.17 billion in revenue. Google’s parent company saw a strong rebound in digital advertising revenue, which had declined in the second quarter but is now roaring back. One sign of how the pandemic helped Alphabet: YouTube ad revenue was up 32 percent, to $5 billion, as more people turned to it for quarantine entertainment.

Facebook beat expectations with $21.47 billion in revenue. Ad revenue grew 22 percent despite the high-profile advertiser boycott. But at least some of the positive effects of the pandemic are wearing off for Facebook: daily users fell by 2 million in the United States and Canada, to 196 million people. One question mark: how well will Oculus Quest 2 sell? It began shipping a couple weeks ago; we’ll know more in January.

Finally, much-smaller Twitter also benefited from the rebound in digital ad sales, with $936 million in revenue. That’s up 17 percent from the year-ago quarter, and daily users increased by 29 percent, to 187 million people. But the stock fell in after-hours trading; analysts had expected usage to grow more.

Of course, there’s nothing illicit about building a big business, even if it happens during a broader economic recession. But when a handful of our biggest businesses are earning record profits during that same recession, it will naturally focus more scrutiny on their business practices. As we head into a long year of antitrust litigation, look for the platforms’ outsized success during a turbulent period to be part of the story.

And just a few words about these eye-popping sales numbers. These companies also spend gobs of money, which in turn helps them make more money. The ability to spend like crazy – because Big Tech has money and hardly anyone questions how the companies spend it – is one of the secrets to why the tech industry giants are so difficult to unseat.

A few examples: Amazon hired 250,000 full- and part-time employees – on average roughly 2,800 each day in the 90 days that ended in September – and then about 100,000 more people in October, the company said. Google has spent nearly $17 billion this year on things like hulking computer equipment – that’s about the same as Exxon’s comparable spending figure for digging oil and gas out of the ground.

Facebook’s Mark Zuckerberg talked excitedly on Thursday about spending whatever it takes on futuristic projects like eyeglasses that overlay virtual images on the real world. Some of this stuff, yes, can immediately help companies generate more of those eye-popping sales and profits. When Amazon hires people to work in its warehouses or to drive trucks, those workers help push more packages to your door this Christmas.

But a lot of this stuff, honestly, who knows. What the heck is Apple cooking up in its research labs, on which it spent $19 billion in the last year? Can Facebook get us to buy into a future of our world mixed with virtual images? Are Amazon’s gazillions of new package warehouses, transportation depots and computer centers really justified? This is the kind of stuff that might never pay off.

And that’s one reason Big Tech is so different. Few large companies get mostly patted on the back for spending money in ways that may – or may not – pay off. This is part of the ultimate dilemma about these technology giants that dominate our lives and often our leisure and work hours. They make tons of money, which means they have more money to stay on top. (Also, governments and competitors say these companies break the rules to advantage themselves at the expense of rivals, hurting consumers like us.)

One of the most cringe-inducing words in business (and antitrust law) is “moat.” What this means is a company has some unique advantage – a globally recognized brand name for Coca-Cola, or a unique technology that helps Uber move cars around efficiently – that gives it an unbreachable border of water filled with monsters. It’s a terrible, overused piece of jargon. But the tech superstars have a moat. (Imagine me cringing as I typed that.) Their unique advantage is access to giant piles of money. And they’re using it to dig that watery trench of monsters even deeper.

 

 

These companies don’t look like traditional monopolies, and our laws (be it in the U.S. or Europe) haven’t caught up with what they actually are. All those regulators sitting in the DOJ, the U.S. Federal Trade Commission, the EU Directorate-General for Competition and the UK Competition and Markets Authority … and even all the other consumer protection agencies across the U.S., Europe and the UK … sit, confounded, by multiple, confusing parts: new business models, network effects, the power of sociality, preferential-attachment dynamics (meaning winner-takes-more, which often evolves into winner-takes-almost-all), etc., etc.
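
Preferential attachment is worth pausing on, because the winner-takes-more dynamic can be reproduced with almost embarrassingly little machinery. In the toy simulation below (an illustrative model, not a claim about any actual market – it ignores pricing, quality and multi-homing), each new user simply joins a platform with probability proportional to its current user count. Nothing else. Run it and the shares of five initially identical platforms diverge sharply, because small early leads compound:

import random

# Toy preferential-attachment simulation: each new user joins a platform
# with probability proportional to its current user count.
random.seed(42)
platforms = [1, 1, 1, 1, 1]  # five platforms, one seed user each

for _ in range(100_000):
    winner = random.choices(range(len(platforms)), weights=platforms)[0]
    platforms[winner] += 1

total = sum(platforms)
shares = sorted((u / total for u in platforms), reverse=True)
print([f"{s:.1%}" for s in shares])
# Typical run: heavily skewed shares despite identical starting points --
# "winner takes more" emerging from the attachment rule alone.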

Ah, it used to be so easy. Where market structure was concerned, U.S. regulators and legal thinkers were accustomed to defining impermissible results in terms of concepts like market power, discrimination, and deception – benchmarks that are relatively easy to assess when markets are distinctly ascertainable, goods have fixed properties, and information about consumers is limited.

But now, in the interlinked markets constituted by contemporary information processing practices, none of those things is true. Markets are fluid and interconnected, information services sit within complex media ecologies, and networked platforms and infrastructures create complex interdependencies and path dependencies. With respect to harms, information technologies have given scientists and policymakers tools for conceptualizing and modeling systemic threats. At the same time, however, the displacement of preventive regulation into the realm of models and predictions complicates, and unavoidably politicizes, the task of addressing those threats.

What has happened is that the power has changed. A core concern of economic regulation is identifying the circumstances in which economic power requires oversight. Power in markets for goods or services can translate into predatory pricing or barriers to competitive entry, while power embedded in the structure of particular distribution channels or relationships can facilitate other types of inefficient or normatively undesirable behavior.

But the markets for information-related goods and services introduce bewildering new variations on these themes. Understanding economic power and its abuses in the era of informational capitalism requires discussions about the new patterns of intermediation and disintermediation that information platforms enable, and about the complexity and opacity of information-related goods and services. The key take-away as discussed in Julie Cohen’s book which I noted above (and I am giving you a very simplistic summary of the 100 pages she devoted to this one topic) is this:

Networked information markets disrupt conventional understandings of market power and market harm. Our current regulatory architecture is not equipped (or versed) to deal with it.

The earliest iteration of the conceptual difficulties posed by networked markets was the antitrust litigation against Microsoft Corporation for bundling the Internet Explorer browser with its operating system. Microsoft’s software licenses with its original equipment manufacturers (OEMs) required that personal computers be shipped with Internet Explorer preinstalled.

No, I am not going to get into the details of that case 🙂 . I am only going to note some regulatory highlights in the next few paragraphs: what the DOJ learned from the case and how it had an opportunity (but failed) to write a new rulebook for the information age. Just Google “microsoft antitrust case” and you’ll get over 10.5 million hits. My suggestion is you first read the resulting District Court decision, and the Circuit Court decision … Wikipedia has all the links here … and then read “The Microsoft Antitrust Cases” by Andrew Gavil and Harry First. I’ve read four books on the case plus about 10 law review articles. The Gavil/First book is the most comprehensive account (and the most neutral account) I found.

From the standpoint of antitrust doctrine formulated for the industrial era, it was a weird case. The market for browsers was unusual. To begin with, it was hard to discover a price advantage that accrued to Microsoft because the leading competitors offered their software free of charge. Moreover, Microsoft also asserted copyrights in its operating system and browser software, and traditions of rightholder control over licensing afforded a powerful countervailing narrative to competitors’ complaints. Finally, and importantly, although Microsoft’s licenses prohibited OEMs from removing Internet Explorer and its desktop icons, the licenses did not prohibit either OEMs or consumers from installing and using competing browsers. In the traditional language of antitrust law, they were vertical restrictions rather than horizontal restrictions, and therefore less suspect. All of this is detailed in the court cases referenced above.

But the DOJ team that built and prosecuted the Microsoft case recognized that platform markets and platform-based media ecologies can create powerful path-dependencies. As Gavil/First note in their book:

Although Microsoft did not prohibit OEMs or consumers from using competing browsers, it carefully crafted installation pathways to steer them toward Internet Explorer. That design decision effectively restricted both user choice and competitive entry. If it was too late to build out that argument in the present case, the Department could create a template for the future. 

In today’s mobile computing markets, Google, Facebook, and Apple have built integrated systems that offer users a wide variety of information services under one brand, and that simultaneously enable comprehensive control over advertising markets and over the collection of user personal information. Most of those services are available to consumers at no direct financial cost, but that does not make them costless. Loss of control over personal information creates a variety of near-term and longer-term risks that are difficult to understand and value. Amazon has expanded into seemingly every conceivable consumer market, and has become embroiled in such diverse issues as the book publishers’ imbroglio regarding its asserted attempts to dictate terms of sale. The doctrinal landscape has grown still more complicated as counterclaims asserting trade secrecy and free speech interests in the operation of search algorithms have been added in various other cases. We’ll discuss that in later chapters of this monograph.

The antitrust understanding of these and related issues remains rudimentary. So, for example, although the government ultimately obtained a judgment against Microsoft requiring it to unbundle its licensed products, the judgment issued ten years after the complaint had been filed, and the proceedings lumbered to their conclusion without the benefit of a coherent framework for determining harm.

While the litigation was underway, the DOJ began to revise its guidelines for antitrust investigations in intellectual property-related and Information Age matters, but the resulting document did little to unpack the questions about the power of dominant platforms that had prompted the litigation in the first place. By then the administration had changed (George W. Bush was now President) and the DOJ was backing off serious investigations because it was buying into the imperatives of “market liberty” and “innovation and economic growth” which I noted above.

Legal scholars did begin to identify and explore a variety of discrete platform-related issues, but there was no systematic attempt to formulate a definition of platform power or to develop a methodology for determining when platform-related advantages ripen into antitrust injuries. It was also the time, though, when several legal scholars raised the issue of whether and when information platforms should be subject to common carrier or public-utility obligations – a controversial topic which arose again a few years ago during the net neutrality debates.

But the fact it was even raised shows you the regulatory dilemma. It’s a case of “Hey! We need new rules!” versus “Hmmm … let’s adapt industrial-era notions of common carriage and/or public utility provision to the networked information age”. The latter just does not fully encompass all of the interests and issues at stake. The telephone-based communications paradigm is too narrow to encompass all of the different activities and functions that digital networked communications enable, and different actors have very different views about what ought to be considered essential services subject to common carriage or public provision obligations.

The net neutrality debate: it’s not going to be in this post (for a decent summary click here) other than to say it raises more general questions about the extent to which communications regulation should incorporate public access and social justice considerations. Notably, each side in the debate has attempted to claim the mantle of innovative liberty and economic growth. And it raises privacy issues. From the business perspective, the ability to discriminate among different types of traffic makes it easier for providers of networked information services to exert end-to-end control over the collection of consumer personal information, which is THE increasingly valuable economic resource.

But I must note (and leave it here) with a few notes Julie Cohen makes in her book: “Arguably, net neutrality is itself a neoliberally-inflected regulatory conception to the extent that it denotes reliance on market forces operating on an intraplatform basis to produce services of adequate variety and quality. Consider, for example, all of the information services that enable individual consumers to seek employment, housing, and education — services that privileged consumers take for granted, but that less privileged consumers struggle to obtain. At least given current capabilities, many such services require higher bandwidth or more versatile platforms to be delivered effectively, and many lower-income consumers in marginal communities lack access.”

European regulators have attempted to articulate a more demanding conception of permissible market behavior, and are at least contending more directly with the various kinds of external costs that platform power can create. But whether they’ll be effective remains to be seen. The EU has levied a lot of fines … chump change for these companies … but produced almost zero change in behavior.

Reinvigorating antitrust and competition law in the era of informational capitalism will require a willingness to rethink major assumptions about the causes and effects of power in information markets. That project demands both more careful investigation of the kinds of power that information platforms wield and a much more open-minded discussion of corrective measures.

INFOGLUT

 

The old-style regulatory mandates that relate to market structure include what are termed “anti-distortion rules” – rules intended to ensure that flows of information about the goods, services, and capabilities on offer are accurate and unbiased. The template is simple:

1. Some anti-distortion rules are information-forcing; rules in that category include those requiring disclosure of material information to consumers or investors.

2. Other anti-distortion rules are information-blocking; such rules include antidiscrimination, false advertising, and insider trading prohibitions.

Both information-forcing and information-blocking rules are premised on the assumptions that information is scarce and costly to obtain and convey, and that regulatory mandates therefore can produce meaningful changes in the nature and quality of information available to market participants. Information-forcing rules additionally presume that consumers and investors have the motivation and cognitive capacity to benefit from required disclosures. For example, it’s why we have such mandates as food and drug labeling requirements, truth-in-lending rules and equal opportunity commitments.

The difficulty currently confronting regulators is that under contemporary conditions of infoglut – of unmanageable, mediated information flows leading to information overload – none of those assumptions is right. To achieve meaningful anti-distortion regulation under conditions of infoglut, a different set of foundational premises is needed.

As Mark Andrejevic explains in his book Infoglut: How Too Much Information Is Changing the Way We Think and Know (published in 2013 with an update in the works), infoglut confounds our most deeply rooted instincts about the role of information in a democratic society. Those instincts “took shape during an era of relative information scarcity,” in which many defining political battles “revolved around issues of scarcity and the restriction of access to information.” The political and epistemological dilemmas of infoglut flow instead from abundance:

Techniques of critique and deconstruction increasingly become tools of the powerful, and sophisticated appeals to emotion and ingrained instinct readily overshadow reasoned argument. For example, the rejoinder to critique is not the attempt to reassert a counternarrative about, say, the scientific consensus around global warming, but to cast doubt on any narrative’s attempt to claim dominance: all so-called experts are biased, any account partial, all conclusions the result of an arbitrary and premature closure of the debate.

Information abundance also enables new types of power asymmetries that revolve around differential access to data and to the ability to capture, store, and process it on a massive scale. Under conditions of infoglut, the problem is not scarcity but rather the need for new ways of cutting through the clutter, and the re-siting of power within platforms, databases, and algorithms means that meaning is easily manipulated.

From a regulatory perspective, the problem with infoglut is that it makes information-forcing rules easy to manipulate and information-blocking rules easy to evade. Julie Cohen uses this example:

Consider first the problem of how to conduct meaningful antidiscrimination regulation and enforcement under conditions of infoglut. To enforce existing antidiscrimination laws effectively, the various agencies with enforcement authority need the ability to detect and prove discrimination, yet that task is increasingly difficult when decisions about lending, employment, and housing are made via criteria deeply embedded in complex algorithms used to detect patterns in masses of data. Markers for protected class membership can be inferred with relative ease and near-impunity from other, seemingly neutral data, and data-intensive methods seem naturally to support arguments about legitimate business justification that can be used to overcome a prima facie case of disparate treatment or disparate impact.

In an era when decision-making is mediated comprehensively by so-called “big data,” regulators will have to contend with the methods by which regulated decisions are reached — i.e., with the algorithm as an instrumentality for conducting (regulated) activity.
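
Cohen’s point about inference from “seemingly neutral data” is easy to demonstrate. The toy below uses synthetic data and assumes numpy and scikit-learn are available; every variable name is invented. A model that is never shown the protected attribute reconstructs it anyway from two correlated “neutral” features – which is why facially neutral algorithmic criteria can quietly carry discriminatory freight, and why enforcement agencies struggle to prove it:

# Toy demonstration (synthetic data; numpy and scikit-learn assumed):
# a model never shown a protected attribute can often reconstruct it
# from seemingly neutral features that correlate with it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
protected = rng.integers(0, 2, n)  # the attribute the law shields

# "Neutral" features generated so they correlate with the protected
# attribute, as ZIP code or shopping patterns often do in real data.
zip_segment = protected * 0.8 + rng.normal(0, 0.5, n)
purchase_mix = protected * 0.6 + rng.normal(0, 0.7, n)
X = np.column_stack([zip_segment, purchase_mix])

X_tr, X_te, y_tr, y_te = train_test_split(X, protected, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"Protected attribute inferred with {model.score(X_te, y_te):.0%} accuracy")
# Well above the 50% chance level: the "neutral" features leak the attribute.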

In general, the existing regulatory toolkit is poorly adapted for scrutinizing algorithmic models. Frank Pasquale notes this in The Black Box Society:

One rudimentary gesture toward algorithmic accountability is the Federal Reserve’s Regulation B, which lists criteria for the Consumer Financial Protection Bureau (CFPB) to use in determining whether credit scoring systems are “empirically derived [and] demonstrably and statistically sound.” The list relies heavily on “accepted statistical principles and methodology,” but leaves unexplained what those principles and methods might be and how they ought to translate into contexts involving automated, predictive algorithms with artificial intelligence components. How can a court or overseeing regulator possibly deal with this?

Infoglut also impairs the ability to conduct effective consumer protection regulation. Consumer protection regulation typically involves both information-forcing and information-blocking strategies. Regulators seek both to require disclosure of material information about quality and other nonprice terms and to prevent marketing practices that are deceptive or that prey upon vulnerable populations. The increasing amounts of information associated with even basic consumer products can be bewildering, however. Chris Dodding (long-time friend, since retired as a product manager at the advertising firm Ogilvy) told me:

In markets for information-related goods and services, consumer awareness is easy to manipulate more directly, and the goods and services frequently are amenable to versioning in ways that embed material nonprice terms within price discrimination frameworks. Yes, we game the system. 

One enormous area where consumers’ inability to self-protect is compounded: providers who use predictive profiles supplied by data brokers to target offers and disclosures. Predictive profiles can convey valuable information about consumers’ priorities and reservation prices, and vendors then can rely on that information to make sure that consumers see only certain marketing materials and feature packages. Over the last five years, scholars, social justice advocates and data protection professionals have accelerated their analysis to draw attention to the linkages between the new types of pattern-based discrimination enabled by data-intensive profiling and the emergence of a seemingly permanent economic underclass. But current consumer protection paradigms framed in terms of notice and choice are ill-suited to address these issues, which are fundamentally issues of economic and social inclusion. The California Consumer Privacy Act did address some of these issues. But as recent commentary has noted, the statute and rules issued thereunder are ambiguous, which suggests that the impacts for certain data-driven businesses in the online advertising ecosystem may not be that significant.
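
To see how predictive profiles turn into differential treatment, consider this stylized sketch – tier names, prices, and the profile field are all invented for illustration. The vendor never refuses anyone; it simply decides which offers a given consumer ever sees, keyed to a broker-supplied estimate of their reservation price:

# Stylized sketch of profile-driven versioning. Tier names, prices, and
# the profile field are invented for illustration.

OFFERS = [
    {"tier": "premium",  "price": 99.0},
    {"tier": "standard", "price": 59.0},
    {"tier": "basic",    "price": 29.0},
]

def offer_shown(profile: dict) -> dict:
    # Show the highest-priced tier at or below the consumer's predicted
    # reservation price; low scorers never see the other offers at all.
    affordable = [o for o in OFFERS
                  if o["price"] <= profile["predicted_reservation_price"]]
    return max(affordable, key=lambda o: o["price"], default=OFFERS[-1])

print(offer_shown({"predicted_reservation_price": 70.0}))  # sees "standard"
print(offer_shown({"predicted_reservation_price": 25.0}))  # sees "basic" only

Notice-and-choice regimes have nothing to grab onto here: every consumer received a disclosure and made a “choice” – from a menu silently curated for them.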

In general, information businesses have attempted to forestall more comprehensive approaches to regulating highly informationalized markets by appealing to those neoliberal conceptions of innovative and expressive freedom I noted above. That strategy is having a clear effect on the regulatory dialogue. As regulators have struggled to develop adequate responses to the ways that infoglut shapes markets, the rhetoric of innovation and private choice has burrowed ever more deeply into the regulatory lexicon. Rob Wells in his brilliant book The Enforcers noted:

Even after the global financial crisis of 2007-2008, both market participants and influential public officials habitually used the term “innovation” to describe what financial firms do. It forestalled scores of attempts to seriously rein in financial services. Seeing a winner, “innovation” started popping up everywhere.

“Innovation” rhetoric also figures prominently in attempts to forestall or water down information privacy regulation.

 

In the following chapters we’ll get further into the weeds of platform power and infoglut, and look deeper at a few current innovations in the regulatory landscape I noted above with respect to accountability and oversight. One increasingly common method involves technical standard-setting. Readers who know and work with the National Institute of Standards and Technology (NIST) and the International Telecommunication Union (ITU) are familiar with these methods.

I will also dig deeper into how data has created such a competitive advantage for these platforms. The virtuous cycles generated by data-enabled learning may look similar to those of regular network effects, wherein an offering — like a social media platform — becomes more valuable as more people use it and ultimately garners a critical mass of users that shuts out competitors. But in practice regular network effects last longer and tend to be more powerful. To establish the strongest competitive position, you need both network effects and data-enabled learning. However, few companies are able to develop both. These platforms have done it, and succeeded.
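
A crude way to see why the two differ: network-effect value is often modeled as growing superlinearly in the number of users, while data-enabled learning tends to saturate – the millionth training example improves the model far less than the thousandth. The functional forms below are illustrative conventions (an n·log n value curve, a saturating learning curve), not empirical estimates:

import math

for users in [1_000, 10_000, 100_000, 1_000_000]:
    network_value = users * math.log(users)        # Metcalfe/Odlyzko-style n*log(n)
    model_quality = 1 - math.exp(-users / 50_000)  # diminishing returns to data
    print(f"{users:>9,} users: network value ~{network_value:>12,.0f}, "
          f"model quality ~{model_quality:.2f}")

# Network value keeps compounding; model quality flattens near 1.0 --
# one reason regular network effects tend to be the more durable moat.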

Many aspects of networked systems are worthy of study, and studying them becomes imperative when discussing competition. A practitioner needs to know the nature of the individual components: how a computer works, how the internet works, how the World Wide Web works, how a human being feels or acts. And a practitioner needs to know the nature of the connections or interactions: the communication protocols used on the Internet, or the dynamics of human friendships.

It all comes down to the most crucial element of behavior of the system: what is the pattern of connections between components and how does that provide competitive advantage, and is that advantage “against the rules”. The DOJ might just be coming around to that view. Read the 55 pages of the complaint against Google, and the attachments, and you’ll come out to where I am: Google may have earned its position honestly, but it is maintaining it illegally.

Additionally, I want to address a larger issue, the structural mismatch between the regulatory state and information-era regulatory problems: the jurisdictional boundaries of the existing administrative framework. By this I do not mean simply that many contemporary regulatory disputes are artifacts of outdated statutory grants of authority, though that is also true. More fundamentally, many information-economy activities are “driving outside the lines”, completely off the regulatory organization chart, blundering around and across existing lines of authority. Activities such as digital content protection, pharmaceutical patenting and regulation, artificial intelligence-driven predictive profiling, regulation of health-related information services, etc., etc. all implicate the jurisdiction of multiple entities.

Thank you for reading.

NOTE: this is a rough draft of the Introduction to the monograph. All questions, comments, critiques are very much welcome. Please email me at:

[email protected]
