Book Review: The Unaccountability Machine (Dan Davies)
The book offers a compelling balance of cynicism and earnestness
I highly recommend Dan Davies’ book The Unaccountability Machine. It offers a new way of thinking about the ways that organizations succeed and fail, and the reasons why they can fail in unpredictable ways when confronted with complex challenges that they aren’t prepared for (the subtitle is “Why Big Systems Make Terrible Decisions — and How The World Lost Its Mind”).
It’s very readable; it works through a series of concepts sequentially without feeling like a slog. I have questions about the book, but to get a sense of why it’s worth reading I would recommend Henry Farrell’s “What Stafford Beer and Dan Davies say and why you need to read them”:

We live in a complex world which keeps on producing variety that builds on previous variety. That means that there are many, many surprises - complex systems are by their nature difficult to predict. If you do want to anticipate these surprises, and even more importantly, to manage them, you need to have your own complex systems, built into your organization. And these systems need to be as complex as the system that you’re trying to manage.
Hence, the “Requisite Variety.” In Dan’s summarization “anything which aims to be a ‘regulator’ of a system needs to have at least as much variety as that system.” Or, put a little differently, “if a manager or management team doesn’t have information-handling capacity at least as great as the complexity of the thing they’re in charge of, control is not possible and eventually, the system will become unregulated.”
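To make the point concrete, here is a toy simulation (my own illustration, not from the book; the numbers are invented). A regulator with fewer distinct responses than its environment has distinct disturbances must, by counting alone, let some surprises through unhandled:

```python
import random

# Toy illustration of Ashby’s Law of Requisite Variety (my gloss, not Davies’):
# a regulator holds outcomes steady only if it has at least one distinct
# response for every distinct disturbance the environment can produce.

def unregulated_fraction(num_disturbances: int, num_responses: int,
                         trials: int = 10_000) -> float:
    """Fraction of random disturbances the regulator fails to absorb."""
    # The best any regulator can do is dedicate each response to one
    # disturbance; everything beyond its repertoire goes unhandled.
    handled = set(range(min(num_responses, num_disturbances)))
    failures = sum(1 for _ in range(trials)
                   if random.randrange(num_disturbances) not in handled)
    return failures / trials

print(unregulated_fraction(10, 10))  # ~0.0: variety matches variety
print(unregulated_fraction(10, 4))   # ~0.6: six in ten surprises go unregulated
```

No cleverness in assigning responses changes that arithmetic; only adding variety (information-handling capacity) on the regulator’s side does.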
You can see why the book speaks to my earnestness. Despite Davies’ entertaining cynicism, it offers the possibility that the world is full of people who are trying to do their best and failing in ways that reflect poor systems rather than malice¹. In fact the dedication of the book celebrates, “the middle managers of the world, the designers of spreadsheets and the writers of policies. Your work may be prosaic, but you are the ones who shape the world we live in.”
I particularly appreciate the connection Farrell draws between this book and another recent book about the need to improve the technical infrastructure used to implement policy. Brad DeLong has also written about the book and hails it as a good contribution to the grand problems of civilization:

The most cybernetic book, apart from Dan’s, that I have read in the last few years is Jen Pahlka’s Recoding America, even if it doesn’t mention Beer, cybernetics or any of the technical terms that I’ve been sharing with you. If you read Jen’s book carelessly, you might come away with the impression that it is about the U.S. government’s incompetence at contracting out software development. If you read it carefully, you will realize that it is actually an applied informational theory of the state. The U.S. government is bad at making all kinds of policy in a non-hierarchical way. Everything seems to come from the top. Old policies are rarely erased, and new ones are perpetually layered on top, in ways that are at best inefficient, and at worst contradictory in mutually toxic ways. Previous efforts to fix the problem (e.g. through the Paperwork Reduction Act) have tended to make it worse. And civil servants have every incentive to just go along with the orders from the top, making “concrete boats” (to use a pithy phrase from one of the jobsworths that Jen talks to) without paying any attention to whether they will float, or whether they are wanted in the first place.
So what do we do? Davies says the first step is for him to write his book, attempting to revive what was once an important intellectual movement of the post-World War II world, cybernetics—Norbert Wiener’s idea that there should be principles that we can discover about how to make our increasingly large and complex systems of human organization comprehensible, and manageable by human beings. The root is the Greek kybernētikos, meaning “good at steering a boat”. Cybernetics would have been a discipline, metaphorically, about how to steer a boat, or perhaps about how to build a boat that can be steered.
…
Thus the book is one of great, if Sisyphean, hope. We can fix our excessive dependence on unaccountable inhuman-scale systems. It is a problem of information flow: greater transparency, human oversight, and reintroducing personal responsibility. But this requires conscious efforts to combat and fix the tendency towards unaccountability and system opacity and misoptimization.
The book offers reasons for the optimism that Farrell and DeLong describe; it both argues that there are better ways to understand organizational function and identifies some specific intellectual traps that people fall into, along with how to climb out of them.
One way to think about the insights of the book is to recall the reason that Brad DeLong and Noah Smith gave, two years ago, for why it would be desirable for Elon Musk to take over Twitter. Broadly speaking, they argued that the then-current leadership of Twitter didn’t appear to have a good sense of the problems they needed to solve. They thought that having someone who “knows how to make things well” and is also an active user of the platform could be an improvement.
This book suggests how hard it is to build an organization that can be responsive to changing conditions (and, certainly, Twitter found itself needing to solve problems which were very different from the company’s original strengths).
One of Stafford Beer’s key criteria for establishing a viable system was that careful attention needs to be paid to preserving information when a signal crosses a boundary — the ‘translation and transduction’ problem. There is always the issue of ensuring that information is received in a form and at a time which allows it to be part of the decision-making process, but it is also the case that communication is just difficult. As anyone who has tried to organize a moderately complicated set of meetings between people at different organizations knows, cross-purposes and misunderstandings are frightfully common.
A lot of the seeming redundancy in middle management used to be dedicated to mitigating this problem; extra capacity and processing power was installed at the communication boundaries, to make sure messages got through with the appropriate degree of nuance and content, and that wobbles and flutters would be handled at the appropriate level. This is one of the things that gets thrown away when companies outsource. By looking for activities in which they could be global leaders and outsourcing everything else, companies exchanged internal boundaries for external ones. Since a large part of the reason for doing so was to economize on management capacity, these relationships necessarily attenuated information — that was the purpose of doing so.
In other words, the ability to apprehend the nature of an event — is it an opportunity or a warning, and in what ways — is a skill, and organizations, like people, can be better or worse at it. Doing it well requires both capacity and experience. That capacity can seem like an unnecessary expense when things are going well, but it can be very valuable for dealing with new information or changing circumstances, and it’s hard to build once a crisis is on the horizon.
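As a toy illustration of that attenuation (my own sketch, not Beer’s model; the message contents are invented), imagine each boundary as a channel with finite capacity that keeps only the nuance it has room for:

```python
# A “telephone game” sketch of the translation-and-transduction problem
# (my illustration, not from the book): every boundary a message crosses
# has finite capacity, so lower-priority nuance is silently dropped.

message = {  # nuance, ranked by how obviously “material” it looks upstream
    1: "Q3 output dipped 4%",
    2: "the dip is concentrated in one region",
    3: "a competitor is poaching our best field staff there",
    4: "morale in that office cratered after the last reorg",
}

def cross_boundary(msg: dict[int, str], capacity: int) -> dict[int, str]:
    """Keep only the `capacity` highest-ranked items; the rest attenuates."""
    return {rank: msg[rank] for rank in sorted(msg)[:capacity]}

# A well-staffed internal boundary passes most of the nuance along...
print(cross_boundary(message, capacity=3))
# ...but chain two thin, economized boundaries and only the headline survives.
print(cross_boundary(cross_boundary(message, capacity=3), capacity=1))
```

The seeming redundancy of middle management, in this telling, is capacity installed at exactly these boundaries, so the fourth line of the message still exists somewhere when it turns out to matter.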
You can see the same question in other circumstances. For example, consider the much-discussed pattern of enshittification.
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a "two sided market", where a platform sits between buyers and sellers, hold each hostage to the other, raking off an ever-larger share of the value that passes between them.
One wonders to what extent this reflects conscious choices and a desired outcome and to what extent this represents an organizational problem — of decisions getting made to improve some metric at the time, but also shifting the platform in a way that puts it on a path to driving away users.
I also see the same dynamic that Davies is writing about when reading this description:

From [Mondale] on, the Democrats have all been meritocratic and technocratic managers, running on the premise that they were the best to run. Most of them have been weak campaigners, with the exception of Obama. … None of them have seemed to have a sense of their underlying electoral coalition beyond what professionals told them during active campaign seasons. I don’t think that’s somehow a remarkably coincidental set of personal shortcomings. … Their short-sightedness and limited aspiration was shared by tens of thousands of Americans like them.
That sounds to me like something which is not necessarily a personal failing, and quite possibly a problem of “preserving information when a signal crosses a boundary”: we have lost a number of intermediate institutions which served to connect voters and politicians — sending information about voters’ concerns upwards and supporting a shared structure for thinking about the purpose and reasons for policies.
However, that points to a significant problem for the initial optimism of the book. Davies’ writing is helpful for someone inside an organization trying to figure out how to improve things², but it’s less clear how someone outside the organization would tell what solutions are necessary or how to push for those solutions.
Davies offers a couple of broad policy recommendations (mostly trying to restrict leveraged buy-outs) and some suggestions for how to recognize the problems he’s writing about, but they are limited. If readers of the book wanted to advocate for a society that is more attuned to the problems and solutions Davies discusses, it’s not quite clear what that would look like.
In fact the book opens by discussing a slightly red herring (a pink herring?) that Davies encounters. He introduces the idea of an “accountability sink” — a structure which is intended to follow policy and allow no recourse for feedback, even when the policy is absurd, thereby not offering any specific point of accountability.
I dreamed of giving talks to huge audiences, gently chiding the managers of the world for avoiding accountability, and perhaps ushering in a new age of responsive government. I had even, to my utter shame, begun to coin a law that I could foresee appearing in books of aphorisms alongside Murphy’s Law:

“The principle of diminished accountability: unless conscious steps are taken to prevent it from doing so, any organization in a modern industrial society will tend to restructure itself so as to reduce the amount of personal responsibility attributable to its actions. This tendency will continue until crisis results.”
I can still remember the crushing disappointment of realizing that it was all too simple.
…
Many of the things I’ve identified as “accountability sinks” could just as easily be called “the rule of law.”
To its credit, the resulting book offers a much richer description of how to think through the problem, but none of that is quite as visible externally as the original idea of being able to identify “accountability sinks.”
For a concrete example, consider applying these ideas to two Michael Lewis books. The Fifth Risk is, in many ways, about the excellent work being done deep in government bureaucracies (often in the Commerce Department), the fact that much of that work isn’t directly visible to the public, and the fact that if that work were crippled in some way (either discontinued or restructured to offer less public value) we might be very slow to figure out that it had happened. There isn’t an obvious reason that would lead someone, going about their life, to consider whether that work is being done by a functional or dysfunctional organization. However, his later book The Premonition: A Pandemic Story is a vivid demonstration of a public bureaucracy suffering from exactly the problems that Davies writes about — lacking sufficient capacity to process information and make decisions that went beyond existing policy and habits.
Compare Davies’ summary of the necessary functions of an organization, “Think of soldiers, quartermasters, battlefield commander, reconnaissance and field marshal, or… musicians, conductor, tour manager, artistic director and Elton John…” with this description from Lewis:
"And Charity picked up all of Carter Mescher's analysis. And she said it was like pouring water on a dying plant, that it was the first person she met who was thinking about this threat the way she was thinking about it," Lewis says. "And so she's very soon on the private calls. ... Think of her as an actual battlefield commander. She's in the war, in the trenches, as if she's figured out in the course of her career in public health that there are no generals or the generals don't understand how the, how the battle's fought. And she's going to have to kind of organize the strategy on the field."
…
"[then-CDC Director Robert] Redfield is a particularly egregious example, but he's an expression of a much bigger problem. And if you just say, 'oh, it's the Trump administration' or 'oh, it's Robert Redfield,' you're missing the bigger picture," Lewis says. "And the bigger picture is we as a society have allowed institutions like the CDC to become very politicized. And this is a larger pattern in the U.S. government. More and more jobs being politicized, more and more people in these jobs being on shorter, tighter leashes. More the kind of person who ends up in the job being someone who is politically pleasing to whoever happens to be in the White House. And so ... the conditions for Robert Redfield being in that job were created long ago."
The book provides useful analysis of how someone might have tried to make the CDC more effective. It would make the world a better place if that were to happen, but, reading it, it’s unclear to me how the political system would actually come to pressure the CDC to do better, rather than exerting the existing pressures that Lewis mentions.
I desperately hope that the pessimism I’m expressing here is just learned habit: an expectation of non-responsiveness which could be changed. But I have a hard time figuring out what would make these problems visible and comprehensible from the outside before a crisis arrives.
Dan Davies tells this anecdote:
Once upon a time, in the early to mid stages of the Global Financial Crisis, a client said to me …
“Danny, since this thing began, there have been two types of analysts. Some people, like yourself, have been trying to develop their understanding of an incredibly complicated system, under huge pressure, absorbing vast amounts of technical detail in a short time, and doing a fairly good job of it. Others have just been mindless bomb-throwers, trying to attract attention to themselves with ill-informed displays of competitive panic. I decided early in this crisis that I was going to listen to the second type of analyst – and they have turned out, systematically, to be much closer to being right”
That’s funny, and it’s clear that the client is partially teasing him. But it’s also a reminder that it’s really difficult to anticipate, in advance, what might break under stress. It’s hard enough to do that inside one’s own sphere of knowledge; it’s particularly tricky to guess what might break somewhere else. Davies says that conversation improved his work:
He was correct; I changed my approach as a result and consequently, I think, did a much better job of understanding the Eurocrisis. There’s a very great danger in believing that either a) the whole problem is of a size that you can fit in your head, so understanding it is just a matter of working hard enough, or that b) the relationship between the amount of detail you know and your understanding of the system is positive and monotonic. This is often not the case.
But that may be a difficult lesson to teach… it often takes practical experience of making those mistakes to be able to look for them in the future.
[ETA] I should also mention that, while Davies identifies some cases where choices moved things in the wrong direction, the solution isn’t simply going back to the previous working version.
Every new idea in management seems to come as a reaction to the previous generation’s attempts to find the right way to solve these two problems without taking on too much in the way of overhead costs. And for the same reason, the management practices of the previous decades will always seem hilariously dated. The complexity of the environment is always increasing, and technological progress means that there will always be new ways of getting information from where it is generated to where decisions are taken. Of course management theories change — when the foundations of the problem are constantly shifting, the answers are bound to change.
If you translate this problem into the more abstract language of cybernetics, it becomes easier to understand. In the business environment, complexity (environmental variety) will naturally increase. In any given organizational structure, the variety which management systems can bring to bear also increases, but more slowly. At some point the difference in growth becomes critical; the organization needs to change its architecture so that variety is matched to variety at every level of decision making. John Paul Getty of Getty Oil was able to read status reports from all his rigs every morning in the 1930s, but that’s simply not possible for a company like BP or ExxonMobil today.
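A back-of-the-envelope version of that crossover (my own sketch; the growth rates and threshold are invented, purely illustrative):

```python
# Illustrative numbers only (mine, not Davies’): environmental variety compounds
# faster than management variety, so the gap periodically forces a redesign.
env_variety, mgmt_variety = 100.0, 100.0
ENV_GROWTH, MGMT_GROWTH = 1.07, 1.03   # the environment complexifies faster
TOLERABLE_GAP = 1.5                    # unmatched variety the structure can absorb

for year in range(1, 51):
    env_variety *= ENV_GROWTH
    mgmt_variety *= MGMT_GROWTH
    if env_variety / mgmt_variety > TOLERABLE_GAP:
        print(f"Year {year}: variety gap critical; rearchitect the organization")
        mgmt_variety = env_variety     # a reorganization re-matches variety to variety
```

On these made-up numbers the structure hits its limit roughly every decade: the point is not that there is one correct architecture, but that rearchitecting recurs as environmental variety compounds.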
An introductory note says, “A great deal of intellectual energy is wasted on trying to attribute events to the categories of ‘conspiracy or cock-up’, when most of them should probably be blamed on something more abstract. History is the study of decisions, not of events, and many decisions are best understood as the outcome of larger systems rather than individual acts of will.”
He has offered a simple version of the cybernetics analysis that is the subject of the book:
The following checklist is adapted from the one in “Creative Problem Solving” by Flood & Jackson (hahaha look at the prices of academic books), and I think they make a reasonable case that these are the most common problems to find in a Viable Systems Model analysis:
· Signals simply aren’t being transmitted between different systems, or the capacity to translate them into action hasn’t been maintained, or they aren’t being transmitted fast enough.
· An important operational subsystem hasn’t been identified as such, and consequently doesn’t have internal management structures of its own.
· An administrative subsystem that ought to be serving the direct operations has started to act as if it was a viable entity on its own and started to function at the expense of the overall system rather than for its benefit.
· The regulatory structures (timetables, inventory control, etc) which are meant to stop the operations getting in the way of each other are being ignored or destroyed by operational managers who find them annoying or inconvenient.
· The intelligence function is weak and regarded as a “staff” or “head office” function rather than being integrated into the “line” management.
· The highest level “philosophy” function isn’t performing its role of balancing present and future demands, usually because it’s got too involved in day-to-day management because the intelligence function is weak.
· “Line” management are interfering with the detail of operations rather than performing their co-ordinating and optimising role.
If any of these statements are true about an organization, then you have a “that’s yer problem right there” moment – the principle of information capability matching (Ashby’s Law) isn’t being respected, and the consequences are likely to be similar to forgetting about the expansion of gases in a rocket.
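As a crude way to see the checklist’s shape as a diagnostic (my own toy encoding, not from Davies or Flood & Jackson; the probe names are invented):

```python
# A toy encoding of the Viable Systems Model checklist above (my sketch):
# each probe is a yes/no question about the organization, and any “yes”
# is a “that’s yer problem right there” flag.

VSM_PROBES = {
    "lost_signals": "Signals between subsystems are dropped, untranslated, or too slow",
    "headless_operation": "An operational subsystem lacks management structures of its own",
    "self_serving_admin": "An administrative subsystem acts as a viable entity at the whole’s expense",
    "ignored_regulation": "Coordinating structures (timetables, inventory control) are ignored or destroyed",
    "detached_intelligence": "Intelligence is a ‘head office’ function, not integrated with line management",
    "distracted_philosophy": "The top level does day-to-day management instead of balancing present and future",
    "meddling_line": "Line management interferes with operational detail instead of coordinating",
}

def ashby_violations(findings: dict[str, bool]) -> list[str]:
    """Return descriptions of every probe flagged true; an empty list means no
    obvious mismatch between variety and information-handling capacity."""
    return [VSM_PROBES[name] for name, flagged in findings.items() if flagged]

# Example: an organization with a weak intelligence function and, as a
# consequence, a top level sucked into day-to-day management.
print(ashby_violations({"detached_intelligence": True, "distracted_philosophy": True}))
```

None of this replaces the book’s richer treatment; it just shows how mechanical the first pass of a VSM diagnosis can be.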