On trustware
This blog post digs into the politics of trustware. “Trustware” is a new word for an old thing: a system that reifies expectations about how social relationships should work, making it easier for participants to conform to those expectations than to subvert them. I focus on three cases of trustware—the cash register, the national identity system, and the hacker handle—to illustrate how designers’ needs dictate whose trust trustware serves. I call for restorative trustware, which counteracts incumbent power to equalize power dynamics in trust relationships.
This is a written version of a talk I gave at the Metagovernance Seminar. Many thanks to Metagov for the space, time, and feedback to think through these ideas. You can find a video recording of this talk here.
“Trustware”
Orca, a fellow builder of DAO tooling, recently coined the term “trustware,” which they defined as “a subset of digitization that focuses specifically on trust agreements that incur a social cost through coordination.” The goal of trustware, in Orca’s telling, is to place “code at the center, humans at the periphery.” Trustware should create assurances around things that would otherwise require building and maintaining social trust relationships.
As Karl Polanyi would remind us, engineering can “automate away” neither social forces nor their “costs.” Engineering can reify, amplify, or counteract social forces, but it cannot remove them. Consider, as Ben-Zvi and Weber did in their recent piece, the Coase theorem, which predicts that, absent transaction costs, bargaining will produce an efficient allocation of resources no matter how rights are initially assigned. Quoting Ben-Zvi and Weber:
The Coase theorem became the herald of a new and profound laissez faire ideology that didn’t just argue (as did the Reagan-Thatcher neoliberal movement) for the reduction of government control of markets. It went further to propose that a society built on marketizing literally every possible social interaction would be more efficient, more sustainable, more fair, and more gratifying and meaningful for human beings.
[...]
Web 3 wants to go all the way — by ignoring, routing around, and ultimately driving into irrelevance the non-market forces and actors that get in the way. It’s another utopian (or dystopian) market-world vision...
Undoubtedly, a naive techno-utopianism runs through web3. Many in the community would wish away (or, rather, “innovate away”) the frictions of incumbent institutions. While that goal remains a pipe dream, there is something to the notion that web3 toys with trust. Systems like voting and payroll do with smart contracts what might have been done by human officials. DAOs place code where people might have been.
Does trustware place humans at the “periphery” of trust relationships? My thesis is that it does not. Rather, trustware reifies expectations about how social relationships should work, making it easier for participants to conform to those expectations than to subvert them.
The cash register
In The Psychology of Human Misjudgment (1995), Charlie Munger introduces incentive-caused bias as a primary driver of human behavior. He summarizes this bias with a proverb: “whose bread I eat, his song I sing.” To solve any business problem, management consultants will suggest more management consulting. To resolve any legal issue, lawyers recommend more lawyering. To progress on any matter of intellectual concern, a peer-reviewed paper will always conclude by suggesting further research. The essence of incentive-caused bias is this: people are more likely to believe an action is justified when it aligns with their existing incentives. Munger ties this bias to the story of cash register magnate John Henry Patterson:
One implication [of incentive-caused bias] is that people who create things like cash registers, which make dishonest behavior hard to accomplish, are some of the effective saints of our civilization because, as Skinner so well knew, bad behavior is intensely habit-forming when it is rewarded. And so the cash register was a great moral instrument when it was created. And, by the way, Patterson, the great evangelist of the cash register, knew that from his own experience. He had a little store, and his employees were stealing him blind, so that he never made any money.
Why do cash registers work, in Munger’s telling? Because they make stealing from the till hard enough that stealing is no longer in the service worker’s interest. Cash registers were not and are not “provably secure”; they are secure enough to counteract the service worker’s incentive. They make it easier to comply than to disobey.
Critically, cash registers do not place “machines at the center, humans at the periphery” of commercial transactions, much to the chagrin of capitalists in both Patterson’s time and ours. Service-sector employment—“personing the register,” as it were—has been stubbornly resistant to automation in the decades since the cash register’s invention, well into the supposed demise of brick-and-mortar retail prophesied in the dot-com years. The reason, of course, is that there is much more to service-sector jobs than moving products and cash. A rich tapestry of formal and informal work holds the service industry together.
What, then, do cash registers do? They reify and enforce expectations about who gets paid first: the boss. They are not the “moral instruments” Munger would have us believe (whose morality prohibits stealing from one’s boss?). Instead, they counteract incentives to do things their designer doesn’t want done.
This is the essence of trustware. Trustware is not de novo. Cash registers are a pre-digital trustware, one that configured trust to favor the commercial boss. And cash registers illustrate an essential question for trustware broadly: who is the designer, and what do they (not) want?
Personhood
As I’ve written previously, identity systems are a vital facilitator of social trust. It is identities—natural-born persons, brands—with which we, ourselves legal persons, form trust relations. “Trustware” of any kind must contend with them.
Web3 embeds a specific notion of identities. It does so through wallets: a wallet is a pair of cryptographic keys with a few methods for using them. Wallets are singular identities, but they do not necessarily correspond to natural-born persons. They are a model of identities, not of personhood: a single person can make as many wallets as they like.
The problem here is evident to anyone who’s ever received (or sent) spam: a single person with multiple identities can distort governance and discourse. In the parlance of security, this is called a Sybil attack. In the context of DAOs, a Sybil attack corresponds to the “dead people are voting” threat model in electoral democracies. (A lack of Sybil resistance birthed “the Juno whale”). In contrast, a DAO that can establish unique personhood can (in theory) provision elections that are at least as free and fair as vote-by-mail. So Sybil resistance—the ability to mitigate Sybil attacks “well enough” relative to some security objective—is likely a frontier for anything that can be labeled “trustware.”
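To make the threat concrete, here is a minimal sketch in Python (toy addresses and hypothetical numbers, not real key derivation) of why one-wallet-one-vote invites Sybil attacks while token-weighted voting is merely indifferent to them:

```python
import secrets

# A "wallet" here is just a fresh random string standing in for an
# address; real wallets derive addresses from elliptic-curve keypairs,
# but the point is the same: nothing ties a key to a person.
def new_wallet() -> str:
    return secrets.token_hex(20)

# One person, many identities: 100 tokens split across 10 wallets.
one_person = {new_wallet(): 10 for _ in range(10)}
# Ten other people, each with 10 tokens in a single wallet.
ten_people = {new_wallet(): 10 for _ in range(10)}

# Token-weighted voting is unmoved by the split: 100 tokens vs. 100.
print(sum(one_person.values()), "tokens vs.", sum(ten_people.values()))

# One-wallet-one-vote is not: the Sybil casts 10 of the 20 ballots.
print(len(one_person), "ballots vs.", len(ten_people))
```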
In thinking through Sybil resistance, consider another form of pre-digital trustware: national identity systems. In general, these systems roll together two disparate concepts. One is identity. Identity is a coherent set of facts about a person: one’s name, date of birth, gender, appearance, and biometric data may or may not appear in a given identity system. The role of an identity scheme is to map those facts to a unique identifier (e.g., a number). The other is unique personhood—the notion that, regardless of who you are, you are permitted exactly one unique identifier. The role of a unique personhood scheme is to ensure that any single person holds a maximum of one identity.
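As a rough sketch (in Python, with hypothetical names throughout), the two concepts separate cleanly: an identity scheme is a mapping from facts to a fresh identifier, and a unique personhood scheme is a uniqueness check layered on top of it:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Registry:
    """A toy national-identity registry, for illustration only."""
    identities: dict = field(default_factory=dict)  # id number -> facts
    persons_seen: set = field(default_factory=set)  # fingerprints of persons

    def register(self, facts: dict, biometric: bytes) -> int:
        # Unique personhood: refuse a second identity for the same person.
        fingerprint = hashlib.sha256(biometric).hexdigest()
        if fingerprint in self.persons_seen:
            raise ValueError("this person already holds an identity")
        self.persons_seen.add(fingerprint)
        # Identity: map a coherent set of facts to a fresh unique number.
        id_number = len(self.identities) + 1
        self.identities[id_number] = facts
        return id_number
```

All of the difficulty hides in the uniqueness check: real systems substitute documents, interviews, and in-person appearances for the biometric stand-in here, and each substitution is a place where trust (or coercion) enters.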
So, how might a DAO establish unique personhood? Ask a cryptographer that question, and you’ll likely hear an answer about proof-of-personhood. Proof-of-personhood schemes aim to ensure a one-to-one mapping between natural-born persons and cryptographic identities. Particular strategies vary from the straightforward (a single, centralized institution) to the radical (decentralized “pseudonym parties” and webs of trust), but all fall into two basic models. The first is an emergent model. In an emergent model, people go about their lives and meet others, learn who they are, remember them, and attest their identities to others. This strategy is how personhood worked in some hazy and amber-hued “back in the day” before formalized systems of identity and account. We lived with people and got to know them. Unique personhood emerged from our shared understanding of who they were, of what their ‘deal’ was.
The second model is a centralized one. In a centralized proof-of-personhood, a trusted institution does the work of establishing the uniqueness of persons. Centralized personhood is the Napoleonic system that arrived in the Netherlands and gave every person a surname, settling for “doesn’t wear pants” when the need arose. It is the Spanish forcing Spanish-language surnames on Indigenous Californians. This model describes most formal or state-run systems we interact with, like the DMV or the passport office.
Speaking in broad strokes, the web3 community prefers emergent schemes to centralized ones. (“Why trust an institution when you don’t have to?”) Here’s the rub: the best-available emergent scheme, pseudonym parties, cannot scale to the size of a nation.1 Does this mean that decentralized, global identity systems are impossible? Of course not. We live in a world of decentralized and global identities: each nation issues its own passports. How does this work? One centralized issuer (say, Canada) can assure that I hold only one Canadian passport, but how can they assure that I hold no other passports—even worse, passports in some other names? Well, they can’t. The best they can do is trust. An ecosystem of centralized and emergent schemes confederates and overlaps to produce multiple identities. Each system is imperfect, but each is good enough that, together and in concert, they accomplish a wide range of functional goals.
Take passports. Passports are a morally imperfect system (look at the many who are locked out of global mobility thanks to where they are born); however, from the perspective of any one participant (e.g., a particular country’s policymakers and intelligence officers), the system works well enough for their purposes. Canadian border guards can trust U.S. passports and vice versa. Canadian border guards may not trust U.S. officials with everything, but they trust them to run a passport scheme that works for Canada. Within any one system, trust is robust enough for its designers’ purposes.
In their review of decentralized proof-of-personhood schemes, Siddarth et al. (2020) uncovered a simple but devastatingly important truth: proof-of-personhood schemes that work lean into, rather than away from, social trust. The takeaway here is clear: from cash registers to identity schemes, trustware is not about decreasing coordination costs; or, at least, decreasing those costs isn’t what makes trustware valuable. Trustware formalizes and codifies the rules by which social trust is established. The value of trustware is in this encoding.
Handles
The most compelling scheme for Sybil resistance that I’m aware of has nothing to do with cryptography. It’s hacker handles. Elsewhere, outside of this blog, I am elsehow. People—some of whom I work with daily—know me only by this name. There are people who would give you a blank look if you told them you knew Nick Merrill but would light up if you told them you knew elsehow.
How did this happen? Thanks to my unending capacity to exhaust myself, I developed two robust identities: Nick Merrill, a research scientist, and elsehow, a hacker. That both identities refer to the same person is no secret; occasionally, someone from one world is curious about my presence in the other. But, broadly, the two identities neither interact nor need to. They address different audiences. They sing different songs, eat different bread.
The Nick Merrill identity I have built since birth, amassing state-given records and state-sanctioned credentials (like education and traditional employment). But how did elsehow come to be? A profile called “elsehow” emerged on the internet. Then, several profiles appeared across several platforms. Not only did these profiles ‘do’ things that only a human would do (like publishing code or writing messages), they all spoke to one another. They used the same avatars; they spoke the same language, perhaps even with the same quirks.
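Here’s a rough sketch of the pattern (hypothetical profiles and platforms; a Keybase-style mutual link check, simplified well past anything any platform actually verifies for you):

```python
# Hypothetical profiles: an identity grows credible when its profiles
# on different platforms point back at one another (Keybase-style).
profiles = {
    ("github", "elsehow"):  {"links_to": {("twitter", "elsehow")}},
    ("twitter", "elsehow"): {"links_to": {("github", "elsehow")}},
}

def mutually_attested(a, b) -> bool:
    """Do two profiles each claim the other?"""
    return b in profiles[a]["links_to"] and a in profiles[b]["links_to"]

print(mutually_attested(("github", "elsehow"), ("twitter", "elsehow")))  # True
```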
In other words, someone who observes elsehow could build trust in them. The more that observer interacted with elsehow, the thicker their trust could grow. This model, in which trust builds over repeated observation and interaction, undergirds zero-trust architectures in computer security. Add on the rich layers of nuance and detail that emerge in social life, and we can see how people can build trusted relationships with usernames without ever meeting, seeing, or talking to the humans behind them.
Is this identity robust enough to partake in a voting system? Yes. To the constant amusement of the non-hacker community, this identity alone is “robust” enough to put me on the web3 equivalent of corporate boards (e.g., controlling large voting shares in a well-funded DAO). In some DAOs, most members know most of the others only by and as their pseudonyms.
How is this possible? How can we be sure that one person doesn’t control many of these? Because we, as a group, have observed each other to be unique. We have worked together and responded to messages, often simultaneously and occasionally in conflict with one another. We have expressed unique beliefs and opinions and changed those opinions over time through deliberation and discussion. Never, through this process, have any of us suspected any one of us of being any of the others. Not only that: we’re confident that we would have. We have worked with each other enough that we would have found out if one person were frantically personing two cash registers.
What can we say about Sybil resistance in these circumstances? Interactions over time build trust. This trust is transitive to a point (if I trust Meow, and Meow trusts Cyborg69 completely, I trust Cyborg69 exactly as much as I trust Meow, plus or minus any ‘vibe’ I get from observing or interacting with Cyborg69). This web never “proves” anything, least of all the humanness or uniqueness of any participant. It does not need to. It only needs to work well enough to establish uniqueness, given our shared expectations about the resources required to circumvent it.
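That parenthetical rule is, in effect, a max-min computation over a trust graph: my trust in a stranger is the strongest chain of acquaintance between us, and a chain is only as strong as its weakest link. A minimal sketch (hypothetical handles and weights; the ‘vibe’ correction omitted):

```python
# Trust propagates along the strongest path, where a path is only as
# strong as its weakest edge (max over paths of the min edge weight).
def trust(graph: dict, src: str, dst: str, visited=None) -> float:
    visited = (visited or set()) | {src}
    if src == dst:
        return 1.0
    best = 0.0
    for peer, weight in graph.get(src, {}).items():
        if peer not in visited:
            best = max(best, min(weight, trust(graph, peer, dst, visited)))
    return best

web = {
    "me":   {"Meow": 0.9},
    "Meow": {"Cyborg69": 1.0},  # Meow trusts Cyborg69 completely
}
print(trust(web, "me", "Cyborg69"))  # 0.9: exactly as much as I trust Meow
```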
And what can we say about trustware? First, it has no one center. My handle spreads across many platforms and technical systems (GitHub, Discord, and beyond), each with its own affordances for building trust. Second, it is not a technical system alone. It is a sociotechnical one: trustware is the technical affordances of particular systems, together with the people around whom those affordances coalesce into a structure in which trust can emerge.
What holds any system together—from a liberal democracy to an open source project—is that its participants trust it to do its job as much as it needs to be trusted. I trust Google, and the corporate and political ecosystem in which it’s suspended, enough to give them the data I give them. I trust them with that data. I wouldn’t trust them with my organs. On the other side, I trust my partner with my life, but I do not trust her to keep her web browser up to date.
As my collaborator Coye Cheshire has said since the web1 days: trust is social and contextual. There is no “trust,” only “trust with.” Bosses trust their employees with all of the transactions of a business—as long as there’s a cash register involved.
Whose trust?
The world is filled with cash registers. They are the turnstiles at the subway, the ankle monitors on convicts. They are the pace-setting systems that push pharmacists beyond safety and Amazon warehouse workers beyond the limits of exhaustion. These machines make it harder to do what The Boss doesn’t want done: skipping a fare, leaving one’s house, taking a break at work. They are the systems that create a warped kind of “trust” around an imbalanced power relation—and aim to tilt that relation further in the boss’s favor. Neoliberal governance has turned the world into a cash register. Endless legal fictions keep not only employees but also citizens, taxpayers, and national governments away from the money that flows through the proverbial till.
Trustware is undoubtedly real. It is not undoubtedly good. We—the developers, maintainers, and users of trustware—can produce trustware that persists and embeds liberatory ideals: flexible and mutable identities; free association; voluntary entrance and exit. These systems will be as good as we, the developers, need them to be. The question trustware makers—me, us—will have to grapple with is: what kinds of trustware counteract incumbent power and equalize trust relationships?
Restorative trustware
How do we build trustware that equalizes power among participants? First, we need to know whose goals are left at the periphery. Second, we must bring those people inside and allow them to deliberate. There are models for this in deliberative polling. Deliberative polling is deliberation via sortition: people are selected by lottery to deliberate on an issue that affects them. In practice, deliberative polling has been critiqued for a lack of representativeness: only privileged or highly interested people tend to participate. DAOs have the capacity to counteract this bias by paying people for their participation. The rewards of learning the needs of people outside the DAO could be great.
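At its simplest, the mechanism is a lottery plus a budget line. A sketch in Python, with sizes and stipends that are purely hypothetical:

```python
import random

# Sortition with a participation stipend: draw a deliberative panel by
# lottery from everyone the issue affects (not just token-holders), and
# budget stipends so participation isn't a luxury of the privileged.
def draw_panel(affected_people: list, panel_size: int, stipend: int):
    panel = random.sample(affected_people, panel_size)
    budget = panel_size * stipend
    return panel, budget

community = [f"person_{i}" for i in range(5000)]  # inside and outside the DAO
panel, cost = draw_panel(community, panel_size=25, stipend=300)
print(f"{len(panel)} deliberators; {cost} tokens reserved for stipends")
```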
Only by encountering those outside our DAO will we learn where our incentive-caused bias lies, and to what possibilities it has made us blind. Knowing the outside isn't the end of equity, but it's a start.
Many thanks to Divya Siddarth for moderating the talk at Metagov, and to the rest of Metagov for their feedback.
1. Why? Because they require people to be in the same room at the same time (Ford, 2020). Even worse, we cannot “merge” pseudonym parties straightforwardly; people can always travel from one to the other to create more identities. You can fudge it a bit with travel times via best-available transit, but at some point, you have to throw your hands up and admit the scheme has limitations.