Bio & headshot
Short: Nick Merrill directs the Daylight Lab at the UC Berkeley Center for Long-Term Cybersecurity; founded Daylight AI, an algorithmic bias training and education practice; and is a founding member and worker-owner of DAO DAO, a DAO that builds interchain DAO tooling.
Long: His work is built on a simple observation: security is difficult to practice in part because it's difficult to understand. His lab's work shifts the way people understand and identify the harms of technology—and expands the populations able to do so. By generating novel tools, practices, and representations, Merrill seeks to make “security” specific and actionable for those who need it. His research on internet fragmentation has been covered by CNN and used by policymakers and industry leaders. His research on threat identification practices has been integrated into Meta’s threat ideation process and into the U.S. Cybersecurity and Infrastructure Security Agency (CISA)'s participatory threat modeling process. His algorithmic bias training materials are taught to hundreds of students around the world every year. Nick has published over a dozen articles in peer-reviewed venues such as ACM CHI, DIS, and CSCW.
Projects I’ve worked on
I have busy hands and creative friends. In roughly reverse-chronological order of when I began work on each project:
DAO DAO - A worker-owned co-op that builds software for managing worker-owned co-ops. We’re built with our own tools. Our software protects over $8 million USD worth of value across hundreds of organizations.
Internet Atlas - Real-time data on who controls the global internet. Our data has been integrated into the Internet Society's Pulse dashboard, covered by CNN, and used by policymakers, industry leaders, and the intelligence community.
Daylight AI - We teach people how to identify and ameliorate algorithmic bias. Our open-access lectures and labs were the first to teach algorithmic bias to students using real-life, hands-on experiential examples. Our materials reach hundreds of students and policymakers worldwide every year.
Adversary Personas - A game to help non-experts identify security threats. This practice has been integrated into Meta’s threat ideation process, and into the U.S. Cybersecurity and Infrastructure Security Agency (CISA)'s participatory threat modeling process.
Cybersecurity Arts Contest - I founded the world’s first Cybersecurity Arts Contest, meant to expand, through art, notions of cybersecurity—what it can be, who does it, and whom it protects. The contest has disbursed over $100,000 USD to artists from Berlin to Uganda.
Passthoughts - Using a brain-machine interface to “think your password.” Passthoughts was the world's first one-step, three-factor authentication mechanism, and has been widely covered in the media (see a partial list here).
My “Big Ideas”
Over the years, I have arrived at a couple of “Big Ideas.”
Grassroots internets could scaffold and support grassroots democracies. See: This internet, on the ground.
Machines can, in theory, read the mind. Why? First, because "the mind" is material, and thus amenable to sensing. Second, our notions of what the mind is move, and have always moved, relative to the capacities of contemporary technologies. This was the subject of my dissertation: Mind-Reading and Telepathy for Beginners and Intermediates: What People Think Machines Can Know About the Mind, and Why Their Beliefs Matter.