elsehow

What does it mean to "be secure?"

Nick Merrill
Mar 17
This post is Part 2 of a series: What Cybersecurity Is and Why You Should Make It Your Life. This series explains what it takes to become a cybersecurity professional, how anyone can do it, and why you should.

Content warning: This post discusses intimate partner violence.

[Image: Philipp Haager]

Someday, the defenders will win. There will be no more attackers—and no need for cybersecurity professionals.

If only. To attack is human. To defend is… less than divine, but more than a passing need. People want to do things that others don’t want them to do. That dynamic won’t change. (See: Attacker & defender).

What does it mean for a system to be “secure” in such a world?

Reasonably secure

A home truth: there is no such thing as a perfectly secure system. (PSA: if anyone tells you a system is “unhackable,” or offers any guarantee beyond “reasonably secure under the following assumptions,” you can be assured they either don’t know what they’re talking about or want to mislead you).

However, a system can be reasonably secure: it can have high enough barriers to the attacks we know about to satisfy our needs.

In this post, we’ll dive into this notion. We’ll discuss barriers to attack and learn our second technique: red teaming.

Barriers to attack

As we discussed in Attacker & defender, our ability to anticipate attacks undergirds any notion of security. We must understand what our attackers want and how much pain they’re willing to go through to get it.

The million-dollar dissident

Sometimes, attackers really want something. The Citizen Lab has a wonderful (and horrifying) writeup about the “million-dollar dissident.” The human rights defender Ahmed Mansoor, and the people to whom he spoke, were such a thorn in the side of the United Arab Emirates government that it spent at least a million dollars attacking his smartphone.

While most attackers would scoff at a million-dollar attack, understanding precisely what attackers are willing to pay (in dollars, but also in time and energy) is critical to understanding what it means for a system to be secure.
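
To make that intuition concrete, here’s a back-of-envelope sketch in Python. The numbers and names are invented purely for illustration; the point is only that “secure” is always relative to what a particular attacker is willing to spend.

```python
# Toy comparison of a barrier to attack against an attacker's budget.
# All figures below are made up for illustration.

cheapest_known_attack_cost = 1_000_000  # e.g., a zero-click phone exploit chain
attacker_budget = 50_000                # what this attacker is willing to spend

if attacker_budget < cheapest_known_attack_cost:
    print("Reasonably secure against this attacker, for now.")
else:
    print("The barrier is too low for this attacker. Expect an attempt.")
```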

After all, financial resources are only one way of understanding barriers to attack. Some barriers to attack are rooted in physical realities.

Battle plans

Imagine you’re a combatant in a conflict. You want to pass a message to your comrade about a battle plan. Suppose even the most powerful computer in the world would require centuries to crack that message. In that case, we might deem the encryption sufficiently secure: even if someone is motivated to decrypt the message, the battle will be over by the time they manage the feat. (As we’ll discuss more in Crypto(graphy), experts gauge the “security” of an encryption scheme by these physical constraints).
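
If you’re curious where the “centuries” figure comes from, here’s a rough sketch of the arithmetic. It assumes a hypothetical attacker who can test a trillion keys per second; the exact rate barely matters, because every extra bit of key length doubles the work.

```python
# Back-of-envelope brute-force estimate. Assumes a hypothetical attacker
# testing 10**12 keys per second; real attack speeds vary enormously,
# so treat these numbers as illustrative only.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
GUESSES_PER_SECOND = 10**12

def years_to_exhaust(key_bits: int) -> float:
    """Worst-case years to try every possible key of the given length."""
    return 2**key_bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(f"{bits}-bit key: about {years_to_exhaust(bits):.1e} years")
```

At that rate, a 56-bit key falls in under a day, while a 128-bit key would take on the order of 10^19 years: longer than the age of the universe, and certainly longer than any battle.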

The key point in either of these scenarios—the dissident and the battle plans—is that barriers to attack are about technical means and social context—a regime’s goals, a general’s plans. Even in the ostensibly technical encryption case, security is always sociotechnical—it spans the social and technological worlds. We cannot understand security without understanding what security is aiming to achieve in context.

Let’s push this intuition further.

(Warning: the following section discusses intimate partner violence. You can safely skip ahead to the next section, “The red team”).

Intimate partner violence

Imagine you just got out of an abusive relationship. You share custody of your child with your ex-partner. Now, your ex-partner is spying on you—by installing an app on your phone, or on your child’s phone, or by instructing your child to start a FaceTime call and leave it on all day.

These examples are all drawn from real events.

What do “barriers to attack” look like for this abuser?

There is no strictly technical answer here. A conversation with the child is likely needed. Then there’s the matter of social support—finding people who believe you and can help you. Legal options, like restraining orders, only work insofar as they’re enforceable against these digital violations.

In these cases, “barriers to attack” look more like solidarity than software patches: building networks of care, people who understand your situation and can support you. Networks of care are always part of cybersecurity. Sometimes they sit in the foreground. Other times they sit in the background. But they’re always there.

As a cybersecurity professional, you will sit in these networks of care. Doing so is part of how you will provide a barrier to attacks—a barrier in the form of solidarity.

[Image: Philipp Haager]

The red team

To deal with darkness, we must become it. This is one of the core mysteries of cybersecurity.

Who are our attackers? What do they want? How badly do they want it? Only once we have answers to these questions can we hold a system against those answers and call it “secure enough.”

Notice that the limit here is the realism of our imagination. If our imagination is too limited, we will miss likely attacks. If our imagination is too unbounded and we’re busy thinking of far-fetched attacks, we’ll waste our time at best—at worst, we’ll miss the attacks right under our noses.

Honing our imagination’s realism comes from practice—playing The Attacker. Becoming The Attacker. Only through this experience can we understand what it’s like to attack a system. From that experience, we can understand what an attacker needs to do to achieve their goals.

To do this, we must join the red team. Playing the attacker in a simulated (consensual and safe) environment is called red teaming. (The red team’s opposite is the blue team—acting out the role of The Defender). Red teaming is the second technique we’ll learn in this series.

Scope and consent

Before going further, a warning. It’s only red teaming if you have the consent of the people impacted by the systems you’ll test, or of someone authorized to make decisions on their behalf. The key to consent is scope.

In red teaming, scope refers to the systems with which the red team can interact. Say your friend hires you to red-team the entrance to their home. You’ll first agree on the scope. The scope might include any physical barrier to the home or any social barrier (e.g., socially engineering a door attendant).

Say you get consent from your friend on that scope and begin your exercise. You get past the door attendant. Once you enter the building, you might notice that a closet with administrative controls to the key fob system is left unlocked. Is that within scope? It’s certainly relevant, but unless your friend owns the building, they cannot give you consent to red-team it. You can tell your friend what you found, but you cannot mess with that system yourself—unless you get approval from the landlord, strata, homeowners association, or other relevant body.

For everything you do as a red team, ask: is this within scope? Did the person who put it within scope have the authority to do so? If not, get consent. Without consent, it’s not red teaming. It’s just a crime.
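
To make that discipline concrete, here’s a toy sketch of a scope check in Python. The targets listed are invented for the home example above; real engagements spell this out in a written rules-of-engagement document, but the habit is the same: if a target isn’t explicitly in scope, you stop and ask.

```python
# Toy scope check for the home red-team example. All names are invented.

IN_SCOPE = {
    "front door lock",
    "door attendant",           # social engineering agreed with your friend
    "friend's apartment door",
}

def may_test(target: str) -> bool:
    """Proceed only when the target is explicitly in scope."""
    return target in IN_SCOPE

print(may_test("front door lock"))       # True: go ahead
print(may_test("key fob admin closet"))  # False: report it, then get the landlord's consent
```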

[Image: Philipp Haager]

Exercise: Red teaming

In threat modeling, we come up with possible attacks. In red teaming, we do those attacks. We play out the actions of an attacker and see how far we can get.

In Attacker & defender, you came up with some possible attacks an attacker could use to get into your home. Let’s red-team them.

  • Leave your home and lock the door.

  • Get back in without your key.

To increase realism, you can have a friend lock you out. That will make it harder for you to give up.

Reflect

  • What did you learn from doing the attack (versus thinking about doing it)?

    • Did you try an attack you imagined in the last post? If so, did it work? If not, why?

    • Did you discover a new attack that you hadn’t threat modeled? If so, how did you discover it? Would you have been able to find it without red teaming?

  • Think back to the attacker motives you listed in Attacker & defender. For someone who wants those things, would there have been any easier ways to get them (for example, by getting into your neighbor’s house)?

  • What do you think is a reasonable barrier to attack for getting into your home?

Want more practice? Repeat this exercise for other things in your life. Get into your car without breaking a window. Steal your bike. Steal money from yourself. And so on.

Jargon review

Here’s the jargon we introduced. Like last time, don’t worry about strict definitions—we’ll build an intuitive sense for these terms. If you don’t remember encountering these terms, refer to the post to see how I used them.

  • Barrier to attack

  • Sociotechnical

  • Red team and blue team

  • Red teaming

  • Scope
