There Are No Muggles
There will never be a consumer product that takes off purely for political reasons—like “being censorship resistant” or “decentralizing banking”
Think of consumers as lizard brains: they mindlessly tap on rectangles on a screen for basic needs—like making money or finding a date
- Nikita Bier on Twitter
As ever when he found himself in close proximity to Muggles going about their daily business, Mr. Weasley was hard put to contain his enthusiasm. “Simply fabulous,” he whispered, indicating the automatic ticket machines. “Wonderfully ingenious.”
“They’re out of order,” said Harry, pointing at the sign.
“Yes, but even so …” said Mr. Weasley, beaming fondly at them.
— J.K. Rowling, Harry Potter and the Order of the Phoenix
Running with the magic-as-sufficiently-advanced-technology analogy, I think the Harry Potter character who most embodies the technomagic hacker ethos is not wunderkind Harry himself, not genius Hermione, not prankster twins Fred and George, not the potion-inventing Half-Blood Prince, not legendary Dumbledore – I think it’s Arthur Weasley.
Arthur is the lovably bumbling patriarch of the nine-person Weasley family. His greatest passion is learning about how non-magical people, known to wizards as Muggles, use technology like “eckeltricity” and the “fellytone”. The common view among wizards – even the likable protagonists, even the authorial voice at times – is that relative to the magical community, Muggles are more or less dull, conventional sheep. They aren’t interested in magic, or are even willfully ignorant of it; they lack drive and creativity. And Arthur’s the only one to challenge that: he has a genuine respect for how non-wizards go about solving their problems. His pet project, in one of the books, is enchanting a regular car in his garage to make it fly; that car is probably one of the few wizarding artifacts a Muggle could actually use.
And this epitomizes the mindset I wish the advocates and current users of free, open-source, privacy-focused, decentralized software held more often about regular everyday people. Laypeople don’t have all the skills that allow them to perform technological magic for themselves, but that doesn’t mean they should be, or want to be, left to the mercy of user-hostile software. Muggles, like wizards, want to have cool stuff that does neat things and doesn’t hurt them. I think the archetype of the mindless lizard person described in Nikita’s tweet more or less doesn’t exist.
It’s pessimistic and cheemsy for libre-software advocates themselves to think of the cypherpunk-utopia vision of technology as something only their tribe would want. Like, come on, trust your own taste more! The current state of the Internet and consumer technology really is broken in a bunch of ways that most people, not just Richard Stallman types, are unhappy about. Regular people are good and lindy and capable; they want better. They’re making backup Instagram accounts to get around arbitrary bans. They’re complaining about the lack of recourse when their accounts on Facebook or Twitter get hacked or censored. They’re creeped out by too-smart targeted ads. They’re lamenting the addictiveness of the Algorithm. The one place where the political opinions of Hacker News and the New York Times overlap is “big tech invasive and bad”. “Normies” made GDPR, for crying out loud – pro-privacy to a fault!
No, the reason regular people aren’t all messaging on Signal and browsing with Brave and navigating with OpenStreetMap isn’t because they love corporations and couldn’t care less about privacy. Rather, it’s because they have competing concerns, more than nerds do, and existing sovereign tools just aren’t good enough to meet those needs yet.
If You Build Something Actually Good, They Will Come
Having the wherewithal to use free-as-in-freedom tools in the first place is largely limited to nerds. Partly this is because the design is worse (or it’s designed for what nerds want rather than for what non-nerds want). Partly it’s because the tools are often poorly funded, leading to bugs and underdocumentation, and for a nontechnical user it’s just not worth their time to go on a wild goose chase to diagnose the problem. Partly it’s because these tools tend to require configuration and legwork on the user’s part to actually achieve the privacy and security guarantees they advertise. And partly it’s because network effects compound these problems for social applications (and even for non-social ones, because fewer users means less documentation and fewer troubleshooting guides).
Why don’t non-nerds use Mastodon, for instance? One, because onboarding is annoying and unintuitive. Two, because discovering accounts to follow and curating a good timeline (and getting a decent algorithmic one in the meantime) is too hard. Three, because they care more than nerds do about actually good, high-bandwidth social networking (notice how there’s no big open-source image- or video-based social platform?). And four, because even if there’s a critical mass of nerds using Mastodon, there isn’t a critical mass of non-nerds (because of the previous three problems), and so the numbers go even lower.
Moxie Marlinspike tells the story of how he published his GPG keys in case people wanted to send him encrypted email, and then got tempted to take them down because he hated the vibes of all the emails he received that way:
Eventually I realized that when I receive a GPG encrypted email, it simply means that the email was written by someone who would voluntarily use GPG. I don’t mean someone who cares about privacy, because I think we all care about privacy. There just seems to be something particular about people who try GPG and conclude that it’s a realistic path to introducing private communication in their lives for casual correspondence with strangers.
And that’s Moxie Marlinspike! Imagine the rest of us!
Incidentally, popular wisdom says that the reason everyone stays on big corporate user-hostile social media, even when it sucks, is because of network lock-in. I think that accounts for a lot of the problem. But it’s not totally impossible to jump ship. Coordination is tricky, but out-of-the-ordinary events can serve as rallying points: just as a single outrage in the news can start a social movement because that’s when everyone expects everyone else to riot in the streets, so the announcement of some particularly cartoon-villainish new policy (e.g. Twitter censoring Substack links) can get people, especially ones who belong to dense subgraphs, to switch. But the jump to Bluesky/Mastodon never stuck. And my theory is that those newer platforms actually were just meaningfully worse.1 That it is in fact really hard to build a social platform as good as existing ones, because you have to faithfully represent tons of nuances of human signaling and communication that don’t map naturally onto a simple diagram. The same goes, I think, for anything that interfaces extensively with the messy real world, like maps. This might also explain why open-source tools that successfully outcompete closed-source competition are usually low-level rather than application-layer: their use cases are primarily for nerds, they don’t need much of a UI, and their interaction with the real world is a few degrees removed. (Probably the most successful example is Linux.)
If this is true, then the lack of adoption is less a failure to politically overcome a coordination problem than a sign that people have considered and rejected the current usability tradeoff, waiting for a better one to come along.
Some Common Design Failures
Common Problem 1: Confusion Around Threat Models
The other day I opened Element, a chat app built on top of the Matrix protocol, and got a popup asking me to “Verify this session”.
You’re a random user just trying to text your friend a meme, you open the app and see this, what do you make of it? What does it mean to verify the session – are you verifying yourself to the server? To other users? Is it like verifying your email? What happens if you don’t do it? Is it risky to select “Do not ask again”? What does “Other users may not trust it” mean – are your friends going to think someone’s impersonating you? This doesn’t match anything about your experience with regular messaging apps: what’s going on? Instead of feeling safer about using a more secure app, now you feel less safe because you’re worried you’ll screw something up without realizing it.
It’s possible to convey privacy and security threat models clearly, since we’ve seen the big corporate platforms do it when they need to. Think of public versus private Instagram accounts and the equivalent locking on Twitter; Close Friends and Twitter Circles (RIP); checkmark verification; Snapchat’s disappearing-message countdown and notifications when the recipient takes a screenshot; the concepts of blocking and muting. It just takes some thought to implement well – especially to differentiate between good UI for power users who need hardcore levels of security and good UI for people who just want a baseline of not being subjected to the most user-hostile behaviors.
Common Problem 2: Concept Overload
Here’s my philosophy on magic versus configurability: hide as much complexity as possible behind doors that are easy to open when and only when they’re required. Many products dump new concepts and ontologies on users without stopping to ask: do users actually need to know about this to perform the basic function of the product? Even if a conceptual understanding makes it easier to use some features, there’s no need to introduce those concepts until they’re required.
There are concepts that can be entirely obscured from the user until they ask for them. Even when that’s impossible, at most one new concept at a time should be introduced, or if that’s absolutely impossible, then a good skeuomorphic analogy to an existing set of concepts. Everything should be as easy as possible to discover if necessary, but only what’s absolutely necessary should be shown by default. The effect of this is that the user can learn anything they want about how the product works under the hood, but primarily they should be able to use as much of the functionality as possible with the minimum amount of gears-level comprehension. Power user features belong behind menus.
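To make that philosophy concrete, here’s a minimal sketch of progressive disclosure – the setting names and structure are hypothetical, not taken from any particular app. Everything the typical user needs renders by default; everything else collapses into a single, discoverable entry point.

```typescript
// Hypothetical sketch: progressive disclosure of settings in a chat app.
// Each option carries an `advanced` flag; the default screen renders only
// the basics, and everything else hides behind one "Advanced" disclosure.

interface Setting {
  key: string;
  label: string;
  advanced: boolean; // hidden until the user explicitly asks for it
}

const settings: Setting[] = [
  { key: "notifications", label: "Notifications", advanced: false },
  { key: "theme", label: "Theme", advanced: false },
  { key: "keyBackup", label: "Encryption key backup", advanced: true },
  { key: "federation", label: "Federation allow-list", advanced: true },
];

function renderSettings(showAdvanced: boolean): string[] {
  const visible = settings.filter((s) => !s.advanced || showAdvanced);
  const hiddenCount = settings.length - visible.length;
  const rows = visible.map((s) => s.label);
  // Surface one boring entry point instead of a wall of new concepts.
  if (hiddenCount > 0) rows.push(`Advanced (${hiddenCount} more)…`);
  return rows;
}

console.log(renderSettings(false)); // what almost everyone sees
console.log(renderSettings(true)); // what the power user opted into
```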
Common Problem 3: Bad Visual UI
Every extra second or extra bit of mental effort caused by bad layout, hierarchy, or navigation will compound across the entire set of users. Even users savvy enough to fill in the gaps will be annoyed that the design didn’t save them the time of having to do that.
Common Problem 4: Botching Reliability
If the user ever can’t count on any of a handful of important things, the product loses tons of utility. Examples: messages always get delivered unless you’re informed otherwise; you always see the same content on your end that other people see on their end; you always receive any notifications you’re signed up for; you never ever lose data you put effort into generating. Signal isn’t good enough to overtake Messenger because people really care about being able to reliably access and sync their chat history, and because the reliability of actually receiving notifications is just subpar enough to lose users’ trust.
Common Problem 5: Sloppiness
Rough-sounding copy, random tiny bugs, outdated information: none of these is a critical flaw in itself. But when a product doesn’t execute on even small details, that signals to the user that they’re not in good hands, that the product wasn’t made by professionals, that quality control is lacking, that other things are likely to go wrong too (which ties back to problem 4).
Common Problem 6: Lag
The difference between “as fast as you can think” and “any slower than that” is gaping; it’s the difference between the feeling of driving a car and the feeling of driving a bus, of wielding versus maneuvering. I converted from a life of monastic Firefox purity to a Chrome sellout for this reason, and switched from Notion to Obsidian, and still tend to use Apple Notes over Obsidian on mobile because it syncs faster, and switched from Todoist back to Apple’s Reminders. This is especially true for tools that integrate into your daily life, where you use them briefly dozens of times a day: navigation, payment, search, note-taking, microposting.
You Can Preserve The Vibe And Also Let People Have Nice Things
By and large, applications designed for privacy, interoperability, and accountability are built by nerds. But most people aren’t nerds, so the money is in what non-nerds want. If you’re optimizing for profit, then you’re building for non-nerds. And optimizing for profit eventually tends toward optimizing for antisocial dark patterns, once your product gets large enough that negative externalities and monopoly effects enter the picture and the company’s interests diverge from the user’s. Once you have the lock-in resulting from a network monopoly, existing users will find it hard to leave, and there’s not much to upsell them on, so you just want to pull in more users to a) profit more than zero off them and b) make it even harder for existing users to leave by strengthening the network effect further.2 So people associate “building for non-evil purposes” with “building for nerds”, since they’re correlated.
But optimizing for positive impact also means you should build for non-nerds. You can help a few people like you, or you can help lots of people who aren’t like you;3 and there’s a lot more alpha in the latter. User-friendliness and cypherpunk ideological purity can, should, and must coexist.
There’s a tension here. Sometimes bad design is a moat securing a social enclave: the old Internet was fun because you had to have a certain level of resourcefulness and motivation to access it. Some people feel like this about Mastodon today – limiting it to nerds is a feature, not a bug. (A memorable onboarding doc I saw assumed readers would naturally be familiar with doggirls.) And I can’t think of any explanation for many of Discord’s design choices other than “forums should be gated by a navigation system confusing and annoying enough that only the dedicated may pass.”
But I think all those kinds of gating are still possible on top of a set of privacy-preserving, open-source, censorship-resistant base layers. Ironically, part of what makes it difficult to perform the good kind of gatekeeping in the status quo is censorship. A platform that keeps its content on a tight leash of community guidelines (for instance, look at how Reddit and StackExchange died) is unable to sustain an immune system of people being grouchy to new users who violate emergent cultural norms (“welcoming and inclusive” = “DAU-maxxing”). There are a hundred ways to implement an immune system that don’t rely on the entire user experience being unpleasant: some technical (reputation requirements, privileges that grow over time) and some cultural (illegibility, jargon, curmudgeonliness). You can put your fortress at the top of a mountain, requiring outsiders to scale the mountain to get in but also requiring you to slog up the mountain every day; or you can just install tap cards on the doors.
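As a toy illustration of the technical variety – every name and threshold here is invented, not taken from any real platform – privileges that accrue with reputation keep the friction on untrusted actions rather than on every screen of the product:

```typescript
// Toy sketch of reputation-gated privileges: newcomers can participate at a
// basic level, and capabilities unlock as the community vouches for them.
// All thresholds and privilege names are invented for illustration.

type Privilege = "post" | "comment" | "vote" | "createTag" | "moderate";

const threshold: Record<Privilege, number> = {
  post: 0, // anyone can take part at the basic level
  comment: 10,
  vote: 50,
  createTag: 500,
  moderate: 2000,
};

interface Member {
  name: string;
  reputation: number; // earned from upvotes, account age, endorsements, etc.
}

function can(member: Member, privilege: Privilege): boolean {
  return member.reputation >= threshold[privilege];
}

const newcomer: Member = { name: "fresh_account", reputation: 3 };
const regular: Member = { name: "old_hand", reputation: 740 };

console.log(can(newcomer, "vote")); // false: not yet trusted with curation
console.log(can(regular, "createTag")); // true: earned over time
```

That’s the tap card on the door, rather than the mountain.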
What To Do
To some extent there exist inherent tradeoffs between ease of use and values like security and privacy (although I think that’s true to a much more limited degree than the state of current tooling would imply). Another part of the difficulty is that we have to design for new ontologies, rather than just getting to copy a standard set of handles for concepts. But also, cypherpunk types tend to be curmudgeonly (I say this lovingly – the security mindset has to come from somewhere), and UX designer types don’t, which makes them less likely to be cypherpunks themselves; and so we need to either find the people who already fall within the overlap, or get a fresh crop of really good designers on board, either by evangelizing the value system enough that they get behind it, or by coming up with the money to poach good designers from other areas.
I don’t mean, of course, that current open-source builders personally aren’t working hard enough or deserve blame. Quite the opposite – tons of them are building products and tools for free, just because they want to live in a world where those things exist. (And more are building products that have investment and some kind of monetization, but probably not as much as they’d have if they had a clear path to unicorn-level profits.) This is wonderful and prosocial, and the fact that we’ve gotten this far using the status-quo funding model is a great sign. What I mean is that collectively we’ve failed to source the amount and quality of work required for value-aligned free software to be viable for the average person’s use, which I assume is largely because the labor is bottlenecked by funding and by the lack of profit motives. It’s like how effective altruists noticed that lots of traditional charities are wholly ineffective, because altruism is directionally good but doesn’t generate the kinds of market forces that produce consistently excellent work.
If anything, this points to the desperate need for efficient allocation of public goods funding. It’s not a lack of marketing or motivation that prevents these products from gaining traction; it’s just that they need to actually get good enough to win, and empirically, it costs money to pay people to make that happen.
In a way, these kinds of funding gaps are actually a cause for optimism: they suggest that current applications aren’t good enough yet because we just haven’t yet managed to coordinate resources toward making them good, rather than because the values built into them are inherently unappealing or because their architectures are fundamentally incompatible with ease of use. If we can improve on existing economic models for funding prosocial software, very likely there is a future for systems aligned with our better angels. But standards are high, and we’re not there yet.
To quote Arthur Weasley once more:
Ingenious, really, how many ways Muggles have found of getting along without magic.
Let’s give them their flying cars.
- An alternative explanation is that Twitter has gotten worse at the discovery part of onboarding too, but that as an incumbent, it can hobble along without fresh blood for a while, whereas a new upstart can’t. [return]
- See also The Tyranny of the Marginal User. [return]
- You know who actually understood that really well? Old-school global-development effective altruists. With EA, this is partly because poor people in developing countries are, well, very poor, and your money will go a lot further in helping them, but also, there are just a lot of them. [return]