From: American Mindset from The American Mind <[email protected]>
Subject: Governance by Sus
Date: August 12, 2021 10:41 PM
  Links have been removed from this email. Learn more in the FAQ.
View this post on the web at [link removed]

Arguably the most consequential diversion of the pandemic is Among Us, an app game that pits a handful of players against one another in short quests to determine which of their crew (they’re astronauts) is the impostor who’s killing them off one by one. Over the course of the lockdowns, Among Us went from somewhere in the anonymous middle pack of app downloads to the very top. 
Far more important than its rankings in the App Store, however, is the game’s entry into the highest tier of social significance online for the younger generations. The fire hose of Among Us meme content on apps like TikTok is nearing cultural breakthrough. And while it will defy comprehension for anyone unversed in gamer Among Us humor—from sussy bakas to chungus pogchamps and well beyond—there’s a reason it’s taking off. 
The centerpiece of the game itself is that players must make their determinations about who is most likely the impostor during the breaks in play triggered by the death of a crewmate or by an emergency meeting. At these moments play shifts from trying not to get killed to arguing in chat about who should be ejected on the basis of their suspicious—or, as the slang goes, “sus”—behavior. 
In short order, sus has become not just the standard online slang for suspicious outside the game but a sort of zeitgeist. After all, if there’s one thing that defines online content today, it’s that so much of it, like the people making it, is “kinda sus.” Awkward and questionable content of all kinds is being interpreted as sus, but, increasingly, the content earning the label falls into the growing category of sex-and-gender-charged material created by creepy, deviant, or essentially mentally ill people. 
In this sense, sus is a warning label, sort of a popular inverse of the ones that have been slapped from the top down onto content posted to major platforms. What’s unnerving, however, is that the past year or so has seen a decisive turn among major online corporations and governments toward policing and banning content and users, not on the basis of content they have already posted, but on the basis of their being sus. 
This transformation is apparent in many realms of online life, but the most current indicator is the controversy over Apple’s “shocking” decision to tweak its longstanding privacy policy by monitoring users for potential pornographic or abusive imagery involving children. 
The inner workings of the policy are meant to build some layers of escalation into the system, but they build something else in too. After a certain number of suspicious images, Apple’s software will bump a user up to a higher level of scrutiny. The idea is that only serious red flags will trigger something especially invasive. But in practice, the system is based on a method with vast applicability to all digital content: automated moderation optimized for flagging people and their data as, well, sus. 
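The layered escalation described above amounts to a counter measured against a threshold: individual matches accumulate quietly, and only a crossed threshold triggers the invasive step. A minimal sketch of that logic, where every name, tier, and number is an illustrative assumption and not Apple's actual implementation:

```python
# Hypothetical sketch of threshold-based escalation: flagged matches
# accumulate per account, and only after a threshold is crossed does the
# account graduate to a higher scrutiny tier. The tier names and the
# threshold value are assumptions for illustration only.

THRESHOLD = 30  # assumed number of matches before escalation


def scrutiny_tier(match_count: int, threshold: int = THRESHOLD) -> str:
    """Map a running count of flagged images to a scrutiny level."""
    if match_count == 0:
        return "normal"
    if match_count < threshold:
        return "watch"      # matches recorded, but nothing surfaced yet
    return "escalated"      # threshold crossed: account surfaced for review


# Each new match only bumps a counter; the consequential decision is
# deferred until the counter crosses the threshold.
counts = [0, 5, 29, 30, 100]
tiers = [scrutiny_tier(c) for c in counts]
```

The point the sketch makes concrete is that the system's output is not a judgment about any single image but a classification of the *person*, produced silently as a side effect of counting.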
There’s little question that the model being adopted by leading regimes for governance in a digital age is just this model. It is, in fact, the basic mechanism required by any regime that wants to, or believes it needs to, fold all its exercises of sovereignty over policy and law into a social credit system. Whether out of fear that this is the only way to nose out China for world dominance, faith that this is the only way to nose out true woke revolutionaries for domestic dominance, or some combination of reasons like these, America’s regime seems bent on “nudging” Americans with all deliberate speed into such an arrangement. 
Already, there are signs that the regime recognizes the limits of its attempted “war” on “misinformation,” which requires online organizations to reactively play whack-a-mole across the whole online space and relegates to “journalists” and “educators” the task of preventing officially frowned-upon content from being created in the first place. The regime has its reasons of state to desire more comprehensive and efficient power than that, and the means to achieve such controls run straight through a shift from policing posting “crimes” to moderating “precrimes” and “precriminals.” 
It’s easy to see how and why the regime and its institutional arms in the corporate, commercial, and communications industries would want to determine to their own satisfaction who is sus among us. What’s not clear is how such a policy would even begin to square with our constitutionally guaranteed rights and form of government, which is doubtless why it is being rolled out through “the private sector” first and foremost, with the regime standing a few steps behind working the levers, guaranteeing that the lives of said private entities will be made unpleasantly difficult should they refuse to comply or even drag their feet. 
The concept of suspicion is an old one: go back to the Latin roots of suspect and you will find suspectus, a word formed from roots carrying the meaning of “to look at secretly.” Americans’ instinctive familiarity with the Salem Witch Trials leads to a misimpression that the greatest threat to our rights and our civic life is baseless public accusations. These are dangerous, but more dangerous still are the secret investigations that precede such accusations or obviate them entirely—investigations now to be carried out by machine and not human eyes, and concluded and executed in impenetrable silence.
James Poulos (@jamespoulos [[link removed]]) is Executive Editor of The American Mind. He is the author of The Art of Being Free (St. Martin's Press, 2017), contributing editor of American Affairs, and a fellow at the Center for the Study of Digital Life.
