Five years ago this summer, my life was changed forever. After becoming the first Black woman elected student government president in the 124-year history of American University, I became the target of a hate crime and an online troll storm. In the early morning hours of May 1, 2017, my first day in office, while I slept, a masked perpetrator hung bananas from black nylon nooses strung from lamp posts and bus stops around my campus. The bananas bore references to Alpha Kappa Alpha Sorority, Inc., and had “Harambe Bait” written on them in black marker.
Three days later, I was targeted again; this time by individuals hiding behind computer screens. After hearing of the hate crime on AU’s campus, Andrew Anglin, a known neo-Nazi and white supremacist, published private and identifying information about me in an effort to encourage his followers to target me further, a practice called “doxxing.” In an “article” he wrote and published on the Daily Stormer, the neo-Nazi website he founded, Anglin gave his followers direct links to my social media accounts and encouraged them to use those links to harass and threaten me online. Soon the Daily Stormer’s fanbase began to inundate my social media pages with racist, sexist and violent threats, calling me a “Nigger Agitator,” a “Negress” and a “Sheboon.” As a result of the hate crime and the troll storm, I began to suffer from post-traumatic stress disorder (PTSD), depression and anxiety, and had to significantly change how I navigated my in-person and online lives each day.
The hate crimes I experienced were emblematic of the surge in racism and white supremacy nationwide. The recent mass shooting in Buffalo, New York, by a white gunman inspired by the “great replacement” conspiracy theory, as well as the shooting in Dallas at a Korean-owned hair salon, are the latest tragic examples in a long history of hate crimes in America. Black Americans are disproportionately likely to be victims of hate crimes, and that is even more alarming given that millions of hate crimes go unreported each year.
During that first year of the Trump presidency, deep-rooted hatred and white supremacist notions spread like a cancer. They were undoubtedly fueled by public figures. But they spread so far and so fast for another reason — they were amplified through online chat groups and websites, due in large part to the unwillingness of social media and tech companies to appropriately respond to hateful activity on their platforms.
Tragically, in the weeks following the attacks I experienced, online racism and hate turned physically violent elsewhere in the country, shattering lives forever. On May 20, 2017, 2nd Lieutenant Richard W. Collins III — a young Black man, newly commissioned ROTC officer, and soon-to-be graduate of Bowie State University — was murdered on the nearby campus of the University of Maryland, College Park. His killer was a white man who was a member of the Facebook group “Alt-Reich: Nation” and had racist memes saved to his phone that advocated violence against African-Americans.
On August 12, 2017, a young white woman named Heather Heyer was killed and 35 others were injured when a white supremacist intentionally drove his car into a crowd engaged in a peaceful counter-demonstration protesting the so-called “Unite the Right” rally in Charlottesville, Virginia. The night before, alt-right groups carried torches and assault weapons and chanted racist and anti-Semitic slurs as they marched through the streets. The rally was planned and organized online, using the Daily Stormer and other websites. Prior to the attack, organizers of the rally shared memes through online white supremacist forums discussing “legal” ways to “run over protestors” and “crack skulls.”
In the wake of these shocking murders, I cannot help but think about how each of these events might have been prevented if social media companies had taken swift and robust action to keep their platforms from being used by extremists worldwide as tools to accelerate radicalization, recruitment and mobilization. As racist conspiracy theories continue to spread about everything from the Big Lie of a stolen 2020 presidential election to blaming Asian communities for the transmission of the coronavirus, so do the sometimes-violent attacks on people of color — both online and in person. We all risk becoming the target of a bias-motivated attack because of what we look like, who we are or the causes for which we advocate. Yet tech and social media companies continue to dodge their ethical responsibility to prevent hateful activity from going unchecked on their platforms.
As a society, we cannot afford to allow tech companies to look the other way as hate festers, spreads and inflicts real-world harm on communities on and off their platforms. Online platforms, social media leaders and tech giants must take doxing, harassment and discrimination on their platforms seriously before they lead to more tragic and deadly results. It may seem challenging or costly, but I know from lived experience how much of a difference this change could make. It could have spared one person from PTSD and kept others alive.
Taylor Dumpson, J.D., is a hate crime survivor and plaintiff in Dumpson v. Ade, where a federal court held for the first time that online racial harassment can interfere with a person’s use of public accommodations. Dumpson currently serves as the President’s Fellow at the Lawyers’ Committee for Civil Rights Under Law, a national racial justice legal organization, working in their James Byrd Jr. Center to Stop Hate, named after a man who was lynched in Jasper, Texas, by three white vigilantes in 1998.