Orenda Software Solutions
Are You Triggered? Evaluating Participation in Public Conversation
In March 2019, Twitter announced that it was testing a prototype version of its app with a select group of users. One of the most prominent changes in the test platform was the removal of publicly displayed metrics like follower, like, and retweet counts. This doesn't mean engagement metrics will be removed entirely, only that they will no longer be displayed for everyone to see. The idea is to let everyone participate in public conversation without being judged on clout or status.
Testing these new features is likely part of the company's strategy to address user abuse. Since anyone can access and interact with any public figure from an anonymous account, there are few systems in place to protect people from becoming targets of trolling, bullying, and harassment. Users have been asking Twitter to address targeted harassment for years, as it has become a prominent issue in the ever-changing world of social media.
Another common complaint among Twitter users is the call to remove accounts used to spread hateful, racist, sexist, or otherwise inappropriate messages, such as white supremacist and nationalist accounts. However, Twitter has at times appeared tolerant of controversial figures, even verifying Jason Kessler's account.
Twitter co-founder and CEO Jack Dorsey has said in several interviews that the company is committed to understanding what constitutes a healthy conversation, and that it is considering ways to remove abusive accounts and increase the civility of public interactions on Twitter. Part of that process, as Dorsey has explained, is figuring out which behaviours the platform has incentivized in the past and understanding how those incentives shape future behaviour.
For non-influencers (people who are not celebrities or popular public figures), the average follower count is around 200, and tweets from these accounts rarely go viral. Likes, follows, and comments can be addictive; the effect of receiving online validation has been studied, and, unsurprisingly, it makes people feel good. Online recognition and validation gives us a sense of belonging and approval from people like us. At first this is fulfilling, and it drives us to spend more time on social media and post more often. Some studies, however, link social media use to feelings of loneliness and depression, suggesting that insecurity stems from comparing how our own posts perform against the most popular ones.
The majority of accounts on Twitter, and by extension the majority of posts, go largely unnoticed; more tweets receive no engagement at all than receive any. Engagement functions as a form of positive reinforcement or external validation, encouraging people to behave in ways that compete for attention, even negative attention.
That might provide some insight into the harassment and troll accounts on Twitter. While the company wants everyone to be able to participate in public conversations, it has to be able to protect people from harassment and violence. Over the past few years, meme culture has evolved to the point where a subculture of memes is intended to offend, or 'trigger', people into displaying an emotional reaction.
While comedy and satire can be used to highlight, call out, and shame undesirable behaviour, many of the most offensive memes communicate racist or sexist messages with no intent of creating a healthy dialogue on the subject. Often it is only after the fact, when the person sharing the meme is accused of perpetuating the harmful behaviour, that they claim their intent was to point out how wrong the meme was, and that they themselves don't necessarily agree with its message. Sharing a meme is conveniently ambiguous if we judge it only on intent. What we need to keep in mind, though, is that discrimination is not determined by intent; it is determined by impact.
Personally, I've seen people within my own social networks claim they love it when they trigger others. It makes sense to me now: their goal is to feel recognized or validated, even for something negative. They see a personal gain in sharing and curating offensive material for the engagement it generates. For that reason, I would be willing to participate in social platforms that hide account and post metrics. If trolling is discouraged and deterred, the result should be a public conversation that incentivizes healthy dialogue and interaction.
At Orenda, we work mostly with Twitter data. Our logic is that the platform functions as an ongoing focus group for the most topical subjects and stories in society. People share their thoughts and feelings in a stream of consciousness, and with our algorithm we can analyze the general will of society on any number of topics and discover insights into how people think about them. We want to see more people participate in public conversation. That doesn't mean banning people for the content of their speech, but holding them accountable for the impact they have on others. Our company has a code of conduct, and we support any effort to remove harmful accounts and make the platform safe for everyone to participate.
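To give a rough sense of what analyzing public sentiment on a topic can look like, here is a minimal toy sketch, not our production algorithm. It filters a set of tweets by a topic keyword and averages a simple lexicon-based sentiment score; the sample tweets, the word list, and the scoring rule are all hypothetical placeholders for illustration only.

```python
# Toy illustration of topic-level sentiment aggregation over tweets.
# The lexicon, scoring rule, and sample data below are hypothetical
# placeholders, not Orenda's actual algorithm.

# A tiny hand-rolled sentiment lexicon (word -> score in [-1, 1]).
LEXICON = {
    "love": 1.0, "great": 1.0, "good": 0.5,
    "hate": -1.0, "awful": -1.0, "bad": -0.5,
}

def tweet_sentiment(text: str) -> float:
    """Average the lexicon scores of the tweet's words (0.0 if none match)."""
    words = [w.strip(".,!?#@").lower() for w in text.split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def topic_sentiment(tweets: list[str], topic: str) -> float:
    """Mean sentiment across tweets that mention the topic keyword."""
    relevant = [t for t in tweets if topic.lower() in t.lower()]
    if not relevant:
        return 0.0
    return sum(tweet_sentiment(t) for t in relevant) / len(relevant)

# Hypothetical sample data.
sample_tweets = [
    "I love the new Twitter prototype, great move!",
    "The Twitter prototype hides like counts, bad idea.",
    "Unrelated tweet about lunch.",
]
print(round(topic_sentiment(sample_tweets, "prototype"), 2))  # 0.25
```

A real system would obviously need far more: spam and bot filtering, sarcasm handling, and a model trained on labelled data rather than a fixed word list, but the shape of the problem, collect, filter by topic, score, aggregate, is the same.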