How pre-election tricks go deeper than social media

Written by: Lisa Forte

Categorized: General


A critical moment is almost upon us. It will test the safeguards we’ve put in place to protect our democratic freedoms. In 2020 the spotlight will once again hit the US elections, the latest and greatest battleground for clandestine information warfare. Given the current climate, it is reasonable to assume both China and Russia will employ dirty tricks to sway public opinion.

After several electoral and voting controversies, social media companies have been forced to up their game.

Facebook, Twitter and Google have all put anti-troll and disinformation strategies in place. But the information war has got dirtier: the more the platforms crack down, the more persuasive, covert and clandestine the disinformation campaigns must become.

The goal is to create an environment in the US and Europe where citizens can no longer tell the difference between fact and fiction. The attackers achieve this through bots, trolls, micro-targeting and algorithmic manipulation.

It isn’t actually about creating completely fake news; it is more about using and exaggerating real issues in society to polarise it. A few weeks ago the FBI tipped off Facebook about a group called Peace Data, an obscure and seemingly left-wing outlet that was really a clandestine front for Russian trolls. They targeted and posted in Facebook groups whose users might be swayed to stop supporting Biden. They used AI-generated photos for the editors and tricked genuine Americans into contributing content for them.

But how do these large-scale operations work? The first thing to grasp is that they are extremely organised.

First, they set up, purchase or compromise a large number of social media accounts.

They create groups to share and promote content.

Then they go on Reddit or similar platforms and scrape them for partisan issues in the target country. All the issues and discussions identified go into a database.

Now they can filter that database for negative sentiment and use the results to form the basis of their posts.
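The filtering step described above could be sketched in a few lines. This is a toy illustration only: the word list, topics and threshold are all invented for the example, and a real operation would use far more sophisticated sentiment models.

```python
# Toy sketch of sentiment filtering: the word list and topics are illustrative.
NEGATIVE_WORDS = {"angry", "corrupt", "unfair", "crisis", "failure"}

def negativity_score(text: str) -> float:
    """Fraction of words in `text` that appear in the negative word list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

def filter_negative(topics, threshold=0.2):
    """Keep only topics whose negativity score exceeds the threshold."""
    return [t for t in topics if negativity_score(t) > threshold]

topics = [
    "Local volunteers plant trees in the park",
    "Voters angry over corrupt and unfair election process",
]
print(filter_negative(topics))  # only the second, negative topic survives
```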

They need people working around the clock so that the posts go out at the right time in the right timezone.
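The timezone arithmetic behind that scheduling is straightforward. A minimal sketch, assuming Python's standard `zoneinfo` module; the target timezone and posting time are illustrative:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

def post_time_utc(local_post: datetime, target_tz: str) -> datetime:
    """Convert a naive posting time in the target audience's timezone to UTC,
    so operators elsewhere know when to hit 'post'."""
    aware = local_post.replace(tzinfo=ZoneInfo(target_tz))
    return aware.astimezone(ZoneInfo("UTC"))

# e.g. a 6 p.m. post in New York on 1 July 2020 (EDT, UTC-4)
print(post_time_utc(datetime(2020, 7, 1, 18, 0), "America/New_York"))
```

Note that daylight saving shifts the answer through the year, which is one reason such operations need staff watching the clock rather than a fixed schedule.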

Next, they create, purchase or compromise PayPal accounts. Now they can hire copywriters in the target country to produce content for them. This is known as “franchising”: real Americans creating real content hides the connection to Russia.

As the popularity of the websites and social media groups grows, the troll farm can start selling advertising space to legitimate US businesses who want to reach the demographic of followers it has built. That money is paid into the PayPal accounts and used to pay the copywriters.

A perfect self-funding circle.

It is claimed that two thirds of Americans get their news from social media, so it is a powerful tool. The more controversial the content, the easier it is to get people to share it and buy into it. Human beings have proved time and time again to be terrible at detecting deception; even highly trained people perform little better than chance.

There are some clues that researchers have found to help us detect potential trolls:

  • Low levels of self-reference in the posts. It sounds more credible to say “we all believe in chemtrails” than “I believe”
  • They will use negative emotional language to mask the lack of specific references in their posts
  • They will employ quotation marks and URLs more than legitimate users to try to make their posts appear more credible and impartial
  • They will use very generic, albeit geotargeted, hashtags a lot.
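The four clues above can be turned into a crude scoring heuristic. This is a sketch only: the word lists, hashtags and equal weighting are invented for illustration and are not taken from any published detection model.

```python
import re

# Illustrative word lists; a real detector would use trained models, not these.
SELF_REFERENCE = {"i", "me", "my", "mine", "myself"}
NEGATIVE_EMOTION = {"hate", "fear", "disgusting", "outrage", "corrupt"}
GENERIC_HASHTAGS = {"#news", "#usa", "#election", "#truth"}

def troll_clue_score(post: str) -> int:
    """Count how many of the four researcher-identified clues a post shows."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    score = 0
    # Clue 1: low self-reference ("we all believe" rather than "I believe")
    if not any(w in SELF_REFERENCE for w in words):
        score += 1
    # Clue 2: negative emotional language
    if any(w in NEGATIVE_EMOTION for w in words):
        score += 1
    # Clue 3: heavy use of quotation marks and URLs
    if post.count('"') >= 2 or re.search(r"https?://\S+", post):
        score += 1
    # Clue 4: generic, geotargeted hashtags
    if any(w in GENERIC_HASHTAGS for w in words):
        score += 1
    return score  # 0 (no clues) to 4 (all four clues present)

print(troll_clue_score("We all know the system is corrupt! #news https://example.com"))  # → 4
```

A score like this would only ever be a weak signal; real detection work combines many such features with account history and network analysis.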

This is a hot issue and one I strongly urge you all to read more on as this information war continues.
