Every day there seems to be a story about new media enhancing our organizing techniques in new and interesting ways. Yet despite the advances new media makes possible, it also creates added challenges. The Internet is filled with fabricated information, often created deliberately to serve a political end, and the fallout can be very damaging. Once people read a message from several sources, or from their friends, the information becomes “common knowledge” and is accepted as fact.

One way organizations try to insert their message into the public consciousness is through astroturfing, sometimes called “cashroots” organizing. The purpose of astroturfing is to create a false sense of popular support for an idea or person; it is the manufactured form of grassroots organizing. It can mean paying people to canvass or having them send a stock letter to the editor of their local paper. Sometimes the deception is more blatant than that. In 2009, for example, the lobbying firm Bonner and Associates sent forged letters on NAACP letterhead to Representative Tom Perriello, pressing him to vote against clean energy reform. Techniques like this now run rampant on the Internet.

Corporations and governments create fake personas, also known as sockpuppets, to astroturf digitally. They make several accounts on social media sites, or comment on blogs, manufacturing broad support for or disapproval of an idea. Astroturfers deceive indirectly, by creating a false perception of widespread support, and directly, by spreading lies or linking to websites containing false information.

Luckily, we do not have to sit idly by and watch these deceptions continue. Truthy, a system built by the Indiana University Center for Complex Networks & Systems Research, analyzes and maps how information spreads on Twitter. Their first study, “Detecting and Tracking the Spread of Astroturf Memes in Microblog Streams,” developed technology to identify harmful Twitter users: those who astroturf, disseminate misinformation, and smear people who oppose them.

The folks behind Truthy collect data from Twitter and analyze how a message is delivered to determine whether it comes from astroturfing or genuine accounts. They identify a particular piece of information and create a visualization of how it was shared on Twitter. These images help gauge the “truthiness” of a tweet (a term coined by Stephen Colbert for a claim that feels true based on emotion rather than evidence or fact). They do this by displaying the direction and distribution of information flow: how many users tweet or retweet the information, where the tweets originate, and how many times a tweet is retweeted among the originators.
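To make the idea concrete, here is a minimal sketch of the kind of diffusion summary described above. It is not Truthy's actual code; the tweet record format (a `user` field and an optional `retweet_of` field) is a hypothetical simplification, but it captures the basic bookkeeping: who originated a meme, who retweeted whom, and how often.

```python
from collections import Counter, defaultdict

def diffusion_stats(tweets):
    """Summarize how a meme spread on a Twitter-like network.

    `tweets` is a list of dicts with hypothetical fields:
      "user"       - the account posting the message
      "retweet_of" - the original author being retweeted, or None
                     if this post is an original tweet.
    """
    originators = set()           # accounts that posted the meme themselves
    edges = defaultdict(int)      # (retweeter, original_author) -> count
    retweets_of = Counter()       # original_author -> times retweeted
    for t in tweets:
        source = t.get("retweet_of")
        if source is None:
            originators.add(t["user"])
        else:
            edges[(t["user"], source)] += 1
            retweets_of[source] += 1
    return {
        "originators": originators,
        "edges": dict(edges),
        "retweets_of": dict(retweets_of),
    }
```

A real system would feed numbers like these into a visualization or classifier; here they simply answer the questions the article lists: where tweets originate, and how many times each originator is retweeted.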

Below are some examples of these visualizations. Black dots are Twitter accounts that post the information, blue represents a retweet, and orange a mention.

If the information seems to have spread in a misleading way, such as the originators tweeting only one message or only following users who retweet their message, then it is labeled as truthy.
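The labeling rule just described can be sketched as a toy heuristic. This is my illustrative simplification, not Truthy's actual classifier, and the thresholds (at most two originators, over 80% retweets) are invented for the example; the point is only to show how "few originators, each posting one message, amplified almost entirely by retweets" can be turned into a check.

```python
def looks_astroturfed(n_originators, n_original_tweets, n_retweets):
    """Toy heuristic: flag a meme as suspicious ("truthy") when a tiny
    set of accounts originates it, each posting essentially one message,
    while nearly all remaining traffic is retweets of those accounts.
    Thresholds are illustrative, not from the Truthy paper."""
    total = n_original_tweets + n_retweets
    if total == 0 or n_originators == 0:
        return False
    few_sources = n_originators <= 2                    # handful of origins
    one_shot = n_original_tweets <= n_originators       # ~one post each
    mostly_retweets = n_retweets / total > 0.8          # spread is echoes
    return few_sources and one_shot and mostly_retweets
```

An organic meme, with many independent originators posting their own wording, fails the `few_sources` test even if it is heavily retweeted.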

Despite innovations in astroturf detection, companies and governments are becoming more sophisticated at avoiding the appearance of being truthy. Last February, the Daily Kos reported on a leaked HBGary email. As they describe it, HBGary is creating an “army of sockpuppets,” or what the company calls “persona management.”

Persona management involves software that automatically creates a complete fake identity online. It provides an astroturfer with online identities equipped with everything they need to look real. The personas have emails, web pages, accounts on Twitter or Myspace, and even full names for Facebook and LinkedIn, giving the appearance of a real person. The software can then update these profiles, reposting and retweeting information from other sites. Companies and governments then have “pre-aged” accounts they can use to overpower the narrative on the Internet. Scary stuff.

Thankfully, it does not seem that the voice of the people has been drowned out yet. We have to continue developing technology like Truthy to counter these tactics and preserve online democracy. Yet the technology alone will not be enough. It is essential that we engage with these tools and report astroturfing wherever we see it. Source Watch, an online encyclopedia where anyone can report on the manipulation of public opinion, is a great place to start.