Since the November presidential election in the United States, Facebook has been taking heat for the way its platform may have helped spread "fake news" designed to sway the election.
And this week, Facebook Chief Security Officer Alex Stamos revealed that the company had sold six figures' worth of ads to fake accounts linked to Russia.
"We have found approximately $100,000 in ad spending from June of 2015 to May of 2017—associated with roughly 3,000 ads—that was connected to about 470 inauthentic accounts and Pages in violation of our policies," he said in a company statement.
"Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia."
Worth noting here is that most of the ads did not advocate for a specific candidate or party.
Instead, Facebook says, the ads focused on "amplifying divisive social and political messages across the ideological spectrum—touching on topics from LGBT matters to race issues to immigration to gun rights."
In an era when bots and AI are deployed by all sides in the battle over information, the stakes are high.
And Facebook says it has spent months turning the tide in favor of what is real. Machine learning has played a part.
"We are exploring several new improvements to our systems for keeping inauthentic accounts and activity off our platform. For example, we are looking at how we can apply the techniques we developed for detecting fake accounts to better detect inauthentic Pages and the ads they may run," the company says.
"We are also experimenting with changes to help us more efficiently detect and stop inauthentic accounts at the time they are being created."
The company has cracked down on many fake accounts already in 2017.
Because when "Diana from Detroit" creates a post, it really should be Diana from Detroit behind the keyboard.