Facebook, Twitter, Google describe efforts to Congress to prevent foreign interference in 2020 election

WASHINGTON – Tech companies such as Facebook and Twitter told a House panel Thursday that they have removed dozens of networks of participants spreading misinformation about elections and coronavirus, but Democratic lawmakers worried about continued interference in the 2020 election.

“The threat of interference is real and evolving,” Nick Pickles, director of global public policy strategy and development at Twitter, told the House Intelligence Committee.

The committee held a virtual hearing with officials from Facebook, Twitter and Google. But Republicans, who have complained about Democrats holding remote hearings by teleconference, refused to participate.

The U.S. intelligence community concluded after the 2016 election that Russia interfered through a coordinated internet campaign to benefit President Donald Trump and hurt Democratic nominee Hillary Clinton.

“A tweet or Instagram photo or a YouTube video can be viewed by millions of Americans in the span of hours,” said Rep. Adam Schiff, D-Calif., the chairman of the House Intelligence Committee. “A policy that only identifies and acts upon misinformation, whether from a foreign or domestic source, after millions of people have seen it is only a partial response at best.”

Former Vice President Joe Biden, the presumptive Democratic nominee to challenge Trump in November, has urged Facebook to remove disinformation. House Speaker Nancy Pelosi, D-Calif., has called the company’s response to fake posts “shameful.”

Concerns about the widespread distribution of inaccurate information this year have spread beyond the election to include the response to the coronavirus pandemic and the nationwide protests against racial injustice.

In some cases, the manipulation is as simple as peddling non-existent T-shirts for the Black Lives Matter movement. But tech companies have also removed messages promoting ineffective or dangerous remedies for coronavirus.

Facebook removed 50 deceptive networks last year, and 18 more so far this year, that sought to manipulate public debate from Russia, Iran and the U.S., according to Nathaniel Gleicher, the company’s head of security policy, who previously prosecuted cybercrime at the Justice Department and served as director for cybersecurity policy at the National Security Council. The company also disabled 1.7 billion fake accounts between January and March, he said.

“My team was built to find and stop these bad actors and we’re working tirelessly to do so,” Gleicher said. “We also know that malicious actors are working to interfere with these conversations, to exploit our societal divisions, to promote fraud, influence our elections and delegitimize authentic social protest.”

Twitter has rules prohibiting fake accounts and impersonating others, Pickles said. The site also prohibits manipulating or interfering with elections, such as through posting or sharing content that may suppress participation or mislead people about when, where or how to participate in civic events such as the census, he said.

When the substance of a tweet is disputed, the company may apply a label, as it did when Trump criticized absentee voting. Thousands of tweets around the world have received labels, he said.

In 2018, Twitter launched a public archive of 200 million tweets that had been removed, so that researchers could review the attempts at manipulation, he said.

“We want people to have confidence in the integrity of information found on Twitter,” Pickles said.

Richard Salgado, Google’s director for law enforcement and information security matters, said the company, whose search engine launched in 1998, recognizes that the elections, the global pandemic and protests against injustice make its services more important than ever. The company created a quarterly bulletin with updates on influence campaigns, he said.

“While we saw limited misconduct linked to state sponsored activity in the 2018 midterms, we continue to keep the public informed,” he said.

A challenge for the social media companies that capitalize on the free exchange of information is whether and how to regulate content that is found to be inauthentic or misleading.

Rep. Jackie Speier, D-Calif., whose district includes Facebook’s headquarters, said she still can’t understand why the company doesn’t consider itself a media platform. Gleicher said the company is first and foremost a technology company.

Speier asked how YouTube, which is part of Google, could quickly remove a video that was slowed down to make Pelosi appear drunk, while Facebook did not.

Gleicher said Facebook reduced the ranking for the video, which “radically” dropped the number of times it was shared. But he argued removing it doesn’t work because the video could still be found on the internet.

“You’re highlighting a really difficult balance,” he said. “People who are looking for it will still find it.”

Rep. Jim Himes, D-Conn., said he left the meeting more concerned about foreign adversaries inflaming the country’s polarized politics through tech companies that thrive on division and anger.

“All it takes is a match from Russia, or from Iran, or from North Korea or from China to set off a conflagration,” Himes said. “That is what scares me most.”