Lawmakers on Tuesday introduced legislation to require social media companies to report any online extremist activity they become aware of to law enforcement.
The bill by Sen. Dianne Feinstein and Sen. Richard Burr revives language that was dropped from the Senate's annual intelligence authorization bill in September.
Companies would be required to report to law enforcement if they became aware of activity such as attack planning, recruiting or distribution of extremist material.
"This bill doesn't require companies to take any additional actions to discover terrorist activity, it merely requires them to report such activity to law enforcement when they come across it," Feinstein said in a statement.
Last week, 28-year-old Syed Farook and his wife, 29-year-old Tashfeen Malik, carried out a shooting attack in San Bernardino, California, that killed 14 people and injured 21. A Facebook post on Malik's page around the time of the attack included a pledge of allegiance to the leader of the Islamic State group, and the FBI is investigating the shooting as an act of terrorism.
Facebook found the post, which had been made under an alias, the day after the attack. The company removed the profile from public view and informed law enforcement.
In his address to the nation Sunday, President Barack Obama urged high-tech and law enforcement leaders to make it harder for extremists to use technology to escape from justice. Social media increasingly has become a tool of recruitment and radicalization for the Islamic State group, and tech companies are dedicating more resources to tracking reports of extremist or other violent threats.
The newly introduced legislation already has some prominent critics, including some social media companies.
The proposal is an "unworkable standard for reporting and a massive new liability regime that could chill free speech and innovation online, without any appreciable national security benefits," Michael Beckerman, who heads the Internet Association, said in a statement. The group represents 37 Internet companies, including Facebook, Google, LinkedIn, reddit, Twitter and Yahoo. "The legislation incentivizes Internet platforms to over-report, even poor quality information, to authorities, making it more difficult to find credible threats."
Sen. Ron Wyden said in a statement that he opposes the bill because "terrorist activity" isn't defined and, in a potential unintended consequence, companies may stop looking for suspicious content altogether so they cannot be accused of breaking the law by failing to report it.
"I'm for smart security policies. If law enforcement agencies decide that terrorist content is not being identified quickly enough, then the solution should be to give those agencies more resources and personnel so they know where to look for terrorist content online," he said.
The bill is modeled on a law that requires reporting of online child pornography. But in that case, images are automatically matched against an existing database, which helps with swift removal.
Tech industry representatives have said new laws could result in excessive reports to law enforcement and an overload of unhelpful data, which would make it more difficult to determine legitimate threats.
It's also unclear if the bill is necessary. During a July hearing before the Senate Intelligence Committee, Feinstein asked FBI Director James Comey about requiring such reports. He called it an interesting idea but said, "I do find in practice that they are pretty good about telling us what they see."