Leonard Pozner says he spends hours every day trying to erase online conspiracy theories that the death of his 6-year-old son Noah at the Sandy Hook Elementary School was a hoax.
He has taken Alex Jones of Infowars, by far the most visible Sandy Hook denier, to court. He has put pressure on major tech companies to take action against the conspiracy theorists who flourish on their platforms.
But the bulk of his work is more methodical. Sandy Hook conspiracies are strewn around the internet on various platforms, each with its own opaque rules and reporting mechanisms. So Mr. Pozner has studiously flagged countless videos and posts for a wide variety of offenses — invasions of privacy, threats and harassment, and copyright infringement — prompting Facebook, Amazon and Google to remove false material about his son.
Twitter has been less receptive to his claims, and some smaller sites have simply not responded at all. But one company, Mr. Pozner says, has actively pushed back against his attempts.
WordPress.com, one of the internet’s biggest blogging platforms, is operated by a company called Automattic, which also runs a wide array of smaller sites and internet services. Sandy Hook conspiracy theorists have been able to remain on WordPress.com thanks, in part, to policies put in place to resist previous campaigns to get content removed from its service, particularly through the strategic use of copyright claims.
“Posting conspiracy theories or untrue content is not banned from WordPress.com, and unfortunately this is one of those situations,” Automattic said in a statement. “It is a truly awful situation, and we are sympathetic to the Pozner family.”
Last week, Apple, Facebook and Google’s YouTube removed videos and podcasts from Mr. Jones and Infowars, the conspiracy site he created, from their platforms. Facebook, after fielding criticism about its decision, wrote a blog post about its commitment to free expression and the difficult questions it faces in allowing “baseless conspiracy theories” and other offensive material on its sites. Twitter, like WordPress.com, has allowed the content to remain.
These debates have put tech companies into a sort of existential crisis. But for Mr. Pozner and others like him, the arguments have long been much more personal, as they struggle with images of family members being repurposed in horrifying new ways and experience harassment themselves because of misinformation online.
“The only items that concern me is when his image is being used in a negative, ugly way — denying the tragedy, calling him a crisis actor and everything else that the typical global village idiot on the net does,” Mr. Pozner said.
In the absence of uniform online policies about hoaxes, Mr. Pozner’s most effective tool has been filing copyright claims on images of Noah. He has filed such claims with Automattic about photos of Noah appearing on posts that labeled him a “crisis actor” who had been spotted in Pakistan after Sandy Hook and others that claimed he was a “fiction” and that photos of him were created using images of his older half brother.
Automattic has repeatedly responded to Mr. Pozner with form letters saying “because we believe this to be fair use of the material, we will not be removing it at this time.” The letters explain that fair use could include “criticism, comment, news reporting, teaching, scholarship, and research.” They also warn that the company could collect damages from people who “knowingly materially misrepresent” copyrights.
“The responses from their support people are very automated, very generic, very cold and there’s just no getting through to them,” Mr. Pozner said.
“They have taken this incorrect interpretation of freedom of speech to an extreme,” he added. “The only thing WordPress has taken out — and where I’ve been successful — is if someone posts personal information like my driver’s license or address.”
Automattic said that the responses Mr. Pozner received were “a predefined statement” that is used in copyright situations. “We regret that it was used in this situation,” the company said. “We offer our apologies to the family for the response we gave to them.”
Mr. Pozner’s complaints appear to have been thwarted in part by longtime policies at Automattic intended to prevent the use of copyright claims to censor criticism and journalism on its platform. The responses sent to Mr. Pozner included a link to a post from 2013 describing the company’s efforts to deal with spurious but effective copyright claims. The post also highlighted that the company had filed suit against two particularly egregious offenders in an effort to “fight back” on behalf of people who were posting material on the platform.
The company created a “Hall of Shame” to call out businesses and people filing notices for frivolous reasons or to tamp down negative news coverage. (The New York Times Company is an investor in Automattic.)
For years, Automattic’s strident response to copyright abuse earned praise from digital rights advocates. Now, this approach has effectively lumped in Mr. Pozner with the abusers. “Strictly from a copyright perspective, WordPress.com’s response is outside the norm,” said Tom Rubin, a lecturer at Stanford Law School who oversaw Microsoft’s copyright group and takedown process for 15 years.
“They avoid getting involved because fair-use determinations are notoriously complex and fact specific,” Mr. Rubin said of online platforms. “Platforms would rather eliminate their own potential liability by taking the content down and leaving it to the parties to battle amongst themselves in court.”
Matt Mullenweg, the chief executive of Automattic, suggested in a recent interview with Recode that the company was confronting misinformation. “For things that we host and run and provide our kind of company backing to, implicitly through hosting it, we do avoid hate speech,” he said. He added that “egregiously fake or harmful things — we’re pretty good at getting off the system.”
In the case of Mr. Pozner, however, Automattic suggested that its approach was imperfect. “While our policies have many benefits to free expression for those who use our platform, our system like many others that operate at large scale, is not ideal for getting to the deeper context of a given request,” the company said in a statement.
Although the posts reported by Mr. Pozner “are not violating any current user guidelines, or copyright law,” the company said, “the pain that the family has suffered is very real and if tied to the contents of sites we host, we want to have policies to address that.”
Mr. Pozner, who has created a nonprofit group called the Honr Network, devoted to “stopping the continual and intentional torment of victims” of major tragedies like Sandy Hook, has become an expert on the many compliance procedures and content-governing bureaucracies that exist inside tech companies.
He has removed photos of Noah from Facebook by relying on policies that protect the privacy of children under 13, a process that has required him to send the company his driver’s license and a copy of his son’s birth certificate. Mr. Pozner has also successfully filed such reports with Google.
“You can’t even measure the volume of content I’ve taken down at this point,” Mr. Pozner said.
At times, he has been able to explain the abuse he and his family have received, some of it because of his efforts to purge Sandy Hook conspiracies from the internet, and seek removals based on a slowly evolving awareness in the tech community about the issue. (In June of last year, a 57-year-old woman in Florida was sentenced to five months in prison for making death threats against Mr. Pozner and his family.)
A report to Vimeo led to a response on Friday from a representative who said he would assign the case to a specialist, but first told Mr. Pozner that he was sorry to hear about his situation.
“Everyone has gotten better this year, especially with all the work that I’ve done to shame a lot of these platforms for continuing to abuse us and the memory of our children and just all of the ugliness that goes on,” Mr. Pozner said. “If you type in Noah Pozner now into an image search on Google, you’ll see it’s mostly normal results but it used to be 99 percent hateful angry memes, so the cleanup is huge.”
Mr. Pozner said he was tired of hearing technology companies say that they do not want to be “arbiters of truth,” an oft-repeated refrain, particularly as concerns around misinformation on social media grow.
“Technology platforms have had this misguided, futuristic vision of freedom of speech and everything was built around that, but it doesn’t really fit into the day-to-day use of it,” Mr. Pozner said. “By not taking action, they have made a choice. They are the arbiters of truth by doing nothing.”
Across the internet, a war is raging: free speech versus the spread of false information and conspiracy theories. Social media companies are buckling under the pressure as the public condemns them for allowing vitriolic conspiracy peddlers to remain on their sites. One company, however, is steadfast in its defense of free speech, even when that speech includes Sandy Hook conspiracy theories: WordPress.
As the New York Times reports, Leonard Pozner spends hours each day finding hurtful Sandy Hook conspiracy theories and attempting to scrub them from the internet. The theories he’s trying to quell claim that the death of his 6-year-old son, Noah, was a hoax. In particular, he flags content that uses his son’s image in negative ways: posts asserting that Sandy Hook was staged or labeling the child a crisis actor.
It’s a heart-wrenching situation, and he’s had some success in flagging content shared to Facebook, Google, and Amazon. Blog posts shared to WordPress, however, are a different story.
“Posting conspiracy theories or untrue content is not banned from WordPress.com, and unfortunately this is one of those situations,” Automattic told the New York Times in a statement. “It is a truly awful situation, and we are sympathetic to the Pozner family.”
Pozner, in some cases, has tried getting content removed by filing copyright claims on his son’s photos. Even this hasn’t worked with Automattic, however, which has deemed the imagery to be “fair use.” (A stance one Stanford lecturer called “outside the norm,” since fair use determinations can be incredibly complex.) The company has these policies in place to protect journalism, but in this case, it also seems to be protecting the conspiracy theories of the far-right.
Automattic’s CEO has said in a Recode interview that the company avoids hate speech, and egregiously fake, harmful things on its platform.
Striking the right balance between allowing people to share what they want to share, protecting journalists from outlandish copyright claims, and ensuring dangerous content doesn’t proliferate is proving to be one of the toughest challenges for technology companies in 2018.