Dallas Attorneys Talk About the Take It Down Act | Dallas Observer

Dallas Attorneys Talk About the Take It Down Act

Posting revenge porn and non-consensual illicit deepfakes may soon be federal crimes, but prosecuting them may not be easy.
Image: Deepfake technology will likely help both criminals and law enforcement. Adobe Stock

There's little doubt that U.S. Sen. Ted Cruz is one of the more polarizing politicians on the national political landscape these days. But a bill the Republican from Texas helped introduce, and which has passed the Senate, is far from polarizing, though it has its critics. Even First Lady Melania Trump has championed the effort. Senate Bill 146, aptly named the Take It Down Act, now awaits its turn in the House.

The bipartisan legislation aims to create a federal law to criminalize the distribution of non-consensual intimate imagery (or NCII) and deepfake revenge porn while placing a legal obligation on social media platforms to take down these images.

Like 48 other states in the union, Texas has already established revenge porn laws and penalties at the state level; South Carolina is currently the only state in the country without legal consequences for distributing NCII. The Take It Down Act would create a national standard where there hasn't been one. In February, the Electronic Frontier Foundation, a group that says it "defends civil liberties in the digital world," called SB 146 "flawed" while expressing concern that the bill will "lead to censorship and overreach" due to overly broad definitions contained in the legislation.

As critics point out, a noble aim doesn't necessarily make even a bipartisan effort clear and unimpeachable. Questions on how to combat revenge porn of all sorts will remain regardless of how the bill is received by the House. According to some Dallas attorneys, rapidly evolving technology might make this type of online crime more prevalent while possibly also helping to fight it.

Unbalanced Gender Scales

“Around 75% of victims of these offenses are women, and this phenomenon usually happens during or after breakups,” said Dallas attorney Michelle O’Neil, who is also an adjunct professor at Baylor Law School.

O’Neil says these cases are escalating, particularly in contentious divorces, where manipulated content is being weaponized as false evidence. She predicts an influx as the technology advances, and she advocates for the passage of a federal law ensuring social media platforms take down posts containing NCII.

“In our field, we’ve even seen young women becoming suicidal over it because of the trauma that they suffer. This act would reduce the trauma that the victims of revenge porn and the deepfake videos are suffering.”

In 2022, Congress added a new provision to the Violence Against Women Reauthorization Act that lets plaintiffs bring a civil action in federal court against someone who shared intimate images, explicit pictures, recorded videos or other depictions of them without consent. The Take It Down Act would make such an offense a criminal violation.

AI Doom and Gloom? Maybe Not

Andrew Speer, partner at Dallas law firm Walters Gilbreath, taught himself to create AI images and videos using what are known in the AI world as LoRAs, or Low-Rank Adaptations, by reading online manuals and watching YouTube videos. This self-education has prepared him for a legal environment that he agrees will most likely see an influx of deepfake evidence.


“I can take 25 or 30 pictures of a person, and [AI] eventually learns a person. Pictures you can fake very easily now, but there are ways to tell a fake — like if it looks cinematic or almost staged and consistently has a blurred background.”


Speer has become a consulting expert in technology in family law. He argues there’s money to be made through government contracts with the private technology sector from programs that can detect deepfakes and other falsified and altered images and audio in legal proceedings. He thinks the tech used to commit the crime will soon be used to catch the criminals.

“I’m not so sure it’s all the gloom and doom it could be,” said Speer. “There will likely be specific programs that eventually everyone will use to determine if the evidence is AI-generated, like how Soberlink and Family Wizard have become the standard in DWI and family courts.”

Still, Speer, like O’Neil, is concerned that an influx of falsified evidence will flow in as the general public gains more access to advancing technologies and loopholes in AI software remain open.

“If you can remove someone from a picture or add something as simple as a beer in a photo, that can impact a whole case,” he said. “You can take a real picture of someone and run it through an extension to put someone else’s face on it — this is how most deepfake porn is made. Many [pieces of] software censor porn, but people are going in and editing the blocks in the code.”


Free Speech and Intent

Critics argue that if passed, the law could not only impede free speech and due process but also give the Trump administration too much power. The Electronic Frontier Foundation wrote in March that the president heartily endorsed the bill, but not for the reasons its authors likely intended.

"The Senate just passed the Take It Down Act…. Once it passes the House, I look forward to signing that bill into law," the president said during a recent joint session of Congress. "And I’m going to use that bill for myself too, if you don’t mind, because nobody gets treated worse than I do online."

Opponents also argue that the bill could create copyright exceptions, causing a plethora of other legal problems.

Dallas attorney Lisa McKnight said she hasn’t seen anyone prove a deepfake, but she has seen digitally altered images of assault injuries in family law cases, altered phone records, and falsified paternity tests. Regardless, she’s confident that attorneys and judges will be able to distinguish intent in AI NCII cases.

“When you’re trying to defame someone that isn’t a public figure, that’s where you draw the line,” she said. “And we have to stop and ask: are you making an allegation, or is it political satire?”

McKnight is adamant in her stance that AI companies need to be held responsible as well as social media companies.

“AI companies need to be held responsible. The willingness for people to do it [create deepfakes] is there, so there needs to be an appropriate response.”