
AI-generated images of child sexual abuse are going viral. Law enforcement is desperate to stop them

By admin · October 25, 2024


WASHINGTON (AP) — A child psychiatrist altered a first-day-of-school photo he saw on Facebook to make a group of girls appear nude. A U.S. Army soldier was accused of creating images depicting children he knew being sexually abused. A software engineer was charged with producing highly realistic sexually explicit images of children.

Law enforcement agencies across the United States are cracking down on the proliferation of child sexual abuse images created with artificial intelligence technology, from doctored photos of real children to graphic computer-generated depictions of children. Justice Department officials say they are aggressively pursuing offenders who misuse AI tools, while states are racing to ensure that people who create “deepfakes” and other harmful images of children can be prosecuted under their laws.

“We must communicate early and often that this is a crime and that if the evidence supports it, it will be investigated and prosecuted,” Steven Grocki, head of the Justice Department’s child exploitation and obscenity section, said in an interview with The Associated Press. “If you’re sitting there thinking otherwise, you’re fundamentally wrong, and it’s only a matter of time before someone holds you accountable.”

The Justice Department said existing federal law clearly applies to such content, and it recently brought what is believed to be the first federal case involving purely AI-generated images, in which the children depicted are virtual rather than real. In a separate case, federal authorities in August arrested a U.S. soldier stationed in Alaska accused of running innocent pictures of real children he knew through an AI chatbot to make them sexually explicit.

Trying to catch up with technology

The charges come as child advocates work urgently to curb the misuse of the technology, fearing that a flood of disturbing images could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and track down exploited children who don’t actually exist.

Meanwhile, lawmakers are passing a flurry of bills to ensure that local prosecutors can bring charges under state law over AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered images of child sexual abuse, according to research by the National Center for Missing and Exploited Children.

“Frankly, as a law enforcement agency, we are catching up to technology that is advancing much faster than we are,” said Ventura County, California, District Attorney Eric Nasarenko.

Nasarenko pushed for a bill, signed by Gov. Gavin Newsom last month, that makes clear that AI-generated child sexual abuse content is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California law had required prosecutors to prove the images depicted a real child.

AI-generated child sexual abuse images could be used to groom children, law enforcement officials say. And even if they are not physically abused, children can be seriously affected if their images are altered to appear sexually explicit.

“I felt like a part of me was taken away, even though I wasn’t physically assaulted,” said Kaylin Heyman, a 17-year-old who starred on the Disney Channel show “Just Roll With It” and helped push for the California bill after becoming the victim of “deepfake” images.

Heyman testified last year in the federal trial of a man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

Criminals are known to favor open-source AI models that users can download onto their own computers, which they can further train or modify to churn out explicit depictions of children in large quantities, experts say. Officials say abusers trade tips in dark web communities on how to manipulate AI tools to create such content.

A report last year by the Stanford Internet Observatory found that a research dataset used to build major AI image generators such as Stable Diffusion contained links to sexually explicit images of children, which helps explain why some tools have been able to produce harmful imagery so easily. The dataset was taken down, and the researchers later said they had removed more than 2,000 web links to suspected child sexual abuse images from it.

Top technology companies including Google, OpenAI, and Stability AI have agreed to work with anti-child sexual abuse organization Thorn to combat the spread of child sexual abuse images.

But experts say more should have been done from the start to prevent misuse before the technology became widely available. And the steps companies are taking now to make it harder to abuse future versions of AI tools “will do little to prevent” offenders from running older versions of the models on their own computers “without detection,” Justice Department prosecutors wrote in a recent court filing.

“Time wasn’t spent on making the products safe, as opposed to efficient, and as we’ve seen, that’s very hard to do after the fact,” said David Thiel, chief technologist at the Stanford Internet Observatory.

AI images become even more realistic

Last year, the National Center for Missing and Exploited Children’s CyberTipline received approximately 4,700 reports of content involving AI technology, a small fraction of the more than 36 million total reports of suspected child sexual exploitation. By October of this year, the group was receiving about 450 reports of AI-related content each month, said Yiota Souras, the group’s chief legal officer.

But experts say the images are so realistic that it’s often difficult to tell whether they were generated by AI or not.

“Investigators are spending hours just trying to determine whether an image actually depicts a real minor or was generated by AI,” said Rikole Kelly, a Ventura County deputy district attorney who helped write the California bill. “There used to be some clear indicators, but with the advances in AI technology, that’s no longer the case.”

Justice Department officials say they already have tools under federal law to go after the perpetrators of these images.

In 2002, the U.S. Supreme Court struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year bans the production of visual depictions of children engaged in sexually explicit conduct, including drawings, that are deemed “obscene.” The Justice Department says the law, which has been used in the past to prosecute cartoon depictions of child sexual abuse, specifically notes there is no requirement “that the minor depicted actually exist.”

In May, the Justice Department charged a Wisconsin software engineer with using the AI tool Stable Diffusion to create graphic images of children engaged in sexually explicit conduct. Authorities say he was arrested after sending some of the images to a 15-year-old boy through a direct message on Instagram. The man’s attorney, who is seeking to have the charges dismissed on First Amendment grounds, declined further comment on the allegations in an email to The Associated Press.

A spokesperson for Stability AI said the man is accused of using an early version of the tool released by another company, Runway ML. Since taking over exclusive development of the model, Stability AI says it has “invested in proactive features to prevent the misuse of AI for the creation of harmful content.” A Runway ML spokesperson did not immediately respond to a request for comment from The Associated Press.

The Justice Department has also brought charges under federal child pornography law in cases involving “deepfakes,” in which photos of real children are digitally altered to make them sexually explicit. In one case, a North Carolina child psychiatrist who used an AI application to digitally “undress” girls posing on their first day of school in a decades-old photo shared on Facebook was convicted on federal charges last year.

“These laws exist. They will be used. We have the will. We have the resources,” Grocki said. “Just because there are no actual children involved doesn’t mean it’s a low priority we will ignore.”


