AI Content Policy

Last updated: May 7, 2026

This policy explains what content can be generated, edited, or stored on iDesign Platform. It complements the Acceptable Use Policy with AI-specific detail.

Categories we don't allow

CSAM and child sexualization

We apply zero tolerance, no matter how stylized the depiction. CSAM is detected, blocked, removed, and reported to NCMEC.

Non-consensual intimate imagery

No nudes, semi-nudes, or sexualized depictions of identifiable real people without verifiable consent.

Realistic deepfakes of real people

Photorealistic depictions of identifiable public or private individuals are restricted unless you can show that the depicted person consented, or the content is clearly satirical and labeled as such.

Content that endangers public safety

  • Synthetic election media designed to mislead voters about candidates, processes, or outcomes.
  • Fabricated emergency alerts or imitations of authoritative public communications.
  • Detailed instructional imagery for building weapons capable of mass harm.

Counterfeits & forgeries

Currency, IDs, passports, certificates, signatures, official seals, and branded packaging used to deceive are all off-limits.

Hate symbols and content

No content that promotes, glorifies, or recruits for hate groups, terrorist organizations, or violent ideologies.

Realistic gore and graphic cruelty

Some violence is part of art and storytelling. Realistic torture, execution, and animal cruelty rendered for shock are not.

Things to be careful with (but not banned)

Recognizable styles of living artists

Mimicking a living artist's style is technically possible, but it can be legally and ethically risky, especially in commercial work. Get permission before publishing work in the unmistakable style of a living artist.

Brands, logos, and characters

Generating copyrighted characters or trademarked logos is fine for personal exploration. Commercial use almost certainly requires the rights-holder's permission.

Adult content (where allowed)

We currently do not support generation of sexually explicit content. If we change this in the future, it will require age verification and additional safeguards.

Provenance & disclosure

Where outputs are likely to be confused with real photographs of real people, we encourage you to disclose AI use. Many platforms now require it.

How we enforce

  • At input: some prompts are blocked before reaching the model. False positives happen; if your prompt was blocked in error, appeal via abuse@idesignplatform.com.
  • At output: generated content may be scanned for the prohibited categories above before delivery.
  • On report: abuse reports are reviewed by humans, typically within 24 hours.

Appeals

If a generation was blocked or removed in error, reply to the notification; a real person will review it and respond. If you've been wrongly suspended, email appeals@idesignplatform.com.

Why this exists

AI image tools are powerful and the harms when they're misused are real. Most users will never see this page because most uses are creative, legitimate, and welcome. The line is here so the rest can keep being fun.