Content Moderation Policy

Our approach to content moderation and safety on GPT Image 1.5

Last Updated: January 7, 2026

We are committed to maintaining a safe, respectful, and lawful environment for all users of GPT Image 1.5. This policy explains our content moderation practices and safety measures.

Our Approach

Independent AI Service

GPT Image 1.5 is an independent AI service that runs state-of-the-art open-source models released under the Apache-2.0 license. We are not affiliated with any model provider. We deploy and optimize these models with built-in safety filters and moderation systems.

Multi-Layer Safety System

Every generation request passes through four layers of checks (a simplified sketch follows this list):

  1. Pre-Generation Filtering
     • Automated prompt analysis
     • Keyword and pattern detection
     • Intent classification
  2. Model-Level Safety
     • Safety-trained models with built-in filters
     • Refusal to generate prohibited content
     • Content classification systems
  3. Post-Generation Review
     • Automated image analysis
     • Safety score assessment
     • Flagging of suspicious content
  4. Human Review
     • Review of flagged content
     • Investigation of user reports
     • Manual moderation decisions
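
To make the flow concrete, here is a minimal sketch of how such a layered pipeline could be wired together. It is illustrative only: the function names, the placeholder blocked-term list, and the safety-score thresholds are assumptions made for this example, not our production implementation.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""
    needs_human_review: bool = False

def pre_generation_filter(prompt: str) -> Verdict:
    """Layer 1: prompt analysis, keyword/pattern detection, intent checks."""
    blocked_terms = {"example-blocked-term"}  # placeholder list, not real terms
    if any(term in prompt.lower() for term in blocked_terms):
        return Verdict(allowed=False, reason="prompt matched a blocked pattern")
    return Verdict(allowed=True)

def generate_with_safety_model(prompt: str) -> bytes | None:
    """Layer 2: a safety-trained model that may refuse (returns None)."""
    return b"<image bytes>"  # model call elided in this sketch

def post_generation_review(image: bytes) -> Verdict:
    """Layer 3: automated image analysis producing a safety score."""
    safety_score = 0.95  # placeholder; a real system computes this per image
    if safety_score < 0.5:
        return Verdict(allowed=False, reason="low safety score")
    if safety_score < 0.8:
        return Verdict(allowed=True, needs_human_review=True)
    return Verdict(allowed=True)

def queue_for_human_review(image: bytes) -> None:
    """Layer 4 hand-off; a no-op in this sketch."""
    pass

def moderate(prompt: str) -> bytes | None:
    """Run the layers in order; any layer can stop the request."""
    if not pre_generation_filter(prompt).allowed:
        return None
    image = generate_with_safety_model(prompt)
    if image is None:  # model-level refusal
        return None
    verdict = post_generation_review(image)
    if not verdict.allowed:
        return None
    if verdict.needs_human_review:
        queue_for_human_review(image)  # Layer 4: manual moderation decision
    return image
```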

Prohibited Content Categories

We prohibit the generation of:

1. Child Safety

  • Any content depicting, suggesting, or relating to minors in any context
  • Zero-tolerance policy with immediate account termination

2. Violence and Gore

  • Graphic violence or gore
  • Depictions of physical harm
  • Weapons used in threatening contexts

3. Adult Content

  • NSFW content
  • Sexually explicit imagery
  • Nudity or sexual content

4. Hate and Discrimination

  • Content promoting hate speech
  • Discriminatory imagery
  • Content targeting protected groups

5. Illegal Activities

  • Content related to illegal acts
  • Controlled substances
  • Criminal activities

6. Deception and Fraud

  • Deepfakes without disclosure
  • Fake credentials or documents
  • Impersonation of real individuals

AI-Generated Content Disclosure

All images generated by GPT Image 1.5:

  • Are AI-generated and synthetic
  • Do not depict real people or real events
  • Should be labeled as AI-generated when shared publicly

Automated Detection

Our systems use:

  • Machine learning classifiers
  • Keyword and pattern matching
  • Computer vision analysis
  • Behavioral analysis
  • Risk scoring algorithms

Note: Automated systems are not perfect. We continuously improve our detection capabilities and appreciate user cooperation.
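
To illustrate how several of these signals might be combined, the sketch below blends a pattern match, a placeholder classifier probability, and a simple behavioral signal into a single risk score with a human-review threshold. Every weight, threshold, pattern, and function name here is an assumption chosen for the example, not a description of our production system.

```python
import re

# Illustrative weights and threshold; not our production values.
WEIGHTS = {"keyword": 0.5, "classifier": 0.35, "behavior": 0.15}
REVIEW_THRESHOLD = 0.6

def keyword_signal(prompt: str) -> float:
    """1.0 if the prompt matches a blocked pattern, else 0.0."""
    patterns = [r"\bexample[- ]blocked\b"]  # placeholder patterns
    return 1.0 if any(re.search(p, prompt.lower()) for p in patterns) else 0.0

def classifier_signal(prompt: str) -> float:
    """Placeholder for an ML classifier's probability of violation."""
    return 0.1  # a real system would call a trained model here

def behavior_signal(recent_flags: int) -> float:
    """Scale a user's recent flag count into [0, 1]."""
    return min(recent_flags / 5.0, 1.0)

def risk_score(prompt: str, recent_flags: int) -> float:
    """Weighted combination of the three signals."""
    return (WEIGHTS["keyword"] * keyword_signal(prompt)
            + WEIGHTS["classifier"] * classifier_signal(prompt)
            + WEIGHTS["behavior"] * behavior_signal(recent_flags))

# Scores above the threshold are flagged for human review rather than
# auto-banned, one way to absorb the false positives noted above.
if __name__ == "__main__":
    score = risk_score("a watercolor landscape", recent_flags=0)
    print(f"risk={score:.2f}, review={score >= REVIEW_THRESHOLD}")
```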

User Reporting

How to Report

Email: [email protected]

Include:

  • Description of the violation
  • Content URL or reference
  • Screenshots (if applicable)
  • Your contact information

What We Do

  1. Initial Review: Reports are reviewed within 24 hours
  2. Investigation: Thorough examination of reported content
  3. Action: Appropriate enforcement measures
  4. Response: Notification of action taken (when appropriate)

Enforcement Actions

Warning

  • First-time minor violations
  • User education about policies

Content Removal

  • Immediate removal of violating content
  • Notification to user

Temporary Suspension

  • Repeated violations
  • Duration: 7-30 days
  • Appeal available

Permanent Ban

  • Severe violations
  • Repeated offenses after warnings
  • Illegal content generation
  • Limited appeal process

Referral to Authorities

  • Child exploitation content is reported to NCMEC
  • Criminal activities are reported to law enforcement
  • Additional disclosures as required by law
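
For illustration, the escalation ladder above can be read as a small decision function. The severity labels and thresholds below are assumptions chosen for the example; they are not a binding schedule.

```python
def enforcement_action(severity: str, prior_violations: int) -> str:
    """Map violation severity and user history to an enforcement action.

    Severity labels ("minor", "severe", "illegal") and the escalation
    thresholds are illustrative only.
    """
    if severity in ("severe", "illegal"):
        return "permanent ban"  # severe/illegal content skips the ladder
    if prior_violations == 0 and severity == "minor":
        return "warning"  # first-time minor violation, with user education
    if prior_violations < 3:
        return "content removal + temporary suspension (7-30 days)"
    return "permanent ban"  # repeated offenses after warnings

# Example: a first minor violation draws a warning, not a suspension.
assert enforcement_action("minor", prior_violations=0) == "warning"
```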

Data Usage Policy

We Do NOT:

  • Use user prompts to train AI models
  • Share user-generated content with third parties
  • Store images longer than 30 days

We DO:

  • Use anonymized data to improve safety systems
  • Analyze patterns to detect abuse
  • Comply with legal requirements
  • Cooperate with law enforcement when required

Geographic Compliance

We comply with content laws in:

  • United States
  • European Union (DSA, GDPR)
  • United Kingdom
  • All jurisdictions where we operate

Transparency

We are committed to:

  • Clear communication of our policies
  • Regular policy updates
  • Transparency reports (planned)
  • User education

False Positives

If content was incorrectly flagged:

  1. Appeal Process:
     • Email: [email protected]
     • Include: Account details, content reference, explanation
     • Response: 2-5 business days
  2. Review:
     • Human review of automated decisions
     • Fair assessment
     • Account restoration if appropriate

Changes to This Policy

We may update this policy to:

  • Improve safety measures
  • Comply with new regulations
  • Address emerging risks
  • Enhance user protection

Users will be notified of significant changes via email.

Contact

General Questions: [email protected]
Report Abuse: [email protected]
Legal Inquiries: [email protected]

Emergencies

For emergency situations involving:

  • Imminent danger
  • Child exploitation
  • Terrorism

Contact: [email protected] or call law enforcement directly.