GAIT v1 now available for preview. Help define the future of AI in games by adopting GAIT and giving feedback.

Purpose

GAIT (Game AI Transparency) is a voluntary, multi-level disclosure framework for communicating how generative AI is used in video game products. It provides a shared vocabulary for studios, developers, players, platforms, and regulators.

GAIT is neutral on AI adoption. It describes how AI is used, not whether it should be. Studios at every level — from fully human-crafted to extensively AI-powered — can participate.

Scope

What GAIT covers

GAIT applies to the shipped game product: the software and content that players download, install, and play. This includes all pre-generated assets and any live generative systems that operate at runtime.

What GAIT does not cover

  • Marketing and promotional materials (trailers, advertising, social media, key art, press materials). AI use in marketing is a distinct domain and does not affect a game's GAIT level.
  • Traditional game AI: Pathfinding, behavior trees, finite state machines, utility AI, GOAP, and similar deterministic or heuristic systems.
  • Procedural generation: Deterministic algorithmic content creation with set rules (e.g., Minecraft terrain generation, roguelike level layouts). These systems raise no training-data consent issues.
  • ML-driven optimization systems: Matchmaking algorithms, dynamic pricing engines, recommendation systems, anti-cheat detection, and similar classification/prediction models that do not generate creative content.
  • Upscaling and rendering technology: NVIDIA DLSS, AMD FSR, Intel XeSS, and similar frame generation or resolution upscaling techniques.

Definition of "AI"

"AI" in this framework means generative AI: machine learning systems trained on data that produce non-deterministic outputs (text, images, audio, video, 3D assets) via models such as large language models, diffusion models, or generative adversarial networks.

A studio whose only AI use falls into the excluded categories listed above does not require GAIT disclosure and may optionally adopt Level 0.

Accessibility Exemption

AI systems used exclusively to translate, transcribe, or modify game states for accessibility purposes do not trigger Level 3 or Level 4 designations, provided the core game content remains human-led. Exempt uses include:

  • Real-time text-to-speech and speech-to-text
  • Dynamic visual filters or UI adaptation for colorblindness or low vision
  • AI-powered audio description of on-screen events
  • Adaptive difficulty or input remapping driven by accessibility needs
  • Real-time sign language avatar interpretation

The exemption is narrow. It applies only to systems whose sole purpose is accessibility. A system serving both accessibility and general audiences (e.g., AI voice synthesis used for NPC dialogue and accessibility narration) is evaluated at the appropriate level for its general use. Studios claiming the exemption must document which AI systems qualify and confirm their exclusive accessibility purpose.

Universal Red Lines

These prohibitions apply at every level, including Level 0. They are non-negotiable conditions of GAIT participation. A studio that violates any red line forfeits its GAIT designation until the violation is remediated and independently verified.

Red Line 1: No Unconsented Creative Replication

No studio may use generative AI to replicate, imitate, or synthesize the creative output of any individual — including voice, likeness, motion capture performance, artistic style, or musical composition — without that individual's informed, documented consent.

Consent must be:

  • Given before the AI-generated content is created
  • Specific to the project and use case
  • Documented in writing
  • Accompanied by compensation per applicable collective bargaining agreements or individual contracts

Red Line 2: No Concealment of Disclosed-Level AI Use

A studio that has adopted a GAIT level must not actively conceal AI use that would place it at a higher level. Accidental inclusion of placeholder AI content triggers a mandatory correction process (update the level, patch or replace the content, issue a public acknowledgment) but does not itself constitute a red line violation. Deliberate concealment does.

A QA failure is correctable. A cover-up is disqualifying.

Red Line 3: No AI-Generated Content Targeting Minors Without Safeguards

Any game rated for audiences that include minors (ESRB E through T, PEGI 3 through 16, CERO A through C, or equivalent) that uses live generative AI features (Level 4) must implement content filtering, output monitoring, and human review mechanisms appropriate to the age rating.

The Levels

GAIT defines six levels: 0, 1, 1.5, 2, 3, and 4. The half-step at 1.5 addresses code's unique position — it ships to players but is not creative content players experience aesthetically.

Level 0: AI-Free
No generative AI at any stage

No generative AI was used at any stage of development or production for this game, including internal tools, concept ideation, code generation, localization, or QA.

Everything in this game — art, music, writing, code, voice acting, translations — was created entirely by humans using non-generative tools.

Documentation required

  • Written studio AI policy confirming prohibition for this product
  • Attestation by project lead

Note: Level 0 does not require that no employee has ever used generative AI in their personal lives, on other projects, or in general learning. It applies to the specific game product.

Level 1: AI-Assisted Development
AI used in dev, nothing AI-generated ships

Generative AI was used as a development tool during production, but no AI-generated output reaches the player in the shipped product — neither creative content nor code. All player-facing assets and all shipped code are human-authored.

AI may have been used behind the scenes (brainstorming, prototyping, generating placeholder art later replaced, research), but everything in the final product is human-made.

The Level 1 / Level 1.5 boundary for code

  • Level 1: Generative AI is used in a conversational, external capacity for code-related tasks. A developer asks ChatGPT how to structure a function, pastes a snippet into a chat for debugging advice, or queries an AI tool for architectural guidance. The AI tool is not integrated into the project environment.
  • Level 1.5: Any integrated generative code tool — whether IDE-based (GitHub Copilot, Cursor, Amazon CodeWhisperer, Tabnine, Codeium) or agentic/CLI-based (Claude Code, Windsurf, Aider, Devin) — is present in the development environment and directly outputs code into the project.

This boundary is binary: is a generative code tool part of the project's development toolchain? If yes → Level 1.5. If AI is used only through external conversational interfaces → Level 1.

Documentation required

  • Written description of AI tools used and for what purposes
  • Confirmation that no integrated generative code tools are part of the development toolchain
  • Process documentation showing how AI-generated content is excluded from the final build
  • Attestation by project lead

Level 1.5: AI-Generated Code
AI code tools in the toolchain, AI code ships

Generative AI code tools — whether IDE-integrated (GitHub Copilot, Cursor, Codeium) or agentic/CLI-based (Claude Code, Windsurf, Aider, Devin) — are part of the development toolchain, and AI-generated code ships in the final product. However, no AI-generated creative content reaches the player. All art, audio, text, voice, music, localization, and other experiential content is human-authored.

Code occupies a unique position: it ships to the player and runs on their machine, but it is not creative content players experience aesthetically. A player cannot perceive whether a rendering function or save system was written by a human or generated by Copilot or Claude Code.

Documentation required

  • Identification of all generative AI code tools in the development toolchain (IDE-integrated and agentic/CLI-based)
  • Attestation by project lead and engineering lead confirming no AI-generated creative content in shipped product

Level 2: AI-Augmented Creative Pipelines
Human-led creative pipelines with AI tools integrated

Generative AI tools are integrated into human-driven creative pipelines that produce player-facing content. The creative pipeline is led by human professionals who make the core aesthetic, narrative, and design decisions. AI tools serve as instruments within that pipeline — accelerants, not replacements.

Level 2 is defined at the pipeline level, not the individual asset level. The question is: is this creative pipeline fundamentally human-driven, with AI tools integrated into it?

Indicators of a Level 2 pipeline

  • Human creative professionals with domain expertise are staffed and make core creative decisions
  • AI tools are one component in a multi-step workflow, not the entire workflow
  • The pipeline could exist without the AI tools; they accelerate rather than constitute the process
  • Creative direction, quality judgment, and final approval rest with humans

Examples

  • AI LOD tool integrated into a human-built environment art pipeline
  • AI-assisted texture upscaling refined by technical artists
  • AI-powered rigging or animation cleanup tools used by animators
  • Generative fill used as one step in a concept artist's process
  • AI-assisted localization with human translator review and finalization
  • AI draft generation used by writers as a starting point for substantial rewriting

Content category tags

One or more must be specified:

  • ART: Visual art, textures, concept art, UI elements, icons
  • AUDIO: Music, sound effects, ambient audio
  • VOICE: Voice acting, spoken dialogue, narration
  • TEXT: Narrative text, dialogue, lore, item descriptions, tutorials
  • LOC: Translations, localized text or audio
  • VIDEO: Cinematics, cutscenes, FMV content
  • 3D: 3D models, animations, environments, character models

A designation reading GAIT 2: ART, LOC tells players that art and translations involve AI-augmented human-led pipelines, while other categories remain fully human-authored.

Documentation required

  • Inventory of creative pipelines where AI tools are integrated, with content category tags
  • Description of AI tools and their role within each pipeline
  • Team structure documentation showing human creative professionals staffed on AI-augmented pipelines
  • Attestation by project lead and relevant department leads

Level 3: AI-Generated Shipped Content
AI is the creative pipeline for specified content

Generative AI is the creative pipeline for one or more categories of player-facing content. The AI system is the primary content generator; human involvement is concentrated on prompting, selecting, curating, light editing, quality control, and integration. For the specified categories, humans are gatekeepers rather than authors.

A game may be Level 3 for some content categories and Level 2 or lower for others.

Content category tags

One or more must be specified (same tags as Level 2):

  • ART: Visual art, textures, concept art, UI elements, icons
  • AUDIO: Music, sound effects, ambient audio
  • VOICE: Voice acting, spoken dialogue, narration
  • TEXT: Narrative text, dialogue, lore, item descriptions, tutorials
  • LOC: Translations, localized text or audio
  • VIDEO: Cinematics, cutscenes, FMV content
  • 3D: 3D models, animations, environments, character models

A designation reading GAIT 3: ART, LOC tells players that art and translations were primarily AI-generated while other categories remain human-led.

Code is not a Level 3 tag. AI-generated code is addressed at Level 1.5 regardless of scope.

For VOICE tag: Performer consent documentation per Red Line 1 is required.

For studio-owned or studio-trained AI models: Training data provenance documentation is required (see Section 7).

Documentation required

  • All Level 2 documentation, plus:
  • Specific content category tags with descriptions
  • Volume estimate (e.g., "approximately 40% of environmental textures")
  • Identification of AI models/services used
  • Training data provenance for studio-owned models

Level 4: Live Generative AI
Real-time generative AI during gameplay

The game includes AI systems that generate content in real time during gameplay, producing non-deterministic outputs experienced directly by the player. This includes LLM-powered NPC dialogue, real-time AI-generated quests or narratives, runtime image/texture generation, AI-generated music beyond pre-composed variations, or any system where the player experiences content generated by a generative AI model during play.

Live content category tags

One or more must be specified:

  • LIVE-DIALOGUE: AI-generated NPC conversation, reactive dialogue
  • LIVE-NARRATIVE: AI-generated quests, storylines, scenarios, events
  • LIVE-ART: Runtime-generated textures, images, environments
  • LIVE-AUDIO: Generative music, adaptive soundscapes beyond pre-composed stems
  • LIVE-VOICE: Real-time AI voice synthesis
  • LIVE-WORLD: AI-generated world content, environments, creatures (generative, not procedural)

A game where generative AI is the explicit creative premise (e.g., AI Roguelite, Death by AI) carries Level 4 with full tag disclosure — which in those cases is a feature, not a warning.

For LIVE-VOICE: Performer consent documentation per Red Line 1 is required.

For LIVE-DIALOGUE: An in-game disclosure mechanism must inform players they are interacting with AI (may be contextually obvious in AI-themed games but must be explicit where it is not).

For games rated for minors: Red Line 3 safeguard documentation is required.

Documentation required

  • All applicable lower-level documentation, plus:
  • Technical description of each live AI system
  • Content safety and moderation measures
  • Data handling disclosure (what player data the AI system collects or uses)
  • Training data provenance for studio-owned models

Distinguishing Level 2 from Level 3

The distinction is architectural, not quantitative. The question is whether humans drive the creative pipeline with AI as a tool (Level 2), or whether AI is the creative pipeline with humans as curators (Level 3).

  • Who drives the pipeline? Level 2: human creative professionals. Level 3: the AI system.
  • Role of AI: Level 2: a tool within a human-directed workflow. Level 3: the primary content generator.
  • Role of humans: Level 2: authors using AI-augmented tools. Level 3: curators, prompters, selectors, QC.
  • Could it work without AI? Level 2: yes, with different or slower tools. Level 3: not without fundamental restructuring.
  • What does an auditor examine? Level 2: workflow architecture and team structure. Level 3: whether content categories have human-led or AI-led pipelines.

Solo Developers and Micro-Studios

For teams of 1–3 people without differentiated departments, the Level 2/3 distinction uses a primary authorship test:

  • Level 2: The human creator is the primary author of the base asset — sketching the original layout, writing the original draft, composing the original melody — with AI used for refinement, upscaling, or augmentation.
  • Level 3: The AI generates the foundational asset from a prompt and the human's role is selection, curation, and polish.

At studio scale: who drives the pipeline? At solo-developer scale: who authored the base asset?

Boundary Examples

  • Concept artist generates Midjourney references, paints final art by hand → Level 1 or 2 (human authored the shipped asset)
  • Studio uses AI texture generator in a pipeline staffed by technical artists who direct and approve → Level 2 (human-led pipeline)
  • Studio prompts diffusion model for character portraits, ships with color correction → Level 3: ART (AI generated the foundational asset)
  • Solo dev writes all dialogue, uses AI for localization with personal review → Level 2 if dev speaks target language; closer to Level 3: LOC if not
  • Two-person team: one composes music with AI orchestration, the other prompts an image generator for all visuals → Level 2 for audio, Level 3: ART for visuals

Training Data Provenance

Training data provenance requirements apply only to AI models that the studio owns, has trained, has fine-tuned, or has commissioned for exclusive use. Third-party generative AI services (OpenAI, Stability AI, Midjourney, etc.) are governed by their own providers and regulated separately.

Studios using off-the-shelf third-party AI tools are not required to document those providers' training data under GAIT. A game studio using Midjourney cannot meaningfully audit Midjourney's training corpus — that responsibility lies with model providers and the regulatory frameworks governing them.

Where provenance documentation is required (Levels 3 and 4, for studio-owned models)

  • Identification of training data sources (proprietary assets, licensed datasets, public domain, etc.)
  • Confirmation that training data use complies with applicable copyright and licensing terms
  • For models trained on work by identifiable individuals: documentation that consent or licensing requirements were met
  • Description of data curation or filtering applied

Composite Levels

A game may have different AI use levels across content categories. The framework uses the highest applicable level as the headline designation, with tags and sub-levels providing granularity.

Example designations

  • AI code assistants + human-led art pipeline with AI tools + AI-generated item text → GAIT 3: TEXT · Code: 1.5 · 2: ART
  • AI-assisted concept art + LLM-powered NPC dialogue → GAIT 4: LIVE-DIALOGUE · 2: ART
  • Copilot for engineering, all creative content human-made → GAIT 1.5
  • No AI used anywhere → GAIT 0
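The headline rule ("highest applicable level wins, code never supplies a content tag") can be sketched in a few lines. This is an illustrative helper, not part of the GAIT specification: the function name, the dict encoding, and the use of "CODE" as a pseudo-category are assumptions, and it derives only the headline, not the full sub-level breakdown.

```python
# Sketch: derive a GAIT headline designation from per-category levels.
# Levels are numbers (0, 1, 1.5, 2, 3, 4); "CODE" is capped at 1.5 under
# the framework and never contributes a content tag.
def headline(levels: dict[str, float]) -> str:
    top = max(levels.values())
    # Tags are only shown for content categories at the headline level.
    tagged = sorted(tag for tag, lvl in levels.items()
                    if lvl == top and tag != "CODE")
    label = f"GAIT {top:g}"
    if top >= 2 and tagged:
        label += ": " + ", ".join(tagged)
    return label

print(headline({"TEXT": 3, "ART": 2, "CODE": 1.5}))  # GAIT 3: TEXT
print(headline({"CODE": 1.5}))                       # GAIT 1.5
```

A full designation tool would also append the lower-level segments (e.g., "· Code: 1.5 · 2: ART") for granularity.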

DLC, Updates, and Live Service Content

The GAIT level applies to the product as shipped. If post-launch content changes the AI use profile, the designation must be updated from the date of that content update.

Middleware and Engine-Level AI

If an engine or middleware component uses generative AI in a way that affects player-facing content, the resulting content is evaluated under the same criteria as studio-produced content. Studios cannot evade disclosure by attributing AI generation to a third-party tool.

User-Generated Content

If a game provides AI-powered creation tools to players, the game carries at minimum a Level 4 designation for the live AI system it provides, with appropriate tags.

Display Format

Format: GAIT [Level]: [Tags]

Examples

GAIT 0 — AI-Free

GAIT 1 — AI-Assisted Development

GAIT 1.5 — AI-Generated Code

GAIT 2: ART, LOC — AI-Augmented Art and Localization Pipelines

GAIT 3: ART, LOC — AI-Generated Art and Localization

GAIT 4: LIVE-DIALOGUE | 3: ART — Live AI Dialogue, AI-Generated Art
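For storefronts or tooling that consume designations, the single-segment form of the grammar above is easy to parse mechanically. A minimal sketch, assuming tags are comma-separated as in the examples; the function name is illustrative and not defined by GAIT:

```python
import re

# Sketch: parse the single-designation form "GAIT [Level]: [Tags]".
# Composite designations (segments joined with "·" or "|") would need an
# extra split step first; this helper handles one segment only.
def parse_designation(s: str) -> tuple[float, list[str]]:
    m = re.fullmatch(r"GAIT (\d(?:\.\d)?)(?:: (.+))?", s.strip())
    if m is None:
        raise ValueError(f"not a GAIT designation: {s!r}")
    level = float(m.group(1))
    tags = m.group(2).split(", ") if m.group(2) else []
    return level, tags

print(parse_designation("GAIT 3: ART, LOC"))  # (3.0, ['ART', 'LOC'])
print(parse_designation("GAIT 0"))            # (0.0, [])
```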

Recommended placement

  • Digital storefront pages
  • Physical packaging (where applicable)
  • In-game settings or info menu (non-intrusive placement)

Verification

GAIT currently operates on self-attestation. Studios complete the GAIT Self-Assessment Questionnaire, declare their level, and maintain the documentation required for that level. This is the sole verification mechanism at launch.

Self-attestation is a practical starting point: it allows adoption without gatekeepers, fees, or infrastructure that doesn't yet exist. It is also similar to how existing platform disclosure systems (e.g., Steam's AI content survey) operate.

If GAIT gains meaningful adoption, stronger verification mechanisms could be developed — including peer review by participating studios and independent third-party audits. A future "GAIT Verified" designation backed by independent audit would carry more weight than self-declaration alone. These structures will be built if and when there is demand for them.

Governance

GAIT is currently maintained by its creator as an individual proposal. There is no governance board, formal revision process, or institutional structure at this time.

If GAIT gains adoption, governance structures would need to follow — including transparent revision processes, public comment periods, and representation from studios, developers (including union representatives), players, and regulatory/academic advisors. Those are structures you build when there is something to govern.

Regulatory Compliance Crosswalk

Obligations by GAIT level under current and upcoming regulations; levels not listed for a regulation carry no specific obligation under it.

  • EU AI Act Art. 50 (effective Aug 2026): Level 2: non-intrusive disclosure. Level 3: machine-readable marking; visible disclosure. Level 4: full chatbot/deepfake disclosure.
  • South Korea AI Framework Act (effective Jan 2026): Level 2: user notification likely. Level 3: notification mandatory; risk assessment. Level 4: enhanced risk assessment.
  • China AI Labeling (effective Sep 2025): Level 2: explicit + implicit labels. Level 3: full labeling regime. Level 4: full labeling + human-like AI rules.
  • Steam (current): Level 0: optional. Levels 1 and 1.5: not required. Level 2: Pre-Generated disclosure. Level 3: Pre-Generated disclosure with specifics. Level 4: Pre-Generated + Live-Generated disclosure.
  • US FTC (current): Level 2: applies if provenance is misrepresented. Level 3: provenance rules apply. Level 4: all Level 3 obligations + companion AI inquiry.
  • California SB 942 (effective Aug 2026): Levels 2 and 3: exempt (video games). Level 4: UGC AI tools may trigger obligations.
  • Microsoft XR-018 (current): Level 3: recommended. Level 4: mandatory; in-app reporting.
  • Tennessee ELVIS Act (effective Jul 2024): Level 2: applies if performer voice/likeness is replicated. Level 3: applies (VOICE). Level 4: applies (LIVE-VOICE).

Licensing

The GAIT designation, level names, and associated visual marks are available for use by any studio that completes the GAIT Self-Assessment Questionnaire, maintains the required documentation for their declared level, and complies with the Universal Red Lines. There is no fee for participation.

GAIT is an open framework proposed by James Jennings, a game developer and AI expert. For the latest version, updates, and registration, visit gaitframework.org.