Frequently asked questions
Everything studios, developers, and players ask about the framework — from basic mechanics to edge cases to the hard questions.
The Basics
GAIT (Game AI Transparency) is a free, voluntary disclosure framework that gives game studios a standardized way to communicate how generative AI is used in their products. It defines six levels — from 0 (AI-Free) to 4 (Live Generative AI) — based on a single question: what AI-generated content, if any, reaches the player?
GAIT was created by James Jennings — a game developer and AI expert who saw three colliding problems: players organizing witch hunts against studios suspected of using AI, developers paralyzed by fear of job displacement, and studios quietly adopting AI with no accountability or transparency. The framework grew out of extensive research into the regulatory landscape, player sentiment data, developer surveys, platform policies, and historical parallels from other industries.
GAIT is not maintained by the ESA, ESRB, IGDA, or any platform holder. No trade association or publisher funded or directed it. It's one individual's proposal for how the industry could handle this — offered freely in the hope that it's useful.
No. Adoption is entirely voluntary. No platform, store, or government requires it. Studios opt in by filling out the self-assessment questionnaire, maintaining the required documentation, and displaying the GAIT badge. There is no fee.
That said, the framework is designed to be regulatory-forward — every level maps to specific obligations under the EU AI Act, South Korea's AI Framework Act, China's AI labeling regime, Steam's disclosure policy, and other frameworks (see GAIT Definition Document §12: Regulatory Compliance Crosswalk). Studios that adopt GAIT now will have a head start when mandatory requirements arrive.
No. The framework is neutral on AI adoption. It describes how AI is used, not whether it should be. A studio that is proudly AI-free and a studio whose entire game is built on live AI systems can both participate. Games like AI Roguelite (82% positive Steam reviews; ScreenHub, Steam generative AI games analysis) or Death by AI (20 million players in two months; Inworld AI, GDC 2025 report) — where generative AI is the explicit creative premise — would carry Level 4 with full tags, which in their case is a selling point, not a warning.
Participation is free: no membership fee, no licensing cost, no per-title charge. The GAIT designation and badge assets are available to any studio that completes the self-assessment and maintains documentation.
Understanding the Levels
0 AI-Free — No generative AI used anywhere in production.
1 AI-Assisted Development — AI used behind the scenes; nothing AI-generated reaches the player.
1.5 AI-Generated Code — AI code generation tools (IDE-integrated and agentic/CLI-based) ship code in the product; all creative content is human-made.
2 AI-Augmented Creative Pipelines — AI tools integrated into human-led creative workflows. Humans author; AI assists. Content tags specify which categories.
3 AI-Generated Shipped Content — AI is the primary creator for one or more content categories. Humans curate. Content tags specify which categories.
4 Live Generative AI — AI generates content in real time during gameplay.
Code occupies a genuinely unique position. It ships to the player — it runs on their machine — but it's not creative content players experience aesthetically. Nobody can tell whether a save system or a network handler was written by a human or generated by Copilot.
Putting AI-generated code at Level 2 (alongside AI in creative pipelines) would overstate its impact on the player experience. Leaving it at Level 1 (no AI output ships) would be inaccurate — it does ship. The half-step acknowledges code's distinct character without creating a precedent for 2.5 or 3.5 gradations. It's the framework's only half-step.
GAIT uses the highest applicable level as the headline, with tags and sub-levels providing detail. Your game would be:
GAIT 3: LOC · 2: ART
This tells players that localization is AI-generated (Level 3) while art is human-led with AI tools (Level 2). The composite format lets studios be precise without requiring players to parse a matrix.
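As a sketch of how a studio's tooling might render this composite format, here is a minimal Python model. The class and function names are illustrative assumptions, not part of the framework; only the level values, tag vocabulary, and the "highest level first" composite format come from the definitions above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Designation:
    """One component of a composite GAIT designation (names hypothetical)."""
    level: float            # 0, 1, 1.5, 2, 3, or 4
    tags: tuple[str, ...]   # e.g. ("LOC",); empty below Level 2

def format_composite(parts: list[Designation]) -> str:
    """Render the composite string with the highest level as the headline,
    e.g. "GAIT 3: LOC · 2: ART"."""
    ordered = sorted(parts, key=lambda d: d.level, reverse=True)

    def fmt(d: Designation, headline: bool) -> str:
        # Render whole-number levels without a decimal point (3, not 3.0)
        level = str(int(d.level)) if d.level == int(d.level) else str(d.level)
        prefix = "GAIT " if headline else ""
        return f"{prefix}{level}: {', '.join(d.tags)}" if d.tags else f"{prefix}{level}"

    return " · ".join(fmt(d, i == 0) for i, d in enumerate(ordered))

# format_composite([Designation(3, ("LOC",)), Designation(2, ("ART",))])
# → "GAIT 3: LOC · 2: ART"
```

The sort means studios can list their components in any order and still get the headline-first form players expect.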
No. Level 0 applies to the specific game product carrying the designation. It means no generative AI was used in any capacity connected to that game's production. It doesn't require that employees have never used AI in their personal lives, on other projects, or for general learning. The attestation is about the product, not the person.
Code & Level 1.5
Level 1. The boundary is binary and based on tool integration, not the degree of human editing. If you're using AI in a conversational, external capacity — asking ChatGPT a question in a browser, pasting code into a chat window for debugging advice — that's reference material. You wrote the shipped code. The AI tool isn't integrated into your project environment.
Yes. The trigger is the tool's presence in the development toolchain, not the proportion of AI-generated output that ships. If an integrated generative code tool (Copilot, Cursor, CodeWhisperer, Tabnine, Claude Code, Windsurf, Aider, etc.) is part of your engineering team's development environment, you're Level 1.5 — regardless of acceptance rates. This keeps the boundary binary and instantly auditable: is the tool installed in the project environment or not?
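Because the boundary is "is the tool present in the project environment," it can be approximated mechanically. The sketch below checks a repository for configuration files that some AI coding tools create; the marker paths are illustrative examples, and a real audit would follow each tool's documented footprint rather than this short list.

```python
from pathlib import Path

# Example config files some AI coding tools leave in a repository.
# Illustrative only — not an exhaustive or authoritative list.
AI_TOOL_MARKERS = [
    ".github/copilot-instructions.md",  # GitHub Copilot repo instructions
    ".cursorrules",                     # Cursor project rules
    "CLAUDE.md",                        # Claude Code project memory
    ".aider.conf.yml",                  # Aider configuration
]

def toolchain_has_ai_codegen(project_root: str) -> bool:
    """Binary Level 1.5 trigger: any marker present means the tool is part
    of the project environment, regardless of acceptance rate."""
    root = Path(project_root)
    return any((root / marker).exists() for marker in AI_TOOL_MARKERS)
```

The point of the sketch is the shape of the test — presence, not proportion — which is what keeps the boundary instantly auditable.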
Level 3 content tags (ART, VOICE, TEXT, etc.) describe content players experience aesthetically. Code is functional infrastructure — players interact with its effects but don't perceive the code itself. A studio that uses AI extensively for code but has human-led creative pipelines is Level 1.5, not Level 3. The two concerns are separate: creative provenance and engineering tooling.
The Level 2 / Level 3 Boundary
It is the hardest call, and it's intentionally a pipeline-level test, not an asset-by-asset percentage test. The question isn't "how much did a human change this texture?" It's: who drives the creative pipeline?
Level 2: Human creative professionals lead the pipeline. AI tools are integrated into it as instruments. The pipeline could exist without the AI tools — they accelerate, they don't constitute.
Level 3: AI is the pipeline for that content category. Humans prompt, select, curate, and quality-check — but the AI generates the foundational work. Removing the AI would require fundamentally restructuring how that content is produced.
For solo developers and micro-studios (1–3 people), the distinction shifts to a primary authorship test — the individual-scale analog of the pipeline test.
Level 2 requires you to be the primary author of the base asset: you sketched the layout, wrote the draft, composed the melody, modeled the geometry — then used AI for refinement, upscaling, or variation.
Level 3 applies when AI generates the foundational asset from a prompt and your role is selection, curation, and polish.
You sketch a character and use AI to generate color variations → Level 2
You prompt a diffusion model for character designs and pick the best one → Level 3: ART
Level 1, provided the Midjourney output itself doesn't ship. The artist authored the final asset, and using AI for visual reference is no different from using a mood board. If the AI-generated reference images are discarded and the final shipped art is painted by hand, no AI-generated content reaches the player.
Content Tags
Content tags are required at Level 2, Level 3, and Level 4. They specify which content categories involve AI — whether AI-augmented, AI-generated, or live-generated. This gives players granular information rather than a blanket label.
Level 2/3 tags: ART, AUDIO, VOICE, TEXT, LOC, VIDEO, 3D
Level 4 tags: LIVE-DIALOGUE, LIVE-NARRATIVE, LIVE-ART, LIVE-AUDIO, LIVE-VOICE, LIVE-WORLD
A game can carry multiple tags. GAIT 3: ART, LOC tells players that art and translations are AI-generated while other categories (writing, voice, music) are human-led.
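The tag rules above can be encoded as a small validation check. This is a hypothetical sketch — the function name and error strings are mine — but the tag vocabularies and the "tags required at Levels 2–4" rule follow the definitions above, and the assumption that levels below 2 carry no tags follows from tags existing only at Levels 2, 3, and 4.

```python
# Tag vocabularies from the framework's definitions.
CONTENT_TAGS = {"ART", "AUDIO", "VOICE", "TEXT", "LOC", "VIDEO", "3D"}
LIVE_TAGS = {"LIVE-DIALOGUE", "LIVE-NARRATIVE", "LIVE-ART",
             "LIVE-AUDIO", "LIVE-VOICE", "LIVE-WORLD"}

def validate_tags(level: float, tags: set[str]) -> list[str]:
    """Return a list of rule violations; an empty list means the
    level/tag combination is consistent."""
    errors = []
    if level in (2, 3) and not tags:
        errors.append("Levels 2 and 3 require at least one content tag")
    if level == 4 and not tags:
        errors.append("Level 4 requires at least one LIVE-* tag")
    # Level 4 uses the LIVE-* vocabulary; Levels 2/3 use content tags.
    allowed = LIVE_TAGS if level == 4 else CONTENT_TAGS
    for tag in tags:
        if tag not in allowed:
            errors.append(f"Tag {tag} is not valid at level {level}")
    if level < 2 and tags:
        errors.append("Levels below 2 carry no content tags")
    return errors
```

A game mixing live and static AI content would be expressed as a composite (one designation per level), with each component validated separately.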
It depends on the pipeline. If human translators who speak the target language review, correct, and finalize every line, that's a human-led pipeline with an AI tool — Level 2: LOC. If the AI generates translations and a non-speaker does a surface pass for formatting, that's closer to Level 3: LOC because the human can't meaningfully evaluate the AI's creative output. The key question: can the human reviewer actually judge the quality of the AI's work in this domain?
Red Lines
No. Red Line 2 (No Concealment) distinguishes between QA failures and deliberate cover-ups. Accidental inclusion of placeholder AI content — a documented pattern across multiple shipped titles (Game Developer reported that Ubisoft, 11 Bit Studios, Frontier, and Sandfall all claimed AI assets were "placeholders" that shipped accidentally) — triggers a mandatory correction process: update the GAIT level if needed, patch or replace the content, and issue a public acknowledgment. But it's not a red line violation in itself.
Deliberately concealing AI use that would place you at a higher level is a violation and forfeits the designation.
Red Line 1 applies to replicating an identifiable individual's creative output — a specific voice actor's voice, a specific performer's likeness, a named artist's identifiable style. It does not attempt to regulate the use of general-purpose AI models trained on broad datasets; that's a copyright question being litigated in 70+ pending lawsuits (Lewis Silkin, "From Pixels to Policies: Potential Impacts of the UK Government Copyright and AI," Feb 2025) and is governed by model providers and regulators, not a studio disclosure framework.
The line is: if you're specifically targeting a recognizable person's work, you need their consent. If you're using a general-purpose tool, your obligation is accurate level disclosure.
Registration & Badges
Four steps: (1) Review the level definitions and determine your GAIT level. (2) Fill out the self-assessment questionnaire on this site — you'll declare your level, applicable content tags, and confirm red line compliance. (3) Maintain the documentation required for your level on file. (4) Display the GAIT badge on your storefront page, packaging, and/or in-game settings. Badge assets and usage guidelines are provided upon registration.
Recommended placements, in order of priority:
Digital storefronts — Steam page, Epic store listing, PlayStation Store, Xbox Store, Nintendo eShop, App Store, Google Play. This is where players make purchase decisions.
Physical packaging — for boxed releases, on the back cover alongside other ratings.
In-game — accessible via a settings or info menu. Non-intrusive placement, consistent with the EU AI Act's artistic exemption for creative works — disclosure that doesn't "hamper the display or enjoyment of the work" (AI Law and Policy, "Some Implications of the EU AI Act on Video Game Developers," Feb 2025).
Right now, GAIT is self-attestation only. You fill out the self-assessment questionnaire, declare your level, and maintain documentation to back it up. This is similar to how Steam's current AI disclosure system works — an honor system.
Self-attestation is an honest starting point. It means the framework can launch without gatekeepers, fees, or infrastructure that doesn't exist yet. It also means studios are making a public claim they can be held to.
If GAIT proves popular and studios want stronger credibility signals, verification services could be developed — peer review by other participating studios, or independent third-party audits analogous to USDA Organic certification. A "GAIT Verified" designation backed by an independent audit would carry more weight than self-declaration alone.
But that's a bridge to cross if and when there's demand. Right now, the priority is getting the vocabulary and the levels right. This is one person's proposal — building an audit infrastructure before anyone has adopted the framework would be putting the cart before the horse.
The GAIT level applies to the product as shipped. If post-launch content changes the AI use profile — say, a live service game adds an LLM-powered NPC system in a seasonal update — the designation must be updated from the date of that content update. The new level applies going forward.
Scope & Definitions
No. GAIT applies to the shipped game product only. Marketing and promotional materials are out of scope. AI use in advertising is a distinct domain with its own emerging frameworks (e.g., the IAB AI Transparency and Disclosure Framework). Using AI to generate a trailer doesn't affect your game's GAIT level.
No. Upscaling and rendering technologies (DLSS, FSR, Intel XeSS, frame generation) are explicitly excluded from GAIT's definition of "AI." They are broadly accepted by players, are not part of the generative AI debate, and raise no training-data consent issues. A game whose only AI technology is DLSS support does not need GAIT disclosure.
Explicitly out of scope. Procedural generation — deterministic algorithmic content creation with set rules — is fundamentally different from generative AI. Procedural systems use hand-designed rules and produce predictable outputs within defined parameters. They raise no training-data consent issues and are well-understood by players. A procedurally generated dungeon in a roguelike and a diffusion-model-generated texture are not the same thing, and the framework doesn't conflate them.
Out of scope. GAIT's definition of "AI" means generative AI specifically — systems trained on data that produce non-deterministic creative outputs. Traditional game AI (pathfinding, behavior trees, FSMs, utility AI, GOAP) is deterministic or heuristic, doesn't generate creative content, and has been standard practice in games for decades. These systems require no GAIT disclosure.
No. ML-driven optimization systems — matchmaking, dynamic pricing, store recommendations, anti-cheat — are classification/prediction models that don't generate creative content. They're excluded from GAIT's scope. (Note: these systems may trigger separate regulatory obligations under the EU AI Act or other frameworks depending on their impact, but that's outside GAIT's domain.)
GAIT includes an accessibility exemption. AI systems used exclusively for accessibility purposes — text-to-speech, audio description, adaptive difficulty for accessibility needs, visual filters for colorblindness, sign language interpretation — do not trigger Level 3 or 4 designations. This exists because disclosure frameworks must not disincentivize inclusive design for the roughly 429 million players with disabilities worldwide (estimated from WHO global disability prevalence of ~16% applied to a ~2.7 billion global gamer population).
The exemption is narrow by design: it applies only to systems whose sole purpose is accessibility. A system that serves both accessibility and general audiences is evaluated for its general use. You can't route all your AI through an accessibility wrapper to avoid disclosure.
Yes. If an engine or middleware component uses generative AI in a way that affects player-facing content, the resulting content is evaluated under the same criteria as studio-produced content. You can't evade disclosure by attributing AI generation to a third-party tool. What matters is what reaches the player, not who built the tool that made it.
Regulation & Compliance
The full definition document includes a regulatory crosswalk covering:
EU AI Act, Article 50 (effective August 2026) — transparency obligations for providers and deployers of certain AI systems
South Korea's AI Framework Act (effective January 2026; Library of Congress, "South Korea: Comprehensive AI Legal Framework Takes Effect," Feb 2026)
China's AI labeling regime (effective September 2025; Inside Privacy, "China Releases New Labeling Requirements for AI-Generated Content")
Steam's content survey
Existing US FTC authority
California SB 942 (effective August 2026; Orrick, "Navigating the California AI Transparency Act: New Contract Requirements," Jan 2025)
Microsoft XR-018 / Store Policy 11.16 (Neowin, "Microsoft Store policies updated with new rules on generative AI, child safety, and more")
Tennessee ELVIS Act (effective July 2024; Davis+Gilbert LLP, AI legal updates on synthetic performer transparency and state-federal conflict)
Each level maps to specific obligations under each framework.
It varies by jurisdiction. The EU AI Act classifies games as "minimal risk" (AI Law and Policy, Feb 2025) and grants a creative-works carve-out that limits (but doesn't eliminate) transparency obligations. California SB 942 explicitly exempts non-user-generated video games (Orrick, Jan 2025). But South Korea's AI Framework Act explicitly includes games — the National Assembly Research Service confirmed video games fall under its scope (Korea Herald). China's regime applies to all AI-generated content on Chinese internet platforms.
The picture is a patchwork: some jurisdictions exempt most game AI, others don't. GAIT's crosswalk helps studios navigate this jurisdiction by jurisdiction rather than assuming a blanket exemption that doesn't exist.
No. Training data provenance requirements apply only to AI models your studio owns, has trained, has fine-tuned, or has commissioned for exclusive use. Third-party services are governed by their providers and regulated separately. A studio using Midjourney can't meaningfully audit Midjourney's training corpus — that's Midjourney's responsibility and the job of the regulatory frameworks governing model providers. GAIT doesn't impose obligations studios can't fulfill.
The Hard Questions
This is the "shampoo argument" (GameSpot, "AI Disclosures 'Make No Sense' for Game Stores") — the position that AI labeling is like disclosing what shampoo developers use. It's a real possibility, and GAIT takes it seriously.
Two responses. First, even if most games settle at Level 1.5 or 2, the upper levels remain meaningful. Players would still want to know whether a game generates NPC dialogue with an LLM in real time (Level 4) or whether all the art was AI-generated (Level 3: ART). The difference between "AI was used in development" and "AI is creating what you see and hear right now" matters regardless of how common the former becomes.
Second, the craft beer independence seal provides a precedent. The seal remains active with 5,700+ breweries eight years after launch (Brewers Association, Independent Craft Brewer Seal) — even though the acquisition threat it was built to address has largely reversed. Labels can retain cultural and identity value even when market conditions shift. The bet is that the upper levels will always carry signal even if the lower levels become universal.
Probably not for most consumers, if the craft beer parallel holds. The 85% negative sentiment (Quantic Foundry, "How Do Gamers Feel About Generative AI?", survey of 1,799 gamers, Oct–Dec 2025) is real, but stated preferences and purchasing behavior diverge. The Brewers Association independence seal reached 50%+ awareness among craft drinkers within three years, but Nielsen scanner data showed only a 0.2 percentage-point growth advantage for independent brands (Brewers Association, "The Independent Craft Brewer Seal Is Steadily Gaining Awareness," Nielsen scanner data, 2020). People say they care; purchasing data is murkier.
GAIT is designed to work regardless of whether labels change purchasing behavior. Its value extends across four dimensions: (1) consumer information for the engaged minority who do act on disclosure; (2) regulatory compliance for studios navigating a multi-jurisdiction patchwork; (3) developer information — knowing your studio's GAIT level tells you where AI fits in your pipeline and your role; and (4) cultural signaling — a shared vocabulary that makes the conversation more precise than "AI bad" vs. "AI inevitable."
If it only achieves the fourth, that's still worth having.
It might not be, and GAIT is honest about that risk. The loot box disclosure data is damning for voluntary self-regulation — compliance rates ranged from 7% (Epic) to 89% (Microsoft) (Xiao et al., "Shopping Around for Loot Box Presence Warning Labels," Games: Research and Practice, ACM, 2023), and South Korea's move to legal mandates produced higher compliance than any voluntary scheme.
Right now GAIT is self-attestation — studios declare their level and maintain documentation. That's the same honor system Steam uses. If GAIT gains traction, stronger verification could develop: peer review, independent audits, a "GAIT Verified" designation. But building audit infrastructure before anyone has adopted the framework would be premature.
Let's be direct: if no platforms integrate the framework and no enforcement mechanism develops, GAIT will be exactly as weak as every other voluntary label. The framework's design can't overcome a collective action problem on its own. It can only make it easier to solve — and it starts with one person putting a proposal out there.
Honestly? Right now, nothing except the definitions themselves being public. GAIT is one person's proposal — there's no governance board, no formal revision process, no institutional safeguards yet. The craft beer independence seal's history shows what happens when definitions shift to accommodate powerful members: the Brewers Association revised its definition four times, each time accommodating its largest member.
If GAIT gains adoption, governance would need to follow — transparent revision processes, public comment periods, representation from studios, developers, and players. But those are structures you build when there's something to govern. For now, the best defense is that the definitions are public, the rationale is documented, and anyone can call out if they shift in self-serving directions.
This is a genuinely sharp critique, and it has real force. GDC survey data shows negative AI sentiment is highest among visual artists (64%) and narrative designers (63%) (GDC 2026 State of the Game Industry Survey) — the people whose jobs are most directly threatened. Upper management uses AI tools at nearly twice the rate of rank-and-file workers (Game Developer: 47% vs. 29%). Anti-AI sentiment within the industry maps closely to economic vulnerability.
GAIT doesn't pretend the labor dimension doesn't exist. Every level includes a "what this means for developers" dimension, and the Level 2/3 pipeline test explicitly maps to the labor question: Level 2 means creative professionals using new tools in their existing roles; Level 3 means those roles have been structurally reduced or replaced for specific content categories.
But labor concerns and consumer transparency aren't mutually exclusive — they're intertwined. Consumers who want to know whether a game's art was AI-generated are often motivated by the same values that make developers oppose wholesale AI replacement: that creative work by humans has distinctive value. The framework serves both audiences. Saying "it's really about labor" doesn't mean consumer disclosure is pointless; it means disclosure has more dimensions than pure product information.