WaifuStack

How to Use Claude and GPT for NSFW Bot Development (Without Getting Blocked)

Here’s the irony of NSFW AI development: the best coding assistants refuse to help you build the thing you’re building.

You’re writing a roleplay bot that handles adult content. You want Claude or GPT to help with code review, refactoring, architecture decisions. But the moment your codebase mentions anything explicit, the assistant refuses or gives watered-down responses.

We solved this. Suzune’s GM Console lets us use Claude for development tasks on our NSFW project — Claude never sees the explicit content, but can work on everything else. Here’s how.


The Core Problem

When you ask Claude to help with code in an NSFW project:

Developer: "Can you review the system prompt assembly in memory.py?"
Claude: "I notice this code handles adult content. I'd prefer not to..."

Or worse — Claude gives a response but avoids engaging with the NSFW-related logic, giving you useless feedback on exactly the parts you need help with most.

The problem isn’t that Claude can’t help — it’s that Claude sees content it’s not comfortable with in the context.


The Solution: Separation of Concerns

The key insight: separate NSFW content from development tasks at the file level.

File-Level Separation

In Suzune, every character’s definition is split into three files (see How to Design AI Personalities with YAML for the full character system):

characters/sakura/
├── persona.md      ← WHO the character is (SFW)
├── rules.md        ← HOW the character speaks (SFW)
└── nsfw.md         ← NSFW-specific behavior (explicit)

The split is done by scanning section headers for keywords:

| Keywords found | Goes to |
| --- | --- |
| Speech patterns, tone, style rules | `rules.md` |
| NSFW, sexual, explicit content descriptors | `nsfw.md` |
| Everything else (personality, background, relationships) | `persona.md` |
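As a minimal sketch, the header-scanning split might look like this. The keyword sets here are illustrative placeholders, not the project's actual lists:

```python
import re

# Placeholder keyword sets — the real project keeps its own lists.
RULES_KEYWORDS = {"speech", "tone", "style"}
NSFW_KEYWORDS = {"nsfw", "sexual", "explicit"}

def split_character_source(text: str) -> dict:
    """Route each markdown section to persona/rules/nsfw by scanning
    its header for keywords; everything else defaults to persona."""
    buckets = {"persona.md": [], "rules.md": [], "nsfw.md": []}
    current = "persona.md"
    for line in text.splitlines():
        header = re.match(r"#+\s*(.+)", line)
        if header:
            title = header.group(1).lower()
            if any(k in title for k in NSFW_KEYWORDS):
                current = "nsfw.md"
            elif any(k in title for k in RULES_KEYWORDS):
                current = "rules.md"
            else:
                current = "persona.md"
        buckets[current].append(line)
    return {name: "\n".join(lines) for name, lines in buckets.items()}
```

Header matching is deliberately coarse: a section only needs one keyword in its title to be routed, so ambiguous sections default to `persona.md` and can be moved by hand.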

This means Claude can work with `persona.md` and `rules.md` freely, while `nsfw.md` stays out of its context entirely.

Selective Context Loading

When the GM Console calls Claude for development tasks, it only loads SFW files:

def _load_persona_text(self, character) -> str:
    """Load persona text — deliberately excludes nsfw.md"""
    persona_path = character.character_dir / "persona.md"
    if persona_path.exists():
        return persona_path.read_text()
    return ""  # fallback

Claude sees the character’s personality, speech patterns, and relationships — everything needed for code review, architecture decisions, and character design feedback — without ever encountering explicit content.
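Extending `_load_persona_text` above, a loader that assembles the full SFW context can use a simple allowlist, so `nsfw.md` can never be read by accident (a sketch, not the project's exact implementation):

```python
from pathlib import Path

SFW_FILES = ("persona.md", "rules.md")  # nsfw.md is deliberately absent

def load_sfw_context(character_dir: Path) -> str:
    """Assemble the context sent to Claude from allowlisted SFW files only.
    A file not on the allowlist is never even opened."""
    sections = []
    for name in SFW_FILES:
        path = character_dir / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)
```

An allowlist is safer than a blocklist here: a new NSFW file with an unexpected name is excluded by default rather than included by accident.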


The GM Console Architecture

We built a CLI tool called GM Console (gmc) that provides Claude with 45+ development tools, all designed for NSFW-safe operation:

# Single command
gmc "audit all characters for consistency issues"

# Interactive session
gmc
GM> list characters
GM> run quality check on sakura
GM> draft a new character card for a detective archetype

Dual-Model Design

GM Console
├── Primary (DeepSeek V3.2)
│   └── Reasoning, analysis, diagnostics
│       (NSFW-safe by context control)

└── Creative (GLM-5)
    └── Character creation, scenario writing
        (NSFW-capable when needed)

Two models serve different roles: the primary model (DeepSeek V3.2) handles reasoning, analysis, and diagnostics, kept NSFW-safe through context control, while the creative model (GLM-5) handles character creation and scenario writing and is NSFW-capable when needed.

What the GM Console Can Do

| Category | Tools | NSFW exposure |
| --- | --- | --- |
| Diagnostics | List characters, health check, audit, cost report | None |
| Maintenance | Repair memos, compress history, clean database | None |
| Character management | Edit persona/rules, migrate file structure | `persona.md` + `rules.md` only |
| Character creation | Draft new character cards, iterative refinement | Creative LLM (GLM-5) |
| Image management | Wardrobe, expressions, image generation | Prompt text only |
| Scenario generation | Multi-character RP scenarios | Character personalities only |
| Quality analysis | Character consistency checks, side-by-side comparisons | SFW sections only |
| Infrastructure | Restart service, run backups | None |

Techniques You Can Steal

Even without building a full GM Console, these patterns work for any NSFW project:

1. The File Split Pattern

If your project has mixed SFW/NSFW content, split it:

config/
├── character.yaml      ← SFW config (name, capabilities)
├── personality.md      ← SFW personality description
└── adult_content.md    ← NSFW rules (never shown to Claude)

When asking Claude for help, reference only the SFW files:

"Here's the character config (character.yaml) and personality
(personality.md). Can you review the system prompt assembly
logic in memory.py?"

2. The Abstraction Pattern

When you need Claude’s help with NSFW-adjacent logic, abstract the content:

# Instead of showing Claude the actual NSFW detection code:
NSFW_KEYWORDS = ["explicit", "term", "list", ...]

# Show this:
CONTENT_KEYWORDS = ["<placeholder>", ...]  # actual keywords in separate config

# Claude can review the detection LOGIC without seeing the CONTENT
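Concretely, the keywords can live in a config file that is never pasted into a Claude session, while the matcher itself is safe to share. The file name `config/nsfw_keywords.json` below is an assumption for illustration:

```python
import json
from pathlib import Path

def load_keywords(config_path: str = "config/nsfw_keywords.json") -> list:
    """Load detection keywords from a config file that never enters
    Claude's context — only this loader and the matcher do."""
    return json.loads(Path(config_path).read_text())

def contains_flagged_content(text: str, keywords: list) -> bool:
    """The LOGIC Claude can safely review: case-insensitive substring match."""
    lowered = text.lower()
    return any(kw.lower() in lowered for kw in keywords)
```

Claude can now critique the matching strategy (substring vs. word-boundary, case handling, performance) without ever seeing a single real keyword.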

3. The Truncation Pattern

When loading character data for Claude, truncate to essential context:

persona = load_persona_text(character)
if len(persona) > 1500:
    persona = persona[:1500] + "..."

This prevents accidentally including NSFW content that might appear late in a long file, and keeps Claude’s context focused on what matters.
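A slightly safer variant (a sketch, not the project's exact code) cuts at a paragraph boundary instead of mid-sentence:

```python
def truncate_context(text: str, limit: int = 1500) -> str:
    """Truncate long persona text at the last paragraph break before
    the limit, falling back to a hard cut if there is none."""
    if len(text) <= limit:
        return text
    cut = text.rfind("\n\n", 0, limit)
    if cut == -1:
        cut = limit
    return text[:cut] + "\n..."
```

Cutting on a paragraph break also makes it less likely that a truncated sentence leaks the first words of an NSFW passage.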

4. The Confirmation Pattern

For tools that modify NSFW content, use a dry-run + confirm flow:

import time
import uuid

def _pend_action(self, description, action_fn):
    action_id = uuid.uuid4().hex[:8]
    self._pending[action_id] = {
        "desc": description,
        "fn": action_fn,
        "expires": time.time() + 300  # 5 min timeout
    }
    return f"[Preview] {description}\nRun confirm_action('{action_id}') to execute"

Claude proposes changes, you review them, then confirm. No blind writes to NSFW files.
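The confirm side of the flow can be sketched as a small class wrapping the same pending-action dict (method names here are illustrative, not the GM Console's actual API):

```python
import time
import uuid

class PendingActions:
    """Dry-run + confirm: propose() returns a preview string, and the
    change only runs when confirm() is called with the matching id."""

    def __init__(self, ttl: float = 300.0):
        self._pending = {}
        self._ttl = ttl

    def propose(self, description: str, action_fn) -> str:
        action_id = uuid.uuid4().hex[:8]
        self._pending[action_id] = {
            "desc": description,
            "fn": action_fn,
            "expires": time.time() + self._ttl,
        }
        return f"[Preview] {description}\nRun confirm('{action_id}') to execute"

    def confirm(self, action_id: str):
        entry = self._pending.pop(action_id, None)
        if entry is None:
            return "Unknown action id"
        if time.time() > entry["expires"]:
            return f"Expired: {entry['desc']}"
        return entry["fn"]()  # only now does the write actually happen
```

Popping the entry on confirm means an id can only be executed once, and the TTL guarantees stale proposals can't be fired hours later.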

5. The Log Suppression Pattern

Prevent NSFW content from leaking into development logs:

# Suppress LLM call logging that might contain NSFW prompts
logging.getLogger("core.llm").setLevel(logging.WARNING)

This keeps your terminal and log files clean when working alongside non-NSFW-aware tools.
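Where raising the log level is too blunt, a `logging.Filter` can scrub flagged terms from records before they reach any handler. The keyword list below is a placeholder; the redaction idea is an extension of the pattern above, not the project's exact code:

```python
import logging

class RedactFilter(logging.Filter):
    """Scrub flagged terms from log records before any handler sees them."""

    def __init__(self, keywords):
        super().__init__()
        self.keywords = keywords

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()  # merges args into the final message
        for kw in self.keywords:
            msg = msg.replace(kw, "[REDACTED]")
        record.msg, record.args = msg, None
        return True  # keep the record, just scrubbed
```

Attach it once, e.g. `logging.getLogger("core.llm").addFilter(RedactFilter([...]))`, and every log line from that logger is scrubbed regardless of which handler writes it.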


Real-World Workflow

Here’s what a typical development session looks like:

# Morning: check system health
$ gmc "run monitor, report any issues"
→ "All 12 characters healthy. Mao's memo is 89% of max size.
   Cost this week: $8.40. No errors in last 24h."

# Afternoon: create a new character
$ gmc
GM> draft a new character card: shy librarian archetype,
    mid-20s, hides passion for romance novels
→ [GLM-5 generates a character card with personality, speech
   patterns, and backstory, all SFW for the initial draft]

GM> revise: make her more quietly witty, less passive
→ [Refined card]

GM> save card as "shiori"
→ [Saved to characters/shiori/]

# Later: edit speech rules
GM> edit file characters/sakura/rules.md
→ [Claude reviews and suggests improvements to speech patterns]

Claude participates in the entire development workflow. The NSFW aspects of the project are handled by the appropriate models at runtime — Claude handles the craftsmanship.


The Bigger Picture

This pattern — separating concerns so that restrictive tools can still be useful — applies well beyond NSFW development.

The principle is the same: you don’t need to show the AI everything to get useful help. Show it what it needs, hide what it doesn’t.


Getting Started

If you’re building an NSFW AI project and want Claude’s help:

  1. Split your content files into SFW and NSFW components
  2. Never paste NSFW content directly into Claude — reference SFW files instead
  3. Abstract explicit logic — Claude can review detection code without seeing the keywords
  4. Use a development model for NSFW-aware tasks (DeepSeek or GLM-5 via OpenRouter)
  5. Build a simple CLI tool that pre-filters context before sending to Claude

You don’t need a full GM Console to start. Even manually splitting your character files and being deliberate about what you paste into Claude makes a huge difference.
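Step 5 can start as small as a script that reads only allowlisted files and prints the assembled context for pasting into Claude. Everything here (the blocklist names, the output format) is an assumption matching the file split described above:

```python
#!/usr/bin/env python3
"""Tiny context pre-filter: print SFW files only, for pasting into Claude."""
import sys
from pathlib import Path

BLOCKLIST = {"nsfw.md", "adult_content.md"}  # never emitted

def build_context(paths: list) -> str:
    chunks = []
    for raw in paths:
        path = Path(raw)
        if path.name in BLOCKLIST:
            print(f"skipped (blocklisted): {path}", file=sys.stderr)
            continue
        chunks.append(f"=== {path} ===\n{path.read_text()}")
    return "\n\n".join(chunks)

if __name__ == "__main__":
    print(build_context(sys.argv[1:]))
```

Run it as `python prefilter.py characters/sakura/*.md` and pipe or paste the output; the blocklisted files are reported on stderr so you can see what was held back.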


For the multi-model architecture behind this, see Navigating AI Content Filters. For model comparisons, see DeepSeek vs Claude vs Gemini.

