Enhancements & Next Steps

Your AI image generator is deployed and working. This final lesson covers the features that separate a side project from a real product: content moderation, user accounts, monetization strategies, and answers to the most common questions.

Content Moderation

Any public image generator will receive requests for inappropriate content. You need moderation at two levels: prompt filtering (before generation) and image scanning (after generation).

Prompt Filtering

Screen prompts before sending them to the API. This saves money (no wasted API calls) and prevents generating content you do not want associated with your platform.

# services/moderation_service.py
import re

# Blocked terms and patterns
BLOCKED_PATTERNS = [
    r"\b(nsfw|nude|naked|explicit|pornograph)\w*\b",
    r"\b(gore|gory|violent|murder|torture)\w*\b",
    r"\b(hate|racist|sexist)\w*\b",
    r"\b(child|minor|underage)\w*\s*(nude|naked|explicit)\w*\b",
]

BLOCKED_REGEX = re.compile(
    "|".join(BLOCKED_PATTERNS), re.IGNORECASE
)


class ModerationService:
    def check_prompt(self, prompt: str) -> dict:
        """Screen a prompt for prohibited content.

        Returns:
            {"allowed": bool, "reason": str or None}
        """
        # Pattern-based check
        match = BLOCKED_REGEX.search(prompt)
        if match:
            return {
                "allowed": False,
                "reason": "Prompt contains prohibited content.",
            }

        return {"allowed": True, "reason": None}

    async def check_prompt_with_llm(self, prompt: str) -> dict:
        """Use OpenAI's moderation endpoint for more nuanced screening."""
        import os

        from openai import AsyncOpenAI

        # Use the async client so this coroutine actually awaits the call
        client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        response = await client.moderations.create(input=prompt)
        result = response.results[0]

        if result.flagged:
            categories = [
                cat for cat, flagged
                in result.categories.model_dump().items()
                if flagged
            ]
            return {
                "allowed": False,
                "reason": f"Content flagged: {', '.join(categories)}",
            }

        return {"allowed": True, "reason": None}
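Keyword filters are blunt instruments, so before shipping the patterns above it is worth probing them for false positives. This snippet reuses the same patterns and shows one harmless prompt they reject:

```python
import re

# Same patterns as in services/moderation_service.py
BLOCKED_PATTERNS = [
    r"\b(nsfw|nude|naked|explicit|pornograph)\w*\b",
    r"\b(gore|gory|violent|murder|torture)\w*\b",
    r"\b(hate|racist|sexist)\w*\b",
    r"\b(child|minor|underage)\w*\s*(nude|naked|explicit)\w*\b",
]
BLOCKED_REGEX = re.compile("|".join(BLOCKED_PATTERNS), re.IGNORECASE)

# Obvious cases behave as expected...
assert BLOCKED_REGEX.search("nsfw portrait") is not None
assert BLOCKED_REGEX.search("a sunny beach at dawn") is None

# ...but word lists overblock: "violent" here is entirely harmless
assert BLOCKED_REGEX.search("a violent storm over the ocean") is not None
```

When a prompt trips the regex but looks ambiguous, that is exactly the case to hand off to the LLM-based check below rather than rejecting outright.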

Add the moderation check to the generate endpoint:

# In routers/generate.py
from services.moderation_service import ModerationService

moderation = ModerationService()

@router.post("/generate")
async def generate_image(request: Request, body: GenerateRequest):
    # Check moderation FIRST
    mod_result = moderation.check_prompt(body.prompt)
    if not mod_result["allowed"]:
        raise HTTPException(status_code=400, detail=mod_result["reason"])

    # ... rest of generation logic ...
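A common refinement is to layer the two checks: run the free regex filter first and spend a paid API call on the moderation endpoint only when it passes. A sketch, with stubbed checkers standing in for the two `ModerationService` methods:

```python
import asyncio


async def moderate(prompt: str, fast_check, llm_check) -> dict:
    """Run the cheap check first; fall through to the paid check."""
    result = fast_check(prompt)
    if not result["allowed"]:
        return result  # rejected for free, no API call spent
    return await llm_check(prompt)


# Stubs standing in for check_prompt / check_prompt_with_llm
def fake_fast(prompt):
    return {"allowed": "nsfw" not in prompt, "reason": None}


async def fake_llm(prompt):
    return {"allowed": True, "reason": None}


result = asyncio.run(moderate("a quiet forest", fake_fast, fake_llm))
assert result["allowed"]
```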

Post-Generation Image Scanning

Even with prompt filtering, models can sometimes generate unexpected content. Add a post-generation safety check:

async def scan_generated_image(self, image_path: str) -> dict:
    """Scan a generated image for NSFW content using an API."""
    # Option 1: Use Stability AI's built-in safety filter
    # (enabled by default in most API calls)

    # Option 2: Use a dedicated NSFW detection model via Replicate
    from pathlib import Path  # needed for the unlink below

    import replicate

    with open(image_path, "rb") as f:
        output = replicate.run(
            "andreasjansson/nsfw-image-detection-model:...",
            input={"image": f},
        )

    is_safe = output.get("safe", True)
    confidence = output.get("confidence", 1.0)

    if not is_safe and confidence > 0.8:
        # Delete the generated image
        Path(image_path).unlink(missing_ok=True)
        return {"safe": False, "action": "deleted"}

    return {"safe": True, "action": "none"}

Moderation is not optional for public apps. Without it, your app will be used to generate harmful content within hours of launch. Both Stability AI and Replicate include basic safety filters by default, but you should add your own layer on top. The reputational and legal risks of an unmoderated image generator are significant.

User Accounts

Adding user accounts lets you track usage per person, offer personalized galleries, and implement tiered access. Here is a minimal implementation using JWT tokens:

# Add to requirements.txt:
# python-jose[cryptography]==3.3.0
# passlib[bcrypt]==1.7.4

# services/auth_service.py
from datetime import datetime, timedelta
from jose import jwt, JWTError
from passlib.context import CryptContext
import os

SECRET_KEY = os.getenv("JWT_SECRET", "change-this-in-production")
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE = timedelta(hours=24)

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

# Simple in-memory user store (use a database in production)
users_db: dict[str, dict] = {}


def create_user(email: str, password: str) -> dict:
    """Register a new user."""
    if email in users_db:
        raise ValueError("Email already registered")

    user = {
        "email": email,
        "hashed_password": pwd_context.hash(password),
        "created_at": datetime.now().isoformat(),
        "tier": "free",  # free, pro, enterprise
        "images_generated": 0,
        "daily_limit": 20,  # Free tier: 20 images/day
    }
    users_db[email] = user
    return {"email": email, "tier": user["tier"]}


def authenticate_user(email: str, password: str) -> dict | None:
    """Verify email and password. Returns user dict or None."""
    user = users_db.get(email)
    if not user:
        return None
    if not pwd_context.verify(password, user["hashed_password"]):
        return None
    return user


def create_token(email: str) -> str:
    """Create a JWT access token."""
    expire = datetime.utcnow() + ACCESS_TOKEN_EXPIRE
    payload = {"sub": email, "exp": expire}
    return jwt.encode(payload, SECRET_KEY, algorithm=ALGORITHM)


def verify_token(token: str) -> str | None:
    """Verify a JWT token and return the email, or None."""
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
        return payload.get("sub")
    except JWTError:
        return None

User Tiers

Feature               Free         Pro ($9/mo)   Enterprise ($49/mo)
Daily image limit     20           200           Unlimited
Max resolution        1024x1024    2048x2048     2048x2048
Style presets         3            All           All + custom
LLM prompt enhance    No           Yes           Yes
Batch generation      No           Up to 4       Up to 8
Image-to-image        No           Yes           Yes
Inpainting            No           Yes           Yes
API access            No           No            Yes
Priority queue        No           Yes           Yes
Image storage         7 days       30 days       90 days
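The table maps naturally onto a limits dictionary that the generate endpoint can consult before doing any work. A sketch — the field names here are illustrative, not from the earlier auth code:

```python
TIERS = {
    "free":       {"daily_limit": 20,   "max_side": 1024, "max_batch": 1},
    "pro":        {"daily_limit": 200,  "max_side": 2048, "max_batch": 4},
    "enterprise": {"daily_limit": None, "max_side": 2048, "max_batch": 8},
}


def check_limits(tier: str, used_today: int, batch: int, side: int):
    """Return (allowed, reason) for a generation request."""
    limits = TIERS[tier]
    if batch > limits["max_batch"]:
        return False, f"Batch size capped at {limits['max_batch']} on {tier}."
    if side > limits["max_side"]:
        return False, f"Max resolution is {limits['max_side']}px on {tier}."
    cap = limits["daily_limit"]
    if cap is not None and used_today + batch > cap:
        return False, f"Daily limit of {cap} images reached."
    return True, None


assert check_limits("free", 19, 1, 1024) == (True, None)
assert check_limits("free", 20, 1, 1024)[0] is False
assert check_limits("enterprise", 10_000, 8, 2048) == (True, None)
```

Keeping the limits in one dictionary means a pricing change is a one-line edit rather than a hunt through the codebase.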

Monetization Strategies

There are several proven ways to monetize an AI image generator:

1. Freemium Subscription (Recommended)

Offer a free tier with limits and paid tiers for power users. This is the model used by Midjourney, Leonardo.ai, and most successful AI image platforms.

  • Pros: Predictable revenue, scales well, easy to implement
  • Cons: Requires enough free-tier value to attract users
  • Implementation: Stripe for payments, user tiers in the database

2. Credit-Based System

Users buy credits and spend them per generation. Different features cost different amounts.

  • Pros: Pay-per-use is fair, users understand the model
  • Cons: More complex billing, less predictable revenue
  • Pricing example: 100 credits for $5, 1 image = 1 credit, upscale = 2 credits
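A credit ledger can stay very simple: a cost table plus a deduction check. This is a sketch — in production the balance lives in the database and the deduction runs inside a transaction:

```python
CREDIT_COSTS = {
    "generate": 1,   # 1 image = 1 credit
    "upscale": 2,    # upscale = 2 credits
    "inpaint": 2,
}


def spend_credits(balance: int, action: str, count: int = 1) -> int:
    """Deduct credits for an action; raise if the balance is too low."""
    cost = CREDIT_COSTS[action] * count
    if cost > balance:
        raise ValueError(f"Need {cost} credits, have {balance}.")
    return balance - cost


balance = 100  # e.g. the $5 pack
balance = spend_credits(balance, "generate", 4)  # 4 images
balance = spend_credits(balance, "upscale")
assert balance == 94
```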

3. Advertising

Display ads alongside the gallery. Works best with high traffic and free access.

  • Pros: No barrier to entry for users, simple to implement
  • Cons: Low revenue per user, degrades user experience
  • Best for: High-traffic apps with casual users

4. API Access

Sell API access for developers who want to integrate image generation into their own apps.

  • Pros: High-value customers, B2B revenue
  • Cons: Requires API documentation, support, and SLAs
  • Pricing example: $0.01 per image via API, volume discounts
💡
Start with freemium. It is the fastest path to revenue and the easiest to iterate on. You can always add credits or API access later. The key is getting users first — monetization follows adoption.

Additional Enhancements

Here are features you can add to make your image generator stand out:

  • Social sharing: One-click share to Twitter, Instagram, or a public gallery page with SEO-friendly URLs.
  • Image variations: "Generate 4 variations of this image" button that creates similar images with different seeds.
  • Community gallery: A public feed of (opted-in) user-generated images with likes and prompt sharing.
  • Prompt marketplace: Let users sell or share their best prompt templates.
  • Image editing tools: Crop, rotate, add text overlay, adjust brightness/contrast after generation.
  • Generation queue: For high traffic, add a Redis-based job queue so generations run asynchronously.
  • Webhook notifications: Notify users when long-running generations complete.
  • Multi-model support: Let users choose between Stable Diffusion XL, DALL-E 3, Flux, and other models.
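The variations feature, for example, is mostly a matter of re-running the same request with fresh seeds. A sketch — the parameter names are illustrative:

```python
import random


def variation_requests(base: dict, n: int = 4) -> list[dict]:
    """Clone a generation request n times, each with a fresh random seed."""
    return [{**base, "seed": random.randrange(2**32)} for _ in range(n)]


batch = variation_requests({"prompt": "a koi pond", "steps": 30})
assert len(batch) == 4
assert all(req["prompt"] == "a koi pond" for req in batch)
```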

Frequently Asked Questions

How much does it cost to run this in production?

The main cost is the image generation API. At approximately $0.004 per image with Stability AI, 1,000 images per day costs about $4/day or $120/month. Server costs are minimal — a $5-10/month VPS can handle the application itself since the heavy computation happens on the API provider's infrastructure. Total: expect $50-200/month for a moderately popular app.

Can I use the generated images commercially?

Yes, with the cloud APIs used in this project. Stability AI and Replicate both grant you full commercial rights to images generated with their APIs (check their current terms of service to confirm). However, you cannot claim copyright on AI-generated images in most jurisdictions. You also cannot generate images that infringe on existing copyrights or trademarks.

What if the API provider goes down or changes pricing?

This is why the code abstracts the provider behind an interface. To switch from Stability AI to Replicate (or any other provider), you only need to change the image_service.py implementation. To add a new provider, implement the same methods and add it as an option in the router. Always have at least two providers configured as fallbacks.
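That fallback advice can be captured in a small helper that walks an ordered provider list. This is an illustrative sketch with hypothetical stand-in callables, not code from image_service.py:

```python
class ProviderError(Exception):
    """Raised when a provider fails or all providers are exhausted."""


def generate_with_fallback(prompt: str, providers: list[tuple[str, callable]]):
    """Try each (name, generate_fn) in order; return the first success."""
    errors = []
    for name, generate in providers:
        try:
            return generate(prompt)
        except ProviderError as exc:
            errors.append(f"{name}: {exc}")  # record failure, try the next one
    raise ProviderError("All providers failed: " + "; ".join(errors))


def flaky(prompt):    # stands in for a provider that is down
    raise ProviderError("503 Service Unavailable")


def healthy(prompt):  # stands in for a working provider
    return f"image-for:{prompt}"


result = generate_with_fallback(
    "a red fox", [("stability", flaky), ("replicate", healthy)]
)
assert result == "image-for:a red fox"
```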

How do I handle copyright and legal issues?

Add clear terms of service stating that users are responsible for the content they generate. Implement content moderation to prevent generating copyrighted characters, real people's likenesses, or trademarked content. Keep logs of who generated what (with user accounts) in case you need to respond to legal requests. Consider adding a DMCA takedown process for your community gallery.

Can I run this without any API costs using local models?

Yes. Replace the API calls in image_service.py with local inference using the diffusers library from Hugging Face. You will need a GPU with at least 8GB VRAM (12GB+ recommended for SDXL). A consumer RTX 3060 or better works well. Local inference is free but slower and limited to your hardware's capacity. The code structure in this project makes swapping to local models straightforward.

How do I add more image generation models?

The cleanest approach is to create a new method in ImageService for each model and add it as a provider option. For example, to add DALL-E 3: create a generate_dalle method that calls the OpenAI Images API, add "dalle" to the provider validation pattern, and add a case for it in the router. Replicate hosts hundreds of models you can integrate the same way — just change the model ID.

How should I handle image storage long-term?

For a small app, local disk storage works fine. As you scale, move to S3 or a similar object storage service with a CDN in front. Implement an image lifecycle policy: keep images for 7-90 days depending on user tier, then delete them automatically. This prevents storage costs from growing indefinitely. Always give users the option to download their images before deletion.

What is the best way to improve image quality?

Three things have the biggest impact: (1) Better prompts — the LLM enhancement feature from Lesson 3 handles this automatically. (2) Negative prompts — always include quality-focused negative prompts. (3) Higher step counts — 30-40 steps produces noticeably better results than 20, with diminishing returns above 40. Also consider using the latest models (SDXL, SD3) which produce significantly better results than older versions.
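Those defaults are easy to bake into the request handling so every generation benefits. The values below are illustrative, matching the ranges discussed above:

```python
# Illustrative quality defaults; tune per model
QUALITY_DEFAULTS = {
    "steps": 35,  # clearly better than 20, diminishing returns past 40
    "negative_prompt": (
        "blurry, low quality, distorted, deformed, "
        "bad anatomy, watermark, oversaturated"
    ),
}


def with_quality_defaults(params: dict) -> dict:
    """Overlay user params on the quality defaults (user values win)."""
    return {**QUALITY_DEFAULTS, **params}


merged = with_quality_defaults({"prompt": "a lighthouse at dusk", "steps": 40})
assert merged["steps"] == 40                  # user override kept
assert "blurry" in merged["negative_prompt"]  # default filled in
```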

What You Built

Congratulations. You have built a complete, production-ready AI image generation application from scratch. Here is everything you accomplished:

  1. Project Setup — Designed the architecture, chose your tech stack, and configured API keys
  2. Image Generation API — Built a FastAPI backend with Stability AI and Replicate integration
  3. Prompt Enhancement — Added LLM-powered prompt improvement, style presets, and smart negative prompts
  4. Web Interface — Created a responsive UI with gallery, history, downloads, and a polished dark theme
  5. Advanced Features — Implemented image-to-image, inpainting, upscaling, and batch generation
  6. Deployment — Containerized with Docker, added rate limiting, cost controls, and CDN serving
  7. Enhancements — Added content moderation, user accounts, and explored monetization
💡
Keep building. The best way to learn is to keep adding features. Try integrating a new model, building a mobile app frontend, or adding real-time generation with WebSockets. Every feature you add deepens your understanding of AI application development.