invoke-ai/invokeai
Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
Professional AI creative engine for Stable Diffusion image generation
Under the hood, the system uses 3 feedback loops, 4 data pools, and 4 control points to manage its runtime behavior.
Structural Verdict
A 10-component fullstack with 5 connections. 2158 files analyzed. Loosely coupled — components are relatively independent.
How Data Flows Through the System
User interactions in the web UI trigger Redux actions that build node graphs, which are sent to the FastAPI backend where the Invoker executes them through the invocation system, ultimately generating images stored via the ImageService.
- User Input — User configures generation parameters in React UI components
- Graph Building — Frontend builds execution graphs from UI state using graph builders
- API Request — Redux RTK Query dispatches graph to FastAPI /queue/enqueue endpoint
- Session Processing — SessionProcessor queues and executes invocation workflows
- Model Loading — ModelManagerService loads AI models based on invocation requirements
- AI Inference — Invocations execute diffusion pipeline steps using loaded models
- Image Storage — Generated images are saved via ImageService and metadata stored
- WebSocket Updates — Progress and results sent to frontend via WebSocket events
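The graph-building and enqueue stages above can be illustrated with a small sketch. The node types, field names, and payload shape below are invented for illustration only; the real graph builders live under the frontend's features/nodes/util/graph directory.

```python
# Hypothetical sketch of an execution graph payload, shaped like the
# node graphs the frontend builders assemble before enqueueing.
# Node types and field names are invented for illustration.

def build_generation_graph(prompt: str, width: int, height: int) -> dict:
    """Assemble a minimal node graph: prompt -> denoise -> decode."""
    nodes = {
        "prompt": {"type": "compel_prompt", "text": prompt},
        "denoise": {"type": "denoise_latents", "width": width, "height": height},
        "decode": {"type": "vae_decode"},
    }
    # Edges wire one node's output field to another node's input field.
    edges = [
        {"source": {"node_id": "prompt", "field": "conditioning"},
         "destination": {"node_id": "denoise", "field": "positive_conditioning"}},
        {"source": {"node_id": "denoise", "field": "latents"},
         "destination": {"node_id": "decode", "field": "latents"}},
    ]
    return {"nodes": nodes, "edges": edges}

graph = build_generation_graph("a lighthouse at dusk", 512, 512)
print(len(graph["nodes"]))  # 3
```

In the real system a payload like this is serialized to JSON and dispatched by RTK Query to the queue endpoint, where the SessionProcessor picks it up.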
System Behavior
How the system actually operates at runtime — where data accumulates, what loops, what waits, and what controls what.
Data Pools
- SQLite database storing image metadata, board associations, and generation parameters
- In-memory cache of loaded AI models to avoid repeated disk loading
- SQLite-backed queue of pending generation sessions
- Client-side application state including UI settings and generation parameters
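The image-records pool is a plain SQLite table. A minimal sketch of what such a metadata store might look like follows; the column names are assumptions for illustration, not InvokeAI's actual schema.

```python
import sqlite3

# Illustrative sketch of an image-metadata pool backed by SQLite.
# Column names are assumptions, not InvokeAI's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE images (
        image_name TEXT PRIMARY KEY,
        board_id   TEXT,
        width      INTEGER,
        height     INTEGER,
        metadata   TEXT  -- JSON blob of generation parameters
    )
""")
conn.execute(
    "INSERT INTO images VALUES (?, ?, ?, ?, ?)",
    ("img_001.png", "board_a", 512, 512, '{"cfg_scale": 7.5}'),
)
row = conn.execute(
    "SELECT board_id, width FROM images WHERE image_name = ?",
    ("img_001.png",),
).fetchone()
print(row)  # ('board_a', 512)
```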
Feedback Loops
- Generation Progress (polling, balancing) — Trigger: Generation starts. Action: WebSocket sends progress updates to frontend. Exit: Generation completes.
- Model Loading Retry (retry, balancing) — Trigger: Model loading fails. Action: ModelManager retries loading with fallback strategies. Exit: Model loads successfully or max retries exceeded.
- Session Queue Processing (polling, balancing) — Trigger: Sessions exist in queue. Action: SessionProcessor picks up and executes next session. Exit: Queue is empty.
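The model-loading retry loop can be sketched generically. The loader callables and the fallback order below are hypothetical stand-ins for the ModelManager's real strategies.

```python
# Generic retry-with-fallback loop, sketching the "Model Loading Retry"
# behavior described above. Loaders and fallback order are hypothetical.

def load_with_fallbacks(loaders, max_retries=3):
    """Try each loading strategy in order; raise after all fail."""
    last_error = None
    for loader in loaders[:max_retries]:
        try:
            return loader()  # exit condition: model loads successfully
        except RuntimeError as err:
            last_error = err  # fall through to the next strategy
    raise RuntimeError(f"all strategies failed: {last_error}")

def gpu_fp16():
    raise RuntimeError("out of GPU memory")

def cpu_fp32():
    return "model-on-cpu"

print(load_with_fallbacks([gpu_fp16, cpu_fp32]))  # model-on-cpu
```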
Delays & Async Processing
- Model Loading (async-processing, variable: seconds to minutes) — Generation requests wait for models to load into memory
- AI Inference (async-processing, variable: seconds to minutes) — Users wait for diffusion process to complete
- Queue Processing (queue-drain, variable) — Multiple users' requests are processed sequentially
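Sequential queue drain is the simplest of these delays to sketch: one session at a time, with later requests waiting behind earlier ones. The session values below are placeholders.

```python
from collections import deque

# Toy sketch of a sequential processor draining a session queue,
# mirroring the "Queue Processing" delay above.
def drain(queue: deque) -> list:
    completed = []
    while queue:                   # exit condition: queue is empty
        session = queue.popleft()  # FIFO: earliest request first
        completed.append(f"done:{session}")
    return completed

sessions = deque(["s1", "s2", "s3"])
print(drain(sessions))  # ['done:s1', 'done:s2', 'done:s3']
```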
Control Points
- Model Precision (env-var) — Controls: Whether models run in fp16 or fp32 precision
- Seamless Axes (runtime-toggle) — Controls: Which axes (x/y) to apply seamless tiling. Default: empty list
- Control Weight (threshold) — Controls: Strength of ControlNet guidance in generation. Default: 1.0
- Queue Prepend (feature-flag) — Controls: Whether to prioritize new requests over queued ones. Default: true
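A control point like Model Precision is typically resolved from the environment at startup. A hedged sketch follows; the variable name `INVOKEAI_PRECISION` and the accepted values are assumptions for illustration, not necessarily the project's actual setting.

```python
# Sketch of resolving the "Model Precision" control point from an
# environment mapping. The variable name and value set are assumptions.
def resolve_precision(env: dict) -> str:
    value = env.get("INVOKEAI_PRECISION", "auto").lower()
    if value not in {"auto", "float16", "float32"}:
        raise ValueError(f"unknown precision: {value}")
    return value

print(resolve_precision({"INVOKEAI_PRECISION": "float16"}))  # float16
print(resolve_precision({}))  # auto
```

Passing the environment as a plain mapping (rather than reading `os.environ` inside the function) keeps the control point easy to test.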
Technology Stack
- FastAPI — Backend web framework
- React — Frontend UI framework
- Redux Toolkit — Frontend state management
- PyTorch — AI model inference engine
- Diffusers — Stable Diffusion pipeline library
- SQLite — Local database for metadata
- Pydantic — Data validation and serialization
- WebSockets — Real-time communication
Key Components
- InvokeAIUI (component) — Main React application component that orchestrates the entire UI
  invokeai/frontend/web/src/app/components/InvokeAIUI.tsx
- ApiDependencies (service) — Initializes and provides all backend services to FastAPI
  invokeai/app/api/dependencies.py
- Invoker (service) — Core orchestration service that executes invocation workflows
  invokeai/app/services/invoker.py
- ControlNetInvocation (class) — Handles ControlNet model invocations for guided image generation
  invokeai/app/invocations/controlnet.py
- MainModelDefaultSettings (class) — Configuration class for default model parameters like resolution and CFG scale
  invokeai/backend/model_manager/configs/main.py
- ModelManagerService (service) — Manages AI model loading, caching, and lifecycle
  invokeai/app/services/model_manager/model_manager_default.py
- ImageService (service) — Handles image CRUD operations and metadata management
  invokeai/app/services/images/images_default.py
- SessionProcessor (service) — Processes queued generation sessions and manages execution state
  invokeai/app/services/session_processor/session_processor_default.py
- buildControlLayersGraph (function) — Constructs the node graph for control layer-based image generation
  invokeai/frontend/web/src/features/nodes/util/graph/generation/buildControlLayersGraph.ts
- cropImageModalApi (module) — State management for the image cropping modal using nanostores
  invokeai/frontend/web/src/features/cropper/store/index.ts
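The service wiring that ApiDependencies performs can be sketched as a simple container: construct each service once, then hand the bundle to the orchestrator. The class bodies below are placeholders, not InvokeAI's real constructors.

```python
# Minimal sketch of a service container in the spirit of ApiDependencies.
# Service classes are placeholders, not InvokeAI's real implementations.
class ImageService:
    def save(self, name: str) -> str:
        return f"saved:{name}"

class ModelManagerService:
    def load(self, key: str) -> str:
        return f"loaded:{key}"

class Services:
    """Bundle of shared services, built once at startup."""
    def __init__(self):
        self.images = ImageService()
        self.models = ModelManagerService()

class Invoker:
    """Orchestrates invocations against the shared service bundle."""
    def __init__(self, services: Services):
        self.services = services

    def run(self, model_key: str, image_name: str) -> tuple:
        model = self.services.models.load(model_key)
        image = self.services.images.save(image_name)
        return model, image

invoker = Invoker(Services())
print(invoker.run("sd-1.5", "out.png"))  # ('loaded:sd-1.5', 'saved:out.png')
```

Centralizing construction this way is what keeps the components loosely coupled: consumers depend on the bundle's interface, not on how each service is built.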
Configuration
pins.json (json)
- python (string, unknown) — default: 3.12
- torchIndexUrl.win32.cuda (string, unknown) — default: https://download.pytorch.org/whl/cu128
- torchIndexUrl.linux.cpu (string, unknown) — default: https://download.pytorch.org/whl/cpu
- torchIndexUrl.linux.rocm (string, unknown) — default: https://download.pytorch.org/whl/rocm6.3
- torchIndexUrl.linux.cuda (string, unknown) — default: https://download.pytorch.org/whl/cu128
invokeai/app/api/routers/boards.py (python-pydantic)
- board_id (str, unknown) — default: Field(description="The id of the board that was deleted.")
- deleted_board_images (list[str], unknown) — default: Field(
- deleted_images (list[str], unknown) — default: Field(description="The names of the images that were deleted.")
invokeai/app/api/routers/images.py (python-pydantic)
- width (int, unknown) — default: Field(..., gt=0)
- height (int, unknown) — default: Field(..., gt=0)
invokeai/app/api/routers/images.py (python-pydantic)
- image_dto (ImageDTO, unknown) — default: Body(description="The image DTO")
- presigned_url (str, unknown) — default: Body(description="The URL to get the presigned URL for the image upload")
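The boards.py fields above imply a small Pydantic response model; a sketch follows. The class name is an assumption, and the description of deleted_board_images was truncated in the source, so it is left undocumented here rather than guessed.

```python
from pydantic import BaseModel, Field

# Sketch of the delete-board response model implied by the fields above.
# Class name is assumed; the truncated field description is left out.
class DeleteBoardResult(BaseModel):
    board_id: str = Field(description="The id of the board that was deleted.")
    deleted_board_images: list[str] = Field()
    deleted_images: list[str] = Field(
        description="The names of the images that were deleted."
    )

result = DeleteBoardResult(
    board_id="b1", deleted_board_images=[], deleted_images=["a.png"]
)
print(result.board_id)  # b1
```

Because none of the `Field(...)` calls supply a default value, all three fields are required, and FastAPI uses the descriptions to document the response schema.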
Science Pipeline
- Load base model — ModelManagerService loads diffusion model weights into GPU memory
  invokeai/app/services/model_manager/model_manager_default.py
- Process control inputs — ControlNet processes control images to guidance tensors [(batch, height, width, channels) → (batch, height, width, control_channels)]
  invokeai/app/invocations/controlnet.py
- Diffusion denoising — UNet performs iterative denoising guided by prompts and controls [(batch, channels, height//8, width//8) → (batch, channels, height//8, width//8)]
  invokeai/app/invocations/
- VAE decode — VAE decoder converts latent tensors to RGB images [(batch, 4, height//8, width//8) → (batch, 3, height, width)]
  invokeai/app/invocations/
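The tensor-shape bookkeeping in the pipeline above follows Stable Diffusion's standard VAE downsampling factor of 8 with 4 latent channels. A small arithmetic check:

```python
# Stable Diffusion's VAE downsamples each spatial dimension by 8 and
# works in a 4-channel latent space. These helpers trace the shapes
# listed in the pipeline above.

def latent_shape(batch: int, height: int, width: int) -> tuple:
    """Shape entering and leaving the UNet denoising loop."""
    return (batch, 4, height // 8, width // 8)

def decoded_shape(batch: int, height: int, width: int) -> tuple:
    """RGB shape after VAE decode."""
    return (batch, 3, height, width)

print(latent_shape(1, 512, 512))   # (1, 4, 64, 64)
print(decoded_shape(1, 512, 512))  # (1, 3, 512, 512)
```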
Assumptions & Constraints
- [warning] Assumes control images match expected dimensions for the loaded ControlNet model but no explicit validation (shape)
- [info] Hardcoded default resolutions (512x512 for SD1, 768x768 for SD2, 1024x1024 for SDXL) without runtime validation (value-range)
- [warning] Assumes IP-Adapter model and image encoder model are compatible but no explicit compatibility check (dependency)
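The hardcoded default resolutions noted above could be guarded by a simple runtime check. A sketch follows; the base-model keys are assumptions for illustration.

```python
# Sketch of a runtime guard for the hardcoded default resolutions
# (512 for SD1, 768 for SD2, 1024 for SDXL). Base-model keys are
# assumptions, not InvokeAI's actual identifiers.
DEFAULT_RESOLUTION = {"sd-1": 512, "sd-2": 768, "sdxl": 1024}

def default_size(base: str) -> tuple:
    """Return (width, height) defaults for a base model, or raise."""
    if base not in DEFAULT_RESOLUTION:
        raise ValueError(f"unknown base model: {base}")
    side = DEFAULT_RESOLUTION[base]
    return (side, side)

print(default_size("sdxl"))  # (1024, 1024)
```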
Frequently Asked Questions
What is InvokeAI used for?
invoke-ai/invokeai is a professional AI creative engine for Stable Diffusion image generation. It is a 10-component fullstack written in TypeScript, and the codebase contains 2158 files. Loosely coupled — components are relatively independent.
How is InvokeAI architected?
InvokeAI is organized into 5 architecture layers: Web Frontend, API Layer, Service Layer, Backend Engine, and 1 more. Loosely coupled — components are relatively independent. This layered structure keeps concerns separated and modules independent.
How does data flow through InvokeAI?
Data moves through 8 stages: User Input → Graph Building → API Request → Session Processing → Model Loading → .... User interactions in the web UI trigger Redux actions that build node graphs, which are sent to the FastAPI backend where the Invoker executes them through the invocation system, ultimately generating images stored via the ImageService. This pipeline design reflects a complex multi-stage processing system.
What technologies does InvokeAI use?
The core stack includes FastAPI (Backend web framework), React (Frontend UI framework), Redux Toolkit (Frontend state management), PyTorch (AI model inference engine), Diffusers (Stable Diffusion pipeline library), SQLite (Local database for metadata), and 2 more. A focused set of dependencies that keeps the build manageable.
What system dynamics does InvokeAI have?
InvokeAI exhibits 4 data pools (including the image records database and the model cache), 3 feedback loops, 4 control points, and 3 delays. The feedback loops handle polling and retry. These runtime behaviors shape how the system responds to load, failures, and configuration changes.
What design patterns does InvokeAI use?
5 design patterns detected: Invocation System, Service Layer Pattern, Redux Toolkit Query, Pydantic Configuration, Graph-based Generation.
Analyzed on March 31, 2026 by CodeSea. Written by Karolina Sarna.