invoke-ai/invokeai

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate visual media with the latest AI-driven technologies. It offers an industry-leading WebUI and serves as the foundation for multiple commercial products.

27,021 stars · TypeScript · 8 components

Generates AI images using Stable Diffusion models with a web UI and node-based workflow system

Data flows from user input through the React frontend to FastAPI endpoints, where workflows are queued as SessionQueueItems. The invocation engine processes these by converting workflow graphs into executable tasks, loading AI models as needed, and generating images which are stored in SQLite with files on disk. Real-time progress updates flow back through WebSocket events to update the UI.

Under the hood, the system uses 4 feedback loops, 5 data pools, and 5 control points to manage its runtime behavior.

An 8-component fullstack application. 2,270 files analyzed. Data flows through 6 distinct pipeline stages.

How Data Flows Through the System


  1. Workflow submission — React frontend sends workflow JSON through FastAPI endpoints to queue service, validating workflow structure and storing as SessionQueueItem [WorkflowWithoutID → SessionQueueItem]
  2. Session processing — DefaultSessionProcessor pulls queued sessions, converts workflow nodes into BaseInvocation instances, and builds execution graph with dependency resolution [SessionQueueItem → BaseInvocation]
  3. Model loading — ModelManagerService loads required AI models (Stable Diffusion, ControlNet, LoRA) from disk into GPU memory using configuration from ModelConfig records [ModelConfig → loaded model instances]
  4. Invocation execution — Individual invocation nodes execute in dependency order, applying transformations like ControlNet conditioning, LoRA weights, and Stable Diffusion denoising [BaseInvocation → PIL Image]
  5. Image storage — Generated images are processed by ImageService which extracts metadata, generates thumbnails via DiskImageFileStorage, and creates ImageDTO records in SQLite [PIL Image → ImageDTO]
  6. Real-time updates — FastAPIEventService broadcasts progress events and completion status through WebSocket connections to update React frontend in real-time [execution progress → WebSocket messages]
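The queueing and processing steps above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names (`QueueItem`, `submit`, `process_next`), not InvokeAI's actual `SessionQueueItem` or `DefaultSessionProcessor` classes:

```python
from collections import deque
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(str, Enum):
    PENDING = "pending"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"

@dataclass
class QueueItem:
    """Simplified stand-in for a SessionQueueItem: a workflow plus status."""
    session_id: str
    workflow: dict
    status: Status = Status.PENDING

queue: deque = deque()  # FIFO queue of pending sessions (stage 1)

def submit(session_id: str, workflow: dict) -> QueueItem:
    """Stage 1: validate-and-enqueue a submitted workflow."""
    item = QueueItem(session_id, workflow)
    queue.append(item)
    return item

def process_next() -> Optional[QueueItem]:
    """Stages 2-4: pull the next queued session and execute it."""
    if not queue:
        return None
    item = queue.popleft()
    item.status = Status.IN_PROGRESS
    # ... convert workflow nodes into invocations and run them in
    #     dependency order, loading models as needed ...
    item.status = Status.COMPLETED
    return item
```

In the real system, status transitions are persisted to SQLite and broadcast over WebSocket (stage 6) rather than held only in memory.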

Data Models

The data structures that flow between stages — the contracts that hold the system together.

ImageDTO services/api/types
Pydantic model with image_name: str, board_id: str | None, image_url: str, thumbnail_url: str, width: int, height: int, created_at: datetime, metadata: dict, workflow: dict | None
Created during generation or upload, stored in SQLite with metadata, served via REST API to frontend for display
BaseInvocation invokeai/app/invocations/baseinvocation.py
Pydantic base class with id: str, workflow_id: str, type: str, plus subclass-specific fields for inputs and outputs
Parsed from workflow JSON, queued as executable tasks, processed by invocation engine with typed inputs/outputs
ModelConfig invokeai/backend/model_manager/configs/base.py
Pydantic model with key: str, name: str, base: BaseModelType, type: ModelType, format: ModelFormat, path: str, plus model-specific configuration fields
Created during model installation, stored in SQLite, loaded into memory when needed for generation
WorkflowWithoutID invokeai/app/services/workflow_records/workflow_records_common.py
Pydantic model with name: str, description: str, version: str, contact: str, tags: list[str], notes: str, exposedFields: list, meta: dict, nodes: dict, edges: list
Built in node editor, validated and stored in database, converted to execution graph for invocation processing
SessionQueueItem invokeai/app/services/session_queue/session_queue_common.py
Pydantic model with session_id: str, batch_id: str, workflow: WorkflowWithoutID, created_at: datetime, updated_at: datetime, status: SessionQueueItemStatus
Created when workflow is queued, processed by session runner, status updated during execution, results stored as images
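The models above are Pydantic classes in the codebase; the shape of one contract can be sketched with a plain dataclass using the `ImageDTO` fields listed above (illustrative only, with a made-up example payload):

```python
from dataclasses import dataclass, asdict
from datetime import datetime
from typing import Optional

@dataclass
class ImageDTO:
    """Dataclass sketch of the ImageDTO contract described above."""
    image_name: str
    image_url: str
    thumbnail_url: str
    width: int
    height: int
    created_at: datetime
    metadata: dict
    board_id: Optional[str] = None   # images may be unassigned to a board
    workflow: Optional[dict] = None  # present only for workflow-generated images

# Hypothetical record as it might be served to the frontend
dto = ImageDTO(
    image_name="img_001.png",
    image_url="/api/v1/images/i/img_001.png",
    thumbnail_url="/api/v1/images/i/img_001.png/thumbnail",
    width=512,
    height=512,
    created_at=datetime.now(),
    metadata={"model": "sd-1.5"},
)
```

The real model adds Pydantic validation and serialization, so malformed records are rejected at the API boundary rather than propagating into the UI.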

Hidden Assumptions

Things this code relies on but never validates. These are the things that cause silent failures when the system changes.

critical Environment weakly guarded

DOM element with id 'root' exists in the HTML document when React app initializes

If this fails: Application fails to mount with 'Cannot read properties of null' error if HTML template lacks root div or uses different id

invokeai/frontend/web/src/main.tsx:ReactDOM.createRoot
critical Domain unguarded

window.location.origin provides a valid backend API base URL that matches the FastAPI server location

If this fails: All API requests fail with network errors when frontend and backend are served from different origins or ports in development/deployment scenarios

invokeai/frontend/web/src/services/api/index.ts:getBaseUrl
critical Resource unguarded

PyTorch can successfully initialize and detect GPU/CUDA availability at application startup

If this fails: Service initialization fails silently or falls back to CPU-only mode without user notification when CUDA drivers are missing or incompatible

invokeai/app/api/dependencies.py:torch
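A guard for this assumption might look like the following sketch: detect CUDA availability explicitly and warn on fallback rather than degrading silently. This is a hypothetical helper (`select_device`), not InvokeAI's startup code:

```python
def select_device() -> str:
    """Pick a compute device, warning explicitly on CPU fallback
    instead of failing silently (illustrative guard only)."""
    try:
        import torch  # deferred so missing PyTorch is handled, not fatal
    except ImportError:
        print("warning: PyTorch not installed; using CPU")
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    print("warning: CUDA unavailable (missing or incompatible drivers); "
          "falling back to CPU")
    return "cpu"
```

Surfacing the fallback at startup turns a silent 10-50x slowdown into an actionable log line.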
warning Contract weakly guarded

Editor instance always has a destroy() method and calling it is safe during modal close

If this fails: Uncaught exception during modal cleanup if Editor implementation doesn't provide destroy() or if destroy() throws an error, leaving modal in broken state

invokeai/frontend/web/src/features/cropper/store/index.ts:state.editor.destroy
warning Scale guarded

Image dimensions (width/height) are always multiples of 8 and at least 64 pixels as enforced by Field validation

If this fails: Model configuration creation fails with validation errors when loading models that expect different dimension constraints or when users specify invalid dimensions

invokeai/backend/model_manager/configs/main.py:MainModelDefaultSettings
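The constraint itself is simple to state in code. A standalone validator equivalent to the "multiple of 8, at least 64" rule (the real enforcement lives in Pydantic `Field` validation; the function name here is hypothetical):

```python
def validate_dimension(value: int, minimum: int = 64, multiple: int = 8) -> int:
    """Reject image dimensions that latent-space models cannot handle:
    below the minimum, or not aligned to the VAE downscaling factor."""
    if value < minimum:
        raise ValueError(f"dimension {value} is below the minimum of {minimum}")
    if value % multiple != 0:
        raise ValueError(f"dimension {value} is not a multiple of {multiple}")
    return value
```

The multiple-of-8 rule comes from the VAE's 8x spatial downscaling: a 513-pixel width has no clean latent representation.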
warning Domain unguarded

BaseModelType enum values directly map to specific default dimensions (SD1=512x512, SD2=768x768, SDXL=1024x1024)

If this fails: Generated images have suboptimal quality or aspect ratios when new model variants are added without updating the dimension mappings, or when models have non-standard optimal resolutions

invokeai/backend/model_manager/configs/main.py:from_base
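One way to harden this mapping is to fail loudly for unmapped variants instead of silently using a wrong default. A sketch with assumed enum values (the real `BaseModelType` has more members):

```python
from enum import Enum

class BaseModelType(str, Enum):
    SD1 = "sd-1"
    SD2 = "sd-2"
    SDXL = "sdxl"

# Native training resolutions per base model family
DEFAULT_DIMENSIONS = {
    BaseModelType.SD1: (512, 512),
    BaseModelType.SD2: (768, 768),
    BaseModelType.SDXL: (1024, 1024),
}

def default_size(base: BaseModelType) -> tuple:
    """Look up the default generation size, erroring on unmapped
    variants rather than guessing (illustrative guard only)."""
    try:
        return DEFAULT_DIMENSIONS[base]
    except KeyError:
        raise ValueError(f"no default dimensions registered for {base!r}")
```

With an explicit lookup table, adding a new base model type without a dimension entry surfaces as an error at config time, not as degraded image quality at generation time.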
info Environment unguarded

Storybook can find and load all story files matching the glob patterns in src/**/*.stories.@(js|jsx|mjs|ts|tsx)

If this fails: Storybook development environment silently excludes stories if file extensions or naming conventions change, making component documentation incomplete

invokeai/frontend/web/.storybook/main.ts:stories glob pattern
warning Temporal weakly guarded

onApplyCrop callback can be either synchronous or asynchronous (returns void or Promise<void>) and will complete successfully

If this fails: Crop operation appears to succeed but changes aren't persisted if the async callback fails silently, or UI becomes unresponsive if synchronous callback throws

invokeai/frontend/web/src/features/cropper/store/index.ts:onApplyCrop
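The sync-or-async callback problem is not TypeScript-specific; the same normalization pattern can be sketched in Python, awaiting the callback whether or not it returns an awaitable, and surfacing failures instead of swallowing them (hypothetical `apply_crop` helper, not the actual frontend code):

```python
import asyncio
import inspect

async def apply_crop(on_apply_crop, *args):
    """Invoke a callback that may be sync (returns None) or async
    (returns an awaitable), and surface errors either way."""
    try:
        result = on_apply_crop(*args)
        if inspect.isawaitable(result):
            await result  # async path: wait for persistence to finish
    except Exception as exc:
        # Re-raise so the caller can roll back UI state instead of
        # showing a crop that was never persisted
        print(f"crop callback failed: {exc}")
        raise
```

Wrapping both call shapes in one awaited path closes the gap where an async callback's rejection would otherwise be dropped.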
warning Contract unguarded

All RTK Query cache tag types listed in the tagTypes array are consistently used across all API endpoints for proper cache invalidation

If this fails: Stale data displayed in UI when cache tags are mismatched between endpoints, causing inconsistent state between components that should update together

invokeai/frontend/web/src/services/api/index.ts:tagTypes
critical Ordering unguarded

Service dependency imports can be resolved in the order listed and circular dependencies don't exist between service modules

If this fails: Application startup fails with import errors or circular import exceptions when service dependencies change or new interdependencies are introduced

invokeai/app/api/dependencies.py:service imports

System Behavior

How the system operates at runtime — where data accumulates, what loops, what waits, and what controls what.

Data Pools

SQLite Database (database)
Central database storing image records, workflow definitions, model configurations, queue items, and application state
Model Cache (in-memory)
GPU memory cache of loaded AI models with LRU eviction and memory pressure management
Session Queue (queue)
FIFO queue of workflow execution sessions with priority support and status tracking
Image Storage (file-store)
Organized directory structure storing generated images, thumbnails, and metadata on disk
Invocation Cache (cache)
Memory cache of invocation results to avoid recomputing identical operations within workflows
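The model cache's LRU behavior can be sketched with an `OrderedDict`. This toy version evicts by item count only; the real cache also tracks GPU memory pressure (class and method names here are illustrative):

```python
from collections import OrderedDict

class ModelCache:
    """Tiny LRU sketch of the in-memory model cache described above."""

    def __init__(self, max_items: int = 2):
        self._items: OrderedDict = OrderedDict()
        self._max = max_items

    def get(self, key: str):
        if key in self._items:
            self._items.move_to_end(key)  # mark as most recently used
            return self._items[key]
        return None  # cache miss: caller must load the model from disk

    def put(self, key: str, model) -> None:
        self._items[key] = model
        self._items.move_to_end(key)
        while len(self._items) > self._max:
            self._items.popitem(last=False)  # evict least recently used
```

Evicting the least-recently-used model keeps the working set of a repeated workflow (e.g. base model + LoRA) resident while one-off models are dropped first.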


Technology Stack

FastAPI (framework)
Provides REST API endpoints and WebSocket communication for the backend server
React (framework)
Builds the web-based user interface with both traditional controls and node-based workflow editor
PyTorch (compute)
Runs Stable Diffusion models and other AI inference workloads on GPU
SQLite (database)
Stores application data including images, workflows, models, and queue state
Diffusers (library)
Provides Stable Diffusion model implementations and pipeline management
Pydantic (serialization)
Handles data validation and serialization for API models and configuration
Redux Toolkit (framework)
Manages frontend application state and API communication
Vite (build)
Builds and serves the React frontend with hot reloading in development


Frequently Asked Questions

What is InvokeAI used for?

InvokeAI generates AI images using Stable Diffusion models through a web UI and a node-based workflow system. invoke-ai/invokeai is an 8-component fullstack application; data flows through 6 distinct pipeline stages, and the codebase contains 2,270 files.

How is InvokeAI architected?

InvokeAI is organized into 5 architecture layers: Frontend UI, API Layer, Invocation Engine, Model Management, and 1 more. Data flows through 6 distinct pipeline stages. This layered structure keeps concerns separated and modules independent.

How does data flow through InvokeAI?

Data moves through 6 stages: Workflow submission → Session processing → Model loading → Invocation execution → Image storage → Real-time updates. User input passes through the React frontend to FastAPI endpoints, where workflows are queued as SessionQueueItems. The invocation engine converts workflow graphs into executable tasks, loads AI models as needed, and generates images, which are stored in SQLite with files on disk. Real-time progress updates flow back through WebSocket events to update the UI. This pipeline design reflects a complex multi-stage processing system.

What technologies does InvokeAI use?

The core stack includes FastAPI (Provides REST API endpoints and WebSocket communication for the backend server), React (Builds the web-based user interface with both traditional controls and node-based workflow editor), PyTorch (Runs Stable Diffusion models and other AI inference workloads on GPU), SQLite (Stores application data including images, workflows, models, and queue state), Diffusers (Provides Stable Diffusion model implementations and pipeline management), Pydantic (Handles data validation and serialization for API models and configuration), and 2 more. A focused set of dependencies that keeps the build manageable.

What system dynamics does InvokeAI have?

InvokeAI exhibits 5 data pools (including the SQLite database and the model cache), 4 feedback loops, 5 control points, and 4 delays. The feedback loops handle auto-scaling and polling. These runtime behaviors shape how the system responds to load, failures, and configuration changes.

What design patterns does InvokeAI use?

5 design patterns detected: Invocation Pattern, Service Layer Architecture, Event-Driven Updates, Model Registry, Queue-Based Processing.

Analyzed on April 20, 2026 by CodeSea.