openstatushq/openstatus
🫖 Status page with uptime monitoring & API monitoring as code 🫖
Open-source uptime monitoring platform with status pages and synthetic monitoring
Monitoring data flows from global checker regions through the server API to trigger incident workflows and update status pages in real-time
Under the hood, the system uses 2 feedback loops, 2 data pools, and 4 control points to manage its runtime behavior.
Structural Verdict
A dashboard-style system of 10 components with no detected inter-component connections, across 1,439 analyzed files. Components operate mostly in isolation.
How Data Flows Through the System
- Schedule Monitors — Cron jobs trigger checker tasks based on monitor periodicity (config: services.checker.environment)
- Execute Checks — Regional checker services perform HTTP/ping monitoring (config: x-common-variables.DATABASE_URL)
- Process Results — Workflows service receives results and updates monitor status
- Incident Detection — Failed checks trigger incident creation and notification workflows
- Status Updates — Status pages and dashboard reflect current system health (config: x-common-variables.NEXT_PUBLIC_URL)
- Screenshot Capture — Playwright captures incident screenshots for documentation
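The middle stages of this pipeline can be sketched as a single pass over one check result. The types and names below are illustrative only, not the actual openstatus schema or APIs.

```typescript
// Illustrative types; the real openstatus schema differs.
type CheckResult = { monitorId: string; ok: boolean; latencyMs: number };
type Incident = { monitorId: string; resolved: boolean };

const incidents: Incident[] = [];
const statusPage = new Map<string, "operational" | "down">();

// One pass of the pipeline: process a result, detect or recover incidents,
// and update the status view.
function processResult(result: CheckResult): void {
  const open = incidents.find(
    (i) => i.monitorId === result.monitorId && !i.resolved,
  );
  if (!result.ok && !open) {
    // Incident Detection: first failure with no open incident opens one.
    incidents.push({ monitorId: result.monitorId, resolved: false });
  } else if (result.ok && open) {
    // Incident Recovery: a successful check resolves the open incident.
    open.resolved = true;
  }
  // Status Updates: the page always reflects the latest check.
  statusPage.set(result.monitorId, result.ok ? "operational" : "down");
}

processResult({ monitorId: "m1", ok: false, latencyMs: 1200 });
processResult({ monitorId: "m1", ok: true, latencyMs: 80 });
```

In the real system these steps span services (checker, workflows, status page); the sketch only shows the state transitions they coordinate.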
System Behavior
How the system actually operates at runtime — where data accumulates, what loops, what waits, and what controls what.
Data Pools
- LibSQL Database — Stores monitors, incidents, users, and status data
- R2 Screenshot Storage — Accumulates incident screenshot images
Feedback Loops
- Monitor Config Refresh (polling, balancing) — Trigger: 10-minute timer. Action: Fetch updated monitor configurations. Exit: Context cancellation.
- Incident Recovery (auto-scale, balancing) — Trigger: Successful health check after incident. Action: Resolve incident and notify recovery. Exit: Incident marked resolved.
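The Monitor Config Refresh loop above (timer-driven polling with cancellation as the exit) can be sketched as follows. `fetchConfigs` and the `AbortSignal` wiring are illustrative assumptions; the real loop lives in the Go private-location checker.

```typescript
// Hedged sketch: poll on a timer, exit on cancellation.
async function configRefreshLoop(
  fetchConfigs: () => Promise<void>,
  signal: AbortSignal,
  intervalMs: number = 10 * 60 * 1000, // Trigger: 10-minute timer
): Promise<void> {
  while (!signal.aborted) {
    await fetchConfigs(); // Action: fetch updated monitor configurations
    await new Promise<void>((resolve) => {
      if (signal.aborted) return resolve(); // Exit: context cancellation
      const timer = setTimeout(resolve, intervalMs);
      signal.addEventListener(
        "abort",
        () => {
          clearTimeout(timer); // wake early instead of waiting out the interval
          resolve();
        },
        { once: true },
      );
    });
  }
}
```

Checking the signal both before sleeping and inside the sleep mirrors how context cancellation interrupts a ticker in Go.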
Delays & Async Processing
- Cron Schedule Intervals (scheduled-job, ~30s/1m/5m/10m/30m/1h) — Monitor checks execute at configured periodicity
- Config Refresh (polling, ~10 minutes) — Private location monitors update configuration
- Graceful Shutdown (async-processing, ~5 seconds) — Server waits for requests to complete before shutdown
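The ~5-second graceful shutdown above amounts to racing an in-flight drain against a deadline. This is a sketch under assumed names (the real implementation is the Go `gracefulShutdown` in the private-location service).

```typescript
// Hedged sketch: wait for in-flight work, but force the issue after a deadline.
function gracefulShutdown(
  drain: () => Promise<void>, // waits for in-flight requests to complete
  timeoutMs = 5_000,
): Promise<"drained" | "timed-out"> {
  const deadline = new Promise<"timed-out">((resolve) =>
    setTimeout(() => resolve("timed-out"), timeoutMs),
  );
  return Promise.race([drain().then(() => "drained" as const), deadline]);
}

// Wiring it to signals, as the server does on SIGTERM/SIGINT (illustrative):
// process.on("SIGTERM", () => gracefulShutdown(drainFn).then(() => process.exit(0)));
```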
Control Points
- Monitor Periodicity (env-var) — Controls: How often health checks execute. Default: 30s|1m|5m|10m|30m|1h
- OPENSTATUS_KEY (env-var) — Controls: API authentication for private location checkers
- SELF_HOST (env-var) — Controls: Enables additional auth providers in self-hosted mode. Default: true
- CRON_SECRET (env-var) — Controls: Authorization for cron job endpoints
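A control point like CRON_SECRET typically gates its endpoints with a shared-secret check. The header format below is an assumption for illustration, not the actual server code.

```typescript
// Hedged sketch of a CRON_SECRET-style gate: reject requests whose
// Authorization header does not carry the shared secret.
function isCronAuthorized(
  authorizationHeader: string | undefined,
  cronSecret: string | undefined,
): boolean {
  if (!cronSecret) return false; // unset secret: fail closed
  return authorizationHeader === `Bearer ${cronSecret}`;
}
```

Failing closed when the secret is unset keeps a misconfigured deployment from exposing its cron endpoints.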
Package Structure
This monorepo contains 11 packages:
- apps/checker — Go-based synthetic monitoring service that performs health checks and reports status to the platform
- apps/dashboard — Next.js admin dashboard for configuring monitors, viewing analytics, and managing incidents
- Astro-based documentation site with Starlight integration
- apps/private-location — Go service for running monitors from private network locations
- apps/railway-proxy — Go reverse proxy for routing checker requests across Railway regions
- apps/screenshot-service — Hono service using Playwright to capture incident screenshots and store them in S3
- apps/server — Main Hono API server providing REST and RPC endpoints for the platform
- Go SSH server providing terminal-based status page access
- apps/status-page — Next.js public-facing status pages with custom domains and theming
- apps/web — Next.js marketing website with MDX content and pricing information
- apps/workflows — Hono service handling background tasks, cron jobs, and incident workflows
Technology Stack
- Next.js — Frontend framework for dashboard and status pages
- Hono — Lightweight web framework for API services
- Go — High-performance language for checker and proxy services
- Turso/LibSQL — SQLite-compatible database with edge replication
- Playwright — Browser automation for screenshot capture
- Docker — Containerization for all services
- NextAuth — Authentication with multiple providers
- Monorepo build system and task runner
- Astro — Static site generator for documentation
- Observability and structured logging
Key Components
- MonitorManager (service, apps/checker/pkg/scheduler) — Manages scheduled monitor execution and updates from the private location checker
- checkerRoute (handler, apps/workflows/src/checker/index.ts) — Processes monitoring results and triggers incident workflows
- auth (middleware, apps/dashboard/src/lib/auth/index.ts) — NextAuth configuration with GitHub, Google, and Resend providers
- app (service, apps/server/src/index.ts) — Main Hono API server with OpenAPI documentation and RPC routes
- findOpenIncident (function, apps/workflows/src/checker/index.ts) — Queries database for unresolved incidents for a given monitor
- useStatusPage (hook, apps/status-page/src/components/status-page/floating-button.tsx) — React context hook for managing status page theme and display settings
- components (config, apps/web/src/content/mdx-components/index.tsx) — MDX component mapping for rendering marketing content with custom elements
- gracefulShutdown (function, apps/private-location/cmd/server/main.go) — Handles graceful server shutdown with cleanup on SIGTERM/SIGINT signals
- proxy (handler, apps/railway-proxy/main.go) — Routes requests to regional checker instances based on Railway region headers
- S3 (service, apps/screenshot-service/src/index.ts) — S3 client for uploading incident screenshots to R2 storage
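The contract of `findOpenIncident` can be sketched as below. The real function queries the LibSQL database from apps/workflows; here an in-memory array stands in for the incidents table, and the field names are illustrative.

```typescript
// Illustrative row shape; the real schema differs.
type OpenIncident = { id: number; monitorId: number; resolvedAt: Date | null };

function findOpenIncident(
  incidents: OpenIncident[],
  monitorId: number,
): OpenIncident | undefined {
  // "Open" means no resolution timestamp yet for this monitor.
  return incidents.find((i) => i.monitorId === monitorId && i.resolvedAt === null);
}
```

Returning `undefined` when no incident is open is what lets a failed check decide between creating a new incident and leaving the existing one alone.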
Configuration
config.openstatus.yaml (yaml)
- tests.ids (array) — default: 1,771,2662
coolify-deployment.yaml (yaml)
- version (string) — default: 3.8
- x-common-variables.DATABASE_URL (string) — default: ${DATABASE_URL:-http://libsql:8080}
- x-common-variables.DATABASE_AUTH_TOKEN (string) — default: ${DATABASE_AUTH_TOKEN:-}
- x-common-variables.AUTH_SECRET (string) — default: ${AUTH_SECRET:-default-secret-change-me}
- x-common-variables.NEXT_PUBLIC_URL (string) — default: ${NEXT_PUBLIC_URL:-http://localhost:3002}
- x-common-variables.SELF_HOST (string) — default: ${SELF_HOST:-true}
- x-common-variables.AUTH_GITHUB_ID (string) — default: ${AUTH_GITHUB_ID:-}
- x-common-variables.AUTH_GITHUB_SECRET (string) — default: ${AUTH_GITHUB_SECRET:-}
- +216 more parameters
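The `${VAR:-default}` values above are shell-style fallbacks (use the default when the variable is unset or empty). As a hedged sketch, the same pattern in application code, with `envOr` as a hypothetical helper:

```typescript
// Hypothetical helper mirroring shell "${VAR:-default}" semantics:
// the default applies when the variable is unset OR empty.
function envOr(
  env: Record<string, string | undefined>,
  name: string,
  fallback: string,
): string {
  const value = env[name];
  return value !== undefined && value !== "" ? value : fallback;
}

// Example with defaults taken from the compose file above:
const databaseUrl = envOr({}, "DATABASE_URL", "http://libsql:8080");
```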
docker-compose-lightweight.yaml (yaml)
- networks.openstatus.driver (string) — default: bridge
- networks.openstatus.name (string) — default: openstatus
- volumes.libsql-data.name (string) — default: openstatus-libsql-data
- services.libsql.container_name (string) — default: openstatus-libsql
- services.libsql.image (string) — default: ghcr.io/tursodatabase/libsql-server:latest
- services.libsql.networks (array) — default: openstatus
- services.libsql.ports (array) — default: 8080:8080,5001:5001
- services.libsql.volumes (array) — default: libsql-data:/var/lib/sqld
- +49 more parameters
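Reassembled from the parameters listed above, the libsql portion of the lightweight compose file looks roughly like the fragment below. This is a reconstruction from the listed defaults, not the file verbatim.

```yaml
networks:
  openstatus:
    driver: bridge
    name: openstatus

volumes:
  libsql-data:
    name: openstatus-libsql-data

services:
  libsql:
    container_name: openstatus-libsql
    image: ghcr.io/tursodatabase/libsql-server:latest
    networks:
      - openstatus
    ports:
      - "8080:8080"
      - "5001:5001"
    volumes:
      - libsql-data:/var/lib/sqld
```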
docker-compose.github-packages.yaml (yaml)
- networks.openstatus.driver (string) — default: bridge
- networks.openstatus.name (string) — default: openstatus
- volumes.libsql-data.name (string) — default: openstatus-libsql-data
- volumes.workflows-data.name (string) — default: openstatus-workflows-data
- services.libsql.container_name (string) — default: openstatus-libsql
- services.libsql.image (string) — default: ghcr.io/tursodatabase/libsql-server:latest
- services.libsql.networks (array) — default: openstatus
- services.libsql.ports (array) — default: 8080:8080,5001:5001
- +104 more parameters
Frequently Asked Questions
What is openstatus used for?
openstatushq/openstatus is an open-source uptime monitoring platform with status pages and synthetic monitoring. It is a 10-component dashboard-style system written in TypeScript, where components operate mostly in isolation. The codebase contains 1,439 files.
How is openstatus architected?
openstatus is organized into 4 architecture layers: Frontend Layer, API Layer, Monitoring Layer, and Shared Libraries. Components operate mostly in isolation; this layered structure keeps concerns separated and modules independent.
How does data flow through openstatus?
Data moves through 6 stages: Schedule Monitors → Execute Checks → Process Results → Incident Detection → Status Updates → Screenshot Capture. Monitoring data flows from global checker regions through the server API to trigger incident workflows and update status pages in real time. This pipeline design reflects a multi-stage processing system.
What technologies does openstatus use?
The core stack includes Next.js (Frontend framework for dashboard and status pages), Hono (Lightweight web framework for API services), Go (High-performance language for checker and proxy services), Turso/LibSQL (SQLite-compatible database with edge replication), Playwright (Browser automation for screenshot capture), Docker (Containerization for all services), and 4 more. This broad technology surface reflects a mature project with many integration points.
What system dynamics does openstatus have?
openstatus exhibits 2 data pools (LibSQL Database, R2 Screenshot Storage), 2 feedback loops, 4 control points, 3 delays. The feedback loops handle polling and auto-scale. These runtime behaviors shape how the system responds to load, failures, and configuration changes.
What design patterns does openstatus use?
5 design patterns detected: Multi-Region Monitoring, Event-Driven Architecture, NextAuth Integration, Shared Package Architecture, Docker Compose Orchestration.
Analyzed on March 31, 2026 by CodeSea. Written by Karolina Sarna.