louislam/uptime-kuma

A fancy self-hosted monitoring tool

85,494 stars · JavaScript · 7 components

Monitors service uptime and sends notifications when services go down

The system runs monitors on scheduled intervals, where each monitor uses its specific MonitorType (HTTP, TCP, ping, etc.) to check service health. Results are stored as Heartbeat records in the database. When a monitor changes status (up to down or vice versa), NotificationProviders send alerts through configured channels like Discord or Slack. The frontend receives real-time updates via WebSocket and displays current status on dashboards and public status pages.

Under the hood, the system uses 3 feedback loops, 4 data pools, and 4 control points to manage its runtime behavior.

A 7-component full-stack application; 342 files analyzed. Data flows through 7 distinct pipeline stages.

How Data Flows Through the System

  1. Schedule monitor execution — UptimeKumaServer.startMonitor() creates intervals for each active monitor based on their configured check frequency, storing running intervals in monitorList [Monitor]
  2. Execute health check — Monitor.beat() method delegates to appropriate MonitorType.check() (HttpMonitorType, TcpMonitorType, etc.) which performs actual connectivity test and measures response time [Monitor → Heartbeat]
  3. Record check result — Heartbeat record is saved to database with status (UP=1, DOWN=0), ping time, and error message if any. Previous status is compared to detect changes [Heartbeat]
  4. Trigger notifications — If monitor status changed, Monitor.sendNotification() loads configured NotificationProviders and calls their send() method with monitor context and heartbeat data [Heartbeat]
  5. Send alerts — Each NotificationProvider (Discord, Slack, Webhook, etc.) formats the alert message using templates and sends to external service via HTTP API [NotificationProvider]
  6. Update frontend — Socket.IO server pushes heartbeat updates to connected clients via sendHeartbeat event, updating dashboard monitors in real-time [Heartbeat]
  7. Render status pages — Public status pages query monitor heartbeats filtered by configured tags and display service status using StatusPage configuration [StatusPage]
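Under the assumptions above, stages 1 through 5 can be condensed into a minimal sketch. The helper names (`runCheck`, `saveHeartbeat`, `notify`) and the `state` object are illustrative stand-ins, not Uptime Kuma's actual API:

```javascript
const UP = 1, DOWN = 0; // heartbeat status codes as described in the data models

// One beat: run the check, record a heartbeat, and notify only on status change.
async function beat(monitor, state, { runCheck, saveHeartbeat, notify }) {
  const rec = { monitorId: monitor.id, time: Date.now(), status: DOWN, ping: null, msg: "" };
  const started = Date.now();
  try {
    await runCheck(monitor);           // stage 2: delegate to the MonitorType
    rec.status = UP;
    rec.ping = Date.now() - started;   // measured response time
  } catch (err) {
    rec.msg = String(err.message || err); // failures are reported via throw
  }
  await saveHeartbeat(rec);            // stage 3: persist the result
  if (state.lastStatus !== null && state.lastStatus !== rec.status) {
    await notify(monitor, rec);        // stages 4-5: alert only on a transition
  }
  state.lastStatus = rec.status;
  return rec;
}

// Stage 1: schedule beats at the monitor's configured interval (seconds).
function startMonitor(monitor, deps) {
  const state = { lastStatus: null };
  return setInterval(() => beat(monitor, state, deps), monitor.interval * 1000);
}
```

The key design point is that notifications hang off the status *transition*, not the status itself, so a service that stays down produces one alert rather than one per check.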

Data Models

The data structures that flow between stages — the contracts that hold the system together.

Monitor server/model/monitor.js
Active Record model with id: number, name: string, type: string (http|tcp|ping|dns|push|etc), url: string, interval: number, active: boolean, plus type-specific fields like headers, method, keyword
Created via frontend form, continuously executed by monitor engine, updated with latest heartbeat status
Heartbeat server/model/heartbeat.js
Active Record model with id: number, monitor_id: number, status: number (0=down, 1=up, 2=pending), time: datetime, ping: number, msg: text
Generated each time a monitor check runs, stored in database for history, triggers notifications on status changes
NotificationProvider server/notification-providers/notification-provider.js
Base class with name: string, send(notification, msg, monitorJSON, heartbeatJSON) method, and subclasses for Discord, Slack, Webhook, etc
Instantiated when notification needed, receives monitor context and heartbeat data, formats and sends message through external service
MonitorType server/monitor-types/monitor-type.js
Base class with name: string, check(monitor, heartbeat, server) method, and subclasses for HTTP, TCP, Ping, DNS checks
Registered at startup, invoked by scheduler based on monitor interval, performs actual connectivity test and updates heartbeat
StatusPage server/model/status_page.js
Active Record model with id: number, slug: string, title: string, description: text, theme: string, published: boolean, show_tags: boolean
Created via admin interface, maps to public URLs, displays filtered monitor statuses for external users

Hidden Assumptions

Things this code relies on but never validates. These are the things that cause silent failures when the system changes.

critical Resource weakly guarded

Assumes the cache cleaner interval can run indefinitely without memory-pressure checks - the Settings.cacheCleaner interval never stops once started, and cache entries accumulate between cleanups

If this fails: Under high load with many setting reads, cache grows unbounded until 60-second cleanup, potentially causing memory exhaustion in containers with strict memory limits

server/settings.js:Settings.get
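One way to guard against this failure mode is to cap the cache's size as well as its age. This is a minimal sketch of a size-bounded TTL cache, assuming the goal is to bound memory between the 60-second cleanups; it is not Uptime Kuma's code:

```javascript
// Bounded TTL cache: entries expire after ttlMs, and inserts beyond
// maxEntries evict the oldest entry instead of growing without bound.
class BoundedTtlCache {
  constructor({ maxEntries = 1000, ttlMs = 60_000 } = {}) {
    this.maxEntries = maxEntries;
    this.ttlMs = ttlMs;
    this.map = new Map(); // key -> { value, expires }; Map preserves insertion order
  }

  get(key) {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {   // lazily drop stale entries on read
      this.map.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key, value) {
    if (this.map.size >= this.maxEntries && !this.map.has(key)) {
      const oldest = this.map.keys().next().value; // first-inserted key
      this.map.delete(oldest);
    }
    this.map.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```

With a hard entry cap, a burst of setting reads in a memory-limited container degrades to cache misses rather than unbounded growth.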
critical Contract unguarded

Assumes all MonitorType.check() implementations follow the contract of setting heartbeat.status to UP on success and throwing descriptive errors on failure

If this fails: If a custom MonitorType sets heartbeat.status to undefined or returns instead of throwing on failure, the monitor appears stuck in PENDING state and notifications never trigger

server/monitor-types/monitor-type.js:MonitorType.check
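A defensive wrapper around check() would turn this silent failure into an explicit DOWN. This is a sketch under the status codes described above (UP=1, DOWN=0, PENDING=2), not the project's actual call site:

```javascript
const UP = 1, DOWN = 0, PENDING = 2;

// Enforce the check() contract: if an implementation returns without setting
// a final status, treat that as a failure instead of leaving PENDING forever.
async function guardedCheck(monitorType, monitor, heartbeat, server) {
  heartbeat.status = PENDING;
  try {
    await monitorType.check(monitor, heartbeat, server);
    if (heartbeat.status !== UP && heartbeat.status !== DOWN) {
      // A check that "returns instead of throwing" would otherwise leave the
      // monitor stuck in PENDING and notifications would never trigger.
      throw new Error(`${monitorType.name}: check() returned without setting a final status`);
    }
  } catch (err) {
    heartbeat.status = DOWN;
    heartbeat.msg = String(err.message || err);
  }
  return heartbeat;
}
```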
critical Ordering weakly guarded

Assumes SQL patches in patchList are applied in object key iteration order, which is not guaranteed in JavaScript

If this fails: Database migrations may apply in wrong order causing foreign key constraint failures or missing columns when patches depend on each other

server/database.js:Database.patchList
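Making the ordering explicit removes the ambiguity. In practice modern engines iterate string keys in insertion order, but a declared dependency graph is unambiguous either way. The patch names and the `parents` shape below are illustrative, not a copy of the real patchList:

```javascript
// Hypothetical patch table: each patch declares the patches it depends on.
const patchList = {
  "patch-setting-value-type.sql": { parents: [] },
  "patch-2fa.sql": { parents: ["patch-setting-value-type.sql"] },
};

// Topological sort by declared parents, so dependencies always apply first
// and a cycle is reported instead of silently mis-ordering migrations.
function orderedPatches(patches) {
  const done = new Set();
  const result = [];
  const visit = (name, trail = new Set()) => {
    if (done.has(name)) return;
    if (trail.has(name)) throw new Error(`Cyclic patch dependency: ${name}`);
    trail.add(name);
    for (const parent of patches[name].parents) visit(parent, trail);
    done.add(name);
    result.push(name);
  };
  for (const name of Object.keys(patches)) visit(name);
  return result;
}
```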
critical Temporal unguarded

Assumes monitor intervals in monitorList are properly cleaned up when monitors are deleted - no explicit cleanup code visible for stopping intervals

If this fails: Deleted monitors continue executing their check intervals indefinitely, consuming CPU and potentially hitting rate limits on monitored services

server/uptime-kuma-server.js:monitorList
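The fix is mechanical: pair every setInterval with a clearInterval keyed by monitor id. A sketch, assuming a monitorList map of id to interval handle as described above:

```javascript
// Hypothetical registry of running monitor intervals: id -> interval handle.
const monitorList = new Map();

function startMonitor(monitor, tick) {
  stopMonitor(monitor.id); // never leave two intervals running for one monitor
  const handle = setInterval(tick, monitor.interval * 1000);
  monitorList.set(monitor.id, handle);
}

function stopMonitor(id) {
  const handle = monitorList.get(id);
  if (handle) {
    clearInterval(handle); // without this, deleted monitors keep checking forever
    monitorList.delete(id);
  }
}

// Delete path: stop the interval *before* removing the database row, so a
// crash between the two steps cannot leave an orphaned interval behind.
function deleteMonitor(id, db) {
  stopMonitor(id);
  db.delete(id);
}
```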
warning Scale unguarded

Assumes notification providers can handle unlimited concurrent sends without rate limiting - each monitor status change triggers immediate notification.send()

If this fails: When many monitors fail simultaneously (network outage), notification providers like Discord/Slack get flooded with requests and return 429 rate limit errors, causing notification delivery failures

server/notification-providers/notification-provider.js:NotificationProvider.send
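A per-provider send queue with a minimum gap between requests would absorb such bursts. This is a hedged sketch assuming alerts can tolerate a short delay; `ThrottledSender` is not part of Uptime Kuma:

```javascript
// Serialize sends per provider and space them out by minGapMs, so a burst of
// simultaneous monitor failures drains gradually instead of hitting 429s.
class ThrottledSender {
  constructor(sendFn, minGapMs = 250) {
    this.sendFn = sendFn;     // the underlying provider send, e.g. an HTTP POST
    this.minGapMs = minGapMs;
    this.queue = [];
    this.draining = false;
  }

  enqueue(payload) {
    return new Promise((resolve, reject) => {
      this.queue.push({ payload, resolve, reject });
      this.drain();
    });
  }

  async drain() {
    if (this.draining) return;   // one drain loop at a time
    this.draining = true;
    while (this.queue.length > 0) {
      const { payload, resolve, reject } = this.queue.shift();
      try {
        resolve(await this.sendFn(payload));
      } catch (err) {
        reject(err);             // one failed send does not block the rest
      }
      await new Promise((r) => setTimeout(r, this.minGapMs));
    }
    this.draining = false;
  }
}
```

A fuller version would also honor the provider's Retry-After header on 429 responses, but even a fixed gap prevents the thundering-herd pattern described above.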
warning Environment weakly guarded

Assumes WebSocket origin validation with 'cors-like' mode works correctly across all deployment scenarios - defaults to 'cors-like' if unset

If this fails: In reverse proxy setups or Docker containers, WebSocket connections may be rejected due to origin mismatch, breaking real-time dashboard updates without clear error messages

server/server.js:process.env.UPTIME_KUMA_WS_ORIGIN_CHECK
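To make the failure mode concrete, here is a sketch of what a "cors-like" origin check plausibly does: compare the WebSocket Origin header against the request Host. This is an illustrative reconstruction, not the code in server/server.js:

```javascript
// Hypothetical "cors-like" origin check for WebSocket upgrades.
function isOriginAllowed(originHeader, hostHeader, mode = "cors-like") {
  if (mode === "bypass") return true;  // opt-out for tricky reverse-proxy setups
  if (!originHeader) return true;      // non-browser clients send no Origin header
  try {
    const origin = new URL(originHeader);
    return origin.host === hostHeader; // host (including port) must match exactly
  } catch {
    return false;                      // malformed Origin header: reject
  }
}
```

The sketch shows why reverse proxies break this: if the proxy rewrites Host but the browser's Origin still names the public hostname and port, the comparison fails and the dashboard loses real-time updates.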
warning Domain unguarded

Assumes monitorJSON['type'] field always contains valid monitor type strings matching the hardcoded cases (push, ping, port, dns, etc)

If this fails: If database contains monitors with unknown or corrupted type values, extractAddress returns empty string, causing notification messages to show blank addresses for affected monitors

server/notification-providers/notification-provider.js:extractAddress
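An explicit default branch avoids the blank-address symptom. This is a hypothetical simplification of extractAddress, covering only a few of the real cases:

```javascript
// Simplified extractAddress with a defensive fallback for unknown types.
function extractAddress(monitorJSON) {
  switch (monitorJSON.type) {
    case "port":
      return `${monitorJSON.hostname}:${monitorJSON.port}`;
    case "ping":
    case "dns":
      return monitorJSON.hostname || "";
    case "push":
      return ""; // push monitors are contacted by the service, no address to show
    default:
      // Unknown or corrupted type values: fall back to whatever address-like
      // field exists on the record instead of silently rendering a blank.
      return monitorJSON.url || monitorJSON.hostname || "(no address)";
  }
}
```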
warning Shape unguarded

Assumes all setting values can be safely JSON.stringify()'d without circular references or functions

If this fails: Setting values containing circular objects or functions cause JSON.stringify to throw, leaving the setting in inconsistent state and potentially breaking system configuration

server/settings.js:Settings.set
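Validating before writing closes this gap: if serialization throws before any database access, the setting can never be left half-saved. A sketch, not Uptime Kuma's code; the WeakSet check is deliberately conservative and also rejects duplicate (shared) object references:

```javascript
// Reject functions and circular structures up front, via a JSON.stringify
// replacer, instead of discovering the problem mid-write.
function safeStringify(value) {
  const seen = new WeakSet();
  return JSON.stringify(value, (_key, v) => {
    if (typeof v === "function") {
      throw new TypeError("Setting values must not contain functions");
    }
    if (typeof v === "object" && v !== null) {
      if (seen.has(v)) {
        throw new TypeError("Setting values must not contain circular references");
      }
      seen.add(v);
    }
    return v;
  });
}

// Hypothetical setter: serialize first, so a throw happens before any write.
function setSetting(db, key, value) {
  const json = safeStringify(value);
  db.set(key, json);
}
```

Note that plain JSON.stringify silently *drops* functions rather than throwing, which is arguably worse for configuration data: the setting saves, but with fields missing.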
warning Resource unguarded

Assumes build environment has sufficient disk space for both gzip and brotli compressed assets during production builds

If this fails: In CI/CD environments with limited disk space, build fails silently when compression plugins run out of space writing compressed assets to tmp directory

config/vite.config.js:viteCompression
info Contract weakly guarded

Assumes the underlying apicache module honors headerBlacklist configuration and the cache-control override works correctly

If this fails: The comment 'BUG! Not working for the second request' indicates cache-control header override fails intermittently, causing client-side caching when server-side only caching was intended

server/modules/apicache/index.js:headerBlacklist

System Behavior

How the system operates at runtime — where data accumulates, what loops, what waits, and what controls what.

Data Pools

Monitor heartbeat history (database)
SQLite/MariaDB tables storing timestamped heartbeat records for uptime calculation, charting, and incident detection
Monitor registry (in-memory)
Runtime cache of active monitor instances with their execution intervals and current status
WebSocket connections (in-memory)
Active Socket.IO client connections for pushing real-time updates to dashboard users
Settings cache (cache)
In-memory cache of system configuration with 60-second TTL to avoid frequent database queries

Technology Stack

Vue.js (framework)
Powers the reactive dashboard frontend with real-time monitor status updates and configuration forms
Express.js (framework)
HTTP server handling API routes, static file serving, and WebSocket upgrade requests
Socket.IO (library)
Enables real-time bidirectional communication between server and dashboard for live monitor updates
RedBean ORM (library)
Database abstraction layer providing Active Record pattern for Monitor, Heartbeat, and other models
SQLite/MariaDB (database)
Primary data storage for monitor configurations, heartbeat history, user accounts, and system settings
Knex.js (library)
Database schema migrations and query building, particularly for MariaDB setup and complex queries
Axios (library)
HTTP client for executing monitor checks and sending webhook notifications
Vite (build)
Frontend build tool compiling Vue components, handling hot reload, and asset optimization

Frequently Asked Questions

What is uptime-kuma used for?

uptime-kuma monitors service uptime and sends notifications when services go down. louislam/uptime-kuma is a 7-component full-stack application written in JavaScript; data flows through 7 distinct pipeline stages, and the codebase contains 342 files.

How is uptime-kuma architected?

uptime-kuma is organized into 5 architecture layers: Web Interface, API & WebSocket Server, Monitor Engine, Notification System, and 1 more. Data flows through 7 distinct pipeline stages. This layered structure keeps concerns separated and modules independent.

How does data flow through uptime-kuma?

Data moves through 7 stages: Schedule monitor execution → Execute health check → Record check result → Trigger notifications → Send alerts → Update frontend → Render status pages. Each monitor uses its MonitorType (HTTP, TCP, ping, etc.) to check service health; results are stored as Heartbeat records, status changes trigger NotificationProviders on channels like Discord or Slack, and the frontend receives real-time updates via WebSocket for dashboards and public status pages. This pipeline design reflects a multi-stage processing system.

What technologies does uptime-kuma use?

The core stack includes Vue.js (Powers the reactive dashboard frontend with real-time monitor status updates and configuration forms), Express.js (HTTP server handling API routes, static file serving, and WebSocket upgrade requests), Socket.IO (Enables real-time bidirectional communication between server and dashboard for live monitor updates), RedBean ORM (Database abstraction layer providing Active Record pattern for Monitor, Heartbeat, and other models), SQLite/MariaDB (Primary data storage for monitor configurations, heartbeat history, user accounts, and system settings), Knex.js (Database schema migrations and query building, particularly for MariaDB setup and complex queries), and 2 more. A focused set of dependencies that keeps the build manageable.

What system dynamics does uptime-kuma have?

uptime-kuma exhibits 4 data pools (Monitor heartbeat history, Monitor registry), 3 feedback loops, 4 control points, 3 delays. The feedback loops handle polling and retry. These runtime behaviors shape how the system responds to load, failures, and configuration changes.

What design patterns does uptime-kuma use?

4 design patterns detected: Plugin architecture for monitors, Active Record pattern, Observer pattern for notifications, Template method for notifications.

Analyzed on April 20, 2026 by CodeSea.