louislam/uptime-kuma
A fancy self-hosted monitoring tool
Monitors service uptime and sends notifications when services go down
Under the hood, the system uses 3 feedback loops, 4 data pools, and 4 control points to manage its runtime behavior.
A 7-component fullstack application. 342 files analyzed. Data flows through 7 distinct pipeline stages.
How Data Flows Through the System
The system runs monitors on scheduled intervals, where each monitor uses its specific MonitorType (HTTP, TCP, ping, etc.) to check service health. Results are stored as Heartbeat records in the database. When a monitor changes status (up to down or vice versa), NotificationProviders send alerts through configured channels like Discord or Slack. The frontend receives real-time updates via WebSocket and displays current status on dashboards and public status pages. The seven stages are listed below, followed by a simplified sketch of the core loop.
- Schedule monitor execution — UptimeKumaServer.startMonitor() creates an interval for each active monitor based on its configured check frequency, storing the running intervals in monitorList [Monitor]
- Execute health check — Monitor.beat() delegates to the appropriate MonitorType.check() (HttpMonitorType, TcpMonitorType, etc.), which performs the actual connectivity test and measures response time [Monitor → Heartbeat]
- Record check result — A Heartbeat record is saved to the database with status (UP=1, DOWN=0), ping time, and error message, if any. The previous status is compared to detect changes [Heartbeat]
- Trigger notifications — If the monitor's status changed, Monitor.sendNotification() loads the configured NotificationProviders and calls their send() method with monitor context and heartbeat data [Heartbeat]
- Send alerts — Each NotificationProvider (Discord, Slack, Webhook, etc.) formats the alert message using templates and sends it to the external service via its HTTP API [NotificationProvider]
- Update frontend — The Socket.IO server pushes heartbeat updates to connected clients via the sendHeartbeat event, updating dashboard monitors in real time [Heartbeat]
- Render status pages — Public status pages query monitor heartbeats filtered by configured tags and display service status using the StatusPage configuration [StatusPage]
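Taken together, the first four stages form a tight loop. The sketch below is a hypothetical simplification: the checkers map and onHeartbeat callback are illustrative names, not Uptime Kuma's actual API.

```javascript
// Hypothetical simplification of the schedule → check → record → notify loop.
// The real implementation lives in UptimeKumaServer.startMonitor() and Monitor.beat().
const monitorList = {}; // monitor id -> interval handle

function startMonitor(monitor, checkers, onHeartbeat) {
    monitorList[monitor.id] = setInterval(async () => {
        const beat = { monitorID: monitor.id, time: new Date().toISOString() };
        const started = Date.now();
        try {
            await checkers[monitor.type](monitor); // e.g. HTTP GET, TCP connect, ICMP ping
            beat.status = 1; // UP
            beat.msg = "OK";
        } catch (err) {
            beat.status = 0; // DOWN
            beat.msg = err.message;
        }
        beat.ping = Date.now() - started;
        onHeartbeat(beat); // persist, diff against previous status, notify, emit over Socket.IO
    }, monitor.interval * 1000);
}
```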
Data Models
The data structures that flow between stages — the contracts that hold the system together.
server/model/monitor.js — Active Record model with id: number, name: string, type: string (http|tcp|ping|dns|push|etc), url: string, interval: number, active: boolean, plus type-specific fields like headers, method, keyword
Created via the frontend form, continuously executed by the monitor engine, updated with the latest heartbeat status
server/model/heartbeat.js — Active Record model with id: number, monitor_id: number, status: number (0=down, 1=up, 2=pending), time: datetime, ping: number, msg: text
Generated each time a monitor check runs, stored in the database for history, triggers notifications on status changes
server/notification-providers/notification-provider.js — Base class with name: string, a send(notification, msg, monitorJSON, heartbeatJSON) method, and subclasses for Discord, Slack, Webhook, etc.
Instantiated when a notification is needed, receives monitor context and heartbeat data, formats and sends the message through the external service
server/monitor-types/monitor-type.js — Base class with name: string, a check(monitor, heartbeat, server) method, and subclasses for HTTP, TCP, Ping, and DNS checks
Registered at startup, invoked by the scheduler based on the monitor interval, performs the actual connectivity test and updates the heartbeat
server/model/status_page.js — Active Record model with id: number, slug: string, title: string, description: text, theme: string, published: boolean, show_tags: boolean
Created via the admin interface, maps to public URLs, displays filtered monitor statuses for external users
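To make the Active Record shape concrete, here is a minimal sketch of writing and reading a heartbeat in the redbean-node style these models follow; treat the exact calls and the recordHeartbeat helper as assumptions, not the project's code.

```javascript
const { R } = require("redbean-node");

// Sketch: persist one check result using the Heartbeat fields listed above.
async function recordHeartbeat(monitorID, status, ping, msg) {
    const bean = R.dispense("heartbeat");
    bean.monitor_id = monitorID;
    bean.status = status; // 0 = down, 1 = up, 2 = pending
    bean.time = new Date().toISOString();
    bean.ping = ping;     // response time in ms
    bean.msg = msg;       // error message, if any
    await R.store(bean);
}

// Sketch: fetch the latest heartbeat to compare statuses for change detection.
async function latestHeartbeat(monitorID) {
    return R.findOne("heartbeat", " monitor_id = ? ORDER BY time DESC ", [monitorID]);
}
```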
Hidden Assumptions
Things this code relies on but never validates. These are the things that cause silent failures when the system changes.
Assumes the cache cleaner interval runs indefinitely without memory-pressure checks - the Settings.cacheCleaner interval never stops once started, and cache entries accumulate between runs
If this fails: Under high load with many setting reads, the cache grows unbounded between 60-second cleanups, potentially causing memory exhaustion in containers with strict memory limits
server/settings.js:Settings.get
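One way to contain this risk is a size-capped TTL cache. A minimal sketch, assuming a Map-based store; the maxEntries cap and eviction policy are illustrative and not present in settings.js.

```javascript
// Bounded TTL cache sketch: entries expire after ttlMs, and the oldest entry
// is evicted when the size cap is hit, so memory stays bounded between cleanups.
class TTLCache {
    constructor(ttlMs = 60_000, maxEntries = 1000) {
        this.ttlMs = ttlMs;
        this.maxEntries = maxEntries;
        this.map = new Map(); // key -> { value, expires }
    }

    set(key, value) {
        if (!this.map.has(key) && this.map.size >= this.maxEntries) {
            this.map.delete(this.map.keys().next().value); // evict oldest (Map keeps insertion order)
        }
        this.map.set(key, { value, expires: Date.now() + this.ttlMs });
    }

    get(key) {
        const hit = this.map.get(key);
        if (!hit) return undefined;
        if (Date.now() > hit.expires) {
            this.map.delete(key);
            return undefined;
        }
        return hit.value;
    }
}
```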
Assumes all MonitorType.check() implementations follow the contract of setting heartbeat.status to UP on success and throwing descriptive errors on failure
If this fails: If a custom MonitorType sets heartbeat.status to undefined or returns instead of throwing on failure, the monitor appears stuck in PENDING state and notifications never trigger
server/monitor-types/monitor-type.js:MonitorType.check
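For contrast, here is a sketch of a custom check that honors the contract: set heartbeat.status to UP on success and throw a descriptive Error on failure. The class shape follows monitor-type.js; the type name and import paths are assumptions.

```javascript
const net = require("net");
const { MonitorType } = require("./monitor-type");
const { UP } = require("../../src/util");

class TcpPortOpenMonitorType extends MonitorType {
    name = "tcp-port-open"; // hypothetical type name

    async check(monitor, heartbeat, _server) {
        await new Promise((resolve, reject) => {
            const socket = net.createConnection({ host: monitor.hostname, port: monitor.port, timeout: 5000 });
            socket.on("connect", () => { socket.end(); resolve(); });
            socket.on("timeout", () => { socket.destroy(); reject(new Error("connection timed out")); });
            socket.on("error", reject); // throw a descriptive error, never return silently
        });
        heartbeat.status = UP;          // required on success, or the monitor stays PENDING
        heartbeat.msg = "Port is open";
    }
}
```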
Assumes SQL patches in patchList are applied in object key iteration order - modern JavaScript engines preserve insertion order for string keys but hoist integer-like keys to the front, so the ordering is fragile rather than guaranteed
If this fails: Database migrations may apply in wrong order causing foreign key constraint failures or missing columns when patches depend on each other
server/database.js:Database.patchList
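The nuance is worth pinning down: since ES2015, engines enumerate integer-like keys first in ascending order, then string keys in insertion order. A short demonstration:

```javascript
// Object key iteration is only partially ordered: integer-like keys are
// hoisted to the front, so mixing them with patch names breaks ordering.
const patchList = {
    "patch-b.sql": true,
    "10": true,        // integer-like key
    "patch-a.sql": true,
    "2": true,         // integer-like key
};

console.log(Object.keys(patchList));
// => [ '2', '10', 'patch-b.sql', 'patch-a.sql' ]
// An array of { name, parents } entries would make the ordering explicit.
```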
Assumes monitor intervals in monitorList are properly cleaned up when monitors are deleted - no explicit cleanup code visible for stopping intervals
If this fails: Deleted monitors continue executing their check intervals indefinitely, consuming CPU and potentially hitting rate limits on monitored services
server/uptime-kuma-server.js:monitorList
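A defensive stop path would clear the interval and drop the registry entry whenever a monitor is deleted or deactivated. A sketch, reusing the monitorList shape described above; the stopMonitor name is illustrative.

```javascript
// Sketch: make monitor deletion also stop the running check loop.
function stopMonitor(monitorList, monitorID) {
    const handle = monitorList[monitorID];
    if (handle) {
        clearInterval(handle);         // no more scheduled checks
        delete monitorList[monitorID]; // free the registry slot
    }
}
```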
Assumes notification providers can handle unlimited concurrent sends without rate limiting - each monitor status change triggers immediate notification.send()
If this fails: When many monitors fail simultaneously (network outage), notification providers like Discord/Slack get flooded with requests and return 429 rate limit errors, causing notification delivery failures
server/notification-providers/notification-provider.js:NotificationProvider.send
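A per-provider send queue with a minimum gap is one mitigation. The sketch below serializes sends so a mass outage does not burst hundreds of requests at once; the queue, gap, and sendRateLimited helper are assumptions, not Uptime Kuma's implementation.

```javascript
const queues = new Map(); // provider name -> tail of its promise chain

// Sketch: chain each provider's sends so they run one at a time with a
// minimum gap, absorbing notification bursts during mass outages.
function sendRateLimited(provider, args, minGapMs = 500) {
    const prev = queues.get(provider.name) || Promise.resolve();
    const next = prev
        .then(() => provider.send(...args))
        .catch((err) => console.error(`notify via ${provider.name} failed: ${err.message}`))
        .then(() => new Promise((resolve) => setTimeout(resolve, minGapMs)));
    queues.set(provider.name, next);
    return next;
}
```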
Assumes WebSocket origin validation with 'cors-like' mode works correctly across all deployment scenarios - defaults to 'cors-like' if unset
If this fails: In reverse proxy setups or Docker containers, WebSocket connections may be rejected due to origin mismatch, breaking real-time dashboard updates without clear error messages
server/server.js:process.env.UPTIME_KUMA_WS_ORIGIN_CHECK
Assumes monitorJSON['type'] field always contains valid monitor type strings matching the hardcoded cases (push, ping, port, dns, etc)
If this fails: If database contains monitors with unknown or corrupted type values, extractAddress returns empty string, causing notification messages to show blank addresses for affected monitors
server/notification-providers/notification-provider.js:extractAddress
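A defensive variant would fall back to the monitor's name for unknown types instead of returning an empty string. A sketch with an abbreviated case list; the cases in the real extractAddress differ.

```javascript
// Sketch: never let an unknown monitor type produce a blank address.
function extractAddress(monitorJSON) {
    switch (monitorJSON.type) {
        case "http":
        case "keyword":
            return monitorJSON.url || "";
        case "port":
            return `${monitorJSON.hostname}:${monitorJSON.port}`;
        case "ping":
        case "dns":
            return monitorJSON.hostname || "";
        default:
            return monitorJSON.name || ""; // fallback for unknown/corrupted types
    }
}
```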
Assumes all setting values can be safely JSON.stringify()'d - circular structures make JSON.stringify throw, while functions are silently dropped
If this fails: A setting value containing a circular structure makes JSON.stringify throw, leaving the setting in an inconsistent state and potentially breaking system configuration; values containing functions are silently stripped
server/settings.js:Settings.set
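A trial stringify before writing would fail fast and keep the stored configuration consistent. A minimal sketch; the assertSerializable helper is hypothetical.

```javascript
// Sketch: validate a setting value before persisting it.
function assertSerializable(key, value) {
    let json;
    try {
        json = JSON.stringify(value); // throws on circular structures and BigInt
    } catch (err) {
        throw new Error(`Setting "${key}" is not JSON-serializable: ${err.message}`);
    }
    if (json === undefined) {
        // Functions and bare undefined serialize to nothing and would be silently lost.
        throw new Error(`Setting "${key}" has no JSON representation`);
    }
    return json;
}
```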
Assumes build environment has sufficient disk space for both gzip and brotli compressed assets during production builds
If this fails: In CI/CD environments with limited disk space, build fails silently when compression plugins run out of space writing compressed assets to tmp directory
config/vite.config.js:viteCompression
Assumes the underlying apicache module honors headerBlacklist configuration and the cache-control override works correctly
If this fails: The comment 'BUG! Not working for the second request' indicates cache-control header override fails intermittently, causing client-side caching when server-side only caching was intended
server/modules/apicache/index.js:headerBlacklist
System Behavior
How the system operates at runtime — where data accumulates, what loops, what waits, and what controls what.
Data Pools
- Monitor heartbeat history — SQLite/MariaDB tables storing timestamped heartbeat records for uptime calculation, charting, and incident detection
- Monitor registry — Runtime cache of active monitor instances with their execution intervals and current status
- WebSocket connections — Active Socket.IO client connections for pushing real-time updates to dashboard users
- Settings cache — In-memory cache of system configuration with a 60-second TTL to avoid frequent database queries
Feedback Loops
- Monitor check scheduling (polling, reinforcing) — Trigger: Monitor interval timer expires. Action: Execute MonitorType.check() and update heartbeat status. Exit: Monitor becomes inactive or server shuts down.
- Notification retry logic (retry, balancing) — Trigger: NotificationProvider.send() throws an error. Action: Retry sending the notification with exponential backoff (sketched after this list). Exit: Success or maximum retry attempts reached.
- Cache invalidation cycle (cache-invalidation, balancing) — Trigger: Settings cache cleaner interval (60 seconds). Action: Remove cached settings older than 60 seconds from Settings.cacheList. Exit: Cache cleaner interval continues indefinitely.
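The retry loop translates to a compact pattern, sketched below; the attempt count and base delay are assumptions rather than Uptime Kuma's actual values.

```javascript
// Sketch: retry with exponential backoff, exiting on success or max attempts.
async function retryWithBackoff(fn, maxAttempts = 5, baseDelayMs = 1000) {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return await fn();
        } catch (err) {
            if (attempt === maxAttempts) throw err;         // exit: max retries reached
            const delay = baseDelayMs * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
            await new Promise((resolve) => setTimeout(resolve, delay));
        }
    }
}
```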
Delays
- Monitor check intervals (scheduled-job, ~1 second to 24 days (configurable per monitor)) — Determines how quickly failures are detected and reported
- Database connection pooling (async-processing) — Monitor checks may queue if database connections are exhausted
- Socket.IO event buffering (async-processing) — Real-time updates may be delayed under high load
Control Points
- UPTIME_KUMA_PORT (env-var) — Controls: HTTP server port binding (default 3001). Default: 3001
- UPTIME_KUMA_WS_ORIGIN_CHECK (env-var) — Controls: WebSocket origin validation (cors-like or bypass). Default: cors-like
- NODE_ENV (env-var) — Controls: Development vs production behavior, logging verbosity. Default: production
- Monitor interval limits (threshold) — Controls: Minimum 1 second, maximum 24 days between checks. Default: MIN_INTERVAL_SECOND=1, MAX_INTERVAL_SECOND=2073600
Technology Stack
- Vue.js — Powers the reactive dashboard frontend with real-time monitor status updates and configuration forms
- Express.js — HTTP server handling API routes, static file serving, and WebSocket upgrade requests
- Socket.IO — Enables real-time bidirectional communication between server and dashboard for live monitor updates
- RedBean ORM — Database abstraction layer providing the Active Record pattern for Monitor, Heartbeat, and other models
- SQLite/MariaDB — Primary data storage for monitor configurations, heartbeat history, user accounts, and system settings
- Knex.js — Database schema migrations and query building, particularly for MariaDB setup and complex queries
- Axios — HTTP client for executing monitor checks and sending webhook notifications
- Vite — Frontend build tool compiling Vue components, handling hot reload, and asset optimization
Key Components
- UptimeKumaServer (orchestrator) — Central server instance that manages monitor execution, WebSocket connections, and coordinates all system components (server/uptime-kuma-server.js)
- Database (store) — Manages SQLite/MariaDB connections, handles database migrations, and provides the data access layer for all models (server/database.js)
- Monitor.beat (processor) — Executes individual monitor checks by delegating to the appropriate MonitorType, measures response time, and determines up/down status (server/model/monitor.js)
- NotificationProvider (dispatcher) — Base class for 93+ notification channels that format monitor status changes into service-specific messages and deliver them (server/notification-providers/notification-provider.js)
- Settings (registry) — Centralized configuration store with in-memory caching that manages system settings, user preferences, and feature flags (server/settings.js)
- MonitorType.check (executor) — Pluggable check implementations for HTTP, TCP, ping, DNS, and other protocols that perform the actual connectivity tests (server/monitor-types/monitor-type.js)
- Socket.IO handlers (gateway) — WebSocket event handlers that process real-time frontend requests and push monitor status updates to connected clients (server/socket-handlers/)
Frequently Asked Questions
What is uptime-kuma used for?
uptime-kuma monitors service uptime and sends notifications when services go down. louislam/uptime-kuma is a 7-component fullstack application written in JavaScript; data flows through 7 distinct pipeline stages, and the codebase contains 342 files.
How is uptime-kuma architected?
uptime-kuma is organized into 5 architecture layers: Web Interface, API & WebSocket Server, Monitor Engine, Notification System, and 1 more. Data flows through 7 distinct pipeline stages. This layered structure keeps concerns separated and modules independent.
How does data flow through uptime-kuma?
Data moves through 7 stages: Schedule monitor execution → Execute health check → Record check result → Trigger notifications → Send alerts → .... Monitors run on scheduled intervals, write Heartbeat records to the database, trigger NotificationProviders on status changes, and push real-time updates to the frontend over WebSocket. This pipeline design reflects a complex multi-stage processing system.
What technologies does uptime-kuma use?
The core stack includes Vue.js (Powers the reactive dashboard frontend with real-time monitor status updates and configuration forms), Express.js (HTTP server handling API routes, static file serving, and WebSocket upgrade requests), Socket.IO (Enables real-time bidirectional communication between server and dashboard for live monitor updates), RedBean ORM (Database abstraction layer providing Active Record pattern for Monitor, Heartbeat, and other models), SQLite/MariaDB (Primary data storage for monitor configurations, heartbeat history, user accounts, and system settings), Knex.js (Database schema migrations and query building, particularly for MariaDB setup and complex queries), and 2 more. A focused set of dependencies that keeps the build manageable.
What system dynamics does uptime-kuma have?
uptime-kuma exhibits 4 data pools (including monitor heartbeat history and the monitor registry), 3 feedback loops, 4 control points, and 3 delays. The feedback loops handle polling and retry. These runtime behaviors shape how the system responds to load, failures, and configuration changes.
What design patterns does uptime-kuma use?
4 design patterns detected: Plugin architecture for monitors, Active Record pattern, Observer pattern for notifications, Template method for notifications.
Analyzed on April 20, 2026 by CodeSea. Written by Karolina Sarna.