High-performance, database-backed MQTT broker built on Vert.x and Hazelcast.
Store messages persistently, scale horizontally, and integrate with AI models.
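Because MonsterMQ speaks standard MQTT, any off-the-shelf client can connect. A minimal sketch using the Eclipse Paho Java client from Kotlin, assuming a broker listening on the default port 1883; the host, client ID, and topic names are placeholders for illustration:

```kotlin
import org.eclipse.paho.client.mqttv3.MqttClient
import org.eclipse.paho.client.mqttv3.MqttConnectOptions
import org.eclipse.paho.client.mqttv3.MqttMessage
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

fun main() {
    // Assumes a MonsterMQ node listening on the default MQTT port.
    val client = MqttClient("tcp://localhost:1883", "demo-client", MemoryPersistence())
    client.connect(MqttConnectOptions().apply { isCleanSession = false })

    // Subscribe with an illustrative topic filter; any standard MQTT filter works.
    client.subscribe("factory/line1/#") { topic, message ->
        println("$topic -> ${String(message.payload)}")
    }

    // Publish a retained value; the broker stores it in the configured database.
    val msg = MqttMessage("21.5".toByteArray()).apply {
        qos = 1
        isRetained = true
    }
    client.publish("factory/line1/temperature", msg)
}
```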
Choose from PostgreSQL, MongoDB, CrateDB, or SQLite. Store retained messages and persistent sessions with enterprise-grade reliability.
Built-in Hazelcast clustering enables seamless horizontal scaling. Deploy anywhere from the production line to the enterprise, with automatic failover and load balancing.
Integrated MCP (Model Context Protocol) server allows AI models to query real-time and historical MQTT data directly.
Native OPC UA client with cluster-aware device management and certificate-based security. Unified archiving ensures industrial data flows through the same central system as MQTT messages. Supports browse paths, node IDs, and wildcard subscriptions for flexible device connectivity.
High-performance bulk transfer from Siemens WinCC Open Architecture SCADA systems. Subscribe to millions of datapoints with a single continuous SQL query leveraging WinCC Open Architecture's powerful dpQueryConnectSingle function. Stream tag values and alerts directly to MQTT topics. Efficient handling of high-volume data changes without per-message overhead.
Modern SCADA integration via GraphQL/WebSocket for Siemens WinCC Unified. Real-time tag value subscriptions with flexible name filters and wildcards. Stream active alarms and alerts with complete alarm details including state, priority, and timestamps. Optional OPC UA quality information for tag values including quality codes and status flags.
Built-in MQTT bridge functionality enables bidirectional message flow between brokers. Forward messages to remote brokers with topic filters and transformations, or consume messages from external MQTT brokers. Perfect for building hierarchical broker architectures and connecting distributed systems without external tools.
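The bridge itself is built into the broker and configured there. Purely to illustrate the concept, the sketch below does the same thing client-side with Paho: it forwards messages matching a topic filter from a local broker to a remote one and rewrites the topic on the way. Broker addresses, client IDs, and topic names are placeholders, not MonsterMQ's bridge configuration:

```kotlin
import org.eclipse.paho.client.mqttv3.MqttClient
import org.eclipse.paho.client.mqttv3.MqttMessage
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

fun main() {
    // Placeholder addresses: a local edge broker and a remote central broker.
    val local = MqttClient("tcp://localhost:1883", "bridge-out", MemoryPersistence())
    val remote = MqttClient("tcp://central.example.com:1883", "bridge-in", MemoryPersistence())
    local.connect()
    remote.connect()

    // Forward everything below an illustrative topic filter to the remote broker,
    // prefixing the topic as a simple transformation.
    local.subscribe("plant1/#") { topic, message ->
        val copy = MqttMessage(message.payload).apply {
            qos = message.qos
            isRetained = message.isRetained
        }
        remote.publish("site/plant1/${topic.removePrefix("plant1/")}", copy)
    }
}
```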
Visual flow-based programming with JavaScript runtime. Create data processing pipelines with drag-and-drop nodes. Transform, filter, and aggregate MQTT messages in real-time with reusable flow templates and instance-specific configuration.
Query current topic states and historical data through a modern GraphQL interface with support for real-time subscriptions.
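Any GraphQL-capable HTTP client can talk to the interface. A minimal sketch using the JDK's built-in HttpClient from Kotlin; the endpoint path, port, and query field names are assumptions for illustration, not MonsterMQ's actual schema:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Assumed endpoint and field names; consult the MonsterMQ docs for the real schema.
    val body = """{"query": "{ currentValue(topic: \"factory/line1/temperature\") { topic payload time } }"}"""
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:4000/graphql"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
}
```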
All data stored in central databases can be queried with standard SQL, enabling powerful analytics and reporting.
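Because archived messages live in ordinary database tables, any SQL tool or JDBC driver can report on them. A sketch against PostgreSQL; the connection details, table, and column names are illustrative assumptions, not MonsterMQ's actual schema:

```kotlin
import java.sql.DriverManager

fun main() {
    // Assumed connection details and schema, for illustration only.
    val url = "jdbc:postgresql://localhost:5432/monstermq"
    DriverManager.getConnection(url, "monster", "secret").use { conn ->
        conn.prepareStatement(
            """
            SELECT topic, COUNT(*) AS messages, MAX(time) AS last_seen
            FROM message_archive
            WHERE topic LIKE 'factory/line1/%'
            GROUP BY topic
            ORDER BY messages DESC
            """.trimIndent()
        ).use { stmt ->
            stmt.executeQuery().use { rs ->
                while (rs.next()) {
                    println("${rs.getString("topic")}: ${rs.getLong("messages")} messages, last ${rs.getTimestamp("last_seen")}")
                }
            }
        }
    }
}
```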
TLS/SSL support, user authentication with BCrypt, fine-grained ACL rules, and database-level security.
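Authentication and TLS are handled through standard MQTT client options. A sketch with Paho, assuming TLS is enabled on port 8883 and a user has been created on the broker; host, port, and credentials are placeholders:

```kotlin
import org.eclipse.paho.client.mqttv3.MqttClient
import org.eclipse.paho.client.mqttv3.MqttConnectOptions
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence
import javax.net.ssl.SSLSocketFactory

fun main() {
    // Placeholder host and credentials; the broker checks the password against its BCrypt hash.
    val client = MqttClient("ssl://broker.example.com:8883", "secure-client", MemoryPersistence())
    val options = MqttConnectOptions().apply {
        userName = "factory-user"
        password = "change-me".toCharArray()
        socketFactory = SSLSocketFactory.getDefault()   // default JVM trust store
    }
    client.connect(options)

    // Which topics this client may read or write is governed by the broker's ACL rules.
    client.subscribe("factory/line1/#") { topic, msg ->
        println("$topic -> ${String(msg.payload)}")
    }
}
```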
Interactive topic tree navigation with search capabilities and message inspection. Browse MQTT topic hierarchies with expandable nodes and view message data in real-time.
Complete web-based management interface for archive groups, users, and system configuration. Real-time updates without broker restarts.
MonsterMQ adapts to your infrastructure. Choose the database that fits your needs,
from lightweight SQLite for development to distributed CrateDB for time-series analytics.
- PostgreSQL: Best for production deployments with full SQL requirements
- MongoDB: Best for NoSQL workloads and flexible data models
- CrateDB: Best for time-series data and IoT analytics
- SQLite: Best for development and single-instance deployments
- Apache Kafka: Best for stream analytics and event-driven architectures
- Hazelcast: Best for high-speed last values and distributed caching
| Database | Session Store | Retained Store | Message Archive | Clustering | SQL Queries |
|---|---|---|---|---|---|
| PostgreSQL | ✓ | ✓ | ✓ | ✓ | ✓ |
| MongoDB | ✓ | ✓ | ✓ | ✓ | ✗ |
| CrateDB | ✓ | ✓ | ✓ | ✓ | ✓ |
| SQLite | ✓ | ✓ | ✓ | ✗ | ✓ |
| Apache Kafka | ✗ | ✗ | ✓ | ✓ | ✗ |
| Hazelcast | ✗ | ✓ | ✗ | ✓ | ✗ |
Build hierarchical MQTT infrastructures from edge to cloud. MonsterMQ's Hazelcast-based
clustering ensures data flows efficiently without duplication or unnecessary replication.
- Cloud: central data lake, AI/ML processing, business intelligence
- Regional (Hazelcast cluster): aggregation, regional analytics, load balancing
- Edge: production machines, sensors, PLCs with local message processing
Topic filters ensure only relevant data moves between levels, reducing bandwidth and storage costs.
Hazelcast clustering provides automatic failover and session migration between cluster nodes.
All cluster nodes share a central database, ensuring consistent state and enabling SQL analytics.
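From a client's point of view, failover simply means listing more than one cluster node and letting the client reconnect. A sketch using Paho's server-URI list, assuming two MonsterMQ nodes in the same cluster; the node addresses are placeholders:

```kotlin
import org.eclipse.paho.client.mqttv3.MqttClient
import org.eclipse.paho.client.mqttv3.MqttConnectOptions
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

fun main() {
    val client = MqttClient("tcp://node1.example.com:1883", "ha-client", MemoryPersistence())
    val options = MqttConnectOptions().apply {
        // Placeholder node addresses; the client tries them in order until one accepts the connection.
        serverURIs = arrayOf("tcp://node1.example.com:1883", "tcp://node2.example.com:1883")
        isAutomaticReconnect = true
        isCleanSession = false   // persistent session state is kept in the shared database/cluster
    }
    client.connect(options)
    client.subscribe("factory/#", 1)
    println("connected: ${client.isConnected}")
}
```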