Architecture
How DINOC works
A clear separation between collection, processing, storage, and AI: each component is replaceable, and all communication is encrypted.
How data flows
Collection
DINOC Agents run on target hosts and push collected metrics to the server over gRPC (push), or the server polls devices directly via SNMP and ICMP (pull). All agent-to-server communication is encrypted with mTLS.
Processing
The DINOC Server receives, normalizes, and validates telemetry. Alert rules are evaluated in real time. Topology data is updated as LLDP/CDP neighbor information arrives.
Alerting & AI Query
Alert state changes trigger webhooks and notification channels. When a user issues an AI query, the DINOC Server sends context (recent metrics + alert history) to the local Ollama LLM and streams the response back to the UI.
Component breakdown
Monitored Devices
Any network-attached device: routers, switches, servers, CPEs, firewalls, UPS units.
DINOC Agent
Lightweight Go binary that collects metrics from the host and pushes them to the DINOC Server over an authenticated, encrypted gRPC channel.
DINOC Server
The core processing engine. Handles metric ingestion, alert evaluation, topology tracking, user authentication (RBAC), and AI query routing.
VictoriaMetrics
High-performance time-series database for all network metrics. Stores raw data at full resolution and downsampled views for long-term retention.
Ollama (Local LLM)
On-premise language model runtime. DINOC sends AI queries with metric context via HTTP. Zero data leaves your infrastructure.