There’s a category of problem where the obvious solution (buy a commercial product) and the interesting solution (build it yourself) are close enough in effort that you have to make a deliberate choice. Digital signage is one of those problems.
PanelOS is a Go backend that controls Raspberry Pi displays in real time. An admin sends a command; it reaches the display in milliseconds via WebSocket. The Go backend handles authentication, state, and orchestration — it publishes commands to Centrifugo, which manages the WebSocket connections, channel routing, and presence tracking. PostgreSQL holds the data. The clients are thin: connect, listen, render.
This post is about the architecture decisions — why I reached for Centrifugo instead of rolling my own WebSocket server, and what the whole thing looks like running in production.
The Architecture
Infrastructure
┌──────────────┐
│ PostgreSQL │
│ (data) │
└──────┬───────┘
│
┌──────▼──────────────────────────────────┐
│ Go Backend (:8080) │
│ │
│ - REST API (display control, auth) │
│ - JWT generation for Centrifugo auth │
│ - Publishes commands to Centrifugo │
└──────────────────┬──────────────────────┘
│ HTTP API
┌───────────▼──────────┐
│ Centrifugo (:8000) │
│ │
│ - WebSocket server │
│ - Channel routing │
│ - Presence tracking │
└──────────┬───────────┘
│ WebSocket
┌────────────┼────────────┐
│ │ │
┌────▼────┐ ┌────▼────┐ ┌────▼────┐
│ Admin │ │ Display │ │ Mobile │
│ Panel │ │ Client │ │ (soon) │
└─────────┘ └─────────┘ └─────────┘

The Go backend is the only thing that knows the full state of the system. Centrifugo knows nothing about content — it’s a message bus. The display clients know nothing about each other. This separation makes each piece easy to reason about independently.
The Go Backend
The backend exposes a REST API for display control and auth. When an admin makes a change, the backend updates PostgreSQL and then publishes a command to Centrifugo over its HTTP API. Centrifugo fans it out to every connected display on the appropriate channel.
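That publish step is a plain HTTP call. A minimal sketch, assuming Centrifugo v5’s `/api/publish` endpoint with `X-API-Key` header auth (older Centrifugo versions use a different endpoint shape) and a hypothetical `publishCommand` helper:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// publishCommand POSTs a command to Centrifugo's server HTTP API, which
// fans it out to every subscriber of the given channel. The endpoint and
// header assume Centrifugo v5.
func publishCommand(apiURL, apiKey, channel string, data any) error {
	body, err := json.Marshal(map[string]any{
		"channel": channel,
		"data":    data,
	})
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPost, apiURL+"/api/publish", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-API-Key", apiKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("centrifugo publish: unexpected status %s", resp.Status)
	}
	return nil
}
```

From the backend’s point of view that’s the entire real-time path: marshal, POST, done. Everything downstream is Centrifugo’s problem.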
The backend also issues JWTs for Centrifugo connection auth. When a display client connects, it first hits the Go backend for a token, then uses that token to authenticate the WebSocket connection with Centrifugo. Centrifugo validates the JWT — the backend doesn’t need to be involved in the connection at all after that.
Go’s standard library handles the HTTP layer. No framework. PostgreSQL via pgx. The whole thing compiles to a single binary.
Why Centrifugo
The obvious alternative is gorilla/websocket and a connection pool you manage yourself. I’ve done that before. You end up owning:
- Connection lifecycle (open, close, timeout, reconnect)
- Channel subscription logic
- Presence tracking (who is currently connected)
- Broadcast fan-out
- The operational burden of keeping it running correctly under load
Centrifugo handles all of that. It’s a dedicated WebSocket server with a clean HTTP API for publishing — you POST a message to it, it delivers to subscribers. Presence tracking is built in. Reconnection is handled on the client SDK side. The result is that the Go backend stays focused on business logic instead of WebSocket plumbing.
The tradeoff is an extra service to run and operate. That’s a real cost, but for a system where real-time delivery is the core feature, it’s the right call.
Raspberry Pi Setup
Each display runs a Raspberry Pi with FullPageOS — a Raspberry Pi OS variant that boots directly into Chromium pointing at a URL. The display client is a web app that authenticates with the Go backend, connects to Centrifugo, and renders whatever commands come through.
The setup for each Pi:
- Flash FullPageOS, configure the target URL
- Install Tailscale for remote access
- Configure Chromium in kiosk mode (no cursor, no browser chrome, no sleep)
- Set up PipeWire for audio
- Configure auto-login and auto-start
Tailscale was the right call for remote access. SSH into any display from anywhere without opening ports or touching the shop’s router. When something needs updating, it’s a two-minute SSH session rather than a site visit.
Running in Production
The system has been running since deployment without intervention. A few things that contribute to that:
Centrifugo handles reconnection. If a display loses connectivity, the client SDK reconnects and resubscribes automatically. No custom reconnection logic on the display side.
Stateless displays. The Pi stores nothing important locally. If one dies and needs to be reflashed, it’s back to full operation in under an hour.
Presence tracking is free. Because Centrifugo tracks presence natively, the admin panel always knows which displays are online — no polling, no heartbeat logic, no extra infrastructure.
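Reading that presence state back is one more call to the server API. A sketch, assuming Centrifugo v5’s `/api/presence` endpoint and that presence is enabled for the channel’s namespace in the Centrifugo config:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// onlineDisplays asks Centrifugo which clients are currently subscribed
// to a channel and returns their user IDs. Endpoint and response shape
// assume Centrifugo v5 with presence enabled for the namespace.
func onlineDisplays(apiURL, apiKey, channel string) ([]string, error) {
	body, err := json.Marshal(map[string]string{"channel": channel})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, apiURL+"/api/presence", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-API-Key", apiKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("centrifugo presence: unexpected status %s", resp.Status)
	}

	// Presence comes back keyed by client connection ID; we only care
	// about the user ID attached to each connection.
	var parsed struct {
		Result struct {
			Presence map[string]struct {
				User string `json:"user"`
			} `json:"presence"`
		} `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&parsed); err != nil {
		return nil, err
	}
	users := make([]string, 0, len(parsed.Result.Presence))
	for _, info := range parsed.Result.Presence {
		users = append(users, info.User)
	}
	return users, nil
}
```

The admin panel can call this on demand rather than maintaining its own heartbeat bookkeeping.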
The failure modes are boring. If the Go backend goes down, displays stay on whatever they last received. If Centrifugo goes down, displays reconnect when it comes back. Nothing catastrophic happens at the edges when the center has a bad moment.