
Corsano On‑Premises Deployment Guide

Last updated: 2025‑08‑21

Version: 0.2

Introduction

The On-Premises Corsano Cloud provides hospitals with a fully self-contained deployment of the Corsano Health platform, installed on a Linux server within the hospital’s firewall. This setup ensures that all patient data remains strictly on-site, with no connection to external cloud services. Corsano Health does not have access to any patient data, offering maximum privacy and data sovereignty.

The On-Premises solution includes the same capabilities as the Corsano Health Cloud—secure device connectivity, continuous vital signs monitoring, and API access—while giving hospitals complete control over their infrastructure and compliance environment.


Scope & System Diagram

┌──────────────┐   BLE Adv.   ┌─────────────────────┐   HTTPS   ┌──────────────┐
│  Bracelets   │ ───────────▶ │ Cisco AP + 9800 WLC │ ────────▶ │  Python BLE  │
│  (patients)  │              │  (BLE to mDNS/JSON) │           │   Service    │
└──────────────┘              └─────────────────────┘           └──────┬───────┘
                                                                       │
                              ┌─────────────────┐                      │
                              │   Backend API   │ ◀────────────────────┘
                              │    (Docker)     │
                              └─────────────────┘

                              ┌─────────────────┐
                              │   Front‑end     │
                              │    (Nginx)      │
                              └─────────────────┘

A single Linux host can run the three application containers (API, ble_app, UI) plus an optional reverse‑proxy. Cisco hardware sits on the hospital network.


Hardware & Software Prerequisites

| Item | Recommended | Notes |
| --- | --- | --- |
| Host OS | Ubuntu 22.04 LTS 64‑bit | Any modern systemd Linux is fine |
| CPU / RAM | 4 physical cores / 8 threads, 16 GiB | Sizing aligns with expected load (≈120 bracelets) |
| Disk | 30 GiB free | Docker images + patient logs |
| Docker Engine | 25.x | Includes Compose v2 plugin |
| Git | latest | To pull repositories |
| Cisco Catalyst 9800‑L/‑40/‑80 | IOS‑XE 17.3+ | BLE IoT Radio feature licensed |
| Cisco AP | 9100 or 9300 series | BLE Beacon/Scanning enabled |
| Network | Layer‑3 reachability Host ↔ WLC | Host polls WLC or receives mDNS streams |
| Corsano Bracelet | | |
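The host-side rows of the table can be checked with a short preflight script before installation. This is a sketch assuming a GNU/Linux host (nproc, /proc/meminfo, and GNU df available), not part of the shipped tooling:

```shell
#!/bin/sh
# Preflight check mirroring the Recommended column above.
cores=$(nproc 2>/dev/null || echo 0)
mem_gib=$(awk '/MemTotal/ {printf "%d", $2 / 1048576}' /proc/meminfo 2>/dev/null || echo 0)
disk_gib=$(df -BG --output=avail / 2>/dev/null | tail -n 1 | tr -dc '0-9')

echo "CPU cores:     $cores (want >= 4)"
echo "RAM (GiB):     $mem_gib (want >= 16)"
echo "Disk free GiB: ${disk_gib:-0} (want >= 30)"

if command -v docker >/dev/null 2>&1; then
    echo "Docker:        $(docker --version)"
else
    echo "Docker:        NOT INSTALLED"
fi
```

The Cisco rows (WLC release, AP model, BLE licensing) still need to be verified on the controller itself.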
Directory Layout

/opt/corsano-on-premises/
├── backend/        # corsano on-premises backend repo
├── corsano_multi/  # corsano‑ble‑collector repo
├── frontend/       # corsano-local-cloud repo
├── compose/        # docker‑compose.yml + .env
└── data/           # named volumes / persistent data

Everything is owned by user cloudsvc (UID 10050) and group docker.
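A sketch of the bootstrap commands for this layout (the chown is commented out because it requires root and assumes the cloudsvc user already exists):

```shell
#!/bin/sh
# Create the directory layout described above under the given root.
make_layout() {
    root="$1"
    mkdir -p "$root/backend" "$root/corsano_multi" \
             "$root/frontend" "$root/compose" "$root/data"
    # Ownership as described above (requires root; cloudsvc must exist):
    # chown -R cloudsvc:docker "$root"
}

# On the production host:
# make_layout /opt/corsano-on-premises
```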


Environment Files

Create one authoritative .env in compose/ and reference it from Compose. Suggested keys:

### Generic ###
COMPOSE_PROJECT_NAME=corsano-on-premises
TZ=Europe/Amsterdam

### Backend ###
PROFILES_ACTIVE=production
BACKEND_HTTP_PORT=8080

### ble_app ###
# WLC credentials
WLC_HOST=192.168.0.153
WLC_USER=CISCO_USER
WLC_PASS=CISCO_SECRET

### Cisco IoT‑orchestrator registered apps ###
CISCO_CONTROL_APP=controlapp
CISCO_CONTROL_APP_KEY=controlapp_key
CISCO_DATA_APP=dataapp
CISCO_DATA_APP_KEY=dataapp_key
CISCO_ONBOARD_APP=onboardapp
CISCO_ONBOARD_APP_KEY=onboardapp_key

# API target (use the compose service name when ble_app runs in the stack)
API_BASE_URL=http://api:8080

### Front‑end ###
VITE_API_BASE=http://localhost
FRONTEND_HTTP_PORT=5173
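The *_KEY suffixes are easy to mistype, so a quick sanity check before first launch may help. This sketch (an assumption, not part of the shipped tooling) only verifies that each expected key is present and non-empty:

```shell
#!/bin/sh
# Check that a .env file defines every key the stack expects.
# The key list follows the sample above; extend it as needed.
check_env() {
    file="$1"
    missing=0
    for key in COMPOSE_PROJECT_NAME TZ PROFILES_ACTIVE BACKEND_HTTP_PORT \
               WLC_HOST WLC_USER WLC_PASS API_BASE_URL \
               VITE_API_BASE FRONTEND_HTTP_PORT; do
        if ! grep -Eq "^${key}=.+" "$file"; then
            echo "MISSING: $key"
            missing=1
        fi
    done
    return $missing
}

# check_env compose/.env && echo "env OK"
```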

Backend API Service

Unpack, build, and start the backend exactly as documented in the upstream repository:

unzip on-premises.zip
# Rename the extracted directory to match the repository layout above if needed
cd ./backend
# Pull the Docker image and start the service
docker compose up -d
# Alternatively, build and run inside the unified compose stack (see the
# Unified docker-compose.yml section below).

The container listens on BACKEND_HTTP_PORT (default 8080), i.e. http://localhost:8080.

Health endpoint /health must return HTTP 200.
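A wait-for-health sketch for scripting the check above. The /health path follows the line above; the unified compose stack's healthcheck uses Spring's /actuator/health instead, so adjust the path to whichever your build exposes:

```shell
#!/bin/sh
# Poll a health URL until it returns HTTP 200 or the timeout expires.
# Usage: wait_for_health <url> <timeout_seconds>
wait_for_health() {
    url="$1"
    deadline=$(( $(date +%s) + $2 ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
        if curl -fsS -o /dev/null "$url"; then
            echo "healthy: $url"
            return 0
        fi
        sleep 1
    done
    echo "timed out waiting for $url" >&2
    return 1
}

# wait_for_health http://localhost:8080/health 60
```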

Front‑end UI

Build the UI with Vite and serve the static files via Nginx. Keep VITE_API_BASE consistent with the reverse-proxy configuration.
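A minimal Nginx server block sketch for serving the Vite build output and proxying API calls to the backend. The paths and the /api prefix are assumptions, not the shipped configuration; align proxy_pass with BACKEND_HTTP_PORT and keep VITE_API_BASE pointing at this server:

```nginx
server {
    listen 80;

    # Static files produced by the Vite build (dist/ output)
    root /usr/share/nginx/html;
    index index.html;

    # SPA fallback: unknown paths resolve to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Proxy API traffic to the backend container
    location /api/ {
        proxy_pass http://api:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```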

First‑time Login

Once the front‑end is reachable (e.g. http://localhost:5173), sign in with the default super‑admin credentials:

username: admin
password: admin

Immediately browse to ${VITE_API_BASE}/admin and create an HCP admin account for hospital staff. Change the default password or disable the admin user before go‑live.

Once an HCP admin logs in, they can create departments and, from a department page, create patients.


Unified docker-compose.yml

version: "3.9"
services:
  api:
    build: ../backend
    env_file: .env
    ports: ["8080:8080"]
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"]
      interval: 30s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  ble_app:
    build: ../ble_app
    env_file: .env
    depends_on:
      api: { condition: service_healthy }
    restart: unless-stopped

  web:
    build: ../frontend
    env_file: .env
    ports: ["80:80"]
    depends_on:
      api: { condition: service_healthy }
    restart: unless-stopped

networks:
  default:
    name: cloud-stack

Launch everything:

cd /opt/corsano-on-premises/compose
docker compose --env-file .env up -d --build
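Compose returns as soon as the containers are created, so a small retry helper can gate follow-up steps on the stack actually coming up. This is a sketch; the docker compose usage line at the end is illustrative:

```shell
#!/bin/sh
# retry <attempts> <delay_seconds> <command...>
# Runs the command until it succeeds or attempts are exhausted.
retry() {
    attempts="$1"
    delay="$2"
    shift 2
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@"; then return 0; fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Example: wait until the api container reports healthy
# retry 30 2 sh -c 'docker compose ps api | grep -q "(healthy)"'
```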

Essential Operations

| Task | Command |
| --- | --- |
| List services | docker compose ps |
| Tail logs | docker compose logs -f ble_app |
| Stop | docker compose down |
| Prune unused images | docker image prune |
| Backup volumes | rsync -a /opt/corsano-on-premises/data /backups/$(date +%F) |
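The rsync backup command creates one dated snapshot per day. A retention sketch that keeps only the newest N snapshots (dated YYYY-MM-DD directory names are assumed, so lexical sort equals chronological sort):

```shell
#!/bin/sh
# prune_backups <backup_root> <keep>
# Deletes all but the newest <keep> dated snapshot directories.
prune_backups() {
    root="$1"
    keep="$2"
    # `sort -r` lists newest first; everything past <keep> is deleted.
    ls "$root" | sort -r | tail -n +"$((keep + 1))" | while read -r d; do
        rm -rf "${root:?}/$d"
    done
}

# prune_backups /backups 14   # keep two weeks of snapshots
```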

Update Strategy

Containers

  1. Tagging – Every Git tag triggers a CI build and publishes api:TAG, web:TAG, and ble:TAG images to your registry.
  2. Rolling upgrade – On the host run:
    docker compose pull
    docker compose up -d --no-deps --build api ble_app web
  3. Database migrations – Spring Boot auto‑runs Flyway/Liquibase; verify log output.
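The steps above can be wrapped in a small script. Setting DRY_RUN=1 prints the commands instead of executing them; the service names follow the compose file in this guide:

```shell
#!/bin/sh
# Rolling upgrade wrapper for the compose stack.
# run: executes its arguments, or just echoes them when DRY_RUN=1.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

upgrade() {
    run docker compose pull
    run docker compose up -d --no-deps --build api ble_app web
    # Migrations run automatically on api start; inspect the logs:
    run docker compose logs --since 5m api
}

# DRY_RUN=1 upgrade   # preview the commands
# upgrade             # perform the upgrade
```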

WLC & AP Firmware

Pin the hospital‑approved IOS‑XE release. Upgrades require maintenance windows; test BLE telemetry after every firmware change.

Zero‑downtime Approach (future)

Introduce a second host and front the stack with Traefik using a DNS‑based canary – see the TODO list below.


Troubleshooting Quick‑Reference

| Symptom | Likely Cause | Resolution |
| --- | --- | --- |
| ble_app container exits | Wrong WLC credentials | Check .env, rerun docker compose up -d ble_app |
| No vitals in UI | BLE → API connectivity | docker compose logs ble_app, look for HTTP 4xx |
| WLC subscription stuck in invalid | Wrong gRPC destination | show telemetry ietf subscription summary on WLC |
| TLS handshake fails | Self‑signed cert not trusted | Use hospital PKI or Let’s Encrypt with DNS‑01 |
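A first-pass connectivity triage matching the table above. This is a sketch: WLC_HOST and BACKEND_HTTP_PORT come from .env, the WLC port 443 is an assumption, and the /dev/tcp probe requires bash plus coreutils timeout:

```shell
#!/bin/bash
# Quick connectivity triage for the symptoms listed above.
# probe <host> <port>: succeeds if a TCP connection can be opened.
probe() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

triage() {
    wlc="$1"
    api_port="${2:-8080}"
    if probe "$wlc" 443; then
        echo "WLC $wlc: reachable"
    else
        echo "WLC $wlc: UNREACHABLE (check network before credentials)"
    fi
    if probe 127.0.0.1 "$api_port"; then
        echo "API port $api_port: open"
    else
        echo "API port $api_port: CLOSED (is the api container running?)"
    fi
}

# triage "$WLC_HOST" "$BACKEND_HTTP_PORT"
```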

Next Steps / TODO

  • Add Prometheus + Grafana dashboards.
  • Write an Ansible role for fully automated provisioning.

Licensing & Monitoring

End of Version 0.2