🛡️ Patrick Beane
SRE | Security Engineer | Self-Hosted Infra & Detection
I design and operate security-first, self-hosted infrastructure focused on detection, resilience, and sovereignty.
My lab functions as a live production environment where threat intelligence, automation, and reliability engineering intersect.
🛰 The Fleet (10 Nodes)
This environment blends production, research, and continuous experimentation. Availability and controls are intentionally tuned per node role.
| Node | Role | Specs / Platform | Status |
|---|---|---|---|
| Argus | SIEM / Brain / node-health Failover | Xeon E5-2660v2 (1 core) | 🟢 Online |
| Triton | High Performance Compute | EPYC 9634 (8 cores) | 🟢 Online |
| Ares | Gitea / Kubernetes Management Node (MicroK8s) | Ryzen 9 9950X (8 cores) | 🟢 Online |
| Zephyrus | Container Host | Ryzen 9 7950X (4 cores) | 🟢 Online |
| Iris | NGINX / PHP Edge | Vultr | 🟢 Online |
| Vault | Secrets Management | GCP (Vaultwarden) | 🟢 Online |
| Apollo | Intel Dashboard (Flask) | AWS | 🟢 Online |
| Hermes | Public API (Frontend) | Oracle Cloud | 🟢 Online |
| Hades | Public API (Backend) | Oracle Cloud | 🟢 Online |
| Zeus | Monitoring / Metrics NOC | Xeon Gold 6150 (1 core) | 🟢 Online |
🌐 Infrastructure Strategy
- Compute Layer: Zen 5 (9950X), Zen 4 (7950X), EPYC 9634 for sustained workloads.
- Edge Layer: Oracle Cloud & Vultr for low-latency public ingress.
- Sentinel Layer: Argus SIEM correlating telemetry and enforcing distributed decisions across nodes.
- Observability: Zeus as the centralized NOC and metrics authority.
🛡️ Detection & Response Lifecycle
- Triage: Telemetry ingested from 7 active nodes into the Argus engine.
- Escalation: Post-exploitation indicators (e.g., webshells) trigger an immediate PERM_BAN.
- Retention (tiers sketched below):
- 24 hours for lower confidence scenarios
- 14 days for high-confidence IOCs
- 30 days for offender watchlist
- Notification: High-severity events dynamically pushed to Discord.
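A minimal sketch of how these tiers could be wired together, assuming a hypothetical event shape, confidence threshold, and Discord webhook URL; the production Argus logic is more involved:

```python
import json
import time
import urllib.request

# Retention tiers mirroring the policy above (hypothetical constants).
RETENTION = {
    "low_confidence": 24 * 3600,        # 24 hours
    "high_confidence_ioc": 14 * 86400,  # 14 days
    "offender_watchlist": 30 * 86400,   # 30 days
}

POST_EXPLOITATION_TAGS = {"webshell", "reverse_shell", "persistence"}

def classify(event: dict) -> dict:
    """Map a telemetry event to a ban decision with a retention window."""
    if event.get("tag") in POST_EXPLOITATION_TAGS:
        # Post-exploitation indicators escalate straight to PERM_BAN.
        return {"ip": event["ip"], "action": "PERM_BAN", "expires": None}
    tier = "high_confidence_ioc" if event.get("confidence", 0) >= 0.8 else "low_confidence"
    return {"ip": event["ip"], "action": "TEMP_BAN", "expires": time.time() + RETENTION[tier]}

def notify_discord(decision: dict, webhook_url: str) -> None:
    """Push a high-severity decision to a Discord webhook."""
    payload = {"content": f"🚨 {decision['action']} issued for {decision['ip']}"}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    decision = classify({"ip": "203.0.113.7", "tag": "webshell"})
    if decision["action"] == "PERM_BAN":
        # Placeholder webhook URL.
        notify_discord(decision, "https://discord.com/api/webhooks/<id>/<token>")
```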
🛠 The Arsenal
Languages: Python (Flask, Gunicorn), Bash, JavaScript (React, Node.js)
Infrastructure: Kubernetes (K8s), Docker, Caddy, NGINX
Security: Argus (Custom SIEM), CrowdSec, Trivy, SQLite, Vaultwarden
Observability: Prometheus, Blackbox Exporter, Node Exporter
Backups: Borgmatic, Rsync.net (Encrypted Offsite)
🧠 Supporting Tooling & Concepts
Actively used across this environment or in adjacent projects:
- Security & Identity: Fail2Ban, MITRE ATT&CK mapping, OIDC, Authelia, MFA, TLS hardening
- Infrastructure & Cloud: Linux (Debian/Ubuntu), Terraform, AWS, GCP, Oracle Cloud, Vultr
- CI / Ops: Git, GitHub Actions, container image scanning
- Observability (Extended): Grafana, Netdata
⚡ Efficiency Metrics
- Codebase Growth: 10,250 lines of custom code across all repositories
- Commit Velocity: 272 commits since Jan 1
- Ares: Ryzen 9 9950X sustaining a ~0.06 load average while running Gitea and a Kubernetes control plane
- Resilience: Automated failover between AWS and peer nodes (see the sketch below)
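The failover itself is deliberately simple. A hedged sketch of the health-check-then-promote idea, with hypothetical endpoints and a placeholder `promote_peer` standing in for the real DNS/proxy switch:

```python
import urllib.error
import urllib.request

# Hypothetical endpoints: the AWS-hosted primary and a self-hosted peer.
PRIMARY = "https://apollo.example.internal/healthz"
PEERS = ["https://zephyrus.example.internal/healthz"]

def healthy(url: str, timeout: float = 3.0) -> bool:
    """Treat any 2xx response within the timeout as healthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError):
        return False

def promote_peer(peer: str) -> None:
    """Placeholder for the real failover action (DNS flip, proxy reconfig, etc.)."""
    print(f"Failing over to {peer}")

if __name__ == "__main__":
    if not healthy(PRIMARY):
        for peer in PEERS:
            if healthy(peer):
                promote_peer(peer)
                break
```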
🧩 Deployment Patterns
- Reverse Proxy: Caddy/NGINX (Cloudflare where applicable)
- Observability: Prometheus + Node Exporter + cAdvisor
- Lifecycle: Watchtower for controlled auto-updates
- Access Control: Authelia where exposed
- Management: Portainer (loopback-bound where possible)
Nodes are intentionally heterogeneous.
Each host is scoped to its role to reduce blast radius and cognitive load.
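Where a service is one of the Flask apps, application metrics can ride the same scrape pipeline. A minimal sketch using prometheus_client (an assumption on my part; the patterns above only name Node Exporter and cAdvisor as exporters):

```python
from flask import Flask
from prometheus_client import CONTENT_TYPE_LATEST, Counter, generate_latest

app = Flask(__name__)

# Requests served, labeled by endpoint.
REQUESTS = Counter("intel_requests_total", "Requests served", ["endpoint"])

@app.route("/")
def index():
    REQUESTS.labels(endpoint="/").inc()
    return "ok"

@app.route("/metrics")
def metrics():
    # Prometheus scrapes this endpoint alongside Node Exporter and cAdvisor.
    return generate_latest(), 200, {"Content-Type": CONTENT_TYPE_LATEST}
```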
📍 Triton
Primary high-density services node running:
- Prometheus + Grafana
- Code-server
- Authelia
- Trilium
- CrowdSec bouncers
Optimized for sustained workloads and observability aggregation.
🔗 Live Projects
- Threat Decisions & Telemetry: threats.beane.me
- Threat Intelligence & Analytics: intel.beane.me
- Vulnerability Scanning (Trivy): vuln.beane.me
- Backups & Restore Verification: backups.beane.me
- Threat Decision Observability: observe.beane.me
- Source Control (Gitea + K8s): git.beane.me
🚜 Resource Management
- Compute Density: Kubernetes control plane with Postgres and CI workloads on Zen 5 hardware
- Sovereignty: All code, telemetry, and backups remain self-hosted
- Backups: Multiple daily encrypted Borgmatic snapshots shipped offsite
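Restore verification is the step worth automating, not just the snapshot itself. A minimal sketch, assuming a hypothetical checksum manifest written at backup time; the actual Borgmatic hooks and paths will differ:

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large restores aren't loaded into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(manifest_path: Path, restore_root: Path) -> bool:
    """Compare every restored file against the checksums recorded at backup time."""
    manifest = json.loads(manifest_path.read_text())  # {"relative/path": "sha256hex", ...}
    ok = True
    for rel_path, expected in manifest.items():
        restored = restore_root / rel_path
        if not restored.exists() or sha256(restored) != expected:
            print(f"MISMATCH: {rel_path}")
            ok = False
    return ok

if __name__ == "__main__":
    passed = verify_restore(Path("manifest.json"), Path("/tmp/restore-test"))
    print("restore verified" if passed else "restore verification failed")
```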
"If it's not blocked, it just hasn't found our infrastructure yet."