fix: use full README.md for lines of code edit

2026-01-20 19:33:49 -05:00
parent 812ebfee75
commit 47eda7d60b


@@ -18,37 +18,131 @@ jobs:
run: |
LOC=$(grep '"total_code"' public/code-stats.json | sed 's/[^0-9]*//g')
FORMATTED_LOC=$(printf "%'d" $LOC)
grep -v "Codebase Growth" README.md > README.tmp
sed -i "/## ⚡ Efficiency Metrics/a \\
\\
- **Codebase Growth:** $FORMATTED_LOC lines of custom code across all our repositories" README.tmp
if [ -s README.tmp ]; then
mv README.tmp README.md
else
echo "Corruption detected, aborting"
exit 1
fi
echo "Hard-coding $FORMATTED_LOC into the template..."
awk -v new_val="$FORMATTED_LOC" '
/.*/ {
sub(/.*/, "" new_val "")
}
{ print }
' README.md > README.md.tmp && mv README.md.tmp README.md
if [ $(wc -l < README.md) -lt 5 ]; then
echo "ERROR: AWK output looks too small. Guarding against corruption."
exit 1
fi
- name: Update README
run: |
LOC=$(grep '"total_code"' public/code-stats.json | sed 's/[^0-9]*//g')
FORMATTED_LOC=$(printf "%'d" $LOC)
echo "Syncing $FORMATTED_LOC lines..."
cat <<EOF > README.md
# 🛡 Patrick Beane
**SRE | Security Engineer | Self-Hosted Infra & Detection**
I design and operate **security-first, self-hosted infrastructure** focused on detection, resilience, and sovereignty.
My lab functions as a live production environment where threat intelligence, automation, and reliability engineering intersect.
---
## 🛰 The Fleet (10 Nodes)
> This environment blends production, research, and continuous experimentation.
> Availability and controls are intentionally tuned per node role.
| Node | Role | Specs | Status |
| :--- | :--- | :--- | :--- |
| **Argus** | SIEM / Brain / node-health Failover | Xeon E5-2660v2 (1 core) | 🟢 Online |
| **Triton** | High Performance Compute | EPYC 9634 (8 cores) | 🟢 Online |
| **Ares** | Gitea / Kubernetes Management Node (MicroK8s) | Ryzen 9 9950X (8 cores) | 🟢 Online |
| **Zephyrus** | Container Host | Ryzen 9 7950X (4 cores) | 🟢 Online |
| **Iris** | NGINX / PHP Edge | Vultr | 🟢 Online |
| **Vault** | Secrets Management | GCP (Vaultwarden) | 🟢 Online |
| **Apollo** | Intel Dashboard (Flask) | AWS | 🟢 Online |
| **Hermes** | Public API (Frontend) | Oracle Cloud | 🟢 Online |
| **Hades** | Public API (Backend) | Oracle Cloud | 🟢 Online |
| **Zeus** | Monitoring / Metrics NOC | Xeon Gold 6150 (1 core) | 🟢 Online |
---
## 🌐 Infrastructure Strategy
- **Compute Layer:** Zen 5 (9950X), Zen 4 (7950X), EPYC 9634 for sustained workloads.
- **Edge Layer:** Oracle Cloud & Vultr for low-latency public ingress.
- **Sentinel Layer:** **Argus SIEM** correlating telemetry and enforcing distributed decisions across nodes.
- **Observability:** Zeus as the centralized NOC and metrics authority.
---
## 🛡 Detection & Response Lifecycle
- **Triage:** Telemetry ingested from 7 active nodes into the Argus engine.
- **Escalation:** Post-exploitation indicators (e.g. webshells) trigger immediate `PERM_BAN`.
- **Retention:** (tiers sketched below)
- 24 hours for lower confidence scenarios
- 14 days for high-confidence IOCs
- 30 days for offender watchlist
- **Notification:** High-severity events dynamically pushed to Discord.
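A minimal sketch of how these tiers might translate into ban decisions (illustrative only: `issue_ban`, its arguments, and the mapping are hypothetical stand-ins, not the production Argus code):

```bash
# Hypothetical helper: map detection confidence to a retention/ban window.
issue_ban() {
  local ip="$1" confidence="$2" duration
  case "$confidence" in
    webshell)  duration="PERM_BAN" ;;  # post-exploitation indicator: permanent
    high)      duration="14d"      ;;  # high-confidence IOC
    watchlist) duration="30d"      ;;  # repeat-offender watchlist
    *)         duration="24h"      ;;  # lower-confidence scenario
  esac
  echo "decision: ban $ip for $duration"
}

issue_ban "" webshell   # -> decision: ban for PERM_BAN
```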
---
## 🛠 The Arsenal
**Languages:** Python (Flask, Gunicorn), Bash, JavaScript (React, Node.js)
**Infrastructure:** Kubernetes (K8s), Docker, Caddy, NGINX
**Security:** Argus (Custom SIEM), CrowdSec, Trivy, SQLite, Vaultwarden
**Observability:** Prometheus, Blackbox Exporter, Node Exporter
**Backups:** Borgmatic, Rsync.net (Encrypted Offsite)
---
### 🧠 Supporting Tooling & Concepts
Actively used across this environment or in adjacent projects:
- **Security & Identity:** Fail2Ban, MITRE ATT&CK mapping, OIDC, Authelia, MFA, TLS hardening
- **Infrastructure & Cloud:** Linux (Debian/Ubuntu), Terraform, AWS, GCP, Oracle Cloud, Vultr
- **CI / Ops:** Git, GitHub Actions, container image scanning (example after this list)
- **Observability (Extended):** Grafana, Netdata
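One way the image-scanning gate can look in CI (a sketch; the image reference is a placeholder):

```bash
# Sketch of a CI gate: exit non-zero when HIGH/CRITICAL findings exist,
# which fails the GitHub Actions job. The image name is a placeholder.
trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/app:latest
```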
---
## ⚡ Efficiency Metrics
- **Codebase Growth:** $FORMATTED_LOC lines of custom code across all our repositories
- **Ares:** Ryzen 9 9950X sustaining ~0.06 load avg while running Gitea and a Kubernetes control plane
- **Resilience:** Automated failover between AWS and peer nodes
---
### 🧩 Deployment Patterns
- **Reverse Proxy:** Caddy/NGINX (Cloudflare where applicable)
- **Observability:** Prometheus + Node Exporter + cAdvisor
- **Lifecycle:** Watchtower for controlled auto-updates
- **Access Control:** Authelia where exposed
- **Management:** Portainer (loopback-bound where possible)
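For instance, loopback binding plus an SSH tunnel keeps the management plane off the public network (a sketch; names and ports are assumptions):

```bash
# Sketch: publish Portainer's UI on loopback only, never on a routable interface.
docker run -d --name portainer \
  --restart unless-stopped \
  -p \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \

# Reach it from a workstation through an SSH tunnel instead of an open port.
ssh -L 9443: user@node
```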
> Nodes are intentionally heterogeneous.
> Each host is scoped to its role to reduce blast radius and cognitive load.
---
#### 📍 Triton
Primary high-density services node running:
- Prometheus + Grafana
- Code-server
- Authelia
- Trilium
- CrowdSec bouncers
Optimized for sustained workloads and observability aggregation.
---
### 🔗 Live Projects
- **Threat Decisions & Telemetry:** `threats.beane.me`
- **Threat Intelligence & Analytics:** `intel.beane.me`
- **Vulnerability Scanning (Trivy):** `vuln.beane.me`
- **Backups & Restore Verification:** `backups.beane.me`
- **Threat Decision Observability:** `observe.beane.me`
- **Source Control (Gitea + K8s):** `git.beane.me`
---
## 🚜 Resource Management
- **Compute Density:** Kubernetes control plane with Postgres and CI workloads on Zen 5 hardware
- **Sovereignty:** All code, telemetry, and backups remain self-hosted
- **Backups:** Multiple daily encrypted Borgmatic snapshots shipped offsite
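A sketch of the snapshot-plus-verification cycle with borgmatic (flags per recent borgmatic releases; the extract target is an assumption):

```bash
# Run create/prune/check as configured in /etc/borgmatic/config.yaml.
borgmatic --verbosity 1

# Verify repository and archive consistency.
borgmatic check

# Spot-check restorability: pull one file out of the latest archive.
borgmatic extract --archive latest --path etc/hostname --destination /tmp/restore-test
```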
> *"If its not blocked, it just hasnt found our infrastructure yet."*
EOF
- name: Commit and Push
run: |
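For reference, the extraction and formatting in the Update README step can be sanity-checked locally. A minimal sketch, assuming `code-stats.json` keeps `total_code` on a single line:

```bash
# Reproduce the workflow's pipeline against a sample stats file.
printf '{\n  "total_code": 1234567\n}\n' > code-stats.json
LOC=$(grep '"total_code"' code-stats.json | sed 's/[^0-9]*//g')
FORMATTED_LOC=$(printf "%'d" $LOC)
echo "$FORMATTED_LOC"  # 1,234,567 under a locale that defines grouping, e.g. LC_NUMERIC=en_US.UTF-8
```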