George Prodromou

Principal SEO & Growth Leader

Networking

Routing, HA edge, anycast, and IP space — the parts of the stack where uptime is actually decided.

The layer where uptime lives

A fast frontend and a tuned database count for nothing if the packets never reach them. Networking sits below everything else on this site — where a stuck BGP session, a bad route, or a single failed firewall will take an entire platform offline regardless of how clean the code above it is.

I operate my own infrastructure end-to-end, so this is hands-on work, not theoretical. I've run the same stack — routing, switching, edge firewalls, DNS, IP space — from a bedroom home lab through to production colocation.

Routing & BGP

Production BGP means peering with upstream transit, announcing prefixes from a registered ASN, and making sure those announcements actually reach the wider internet the way you expect. Multi-homing over two physically diverse transits is the default posture, not a nice-to-have.

Day-to-day that means working comfortably with AS-path prepending, BGP communities, prefix filters, and route maps; keeping RPKI records valid and IRR objects in sync; and knowing how to read a looking glass to spot an asymmetric route or a hijack before it becomes a customer ticket.
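As a concrete illustration of that posture, here is a hedged BIRD 2 sketch of a multi-homed announcement: the same prefix exported to two transits, with the backup path de-preferred by prepending. The ASN, prefix, and neighbour addresses are placeholders, not real assignments.

```
# Hypothetical BIRD 2 sketch — ASN/prefix/neighbours are placeholders.
protocol bgp transit_a {
    local as 65000;
    neighbor 192.0.2.1 as 64600;
    ipv4 {
        import all;
        export filter {
            if net = 198.51.100.0/24 then accept;  # announce only our prefix
            reject;
        };
    };
}

protocol bgp transit_b {
    local as 65000;
    neighbor 203.0.113.1 as 64700;
    ipv4 {
        import all;
        export filter {
            if net = 198.51.100.0/24 then {
                # De-prefer this path: prepend our own ASN twice
                bgp_path.prepend(65000);
                bgp_path.prepend(65000);
                accept;
            }
            reject;
        };
    };
}
```

In practice the export filters would also strip or set communities per transit; the point here is only the shape of a prefix-filtered, prepend-weighted dual announcement.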

High-availability edge

The edge router is usually the single biggest point of failure in a self-operated stack. I design this tier to fail gracefully: pairs of pfSense / OPNsense firewalls running active/passive with CARP for virtual-IP failover and pfsync for state replication, so stateful connections survive a box reboot.

Paired WAN uplinks, independent power feeds, and out-of-band management sit alongside — because the most valuable thing in an outage is the ability to reach the kit when the kit itself is down.
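pfSense and OPNsense drive this from their GUIs, but underneath it is the FreeBSD CARP/pfsync machinery. A hedged rc.conf-style sketch of the primary node — interface names, VHID, password, and addresses are all placeholders:

```
# Hypothetical FreeBSD-style sketch of what the pfSense/OPNsense GUI
# configures. Interfaces, VHID, password and addresses are placeholders.
# /etc/rc.conf on the primary:
ifconfig_em0="inet 192.0.2.2/24"
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass s3cret alias 192.0.2.1/32"  # shared WAN VIP
ifconfig_em2="inet 10.0.0.1/30"     # dedicated sync link to the peer
pfsync_enable="YES"
pfsync_syncdev="em2"                # replicate firewall state over that link
```

The standby box runs the same config with a higher advskew (say 100), so it only claims the VIP when the master stops advertising — and because pfsync has been replicating the state table all along, established connections ride through the failover.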

Anycast

Anycast is how you make a single IP address appear simultaneously close to users in multiple regions. I use it primarily for authoritative DNS — cheap, huge latency win, and a natural DDoS sponge — and it's the same mechanism behind any serious edge service.

Operationally: one anycast prefix announced from each POP via BGP, consistent service behaviour behind the address everywhere, and a withdrawal model you can trust under partial failure so traffic doesn't pile up at a sick site.

Switching & Layer 2

Enterprise switching fabric — Cisco Catalyst and Juniper EX-class gear — with VLANs, 802.1Q trunking, LACP bonds, spanning-tree discipline, and strictly separated management networks. The boring part of the network, and the part that will ruin your week if you let it rot.
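The discipline is easier to show than describe. A hedged Cisco IOS-style sketch — VLAN numbers and port choices are placeholders — covering a trunked LACP bond, a locked-down access port, and the spanning-tree guards that stop a rogue switch from rotting the fabric:

```
! Hypothetical IOS sketch — VLANs and ports are placeholders.
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active            ! LACP bond to the upstream switch
!
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30 ! prune the trunk; no "allow all"
 switchport trunk native vlan 999       ! unused native VLAN, no untagged leak
!
interface GigabitEthernet1/0/10
 switchport mode access
 switchport access vlan 20
 spanning-tree portfast
 spanning-tree bpduguard enable         ! err-disable if a switch appears here
```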

DNS

Authoritative DNS on PowerDNS, typically anycast-fronted. Split-horizon where internal resolution needs to differ from public; DNSSEC for records that warrant it; and automation that treats DNS changes as first-class deploys with review, diff, and rollback rather than ad-hoc edits in a control panel.

Recursive resolvers on the same footing — local Unbound / dnsmasq caches on every worker so application hot paths don't have to reach out to upstream resolvers on every request.
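A hedged sketch of what that per-worker Unbound cache looks like — the forwarder addresses and TTL floor are placeholders, not a recommendation:

```
# Hypothetical unbound.conf sketch for a per-host cache.
# Forwarder addresses and tunables are placeholders.
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    cache-min-ttl: 60            # don't thrash the cache on very short TTLs
    prefetch: yes                # refresh popular records before they expire
forward-zone:
    name: "."
    forward-addr: 192.0.2.10     # site resolvers / upstream recursors
    forward-addr: 192.0.2.11
```

Applications resolve against 127.0.0.1, so a hot path pays the upstream round-trip once per TTL instead of once per request.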

Mail & deliverability

Running a production mail server is an unfashionable skill that separates platforms that can be trusted with email from platforms that can't. I operate a self-hosted stack — Postfix for SMTP, Dovecot for IMAP, and a webmail frontend — plus the full deliverability discipline it takes to actually land in the inbox.

That means SPF, DKIM, DMARC, ARC correctly published and aligned; reverse DNS that matches forward; MTA-STS, DANE, and TLS-RPT for transport security; and active maintenance of sending-IP reputation across the big mailbox providers with postmaster tooling, seed-list monitoring, and DMARC aggregate reports.
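Published and aligned, that trio looks roughly like this hedged zone fragment — the selector, IPs, key material, and reporting address are all placeholders:

```
; Hypothetical zone fragment — selector, IPs, key and rua are placeholders.
example.com.                 IN TXT "v=spf1 ip4:198.51.100.25 -all"
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg...AB"   ; truncated key
_dmarc.example.com.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com; adkim=s; aspf=s"
25.100.51.198.in-addr.arpa.  IN PTR mail.example.com.   ; reverse matches forward
```

The `adkim=s; aspf=s` strict-alignment flags are the part that most setups get wrong: SPF and DKIM can both pass while DMARC still fails if the authenticated domain doesn't match the From: header.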

Inbound, the pipeline runs through amavis wired to SpamAssassin and ClamAV — Bayesian classification, DNSBL / URIBL lookups, greylisting, header and content rules, and per-tenant policy. The interesting work isn't stopping spam — it's stopping spam without quietly eating real mail.
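The "don't quietly eat real mail" policy lives mostly in the amavis score thresholds. A hedged fragment of an amavisd.conf — the numbers are illustrative, not a recommendation:

```
# Hypothetical amavisd.conf fragment — thresholds are illustrative.
$sa_tag_level_deflt  = -999;        # always add X-Spam-* headers, for visibility
$sa_tag2_level_deflt = 5.0;         # flag as spam above this score
$sa_kill_level_deflt = 12.0;        # only "kill" clear-cut spam
$final_spam_destiny  = D_DISCARD;   # ...but keep killed mail recoverable:
$spam_quarantine_to  = 'spam-quarantine@example.com';
```

The gap between the tag and kill thresholds is deliberate: borderline mail is delivered with headers the user (or a sieve rule) can act on, and anything discarded still lands in a quarantine where a false positive can be pulled back.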

IP space & IPAM

Running out of IPs quietly is a hazing ritual every self-operated platform eventually goes through; proper IPAM is what prevents it. I manage IPv4 and IPv6 space with a documented allocation policy, clean separation between infrastructure / management / tenant / public ranges, and per-tenant delegations that don't bleed into each other.

Edge security

Network-layer filtering — firewall rule design, rate limiting, and ingress DDoS absorption — sits upstream of the application WAF (ScaleShield). The two layers have different jobs: the network tier throws away obviously bad traffic cheaply, and the L7 WAF concentrates on intent and behaviour.
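"Throw away obviously bad traffic cheaply" at the network tier looks roughly like this hedged pf.conf sketch — the `$wan` / `$web_vip` macros, table name, and rate limits are placeholders:

```
# Hypothetical pf.conf sketch — macros, table name and limits are placeholders.
table <abusers> persist
block in quick from <abusers>         # drop known-bad sources first, cheaply
block in all                          # default deny
pass in on $wan proto tcp to $web_vip port { 80 443 } \
    keep state (max-src-conn 100, max-src-conn-rate 15/5, \
                overload <abusers> flush global)
```

A source exceeding 15 new connections per 5 seconds is swept into the `<abusers>` table and its existing states flushed — all before a single request reaches the L7 WAF, which is then free to spend its cycles on intent and behaviour.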

Observability

NetFlow / sFlow into a collector for traffic analysis, SNMP polling for interface and device health, and a Grafana board that shows the network and the apps on the same screen. When something is on fire you want one dashboard, not three.

Core capabilities

  • BGP Peering & Route Management
  • Multi-Homed Transit
  • ASN-Registered IP Space Announcement
  • RPKI · IRR · Prefix Filtering
  • pfSense / OPNsense Firewalls
  • HA Routers (CARP / VRRP / pfsync)
  • Anycast IP Architecture
  • Anycast DNS
  • Authoritative DNS (PowerDNS · DNSSEC)
  • Recursive Resolvers (Unbound · dnsmasq)
  • Self-Hosted Mail (Postfix · Dovecot)
  • SPF · DKIM · DMARC · ARC
  • MTA-STS · DANE · TLS-RPT · Reverse DNS
  • Amavis · SpamAssassin · ClamAV · DNSBL / URIBL
  • IP Reputation & Deliverability Ops
  • Enterprise Switching (Cisco Catalyst · Juniper EX)
  • VLANs · 802.1Q · LACP · Spanning Tree
  • IPv4 + IPv6 Addressing & IPAM
  • Out-of-Band Management
  • DDoS Mitigation
  • NetFlow / sFlow · SNMP · Grafana
  • WAF & Bot Protection (ScaleShield)