George Prodromou

Principal SEO & Growth Leader

Web Architecture

High-concurrency edges, layered caching, rendering strategy, and redundancy that actually holds under load — from hardware up through Next.js.

From hardware up

I design web stacks end-to-end — enterprise server hardware and switches in the rack, an event-driven edge on top (OpenResty / Nginx), modern application frameworks (Next.js, React, Payload, Strapi, Ghost, Medusa, WordPress at scale) behind it, and the caching and redundancy glue that keeps it all responsive under real traffic.

The sections below are the handful of areas that repeatedly decide whether a platform stays fast or falls over.

Concurrency at the edge

The single biggest lever for response time and capacity is how the edge tier handles connections. Nginx and OpenResty are event-driven — a fixed pool of worker processes each servicing many thousands of concurrent connections through epoll, without one connection blocking another. Getting near the theoretical ceiling takes discipline on both sides of that model.

On the nginx side: worker_processes and worker_connections sized against CPU and RLIMIT_NOFILE; upstream keepalive pools so each request doesn't pay connection setup; Unix-domain sockets between nginx and upstream where they share a host; buffer sizes tuned so request bodies don't silently hit disk; SSL session caching so TLS handshakes stop dominating CPU.
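A minimal sketch of that nginx-side checklist — every value here is illustrative and must be sized against your actual core count, `ulimit -n`, and traffic shape; the upstream socket path and cert paths are placeholders:

```nginx
# Illustrative only — size against real CPU count and RLIMIT_NOFILE.
worker_processes auto;                    # one worker per core
worker_rlimit_nofile 65535;

events {
    worker_connections 16384;             # per worker; bounded by the fd limit
    multi_accept on;
}

http {
    ssl_session_cache shared:SSL:50m;     # amortise TLS handshakes across workers
    ssl_session_timeout 1h;

    client_body_buffer_size 128k;         # keep typical request bodies off disk

    upstream app {
        server unix:/run/app.sock;        # same-host upstream: skip the TCP stack
        keepalive 64;                     # reuse upstream connections
    }

    server {
        listen 443 ssl backlog=4096 reuseport;
        ssl_certificate     /etc/nginx/certs/site.pem;   # placeholder paths
        ssl_certificate_key /etc/nginx/certs/site.key;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # required for upstream keepalive
            proxy_pass http://app;
        }
    }
}
```

The `Connection ""` header is the piece most often missed: without it, nginx sends `Connection: close` upstream and the keepalive pool never gets used.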

On the kernel side: net.core.somaxconn, tcp_tw_reuse, listen backlog, ephemeral port range, file descriptor limits, and socket buffers sized for the actual workload.
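The kernel side of the same checklist, as a sysctl drop-in — these are starting points, not universal answers; buffer ceilings in particular should be tuned to your bandwidth-delay product:

```ini
# /etc/sysctl.d/99-edge.conf — illustrative starting points
net.core.somaxconn = 65535                  # accept-queue ceiling; listen backlog still applies
net.ipv4.tcp_tw_reuse = 1                   # reuse TIME_WAIT sockets for outbound connections
net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for upstream traffic
net.ipv4.tcp_max_syn_backlog = 65535        # SYN queue for connection bursts
fs.file-max = 2097152                       # system-wide file descriptor ceiling
net.core.rmem_max = 16777216                # socket buffer ceilings; tune to BDP
net.core.wmem_max = 16777216
```

Apply with `sysctl --system` and verify the accept queue isn't overflowing via `ss -lnt` before raising anything further.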

Done well, this pushes a single nginx host well past the point where commodity hosting providers start recommending you add a load balancer. One production tier I built sustained 25,000 requests per second on modest hardware without queueing — Lua security logic, TLS termination, and cache-status decisions all running inside the request path.
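A hypothetical sketch of what "Lua inside the request path" looks like in OpenResty — the shared-dict name, threshold, and upstream are all invented for illustration, and a real deployment would have far more nuance:

```nginx
# Assumes in the http block:  lua_shared_dict rate_limits 10m;
location / {
    access_by_lua_block {
        -- cheap in-worker rate check; the shared dict is visible to all workers
        local limits = ngx.shared.rate_limits
        local key = ngx.var.binary_remote_addr
        local count = limits:incr(key, 1, 0, 1)   -- init 0, 1s window
        if count and count > 200 then
            return ngx.exit(ngx.HTTP_TOO_MANY_REQUESTS)
        end
    }
    header_filter_by_lua_block {
        -- surface which cache layer answered (HIT / MISS / BYPASS / EXPIRED)
        ngx.header["X-Cache-Status"] = ngx.var.upstream_cache_status or "NONE"
    }
    proxy_cache app_cache;
    proxy_pass http://app;
}
```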

Caching, layered

Caching is rarely one thing; it's a stack. Browser → CDN → reverse proxy → FastCGI / object cache → database. Each layer has its own eviction, staleness, and warm-up behaviour, and knowing which layer is answering a given request matters more than any single TTL value.

I work across FastCGI caching at the nginx layer, Varnish where VCL is the right tool, proxy_cache_lock for stampede control, microcaching (1–5s) for dynamic pages that would otherwise melt the origin, and Redis for application-level object and fragment caching. Cache-status headers stay exposed in development so the path is never a mystery; purge is API-driven, not manual.
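A microcaching sketch combining the pieces above — 1-second TTL, `proxy_cache_lock` for stampede control, stale-while-revalidate, and an exposed cache-status header. Zone sizes and paths are illustrative:

```nginx
# Illustrative microcache: a 1s TTL absorbs bursts; the lock collapses stampedes.
proxy_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=micro:64m
                 max_size=1g inactive=10m use_temp_path=off;

server {
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 301 1s;                   # microcache window
        proxy_cache_lock on;                            # one request populates, the rest wait
        proxy_cache_lock_timeout 2s;
        proxy_cache_use_stale updating error timeout;   # serve stale while refreshing
        proxy_cache_background_update on;
        add_header X-Cache-Status $upstream_cache_status always;
        proxy_pass http://app;
    }
}
```

Under a traffic spike, at most one request per second per key reaches the origin; everyone else gets a HIT or a stale-while-updating response.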

Rendering strategy

SSR / SSG / ISR / CSR aren't style choices — they're capacity decisions. Server-rendered content scales by origin CPU and cache hit rate; static content scales by CDN; client-rendered content scales by the user's laptop but costs you indexability and time-to-content. Most production architectures are a mix, and the value is in picking the right mode per route rather than adopting one religion.

Next.js is a good default for this precisely because it lets you mix. The Bridebook rebuild, the Kingfisher SPA framework, and this portfolio itself all use the same pattern: server-render routes that matter to search, statically generate where data is immutable enough to justify it, keep interactive work on the client.
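In App Router terms, that per-route mixing is a one-line segment-config decision per file. These are three hypothetical route files (not one module) with invented paths, shown together for contrast:

```typescript
// app/venues/[slug]/page.tsx — ISR: static HTML, regenerated in the background
export const revalidate = 300;            // at most one rebuild per 5 minutes

// app/search/page.tsx — SSR: rendered per request, varies by query
export const dynamic = "force-dynamic";

// app/about/page.tsx — SSG: built once, served entirely from the CDN
export const dynamic = "force-static";
```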

Redundancy that actually holds under load

Redundancy only counts if it's exercised. That means two of everything that matters, with real traffic flowing across both paths during normal operation — not hot/cold pairs where the "cold" side hasn't served a request in months and has quietly drifted out of a working state.

In practice: active/active edge behind a shared virtual IP; health checks that fail fast enough to matter and don't flap; deployment rollouts that drain connections gracefully; database replicas warmed by replication lag you actually monitor. The test isn't whether the architecture survives a clean failure — it's whether it survives a degraded node pretending to be healthy.
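The edge half of that, sketched as an nginx upstream behind a shared VIP (keepalived or similar assumed; IPs and thresholds are illustrative):

```nginx
# Active/active pair — both members carry real traffic during normal operation.
upstream app_tier {
    server 10.0.0.11:8080 max_fails=2 fail_timeout=5s;   # fail fast, but require
    server 10.0.0.12:8080 max_fails=2 fail_timeout=5s;   # two strikes to avoid flapping
    keepalive 32;
}
```

For graceful draining, `nginx -s quit` (as opposed to `-s stop`) lets in-flight requests complete before workers exit — that, plus removing the node from the upstream first, is what makes a rollout invisible.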

WordPress at scale

WordPress has a reputation for being slow. That reputation is deserved at default settings and undeserved once you've handled PHP-FPM sizing, object cache (Redis), FastCGI cache, a clean cache-purge workflow, and a disciplined plugin posture. I've run WordPress estates under real enterprise traffic with sub-100 ms TTFB and cache hit rates above 90% — not by replatforming, but by giving the platform the infrastructure it needed.
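The PHP-FPM sizing piece of that, as an illustrative pool config — `pm.max_children` here assumes roughly 8 GB of RAM budgeted for PHP at ~80 MB resident per worker; measure your own per-worker RSS before copying any number:

```ini
; Illustrative pool — derive pm.max_children from RAM budget / per-worker RSS.
[www]
pm = static                     ; no fork churn under steady enterprise traffic
pm.max_children = 100           ; e.g. 8 GB ÷ ~80 MB per worker
pm.max_requests = 1000          ; recycle workers to cap slow memory leaks
slowlog = /var/log/php-fpm/slow.log
request_slowlog_timeout = 2s    ; stack-trace anything that stalls the pool
```

With the pool static and sized to RAM, the failure mode under overload becomes queueing at nginx (visible, cacheable) rather than the box swapping (invisible until everything is slow).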

Core capabilities

  • OpenResty (Nginx + Lua) at High Concurrency
  • Nginx Configuration & Tuning
  • Kernel & TCP Tuning (sysctl · somaxconn · tw_reuse)
  • Event-Loop & epoll Internals
  • TLS Termination & Session Caching
  • Layered Caching (Varnish · FastCGI · proxy_cache · Redis)
  • Microcaching · Cache Stampede Control
  • CDN Integration & Edge Computing
  • Next.js · React · SSR / SSG / ISR / CSR
  • Payload CMS · Strapi · Ghost · Medusa
  • WordPress at Scale (PHP-FPM · Object Cache · FastCGI)
  • Traefik Reverse Proxy & Load Balancing
  • HashiCorp Nomad Orchestration
  • Active/Active HA · Graceful Draining
  • On-Premises Server Deployment
  • Platform Migration Architecture
  • WAF & Edge Security (ScaleShield)