
How PeakofEloquence.org Scaled to 490K Monthly Users

In early 2024, I launched PeakofEloquence.org — an open-source Islamic education platform. The goal was simple: make classical texts accessible to a global audience with clean, fast, mobile-friendly UI. What I didn't expect was a +13,661% growth spike that would push the platform past 490,000 monthly active users across 15+ countries in under a year.

This is the technical story of how that happened, what broke along the way, and what I'd do differently.


The Architecture

From day one, I made a bet on edge computing. The audience was always going to be global — France, UK, Netherlands, Spain, Indonesia, Pakistan — so latency mattered. Traditional origin-server architectures would mean 200-400ms round trips for users in Europe hitting a US-East origin. That's not acceptable for a content-heavy reading experience.

The stack:

  • Cloudflare Workers for edge-rendered pages and API routes
  • Cloudflare R2 for static asset storage (no egress fees)
  • Kubernetes for the containerized backend services
  • GitHub Actions for CI/CD with automated deployments
  • Cloudflare Analytics for real-time traffic monitoring

The key insight was treating Cloudflare's edge network as the primary compute layer, not just a CDN. Workers handle routing, content transformation, and caching logic at the edge. The origin cluster only gets hit for database writes and cache misses.
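That split can be sketched as a routing decision. The function below is illustrative, not the actual PeakofEloquence.org Worker code: it shows how a request can be classified at the edge so that only writes and cache misses reach the origin cluster.

```typescript
// Hypothetical sketch of the edge routing decision: which layer serves
// a given request. Names and the asset regex are assumptions for
// illustration, not the production code.

type Target = "edge-cache" | "r2" | "origin";

function routeRequest(method: string, path: string, cacheHit: boolean): Target {
  // Database writes always go to the origin cluster.
  if (method !== "GET" && method !== "HEAD") return "origin";
  // Static assets (images, fonts, bundles) are served straight from R2.
  if (/\.(png|jpe?g|webp|woff2?|css|js)$/.test(path)) return "r2";
  // Rendered pages come from the edge cache when present...
  if (cacheHit) return "edge-cache";
  // ...and only a cache miss falls through to the origin.
  return "origin";
}
```

In this scheme the origin's job shrinks to two things: answering cache misses and handling writes, which is what keeps it small and cheap.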


The Growth Spike

The growth wasn't gradual. It looked more like a step function. One week we were at ~30K monthly users. The next week, 150K. Then 300K. Then it kept climbing.

What happened: the platform got shared organically across WhatsApp groups, Telegram channels, and Islamic education communities in francophone Africa, the UK, and Western Europe. There was no marketing budget, no paid ads, no influencer campaign. People found the content useful and shared it.

From an infrastructure standpoint, this is the best and worst kind of growth. Best because it's authentic and sticky. Worst because you have zero warning and no time to prepare.

What Broke

1. Cache invalidation was too aggressive. Early on I had short TTLs (60 seconds) because I was iterating on content frequently. When traffic 10x'd overnight, every cache miss hit the origin. I was burning through Worker request limits. Fix: bumped TTLs to 24 hours for stable content and implemented stale-while-revalidate patterns.
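The fresh/stale/miss decision behind that fix can be sketched as a small pure function. The constants and names here are illustrative assumptions, not the production Worker: a cached page is served as-is while fresh, served stale with a background refresh during the revalidate window, and treated as a miss after that.

```typescript
// Illustrative stale-while-revalidate decision logic (not the actual
// Worker code). TTL and window sizes are assumptions for the sketch.

type CacheState = "fresh" | "stale-while-revalidate" | "miss";

const TTL_SECONDS = 24 * 60 * 60;   // 24h TTL for stable content
const SWR_WINDOW_SECONDS = 60 * 60; // serve stale up to 1h past the TTL

function cacheState(ageSeconds: number): CacheState {
  if (ageSeconds <= TTL_SECONDS) return "fresh";
  if (ageSeconds <= TTL_SECONDS + SWR_WINDOW_SECONDS) {
    return "stale-while-revalidate";
  }
  return "miss";
}
```

In HTTP terms this maps to a `Cache-Control: max-age=86400, stale-while-revalidate=3600` response header; the exact revalidate window is a tuning choice, not a magic number.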

2. Database connection pooling broke under autoscaling. The Kubernetes backend was opening too many database connections under load: each pod created its own connection pool, and with autoscaling kicking in, we hit the database's connection limit. Fix: moved to PgBouncer as a sidecar and set hard limits per pod.
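A minimal sketch of that sidecar arrangement, assuming the Bitnami PgBouncer image and transaction pooling; names, images, and numbers are illustrative, not the actual manifests:

```yaml
# Hypothetical Deployment fragment: PgBouncer runs beside the app in each
# pod and multiplexes app connections onto a hard-capped server pool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    spec:
      containers:
        - name: api
          image: peakofeloquence/api:latest   # app talks to localhost:6432
          env:
            - name: DATABASE_URL
              value: postgres://app@127.0.0.1:6432/peak
        - name: pgbouncer
          image: bitnami/pgbouncer:latest
          env:
            - name: PGBOUNCER_POOL_MODE
              value: transaction
            - name: PGBOUNCER_MAX_CLIENT_CONN
              value: "200"   # app-side connections multiplexed...
            - name: PGBOUNCER_DEFAULT_POOL_SIZE
              value: "10"    # ...onto at most ~10 server connections per pod
```

The point of the per-pod cap is that total database connections become predictable under autoscaling: N pods means at most roughly N × pool-size server connections, regardless of load.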

3. Image optimization was blocking renders. The original implementation processed images on-demand at the edge. Under heavy traffic, this created queues. Fix: moved to pre-processed image variants generated during the build step and served directly from R2.
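The pre-processed variant scheme can be sketched as two small functions. The naming convention and width set here are assumptions for illustration, not the actual build script: each source image is rendered at a fixed set of widths at build time and uploaded to R2 under a predictable key, so at request time the Worker just picks a key instead of processing pixels.

```typescript
// Illustrative variant naming and selection (hypothetical convention,
// not the production build step).

const VARIANT_WIDTHS = [320, 640, 1280] as const;

// Build step: "covers/nahj.jpg" -> "covers/nahj-640.webp", etc.
function variantKey(sourcePath: string, width: number): string {
  return sourcePath.replace(/\.\w+$/, `-${width}.webp`);
}

// Request time: smallest variant that still covers the viewport width,
// falling back to the largest one for very wide screens.
function pickVariant(sourcePath: string, viewportWidth: number): string {
  const width =
    VARIANT_WIDTHS.find((w) => w >= viewportWidth) ??
    VARIANT_WIDTHS[VARIANT_WIDTHS.length - 1];
  return variantKey(sourcePath, width);
}
```

Because the keys are deterministic, serving an image is a single R2 read with no compute in the request path.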


What I Learned

Edge-first is the right default for global audiences

Cloudflare Workers are genuinely fast. Sub-50ms response times in Europe, sub-100ms in most of the world. For content platforms, this alone is a competitive advantage. Users don't bounce when pages load instantly.

Organic growth tests your assumptions

When you're building for 1K users, you can get away with lazy caching, synchronous image processing, and generous database connection limits. When you suddenly have 490K users, every shortcut you took surfaces. The lesson: design for 10x your expected traffic from the start, even if you're building a "small" project.

Simplicity scales

The stack is intentionally boring. Cloudflare Workers, R2, Kubernetes, PostgreSQL. No microservices architecture, no event-driven saga patterns, no GraphQL federation. Just straightforward request-response with aggressive caching. That simplicity is what let me handle a 13,000% traffic increase as a solo developer.

Open source builds trust

Making the platform open-source was a deliberate choice. The audience is a community that values transparency and shared knowledge. Open-sourcing the code aligned with those values and built trust. Several community members contributed translations and content corrections through GitHub.


Current State

As of early 2026, PeakofEloquence.org serves 490K+ monthly active users across 15+ countries. The top traffic sources are France, the United Kingdom, the Netherlands, Spain, and several West African nations. The platform runs on a single Kubernetes cluster with Cloudflare handling the edge layer. Monthly infrastructure cost is under $50.

The project proved something I believe deeply: you don't need a team of 20 or a $2M seed round to build something that reaches hundreds of thousands of people. You need a focused product, solid infrastructure, and a community that cares about the content.


Tech Stack Summary

Layer            Technology
Edge Compute     Cloudflare Workers
Static Assets    Cloudflare R2
Backend          Kubernetes (containerized)
Database         PostgreSQL
CI/CD            GitHub Actions
Monitoring       Cloudflare Analytics
Cost             < $50/month

If you're building a content platform for a global audience, start at the edge. It's the single highest-leverage decision you can make.
