#1: A “Bespoke” In-House Authentication System
The Choice:
Five years ago, I was leading a greenfield product. Instead of leaning on OAuth or OpenID Connect providers, I decided — in the name of “full control” — to build a custom authentication system from scratch.
Why I Did It:
✅ Wanted deep understanding of session mechanics
✅ Thought vendor lock-in would bite us
✅ Seemed “simple enough” (famous last words)
Why I Regret It:
We spent months rebuilding flows (password resets, 2FA, account recovery) that Auth0 or Cognito could have handled on day one.
Security reviews consumed weeks every release.
We reinvented the wheel badly: custom JWT signing, user lookup, session management, and constant fixes for edge-case bugs.
Audits were nightmares because we lacked an external standard to point to.
What I’d Do Differently:
👉 Use a proven identity provider from the start (e.g., Auth0, Okta, or Cognito).
👉 Keep custom logic limited to user profile enrichment, but offload authentication entirely.
Source reflections: Even senior engineers on r/webdev routinely warn: “Rolling your own auth is security debt you’ll pay forever.”
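To make those "edge-case bugs" concrete, here's a minimal sketch of the kind of HS256 signing and verification code we ended up maintaining by hand (hypothetical reconstruction, using only Node's built-in crypto module). Even the signature comparison is a trap: a naive `===` check leaks timing information, which is exactly the class of detail an identity provider handles for you.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// base64url without padding, as the JWT spec requires
const b64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

function signToken(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

function verifyToken(token: string, secret: string): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  const expected = createHmac("sha256", secret).update(`${parts[0]}.${parts[1]}`).digest();
  const given = Buffer.from(parts[2].replace(/-/g, "+").replace(/_/g, "/"), "base64");
  // timingSafeEqual throws on length mismatch, so guard first;
  // comparing with === instead would open a timing side channel
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

And this sketch still ignores expiry, key rotation, algorithm confusion attacks, and revocation — each of which cost us another release cycle.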
#2: Writing a “Homegrown” Job Queue
The Choice:
On another project, I wrote a bare-bones job queue using Postgres advisory locks and a few setTimeout calls, because I didn’t want to bring in Redis or RabbitMQ.
Why I Did It:
✅ Fewer moving parts
✅ Keep infra “simple”
✅ Optimistic that our jobs were lightweight
Why I Regret It:
Concurrency control was a constant nightmare.
We had zero visibility (no retries, no dead-letter queue, no metrics).
Maintenance spiraled as we added priorities, rate-limiting, delays — basically rebuilding BullMQ by hand.
One crash corrupted job state because the locking code had a subtle bug.
What I’d Do Differently:
👉 Reach for BullMQ (or better yet Temporal, or even Sidekiq on Ruby) for any serious async work.
👉 Pay the infra cost of a real message broker up front.
👉 Monitor queues with real observability tools like BullBoard, Celery Flower, or CloudWatch SQS metrics.
Source reflections: The BullMQ docs even list why rolling your own usually ends in tears.
#3: Coupling Frontend and Backend in a Monorepo Without Contracts
The Choice:
I had the noble idea of a monorepo housing both a React frontend and a Node API. The intention was to keep dev flow “smooth,” share models, and ship features faster.
Why I Did It:
✅ Single source of truth
✅ Shared types
✅ Smoother pull requests
Why I Regret It:
No clear API contract boundaries — the React code ended up reaching deep into server-only types.
Merge conflicts constantly blocked unrelated teams.
CI/CD pipelines became a rat’s nest of triggers and partial builds.
Hard to swap backend versions because the frontend implicitly depended on internal server functions.
Scaling teams was painful because the monorepo forced artificial coupling.
What I’d Do Differently:
👉 Keep the monorepo, but enforce a strict contract via a GraphQL or OpenAPI schema, never importing server models directly.
👉 Add independent deployment pipelines for the frontend and the API.
👉 Version the shared types explicitly rather than blindly reusing backend data classes.
Source reflections: Even strong monorepo advocates on Stack Overflow advise contract-first design; otherwise you end up with a monolith disguised as a monorepo.
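What "never importing server models directly" looks like in practice: the frontend consumes a small, versioned DTO and validates it at the boundary, so server-internal fields can never leak across. A minimal dependency-free sketch (the names `UserDTO` and `parseUserDTO` are illustrative; in a real codebase this would typically be generated from an OpenAPI or GraphQL schema):

```typescript
// The only shape the frontend knows about — a versioned contract,
// NOT the server's internal User model.
type UserDTO = { id: string; displayName: string };

// Runtime guard at the API boundary: reject malformed payloads and
// deliberately drop server-only fields (passwordHash, internal flags, ...).
function parseUserDTO(raw: unknown): UserDTO {
  if (typeof raw !== "object" || raw === null) throw new Error("UserDTO: not an object");
  const o = raw as Record<string, unknown>;
  if (typeof o.id !== "string") throw new Error("UserDTO: missing id");
  if (typeof o.displayName !== "string") throw new Error("UserDTO: missing displayName");
  return { id: o.id, displayName: o.displayName };
}
```

Because the DTO is rebuilt field by field, a backend refactor that renames an internal column fails loudly at the boundary instead of silently breaking React components three directories away.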
Final Takeaway
Every one of these choices felt “logical” in the moment, but ended up:
❌ Sapping engineering time
❌ Increasing cognitive load
❌ Becoming a huge maintenance burden
The next time you build a stack:
Lean on proven platforms for security
Use purpose-built tools for infrastructure
Preserve modular boundaries, even in monorepos
You can still build beautiful, composable systems — but you don’t need to do everything from scratch to prove your skill.