How I scaled a frontend platform to 18 apps

8 min read · Feb 19, 2026

Four years. One codebase. A platform that kept growing faster than the tools that managed it. This is the story of every infrastructure problem I ran into - and what I built to solve them.


The Routing Problem - Building Filesystem Routing Without Next.js

With 11 sub-applications and growing, manual routing had become a real bottleneck. Every new page required wiring an object like this:

// src/apps/crm/routes/routes.ts
[{
  path: '/customers',
  component: CustomersTable,
  exact: true,
  access: 'customers.view',
  breadcrumb: registerPath('/customers', () => {
    return t('Customers');
  }),
}];

Route path, breadcrumb registration, permissions, component import - all by hand, every time. It was tedious, error-prone, and made onboarding slower than it needed to be.

The obvious answer was Next.js, but migrating a large established CRA codebase wasn't something we could do safely at that point. So I built the next best thing: a custom webpack loader that implemented Next.js-style filesystem routing inside CRA using glob-based file discovery. Drop a file in the directory, and the route exists.
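The core of the convention is a pure mapping from file path to route path. Here is a minimal sketch of that mapping, assuming a Next.js-style `pages` directory convention (the function name and dynamic-segment handling are illustrative, not the actual loader code):

```javascript
// Sketch: map a discovered page file to a React Router route path.
// Assumed convention, mirroring Next.js filesystem routing:
//   pages/index.tsx          -> /
//   pages/customers.tsx      -> /customers
//   pages/customers/[id].tsx -> /customers/:id
function fileToRoutePath(relativeFile) {
  return (
    '/' +
    relativeFile
      .replace(/\.(tsx|ts|jsx|js)$/, '') // drop the extension
      .replace(/(^|\/)index$/, '')       // index files map to the folder itself
      .replace(/\[([^\]]+)\]/g, ':$1')   // [id] segments become :id params
  );
}

// At bundle time, the loader globs the pages directory and feeds
// every discovered file through a mapping like this one.
console.log(fileToRoutePath('customers.tsx'));      // /customers
console.log(fileToRoutePath('customers/[id].tsx')); // /customers/:id
```

The loader itself only has to glob the directory, run each file through this mapping, and emit the import statements.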

On top of routing, I embedded access control and breadcrumbs directly into the convention via a single exported config:

// src/apps/crm/pages/customers.tsx
export default function CustomersTable() { ... }
export const config = {
  title: t('Customers'),
  access: 'customers.view',
}

One file. One constant. Route, breadcrumb, and permission - all handled automatically.

Results

  • Eliminated manual route configuration.
  • Breadcrumb and access control wired automatically.
  • Faster onboarding for new developers.

🔗 Deep dive: "Building Filesystem Routing in CRA with a Custom Webpack Loader"


Migrating to Next.js - Without Breaking Anything

CRA was being deprecated. We had to move. But a direct migration to Next.js broke immediately - env variable naming conventions were different, static file serving worked differently, and Next.js's SSR-first architecture conflicted with a four-year-old codebase full of localStorage and window calls that we couldn't safely refactor overnight.

Rather than forcing a big-bang rewrite, I broke the migration into a series of incremental steps over a few months - each one making the codebase look a little more like Next.js, without ever breaking what was already working:

  • Migrated routes to a codegen approach inspired by Next.js conventions
  • Patched react-scripts with patch-package to align env variable naming
  • Made the app fully CSR to avoid SSR conflicts with localStorage and window
  • Resolved a Mapbox worker transpilation issue that differed between CRA and Next.js
  • Migrated static asset handling and removed arbitrary global CSS imports incompatible with Next.js
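The env variable step, for example, comes down to a prefix change: CRA exposes client-side variables as `REACT_APP_*`, while Next.js requires `NEXT_PUBLIC_*`. A hypothetical rewrite pass (not the actual patch, which went through patch-package) might look like this:

```javascript
// Sketch (illustrative helper, not the real patch): rewrite CRA-style
// env references to the Next.js public prefix in a source string.
// CRA exposes client env vars as REACT_APP_*, Next.js as NEXT_PUBLIC_*.
function migrateEnvReferences(source) {
  return source.replace(/REACT_APP_/g, 'NEXT_PUBLIC_');
}

const before = 'const apiUrl = process.env.REACT_APP_API_URL;';
console.log(migrateEnvReferences(before));
// const apiUrl = process.env.NEXT_PUBLIC_API_URL;
```

Running a pass like this over the codebase (plus renaming the variables in the `.env` files themselves) keeps both build systems happy during the transition.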

By keeping React Router and full CSR, we got something most Next.js apps don't have by default - instant client-side navigation with no server roundtrips.

By the time the actual migration commit landed, the codebase was already so aligned that moving to Next.js touched 18 files across a 4,000-file, 200-page, 13-app codebase - +878 / -4841 lines. Rolling back, if needed, was a single git revert.

Results

  • Final migration commit: 18 files changed across a 4,000-file codebase
  • Zero breaking changes in production
  • Instant client-side navigation in Next.js - no server roundtrips
  • Full rollback possible with a single commit revert

🔗 Deep dive: "How I Migrated a 4-Year-Old CRA App to Next.js Without Breaking Anything"


The i18n Overhaul - Three Tools, One Pipeline

The internationalisation story is three separate problems that compounded over time - each solution unlocking the next.

Consolidating Locale Files

Each sub-application had its own locale directory - /apps/<n>/locales/en.json, es.json, ca.json. The intent was isolation, but in practice translation keys were duplicated across apps, and the app was loading and merging dozens of locale files on startup.

The fix: merge everything into a single /src/locales/en.json per language, deduplicating overlapping keys in the process. One file per language.
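The merge itself is simple in the common case; the interesting part is detecting keys that exist in several apps with different translations, since those need a human decision. A minimal sketch of the merge step (function and variable names are illustrative):

```javascript
// Sketch: merge per-app locale objects into one map per language,
// deduplicating identical keys and flagging conflicting translations.
function mergeLocales(localeObjects) {
  const merged = {};
  const conflicts = [];
  for (const locale of localeObjects) {
    for (const [key, value] of Object.entries(locale)) {
      if (key in merged && merged[key] !== value) {
        conflicts.push(key); // same key, different translation: needs review
      }
      merged[key] ??= value; // first writer wins for exact duplicates
    }
  }
  return { merged, conflicts };
}

const crm = { Confirm: 'Confirmar', Customers: 'Clientes' };
const billing = { Confirm: 'Confirmar', Invoices: 'Facturas' };
const { merged } = mergeLocales([crm, billing]);
console.log(Object.keys(merged).length); // 3 unique keys instead of 4
```

Run once per language, this collapses dozens of startup file loads into a single JSON import.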

Result: bundle size reduced by 1.62MB - 77% smaller locale files.

Refactoring 10,000+ Translation Keys with a Codemod

The deeper problem was how we were using i18next. Keys followed nested path conventions like t('common.actions.confirm') - nobody could remember them, duplicates were everywhere, and reading the code gave you no idea what would actually render on screen.

The fix was a simple insight: use the display text itself as the key. t('Confirm') instead of t('common.actions.confirm'). Instantly readable. No duplicates possible. And crucially - it makes automatic translation possible, because the key is now self-describing.

But 10,000+ usages across the codebase needed updating, so I wrote a codemod:

  1. Find all t('path.key') usages across the codebase
  2. Look up the actual display value in the JSON
  3. Swap: t('common.actions.confirm') → t('Confirm')
  4. Rebuild the locale files with the new flat key structure

Result: ~800KB removed from the bundle - shorter keys repeated across thousands of call sites add up.

🔗 Deep dive: "Refactoring 10,000 Translation Keys with a Custom Codemod"

Automatic Translation + CI Linter

With self-describing keys, automatic translation became possible. Every new t('Some new text') previously meant opening three locale files and manually translating into Spanish and Catalan - tedious, slow, and easy to forget.

I replaced that entirely with two tools:

  • npm run i18n - scans the codebase for all used keys, checks which are missing from locale files, and calls the Google Translate API to fill them in automatically
  • A CI linter built on top of i18next-parser - catches missing keys in merge requests before they ever ship. I built two versions: one in Node.js and a second in Deno for faster CI execution
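The core check both tools share is a diff between keys used in code and keys present in a locale file. A minimal sketch, with key extraction simplified to a regex (the real tool builds on i18next-parser):

```javascript
// Sketch of the linter's core check: which keys referenced in source
// are missing from a given locale file.
function findMissingKeys(source, locale) {
  const used = [...source.matchAll(/t\('([^']+)'\)/g)].map((m) => m[1]);
  return [...new Set(used)].filter((key) => !(key in locale));
}

const code = "t('Confirm'); t('Delete customer');";
const es = { Confirm: 'Confirmar' };
console.log(findMissingKeys(code, es)); // [ 'Delete customer' ]
```

In CI the result fails the pipeline; in `npm run i18n` the same list is sent to the Google Translate API and written back into the locale files.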

Results

  • Bundle size: 1.62MB removed from locale file consolidation
  • Additional ~800KB removed from key refactor
  • Zero manual translations
  • Zero missing keys reaching production

🔗 Deep dive: "Reduced bundle size by 28% by rearchitecting the entire i18n layer"


Migrating 364 Files to TailwindCSS in Just a Second

When the team decided to adopt TailwindCSS, we had 364 files using inline styles - style={{ display: 'flex', padding: '16px' }} - everywhere. Migrating manually meant matching every value to its Tailwind equivalent, merging classnames by hand, and doing it carefully enough not to introduce bugs. At a sustainable pace of 20โ€“30 files a day, that's two weeks of work.

I searched for an existing tool to automate it. Nothing existed. So I wrote one.

Using JSCodeShift, I built a codemod that transformed inline styles to Tailwind classes at the AST level. The basic case is straightforward - style={{ display: 'flex' }} becomes className="flex". The edge cases are where it gets interesting:

  • Dynamic values: style={{ display: props.display }} - can't be statically converted, left in place automatically
  • Template literals: backtick expressions have a different AST node signature than string literals and require separate parsing logic
  • Merging with existing className: if className is a variable rather than a string literal, you can't simply concatenate - the expression has to be preserved as-is
  • Style/className conflicts: <div style={{ display: 'flex' }} className="block" /> - inline style takes precedence in the browser, so block must be removed from className entirely
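The heart of the transform is a mapping from statically known style declarations to Tailwind classes, with everything unmapped left in place. A simplified sketch with a tiny illustrative lookup table (the real codemod works on the AST via JSCodeShift and covers far more properties):

```javascript
// Sketch of the mapping step. Statically known declarations become
// Tailwind classes; dynamic or unmapped ones stay as residual style.
const TAILWIND_MAP = {
  'display:flex': 'flex',
  'display:block': 'block',
  'padding:16px': 'p-4',
  'alignItems:center': 'items-center',
};

function stylesToTailwind(style) {
  const classes = [];
  const residual = {};
  for (const [prop, value] of Object.entries(style)) {
    const tw = TAILWIND_MAP[`${prop}:${value}`];
    if (tw) classes.push(tw);
    else residual[prop] = value; // dynamic/unmapped: left as inline style
  }
  return { className: classes.join(' '), residual };
}

console.log(stylesToTailwind({ display: 'flex', padding: '16px' }));
// { className: 'flex p-4', residual: {} }
```

Only when `residual` comes back empty does the codemod delete the `style` prop entirely; otherwise the leftover declarations stay inline, which is what keeps the transform safe to run across all 364 files at once.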

Results

  • 364 files migrated in a single script execution
  • ~2 weeks of manual work → done in one run
  • Eliminated style/className conflicts across the entire codebase
  • Performance improved: No more inline style objects, no more unneeded re-renders
  • Consistent output across the entire codebase, zero human error

🔗 Deep dive: "Writing a JSCodeShift Codemod to Migrate Inline Styles to TailwindCSS"


MUI to TailwindCSS - A 12x Performance Gain

Migrating to React 19 surfaced a problem we knew was coming: MUI v4 was incompatible. It relied on makeStyles - a CSS-in-JS solution that injects styles into the DOM at runtime via <style> tags, triggering style recalculation on every render cycle. It also used React's legacy context API and several internal APIs removed in React 19.

I read through the MUI v4 source code component by component - tracing internal hooks, theming APIs, and the makeStyles runtime to understand exactly what each component was doing - then re-implemented them in TailwindCSS. The key architectural shift was moving from runtime style injection to static Tailwind utility classes resolved at build time. That's not a cosmetic change. It's a fundamentally different rendering model.

The most striking example: the Autocomplete component rendered 12x faster after the rewrite.

Scope

  • 25 components fully re-implemented
  • 40 makeStyles usages migrated to Tailwind classes
  • 110 useTheme hooks replaced
  • 10 withStyles HOCs removed

All with no visual regressions in production.

Results

  • 3xโ€“12x faster render times across re-implemented components
  • 20% bundle size reduction from removing MUI entirely

🔗 Deep dive: "How Re-implementing MUI in TailwindCSS Improved Performance by 12x"


From Webpack Loader to Codegen - Enabling Instant Reloads

Our custom webpack loader had become a ceiling. Turbopack was available in Next.js but couldn't be enabled - Turbopack doesn't support custom webpack loaders. The cost was real: dev server startup took ~30 seconds, hot reload took ~5 seconds. For a team of 6 shipping features every day, that compounds fast.

The solution was to extract the core idea out of webpack entirely. Instead of a custom loader using glob to discover route files at bundle time, I wrote a standalone codegen script that does the same thing - globs the filesystem, discovers route files, and generates a static .routes.autogen.ts file with all the imports. Same result, no webpack dependency. No loader, Turbopack enabled.
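The generation step can be sketched like this. File names, import shapes, and the `Page0`/`config0` naming are illustrative, not the actual generated code:

```javascript
// Sketch of the codegen step: given discovered page files, emit the
// static routes module that replaces the webpack loader's output.
function generateRoutesModule(pageFiles) {
  const imports = [];
  const entries = [];
  pageFiles.forEach((file, i) => {
    const base = file.replace(/\.tsx$/, '');
    const routePath = '/' + base.replace(/(^|\/)index$/, '');
    imports.push(
      `import Page${i}, { config as config${i} } from './pages/${base}';`
    );
    entries.push(`  { path: '${routePath}', component: Page${i}, config: config${i} },`);
  });
  return [
    '// AUTO-GENERATED - do not edit. Run the codegen script instead.',
    ...imports,
    'export const routes = [',
    ...entries,
    '];',
  ].join('\n');
}

// In the real script, the file list comes from globbing the pages
// directory, and the output is written to .routes.autogen.ts.
console.log(generateRoutesModule(['customers.tsx']));
```

Because the generated file is plain TypeScript with static imports, any bundler can consume it - which is exactly what unlocked Turbopack.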

Results

  • Dev server startup: ~30s → ~10s (67% faster)
  • Hot reload: ~5s → near-instant
  • Estimated ~30 minutes saved per day across the team - 6 developers, dozens of save-refresh cycles each, every single day

Zero-Config Sub-Application Creation

As the platform grew, a small friction kept accumulating: every new sub-application required manually editing a central apps.config.ts file to register it. With 6 developers working in parallel, this file became a merge conflict magnet - two developers starting new apps in the same sprint would inevitably collide on the same lines.

The fix was to extend the same filesystem discovery to sub-applications themselves, removing the central registry entirely. Creating a new sub-application now takes minutes:

  1. Create a directory: /crm
  2. Add config.ts and a /pages folder
  3. Done - discovered, routed, access managed, and running

The Changelog Generator

Deployments were happening regularly, but users had no way to know what had changed. Hardcoding release notes meant a redeploy just to update text - so I built something simpler: control the entire release note experience through a markdown file.

A changelog generator reads git logs and produces a structured markdown file that developers review and edit before publishing, rendered in-app by a custom Next.js markdown renderer.

---
version: '1.94.7'
date: '2026-01-31T13:45:57.491Z'
---
 
### feat(Cell Contracts): Roaming feature added [[tour:roaming-feature]]
 
### feat(Calendar): Users info shown in calendar

The detail worth highlighting is the custom markdown syntax: [[tour:roaming-feature]]. That token compiles to a Start Tour button in the UI, launching the corresponding interactive product tour at exactly the moment a user is reading about the feature. No separate onboarding flow. The changelog is the onboarding.
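The token compile step can be sketched as a small transform over the rendered markdown. The output markup here is illustrative; the real renderer is a custom Next.js component that wires the button to the tour engine:

```javascript
// Sketch: compile the custom [[tour:<id>]] token into markup the
// changelog renderer can turn into a Start Tour button.
function compileTourTokens(markdown) {
  return markdown.replace(
    /\[\[tour:([a-z0-9-]+)\]\]/g,
    (_, tourId) => `<button data-tour-id="${tourId}">Start Tour</button>`
  );
}

const line =
  '### feat(Cell Contracts): Roaming feature added [[tour:roaming-feature]]';
console.log(compileTourTokens(line));
```

Because the token lives in the markdown itself, linking a tour to a release note is a one-line edit for whoever reviews the generated changelog.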

Results

  • 509 versions shipped through this system
  • 6 interactive tours linked directly from release notes
  • Zero hardcoded release notes - no redeploys just to update text

Every problem in this post had the same shape: the platform grew faster than the tools that managed it. My job, consistently, was to close that gap - identify the friction, build the tool, and make sure the new system was meaningfully easier than what it replaced.

Good infrastructure means your team delivers faster without knowing why.


Questions, feedback, or want to talk architecture? Reach me at alirezawbhr@gmail.com