# Some architecture metrics run faster than a round-trip to your cloud API
There is a common assumption about architecture analysis: it is a slow, batch operation you run overnight or before a big refactoring. Something that takes minutes. Something you schedule, not something you wait for.
The numbers don’t support that assumption.
## What the measurements show
Here’s what Arxo spends its time on when analyzing Prettier — a 5,500-file TypeScript monorepo, a realistic medium-to-large codebase:
| Stage | Time |
|---|---|
| Full parse and call extraction (single pass) | ~172 ms |
| Resolution phase (building the call graph) | ~56–102 ms |
| End-to-end (both phases combined) | ~274 ms |
For context:
- A round-trip to a cloud API endpoint in the same region: ~10–30 ms
- A round-trip cross-region: ~50–150 ms
- A round-trip to an external third-party API: ~100–300 ms
Some stages of architecture analysis — specifically the resolution phase and individual metric computations — complete in less time than a call to an external service. Curvature inference (the math behind defect prediction using Ricci curvature) runs in under 1 ms per module on research datasets. The full SCC detection on Prettier comes in at 623 ms, about 3–5× faster than the equivalent in madge or dependency-cruiser.
These are not exceptional cases on tiny projects. These are measurements on real production-scale codebases.
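One reason SCC detection stays in sub-second territory: finding every dependency cycle in a module graph is a single linear-time DFS pass, O(V + E) in modules and imports. Here is a minimal Tarjan sketch over an adjacency-list graph, purely illustrative and not Arxo's actual implementation:

```typescript
// Tarjan's strongly connected components: one DFS pass, O(V + E).
// Illustrative sketch only -- not Arxo's actual code.
type Graph = Map<string, string[]>; // module -> modules it imports

function stronglyConnectedComponents(graph: Graph): string[][] {
  const index = new Map<string, number>();
  const lowlink = new Map<string, number>();
  const onStack = new Set<string>();
  const stack: string[] = [];
  const sccs: string[][] = [];
  let counter = 0;

  function visit(v: string): void {
    index.set(v, counter);
    lowlink.set(v, counter);
    counter++;
    stack.push(v);
    onStack.add(v);

    for (const w of graph.get(v) ?? []) {
      if (!index.has(w)) {
        visit(w);
        lowlink.set(v, Math.min(lowlink.get(v)!, lowlink.get(w)!));
      } else if (onStack.has(w)) {
        lowlink.set(v, Math.min(lowlink.get(v)!, index.get(w)!));
      }
    }

    // v is the root of an SCC: pop the whole component off the stack.
    if (lowlink.get(v) === index.get(v)) {
      const component: string[] = [];
      let w: string;
      do {
        w = stack.pop()!;
        onStack.delete(w);
        component.push(w);
      } while (w !== v);
      sccs.push(component);
    }
  }

  for (const v of graph.keys()) {
    if (!index.has(v)) visit(v);
  }
  return sccs;
}
```

Any component with more than one module is a dependency cycle; everything else falls out of the same single pass, which is why the cost scales with the number of files and imports rather than anything worse.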
## Why this matters
The performance of a check determines where you can run it.
If architecture analysis takes 45 minutes, you run it in a scheduled pipeline, check the results in the morning, and decide whether to act on findings that are already hours old. Architecture becomes an audit, not a signal.
If the same analysis takes 300 ms, the question changes. You can run it in a pre-commit hook. You can run it as part of a PR check that completes before the reviewer has finished reading the title. You can run it in watch mode during development and see structural feedback update as you write code.
Speed doesn’t just change where the check runs. It changes the developer experience entirely. A check that completes in under a second is one a developer will leave on. A check that completes in 45 minutes is one that gets disabled.
## Why architecture analysis was slow before
The slowness in tools like dependency-cruiser and madge is mostly parser overhead. Both are JavaScript tools parsing JavaScript/TypeScript — the parser, the analysis, and the output all run in the same Node.js process. On large codebases, the parsing bottleneck is real: dependency-cruiser takes 3,226 ms on Prettier; madge takes 2,310 ms.
Arxo is 3–5× faster end-to-end, and individual metric computations happen against an already-built graph — so each additional metric adds very little overhead on top of the initial analysis.
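The "already-built graph" point is what keeps the marginal cost low: once the dependency graph is in memory, each additional metric is just a traversal. As an illustration, here is propagation cost, the average fraction of the codebase each module can reach transitively, computed with plain BFS. The definition follows the common MacCormack-style formulation and is not necessarily Arxo's exact formula:

```typescript
// Propagation cost on a prebuilt dependency graph: the average fraction
// of the codebase each module can reach through transitive dependencies.
// Runs against the graph already in memory, so it adds only traversal
// time on top of the initial parse. Illustrative sketch, not Arxo's code.
type Graph = Map<string, string[]>;

function propagationCost(graph: Graph): number {
  const nodes = [...graph.keys()];
  let reachablePairs = 0;

  for (const start of nodes) {
    // BFS from each module, counting reachable modules (including itself).
    const seen = new Set<string>([start]);
    const queue = [start];
    while (queue.length > 0) {
      const v = queue.shift()!;
      for (const w of graph.get(v) ?? []) {
        if (!seen.has(w)) {
          seen.add(w);
          queue.push(w);
        }
      }
    }
    reachablePairs += seen.size;
  }
  return reachablePairs / (nodes.length * nodes.length);
}
```

Nothing here touches the filesystem or a parser; that separation is what makes each extra metric nearly free once the graph exists.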
## Where you can run architecture checks today
Given these numbers, here are the boundaries that have changed:
- **Pre-commit hooks:** yes, for most codebases. SCC and basic structural metrics complete in under a second for projects up to tens of thousands of files. A hook that finishes in under 500 ms is invisible to the developer.
- **PR checks:** definitely. A 300–600 ms analysis is a rounding error in CI/CD pipeline time. If your test suite takes 5 minutes, an architecture gate adds nothing perceptible.
- **Editor / watch mode:** viable for fast metrics (SCC, propagation cost, centrality). Subsequent runs are significantly faster than the first.
- **Scheduled pipeline only:** still the right place for the heaviest metrics (git history analysis, topology on very large graphs), where computation scales with history depth rather than file count.
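A pre-commit hook of this kind only needs to spawn the analysis and propagate its exit code. A minimal Node sketch follows; note that the premise that `arxo analyze` signals findings with a nonzero exit code is our assumption here, not documented behavior:

```typescript
// Minimal pre-commit runner: spawn a check and block the commit if it
// fails. Assumes the checker signals violations via a nonzero exit code;
// that exit-code contract is an assumption, not documented Arxo behavior.
import { spawnSync } from "node:child_process";

function runCheck(cmd: string, args: string[]): number {
  const result = spawnSync(cmd, args, { stdio: "inherit" });
  return result.status ?? 1; // treat a failure to spawn as a failed check
}

// Wire it up from .git/hooks/pre-commit (or via husky/lefthook):
//   process.exit(runCheck("npx", ["arxo", "analyze"]));
```

Because the check itself finishes in a few hundred milliseconds, the hook stays well under the threshold where developers start reaching for `--no-verify`.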
## The claim, stated precisely
We don’t say “architecture analysis is fast.” We say: many individual metric computations run faster than a round-trip to a remote API — and that means architecture feedback can live in the same feedback loops as tests and type checks, not in a separate overnight batch.
If you’ve been treating architecture analysis as a slow operation, it’s worth re-examining that assumption on your codebase:
```shell
npx arxo analyze
```