Cognitive complexity matters more than algorithmic complexity
Your startup doesn’t slow down because of slow code or inefficient algorithms. It slows down because every week, someone stares at a function for 20 minutes trying to figure out what it does, then writes a workaround because they’re not confident enough to change it.
That’s cognitive complexity. Not how fast the machine processes your code, but how fast a human can read it, trust it, and build on top of it. For most startups, the human is the bottleneck, and it’s not even close.
It starts with the wrong trade-off
Take a car marketplace where a dealership dashboard needs to show which of their listings need attention: stale pricing, low views, missing history checks. A dealership has 30 to 80 active listings. The “efficient” version computes everything in one query:
SELECT l.id, l.title, l.price,
  CASE WHEN l.price_updated_at < NOW() - INTERVAL '30 days' THEN true ELSE false END AS stale_price,
  CASE WHEN (
    SELECT COUNT(*) FROM listing_views lv WHERE lv.listing_id = l.id
  ) < 10 THEN true ELSE false END AS low_views,
  CASE WHEN NOT EXISTS (
    SELECT 1 FROM history_checks hc WHERE hc.listing_id = l.id
  ) AND d.region IN ('GB', 'DE', 'FR') THEN true ELSE false END AS missing_history_check
FROM listings l
JOIN dealerships d ON l.dealership_id = d.id
WHERE l.dealership_id = $1 AND l.status = 'active';
Or you fetch the dealership’s listings and compute the flags in code:
const dealership = await getDealership(dealershipId);
const listings = await getActiveListings(dealershipId);
const requiresHistoryCheck = EU_REGIONS.includes(dealership.region);

const flags = [];
for (const listing of listings) {
  const viewCount = await countViews(listing.id);
  flags.push({
    id: listing.id,
    title: listing.title,
    stalePrice: daysSince(listing.priceUpdatedAt) > 30,
    lowViews: viewCount < 10,
    missingHistoryCheck: requiresHistoryCheck && !listing.historyCheck,
  });
}
The query is faster on paper. The loop is the one someone can debug at 11pm when a dealership complains their dashboard is wrong. At 50 listings, no one will notice the performance difference. Everyone will notice the readability one.
That trade-off is visible. You can compare both versions and pick the clearer one. The cognitive complexity that actually slows teams down doesn’t look like a trade-off at all. It starts small, one shortcut at a time.
Then someone writes a clever line
You’ve seen this. A one-liner that does 5 things, written by someone who was proud of it. A chain of ternaries, a reduce that does 3 things at once, a regex that handles 5 edge cases. It works, it’s compact, and it takes the next person 20 minutes to understand what it does.
Say you’re building a car marketplace and you need to check whether a buyer can reserve a vehicle:
const canReserve = l.active && !l.reserved && (l.seller?.verified || l.dealership?.tier === 'premium') && buyer.identityChecked;
Now the readable version:
// listing must be available, seller must be trusted, buyer must be verified
const isAvailable = listing.active && !listing.reserved;
const isTrustedSeller = listing.seller?.verified || listing.dealership?.tier === 'premium';
const canReserve = isAvailable && isTrustedSeller && buyer.identityChecked;
Same logic, 3 named conditions, one comment that gives you the business rule before you read the code. The first version requires you to mentally parse every clause to understand what’s being checked. The second tells you in plain English, then lets you verify.
Most cognitive complexity isn’t in the syntax (linters and static analysis can catch that). It’s in how you name things, where you draw boundaries, and what you leave implicit. In a startup, those choices compound fast.
Then you abstract what looks similar
A clever line is a local problem: one function, one fix. Abstractions are worse because they spread. Someone sees similar patterns across the codebase and consolidates them into a shared service.
In a car marketplace, dealership listings, private seller listings, and auction listings all need validation, pricing, and seller verification. On day one, the overlap looks significant: every listing has a title, a price, a seller, and a set of photos. So someone builds a shared ListingService:
class ListingService {
  async validate(listing, seller) {
    assert(listing.title, 'Title required');
    assert(listing.price, 'Price required');
    assert(listing.photos.length > 0, 'At least one photo required');
    assert(seller.name, 'Seller name required');
  }
}
4 shared checks, clean and simple. Then the product evolves. Dealerships need warranty info and history checks. Auctions need history checks too, but also identity verification and a reserve price instead of a fixed price. Private sellers need identity verification and a contact phone. Some checks apply to 2 of the 3 types but not all 3. 6 months later:
class ListingService {
  async validate(listing, seller) {
    assert(listing.title, 'Title required');
    assert(listing.photos.length > 0, 'At least one photo required');
    if (seller.type !== 'auction') {
      assert(listing.price, 'Price required');
    }
    if (seller.type === 'auction') {
      assert(listing.reservePrice, 'Reserve price required');
      assert(listing.auctionEnd, 'Auction end date required');
    }
    if (seller.type === 'dealership' || seller.type === 'auction') {
      assert(listing.historyCheck, 'History check required');
    }
    if (seller.type === 'dealership') {
      assert(listing.warranty, 'Warranty required');
      if (seller.region === 'EU') {
        assert(listing.emissionsClass, 'Emissions class required for EU');
      }
    }
    if (seller.type === 'private' || seller.type === 'auction') {
      assert(seller.identityVerified, 'Seller identity must be verified');
    }
    if (seller.type !== 'dealership') {
      assert(listing.contactPhone, 'Contact phone required');
    }
  }
}
The shared body is now 2 lines (title and photos). Everything else is branching, and the branches overlap in unpredictable ways: dealership and auction share history checks, private and auction share identity verification, non-dealership types share contact phone.
You can’t look at any single seller type and see its full validation without tracing every condition. A developer fixing auction validation has to read the entire function to be sure they haven’t missed a check that applies to auctions via !== 'dealership'. The code is shared in name only.
A wrong abstraction costs more than the duplication it was trying to prevent, because by the time you realise it’s wrong, everything depends on it. Split it:
function validateDealershipListing(listing, seller) {
  assert(listing.title, 'Title required');
  assert(listing.photos.length > 0, 'At least one photo required');
  assert(listing.price, 'Price required');
  assert(listing.historyCheck, 'History check required');
  assert(listing.warranty, 'Warranty required');
  if (seller.region === 'EU') {
    assert(listing.emissionsClass, 'Emissions class required for EU');
  }
}

function validatePrivateListing(listing, seller) {
  assert(listing.title, 'Title required');
  assert(listing.photos.length > 0, 'At least one photo required');
  assert(listing.price, 'Price required');
  assert(seller.identityVerified, 'Seller identity must be verified');
  assert(listing.contactPhone, 'Contact phone required');
}

function validateAuctionListing(listing, seller) {
  assert(listing.title, 'Title required');
  assert(listing.photos.length > 0, 'At least one photo required');
  assert(listing.reservePrice, 'Reserve price required');
  assert(listing.auctionEnd, 'Auction end date required');
  assert(listing.historyCheck, 'History check required');
  assert(seller.identityVerified, 'Seller identity must be verified');
  assert(listing.contactPhone, 'Contact phone required');
}
Yes, title and photos are checked 3 times. That’s 2 duplicated lines. The trade-off is that each function is a complete picture of what a valid listing looks like for that seller type. A developer working on auction validation doesn’t scroll past dealership warranty logic. A bug in the EU emissions check can’t break private seller listings. And each function is trivially testable: fixed inputs, fixed assertions, no branches to cover.
The best abstractions emerge from duplication, not from prediction. If you’ve written the same pattern 3 times, extract it. If you’re writing an abstraction before the second use case exists, you’re guessing.
This doesn’t mean never abstract. The abstractions that survive are the structural ones: the orchestration, the “validate then persist then notify” skeleton. A createListing function that calls validate, save, and notifySeller in sequence is fine, the order of operations is genuinely shared. But what validate does for a dealership versus an auction is not. Keep the orchestration shared, keep the domain logic separate.
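A minimal sketch of that split, with the validator bodies and persistence helpers as illustrative stand-ins rather than a real API:

```javascript
// Per-type domain logic stays separate; only the skeleton is shared.
const validators = {
  dealership: (listing, seller) => { /* dealership-specific checks */ },
  private: (listing, seller) => { /* private-seller checks */ },
  auction: (listing, seller) => { /* auction checks */ },
};

// Stand-in for a database insert.
async function saveListing(listing) {
  return { ...listing, id: 'listing-1' };
}

// Stand-in for an email or push notification.
async function notifySeller(seller, listing) {}

// The genuinely shared part: the order of operations.
async function createListing(listing, seller) {
  const validate = validators[seller.type];
  if (!validate) throw new Error(`Unknown seller type: ${seller.type}`);
  validate(listing, seller);                // domain logic: per type
  const saved = await saveListing(listing); // persistence: shared
  await notifySeller(seller, saved);        // notification: shared
  return saved;
}
```

Adding a fourth seller type means writing one new validator and one new map entry, without touching the other three.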
Not every divergence is clean enough to split, though. Sometimes the differences are subtle, a regional rule here, a policy exception there. Not worth a whole new function, but too specific for the shared one. That’s when the flags start appearing.
Then the flags start multiplying
“We’ll add a boolean, gate the new behaviour, ship it.” Each flag is a small, reasonable decision. The problem is what happens when they accumulate.
In a car marketplace, this gets out of hand quickly. Dealerships have different financing rules, different identity verification requirements, different rules about whether a history check is mandatory before listing. Private sellers have a different set entirely. Some regions require a history check before a listing can go live, others don’t. Feature flags, environment overrides, seller-specific behaviour, regional rules, they all stack up.
10 boolean flags give you over 1,000 possible configurations, and no one is testing all of those. The real cost isn’t the flags themselves, it’s the interactions between them.
A flag that gates a feature is simple. A flag that determines whether a listing can go live, which also depends on the seller type, which also depends on the seller’s region, which also depends on whether the listing was created before or after the new compliance rules, that’s not configuration, that’s a state machine no one drew.
graph TD
  A[Can listing go live?] --> B{Seller type}
  B -->|Dealership| C{Seller region}
  B -->|Private seller| D{Identity verified?}
  C -->|EU| E{History check uploaded?}
  C -->|Non-EU| F[Approved]
  D -->|Yes| F
  D -->|No| G[Blocked]
  E -->|Yes| F
  E -->|No| H{Listed before compliance change?}
  H -->|Yes| F
  H -->|No| G
5 decision points, 2 outcomes, and already 6 possible paths. Now imagine this for every feature in the system, layered on top of each other, with no diagram. That’s what your team is navigating every time they touch the listing flow.
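If the rules have to exist, the least you can do is draw the machine: one explicit function instead of checks scattered across the flow. A sketch of the tree above, where the field names and the cut-over date are assumptions:

```javascript
// Hypothetical cut-over date for the compliance change.
const COMPLIANCE_CHANGE_DATE = new Date('2024-01-01');

// The decision tree above, written as one explicit, greppable function.
function canGoLive(listing, seller) {
  if (seller.type === 'private') {
    return seller.identityVerified;
  }
  // Dealership path from here on.
  if (seller.region !== 'EU') return true;
  if (listing.historyCheckUploaded) return true;
  // Grandfather clause: listings created before the compliance change.
  return listing.createdAt < COMPLIANCE_CHANGE_DATE;
}
```

Ten of these, interleaved with no function boundary between them, is the state machine no one drew.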
A few ways to fight this:
- Derive what you can, only store what you can’t. “Is this listing eligible for promotion?” shouldn’t be a boolean on the listing if eligibility just means more than 3 photos and a price below market average. Someone edits the listing, removes a photo, the flag still says eligible. The flag only exists because someone didn’t want to write the derivation, and now you have 2 sources of truth that can disagree.
- Capture decisions at the moment they happen. When a listing is created, you know the region, the seller type, and the compliance rules. Persist the answer on the listing: historyCheckRequired: true. Don’t re-derive it from the seller’s current settings every time someone loads the listing, because if the compliance rules change next month, existing listings shouldn’t retroactively change behaviour. The difference is between a flag that says “this was true when we checked” and a flag that says “go check again every time.” The first is a record, the second is a liability.
- Make illegal states unrepresentable. If a private seller can never offer financing, don’t have a financingEnabled field on your PrivateSeller type at all. If both seller types share a database table and the column exists as null for private sellers, that’s fine at the storage level, but make sure your application code for private sellers never reads it. When they do have to co-exist, check strictly: === true, === null, never truthy/falsy. The moment null and false mean the same thing, someone will write if (!seller.financingEnabled) and now “doesn’t apply” and “explicitly disabled” are the same branch.
- Review flags like you review dependencies. Not on a strict cadence, but deliberately. Some flags are legitimately long-lived (regulatory rollouts, gradual migrations, old paths kept to handle records created under the previous rules). That’s fine, as long as it’s a conscious decision and not inertia. Keep the flag check at the entry point of the branch, not scattered across the codebase. If removing a flag later means touching 15 files, it wasn’t placed well. If it’s a single check that delegates to one of two paths, removing it is a one-line change. Document the deprecation inline so the next developer knows which path is the dead one and doesn’t build on top of it.
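The “derive, don’t store” and strict-comparison points can be sketched together. Field names and thresholds here are illustrative:

```javascript
// Derived, not stored: eligibility is computed from current data every time,
// so it can never disagree with the listing it describes.
function isEligibleForPromotion(listing, marketAveragePrice) {
  return listing.photos.length > 3 && listing.price < marketAveragePrice;
}

// Strict comparison: null ("doesn't apply") and false ("explicitly disabled")
// stay distinct branches instead of collapsing under a truthy check.
function financingLabel(seller) {
  if (seller.financingEnabled === true) return 'Financing available';
  if (seller.financingEnabled === false) return 'Financing disabled';
  return 'Not applicable'; // null: this seller type never had the option
}
```

The derivation costs a few array reads per render. The stored boolean costs a second source of truth forever.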
Flags, branches, conditions, at least those are in the code. You can grep for them, trace them, eventually understand them. The next layer of complexity is worse: it’s the stuff that was never written down at all.
Then the knowledge lives in people, not code
Every codebase has assumptions that only work because everyone “just knows” how things behave. No flag, no comment, no assertion, just an understanding that lives in the team’s heads.
In a car marketplace, this shows up constantly. The frontend never lets a private seller create a financing offer, so the backend doesn’t validate for it. That works until someone duplicates a listing through the API, or a new client integration skips the frontend entirely, or a developer changes the frontend routing and doesn’t realise they removed the guard.
One frontend bug is all it takes, and that’s before you count other clients, admin tools, and data migrations.
Another pattern: the team knows that dealership listings always have a history check, so downstream code assumes it. No assertion, no comment, just an implicit guarantee that lives in people’s heads.
A new developer joins, sees no validation, and reasonably concludes that history checks are optional. They build a feature on that assumption, it passes review because the reviewer also doesn’t see a validation, and the bug surfaces 3 months later in production when a dealership listing without a history check breaks a report.
The cost of an explicit check is one line. The cost of an implicit assumption failing in production is a postmortem and a trust problem.
- If something is always true, assert it.
- If a path should never be reached, throw.
- If a frontend gate is the only thing preventing bad data, the backend should enforce it too.
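A sketch of the first two rules in code, with helper and field names as assumptions:

```javascript
// Hypothetical helper: fail loudly the moment an invariant breaks.
function assertInvariant(condition, message) {
  if (!condition) throw new Error(`Invariant violated: ${message}`);
}

// Always true? Assert it. Downstream code can rely on the history check
// existing, and a violation surfaces here, not in a report 3 months later.
function buildReportRow(listing) {
  assertInvariant(listing.historyCheck, 'dealership listing has no history check');
  return { id: listing.id, historyGrade: listing.historyCheck.grade };
}

// Never reached? Throw. An exhaustive switch turns an unexpected seller
// type into an immediate error instead of a silent fall-through.
function feePercentage(sellerType) {
  switch (sellerType) {
    case 'dealership': return 2.5;
    case 'private': return 5.0;
    case 'auction': return 3.5;
    default: throw new Error(`Unhandled seller type: ${sellerType}`);
  }
}
```

The third rule is the same assertions running server-side, so no client, admin tool, or migration can route around them.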
Wrong trade-offs, clever lines, wrong abstractions, multiplying flags, implicit knowledge. Each layer is manageable on its own. But they don’t stay on their own.
Then nobody can explain what the system does
Together, they compound into a codebase where nobody can confidently describe what the system does for a given input, including the person who wrote it 6 months ago. The costs don’t show up in dashboards. They show up in:
- Onboarding taking weeks because the codebase is hard to navigate
- Senior developers being the only ones who can safely modify the listing flow
- Pull requests surfacing misunderstandings about whether a check lives at the dealership level or the listing level
- Roadmaps slipping because “it was more complex than we thought”
High cognitive complexity means fewer people can work on certain areas, which means those areas get less review, which means they accumulate more complexity, which means even fewer people can work on them. It’s a cycle that’s expensive to break once it sets in. But it’s also a cycle you can interrupt.
Optimise for the reader, not the machine
Your startup’s codebase will be read far more often than it will be profiled. The next person to touch your code is more likely to be confused by its structure than constrained by its runtime performance.
This changes as systems mature. Algorithmic complexity starts to matter when query fanout appears, when background jobs scale with tenant count, when accidental quadratics (cost grows with the square of the input) show up in code that used to run against 50 rows and now runs against 50,000. But that’s a problem you earn by surviving long enough to have scale. Early on, the constraint is how quickly engineers can change the system without fear.
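A classic accidental quadratic in JavaScript is includes() inside a loop, fine at 50 rows, painful at 50,000. The function names here are illustrative:

```javascript
// Accidental quadratic: includes() rescans soldIds for every listing,
// so the cost grows with listings x soldIds.
function findSoldListingsSlow(listingIds, soldIds) {
  return listingIds.filter((id) => soldIds.includes(id));
}

// Same result in roughly linear time: build the Set once,
// then each lookup is constant time.
function findSoldListingsFast(listingIds, soldIds) {
  const sold = new Set(soldIds);
  return listingIds.filter((id) => sold.has(id));
}
```

Both return the same answer, which is exactly why the slow one survives review until the data grows.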
Every section in this post comes down to the same thing: when changing one line means checking 5 other files first, you have a cognitive complexity problem. Name things so the next person doesn’t have to guess. Keep functions focused on one decision. When you need to handle edge cases (and you will, especially around identity verification, regional compliance, seller types), make the branching explicit and capture intent at the point where you have it, not 3 layers away.
How fast you can onboard people and ship features is what makes or breaks your startup. Plenty of startups have stalled because their own team couldn’t understand what they’d built. Your competitive advantage isn’t your algorithm. It’s how quickly your team can change direction. Everything in this article is about protecting that.