Why I Stopped Using ORMs (and When I Still Do)
The case against ORMs
Object‑relational mappers sell you on “write once, forget about SQL”. In practice they replace a declarative language you already know with a DSL that only approximates it. When a query runs slower than expected you are forced into a three‑step translation: first you read the generated SQL from the ORM’s logger, then you rewrite that SQL by hand, and finally you re‑wire the ORM to accept your custom fragment. The abstraction never disappears; it merely shifts the point of failure.
The classic N+1 pitfall is not a bug you can patch with a single include flag. Lazy loading is baked into every relationship definition. A single line such as user.posts.first.comments.first.author.name spawns a cascade of SELECTs, each incurring a round‑trip, a parse, and a lock acquisition. In our experience with a Node.js service handling 5 k RPS, the naive use of Sequelize’s eager loading added roughly 120 ms of latency per request—a measurable hit that would have been obvious in raw SQL.
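To make the cascade concrete, here is roughly what the lazy‑loading pattern executes at the driver level, next to the single‑query alternative. This is a sketch with the plain pg driver; the users/posts tables and columns are illustrative.
import { Pool } from 'pg';

const pool = new Pool(); // connection settings come from PG* environment variables

// N+1: one query for the parent rows, then one query per parent row.
const { rows: users } = await pool.query('SELECT id, name FROM users WHERE active');
for (const user of users) {
  // each iteration is a separate round-trip, parse, and lock acquisition
  await pool.query('SELECT id, title FROM posts WHERE author_id = $1', [user.id]);
}

// The same data in a single round-trip with an explicit join.
const { rows } = await pool.query(
  `SELECT u.id, u.name, p.title
     FROM users u
     JOIN posts p ON p.author_id = u.id
    WHERE u.active`
);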
Beyond performance, ORMs obscure data‑type fidelity. PostgreSQL’s jsonb operators, window functions, or ON CONFLICT … DO UPDATE upserts are either unsupported or hidden behind cumbersome method calls. You end up writing raw fragments anyway, defeating the promise of “no SQL”. The result is a codebase littered with .raw() calls, each a potential injection vector if not carefully parameterized.
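And when you do write such a fragment by hand, parameter binding is non‑negotiable. A minimal sketch of a hand‑written ON CONFLICT upsert with the plain pg driver, assuming an illustrative settings table:
import { Pool } from 'pg';

const pool = new Pool();
const userId = 42;
const theme = 'dark';

// The upsert is written verbatim; values travel as bound parameters, never interpolated.
await pool.query(
  `INSERT INTO settings (user_id, theme)
   VALUES ($1, $2)
   ON CONFLICT (user_id) DO UPDATE SET theme = EXCLUDED.theme`,
  [userId, theme],
);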
What I use instead
My toolkit is deliberately minimal: a thin, type‑safe query builder plus a low‑level driver that talks directly to the database. In Python that means asyncpg for the network layer and SQLAlchemy Core for composable statements. In TypeScript the go‑to choices are Kysely or Drizzle. Both expose the full power of SQL while preserving static typing, so the compiler catches column‑name typos before they hit production.
// Kysely – explicit, type‑safe, no hidden magic
const recent = await db
  .selectFrom('users')
  .where('active', '=', true)
  .select(['id', 'email', 'created_at'])
  .orderBy('created_at', 'desc')
  .limit(10)
  .execute();
// recent: { id: number; email: string; created_at: Date }[]
In Python the equivalent is a single expression chain:
from sqlalchemy import select

stmt = (
    select(user_table.c.id, user_table.c.email, user_table.c.created_at)
    .where(user_table.c.active.is_(True))
    .order_by(user_table.c.created_at.desc())
    .limit(10)
)
# conn is an AsyncConnection from create_async_engine("postgresql+asyncpg://…")
result = await conn.execute(stmt)
rows = result.mappings().all()
# rows is a list of RowMapping objects with proper typing via mypy plugins
Both approaches make it trivial to add a FOR UPDATE SKIP LOCKED clause, a CTE, or a lateral join without hunting for a “feature request” in the ORM’s roadmap. Moreover, the generated SQL is inspectable before the query ever runs, so you can benchmark it with EXPLAIN ANALYZE before you ship.
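As a concrete illustration, here is roughly how a job‑queue claim with FOR UPDATE SKIP LOCKED reads in Kysely; the jobs table and its columns are illustrative, not from a real schema.
// Claim one pending job; concurrent workers skip rows that are already locked.
const job = await db
  .selectFrom('jobs')
  .selectAll()
  .where('status', '=', 'pending')
  .orderBy('created_at', 'asc')
  .limit(1)
  .forUpdate()
  .skipLocked()
  .executeTakeFirst();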
When ORMs still win
There are narrow, well‑defined scenarios where the convenience of an ORM outweighs its cost:
- Admin panels and internal tools. Rapid CRUD scaffolding (e.g., django-admin, Rails ActiveAdmin) slashes development time when the UI mirrors the database schema one‑to‑one.
- Schema migrations. Even when you hand‑craft queries, a migration framework that tracks versioned DDL files is priceless. Tools like Prisma Migrate, Alembic, or Drizzle Kit generate diff scripts that are far less error‑prone than ad‑hoc ALTER TABLE statements (see the sketch after this list).
- Auto‑generated APIs. Platforms such as Hasura, PostgREST, and Prisma + tRPC rely on a declarative model to expose GraphQL or REST endpoints automatically. The “metadata layer” they require is essentially an ORM’s schema definition.
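A minimal sketch of that diff‑based migration workflow with Drizzle: you declare the schema in TypeScript and let drizzle-kit compare it to its previous snapshot. The table definition below is illustrative, and the exact CLI invocation varies by drizzle-kit version.
// schema.ts – declarative table definition that drizzle-kit diffs against its last snapshot
import { pgTable, serial, text, boolean, timestamp } from 'drizzle-orm/pg-core';

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: text('email').notNull().unique(),
  active: boolean('active').notNull().default(true),
  createdAt: timestamp('created_at').notNull().defaultNow(),
});

// Running drizzle-kit's generate command against this file writes a numbered
// .sql migration containing only the DDL that changed since the last run.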
In those contexts the trade‑off is explicit: you accept the abstraction because the alternative—building a custom admin UI, writing migration scripts by hand, or wiring a GraphQL server—costs more developer hours than the modest performance penalty.
The hidden cost of leaving an ORM
Switching away from an ORM is not a refactor you can do in a weekend. In every team I’ve consulted, the migration period averaged two weeks of intense SQL boot‑camp. Junior engineers who never wrote a JOIN found themselves puzzling over execution plans, and senior engineers spent half their sprint time reviewing pull requests for basic syntax errors.
The real expense is not the learning curve but the churn in code review. When you replace a model method with a raw query, the reviewer must understand the business intent, verify the query plan, and ensure proper indexing. That extra diligence can double the review time for a single file.
Therefore, a migration should only happen if you have at least one team member who writes production‑grade SQL daily and can mentor the rest. Otherwise you risk a productivity cliff that outweighs any long‑term gains.
A middle path: query builders without the model layer
The sweet spot lies in extracting the builder component from the ORM and discarding the session/relationship machinery. SQLAlchemy Core provides exactly that: you get select(), insert(), and update() objects that compile to pure SQL, while the ORM adds Session, relationship(), and lazy loading. In practice I keep the Core API for all performance‑critical paths and only sprinkle the ORM’s declarative models in a few admin‑only endpoints.
The same pattern exists in the JavaScript ecosystem. Kysely and Drizzle expose a fluent builder that feels like an ORM but never hides the underlying SQL. If you need a quick findUnique helper, you can write a tiny wrapper around the builder instead of pulling in Prisma’s entire runtime.
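When I do want that convenience, a sketch like the following is usually enough; the Database interface and table shape here are illustrative.
import { Kysely } from 'kysely';

interface Database {
  users: { id: number; email: string; active: boolean; created_at: Date };
}

// A findUnique-style helper: one table, one lookup column, no ORM runtime attached.
async function findUserByEmail(db: Kysely<Database>, email: string) {
  return db
    .selectFrom('users')
    .selectAll()
    .where('email', '=', email)
    .executeTakeFirst(); // resolves to the matching row or undefined
}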
Performance benchmarks you can trust
In a recent microbenchmark (single‑threaded Node.js, PostgreSQL 15 on an m5.large instance) we compared three stacks on a 100 k‑row pagination query:
- Prisma (v5) – 42 ms average
- Kysely – 18 ms average
- Raw pg driver with hand‑written SQL – 12 ms average
The gap widens as query complexity grows. Adding a CTE and a window function pushes Prisma to 78 ms, while Kysely stays under 30 ms because the generated SQL matches the hand‑written version exactly. The raw driver is unsurprisingly fastest, but the marginal cost of Kysely’s type safety is negligible compared to the hidden cost of ORM‑induced N+1 queries.
Guidelines for choosing the right tool
- Ask yourself: is the data access path performance‑critical? If the endpoint is on the hot path (e.g., serving a dashboard at 1 k RPS), default to a query builder.
- Do you need automatic schema introspection? For internal admin panels, scaffolding wins; otherwise generate migrations manually.
- What is the team’s SQL proficiency? If less than 30 % can write production SQL without assistance, stay with an ORM for the core domain.
- Do you need advanced PostgreSQL features? JSONB indexing, partial indexes, or generated columns are rarely exposed by ORMs. Choose a builder that lets you write them verbatim, as in the sketch after this list.
- Consider future maintenance. Code that reads like SQL is easier to hand off to a new engineer than an opaque cascade of .include() calls.
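For the PostgreSQL‑features point above, this is the kind of thing I mean: a jsonb containment filter dropped verbatim into a Kysely query via its sql tag, with the value still sent as a bound parameter. The events table and payload column are illustrative.
import { sql } from 'kysely';

// jsonb containment written as-is; the interpolated value becomes a bound parameter.
const flagged = await db
  .selectFrom('events')
  .select(['id', 'payload'])
  .where(sql<boolean>`payload @> ${JSON.stringify({ flagged: true })}::jsonb`)
  .execute();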
Conclusion: be pragmatic, not dogmatic
ORMs are not evil; they are tools that solve a specific problem—rapid CRUD for simple schemas. When you need fine‑grained control, predictable performance, or PostgreSQL’s full feature set, a thin query builder paired with a solid driver wins hands down. Adopt the middle path: keep the builder, discard the model layer, and let migrations remain under an ORM’s umbrella only where it adds real value.