Nine times out of ten, when an engineering team starts complaining about “slow database performance,” the database itself is perfectly fine. The real culprit is almost always hiding somewhere else — in the way queries are written, in how the application accesses data, or in decisions that made sense at the time but were never revisited as the system grew.
This distinction matters more than it might seem. If you assume the database is the problem, you start looking at hardware upgrades, migration to a “faster” database engine, or expensive infrastructure changes. All of that costs time and money — and probably none of it will fix the actual issue.
The symptom everyone misreads
A slow application feels like a slow database. A page that takes five seconds to load, a report that times out, an API endpoint that keeps failing under load — these all point fingers at the database layer. But the database is usually just doing exactly what it was asked to do. The problem is what it was asked to do.
We’ve seen queries that took literal minutes to complete, not because the server was underpowered, but because they were missing proper indexes, performing full table scans on millions of rows, or doing complex joins in the wrong order. In one real case, a reporting query was joining five tables with no indexes on any of the join columns — and then passing the entire result set back to the application to be filtered in code. The database was faithfully returning millions of rows that would eventually be narrowed down to a few hundred. It was working extremely hard to accomplish something that should have been trivial.
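To make the shape of that concrete, here is a simplified, hypothetical reconstruction of that reporting query — the table and column names are invented for illustration, and the real schema differed:

```sql
-- Hypothetical shape of the anti-pattern: five tables joined with no
-- indexes on any join column and no WHERE clause, so every join is a
-- full scan and millions of rows flow back to the application.
SELECT *
FROM orders o
JOIN customers c   ON c.id = o.customer_id
JOIN order_items i ON i.order_id = o.id
JOIN products p    ON p.id = i.product_id
JOIN regions r     ON r.id = c.region_id;
-- The few hundred rows actually needed were picked out in application
-- code afterwards, instead of with a WHERE clause here.
```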
What’s actually going wrong
There are a handful of patterns that cause the vast majority of database performance problems:
Missing indexes on join and filter columns. When a query runs a WHERE clause or a JOIN on a column with no index, the database has no choice but to scan the entire table to find matching rows. On a table with a few thousand rows, this is fast enough that nobody notices. On a table with a few million rows, it becomes a bottleneck that grows worse every week as data accumulates.
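A minimal sketch of the difference, using PostgreSQL syntax and an invented orders table (the plan output in the comments is abbreviated and illustrative):

```sql
-- Without an index, the planner has no option but to read every row:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- -> Seq Scan on orders  (Filter: customer_id = 42)

-- One statement fixes it:
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- -> Index Scan using idx_orders_customer_id on orders
```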
Filtering in application code instead of in the database. This happens when developers pull a large dataset and then loop through it in the application to find what they need. The database does more work than necessary, more data travels across the network, and the application spends time on logic the database could handle in a fraction of a second.
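The same logic, expressed both ways against a hypothetical orders table:

```sql
-- Anti-pattern: pull the whole table, then filter in application code.
SELECT * FROM orders;
-- ...the application loops over every row, keeping only 'pending' ones.

-- Better: ask the database for exactly what you need.
SELECT * FROM orders WHERE status = 'pending';
```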
N+1 query patterns. This is one of the most common and most damaging patterns, especially in applications that use ORMs. Instead of fetching related data in a single query, the application fires one query to get a list of records, then fires a separate query for each record to get its related data. Fetch a list of 200 orders? That’s 201 database queries. At scale, this silently destroys performance.
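In SQL terms, the 200-order example looks like this (tables invented for illustration):

```sql
-- N+1: one query for the list...
SELECT id FROM orders WHERE customer_id = 42;
-- ...then the ORM issues one query per order, in a loop:
SELECT * FROM order_items WHERE order_id = 1001;
SELECT * FROM order_items WHERE order_id = 1002;
-- (198 more of these)

-- One query replaces all 201: fetch the orders and their items together.
SELECT o.id, i.*
FROM orders o
JOIN order_items i ON i.order_id = o.id
WHERE o.customer_id = 42;
```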
Joins ordered by convenience rather than efficiency. The order in which tables are joined can dramatically affect how much work the database needs to do. Starting a join with a large, unfiltered table and narrowing down later means the database is carrying a heavy load through most of the operation. Reordering joins so that the most selective conditions are applied early can cut execution time significantly.
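A sketch of the idea, with invented tables. One caveat: modern cost-based optimizers often reorder joins on their own, so writing the order explicitly matters most when the planner misestimates row counts or the engine joins in written order:

```sql
-- Heavy shape: the join starts from the largest table, and the
-- selective condition is only applied at the end.
SELECT e.id, e.payload
FROM events e                         -- largest table by far
JOIN sessions s ON s.id = e.session_id
JOIN users u    ON u.id = s.user_id
WHERE u.email = 'alice@example.com';

-- Same result, driven by the most selective table first, so each
-- subsequent join carries only a handful of rows.
SELECT e.id, e.payload
FROM users u
JOIN sessions s ON s.user_id = u.id
JOIN events e   ON e.session_id = s.id
WHERE u.email = 'alice@example.com';
```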
What a proper fix looks like
When we analyzed the execution plans for the slow queries described above, the path forward became clear quickly. Execution plans show you exactly how the database is processing a query — which indexes it’s using (or not using), how many rows it’s scanning at each step, and where the time is actually being spent. Most teams never look at them.
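Getting a plan is usually one keyword away. In PostgreSQL, for example:

```sql
-- EXPLAIN shows the plan the database intends to use; EXPLAIN ANALYZE
-- actually runs the query and reports real row counts and timing for
-- every step. (Tables here are invented for illustration.)
EXPLAIN ANALYZE
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= now() - interval '7 days';
```

MySQL supports EXPLAIN as well, and SQL Server exposes the same information through its execution plan output.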
The changes we made were not dramatic. We added indexes strategically on the columns used in WHERE clauses and JOIN conditions. We rewrote the join order to match the actual selectivity of each table. We moved filtering logic from application code into proper WHERE clauses. And we replaced N+1 patterns with proper JOINs or batch fetches.
The results were dramatic. Queries that previously took two to three minutes now complete in under 100 milliseconds. The database hardware didn’t change. The database engine didn’t change. Only the way the queries were written changed.
What this means for your team
If your application is feeling sluggish and the database is getting the blame, the first step is not to panic and not to reach for the infrastructure budget. The first step is to look at what your application is actually asking the database to do.
Pull the slowest queries from your logs. Look at their execution plans. Check whether the columns you’re filtering and joining on have indexes. Look for N+1 patterns in your ORM queries. These are not exotic problems — they are extremely common, and they are fixable without rewriting your application or migrating to a new database.
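One way to start, assuming PostgreSQL with the pg_stat_statements extension enabled (the column names shown are for PostgreSQL 13 and later; other engines have equivalents, such as MySQL’s slow query log):

```sql
-- Top ten queries by total time spent in the database.
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```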
The database is rarely the bottleneck. It’s just being asked to do things inefficiently.
At bitGloss, we help engineering teams diagnose and fix exactly these kinds of problems — turning slow, expensive queries into fast, predictable ones without unnecessary infrastructure changes. If your application is struggling with database performance, get in touch.