MultiHub Forum

Full Version: What systematic SQL tuning approach for OLTP queries on tens of millions of rows?
I'm a database administrator for an e-commerce platform, and our order-processing reports have started timing out as the transaction table has grown into the tens of millions of rows. The problematic query involves several joins and aggregates over date ranges. Adding indexes helped initially, but they're now bloating our storage and slowing down writes, and while I've looked at the execution plans, I'm struggling to interpret them well enough to identify the true bottleneck.

For DBAs who have optimized similar high-volume OLTP queries, what's your systematic process for SQL performance tuning beyond just throwing indexes at the problem? I'm particularly interested in how you decide between query rewriting, strategic denormalization, partitioning, or architectural changes like moving aggregates to a separate reporting database, and how you measure the trade-offs of each approach.
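For concreteness, the query shape being described (a joined, date-range aggregate) might look roughly like the sketch below. The schema, table names, and index names are all hypothetical, and it uses SQLite purely as a self-contained way to show the basic measurement loop behind any systematic tuning process: capture the plan, change one thing, capture the plan again.

```python
import sqlite3

# Hypothetical schema approximating the scenario: orders joined to line
# items, aggregated over a date range. All table, column, and index
# names are illustrative, not from the original post.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (
    order_id   INTEGER PRIMARY KEY,
    order_date TEXT NOT NULL,
    status     TEXT NOT NULL
);
CREATE TABLE order_items (
    item_id  INTEGER PRIMARY KEY,
    order_id INTEGER NOT NULL REFERENCES orders(order_id),
    amount   REAL NOT NULL
);
CREATE INDEX idx_items_order ON order_items(order_id);
""")

query = """
SELECT o.status, COUNT(DISTINCT o.order_id) AS orders, SUM(i.amount) AS revenue
FROM orders AS o
JOIN order_items AS i ON i.order_id = o.order_id
WHERE o.order_date BETWEEN '2024-01-01' AND '2024-03-31'
GROUP BY o.status
"""

def plan(conn, sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); the
    # detail column describes each scan/search step the planner chose.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

before = plan(conn, query)   # no index supports the date predicate yet
conn.execute("CREATE INDEX idx_orders_date ON orders(order_date, status)")
after = plan(conn, query)    # planner can now range-scan the new index

print(before)
print(after)
```

The same before/after discipline applies with production engines, just with richer tooling (e.g. PostgreSQL's `EXPLAIN (ANALYZE, BUFFERS)` or SQL Server's actual execution plans): record the plan and its cost/row estimates, apply exactly one change, and diff the result rather than adding several indexes at once.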