What's a systematic SQL tuning approach for OLTP queries over tens of millions of rows?
I'm a database administrator for an e-commerce platform, and our order processing reports have started timing out as the transaction table has grown into the tens of millions of rows. The problematic query involves several joins and aggregates over date ranges, and while adding indexes helped initially, they're now bloating our storage and slowing down writes.

I've looked at the execution plans, but I'm struggling to interpret them fully enough to identify the true bottleneck.

For DBAs who have optimized similar high-volume OLTP queries, what's your systematic process for SQL performance tuning beyond just throwing indexes at the problem? I'm particularly interested in how you decide between query rewriting, strategic denormalization, implementing partitioning, or even architectural changes like moving aggregates to a separate reporting database, and in how you measure the trade-offs for each approach.
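To make this concrete, here's a simplified, anonymized sketch of the query shape and the kind of indexes I've been adding. The table and column names (orders, order_items, customers, products) are illustrative placeholders rather than our real schema, and the SQL is generic standard syntax, not vendor-specific:

[code]
-- Anonymized sketch of the problem query: several joins plus
-- aggregates over a date range on the large orders/order_items tables.
SELECT c.region,
       p.category,
       COUNT(DISTINCT o.order_id)       AS order_count,
       SUM(oi.quantity * oi.unit_price) AS revenue
FROM   orders o
JOIN   order_items oi ON oi.order_id   = o.order_id
JOIN   customers   c  ON c.customer_id = o.customer_id
JOIN   products    p  ON p.product_id  = oi.product_id
WHERE  o.order_date >= DATE '2024-01-01'
AND    o.order_date <  DATE '2024-04-01'
AND    o.status = 'COMPLETED'
GROUP  BY c.region, p.category;

-- Representative of the composite indexes added so far; each one
-- helped reads initially but adds write overhead to every
-- INSERT/UPDATE on the hot transaction tables.
CREATE INDEX idx_orders_date_status ON orders (order_date, status);
CREATE INDEX idx_order_items_order  ON order_items (order_id);
[/code]

One thing I have kept deliberate: the date filter is a half-open range on the raw column rather than a function wrapped around it, so the predicate stays index-sargable. Even so, the query keeps getting slower as the tables grow.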