In my 14 years in development, I learned that outside of poor schema design, nothing drains the performance of an application more than poorly performing SQL code. Even code that ran well on day one of production would sometimes come back to bite at the worst possible times. Even worse, as a DBA I was consistently asked to bail out a development team that was either tuning its code before the rush to production or trying to pinpoint code that had fallen victim to a dropped or changed index. Never fun.
As a Product Manager with MySQL I have learned from meeting with friends/customers that this experience hasn't really changed much since I left the field. I hear things like:
- MySQL is not well instrumented for tracking code level performance metrics
- Logs are OK, but not centralized and too low-level for easy navigation
- We need help identifying "good code gone bad" and "bad code gone worse"
To these friends I say: we have listened (and will continue to), and help is on the way! My team has been busy working on the new MySQL Query Analyzer that will ship as part of the Enterprise Monitor a little later this year. Here are a few bullets on what it does and what you can do with it.
Cool things it does:
- Proactive collection and identification of problematic SQL code.
- Historical browsing/analysis of queries across all servers.
- Aggregated roll-ups of all queries in canonical form (no variables; see the sketch after this list).
- Fully qualified views of worst performing statements.
- Drill-downs into query details, number of executions, and execution stats.
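To make the "canonical form" idea concrete, here is a rough sketch of the kind of roll-up involved. The orders table and columns are made up for illustration, and the exact placeholder notation the tool uses may differ:

```sql
-- Hypothetical table, only to make the example concrete.
CREATE TABLE orders (
  id          INT PRIMARY KEY,
  customer_id INT,
  status      VARCHAR(16),
  total       DECIMAL(10,2)
);

-- Two statements exactly as an application might issue them:
SELECT total FROM orders WHERE customer_id = 42 AND status = 'shipped';
SELECT total FROM orders WHERE customer_id = 97 AND status = 'pending';

-- With the literals stripped out, both roll up under one canonical form,
-- so their execution counts and timings aggregate into a single row:
--   SELECT total FROM orders WHERE customer_id = ? AND status = ?
```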
These things are baked into the Monitor to help:
- Tune your SQL code before it is promoted to production
- Quickly identify queries that are sapping performance
-- by query type, content, server or application user
- Once problematic code is surfaced you can:
-- view an EXPLAIN plan to see where you need indexes
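For example (continuing the made-up orders table from the sketch above), a quick EXPLAIN on a flagged statement shows whether MySQL can use an index at all:

```sql
-- Ask MySQL how it plans to execute the suspect statement.
EXPLAIN SELECT total
FROM    orders
WHERE   customer_id = 42 AND status = 'shipped';

-- If the plan shows type = ALL (a full table scan) with a large row estimate,
-- an index on the filtered columns is usually the first thing to try:
ALTER TABLE orders ADD INDEX idx_customer_status (customer_id, status);
```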
Here's a screenshot of the Monitor with Quan (the Query Analyzer) enabled:
We are currently working with a few "friendlies" on the first beta, but plan to open things up for a general beta soon. Stay tuned (no pun intended...); more details to follow on how you can help with this.
3 comments:
Two out of the three bits of user feedback you mentioned (tooling inside the MySQL server, and logging methodology)... the obvious solution for those is to fix them directly in the server, not to tweak around on the outside with enterprise tools.
Right. Enterprise monitor and proxy have been bolt-on solutions to fix what can't or won't be fixed inside the server.
But then again, so is Maatkit.
It looks nice and can make Enterprise customers happy. But it provides 1/3 of what I want.
This can tell me the top-N queries that may cause problems. I also want to know the top-N accounts and tables that might be too busy.
The Google patch provides SHOW USER_STATISTICS (per account resource usage: #commands, #rows fetched/read/changed, #connects, #seconds busy, #failed connects, #bytes sent/received, ...), SHOW TABLE_STATISTICS (#rows read/changed) and SHOW INDEX_STATISTICS (#rows read).
One day (I hope) this will appear in an official MySQL release and then Maatkit can do its magic with the output from SHOW USER_STATS and SHOW TABLE_STATS. Until then, I think this functionality is in the Percona MySQL branch.
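For reference, on a server built with those patches (they are not in stock MySQL) the statements look like this:

```sql
-- Not available in stock MySQL; assumes a server carrying the Google
-- statistics patches (or a branch such as Percona's that includes them).
SHOW USER_STATISTICS;    -- per-account counters: commands, rows, connects, busy time, ...
SHOW TABLE_STATISTICS;   -- rows read/changed per table
SHOW INDEX_STATISTICS;   -- rows read per index
```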
Statement-level accounting comes at a cost. The only way to do this today is to run the slow query log with long_query_time=0.
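A minimal sketch of that setup, assuming MySQL 5.1 (where these variables are dynamic) or a build patched for sub-second slow-log resolution:

```sql
-- Log every statement, not just the slow ones (expect noticeable overhead).
SET GLOBAL slow_query_log  = 1;
SET GLOBAL long_query_time = 0;
```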