I've written a SELECT statement in MySQL. The duration is 50 seconds and the fetch is 206 seconds, which is a long time. I'd like to understand WHICH part of my query is inefficient so I can improve its run time, but I'm not sure how to do that in MySQL.
My table has a little over 1,000,000 records, and I have an index on it as well:
KEY `idKey` (`id`,`name`),
Here is my query:
SELECT name, id, alt_id, COUNT(id), MIN(cost), AVG(resale), code
FROM history
WHERE name LIKE '%brian%'
GROUP BY id;
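In case it's relevant: my understanding is that on MySQL 8.0.18 or later, EXPLAIN ANALYZE reports actual per-step execution timings rather than just estimates, which is roughly what I'm after. This is what I've been running (I'm not certain I'm reading the output correctly):

```sql
-- Estimated plan (this is what produced the visual plan described below):
EXPLAIN
SELECT name, id, alt_id, COUNT(id), MIN(cost), AVG(resale), code
FROM history
WHERE name LIKE '%brian%'
GROUP BY id;

-- Actual per-step timings (MySQL 8.0.18+ only, if that matters):
EXPLAIN ANALYZE
SELECT name, id, alt_id, COUNT(id), MIN(cost), AVG(resale), code
FROM history
WHERE name LIKE '%brian%'
GROUP BY id;
```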
I've looked at the MySQL execution plan, but I can't glean from it what is wrong.
If I hover over the "Full Index Scan" part of the plan diagram, I see this:
Access Type: Index
Full Index Scan
Key/Index:
Used Key Parts: id, name
Possible Keys: idKey, id-Key, nameKey
Attached Condition:
(`allhistory`.`history`.`name` LIKE '%brian%')
Rows Examined Per Scan: 1098181
Rows Produced Per Join: 1098181
Filter: 100%
I know I can scan a smaller subset of the data by adding LIMIT 100 to the query, and while that makes the time much shorter (28-second duration, 0.000-second fetch), I also want to see all the records, so I don't really want to put a limit on it.
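For clarity, the LIMIT variant I timed was simply:

```sql
SELECT name, id, alt_id, COUNT(id), MIN(cost), AVG(resale), code
FROM history
WHERE name LIKE '%brian%'
GROUP BY id
LIMIT 100;
```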
Can someone more knowledgeable on this topic suggest where my query, my index, or my methodology might be inefficient for what I'm trying to accomplish?