I am quite new to Redis and initially used KEYS to iterate through my dataset, but from what I can read in the docs (Redis worst practices), it is actually not recommended, especially on bigger datasets with many keys: KEYS walks the whole keyspace in a single call and can block the server for a long time, while SCAN iterates over the dataset in chunks and therefore blocks for much shorter periods. If I have understood that correctly, I am wondering whether there is any way to optimize the SCAN iteration so that, instead of iterating over (let's say) 10,000 keys at random, it would start from a given point.
Example:
a1
a2
a3
b1 <--- start iterating from here instead of from a1
b2
b3
and that way save us a lot of work?
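For reference, this is roughly how I am iterating today. It is a minimal sketch using the redis-py client (the connection details and the b* pattern are just placeholders); as far as I understand, MATCH only filters what each chunk returns rather than changing where the scan starts:

```python
import redis

# Placeholder connection details; decode_responses returns str instead of bytes.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# SCAN walks the keyspace in chunks instead of one blocking call like KEYS.
# MATCH filters the keys returned in each chunk; COUNT is only a hint for
# how much work the server does per call.
cursor = 0
while True:
    cursor, keys = r.scan(cursor=cursor, match="b*", count=100)
    for key in keys:
        print(key)
    if cursor == 0:  # a returned cursor of 0 means the iteration is complete
        break
```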