std::unordered_map could probably help you, since:
- keys are hashed
- internally, the items are stored in buckets, like in a hash table
- access time is constant on average (worst case is linear when many keys collide into the same bucket)
Additional remarks:
This snippet lets you inspect the layout of the map through the bucket interface:
std::cout << "Bucket count: " << m.bucket_count() << std::endl;
for (std::size_t i = 0; i < m.bucket_count(); i++) {
    std::cout << " bucket " << i << " : size " << m.bucket_size(i) << std::endl;
}
std::cout << "Average bucket load: " << m.load_factor() << std::endl;
If you didn't provide enough buckets from the start, and the dynamic growth of the map leads to a high load factor with too many collisions, you can rehash the map:
m.rehash(2 * m.bucket_count());
Online demo