If you don't know anything about the order of the data, there is no more efficient algorithm: a linear scan is the theoretical optimum. You can, using some tricks, make it a few percent faster. For instance:
- cache the length of the array;
- perform an immediate `return` instead of a `break` (`break`s are, when it comes to performance, considered not to be that efficient);
- use `o.equals(arr[i])` instead of `arr[i].equals(o)`. Smart compilers can cache the vtable of `o`.
But it is likely a compiler can already derive some of the optimizations itself.
A slightly more efficient implementation is thus probably:
    public static int findIndex(Object[] arr, Object o) {
        int n = arr.length;
        for (int i = 0; i < n; i++) {
            if (o.equals(arr[i])) {
                return i;
            }
        }
        return -1;
    }
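As a quick sanity check, the method above can be exercised like this (the sample array and class name are just for illustration):

```java
public class FindIndexDemo {

    public static int findIndex(Object[] arr, Object o) {
        int n = arr.length;
        for (int i = 0; i < n; i++) {
            if (o.equals(arr[i])) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        String[] words = {"alpha", "beta", "gamma"};
        System.out.println(findIndex(words, "beta"));  // 1
        System.out.println(findIndex(words, "delta")); // -1 (not found)
    }
}
```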
The difference will not be that huge, though. Testing both implementations gives:
//question first
findIndex (question): 4s 209 469 827
findIndex (answer): 4s 078 955 465
//answer first
findIndex (question): 4s 256 345 171
findIndex (answer): 4s 488 112 895
(There is a difference depending on which method you call first because of caching.) The point is that the difference is not that huge and can be affected by OS calls, other server tasks, etc. So one can safely say we are talking about milliseconds of real performance difference.
If, however, the data is ordered (by some order relation), you can perform binary search, which runs in O(log n).
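On a sorted array you don't even have to write the search yourself: `java.util.Arrays.binarySearch` does it for you. A small sketch (the sample data is arbitrary):

```java
import java.util.Arrays;

public class BinarySearchDemo {
    public static void main(String[] args) {
        // binarySearch requires the array to be sorted ascending
        int[] sorted = {2, 5, 8, 13, 21};

        // returns the index of the key when it is present
        System.out.println(Arrays.binarySearch(sorted, 13)); // 3

        // when absent, returns (-(insertion point) - 1);
        // 7 would be inserted at index 2, so the result is -3
        System.out.println(Arrays.binarySearch(sorted, 7));  // -3
    }
}
```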
In case you want to do multiple lookups, you can first insert the elements into a `HashSet<T>`. A `HashSet<T>` has an average lookup cost of O(1) (given a good hash function).
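A minimal sketch of that approach (sample data invented for illustration):

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        set.add("alpha");
        set.add("beta");
        set.add("gamma");

        // contains runs in O(1) on average
        System.out.println(set.contains("beta"));  // true
        System.out.println(set.contains("delta")); // false
    }
}
```

Note that a `HashSet<T>` answers membership only; if you also need the *index* of the element, a `HashMap<T, Integer>` mapping each element to its position serves the same O(1) average lookup.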