I wrote a piece of code where a list grows with each iteration, and the loop can run almost 100,000 times.
Sample:
Lst = []

def do_something(k):
    Lst.append(k)

while N < 100000:            # N and k come from the rest of the program
    if k not in Lst:         # membership test scans the list: O(n)
        do_something(k)
Now, I noticed that this approach took ages to finish. Please note, I did call sys.setrecursionlimit(). In fact, embarrassingly enough, the program kept running for hours.
Later, while looking for ways to optimise the code, I converted Lst to a dict, Dct. The code then looked something like:
Dct = {}

def do_something(k):
    Dct[k] = True

while N < 100000:               # same loop as before
    if not Dct.get(k, False):   # dict lookup: O(1) on average
        do_something(k)
The code ran drastically faster. After reading a few discussions on Stack Overflow (Repeatedly appending to a large list (Python 2.6.6)), I realised it's not the list itself that is slow, but rather how memory is handled for larger lists. This page, https://wiki.python.org/moin/TimeComplexity, sheds light on list and dict lookup time complexity: for a list a lookup is O(n), whereas a dict lookup is O(1) on average. Is that why Dct performs better? How exactly are list lookups and dict lookups performed?
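For reference, here is a rough micro-benchmark I put together to illustrate the gap (my own sketch, not the original program; the sizes and looked-up key are made up). It just times a membership test against a 100,000-element list versus the same test against a dict of the same size:

import timeit

# Build a 100,000-element list and an equivalent dict once, in the setup.
setup = """
Lst = list(range(100000))
Dct = dict.fromkeys(range(100000), True)
"""

# Looking up a key near the end is close to the worst case for the list,
# since the scan has to walk almost all 100,000 entries.
list_time = timeit.timeit('99999 in Lst', setup=setup, number=1000)
dict_time = timeit.timeit('99999 in Dct', setup=setup, number=1000)

print('list membership: %.4f s' % list_time)
print('dict membership: %.4f s' % dict_time)

On my machine the dict version is orders of magnitude faster, which matches what I saw in the real program.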