With an unfortunately outdated numpy version 1.8.2, I get the following behavior:
I have a dictionary with eight sparse CSR matrices as values.
>>> tmp = [ (D[key][select,:].T.sum(0))[:,:,None] for key in D ];
Up to this point, there is no problem. The list contains dense numpy matrices of shape (1, len(select), 1), where len(select) is less than 300. Memory consumption is only around 3%, with almost 7 GB of free RAM available.
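For completeness, here is roughly how the setup could be reproduced; the matrix shapes, the density, and the integer keys are made-up placeholders rather than my actual data:

>>> import numpy as np
>>> import scipy.sparse as sp
>>> rng = np.random.RandomState(0)
>>> D = {k: sp.rand(1000, 500, density=0.01, format='csr', random_state=rng) for k in range(8)}  # eight sparse CSR matrices as values
>>> select = rng.choice(1000, size=250, replace=False)  # fewer than 300 row indices
>>> tmp = [ (D[key][select,:].T.sum(0))[:,:,None] for key in D ]  # same construction as above, entries of shape (1, 250, 1)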
>>> result = np.concatenate(tmp,axis=2);
Within the blink of an eye, I get a Segmentation Fault ('Speicherzugriffsfehler') in the terminal; htop shows no sign of the memory filling up or anything like that. Also, I would expect the consumption to reach at most roughly twice what it was before, which was practically nothing. Nevertheless, I can repeat this as often as I want; it always gives me a SegFault.
I would like to rule out that this is a problem with my implementation.
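In case it helps anyone hitting the same crash: since .sum(0) on a scipy sparse matrix returns a np.matrix, the entries of tmp are presumably np.matrix objects squeezed into an unusual three-axis shape. A workaround sketch (not verified on 1.8.2) would be to cast them to plain ndarrays before concatenating, or to preallocate the result and avoid np.concatenate entirely:

>>> tmp_arrays = [np.asarray(t) for t in tmp]  # plain 3d ndarrays instead of np.matrix
>>> result = np.concatenate(tmp_arrays, axis=2)
>>> # or, without np.concatenate: preallocate and fill slice by slice
>>> result = np.empty((1, tmp_arrays[0].shape[1], len(tmp_arrays)))
>>> for i, t in enumerate(tmp_arrays):
...     result[:, :, i] = t[:, :, 0]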
UPDATE: It seems that after updating numpy slightly, to version 1.10, the problem no longer occurs. Maybe it is some severe bug in 1.8.2 that no one cares about anymore, since that version is completely outdated...