I am puzzled by the declaration of the zgefa() subroutine arguments in LINPACK:
      subroutine zgefa(a,lda,n,ipvt,info)
      integer lda,n,ipvt(1),info
      complex*16 a(lda,1)
c
c     ...
c
c        a       complex*16(lda, n)
c                the matrix to be factored.
c
c        lda     integer
c                the leading dimension of the array a .
c
c        n       integer
c                the order of the matrix a .
c
c     ...
c
c        ipvt    integer(n)
c                an integer vector of pivot indices.
Why did the developers decide to declare a as a single-column array, when semantically it is a matrix (see the code comments)? Similarly, ipvt is declared as a one-element array, even though it actually has size n.
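For comparison, this is the spelling I would have expected (my own sketch with a made-up name, not LINPACK's text): the trailing dimension of a left open with *, and ipvt dimensioned with n:

      subroutine zgefa_expected(a, lda, n, ipvt, info)
         integer lda, n, ipvt(n), info   ! ipvt dimensioned with its actual size n
         complex*16 a(lda, *)            ! assumed-size: '*' instead of the literal 1
      end subroutine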
LINPACK is famous, and I do not question the developers' qualifications. I believe they had something in mind, most probably performance.
Background: I ran into this as a runtime error when I compiled the code with gfortran -fcheck=bounds. The same declaration style appears in at least zgeco() and zgedi().
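Here is a stripped-down reproducer of what I mean (my own toy routine imitating the zgefa declaration style, not the actual LINPACK code). Built as gfortran -fcheck=bounds repro.f90, the call aborts at runtime with an out-of-bounds report on dimension 2 of a:

      ! repro.f90 -- a made-up routine that copies the zgefa declaration style
      subroutine fake_gefa(a, lda, n)
         integer lda, n
         complex*16 a(lda, 1)        ! declared like in zgefa: last extent is literally 1
         a(1, n) = (0d0, 0d0)        ! fine storage-wise, but outside the declared bounds
      end subroutine

      program main
         integer, parameter :: n = 4
         complex*16 a(n, n)
         a = (1d0, 0d0)
         call fake_gefa(a, n, n)     ! -fcheck=bounds aborts inside fake_gefa
      end program

If I change the dummy declaration to a(lda,*), the check passes, since gfortran cannot verify the open last dimension of an assumed-size array.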