I am puzzled with the declaration of the zgefa() function arguments in LINPACK:

      subroutine zgefa(a,lda,n,ipvt,info)
      integer lda,n,ipvt(1),info
      complex*16 a(lda,1)
c
c     ...
c
c        a       complex*16(lda, n)
c                the matrix to be factored.
c
c        lda     integer
c                the leading dimension of the array  a .
c
c        n       integer
c                the order of the matrix  a .
c
c     ...
c
c        ipvt    integer(n)
c                an integer vector of pivot indices.

Why did the developers decide to declare `a` as if it had only one column, whereas semantically it is a matrix (see the code comments)? Similarly, `ipvt` is declared as a one-element array, whereas it is actually an array of size `n`.

LINPACK is famous, and I do not question the developers' qualifications. I believe they had something in mind, most probably performance.

Background: I encountered this as a runtime error when I compiled the code with `gfortran -fcheck=bounds`. The same holds for at least zgeco() and zgedi().
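For comparison, here is a minimal sketch (my own, not LINPACK code) of the two conventional alternatives to the size-1 idiom: an assumed-size last dimension `*`, which is the standard way to say "unknown extent", and explicit shapes `a(lda,n)` / `ipvt(n)`, which are what `-fcheck=bounds` can actually verify:

```fortran
c     Hypothetical sketch, not the LINPACK source.
c     Variant 1: assumed-size dummy arguments. The `*` tells the
c     compiler the last extent is unknown, so no bounds check is
c     possible on that dimension, but none is falsely triggered.
      subroutine zgefa(a, lda, n, ipvt, info)
      integer lda, n, ipvt(*), info
      complex*16 a(lda,*)
c     ... factorization body unchanged ...
      end

c     Variant 2: explicit-shape dummy arguments. Since n is also a
c     dummy argument, the compiler knows the true extents and
c     -fcheck=bounds can verify every access.
      subroutine zgefa2(a, lda, n, ipvt, info)
      integer lda, n, ipvt(n), info
      complex*16 a(lda,n)
c     ... factorization body unchanged ...
      end
```

Either variant has the same calling convention as the original, so swapping the declarations does not change callers.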

Alexander Pozdneev
  • As perhaps explained in answers to [this question](https://stackoverflow.com/q/13532900) and [this question](https://stackoverflow.com/q/34613356/), using an array extent of `1` is an old-fashioned way of saying "of some unknown size". – francescalus Jul 12 '17 at 13:20
  • You could follow the hint in the comments and put the actual sizes in the argument declarations, so as to support bounds checks. The original authors may have used some compiler which gave special treatment to size 1. – tim18 Jul 12 '17 at 13:33

0 Answers