A direct method for solving a system of equations Ax=b first factors the matrix A into the product of lower and upper triangular matrices and then computes the solution vector x by forward and back substitution. When A is large and sparse, reordering A prior to computing its factors may be necessary to keep memory requirements reasonable, since the factors generally contain nonzeros, called fill, in positions where A has zeros. If a matrix can be reordered so that the space required for its factors is no greater than that needed for the original matrix, we say the matrix is perfect elimination.
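As a concrete illustration of the direct method just described (not the software used in this work), here is a minimal dense LU factorization with forward and back substitution in Python; the function names `lu_factor` and `lu_solve` are my own, and the sketch assumes no pivoting is needed (no zero pivots arise).

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting: A = L U.
    L is unit lower triangular, U is upper triangular.
    Assumes A is square and that no zero pivot is encountered."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Row i of U.
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        # Column i of L, below the diagonal.
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve A x = b given A = L U: forward-substitute L y = b,
    then back-substitute U x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

In the sparse setting the same arithmetic applies, but the positions where L and U acquire nonzeros absent from A are exactly the fill that reordering tries to avoid.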
In this work we seek a better understanding of what characterizes perfect elimination matrices, both in the case where the rows and columns must be ordered symmetrically and in the case where that constraint is removed. We are interested in both the theoretical and the practical aspects of this question.
Many matrices arising in actual applications are nonsymmetric. Hence, when ordering them to reduce fill, there is often no a priori reason why the rows and columns should be permuted in the same way. In , Melissa and I ask how close matrices from actual applications come to being perfect elimination when it is permissible to permute rows and columns separately. We implemented an algorithm described in  for determining whether a matrix has a perfect elimination ordering, and then ran our code on a test suite of almost 200 nonsymmetric matrices from actual applications. Our tests found several matrices that are perfect elimination, and several others that are nearly so.
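To convey the flavor of such a test, here is a simplified greedy sketch over a matrix's nonzero pattern; it is not the implementation referred to above. It pivots repeatedly on "bisimplicial" entries, nonzeros whose elimination causes no fill, and the sketch assumes that greedily eliminating any bisimplicial entry is safe, a property established for perfect elimination bipartite graphs. All function names here are mine.

```python
def is_bisimplicial(S, i, j):
    """Entry (i, j) of the nonzero pattern S (a set of (row, col) pairs)
    is bisimplicial if every row k with (k, j) in S and every column l
    with (i, l) in S satisfy (k, l) in S, so pivoting on (i, j)
    introduces no new nonzeros."""
    rows = {k for (k, l) in S if l == j}
    cols = {l for (k, l) in S if k == i}
    return all((k, l) in S for k in rows for l in cols)

def perfect_elimination_ordering(pattern):
    """Greedily pivot on bisimplicial entries, deleting the pivot row and
    column each time.  Returns the pivot sequence if the whole pattern can
    be eliminated without fill, and None otherwise."""
    S = set(pattern)
    order = []
    while S:
        pivot = next((e for e in sorted(S) if is_bisimplicial(S, *e)), None)
        if pivot is None:
            return None  # no fill-free pivot remains
        i, j = pivot
        S = {(k, l) for (k, l) in S if k != i and l != j}
        order.append(pivot)
    return order
```

For example, the pattern of a dense 2x2 matrix is eliminated immediately, whereas the pattern {(0,0), (0,1), (1,1), (1,2), (2,2), (2,0)}, whose bipartite graph is a six-cycle, has no bisimplicial entry at all, so no row and column permutation makes it fill-free.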
A table of results, listing each matrix, its size, the percentage to which it is perfectly eliminable, and the number of nonzeros in its factors as computed by SuperLU with partial pivoting, can be found here.