Products of adjugate matrices












Let $S$ and $A$ be a symmetric and a skew-symmetric $n \times n$ matrix over $\mathbb{R}$, respectively. When calculating (numerically) the product $S^{-1} A S^{-1}$, I keep getting only a single factor of $\det S$ in the denominator, whereas from the adjugate formula I would expect its square: $$S^{-1} A S^{-1} = \frac{(\text{adj }S)\, A\, (\text{adj }S)}{(\det S)^2},$$ where $\text{adj }S$ is the adjugate of $S$.



Is there a way to prove that the combination $(\text{adj }S)\, A\, (\text{adj }S)$ already contains a factor of $\det S$?
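For instance, a quick SymPy sketch of the $n = 2$ case (the symbols $p, q, r, a$ below are just placeholders for the entries) shows the kind of divisibility I am hoping for:

    import sympy as sp

    p, q, r, a = sp.symbols('p q r a')
    S = sp.Matrix([[p, q], [q, r]])      # generic symmetric 2x2 matrix
    A = sp.Matrix([[0, a], [-a, 0]])     # generic skew-symmetric 2x2 matrix

    B = S.adjugate() * A * S.adjugate()  # (adj S) A (adj S)
    for entry in B:
        quotient, remainder = sp.div(sp.expand(entry), S.det(), p, q, r, a)
        assert remainder == 0            # every entry is divisible by det S

    print(B.applyfunc(lambda e: sp.cancel(e / S.det())))  # for n = 2 the quotient is A itself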










linear-algebra matrices determinant






edited May 10 '16 at 4:53 by darij grinberg
asked Aug 13 '15 at 18:01 by user54031



















  • Not true for $n=3$ (where having a factor of $\det S$ would mean being $0$). That said, we do (for $n=3$) have $\left(\operatorname{adj} S\right) A \left(\operatorname{adj} S\right) = h \left(\operatorname{adj} S\right)$, where $h$ is a certain scalar depending on $A$ and $S$.
    – darij grinberg
    Aug 13 '15 at 18:12








  • The conjecture does hold for even $n$, though. Better yet: $\operatorname{adj} S$ is divisible by the Pfaffian $\operatorname{Pf} S$ (and as you know, we have $\left(\operatorname{Pf} S\right)^2 = \det S$). I'll post this as an answer once I've figured out a nice proof that doesn't use Hilbert's Nullstellensatz.
    – darij grinberg
    Aug 13 '15 at 18:21












  • @darijgrinberg I'd be interested in the proof that does use it, if it can be summarized.
    – Omnomnomnom
    Aug 13 '15 at 18:27






  • Summary of the proof using the Nullstellensatz: Let us work over an algebraically closed field. Then, $\operatorname{Pf} S$ is an irreducible polynomial in the entries of $S$ (I think this is clear, though I'm not 100% sure), and whenever it vanishes, so does every entry of $\operatorname{adj} S$ (since the vanishing of $\operatorname{Pf} S$ shows that $\operatorname{rank} S < 2n$ and thus $\operatorname{rank} S \leq 2n-2$, since the rank of a skew-symmetric matrix must be even). Gauss' lemma can then be used to pull back the divisibility from the algebraically closed field to any ring.
    – darij grinberg
    Aug 13 '15 at 18:28












  • Anyway, it seems to me that we have something explicit: If $S$ is a skew-symmetric $n\times n$-matrix with $n$ even, then $\operatorname{adj} S = \operatorname{Pf} S \cdot \operatorname{Pdj} S$, where $\operatorname{Pdj} S$ is the Pfaffian adjoint of $S$ (that is, the $n\times n$-matrix whose $\left(i,j\right)$-th entry is $\left(-1\right)^{i+j+\left[i>j\right]} p_{i,j}\left(S\right)$, where $p_{i,j}\left(S\right)$ is the Pfaffian of the matrix obtained by removing the $i$-th and $j$-th rows and the $i$-th and $j$-th columns from $S$). Here, $\left[i>j\right]$ means $1$ if $i>j$ ...
    – darij grinberg
    Aug 13 '15 at 18:31
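A quick SymPy sketch of the divisibility of $\operatorname{adj} S$ by $\operatorname{Pf} S$ claimed in these comments, for a generic skew-symmetric $4\times 4$ matrix (the setup is ad hoc, and the Pfaffian is hard-coded in its standard $4\times 4$ closed form):

    import sympy as sp

    # Generic 4x4 skew-symmetric matrix in the indeterminates s_ij
    s = {(i, j): sp.Symbol(f's{i}{j}') for i in range(1, 5) for j in range(i + 1, 5)}
    S = sp.zeros(4, 4)
    for (i, j), sym in s.items():
        S[i - 1, j - 1] = sym
        S[j - 1, i - 1] = -sym

    # Pfaffian of a 4x4 skew-symmetric matrix (standard closed form)
    pf = s[(1, 2)] * s[(3, 4)] - s[(1, 3)] * s[(2, 4)] + s[(1, 4)] * s[(2, 3)]

    assert sp.expand(pf**2 - S.det()) == 0          # (Pf S)^2 = det S
    for entry in S.adjugate():
        q, r = sp.div(sp.expand(entry), pf, *s.values())
        assert r == 0                                # Pf S divides every entry of adj S
    print("adj S is divisible by Pf S for a generic skew-symmetric 4x4 matrix")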




















4 Answers

Assume that $\det(S)=0$. Then $\operatorname{adj}(S)$ has rank $1$ and is symmetric; then $\operatorname{adj}(S)=avv^T$ where $a\in \mathbb{R}$ and $v$ is a vector. Thus $\operatorname{adj}(S)\,A\,\operatorname{adj}(S)=a^2v(v^TAv)v^T$. Since $A$ is skew-symmetric, $v^TAv=0$ and $\operatorname{adj}(S)\,A\,\operatorname{adj}(S)=0$. We use Darij's method; here, the condition is that $\det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix; if that is true, then $\det(S)$ is a factor of every entry of $\operatorname{adj}(S)\,A\,\operatorname{adj}(S)$.
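A quick numerical sanity check of this vanishing (a NumPy sketch; the `adjugate` helper below is ad hoc, not a library routine):

    import numpy as np

    def adjugate(M):
        """Adjugate via cofactors; fine for small matrices (ad hoc helper)."""
        n = M.shape[0]
        C = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C.T                             # adj(M) = transpose of the cofactor matrix

    rng = np.random.default_rng(0)
    Q = np.linalg.qr(rng.standard_normal((4, 4)))[0]
    S = Q @ np.diag([1.0, 2.0, 3.0, 0.0]) @ Q.T   # singular symmetric matrix of rank 3
    B = rng.standard_normal((4, 4))
    A = B - B.T                                   # skew-symmetric

    adjS = adjugate(S)
    print(np.linalg.matrix_rank(adjS))            # 1: adj(S) = a*v*v^T
    print(np.allclose(adjS @ A @ adjS, 0))        # True: v^T A v = 0 kills the product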



EDIT 1. For the proof that $\det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix, cf. https://mathoverflow.net/questions/50362/irreducibility-of-determinant-of-symmetric-matrix and we are done!



EDIT 2. @darij grinberg, hi Darij, I read quickly your Theorem 1 (for $K$, a commutative ring with unity) and I think that your proof works; yet it is complicated! I think (as you wrote in your comment above) that it suffices to prove the result when $K$ is a field; yet I do not know how to write it rigorously...



STEP 1. $K$ is a field. If $\det(S)=0$, then $\operatorname{adj}(S)=vw^T$ and $\operatorname{adj}(S)\,A\,\operatorname{adj}(S^T)=v(w^TAw)v^T=0$ (even if $\operatorname{char}(K)=2$). Since $\det(\cdot)$ is an irreducible polynomial in the entries of $S$, we conclude as above.



STEP 2. Let $S=[s_{i,j}]$, $A=[a_{i,j}]$. We work in the ring of polynomials $\mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$ in the indeterminates $(s_{i,j}),(a_{i,j})$. This ring has no zero-divisors, is factorial, has characteristic $0$, and is even integrally closed. Clearly the entries of $\operatorname{adj}(S)\,A\,\operatorname{adj}(S^T)$ are in $\mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$; moreover they formally have $\det(S)$ as a factor.



Now, if $K$ is a commutative ring with unity, we must use a variant of Gauss's lemma showing that the factor $\det(S)$ is preserved over $K$. What form of the lemma can be used, and how can it be written correctly?



I just saw that the OP took the green check mark for himself; we are our own best advocates.






answered Aug 13 '15 at 21:48 by loup blanc (edited Apr 13 '17 at 12:58)
  • Why is the rank of adj S = 1 for det S = 0?
    – user54031
    Aug 13 '15 at 23:06










  • It is true for any matrix!
    – loup blanc
    Aug 14 '15 at 9:59










  • In fact the rank is 0 or 1.
    – loup blanc
    Aug 14 '15 at 10:00










  • +1. Nice trick in here! But you can spare yourself the algebraic geometry orgy by generalizing: For any $n\times n$-matrix $S$ and any alternating $n\times n$-matrix $A$, the entries of $\left(\operatorname{adj} S\right) \cdot A \cdot \left(\operatorname{adj}\left(S^T\right)\right)$ are divisible by $\det S$. The irreducibility of $\det S$ is much more elementary here, and instead of $\operatorname{adj} S = avv^T$ you now need $\operatorname{adj} S = vw^T$ (which is also easier to prove and less reliant on the base field).
    – darij grinberg
    Aug 14 '15 at 21:08










  • Yes Darij, you are right.
    – loup blanc
    Aug 15 '15 at 4:22




















Here is a proof.



In the following, we fix a commutative ring $\mathbb{K}$. All matrices are over $\mathbb{K}$.




Theorem 1. Let $n\in\mathbb{N}$. Let $S$ be an $n\times n$-matrix. Let $A$ be an alternating $n\times n$-matrix. (This means that $A^{T}=-A$ and that the diagonal entries of $A$ are $0$.) Then, each entry of the matrix $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right)$ is divisible by $\det S$ (in $\mathbb{K}$).




[UPDATE: A slight modification of the below proof of Theorem 1 can be found in the solution to Exercise 6.42 in my Notes on the combinatorial fundamentals of algebra, version of 10 January 2019. More precisely, said Exercise 6.42 claims that each entry of the matrix $\left(\operatorname{adj} S\right)^T \cdot A \cdot \left(\operatorname{adj} S\right)$ is divisible by $\det S$; now it remains to substitute $S^T$ for $S$ and recall that $\left(\operatorname{adj} S\right)^T = \operatorname{adj}\left(S^T\right)$, and this immediately yields Theorem 1 above. Still, the following (shorter) version of this proof might be useful as well.]



The main workhorse of the proof of Theorem 1 is the following result, which
is essentially (up to some annoying switching of rows and columns) the
Desnanot-Jacobi identity used in Dodgson condensation:




Theorem 2. Let $n\in\mathbb{N}$. Let $S$ be an $n\times n$-matrix. For every $u\in\left\{ 1,2,\ldots,n\right\}$ and $v\in\left\{ 1,2,\ldots,n\right\}$, we let $S_{\sim u,\sim v}$ be the $\left( n-1\right)\times\left( n-1\right)$-matrix obtained by crossing out the $u$-th row and the $v$-th column in $S$. (Thus, $\operatorname*{adj}S=\left( \left( -1\right)^{i+j}\det\left( S_{\sim j,\sim i}\right) \right)_{1\leq i\leq n,\ 1\leq j\leq n}$.) For every four elements $u$, $u^{\prime}$, $v$ and $v^{\prime}$ of $\left\{ 1,2,\ldots,n\right\}$ with $u\neq u^{\prime}$ and $v\neq v^{\prime}$, we let $S_{\left( \sim u,\sim u^{\prime}\right) ,\left( \sim v,\sim v^{\prime}\right) }$ be the $\left( n-2\right)\times\left( n-2\right)$-matrix obtained by crossing out the $u$-th and $u^{\prime}$-th rows and the $v$-th and $v^{\prime}$-th columns in $S$. Let $u$, $i$, $v$ and $j$ be four elements of $\left\{ 1,2,\ldots,n\right\}$ with $u\neq v$ and $i\neq j$. Then,
\begin{align}
& \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right) \\
& = \left( -1\right)^{\left[ i<j\right] +\left[ u<v\right] }\det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) .
\end{align}

Here, we use the Iverson bracket notation (that is, we write $\left[ \mathcal{A}\right]$ for the truth value of a statement $\mathcal{A}$; this is defined by $\left[ \mathcal{A}\right] = \begin{cases} 1, & \text{if }\mathcal{A}\text{ is true;}\\ 0, & \text{if }\mathcal{A}\text{ is false}\end{cases}$).
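Before discussing proofs, here is a quick SymPy sketch that checks Theorem 2 for every admissible choice of $i$, $j$, $u$, $v$ on a random integer $4\times 4$ matrix (the helpers `minor1` and `minor2` are ad hoc names for the crossed-out determinants):

    import itertools
    import sympy as sp

    def minor1(S, i, u):
        """det of S with row i and column u crossed out (1-based indices)."""
        M = S.copy()
        M.row_del(i - 1); M.col_del(u - 1)
        return M.det()

    def minor2(S, i, j, u, v):
        """det of S with rows i, j and columns u, v crossed out (1-based indices)."""
        M = S.copy()
        for r in sorted({i - 1, j - 1}, reverse=True):
            M.row_del(r)
        for c in sorted({u - 1, v - 1}, reverse=True):
            M.col_del(c)
        return M.det()

    n = 4
    S = sp.randMatrix(n, n, min=-5, max=5, seed=1)
    for i, j, u, v in itertools.product(range(1, n + 1), repeat=4):
        if i == j or u == v:
            continue
        lhs = minor1(S, i, u) * minor1(S, j, v) - minor1(S, i, v) * minor1(S, j, u)
        rhs = (-1) ** (int(i < j) + int(u < v)) * S.det() * minor2(S, i, j, u, v)
        assert lhs == rhs
    print("Theorem 2 verified for all admissible index choices on a random 4x4 matrix")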




There are several ways to prove Theorem 2: I am aware of one argument that derives it from the Plücker relations (the simplest ones, where just one column is being shuffled around). There is at least one combinatorial argument that proves Theorem 2 in the case when $i = 1$, $j = n$, $u = 1$ and $v = n$ (see Zeilberger's paper); the general case can be reduced to this case by permuting rows and columns (although it is quite painful to track how the signs change under these permutations). (See also a paper by Berliner and Brualdi for a generalization of Theorem 2, with a combinatorial proof too.) There is at least one short algebraic proof of Theorem 2 (again in the case when $i = 1$, $j = n$, $u = 1$ and $v = n$ only) which relies on "formal" division by $\det S$ (that is, it proves that
\begin{align}
& \det S \cdot \left( \det\left( S_{\sim 1,\sim 1}\right) \cdot\det\left( S_{\sim 2,\sim 2}\right) -\det\left( S_{\sim 1,\sim 2}\right) \cdot\det\left( S_{\sim 2,\sim 1}\right) \right) \\
& = \left( \det S\right)^2 \cdot\det\left( S_{\left( \sim 1,\sim 2\right) ,\left( \sim 1,\sim 2\right) }\right) ,
\end{align}
and then argues that $\det S$ can be cancelled because the determinant of a "generic" square matrix is invertible). (This proof appears in Bressoud's Proofs and Confirmations; a French version can also be found in lecture notes by Yoann Gelineau.) Unfortunately, none of these proofs seems to release the reader from the annoyance of dealing with the signs. Maybe exterior powers are the best thing to use here, but I do not see how. I have written up a division-free (but laborious and annoying) proof of Theorem 2 in my determinant notes; more precisely, I have written up the proof of the $i < j$ and $u < v$ case, but the general case can easily be obtained from it as follows:



Proof of Theorem 2. We need to prove the equality
\begin{align}
& \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right) \\
& = \left( -1\right)^{\left[ i<j\right] +\left[ u<v\right] }\det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) .
\label{darij.eq.1}
\tag{1}
\end{align}
If we interchange $u$ with $v$, then the left hand side of this equality gets multiplied by $-1$ (because its subtrahend and its minuend switch places), whereas the right hand side also gets multiplied by $-1$ (since $S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right)}$ does not change, but $\left[ u<v\right]$ either changes from $0$ to $1$ or changes from $1$ to $0$). Hence, if we interchange $u$ with $v$, then the equality \eqref{darij.eq.1} does not change its truth value. Thus, we can WLOG assume that $u \leq v$ (since otherwise we can just interchange $u$ with $v$). Assume this. For similar reasons, we can WLOG assume that $i \leq j$; assume this too. From $u \leq v$ and $u \neq v$, we obtain $u < v$. From $i \leq j$ and $i \neq j$, we obtain $i < j$. Thus, Theorem 6.126 in my Notes on the combinatorial fundamentals of algebra, version of 10 January 2019 (applied to $A=S$, $p=i$ and $q=j$) shows that
\begin{align}
& \det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) \\
& = \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right)
\label{darij.eq.2}
\tag{2}
\end{align}
(indeed, what I am calling $S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }$ here is what I am calling $\operatorname{sub}^{1,2,\ldots,\widehat{u},\ldots,\widehat{v},\ldots,n}_{1,2,\ldots,\widehat{i},\ldots,\widehat{j},\ldots,n} A$ in my notes).

But both $\left[i < j\right]$ and $\left[u < v\right]$ equal $1$ (since $i < j$ and $u < v$). Thus, $\left( -1\right)^{\left[ i<j\right] +\left[ u<v\right] } = \left(-1\right)^{1+1} = 1$. Therefore
\begin{align}
& \underbrace{\left( -1\right)^{\left[ i<j\right] +\left[ u<v\right] }}_{=1}\det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) \\
& = \det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) \\
& = \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right)
\end{align}
(by \eqref{darij.eq.2}). This proves Theorem 2. $\blacksquare$



Finally, here is an obvious lemma:




Lemma 3. Let $n\in\mathbb{N}$. For every $i\in\left\{ 1,2,\ldots,n\right\}$ and $j\in\left\{ 1,2,\ldots,n\right\}$, let $E_{i,j}$ be the $n\times n$-matrix whose $\left( i,j\right)$-th entry is $1$ and whose all other entries are $0$. (This is called an elementary matrix.) Then, every alternating $n\times n$-matrix is a $\mathbb{K}$-linear combination of the matrices $E_{i,j}-E_{j,i}$ for pairs $\left( i,j\right)$ of integers satisfying $1\leq i<j\leq n$.




Proof of Theorem 1. We shall use the notation $E_{i,j}$ defined in Lemma 3.



We need to prove that every entry of the matrix $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right)$ is divisible by $\det S$. In other words, we need to prove that, for every $\left( u,v\right) \in\left\{ 1,2,\ldots,n\right\}^{2}$, the $\left( u,v\right)$-th entry of the matrix $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right)$ is divisible by $\det S$. So, fix $\left( u,v\right) \in\left\{ 1,2,\ldots,n\right\}^{2}$.



We need to show that the $\left( u,v\right)$-th entry of the matrix $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right)$ is divisible by $\det S$. This statement is clearly $\mathbb{K}$-linear in $A$ (in the sense that if $A_{1}$ and $A_{2}$ are two alternating $n\times n$-matrices such that this statement holds both for $A=A_{1}$ and for $A=A_{2}$, and if $\lambda_{1}$ and $\lambda_{2}$ are two elements of $\mathbb{K}$, then this statement also holds for $A=\lambda_{1}A_{1}+\lambda_{2}A_{2}$). Thus, we can WLOG assume that $A$ has the form $E_{i,j}-E_{j,i}$ for a pair $\left( i,j\right)$ of integers satisfying $1\leq i<j\leq n$ (according to Lemma 3). Assume this, and consider this pair $\left( i,j\right)$.



We have $\operatorname*{adj}S=\left( \left( -1\right)^{x+y}\det\left( S_{\sim y,\sim x}\right) \right)_{1\leq x\leq n,\ 1\leq y\leq n}$ and
\begin{align}
\operatorname*{adj}\left( S^{T}\right)
& = \left( \left( -1\right)^{x+y}\det\left( \underbrace{\left( S^{T}\right)_{\sim y,\sim x}}_{=\left( S_{\sim x,\sim y}\right)^{T}}\right) \right)_{1\leq x\leq n,\ 1\leq y\leq n} \\
& = \left( \left( -1\right)^{x+y}\underbrace{\det\left( \left( S_{\sim x,\sim y}\right)^{T}\right) }_{=\det\left( S_{\sim x,\sim y}\right)}\right)_{1\leq x\leq n,\ 1\leq y\leq n} \\
& = \left( \left( -1\right)^{x+y}\det\left( S_{\sim x,\sim y}\right)\right)_{1\leq x\leq n,\ 1\leq y\leq n} .
\end{align}
Hence,
\begin{align}
& \underbrace{\left( \operatorname*{adj}S\right) }_{=\left( \left( -1\right)^{x+y}\det\left( S_{\sim y,\sim x}\right) \right)_{1\leq x\leq n,\ 1\leq y\leq n}}\cdot\underbrace{A}_{=E_{i,j}-E_{j,i}}\cdot\underbrace{\left( \operatorname*{adj}\left( S^{T}\right) \right) }_{=\left( \left( -1\right)^{x+y}\det\left( S_{\sim x,\sim y}\right)\right)_{1\leq x\leq n,\ 1\leq y\leq n}} \\
& =\left( \left( -1\right)^{x+y}\det\left( S_{\sim y,\sim x}\right)\right)_{1\leq x\leq n,\ 1\leq y\leq n}\cdot\left( E_{i,j}-E_{j,i}\right) \\
& \qquad \qquad \cdot\left( \left( -1\right)^{x+y}\det\left( S_{\sim x,\sim y}\right)\right)_{1\leq x\leq n,\ 1\leq y\leq n} \\
& = \left( \left( -1\right)^{x+i}\det\left( S_{\sim i,\sim x}\right)\cdot\left( -1\right)^{j+y}\det\left( S_{\sim j,\sim y}\right) \right. \\
& \qquad \qquad \left. -\left( -1\right)^{x+j}\det\left( S_{\sim j,\sim x}\right) \cdot\left( -1\right)^{i+y}\det\left( S_{\sim i,\sim y}\right) \right)_{1\leq x\leq n,\ 1\leq y\leq n} .
\end{align}
Hence, the $\left( u,v\right)$-th entry of the matrix $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right)$ is
\begin{align}
& \left( -1\right)^{u+i}\det\left( S_{\sim i,\sim u}\right) \cdot\left( -1\right)^{j+v}\det\left( S_{\sim j,\sim v}\right) -\left( -1\right)^{u+j}\det\left( S_{\sim j,\sim u}\right) \cdot\left( -1\right)^{i+v}\det\left( S_{\sim i,\sim v}\right) \\
& = \left( -1\right)^{i+j+u+v}\left( \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right) \right) .
\label{darij.eq.3}
\tag{3}
\end{align}
We need to prove that this is divisible by $\det S$. If $u=v$, then this is obvious (because if $u=v$, then the right hand side of \eqref{darij.eq.3} is $0$). Hence, we WLOG assume that $u\neq v$. Thus, \eqref{darij.eq.3} shows that the $\left( u,v\right)$-th entry of the matrix $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right)$ is
\begin{align}
& \left( -1\right)^{i+j+u+v}\underbrace{\left( \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right) \right) }_{\substack{=\left( -1\right)^{\left[ i<j\right] +\left[ u<v\right] }\det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) \\ \text{(by Theorem 2)}}} \\
& = \left( -1\right)^{i+j+u+v}\left( -1\right)^{\left[ i<j\right] +\left[ u<v\right] }\det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) ,
\end{align}
which is clearly divisible by $\det S$. Theorem 1 is thus proven. $\blacksquare$






answered by darij grinberg
  • Please see my answer. Does it agree with your answer?
    – user54031
    Aug 18 '15 at 8:19










  • @LBO: I fear I cannot tell, as I am not acquainted with the tensor notation you are using.
    – darij grinberg
    Aug 18 '15 at 11:52




















I have managed to prove the claim in a pedestrian way. I'll just present the final result because the proof is quite involved and probably of interest only to me.



In the following, I will use the (abstract) index notation and Einstein summation convention.



First of all, the determinant of a square $n \times n$ matrix $S$ is given by



$$\det S = \frac{1}{n!} \epsilon_{a_1 \ldots a_n} \epsilon_{b_1 \ldots b_n} S_{b_1 a_1} \ldots S_{b_n a_n}.$$



Second, the adjugate of $S$ is given by a similar expression
$$(\operatorname{adj} S)_{a_1 b_1} = \frac{1}{(n-1)!} \epsilon_{a_1 a_2 \ldots a_n} \epsilon_{b_1 b_2 \ldots b_n} S_{b_2 a_2} \ldots S_{b_n a_n}.$$



Finally, we will need another tensor of order $4$, defined as
$$(\operatorname{adj}_2 S)_{a_1 a_2, b_1 b_2} = \frac{1}{(n-2)!} \epsilon_{a_1 a_2 a_3 \ldots a_n} \epsilon_{b_1 b_2 b_3 \ldots b_n} S_{b_3 a_3} \ldots S_{b_n a_n}.$$



Then the following identity holds
$$((S^{-1}) A (S^{-1})^{T})_{ab} = \frac{1}{2} \frac{(\operatorname{adj}_2 S)_{ab,cd} A_{cd}}{\det S}$$
for $A$ skew-symmetric.
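As a numerical sanity check of this identity, here is a NumPy sketch for $n = 4$ (the Levi-Civita construction and the random test matrices are ad hoc):

    import itertools
    from math import factorial
    import numpy as np

    n = 4

    # Levi-Civita tensor eps[a1, ..., an] built from permutation signs (ad hoc helper)
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        sign = 1
        for x, y in itertools.combinations(range(n), 2):
            if perm[x] > perm[y]:
                sign = -sign
        eps[perm] = sign

    rng = np.random.default_rng(1)
    M = rng.standard_normal((n, n))
    S = M + M.T + n * np.eye(n)          # symmetric and comfortably invertible
    B = rng.standard_normal((n, n))
    A = B - B.T                          # skew-symmetric

    detS = np.einsum('abcd,efgh,ea,fb,gc,hd->', eps, eps, S, S, S, S) / factorial(n)
    adj2 = np.einsum('abcd,efgh,gc,hd->abef', eps, eps, S, S) / factorial(n - 2)

    lhs = np.linalg.inv(S) @ A @ np.linalg.inv(S).T
    rhs = 0.5 * np.einsum('abef,ef->ab', adj2, A) / detS

    print(np.isclose(detS, np.linalg.det(S)))   # True: the epsilon formula reproduces det S
    print(np.allclose(lhs, rhs))                # expected True, per the identity above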






answered by user54031

Because $S\,\mbox{adj}(S)=\mbox{adj}(S)\,S=\mbox{det}(S)I$, the matrix $S$ is invertible iff $\mbox{det}(S)\ne 0$, and, in that case,
$$
S^{-1} = \frac{1}{\mbox{det}(S)}\mbox{adj}(S).
$$
Therefore,
$$
S^{-1}AS^{-1} = \frac{1}{\mbox{det}(S)^{2}}\mbox{adj}(S)\,A\,\mbox{adj}(S).
$$






answered by DisintegratingByParts
  • Yes, thank you, I'm aware of that. I would like to prove that the numerator of your last expression contains $\det S$.
    – user54031
    Aug 13 '15 at 20:32










  • I don't really understand what you mean by contains $\mbox{det}(S)$. There are determinants buried in the inverses on the left that match the determinants on the right. Is there something in particular bothering you about the expression?
    – DisintegratingByParts
    Aug 13 '15 at 20:44










  • Ok, to word it differently, I want to show that, whenever $\det S = 0$, $(\text{adj }S)\, A\, (\text{adj }S)$ also vanishes.
    – user54031
    Aug 13 '15 at 21:00












    Your Answer





    StackExchange.ifUsing("editor", function () {
    return StackExchange.using("mathjaxEditing", function () {
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    });
    });
    }, "mathjax-editing");

    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "69"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f1396165%2fproducts-of-adjugate-matrices%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown
























    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    2












    $begingroup$

    Assume that $det(S)=0$. Then $adj(S)$ has rank $1$ and is symmetric; then $adj(S)=avv^T$ where $ain mathbb{R}$ and $v$ is a vector. Thus $adj(S)Aadj(S)=a^2v(v^TAv)v^T$. Since $A$ is skew-symmetric, $v^TAv=0$ and $adj(S)Aadj(S)=0$. We use the Darij's method; here, the condition is that $det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix; if it is true, then $det(S)$ is a factor of every entry of $adj(S)Aadj(S)$.



    EDIT 1. For the proof that $det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix, cf. https://mathoverflow.net/questions/50362/irreducibility-of-determinant-of-symmetric-matrix
    and we are done !



    EDIT 2. @ darij grinberg , hi Darij, I read quickly your Theorem 1 (for $K$, a commutative ring with unity) and I think that your proof works; yet it is complicated! I think (as you wrote in your comment above) that it suffices to prove the result when $K$ is a field; yet I do not kwow how to write it rigorously...



    STEP 1. $K$ is a field. If $det(S)=0$, then $adj(S)=vw^T$ and $adj(S).A.adj(S)=v(w^TAw)v^T=0$ (even if $char(K)=2$). Since $det(.)$ is irreducible over $M_n(K)$, we conclude as above.



    STEP 2. Let $S=[s_{ij}],A=[a_{i,j}]$. We work in the ring of polynomials $mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$ in the indeterminates $(s_{i,j}),(a_{i,j})$. This ring has no zero-divisors, is factorial and its characteristic is $0$ and even is integrally closed. Clearly the entries of $adj(S).A.adj(S)$ are in $mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$; moreover they formally have $det(S)$ as a factor.



    Now, if $K$ is a commutative ring with unity, we must use an argument using a variant of Gauss lemma showing that the factor $det(S)$ is preserved over $K$. What form of the lemma can be used and how to write it correctly ?



    I just see that the OP takes for himself the green chevron; we are our own best advocates






    share|cite|improve this answer











    $endgroup$













    • $begingroup$
      Why is the rank of adj S = 1 for det S = 0?
      $endgroup$
      – user54031
      Aug 13 '15 at 23:06










    • $begingroup$
      It is true for any matrix !
      $endgroup$
      – loup blanc
      Aug 14 '15 at 9:59










    • $begingroup$
      In fact the rank is 0 or 1
      $endgroup$
      – loup blanc
      Aug 14 '15 at 10:00










    • $begingroup$
      +1. Nice trick in here! But you can spare yourself the algebraic geometry orgy by generalizing: For any $ntimes n$-matrix $S$ and any alternating $ntimes n$-matrix $A$, the entries of $left(operatorname{adj} Sright) cdot A cdot left(operatorname{adj}left(S^Tright)right)$ are divisible by $det S$. The irreducibility of $det S$ is much more elementary here, and instead of $operatorname{adj} S = avv^T$ you now need $operatorname{adj} S = vw^T$ (which is also easier to prove and less reliant on the base field).
      $endgroup$
      – darij grinberg
      Aug 14 '15 at 21:08










    • $begingroup$
      Yes Darij, you are right.
      $endgroup$
      – loup blanc
      Aug 15 '15 at 4:22
















    2












    $begingroup$

    Assume that $det(S)=0$. Then $adj(S)$ has rank $1$ and is symmetric; then $adj(S)=avv^T$ where $ain mathbb{R}$ and $v$ is a vector. Thus $adj(S)Aadj(S)=a^2v(v^TAv)v^T$. Since $A$ is skew-symmetric, $v^TAv=0$ and $adj(S)Aadj(S)=0$. We use the Darij's method; here, the condition is that $det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix; if it is true, then $det(S)$ is a factor of every entry of $adj(S)Aadj(S)$.



    EDIT 1. For the proof that $det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix, cf. https://mathoverflow.net/questions/50362/irreducibility-of-determinant-of-symmetric-matrix
    and we are done !



    EDIT 2. @ darij grinberg , hi Darij, I read quickly your Theorem 1 (for $K$, a commutative ring with unity) and I think that your proof works; yet it is complicated! I think (as you wrote in your comment above) that it suffices to prove the result when $K$ is a field; yet I do not kwow how to write it rigorously...



    STEP 1. $K$ is a field. If $det(S)=0$, then $adj(S)=vw^T$ and $adj(S).A.adj(S)=v(w^TAw)v^T=0$ (even if $char(K)=2$). Since $det(.)$ is irreducible over $M_n(K)$, we conclude as above.



    STEP 2. Let $S=[s_{ij}],A=[a_{i,j}]$. We work in the ring of polynomials $mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$ in the indeterminates $(s_{i,j}),(a_{i,j})$. This ring has no zero-divisors, is factorial and its characteristic is $0$ and even is integrally closed. Clearly the entries of $adj(S).A.adj(S)$ are in $mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$; moreover they formally have $det(S)$ as a factor.



    Now, if $K$ is a commutative ring with unity, we must use an argument using a variant of Gauss lemma showing that the factor $det(S)$ is preserved over $K$. What form of the lemma can be used and how to write it correctly ?



    I just see that the OP takes for himself the green chevron; we are our own best advocates






    share|cite|improve this answer











    $endgroup$













    • $begingroup$
      Why is the rank of adj S = 1 for det S = 0?
      $endgroup$
      – user54031
      Aug 13 '15 at 23:06










    • $begingroup$
      It is true for any matrix !
      $endgroup$
      – loup blanc
      Aug 14 '15 at 9:59










    • $begingroup$
      In fact the rank is 0 or 1
      $endgroup$
      – loup blanc
      Aug 14 '15 at 10:00










    • $begingroup$
      +1. Nice trick in here! But you can spare yourself the algebraic geometry orgy by generalizing: For any $ntimes n$-matrix $S$ and any alternating $ntimes n$-matrix $A$, the entries of $left(operatorname{adj} Sright) cdot A cdot left(operatorname{adj}left(S^Tright)right)$ are divisible by $det S$. The irreducibility of $det S$ is much more elementary here, and instead of $operatorname{adj} S = avv^T$ you now need $operatorname{adj} S = vw^T$ (which is also easier to prove and less reliant on the base field).
      $endgroup$
      – darij grinberg
      Aug 14 '15 at 21:08










    • $begingroup$
      Yes Darij, you are right.
      $endgroup$
      – loup blanc
      Aug 15 '15 at 4:22














    2












    2








    2





    $begingroup$

    Assume that $det(S)=0$. Then $adj(S)$ has rank $1$ and is symmetric; then $adj(S)=avv^T$ where $ain mathbb{R}$ and $v$ is a vector. Thus $adj(S)Aadj(S)=a^2v(v^TAv)v^T$. Since $A$ is skew-symmetric, $v^TAv=0$ and $adj(S)Aadj(S)=0$. We use the Darij's method; here, the condition is that $det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix; if it is true, then $det(S)$ is a factor of every entry of $adj(S)Aadj(S)$.



    EDIT 1. For the proof that $det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix, cf. https://mathoverflow.net/questions/50362/irreducibility-of-determinant-of-symmetric-matrix
    and we are done !



    EDIT 2. @ darij grinberg , hi Darij, I read quickly your Theorem 1 (for $K$, a commutative ring with unity) and I think that your proof works; yet it is complicated! I think (as you wrote in your comment above) that it suffices to prove the result when $K$ is a field; yet I do not kwow how to write it rigorously...



    STEP 1. $K$ is a field. If $det(S)=0$, then $adj(S)=vw^T$ and $adj(S).A.adj(S)=v(w^TAw)v^T=0$ (even if $char(K)=2$). Since $det(.)$ is irreducible over $M_n(K)$, we conclude as above.



    STEP 2. Let $S=[s_{ij}],A=[a_{i,j}]$. We work in the ring of polynomials $mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$ in the indeterminates $(s_{i,j}),(a_{i,j})$. This ring has no zero-divisors, is factorial and its characteristic is $0$ and even is integrally closed. Clearly the entries of $adj(S).A.adj(S)$ are in $mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$; moreover they formally have $det(S)$ as a factor.



    Now, if $K$ is a commutative ring with unity, we must use an argument using a variant of Gauss lemma showing that the factor $det(S)$ is preserved over $K$. What form of the lemma can be used and how to write it correctly ?



    I just see that the OP takes for himself the green chevron; we are our own best advocates






    share|cite|improve this answer











    $endgroup$



    Assume that $det(S)=0$. Then $adj(S)$ has rank $1$ and is symmetric; then $adj(S)=avv^T$ where $ain mathbb{R}$ and $v$ is a vector. Thus $adj(S)Aadj(S)=a^2v(v^TAv)v^T$. Since $A$ is skew-symmetric, $v^TAv=0$ and $adj(S)Aadj(S)=0$. We use the Darij's method; here, the condition is that $det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix; if it is true, then $det(S)$ is a factor of every entry of $adj(S)Aadj(S)$.



    EDIT 1. For the proof that $det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix, cf. https://mathoverflow.net/questions/50362/irreducibility-of-determinant-of-symmetric-matrix
    and we are done !



    EDIT 2. @ darij grinberg , hi Darij, I read quickly your Theorem 1 (for $K$, a commutative ring with unity) and I think that your proof works; yet it is complicated! I think (as you wrote in your comment above) that it suffices to prove the result when $K$ is a field; yet I do not kwow how to write it rigorously...



    STEP 1. $K$ is a field. If $det(S)=0$, then $adj(S)=vw^T$ and $adj(S).A.adj(S)=v(w^TAw)v^T=0$ (even if $char(K)=2$). Since $det(.)$ is irreducible over $M_n(K)$, we conclude as above.



    STEP 2. Let $S=[s_{ij}],A=[a_{i,j}]$. We work in the ring of polynomials $mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$ in the indeterminates $(s_{i,j}),(a_{i,j})$. This ring has no zero-divisors, is factorial and its characteristic is $0$ and even is integrally closed. Clearly the entries of $adj(S).A.adj(S)$ are in $mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$; moreover they formally have $det(S)$ as a factor.



    Now, if $K$ is a commutative ring with unity, we must use an argument using a variant of Gauss lemma showing that the factor $det(S)$ is preserved over $K$. What form of the lemma can be used and how to write it correctly ?



    I just see that the OP takes for himself the green chevron; we are our own best advocates







    share|cite|improve this answer














    share|cite|improve this answer



    share|cite|improve this answer








    edited Apr 13 '17 at 12:58









    Community

    1




    1










    answered Aug 13 '15 at 21:48









    loup blancloup blanc

    24.2k21851




    24.2k21851












    • $begingroup$
      Why is the rank of adj S = 1 for det S = 0?
      $endgroup$
      – user54031
      Aug 13 '15 at 23:06










    • $begingroup$
      It is true for any matrix !
      $endgroup$
      – loup blanc
      Aug 14 '15 at 9:59










    • $begingroup$
      In fact the rank is 0 or 1
      $endgroup$
      – loup blanc
      Aug 14 '15 at 10:00










    • $begingroup$
      +1. Nice trick in here! But you can spare yourself the algebraic geometry orgy by generalizing: For any $ntimes n$-matrix $S$ and any alternating $ntimes n$-matrix $A$, the entries of $left(operatorname{adj} Sright) cdot A cdot left(operatorname{adj}left(S^Tright)right)$ are divisible by $det S$. The irreducibility of $det S$ is much more elementary here, and instead of $operatorname{adj} S = avv^T$ you now need $operatorname{adj} S = vw^T$ (which is also easier to prove and less reliant on the base field).
      $endgroup$
      – darij grinberg
      Aug 14 '15 at 21:08










    • $begingroup$
      Yes Darij, you are right.
      $endgroup$
      – loup blanc
      Aug 15 '15 at 4:22


















    • $begingroup$
      Why is the rank of adj S = 1 for det S = 0?
      $endgroup$
      – user54031
      Aug 13 '15 at 23:06










    • $begingroup$
      It is true for any matrix !
      $endgroup$
      – loup blanc
      Aug 14 '15 at 9:59










    • $begingroup$
      In fact the rank is 0 or 1
      $endgroup$
      – loup blanc
      Aug 14 '15 at 10:00










    • $begingroup$
      +1. Nice trick in here! But you can spare yourself the algebraic geometry orgy by generalizing: For any $ntimes n$-matrix $S$ and any alternating $ntimes n$-matrix $A$, the entries of $left(operatorname{adj} Sright) cdot A cdot left(operatorname{adj}left(S^Tright)right)$ are divisible by $det S$. The irreducibility of $det S$ is much more elementary here, and instead of $operatorname{adj} S = avv^T$ you now need $operatorname{adj} S = vw^T$ (which is also easier to prove and less reliant on the base field).
      $endgroup$
      – darij grinberg
      Aug 14 '15 at 21:08










    • $begingroup$
      Yes Darij, you are right.
      $endgroup$
      – loup blanc
      Aug 15 '15 at 4:22
















    $begingroup$
    Why is the rank of adj S = 1 for det S = 0?
    $endgroup$
    – user54031
    Aug 13 '15 at 23:06




    $begingroup$
    Why is the rank of adj S = 1 for det S = 0?
    $endgroup$
    – user54031
    Aug 13 '15 at 23:06












    $begingroup$
    It is true for any matrix !
    $endgroup$
    – loup blanc
    Aug 14 '15 at 9:59




    $begingroup$
    It is true for any matrix !
    $endgroup$
    – loup blanc
    Aug 14 '15 at 9:59












    $begingroup$
    In fact the rank is 0 or 1
    $endgroup$
    – loup blanc
    Aug 14 '15 at 10:00




    $begingroup$
    In fact the rank is 0 or 1
    $endgroup$
    – loup blanc
    Aug 14 '15 at 10:00












    $begingroup$
    +1. Nice trick in here! But you can spare yourself the algebraic geometry orgy by generalizing: For any $ntimes n$-matrix $S$ and any alternating $ntimes n$-matrix $A$, the entries of $left(operatorname{adj} Sright) cdot A cdot left(operatorname{adj}left(S^Tright)right)$ are divisible by $det S$. The irreducibility of $det S$ is much more elementary here, and instead of $operatorname{adj} S = avv^T$ you now need $operatorname{adj} S = vw^T$ (which is also easier to prove and less reliant on the base field).
    $endgroup$
    – darij grinberg
    Aug 14 '15 at 21:08




    $begingroup$
    +1. Nice trick in here! But you can spare yourself the algebraic geometry orgy by generalizing: For any $ntimes n$-matrix $S$ and any alternating $ntimes n$-matrix $A$, the entries of $left(operatorname{adj} Sright) cdot A cdot left(operatorname{adj}left(S^Tright)right)$ are divisible by $det S$. The irreducibility of $det S$ is much more elementary here, and instead of $operatorname{adj} S = avv^T$ you now need $operatorname{adj} S = vw^T$ (which is also easier to prove and less reliant on the base field).
    $endgroup$
    – darij grinberg
    Aug 14 '15 at 21:08












    $begingroup$
    Yes Darij, you are right.
    $endgroup$
    – loup blanc
    Aug 15 '15 at 4:22




    $begingroup$
    Yes Darij, you are right.
    $endgroup$
    – loup blanc
    Aug 15 '15 at 4:22











    1












    $begingroup$

    Here is a proof.



    In the following, we fix a commutative ring $mathbb{K}$. All matrices are
    over $mathbb{K}$.




    Theorem 1. Let $ninmathbb{N}$. Let $S$ be an $ntimes n$-matrix. Let $A$
    be an alternating $ntimes n$-matrix. (This means that $A^{T}=-A$ and that the
    diagonal entries of $A$ are $0$.) Then, each entry of the matrix $left(
    operatorname*{adj}Sright) cdot Acdotleft( operatorname*{adj}left(
    S^{T}right) right) $
    is divisible by $det S$ (in $mathbb{K}$).




    [UPDATE: A slight modification of the below proof of Theorem 1 can be
    found in the solution to Exercise 6.42 in
    my Notes on the combinatorial
    fundamentals of algebra
    , version of 10 January 2019. More precisely,
    said Exercise 6.42 claims that each entry of the matrix
    $left(operatorname{adj} Sright)^T cdot A cdot
    left(operatorname{adj} Sright)$
    is divisible by $det S$; now it remains
    to substitute $S^T$ for $S$ and recall that
    $left(operatorname{adj} Sright)^T = operatorname{adj} left(S^Tright)$,
    and this immediately yields Theorem 1 above. Still, the following (shorter)
    version of this proof might be useful as well.]



    The main workhorse of the proof of Theorem 1 is the following result, which
    is essentially (up to some annoying switching of rows and columns) the
    Desnanot-Jacobi identity used in Dodgson condensation:




    Theorem 2. Let $ninmathbb{N}$. Let $S$ be an $ntimes n$-matrix. For
    every $uinleft{ 1,2,ldots,nright} $ and $vinleft{ 1,2,ldots
    ,nright} $
    , we let $S_{sim u,sim v}$ be the $left( n-1right)
    timesleft( n-1right) $
    -matrix obtained by crossing out the $u$-th row and
    the $v$-th column in $S$. (Thus, $operatorname*{adj}S=left( left(
    -1right) ^{i+j}S_{sim j,sim i}right) _{1leq ileq n, 1leq jleq n}$
    .)
    For every four elements $u$, $u^{prime}$, $v$ and $v^{prime}$ of $left{
    1,2,ldots,nright} $
    with $uneq u^{prime}$ and $vneq v^{prime}$, we let
    $S_{left( sim u,sim u^{prime}right) ,left( sim v,sim v^{prime
    }right) }$
    be the $left( n-2right) timesleft( n-2right) $-matrix
    obtained by crossing out the $u$-th and $u^{prime}$-th rows and the $v$-th
    and $v^{prime}$-th columns in $S$. Let $u$, $i$, $v$ and $j$ be four elements
    of $left{ 1,2,ldots,nright} $ with $uneq v$ and $ineq j$. Then,
    begin{align}
    & detleft( S_{sim i,sim u}right) cdotdetleft( S_{sim j,sim
    v}right) -detleft( S_{sim i,sim v}right) cdotdetleft( S_{sim
    j,sim u}right) \
    & = left( -1right) ^{left[ i<jright] +left[ u<vright] }det
    Scdotdetleft( S_{left( sim i,sim jright) ,left( sim u,sim
    vright) }right) .
    end{align}

    Here, we use the Iverson bracket notation (that is, we write
    $left[ mathcal{A}right] $ for the truth value of a statement
    $mathcal{A}$; this is defined by $left[ mathcal{A}right] =
    begin{cases}
    1, & text{if }mathcal{A}text{ is true;}\
    0, & text{if }mathcal{A}text{ is false}
    end{cases}
    $
    ).




    There are several ways to prove Theorem 2: I am aware of one argument
    that derives it from the Plücker
    relations (the simplest ones, where just one column is being shuffled around).
    There is at least one combinatorial argument that proves Theorem 2 in the
    case when $i = 1$, $j = n$, $u = 1$ and $v = n$ (see Zeilberger's paper);
    the general case can be reduced to this case by permuting rows and columns
    (although it is quite painful to track how the signs change under these
    permutations). (See also a paper by Berliner and Brualdi for a
    generalization of Theorem 2, with a combinatorial proof too.) There is
    at least one short algebraic proof of Theorem 2 (again in the case when
    $i = 1$, $j = n$, $u = 1$ and $v = n$ only) which relies on "formal" division
    by $det S$ (that is, it proves that
    begin{align}
    & det S cdot left(detleft( S_{sim 1,sim 1}right) cdotdetleft( S_{sim 2,sim
    2}right) -detleft( S_{sim 1,sim 2}right) cdotdetleft( S_{sim
    2,sim 1}right) right) \
    & = left(det Sright)^2
    cdotdetleft( S_{left( sim 1,sim 2right) ,left( sim 1,sim
    2right) }right) ,
    end{align}

    and then argues that $det S$ can be cancelled because the determinant of a
    "generic" square matrix is invertible). (This proof appears in Bressoud's
    Proofs and Confirmations; a French version can also be found in
    lecture notes by Yoann Gelineau.) Unfortunately, none of these proofs
    seems to release the reader from the annoyance of dealing with the signs.
    Maybe exterior powers are the best thing to use here, but I do not see how.
    I have written up a division-free (but laborious and annoying) proof of
    Theorem 2 in my determinant notes; more precisely, I have written up
    the proof of the $i < j$ and $u < v$ case, but the general case can easily
    be obtained from it as follows:



    Proof of Theorem 2. We need to prove the equality
    \begin{align}
    & \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right) \\
    & = \left( -1\right) ^{\left[ i<j\right] +\left[ u<v\right] }\det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) .
    \label{darij.eq.1}
    \tag{1}
    \end{align}

    If we interchange $u$ with $v$, then the left hand side of this equality
    gets multiplied by $-1$ (because its subtrahend and its minuend switch
    places), whereas the right hand side also gets multiplied by $-1$ (since
    $S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }$ does not
    change, but $\left[ u<v\right] $ either changes from $0$ to $1$ or changes
    from $1$ to $0$). Hence, if we interchange $u$ with $v$, then the equality
    \eqref{darij.eq.1} does not change its truth value. Thus, we can WLOG
    assume that $u \leq v$ (since otherwise we can just interchange $u$ with
    $v$). Assume this. For similar reasons, we can WLOG assume that $i \leq j$;
    assume this too. From $u \leq v$ and $u \neq v$, we obtain $u < v$. From
    $i \leq j$ and $i \neq j$, we obtain $i < j$. Thus, Theorem 6.126 in my
    Notes on the combinatorial fundamentals of algebra, version of
    10 January 2019 (applied to $A=S$, $p=i$ and $q=j$) shows that
    \begin{align}
    & \det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) \\
    & = \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right)
    \label{darij.eq.2}
    \tag{2}
    \end{align}

    (indeed, what I am calling $S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }$
    here is what I am calling
    $\operatorname{sub}^{1,2,\ldots,\widehat{u},\ldots,\widehat{v},\ldots,n}_{1,2,\ldots,\widehat{i},\ldots,\widehat{j},\ldots,n} A$ in my notes).

    But both $\left[ i < j\right] $ and $\left[ u < v\right] $ equal $1$ (since
    $i < j$ and $u < v$). Thus,
    $\left( -1\right) ^{\left[ i<j\right] +\left[ u<v\right] } = \left( -1\right) ^{1+1} = 1$.
    Therefore,
    \begin{align}
    & \underbrace{\left( -1\right) ^{\left[ i<j\right] +\left[ u<v\right] }}_{=1}\det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) \\
    & = \det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) \\
    & = \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right)
    \end{align}

    (by \eqref{darij.eq.2}). This proves Theorem 2. $\blacksquare$



    Finally, here is an obvious lemma:




    Lemma 3. Let $n\in\mathbb{N}$. For every $i\in\left\{ 1,2,\ldots,n\right\} $
    and $j\in\left\{ 1,2,\ldots,n\right\} $, let $E_{i,j}$ be the
    $n\times n$-matrix whose $\left( i,j\right) $-th entry is $1$ and all of
    whose other entries are $0$. (This is called an elementary matrix.) Then,
    every alternating $n\times n$-matrix is a $\mathbb{K}$-linear combination
    of the matrices $E_{i,j}-E_{j,i}$ for pairs $\left( i,j\right) $ of
    integers satisfying $1\leq i<j\leq n$.
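
    In fact, the combination is explicit: an alternating matrix $A$ equals $\sum_{1\leq i<j\leq n} A_{i,j}\left( E_{i,j}-E_{j,i}\right) $. A minimal sketch of this decomposition (an illustration only, assuming SymPy; the helper E is an ad hoc name):

        import sympy as sp

        def E(n, i, j):
            """n x n matrix with a 1 in position (i, j) (1-based) and 0 elsewhere."""
            M = sp.zeros(n, n)
            M[i - 1, j - 1] = 1
            return M

        n = 4
        A = sp.Matrix([[0, 2, -1, 5], [-2, 0, 3, 0], [1, -3, 0, -4], [-5, 0, 4, 0]])  # alternating

        recombined = sp.zeros(n, n)
        for i in range(1, n + 1):
            for j in range(i + 1, n + 1):
                recombined += A[i - 1, j - 1] * (E(n, i, j) - E(n, j, i))

        assert recombined == A   # the coefficients are just the above-diagonal entries of A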




    Proof of Theorem 1. We shall use the notation $E_{i,j}$ defined in Lemma 3.



    We need to prove that every entry of the matrix
    $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right) $
    is divisible by $\det S$. In other words, we need to prove that, for every
    $\left( u,v\right) \in\left\{ 1,2,\ldots,n\right\} ^{2}$, the
    $\left( u,v\right) $-th entry of the matrix
    $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right) $
    is divisible by $\det S$. So, fix $\left( u,v\right) \in\left\{ 1,2,\ldots,n\right\} ^{2}$.

    We need to show that the $\left( u,v\right) $-th entry of the matrix
    $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right) $
    is divisible by $\det S$. This statement is clearly $\mathbb{K}$-linear in
    $A$ (in the sense that if $A_{1}$ and $A_{2}$ are two alternating
    $n\times n$-matrices such that this statement holds both for $A=A_{1}$ and
    for $A=A_{2}$, and if $\lambda_{1}$ and $\lambda_{2}$ are two elements of
    $\mathbb{K}$, then this statement also holds for
    $A=\lambda_{1}A_{1}+\lambda_{2}A_{2}$). Thus, we can WLOG assume that $A$
    has the form $E_{i,j}-E_{j,i}$ for a pair $\left( i,j\right) $ of integers
    satisfying $1\leq i<j\leq n$ (according to Lemma 3). Assume this, and
    consider this pair $\left( i,j\right) $.



    We have $\operatorname*{adj}S=\left( \left( -1\right) ^{x+y}\det\left( S_{\sim y,\sim x}\right) \right) _{1\leq x\leq n,\ 1\leq y\leq n}$
    and
    \begin{align}
    \operatorname*{adj}\left( S^{T}\right)
    & = \left( \left( -1\right) ^{x+y}\det\left( \underbrace{\left( S^{T}\right) _{\sim y,\sim x}}_{=\left( S_{\sim x,\sim y}\right) ^{T}}\right) \right) _{1\leq x\leq n,\ 1\leq y\leq n} \\
    & = \left( \left( -1\right) ^{x+y}\underbrace{\det\left( \left( S_{\sim x,\sim y}\right) ^{T}\right) }_{=\det\left( S_{\sim x,\sim y}\right) }\right) _{1\leq x\leq n,\ 1\leq y\leq n} \\
    & = \left( \left( -1\right) ^{x+y}\det\left( S_{\sim x,\sim y}\right) \right) _{1\leq x\leq n,\ 1\leq y\leq n} .
    \end{align}
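
    Both of these entrywise descriptions are easy to confirm with a computer algebra system; a minimal SymPy sketch, in which crossed_out is an ad hoc helper matching the $S_{\sim y,\sim x}$ notation, and both checks should print True:

        import sympy as sp

        def crossed_out(S, r, c):
            """S with row r and column c crossed out (1-based, as in the text)."""
            return S.minor_submatrix(r - 1, c - 1)

        n = 4
        S = sp.randMatrix(n, n, min=-9, max=9, seed=3)

        cofactor_form = sp.Matrix(n, n, lambda x, y:
                                  (-1) ** (x + y) * crossed_out(S, y + 1, x + 1).det())
        print(cofactor_form == S.adjugate())        # entrywise formula for adj S
        print((S.T).adjugate() == S.adjugate().T)   # adj(S^T) = (adj S)^T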

    Hence,
    \begin{align}
    & \underbrace{\left( \operatorname*{adj}S\right) }_{=\left( \left( -1\right) ^{x+y}\det\left( S_{\sim y,\sim x}\right) \right) _{1\leq x\leq n,\ 1\leq y\leq n}}\cdot\underbrace{A}_{=E_{i,j}-E_{j,i}}\cdot\underbrace{\left( \operatorname*{adj}\left( S^{T}\right) \right) }_{=\left( \left( -1\right) ^{x+y}\det\left( S_{\sim x,\sim y}\right) \right) _{1\leq x\leq n,\ 1\leq y\leq n}} \\
    & = \left( \left( -1\right) ^{x+y}\det\left( S_{\sim y,\sim x}\right) \right) _{1\leq x\leq n,\ 1\leq y\leq n}\cdot\left( E_{i,j}-E_{j,i}\right) \\
    & \qquad\qquad \cdot\left( \left( -1\right) ^{x+y}\det\left( S_{\sim x,\sim y}\right) \right) _{1\leq x\leq n,\ 1\leq y\leq n} \\
    & = \left( \left( -1\right) ^{x+i}\det\left( S_{\sim i,\sim x}\right) \cdot\left( -1\right) ^{j+y}\det\left( S_{\sim j,\sim y}\right) \right. \\
    & \qquad\qquad \left. -\left( -1\right) ^{x+j}\det\left( S_{\sim j,\sim x}\right) \cdot\left( -1\right) ^{i+y}\det\left( S_{\sim i,\sim y}\right) \right) _{1\leq x\leq n,\ 1\leq y\leq n} .
    \end{align}

    Hence, the $\left( u,v\right) $-th entry of the matrix
    $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right) $
    is
    \begin{align}
    & \left( -1\right) ^{u+i}\det\left( S_{\sim i,\sim u}\right) \cdot\left( -1\right) ^{j+v}\det\left( S_{\sim j,\sim v}\right) -\left( -1\right) ^{u+j}\det\left( S_{\sim j,\sim u}\right) \cdot\left( -1\right) ^{i+v}\det\left( S_{\sim i,\sim v}\right) \\
    & = \left( -1\right) ^{i+j+u+v}\left( \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right) \right) .
    \label{darij.eq.3}
    \tag{3}
    \end{align}

    We need to prove that this is divisible by $\det S$. If $u=v$, then this is
    obvious (because if $u=v$, then the right hand side of \eqref{darij.eq.3}
    is $0$). Hence, we WLOG assume that $u\neq v$. Thus, \eqref{darij.eq.3}
    shows that the $\left( u,v\right) $-th entry of the matrix
    $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right) $
    is
    \begin{align}
    & \left( -1\right) ^{i+j+u+v}\underbrace{\left( \det\left( S_{\sim i,\sim u}\right) \cdot\det\left( S_{\sim j,\sim v}\right) -\det\left( S_{\sim i,\sim v}\right) \cdot\det\left( S_{\sim j,\sim u}\right) \right) }_{\substack{=\left( -1\right) ^{\left[ i<j\right] +\left[ u<v\right] }\det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) \\ \text{(by Theorem 2)}}} \\
    & = \left( -1\right) ^{i+j+u+v}\left( -1\right) ^{\left[ i<j\right] +\left[ u<v\right] }\det S\cdot\det\left( S_{\left( \sim i,\sim j\right) ,\left( \sim u,\sim v\right) }\right) ,
    \end{align}

    which is clearly divisible by $\det S$. Theorem 1 is thus proven. $\blacksquare$
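
    The divisibility in Theorem 1 can also be tested directly over the integers: pick an integer matrix $S$ with $\det S\neq 0$ and an integer alternating matrix $A$, and check that every entry of $\left( \operatorname*{adj}S\right) \cdot A\cdot\left( \operatorname*{adj}\left( S^{T}\right) \right) $ is an integer multiple of $\det S$. A minimal SymPy sketch (an illustration, not part of the proof):

        import sympy as sp

        n = 5
        while True:                                   # pick a random integer S with det S != 0
            S = sp.randMatrix(n, n, min=-9, max=9)
            if S.det() != 0:
                break
        B = sp.randMatrix(n, n, min=-9, max=9)
        A = B - B.T                                   # alternating: A^T = -A, zero diagonal

        M = S.adjugate() * A * (S.T).adjugate()
        d = S.det()
        print(all(entry % d == 0 for entry in M))     # should print True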






    $endgroup$

    answered Aug 15 '15 at 15:48 (last edited Jan 10 at 2:14) – darij grinberg

    • $begingroup$
      Please see my answer. Does it agree with your answer?
      $endgroup$
      – user54031
      Aug 18 '15 at 8:19










    • $begingroup$
      @LBO: I fear I cannot tell, as I am not acquainted with the tensor notation you are using.
      $endgroup$
      – darij grinberg
      Aug 18 '15 at 11:52
















    $begingroup$

    I have managed to prove the claim in a pedestrian way. I'll just present the final result because the proof is quite involved and probably of interest only to me.

    In the following, I will use the (abstract) index notation and the Einstein summation convention.

    First of all, the determinant of a square $n \times n$ matrix $S$ is given by
    $$\det S = \frac{1}{n!} \epsilon_{a_1 \ldots a_n} \epsilon_{b_1 \ldots b_n} S_{b_1 a_1} \ldots S_{b_n a_n}.$$

    Second, the adjugate of $S$ is given by a similar expression:
    $$(\operatorname{adj} S)_{a_1 b_1} = \frac{1}{(n-1)!} \epsilon_{a_1 a_2 \ldots a_n} \epsilon_{b_1 b_2 \ldots b_n} S_{b_2 a_2} \ldots S_{b_n a_n}.$$

    Finally, we will need another tensor, of order $4$, defined as
    $$(\operatorname{adj}_2 S)_{a_1 a_2,b_1 b_2} = \frac{1}{(n-2)!} \epsilon_{a_1 a_2 a_3 \ldots a_n} \epsilon_{b_1 b_2 b_3 \ldots b_n} S_{b_3 a_3} \ldots S_{b_n a_n}.$$

    Then the following identity holds for $A$ skew-symmetric:
    $$((S^{-1}) A (S^{-1})^{T})_{ab} = \frac{1}{2} \frac{(\operatorname{adj}_2 S)_{ab,cd} A_{cd}}{\det S}.$$
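
    A brute-force numerical check of this identity for $n = 4$ (not part of the original derivation; it assumes NumPy, and perm_sign and adj2 are ad hoc helper names):

        import itertools
        import math

        import numpy as np

        def perm_sign(p):
            """Sign of a permutation of 0..n-1, computed from its cycle structure."""
            p = list(p)
            sign, visited = 1, [False] * len(p)
            for i in range(len(p)):
                if not visited[i]:
                    j, cycle_len = i, 0
                    while not visited[j]:
                        visited[j] = True
                        j = p[j]
                        cycle_len += 1
                    if cycle_len % 2 == 0:   # a cycle of even length is an odd permutation
                        sign = -sign
            return sign

        def adj2(S):
            """The order-4 tensor (adj_2 S)_{a1 a2, b1 b2} defined above, by brute force."""
            n = S.shape[0]
            T = np.zeros((n, n, n, n))
            for p in itertools.permutations(range(n)):       # plays the role of (a_1, ..., a_n)
                for q in itertools.permutations(range(n)):   # plays the role of (b_1, ..., b_n)
                    prod = 1.0
                    for k in range(2, n):
                        prod *= S[q[k], p[k]]
                    T[p[0], p[1], q[0], q[1]] += perm_sign(p) * perm_sign(q) * prod
            return T / math.factorial(n - 2)

        rng = np.random.default_rng(0)
        n = 4
        S = rng.standard_normal((n, n))
        B = rng.standard_normal((n, n))
        A = B - B.T                                          # skew-symmetric

        lhs = np.linalg.inv(S) @ A @ np.linalg.inv(S).T
        rhs = 0.5 * np.einsum('abcd,cd->ab', adj2(S), A) / np.linalg.det(S)
        print(np.allclose(lhs, rhs))   # should print True if the identity holds as stated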






    $endgroup$

    answered Aug 18 '15 at 8:17 – user54031

            $begingroup$

    Because $S\,\mbox{adj}(S)=\mbox{adj}(S)\,S=\mbox{det}(S)\,I$, the matrix $S$ is invertible iff $\mbox{det}(S)\ne 0$, and, in that case,
    $$
    S^{-1} = \frac{1}{\mbox{det}(S)}\mbox{adj}(S).
    $$
    Therefore,
    $$
    S^{-1}AS^{-1} = \frac{1}{\mbox{det}(S)^{2}}\mbox{adj}(S)\,A\,\mbox{adj}(S).
    $$
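
    Both relations are easy to confirm with a computer algebra system; for instance, a minimal SymPy sketch with a fixed $3\times 3$ example (an illustration only):

        import sympy as sp

        S = sp.Matrix([[2, 1, 0], [1, 3, 1], [0, 1, 4]])      # any invertible matrix works
        A = sp.Matrix([[0, 5, -2], [-5, 0, 7], [2, -7, 0]])   # skew-symmetric

        # S * adj(S) = adj(S) * S = det(S) * I
        assert S * S.adjugate() == S.det() * sp.eye(3)
        assert S.adjugate() * S == S.det() * sp.eye(3)

        # S^{-1} A S^{-1} = adj(S) A adj(S) / det(S)^2
        assert S.inv() * A * S.inv() == (S.adjugate() * A * S.adjugate()) / S.det() ** 2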






    $endgroup$

    answered Aug 13 '15 at 20:28 – DisintegratingByParts

    • $begingroup$
      Yes, thank you, I'm aware of that. I would like to prove that the numerator of your last expression contains det S.
      $endgroup$
      – user54031
      Aug 13 '15 at 20:32










    • $begingroup$
      I don't really understand what you mean by contains $\mbox{det}(S)$. There are determinants buried in the inverses on the left that match the determinants on the right. Is there something in particular bothering you about the expression?
      $endgroup$
      – DisintegratingByParts
      Aug 13 '15 at 20:44










            • $begingroup$
              Ok, to word it differently, I want to show that, whenever det S = 0, (adj S) A (adj S) also vanishes.
              $endgroup$
              – user54031
              Aug 13 '15 at 21:00
















            Aug 13 '15 at 20:44




            $begingroup$
            I don't really understand what you mean by contains $mbox{det}(S)$. There are determinants buried in the inverses on the left that match the determinants on the right. Is there something in particular bothering you about the expression?
            $endgroup$
            – DisintegratingByParts
            Aug 13 '15 at 20:44












            $begingroup$
            Ok, to word it differently, I want to show that, whenever det S = 0, (adj S) A (adj S) also vanishes.
            $endgroup$
            – user54031
            Aug 13 '15 at 21:00




            $begingroup$
            Ok, to word it differently, I want to show that, whenever det S = 0, (adj S) A (adj S) also vanishes.
            $endgroup$
            – user54031
            Aug 13 '15 at 21:00


















            draft saved

            draft discarded




















































            Thanks for contributing an answer to Mathematics Stack Exchange!


            • Please be sure to answer the question. Provide details and share your research!

            But avoid



            • Asking for help, clarification, or responding to other answers.

            • Making statements based on opinion; back them up with references or personal experience.


            Use MathJax to format equations. MathJax reference.


            To learn more, see our tips on writing great answers.




            draft saved


            draft discarded














            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f1396165%2fproducts-of-adjugate-matrices%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown





















































            Required, but never shown














            Required, but never shown












            Required, but never shown







            Required, but never shown

































            Required, but never shown














            Required, but never shown












            Required, but never shown







            Required, but never shown







            Popular posts from this blog

            Bressuire

            Cabo Verde

            Gyllenstierna