Computing an almost Vandermonde matrix
I have this determinant which looks like a Vandermonde matrix
$$D=\begin{vmatrix}
1 & a_1 & \cdots & a_1^{n-2} & a_1^n\\
1 & a_2 & \cdots & a_2^{n-2} & a_2^n\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
1 & a_n & \cdots & a_n^{n-2} & a_n^n
\end{vmatrix}$$
Using the software Maxima, I found that $D$ probably has the form
$$D= \prod_{i<j}(a_j-a_i)\,(a_1+a_2+\cdots+a_n),$$
but I couldn't prove it. Is my conjecture true, and how can I prove it?
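(For reference, a check of this kind in Maxima might look as follows for $n=4$; this session is a reconstruction, not the one actually used:)

/* n = 4 instance: the exponents are 0, 1, 2, 4 (exponent n-1 = 3 is skipped) */
M : matrix(
  [1, a1, a1^2, a1^4],
  [1, a2, a2^2, a2^4],
  [1, a3, a3^2, a3^4],
  [1, a4, a4^2, a4^4]
);
factor(determinant(M));
/* returns (a1+a2+a3+a4) times the six pairwise differences,
   up to the ordering and signs Maxima chooses for the factors */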
linear-algebra determinant
asked Dec 3 at 18:40 by As soon as possible; edited Dec 3 at 23:22 by darij grinberg
This is the particular case (for $\mu = \left(1\right)$) of the "Bi-Alternant Formula" proved in John R. Stembridge, A Concise Proof of the Littlewood-Richardson Rule, Elec. J. Comb. 9 N5. This is one of the most readable and useful papers ever written :)
– darij grinberg
Dec 3 at 19:46
Thanks for your comment. In fact I just want a simple proof :)
– As soon as possible
Dec 3 at 19:51
3 Answers
Hint:
Consider $$D' =
\begin{vmatrix}
1 & x & x^2 & \cdots & x^{n-2} & x^{n-1} & x^n\\
1 & a_1 & a_1^2 & \cdots & a_1^{n-2} & a_1^{n-1} & a_1^n\\
1 & a_2 & a_2^2 & \cdots & a_2^{n-2} & a_2^{n-1} & a_2^n\\
\vdots\\
1 & a_n & a_n^2 & \cdots & a_n^{n-2} & a_n^{n-1} & a_n^n
\end{vmatrix}$$
This is a Vandermonde determinant, so you already know how to compute it: look at the coefficient of $x^{n-1}$. On the other hand, expand the determinant along the first row.
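For instance, one way to finish: the full Vandermonde determinant factors as
$$D' = \prod_{1\le i<j\le n}(a_j-a_i)\prod_{i=1}^{n}(a_i-x).$$
Since $\prod_{i=1}^{n}(a_i-x)=(-1)^n\left(x^n-e_1x^{n-1}+\cdots\right)$ with $e_1=a_1+\cdots+a_n$, the coefficient of $x^{n-1}$ here is $(-1)^{n+1}e_1\prod_{i<j}(a_j-a_i)$. Expanding $D'$ along the first row instead, the entry $x^{n-1}$ sits in column $n$ of the $n+1$ columns, so the coefficient of $x^{n-1}$ is $(-1)^{1+n}D$. Comparing the two expressions gives $D=(a_1+\cdots+a_n)\prod_{i<j}(a_j-a_i)$.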
In a similar way, you can see the following generalization:
$$\begin{vmatrix}
1 & a_1 & \cdots & a_1^{k-1} & a_1^{k+1} & \cdots & a_1^n\\
1 & a_2 & \cdots & a_2^{k-1} & a_2^{k+1} & \cdots & a_2^n\\
\vdots\\
1 & a_n & \cdots & a_n^{k-1} & a_n^{k+1} & \cdots & a_n^n
\end{vmatrix} = \sigma_{n-k}(a_1,a_2,\ldots,a_n)\prod_{i<j}(a_j-a_i)$$
(Here $\sigma_k$ denotes the $k$-th elementary symmetric polynomial.)
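(A hypothetical Maxima session checking the $n=3$, $k=1$ case, i.e. with the column of first powers removed:)

/* exponents 0, 2, 3: expect sigma_2 = a1*a2 + a1*a3 + a2*a3 times the differences */
M : matrix([1, a1^2, a1^3], [1, a2^2, a2^3], [1, a3^2, a3^3]);
factor(determinant(M));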
answered Dec 3 at 20:32 by jjagmath (accepted)
Two proofs:
1. If I flip the order of the columns of your matrix, I obtain the matrix
\begin{equation}
\begin{pmatrix}
a_1^n & a_1^{n-2} & \cdots & a_1 & 1 \\
a_2^n & a_2^{n-2} & \cdots & a_2 & 1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_n^n & a_n^{n-2} & \cdots & a_n & 1
\end{pmatrix}
=
\left(
\begin{cases}
a_i^{n-j}, & \text{if } j > 1; \\
a_i^n, & \text{if } j = 1
\end{cases}
\right)_{1\leq i\leq n,\ 1\leq j\leq n} .
\end{equation}
This latter matrix has determinant
\begin{equation}
\left(a_1 + a_2 + \cdots + a_n\right)
\prod_{1\leq i<j\leq n} \left(a_i - a_j\right)
\end{equation}
according to Exercise 6.16 in my Notes on the combinatorial fundamentals of algebra, version of 7 November 2018 (where I use the notation $x_i$ instead of $a_i$). All that remains is to check that the sign that the determinant incurs when I flip the order of the columns is precisely the sign by which $\prod_{1\leq i<j\leq n} \left(a_i - a_j\right)$ differs from $\prod_{1\leq i<j\leq n} \left(a_j - a_i\right)$. But this is clear: both signs are the sign of the permutation $w_0$ of the set $\left\{1,2,\ldots,n\right\}$ that sends each $k$ to $n+1-k$. (This sign is explicitly given by $\left(-1\right)^{n\left(n-1\right)/2}$, but we do not need to know this.)
2. Your claim is the particular case (for $\mu = \left(1\right)$) of the "Bi-Alternant Formula" proved in John R. Stembridge, A Concise Proof of the Littlewood-Richardson Rule, Elec. J. Comb. 9 N5. Indeed, if I rename your $a_1, a_2, \ldots, a_n$ as $x_1, x_2, \ldots, x_n$ and flip the order of the columns of your matrix, then your $D$ becomes $a_{\left(n, n-2, n-3, \ldots, 1, 0\right)} = a_{\left(1\right) + \rho}$ in Stembridge's notation. Now the "Bi-Alternant Formula" yields $s_{\left(1\right)} = a_{\left(1\right) + \rho} / a_\rho$, where $a_\rho$ is the genuine Vandermonde determinant and $s_{\left(1\right)} = x_1 + x_2 + \cdots + x_n$. Your result follows, again after checking that the signs are right.
Don't be afraid of either of these two references. My notes are long but it's because I flesh out every triviality in full detail. Stembridge's paper looks advanced but is fully self-contained and highly readable; it is a great first introduction to algebraic combinatorics.
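(A hypothetical Maxima check of the flipped-column determinant for $n=3$, where the matrix has columns $a^3, a, 1$:)

M : matrix([a1^3, a1, 1], [a2^3, a2, 1], [a3^3, a3, 1]);
factor(determinant(M));
/* expect (a1+a2+a3)*(a1-a2)*(a1-a3)*(a2-a3): the sum times
   the product of the differences a_i - a_j over i < j, matching the display above */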
answered Dec 3 at 23:37 by darij grinberg
Your formula is correct. My proof is not pretty.
To compute $D$, subtract the last row of the matrix from each of the other rows. The $i$-th row will now have a factor of $a_i-a_n$, for all $i\le n-1$. Factor this out. The first column of the new matrix has a $1$ at its bottom, and the rest of its entries are zero, so $D$ is equal to the determinant of the upper right $(n-1)\times (n-1)$ matrix (times the removed factors). This smaller matrix looks like this: the $(i,j)$ entry is the sum of all monomials of the form $a_i^s a_n^t$ which satisfy $s+t=j-1$. The exception is the last column $j=n-1$, where instead the monomials satisfy $s+t=n-1$.
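(The factorization used at each such step is the geometric-sum identity
$$a_i^{p}-a_n^{p}=(a_i-a_n)\sum_{\substack{s+t=p-1\\ s,t\ge 0}}a_i^{s}a_n^{t},$$
applied columnwise with $p$ the exponent of the column.)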
Rinse and repeat, subtracting the last row of this new matrix from all others, and factoring out $(a_i-a_{n-1})$ from each row. The entries will now be a sum of monomials of the form $a_i^r a_{n-1}^s a_n^t$, where $r+s+t=j-1$, again with the exception of the last column. As you continue this process, you will see the pattern emerging, and can prove it by induction.
By the end, you will have all the removed factors $(a_i-a_j)$ times the determinant of a $1\times 1$ matrix. Its entry will be the sum of all monomials of the form $a_1^{m_1}a_2^{m_2}\cdots a_n^{m_n}$ which satisfy $m_1+\dots+m_n=1$. Of course, this is just $a_1+\dots+a_n$.
answered Dec 3 at 19:36 by Mike Earnest; edited Dec 3 at 23:54