I need an intuitive explanation of eigenvalues and eigenvectors












Been browsing around here for quite a while, and finally took the plunge and signed up.



I've started my mathematics major, and am taking a course in Linear Algebra. While I seem to be doing rather well in all the topics covered, such as vectors and matrix manipulation, I am having some trouble understanding the meaning of eigenvalues and eigenvectors.



For the life of me, I just cannot wrap my head around any textbook explanation (and I've tried three!), and Google so far just hasn't helped me at all. All I see are problem questions, but no real explanation (and even then most of those are hard to grasp).



Could someone be so kind as to provide a layman's explanation of these terms and walk me through an example? Nothing too hard, seeing as I'm a first-year student.



Thank you, and I look forward to spending even more time on here!






















  • Possibly related: math.stackexchange.com/questions/36815/… – René B. Christensen, Jun 5 '16 at 7:47

  • What exactly is your issue? Is it the definition, the motivation, or the intuition? – Funktorality, Jun 5 '16 at 8:00

  • Welcome to MSE. I'm sorry I can't offer an interpretation myself, given my background and my poor English, but there are nice examples from physics, e.g. coupled systems such as molecules or spring mechanisms. I believe a Google search will turn up an example too. – user243301, Jun 5 '16 at 8:02

  • Also possibly related (a duplicate?): math.stackexchange.com/questions/243533/… – Mark S., Jun 5 '16 at 16:34

  • A very nice visual explanation is at setosa.io/ev/eigenvectors-and-eigenvalues – Mark S., Jun 5 '16 at 16:34
















Tags: linear-algebra, eigenvalues-eigenvectors, intuition






asked Jun 5 '16 at 7:10 by QuantumLoopy
edited Jun 5 '16 at 17:22 by Martin Sleziak






















3 Answers


















19 votes

Here is some intuition motivated by applications. In many applications, we have a system that takes some input and produces an output. A special case of this situation is when the inputs and outputs are vectors (or signals) and the system effects a linear transformation (which can be represented by some matrix $A$).



So, if the input vector (or input signal) is $x$, then the output is $Ax$. Usually, the direction of the output $Ax$ is different from the direction of $x$ (you can try out examples by picking arbitrary $2 \times 2$ matrices $A$). To understand the system better, an important question to answer is the following: what are the input vectors which do not change direction when they pass through the system? It is ok if the magnitude changes, but the direction shouldn't. In other words, what are the $x$'s for which $Ax$ is just a scalar multiple of $x$? These $x$'s are precisely the eigenvectors.
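For concreteness, here is a minimal NumPy sketch of this direction test (my own illustration, not part of the original answer; the matrix is an arbitrary example):

    import numpy as np

    # An arbitrary symmetric 2x2 example.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    v = np.array([1.0, 1.0])   # an eigenvector of A (eigenvalue 3)
    w = np.array([1.0, 0.0])   # not an eigenvector

    print(A @ v)   # [3. 3.] -- same direction as v, scaled by 3
    print(A @ w)   # [2. 1.] -- the direction has changed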



If the system (or $n \times n$ matrix $A$) has a set $\{b_1,\ldots,b_n\}$ of $n$ eigenvectors that form a basis for the $n$-dimensional space, then we are quite lucky, because we can represent any given input $x$ as a linear combination $x=\sum c_i b_i$ of the eigenvectors. Computing $Ax$ is then simple: because $A$ takes $b_i$ to $\lambda_i b_i$, by linearity $A$ takes $x=\sum c_i b_i$ to $Ax=\sum c_i \lambda_i b_i$. Thus, for all practical purposes, we have simplified our system (and matrix) to one which is a diagonal matrix because we chose our basis to be the eigenvectors. We were able to represent all inputs as just linear combinations of the eigenvectors, and the matrix $A$ acts on eigenvectors in a simple way (just scalar multiplication). As you see, diagonal matrices are preferred because they simplify things considerably and we understand them better.
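Here is the same idea as a computation, a minimal sketch (my own addition) that assumes a full eigenbasis exists: once $x$ is expressed in the eigenbasis, applying $A$ is just coordinate-wise scaling by the eigenvalues.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    lam, B = np.linalg.eig(A)     # eigenvalues lam_i; columns of B are eigenvectors b_i

    x = np.array([3.0, -1.0])     # an arbitrary input
    c = np.linalg.solve(B, x)     # coordinates of x in the eigenbasis: x = B @ c

    y = B @ (lam * c)             # scale each coordinate by its eigenvalue
    print(np.allclose(y, A @ x))  # True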






answered Jun 5 '16 at 8:24 by svsring









  • Wow. Very neat. Can you explain why we might care about which inputs and outputs are in the same direction in practical situations? – user230452, Jun 5 '16 at 8:44

  • For example, if we want the output signal (or vector) to be as large in magnitude as possible, we can restrict the available input energy to be along the direction of the eigenvector corresponding to the largest (in absolute value) eigenvalue. That way, even if there is background noise, the output is clear. You don't want to waste any energy along directions of small eigenvalue because those directions get killed. Eigen-decomposition is done often in wireless communication systems. – svsring, Jun 5 '16 at 8:47

  • In the signals and systems area of electrical engineering, a particular kind of system that arises in practice, called "linear, time-invariant systems", is studied. Their eigenfunctions (i.e. the inputs which are invariant) are the complex exponentials. So the system is described by how it affects each complex exponential, and inputs and outputs are represented with respect to this basis. – svsring, Jun 5 '16 at 8:51

  • This definitely helped clarify the intuitive aspect of it, especially regarding inputs and outputs! – QuantumLoopy, Jun 5 '16 at 8:54

  • One of the best intuitive explanations I have seen to date! – naiveDeveloper, Oct 14 '17 at 7:00



















8 votes

Let me give a geometric explanation in 2D. The first fact that is almost always glossed over in class is that a "linear operator" or matrix $T$ acting on a space $V \to V$ looks like a combination of rotations, flips, and dilations. To picture what I mean, think of a checker-board patterned cloth. If I apply a transform to the space, it stretches (or shrinks) the space by pulling the cloth in different directions, and possibly rotates and flips the cloth too. My point, as I will show, is that the directions in which the space (the cloth) gets pulled are the eigenvectors. Let's start with pictures:



Start by applying $T = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$ to the standard grid I described above in 2D. The image of this transformation (showing only the 20 by 20 grid's image) is shown below. The bold line in the second image indicates the eigenvector of $T.$ Notice that it has only 1 eigenvector (why?). The transform $T$ "shears" the space, and the only unit vector that doesn't change direction under $T$ is $e_2 = (0,1)^T.$ Draw any other line on this cloth, and every time you apply $T$ it will become more vertical (more aligned with the eigenvector).



[image: the 20 by 20 grid sheared by $T$, with the eigenvector direction shown in bold]
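A small numerical illustration of that last remark (my own sketch, not part of the original answer): repeatedly applying the shear drives any starting direction toward the eigenvector $(0,1)^T$.

    import numpy as np

    T = np.array([[1.0, 0.0],
                  [1.0, 1.0]])   # the shear from above

    v = np.array([1.0, 0.0])     # any direction other than the eigenvector
    for _ in range(5):
        v = T @ v
        print(v / np.linalg.norm(v))   # the unit direction creeps toward (0, 1)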





Let's look at an example with two eigenvectors:



Here, let $T = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix},$ another common matrix you'll come across. Now we see this has 2 eigenvectors. First, I must apologize about the scales on these images: the eigenvectors are perpendicular (you'll soon learn why this must be so). Immediately we can see what the action of $T$ is on the standard 20 by 20 grid. Physically, imagine a cloth held fixed at all 4 corners, where 2 of the opposite corners get stretched in the direction of the bold line. The bold lines are the vectors that do not change direction as $T$ is applied; one could say they are the characteristic directions of $T$. As you apply $T$ over and over again, any other vector in the space tends toward the direction of an eigenvector (the one with the larger eigenvalue).



[image: the grid under $T$, stretched along the two perpendicular bold eigenvector directions]
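If you want to check these claims numerically, here is a minimal sketch (my addition) that computes the eigenvectors of this $T$ and verifies they are perpendicular:

    import numpy as np

    T = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])

    lam, V = np.linalg.eig(T)
    print(lam)                 # eigenvalues 3 and 1 (order may vary)
    print(V)                   # columns proportional to (1, -1)^T and (1, 1)^T
    print(V[:, 0] @ V[:, 1])   # 0.0 -- the two eigenvectors are perpendicular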



And last, I decided not to leave a picture, but consider $T = \begin{pmatrix} \cos{x} & -\sin{x} \\ \sin{x} & \cos{x} \end{pmatrix}.$ This is a rotation of the space about the origin, and (for $x$ not a multiple of $\pi$) it has no real eigenvectors. Could you imagine why a pure rotation would have no real eigenvectors? Hopefully it's now clear: because every vector changes direction upon applying $T,$ no real eigenvectors exist.
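You can also see this numerically (again my own sketch): asking for the eigenvalues of a rotation returns a complex-conjugate pair, so there is no real eigendirection.

    import numpy as np

    x = np.pi / 4                     # rotation by 45 degrees
    T = np.array([[np.cos(x), -np.sin(x)],
                  [np.sin(x),  np.cos(x)]])

    lam, _ = np.linalg.eig(T)
    print(lam)   # [0.7071+0.7071j, 0.7071-0.7071j] -- no real eigenvalues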



These concepts can easily be generalized to higher dimensions. My suggestion for a first-year student would be to look back at these examples as you learn about geometric multiplicity, symmetric matrices, and orthogonal (unitary) matrices; perhaps they will give you some physical insight into those important classes of operators too.






answered Jun 5 '16 at 10:14 by Merkh









  • Wow, thank you for the detailed explanation, Merkh! I must admit geometric knowledge is a weak point of mine, but nevertheless it was insightful seeing that side of it, and definitely something not shown in class. – QuantumLoopy, Jun 5 '16 at 10:24

  • Interesting notes for those who follow: for a symmetric 2 by 2 (real) matrix with equal diagonal entries, like the one shown, the image always looks like what is shown, because the eigenvectors are always proportional to $(1,1)^T$ and $(1,-1)^T,$ although the scaling of the image does not do this point any justice. Likewise, the 3rd example $T$ is built from a rotation; a skew-symmetric 2 by 2 matrix always acts as a (scaled) rotation of the plane (and always has purely imaginary eigenvalues). – Merkh, Jun 5 '16 at 10:25

  • MATLAB has this nice demo called eigshow that illustrates what you have shown here; there is also an analog of this visualization for SVD. – J. M. is not a mathematician, Jun 5 '16 at 11:16



















3 votes

Well, I will try to give you a simple example. Eigenvalue equations appear in many different areas; since you are learning linear algebra, I will give you an example from there. Suppose you are working in $\Bbb{R}^3$ and you are given a linear transformation $T:\Bbb{R}^3 \to \Bbb{R}^3$. Then one says that $v \in \Bbb{R}^3$ is an eigenvector of $T$ with eigenvalue $\lambda \in \Bbb{R}$ if it satisfies the following equation:



$$T(v)=\lambda v \quad (1)$$



So one question you can ask is the following: given a linear transformation $T:\Bbb{R}^3 \to \Bbb{R}^3$, what are its eigenvalues and eigenvectors?



Well, to answer this question: by definition, the eigenvectors are the $v \in \Bbb{R}^3$ such that $T(v)=\lambda v$. Since we are working in $\Bbb{R}^3$, we can think of $T$ as a matrix $A \in \Bbb{R}^{3 \times 3}$, and therefore equation $(1)$ may be written in matrix form as



$$Av = \lambda v,$$



which is the same as saying:



$$(A-\lambda \mathrm{Id})v=0 \quad (2)$$



So basically, finding the eigenvectors amounts to solving this linear system of equations (in this case, $3$ equations). Note that $(2)$ always has the trivial solution $v=0$; a nonzero solution exists exactly when $A-\lambda \mathrm{Id}$ is singular.



An example is given in this link: http://www.sosmath.com/matrix/eigen2/eigen2.html



I could copy it here, but I think it's easier if you just check it out. You will see that they compute $\det(A-\lambda \mathrm{Id})=0$, since that is the way to find the values of $\lambda$ for which the system $(2)$ has nonzero solutions.
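As a self-contained sketch of the same procedure (my own addition, using an arbitrary $2 \times 2$ example rather than the one from the link): the eigenvalues are the roots of $\det(A-\lambda \mathrm{Id})=0$, and a library routine confirms them along with the eigenvectors.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # For a 2x2 matrix, det(A - lam*Id) = lam^2 - trace(A)*lam + det(A).
    coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
    print(np.roots(coeffs))    # eigenvalues: 5 and 2

    lam, V = np.linalg.eig(A)  # cross-check; columns of V are the eigenvectors
    print(lam)                 # 5 and 2 (order may vary)
    print(V)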



















  • I would assume that $A$ is the matrix relating to the equation you provided. Thank you for the link; I don't think I've seen that one yet, so I'll definitely take a look! – QuantumLoopy, Jun 5 '16 at 8:57

  • Actually, that equation (2) is not easily solvable as it stands: we don't know $\lambda$. We need to try every $\lambda$ to find a match. – user4951, Jun 6 '16 at 2:36












Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f1814197%2fi-need-an-intuitive-explanation-of-eigenvalues-and-eigenvectors%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes









19












$begingroup$

Here is some intuition motivated by applications. In many applications, we have a system that takes some input and produces an output. A special case of this situation is when the inputs and outputs are vectors (or signals) and the system effects a linear transformation (which can be represented by some matrix $A$).



So, if the input vector (or input signal) is $x$,then the output is $Ax$. Usually, the direction of the output $Ax$ is different from the direction of $x$ (you can try out examples by picking arbitrary $2 times 2$ matrices $A$). To understand the system better, an important question to answer is the following: what are the input vectors which do not change direction when they pass through the system? It is ok if the magnitude changes, but the direction shouldn't. In other words, what are the $x$'s for which $Ax$ is just a scalar multiple of $x$? These $x$'s are precisely the eigenvectors.



If the system (or $n times n$ matrix $A$) has a set ${b_1,ldots,b_n}$ of $n$ eigenvectors that form a basis for the $n$-dimensional space, then we are quite lucky, because we can represent any given input $x$ as a linear combination $x=sum c_i b_i$ of the eigenvectors. Computing $Ax$ is then simple: because $A$ takes $b_i$ to $lambda_i b_i$, by linearity $A$ takes $x=sum c_i b_i$ to $Ax=sum c_i lambda_i b_i$. Thus, for all practical purposes, we have simplified our system (and matrix) to one which is a diagonal matrix because we chose our basis to be the eigenvectors. We were able to represent all inputs as just linear combinations of the eigenvectors, and the matrix $A$ acts on eigenvectors in a simple way (just scalar multiplication). As you see, diagonal matrices are preferred because they simplify things considerably and we understand them better.






share|cite|improve this answer









$endgroup$









  • 1




    $begingroup$
    Wow. Very neat. Can you explain why we might care about winch inputs and outputs are in the same direction in practical situations?
    $endgroup$
    – user230452
    Jun 5 '16 at 8:44






  • 1




    $begingroup$
    For example, if we want the output signal (or vector) to be as large in magnitude as possible, we can restrict the available input energy to be along the direction of the eigenvector corresponding to the largest (in absolute value) eigenvalue. That way, even if there is background noise, the output is clear. You don't want to waste any energy along directions of small eigenvalue because those directions get killed. Eigen-decomposition is done often in wireless communication systems.
    $endgroup$
    – svsring
    Jun 5 '16 at 8:47








  • 1




    $begingroup$
    In the signals and systems area of electrical engineering, a particular kind of system that arises in practice called "linear, time-invariant systems" is studied. Their eigenfunctions (i,e the inputs which are invariant) are the complex exponentials. So, the system is described by how it affects each complex exponential, and inputs and outputs are represented with respect to this basis.
    $endgroup$
    – svsring
    Jun 5 '16 at 8:51












  • $begingroup$
    This definitely helped clarify the intuitive aspect of it, especially regarding inputs and outputs!
    $endgroup$
    – QuantumLoopy
    Jun 5 '16 at 8:54










  • $begingroup$
    one of the best intuitive explanations I have seen till date!
    $endgroup$
    – naiveDeveloper
    Oct 14 '17 at 7:00
















19












$begingroup$

Here is some intuition motivated by applications. In many applications, we have a system that takes some input and produces an output. A special case of this situation is when the inputs and outputs are vectors (or signals) and the system effects a linear transformation (which can be represented by some matrix $A$).



So, if the input vector (or input signal) is $x$,then the output is $Ax$. Usually, the direction of the output $Ax$ is different from the direction of $x$ (you can try out examples by picking arbitrary $2 times 2$ matrices $A$). To understand the system better, an important question to answer is the following: what are the input vectors which do not change direction when they pass through the system? It is ok if the magnitude changes, but the direction shouldn't. In other words, what are the $x$'s for which $Ax$ is just a scalar multiple of $x$? These $x$'s are precisely the eigenvectors.



If the system (or $n times n$ matrix $A$) has a set ${b_1,ldots,b_n}$ of $n$ eigenvectors that form a basis for the $n$-dimensional space, then we are quite lucky, because we can represent any given input $x$ as a linear combination $x=sum c_i b_i$ of the eigenvectors. Computing $Ax$ is then simple: because $A$ takes $b_i$ to $lambda_i b_i$, by linearity $A$ takes $x=sum c_i b_i$ to $Ax=sum c_i lambda_i b_i$. Thus, for all practical purposes, we have simplified our system (and matrix) to one which is a diagonal matrix because we chose our basis to be the eigenvectors. We were able to represent all inputs as just linear combinations of the eigenvectors, and the matrix $A$ acts on eigenvectors in a simple way (just scalar multiplication). As you see, diagonal matrices are preferred because they simplify things considerably and we understand them better.






share|cite|improve this answer









$endgroup$









  • 1




    $begingroup$
    Wow. Very neat. Can you explain why we might care about winch inputs and outputs are in the same direction in practical situations?
    $endgroup$
    – user230452
    Jun 5 '16 at 8:44






  • 1




    $begingroup$
    For example, if we want the output signal (or vector) to be as large in magnitude as possible, we can restrict the available input energy to be along the direction of the eigenvector corresponding to the largest (in absolute value) eigenvalue. That way, even if there is background noise, the output is clear. You don't want to waste any energy along directions of small eigenvalue because those directions get killed. Eigen-decomposition is done often in wireless communication systems.
    $endgroup$
    – svsring
    Jun 5 '16 at 8:47








  • 1




    $begingroup$
    In the signals and systems area of electrical engineering, a particular kind of system that arises in practice called "linear, time-invariant systems" is studied. Their eigenfunctions (i,e the inputs which are invariant) are the complex exponentials. So, the system is described by how it affects each complex exponential, and inputs and outputs are represented with respect to this basis.
    $endgroup$
    – svsring
    Jun 5 '16 at 8:51












  • $begingroup$
    This definitely helped clarify the intuitive aspect of it, especially regarding inputs and outputs!
    $endgroup$
    – QuantumLoopy
    Jun 5 '16 at 8:54










  • $begingroup$
    one of the best intuitive explanations I have seen till date!
    $endgroup$
    – naiveDeveloper
    Oct 14 '17 at 7:00














19












19








19





$begingroup$

Here is some intuition motivated by applications. In many applications, we have a system that takes some input and produces an output. A special case of this situation is when the inputs and outputs are vectors (or signals) and the system effects a linear transformation (which can be represented by some matrix $A$).



So, if the input vector (or input signal) is $x$,then the output is $Ax$. Usually, the direction of the output $Ax$ is different from the direction of $x$ (you can try out examples by picking arbitrary $2 times 2$ matrices $A$). To understand the system better, an important question to answer is the following: what are the input vectors which do not change direction when they pass through the system? It is ok if the magnitude changes, but the direction shouldn't. In other words, what are the $x$'s for which $Ax$ is just a scalar multiple of $x$? These $x$'s are precisely the eigenvectors.



If the system (or $n times n$ matrix $A$) has a set ${b_1,ldots,b_n}$ of $n$ eigenvectors that form a basis for the $n$-dimensional space, then we are quite lucky, because we can represent any given input $x$ as a linear combination $x=sum c_i b_i$ of the eigenvectors. Computing $Ax$ is then simple: because $A$ takes $b_i$ to $lambda_i b_i$, by linearity $A$ takes $x=sum c_i b_i$ to $Ax=sum c_i lambda_i b_i$. Thus, for all practical purposes, we have simplified our system (and matrix) to one which is a diagonal matrix because we chose our basis to be the eigenvectors. We were able to represent all inputs as just linear combinations of the eigenvectors, and the matrix $A$ acts on eigenvectors in a simple way (just scalar multiplication). As you see, diagonal matrices are preferred because they simplify things considerably and we understand them better.






share|cite|improve this answer









$endgroup$



Here is some intuition motivated by applications. In many applications, we have a system that takes some input and produces an output. A special case of this situation is when the inputs and outputs are vectors (or signals) and the system effects a linear transformation (which can be represented by some matrix $A$).



So, if the input vector (or input signal) is $x$,then the output is $Ax$. Usually, the direction of the output $Ax$ is different from the direction of $x$ (you can try out examples by picking arbitrary $2 times 2$ matrices $A$). To understand the system better, an important question to answer is the following: what are the input vectors which do not change direction when they pass through the system? It is ok if the magnitude changes, but the direction shouldn't. In other words, what are the $x$'s for which $Ax$ is just a scalar multiple of $x$? These $x$'s are precisely the eigenvectors.



If the system (or $n times n$ matrix $A$) has a set ${b_1,ldots,b_n}$ of $n$ eigenvectors that form a basis for the $n$-dimensional space, then we are quite lucky, because we can represent any given input $x$ as a linear combination $x=sum c_i b_i$ of the eigenvectors. Computing $Ax$ is then simple: because $A$ takes $b_i$ to $lambda_i b_i$, by linearity $A$ takes $x=sum c_i b_i$ to $Ax=sum c_i lambda_i b_i$. Thus, for all practical purposes, we have simplified our system (and matrix) to one which is a diagonal matrix because we chose our basis to be the eigenvectors. We were able to represent all inputs as just linear combinations of the eigenvectors, and the matrix $A$ acts on eigenvectors in a simple way (just scalar multiplication). As you see, diagonal matrices are preferred because they simplify things considerably and we understand them better.







share|cite|improve this answer












share|cite|improve this answer



share|cite|improve this answer










answered Jun 5 '16 at 8:24









svsringsvsring

98427




98427








  • 1




    $begingroup$
    Wow. Very neat. Can you explain why we might care about winch inputs and outputs are in the same direction in practical situations?
    $endgroup$
    – user230452
    Jun 5 '16 at 8:44






  • 1




    $begingroup$
    For example, if we want the output signal (or vector) to be as large in magnitude as possible, we can restrict the available input energy to be along the direction of the eigenvector corresponding to the largest (in absolute value) eigenvalue. That way, even if there is background noise, the output is clear. You don't want to waste any energy along directions of small eigenvalue because those directions get killed. Eigen-decomposition is done often in wireless communication systems.
    $endgroup$
    – svsring
    Jun 5 '16 at 8:47








  • 1




    $begingroup$
    In the signals and systems area of electrical engineering, a particular kind of system that arises in practice called "linear, time-invariant systems" is studied. Their eigenfunctions (i,e the inputs which are invariant) are the complex exponentials. So, the system is described by how it affects each complex exponential, and inputs and outputs are represented with respect to this basis.
    $endgroup$
    – svsring
    Jun 5 '16 at 8:51












  • $begingroup$
    This definitely helped clarify the intuitive aspect of it, especially regarding inputs and outputs!
    $endgroup$
    – QuantumLoopy
    Jun 5 '16 at 8:54










  • $begingroup$
    one of the best intuitive explanations I have seen till date!
    $endgroup$
    – naiveDeveloper
    Oct 14 '17 at 7:00














  • 1




    $begingroup$
    Wow. Very neat. Can you explain why we might care about winch inputs and outputs are in the same direction in practical situations?
    $endgroup$
    – user230452
    Jun 5 '16 at 8:44






  • 1




    $begingroup$
    For example, if we want the output signal (or vector) to be as large in magnitude as possible, we can restrict the available input energy to be along the direction of the eigenvector corresponding to the largest (in absolute value) eigenvalue. That way, even if there is background noise, the output is clear. You don't want to waste any energy along directions of small eigenvalue because those directions get killed. Eigen-decomposition is done often in wireless communication systems.
    $endgroup$
    – svsring
    Jun 5 '16 at 8:47








  • 1




    $begingroup$
    In the signals and systems area of electrical engineering, a particular kind of system that arises in practice called "linear, time-invariant systems" is studied. Their eigenfunctions (i,e the inputs which are invariant) are the complex exponentials. So, the system is described by how it affects each complex exponential, and inputs and outputs are represented with respect to this basis.
    $endgroup$
    – svsring
    Jun 5 '16 at 8:51












  • $begingroup$
    This definitely helped clarify the intuitive aspect of it, especially regarding inputs and outputs!
    $endgroup$
    – QuantumLoopy
    Jun 5 '16 at 8:54










  • $begingroup$
    one of the best intuitive explanations I have seen till date!
    $endgroup$
    – naiveDeveloper
    Oct 14 '17 at 7:00








1




1




$begingroup$
Wow. Very neat. Can you explain why we might care about winch inputs and outputs are in the same direction in practical situations?
$endgroup$
– user230452
Jun 5 '16 at 8:44




$begingroup$
Wow. Very neat. Can you explain why we might care about winch inputs and outputs are in the same direction in practical situations?
$endgroup$
– user230452
Jun 5 '16 at 8:44




1




1




$begingroup$
For example, if we want the output signal (or vector) to be as large in magnitude as possible, we can restrict the available input energy to be along the direction of the eigenvector corresponding to the largest (in absolute value) eigenvalue. That way, even if there is background noise, the output is clear. You don't want to waste any energy along directions of small eigenvalue because those directions get killed. Eigen-decomposition is done often in wireless communication systems.
$endgroup$
– svsring
Jun 5 '16 at 8:47






$begingroup$
For example, if we want the output signal (or vector) to be as large in magnitude as possible, we can restrict the available input energy to be along the direction of the eigenvector corresponding to the largest (in absolute value) eigenvalue. That way, even if there is background noise, the output is clear. You don't want to waste any energy along directions of small eigenvalue because those directions get killed. Eigen-decomposition is done often in wireless communication systems.
$endgroup$
– svsring
Jun 5 '16 at 8:47






1




1




$begingroup$
In the signals and systems area of electrical engineering, a particular kind of system that arises in practice called "linear, time-invariant systems" is studied. Their eigenfunctions (i,e the inputs which are invariant) are the complex exponentials. So, the system is described by how it affects each complex exponential, and inputs and outputs are represented with respect to this basis.
$endgroup$
– svsring
Jun 5 '16 at 8:51






$begingroup$
In the signals and systems area of electrical engineering, a particular kind of system that arises in practice called "linear, time-invariant systems" is studied. Their eigenfunctions (i,e the inputs which are invariant) are the complex exponentials. So, the system is described by how it affects each complex exponential, and inputs and outputs are represented with respect to this basis.
$endgroup$
– svsring
Jun 5 '16 at 8:51














$begingroup$
This definitely helped clarify the intuitive aspect of it, especially regarding inputs and outputs!
$endgroup$
– QuantumLoopy
Jun 5 '16 at 8:54




$begingroup$
This definitely helped clarify the intuitive aspect of it, especially regarding inputs and outputs!
$endgroup$
– QuantumLoopy
Jun 5 '16 at 8:54












$begingroup$
one of the best intuitive explanations I have seen till date!
$endgroup$
– naiveDeveloper
Oct 14 '17 at 7:00




$begingroup$
one of the best intuitive explanations I have seen till date!
$endgroup$
– naiveDeveloper
Oct 14 '17 at 7:00











8












$begingroup$

Let me give a geometric explanation in 2D. The first fact that is almost always glossed over in class is that a "linear operator" or a matrix $T$ acting on a space $V to V$ looks like a combination of rotations, flips, and dilations. To image what I mean, think of a checker-board patterned cloth. If I apply a transform to the space, it stretches (or shrinks) the space by pulling the cloth in different directions, and maybe possibly rotates and flips the cloth too. My point, as I will show, is that the direction in which the space (the cloth) gets pulled is the eigenvectors. Lets start with pictures:



Start by applying $T = begin{pmatrix} 1 & 0 \ 1 & 1 end{pmatrix}$ to the standard grid I discussed above in 2D. The image of this transformation (only showing the 20 by 20 grid's image) is shown below. The bold lines in the second image indicate the eigenvectors of $T.$ Notice that it only has 1 eigenvector (why?). The transform $T$ is such that it "shears" the space, and the only unit vector that doesn't change direction as a result of $T$ is $e_2 = (0,1)^T.$ Draw any other line on this cloth, and every time you apply $T$ it will become more vertical (more aligned with the eigenvector).



enter image description here





Lets look at an example with two eigenvectors:



Here, let $T = begin{pmatrix} 2 & -1 \ -1 & 2 end{pmatrix},$ another common matrix you'll come across. Now we see this has 2 eigenvectors. First, I must apologize about the scales on these images, the eigenvectors are perpendicular (you'll soon learn why must this be so.) Immediately we can see what the action of $T$ is on the standard 20 by 20 grid. Physically, imagine a cloth being held fixed at all 4 corners, then 2 of the opposite corners get stretched in the direction of the bold line. The bold lines are the vectors that do not change direction as $T$ is applied, or one could say they are the characteristic directions of $T$. As you apply $T$ over and over again, any other vector in the space tends toward the direction of an eigenvector.



enter image description here



And last, I decided not to leave a picture, but consider $T = begin{pmatrix} cos{x} & -sin{x} \ sin{x} & cos{x} end{pmatrix}.$ This is a rotation of the space about the origin, and has no (real) eigenvectors. Could you imagine why a pure rotation would have no real eigenvectors? Hopefully its now clear that because every vector changes direction upon applying $T,$ no real eigenvectors exist.



These concepts can easily be generalized to higher dimensions. My suggestion as a first year student would be to look back at these examples as you learn about geometric multiplicity, symmetric matrices, and orthogonal (unitary) matrices, perhaps this example will give you some physical insight on those important classes of operators too.






share|cite|improve this answer









$endgroup$









  • 1




    $begingroup$
    Wow thank you for the detailed explanation Merkh! I must admit my geometric knowledge is a weak point of mine, but nevertheless it was insightful seeing that side of it, and definitely something not shown in class.
    $endgroup$
    – QuantumLoopy
    Jun 5 '16 at 10:24










  • $begingroup$
    Interesting notes for those who follow: For a symmetric 2 by 2 (real) matrix, the image always looks like what is shown, because the eigenvectors are always proportional to $(1,1)^T$ and $(1,-1)^T,$ although the scaling of the image do not do any justice for this point. Likewise, the 3rd example $T$ is skew-symmetric. Any skew-sym 2 by 2 is always at least a rotation in space (and always has imaginary eigenvalues).
    $endgroup$
    – Merkh
    Jun 5 '16 at 10:25










  • $begingroup$
    MATLAB has this nice demo called eigshow that illustrates what you have shown here; there is also an analog of this visualization for SVD.
    $endgroup$
    – J. M. is not a mathematician
    Jun 5 '16 at 11:16
















8












$begingroup$

Let me give a geometric explanation in 2D. The first fact that is almost always glossed over in class is that a "linear operator" or a matrix $T$ acting on a space $V to V$ looks like a combination of rotations, flips, and dilations. To image what I mean, think of a checker-board patterned cloth. If I apply a transform to the space, it stretches (or shrinks) the space by pulling the cloth in different directions, and maybe possibly rotates and flips the cloth too. My point, as I will show, is that the direction in which the space (the cloth) gets pulled is the eigenvectors. Lets start with pictures:



Start by applying $T = begin{pmatrix} 1 & 0 \ 1 & 1 end{pmatrix}$ to the standard grid I discussed above in 2D. The image of this transformation (only showing the 20 by 20 grid's image) is shown below. The bold lines in the second image indicate the eigenvectors of $T.$ Notice that it only has 1 eigenvector (why?). The transform $T$ is such that it "shears" the space, and the only unit vector that doesn't change direction as a result of $T$ is $e_2 = (0,1)^T.$ Draw any other line on this cloth, and every time you apply $T$ it will become more vertical (more aligned with the eigenvector).



enter image description here





Lets look at an example with two eigenvectors:



Here, let $T = begin{pmatrix} 2 & -1 \ -1 & 2 end{pmatrix},$ another common matrix you'll come across. Now we see this has 2 eigenvectors. First, I must apologize about the scales on these images, the eigenvectors are perpendicular (you'll soon learn why must this be so.) Immediately we can see what the action of $T$ is on the standard 20 by 20 grid. Physically, imagine a cloth being held fixed at all 4 corners, then 2 of the opposite corners get stretched in the direction of the bold line. The bold lines are the vectors that do not change direction as $T$ is applied, or one could say they are the characteristic directions of $T$. As you apply $T$ over and over again, any other vector in the space tends toward the direction of an eigenvector.



enter image description here



And last, I decided not to leave a picture, but consider $T = begin{pmatrix} cos{x} & -sin{x} \ sin{x} & cos{x} end{pmatrix}.$ This is a rotation of the space about the origin, and has no (real) eigenvectors. Could you imagine why a pure rotation would have no real eigenvectors? Hopefully its now clear that because every vector changes direction upon applying $T,$ no real eigenvectors exist.



These concepts can easily be generalized to higher dimensions. My suggestion as a first year student would be to look back at these examples as you learn about geometric multiplicity, symmetric matrices, and orthogonal (unitary) matrices, perhaps this example will give you some physical insight on those important classes of operators too.






share|cite|improve this answer









$endgroup$









  • 1




    $begingroup$
    Wow thank you for the detailed explanation Merkh! I must admit my geometric knowledge is a weak point of mine, but nevertheless it was insightful seeing that side of it, and definitely something not shown in class.
    $endgroup$
    – QuantumLoopy
    Jun 5 '16 at 10:24










  • $begingroup$
    Interesting notes for those who follow: For a symmetric 2 by 2 (real) matrix, the image always looks like what is shown, because the eigenvectors are always proportional to $(1,1)^T$ and $(1,-1)^T,$ although the scaling of the image do not do any justice for this point. Likewise, the 3rd example $T$ is skew-symmetric. Any skew-sym 2 by 2 is always at least a rotation in space (and always has imaginary eigenvalues).
    $endgroup$
    – Merkh
    Jun 5 '16 at 10:25










  • $begingroup$
    MATLAB has this nice demo called eigshow that illustrates what you have shown here; there is also an analog of this visualization for SVD.
    $endgroup$
    – J. M. is not a mathematician
    Jun 5 '16 at 11:16














8












8








8





$begingroup$

Let me give a geometric explanation in 2D. The first fact that is almost always glossed over in class is that a "linear operator" or a matrix $T$ acting on a space $V to V$ looks like a combination of rotations, flips, and dilations. To image what I mean, think of a checker-board patterned cloth. If I apply a transform to the space, it stretches (or shrinks) the space by pulling the cloth in different directions, and maybe possibly rotates and flips the cloth too. My point, as I will show, is that the direction in which the space (the cloth) gets pulled is the eigenvectors. Lets start with pictures:



Start by applying $T = begin{pmatrix} 1 & 0 \ 1 & 1 end{pmatrix}$ to the standard grid I discussed above in 2D. The image of this transformation (only showing the 20 by 20 grid's image) is shown below. The bold lines in the second image indicate the eigenvectors of $T.$ Notice that it only has 1 eigenvector (why?). The transform $T$ is such that it "shears" the space, and the only unit vector that doesn't change direction as a result of $T$ is $e_2 = (0,1)^T.$ Draw any other line on this cloth, and every time you apply $T$ it will become more vertical (more aligned with the eigenvector).



enter image description here





Lets look at an example with two eigenvectors:



Here, let $T = begin{pmatrix} 2 & -1 \ -1 & 2 end{pmatrix},$ another common matrix you'll come across. Now we see this has 2 eigenvectors. First, I must apologize about the scales on these images, the eigenvectors are perpendicular (you'll soon learn why must this be so.) Immediately we can see what the action of $T$ is on the standard 20 by 20 grid. Physically, imagine a cloth being held fixed at all 4 corners, then 2 of the opposite corners get stretched in the direction of the bold line. The bold lines are the vectors that do not change direction as $T$ is applied, or one could say they are the characteristic directions of $T$. As you apply $T$ over and over again, any other vector in the space tends toward the direction of an eigenvector.



enter image description here



And last, I decided not to leave a picture, but consider $T = begin{pmatrix} cos{x} & -sin{x} \ sin{x} & cos{x} end{pmatrix}.$ This is a rotation of the space about the origin, and has no (real) eigenvectors. Could you imagine why a pure rotation would have no real eigenvectors? Hopefully its now clear that because every vector changes direction upon applying $T,$ no real eigenvectors exist.



These concepts can easily be generalized to higher dimensions. My suggestion as a first year student would be to look back at these examples as you learn about geometric multiplicity, symmetric matrices, and orthogonal (unitary) matrices, perhaps this example will give you some physical insight on those important classes of operators too.






share|cite|improve this answer









$endgroup$



Let me give a geometric explanation in 2D. The first fact that is almost always glossed over in class is that a "linear operator" or a matrix $T$ acting on a space $V to V$ looks like a combination of rotations, flips, and dilations. To image what I mean, think of a checker-board patterned cloth. If I apply a transform to the space, it stretches (or shrinks) the space by pulling the cloth in different directions, and maybe possibly rotates and flips the cloth too. My point, as I will show, is that the direction in which the space (the cloth) gets pulled is the eigenvectors. Lets start with pictures:



Start by applying $T = begin{pmatrix} 1 & 0 \ 1 & 1 end{pmatrix}$ to the standard grid I discussed above in 2D. The image of this transformation (only showing the 20 by 20 grid's image) is shown below. The bold lines in the second image indicate the eigenvectors of $T.$ Notice that it only has 1 eigenvector (why?). The transform $T$ is such that it "shears" the space, and the only unit vector that doesn't change direction as a result of $T$ is $e_2 = (0,1)^T.$ Draw any other line on this cloth, and every time you apply $T$ it will become more vertical (more aligned with the eigenvector).



enter image description here





Lets look at an example with two eigenvectors:



Here, let $T = begin{pmatrix} 2 & -1 \ -1 & 2 end{pmatrix},$ another common matrix you'll come across. Now we see this has 2 eigenvectors. First, I must apologize about the scales on these images, the eigenvectors are perpendicular (you'll soon learn why must this be so.) Immediately we can see what the action of $T$ is on the standard 20 by 20 grid. Physically, imagine a cloth being held fixed at all 4 corners, then 2 of the opposite corners get stretched in the direction of the bold line. The bold lines are the vectors that do not change direction as $T$ is applied, or one could say they are the characteristic directions of $T$. As you apply $T$ over and over again, any other vector in the space tends toward the direction of an eigenvector.



enter image description here



And last, I decided not to leave a picture, but consider $T = begin{pmatrix} cos{x} & -sin{x} \ sin{x} & cos{x} end{pmatrix}.$ This is a rotation of the space about the origin, and has no (real) eigenvectors. Could you imagine why a pure rotation would have no real eigenvectors? Hopefully its now clear that because every vector changes direction upon applying $T,$ no real eigenvectors exist.



These concepts can easily be generalized to higher dimensions. My suggestion as a first year student would be to look back at these examples as you learn about geometric multiplicity, symmetric matrices, and orthogonal (unitary) matrices, perhaps this example will give you some physical insight on those important classes of operators too.







share|cite|improve this answer












share|cite|improve this answer



share|cite|improve this answer










answered Jun 5 '16 at 10:14









MerkhMerkh

2,312719




2,312719








  • 1




    $begingroup$
    Wow thank you for the detailed explanation Merkh! I must admit my geometric knowledge is a weak point of mine, but nevertheless it was insightful seeing that side of it, and definitely something not shown in class.
    $endgroup$
    – QuantumLoopy
    Jun 5 '16 at 10:24










  • $begingroup$
    Interesting notes for those who follow: For a symmetric 2 by 2 (real) matrix, the image always looks like what is shown, because the eigenvectors are always proportional to $(1,1)^T$ and $(1,-1)^T,$ although the scaling of the image do not do any justice for this point. Likewise, the 3rd example $T$ is skew-symmetric. Any skew-sym 2 by 2 is always at least a rotation in space (and always has imaginary eigenvalues).
    $endgroup$
    – Merkh
    Jun 5 '16 at 10:25










  • $begingroup$
    MATLAB has this nice demo called eigshow that illustrates what you have shown here; there is also an analog of this visualization for SVD.
    $endgroup$
    – J. M. is not a mathematician
    Jun 5 '16 at 11:16














3












$begingroup$

Well, I will try to give you a simple example. Eigenvalue equations turn up in many different areas; since you are learning linear algebra, I will give you an example from that setting. Suppose you are working in $\Bbb{R}^3$ and you are given a linear transformation $T:\Bbb{R}^3 \to \Bbb{R}^3$. Then one says that a nonzero $v \in \Bbb{R}^3$ is an eigenvector of $T$ with eigenvalue $\lambda \in \Bbb{R}$ if it satisfies the following equation:



$$T(v)=\lambda v. \quad (1)$$



So one question you can ask is the following: given a linear transformation $T:\Bbb{R}^3 \to \Bbb{R}^3$, what are its eigenvalues and eigenvectors?



Well, to answer this question: by definition, the eigenvectors are the nonzero $v\in \Bbb{R}^3$ such that $T(v)=\lambda v$ for some $\lambda$. Since we are working in $\Bbb{R}^3$, we can represent $T$ by a matrix $A\in \Bbb{R}^{3\times 3}$, and therefore equation $(1)$ may be written in matrix form as



$$Av = \lambda v,$$



which is the same as saying



$$(A-\lambda\,\mathrm{Id})v=0. \quad (2)$$



So finding the eigenvectors amounts to solving this homogeneous linear system (in this case, $3$ equations in the $3$ components of $v$). Note that a nonzero solution $v$ can exist only when the matrix $A-\lambda\,\mathrm{Id}$ is singular; this is what pins down the admissible values of $\lambda$.



An example is given in this link: http://www.sosmath.com/matrix/eigen2/eigen2.html



I could copy it here, but I think it's easier if you just check it out. You will see that they first solve $\det(A-\lambda\,\mathrm{Id})=0$: the determinant vanishes exactly when the system $(2)$ has a nonzero solution, so its roots are precisely the eigenvalues.
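
If you want to see these two steps carried out by a computer, here is a minimal sketch in Python with SymPy (the matrix $A$ below is just an arbitrary example I made up, not the one from the linked page):

    from sympy import Matrix, eye, symbols, solve

    lam = symbols('lambda')

    # An arbitrary 3x3 example matrix.
    A = Matrix([[2, 0, 0],
                [0, 3, 4],
                [0, 4, 9]])

    # Step 1: the eigenvalues are the roots of det(A - lambda*Id) = 0.
    char_poly = (A - lam * eye(3)).det()
    print(solve(char_poly, lam))  # [1, 2, 11]

    # Step 2: for each eigenvalue, solve (A - lambda*Id) v = 0 for v.
    # eigenvects() returns (eigenvalue, multiplicity, basis of eigenvectors).
    for value, multiplicity, vectors in A.eigenvects():
        print(value, vectors)

For a purely numerical check of $Av=\lambda v$, NumPy's numpy.linalg.eig does both steps at once.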






share|cite|improve this answer









$endgroup$



answered Jun 5 '16 at 7:56

Joaquin Liniado

2,447824













  • $begingroup$
    I would assume that $A$ is the matrix representing the transformation in the equation you provided. Thank you for the link, I don't think I've seen that one yet so I'll definitely take a look!
    $endgroup$
    – QuantumLoopy
    Jun 5 '16 at 8:57










  • $begingroup$
    Actually, equation (2) is not directly solvable as written: we don't know $\lambda$. We first have to find the values of $\lambda$ for which a nonzero solution exists.
    $endgroup$
    – user4951
    Jun 6 '16 at 2:36















