Bounding the operator norm of a matrix using eigenvalues












I have a symmetric matrix $A$ and I have managed to show that all eigenvalues of $A$ lie in $[-c,c]$. I want to prove that the operator norm of $A$ is at most $c$, i.e. $\|A\|\leq c$.

I could prove that the square of this norm is at most $c^2$, as follows:

From the hypothesis, we have that $cI-A\succeq 0$ and $cI+A\succeq 0$, since the hypothesis states that $-cx^Tx\leq x^TAx\leq cx^Tx$. Now $(cI-A)(cI+A)=(cI+A)(cI-A)=c^2I-A^2\succeq 0$.

So $x^TA^2x\leq c^2 x^Tx \implies \|Ax\|^2\leq c^2 \|x\|^2$.

However, I don't see how to get the bound of $c$.
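The claimed bound is easy to sanity-check numerically. A minimal sketch, assuming NumPy: build a random symmetric matrix and rescale it so its spectrum lies in $[-c,c]$, then compare its spectral norm to $c$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix, rescaled so all eigenvalues lie in [-c, c].
c = 1.0
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2
A *= c / np.max(np.abs(np.linalg.eigvalsh(A)))

# The operator (spectral) norm, i.e. the largest singular value,
# should then be at most c (up to floating-point error).
op_norm = np.linalg.norm(A, 2)
```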










eigenvalues-eigenvectors norm spectral-theory

asked Dec 25 '18 at 6:28 by User Not Found, edited Dec 25 '18 at 7:16






















          3 Answers
Your $\preceq$s are backwards. The hypothesis implies that $cI-A$ and $cI+A$ are both positive semidefinite, not negative semidefinite. So their product is also positive semidefinite, and this gives you the inequality $x^T A^2 x \le c^2 x^T x$. Since $x^T A^2 x = x^T A^T A x = (Ax)^T (Ax) = \|Ax\|^2$, this implies $\|Ax\|^2 \le c^2 \|x\|^2$, as desired. (What you wrote seemed to have lost some squares.)



The conclusion $\|Ax\| \ge c^2 \|x\|$ (or even $\|Ax\|^2 \ge c^2 \|x\|^2$) is clearly wrong, as you can see by taking $A$ to be the zero matrix.
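The zero-matrix counterexample can be checked directly; a short sketch assuming NumPy:

```python
import numpy as np

# The zero matrix has every eigenvalue in [-c, c], yet ||Ax|| = 0 for all x,
# so no lower bound of the form ||Ax|| >= c^2 ||x|| can hold.
A = np.zeros((3, 3))
c = 2.0
x = np.ones(3)
residual = np.linalg.norm(A @ x)  # 0.0, far below c**2 * ||x||
```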






answered Dec 25 '18 at 6:38 by Nate Eldredge






















• Oh wait...this just proves my claim. Thanks for going through it and correcting it. – User Not Found, Dec 25 '18 at 7:19




















If your definition of operator norm is "maximum singular value," then simply note that the eigenvalues of $A^\top A$ lie in $[0, c^2]$, so the singular values lie in $[0, c]$.

If your definition of operator norm is $\max_{x : \|x\| = 1} \|A x\|$, then write $A = U\Lambda U^\top$ where $U$ is orthogonal and $\Lambda$ is diagonal. Then

$$\max_{x : \|x\|=1} \|Ax\|^2 = \max_{x : \|x\|=1} x^\top A^\top A x
= \max_{x : \|x\|=1} x^\top U\Lambda^2 U^\top x \overset{y=U^\top x}{=} \max_{y : \|y\|=1} y^\top \Lambda^2 y = \max_{y : \sum_i y_i^2 = 1} \sum_{i=1}^n y_i^2 \lambda_i^2.$$

WLOG let $\lambda_1^2 \ge \cdots \ge \lambda_n^2$. By Hölder's inequality, $\sum_{i=1}^n y_i^2 \lambda_i^2 \le \left(\sum_{i=1}^n y_i^2\right) \lambda_1^2$, with equality when $y = (1, 0, \ldots, 0)$.
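The chain of equalities above can be verified numerically; a sketch assuming NumPy, where `np.linalg.eigh` supplies the decomposition $A = U\Lambda U^\top$:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                  # symmetric test matrix

lam, U = np.linalg.eigh(A)         # A = U @ diag(lam) @ U.T

# max_{||x||=1} ||Ax||^2 equals the largest lambda_i^2, attained at the
# eigenvector for the eigenvalue of largest magnitude (the Hölder step).
x = U[:, np.argmax(np.abs(lam))]
sq_norm_at_max = np.linalg.norm(A @ x) ** 2
```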






answered Dec 25 '18 at 6:38, edited Dec 25 '18 at 6:43 by angryavian

































If $A$ is symmetric (and real) then it has an orthonormal basis of eigenvectors $v_k$.
Then any unit vector $x$ can be written as $x=\sum_k x_k v_k$ with $\sum_k |x_k|^2 = 1$, and so
$\|Ax\|^2 = \|A(\sum_k x_k v_k)\|^2 = \|\sum_k x_k A v_k\|^2 = \|\sum_k x_k \lambda_k v_k\|^2 = \sum_k |x_k|^2|\lambda_k|^2 \le c^2 \sum_k |x_k|^2 = c^2$.
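The eigenbasis expansion above translates directly into a numerical check; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2
lam, V = np.linalg.eigh(A)    # columns of V: orthonormal eigenvectors v_k

x = rng.standard_normal(4)
x /= np.linalg.norm(x)        # unit vector
xk = V.T @ x                  # coefficients in x = sum_k x_k v_k

# ||Ax||^2 = sum_k x_k^2 lambda_k^2 <= c^2, with c = max |lambda_k|
c = np.max(np.abs(lam))
sq_norm = np.linalg.norm(A @ x) ** 2
```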






answered Dec 25 '18 at 7:08 by copper.hat





















