Halmos Finite-Dimensional Vector Spaces: Does $\mathcal{P}$ over $\mathbb{C}$ with $x(t) = x(1 - t)$ form a...


























Paul R. Halmos "Finite-Dimensional Vector Spaces", 2e, chapter I, section 2, exercise 5.d:




Consider the vector space $\mathcal{P}$ and the subsets $\mathcal{V}$
of $\mathcal{P}$ consisting of those vectors (polynomials) $x$ for
which



(d) $x(t) = x(1 - t)$ for all $t$.



In which of these cases is $\mathcal{V}$ a vector space?




I would suggest that the subset satisfying (d) does form a vector space, since




  • (d) forms a linear constraint in the polynomial's coefficients $\mathbf{a}$ as in


$$
g(\mathbf{a}) = x(t) - x(1 - t) = 0,
$$




  • the zero vector is included,

  • every element has an inverse element.


$g$ is a linear constraint, since
$$
g(\mathbf{a} + \mathbf{b}) = g(\mathbf{a}) + g(\mathbf{b}) \qquad \wedge \qquad \alpha g(\mathbf{a}) = g(\alpha \mathbf{a}),
$$

where $\mathbf{a}$, $\mathbf{b}$ are the coefficients of polynomials in $\mathcal{P}$, and $\alpha$ is a complex number.
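
For example, in degree two the constraint reduces to a single homogeneous linear equation in the coefficients: writing $x(t) = a_0 + a_1 t + a_2 t^2$,
$$
x(1-t) = (a_0 + a_1 + a_2) + (-a_1 - 2a_2)\,t + a_2 t^2,
$$
so $x(t) = x(1-t)$ for all $t$ exactly when $a_1 + a_2 = 0$; for instance $x(t) = t^2 - t$ satisfies (d).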



Is that correct?










linear-algebra abstract-algebra vector-spaces






asked Jan 5 at 8:24 by Max Herrmann (edited Jan 5 at 10:18)
2 Answers


















I would suggest that the subset satisfying (d) does form a vector space, since


• (d) forms a linear constraint in the polynomial's coefficients,

• the zero vector is included,

• every element has an inverse element.


Is that correct?


This is the right idea, but when you say that (d) forms a linear constraint in the coefficients, what do you mean?

I suppose what you mean is that if the polynomial is $a_0 + a_1 x + a_2 x^2 + \cdots$ then it is some linear equation in $a_0, a_1, \ldots$. But why does this imply that (d) is a vector space?

However you can formalize (d), I would approach this by defining a map
$$
A : \mathcal{P} \to \mathcal{P}
$$
where $A(\boldsymbol{x}) = \boldsymbol{x}(t) - \boldsymbol{x}(1-t)$.

Then, show that $A$ is a linear map -- that is, it preserves addition and scalar multiplication.

Finally, $\mathcal{V}$ is the set of polynomials $\boldsymbol{x}$ such that $A(\boldsymbol{x}) = \boldsymbol{0}$. Since $A$ is linear, this is a vector subspace. So this completes your proof.
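
To spell out that check in the answer's notation (a sketch of the step the answer leaves to the reader): for polynomials $\boldsymbol{x}, \boldsymbol{y}$ and a scalar $\alpha \in \mathbb{C}$,
$$
A(\boldsymbol{x} + \boldsymbol{y}) = (\boldsymbol{x} + \boldsymbol{y})(t) - (\boldsymbol{x} + \boldsymbol{y})(1-t) = \bigl[\boldsymbol{x}(t) - \boldsymbol{x}(1-t)\bigr] + \bigl[\boldsymbol{y}(t) - \boldsymbol{y}(1-t)\bigr] = A(\boldsymbol{x}) + A(\boldsymbol{y}),
$$
$$
A(\alpha\boldsymbol{x}) = \alpha\boldsymbol{x}(t) - \alpha\boldsymbol{x}(1-t) = \alpha\bigl[\boldsymbol{x}(t) - \boldsymbol{x}(1-t)\bigr] = \alpha A(\boldsymbol{x}),
$$
so $\mathcal{V} = \ker A$ is closed under addition and scalar multiplication (and contains $\boldsymbol{0}$).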






answered Jan 5 at 9:57 by 6005

Your first bullet point is not useful. Just verify the definition. Surely 0 is in there. Every element has an additive inverse. If $c$ is a constant, then $cx(t)=cx(1-t)$, so constant multiples of polynomials are also in there. So you're done.
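
As a concrete sanity check of these closure claims (a small SymPy sketch; the particular symmetric polynomials chosen below are just examples, not part of the exercise):

    import sympy as sp

    t = sp.symbols('t')

    # Two polynomials satisfying condition (d): x(t) = x(1 - t).
    x = t**2 - t                   # (1-t)^2 - (1-t) expands back to t^2 - t
    y = 3*(t**2 - t)**2 + sp.I     # symmetric, with a complex coefficient

    # Zero, sums, negatives and complex scalar multiples should all stay in the set.
    for p in (sp.Integer(0), x, y, x + y, -x, (2 + sp.I)*x):
        assert sp.expand(p - p.subs(t, 1 - t)) == 0   # condition (d) holds

    print("all candidates satisfy x(t) = x(1 - t)")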






answered Jan 5 at 8:43 by user495490

• What if I doubt that every element has an additive inverse?
  – Max Herrmann, Jan 5 at 8:54










• It seems you are misunderstanding the idea of a proof. You need to show this is the case. The constant case clearly shows the inverse: just take $c=-1$. However, I did leave out that two functions may not add to satisfy (d), but this is clear since you just add the constraint equation, i.e. $x(t)+y(t)=x(1-t)+y(1-t)$. Note that, notationally, you should write $(x+y)(t)$ for the aforementioned equation, as well as $(x+y)(1-t)$; this is, by definition, the addition of two polynomials.
  – user495490, Jan 5 at 9:34










• What does $-1 \cdot x(t) = -1 \cdot x(1-t)$ tell me?
  – Max Herrmann, Jan 5 at 9:45










• That inverses exist: $-x(t)$ also satisfies (d), and it is clearly an additive inverse to $x(t)$.
  – user495490, Jan 5 at 12:44










• I find it impressive that it is obvious to you. I need some more hints and smaller steps in argumentation, I'm afraid. E.g. proof by contradiction: Assume there is an element $x'$ in the subset which does not have an additive inverse. Then $g(-\mathbf{a}') = -x'(t) + x'(1-t) \neq 0$. But $g$ is homogeneous in $\mathbf{a}$. Hence, every element has an additive inverse.
  – Max Herrmann, Jan 5 at 14:30












