Is there any rule of thumb when it comes to selecting the control/prediction horizon for MPC?












I have a simple question:

Is there any rule of thumb when it comes to selecting the control/prediction horizon for MPC?

Normally I set the control and prediction horizons equal, but I have heard that this is not good practice.

I'm developing my own adaptive (subspace identification) constrained MPC in GNU Octave, and it works very well! But I still need a method to automatically select the prediction and control horizons.

I know the model of the system, and I can find the damping, time constant, poles, and eigenfrequency. Can I use them to compute the horizons?

In case you wonder which algorithm I'm using: I use Observer Kalman Filter Identification (OKID) to compute the impulse response from an arbitrary input and output, then the Eigensystem Realization Algorithm (ERA) to turn the impulse response into a discrete state-space model. I have tried N4SID, MOESP, and ARX, but they are too advanced and required more tuning and data to get a good model.



System Identification Algorithms:
https://github.com/DanielMartensson/Mataveid



Constrained Model Predictive Control:
https://github.com/DanielMartensson/Matavecontrol
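One common heuristic (a suggestion on my part, not something established in this thread) is to derive the dominant time constant from the poles of the identified discrete model and choose the prediction horizon to cover a few settling times, with a shorter control horizon. A minimal sketch in Python (the same arithmetic ports directly to Octave); `horizons_from_model` and both tuning factors are hypothetical names, not a standard API:

```python
import numpy as np

def horizons_from_model(A, Ts, settle_factor=4, ctrl_fraction=0.25):
    """Heuristic horizon choice. A: discrete-time state matrix, Ts: sample time [s]."""
    eig = np.linalg.eigvals(A)
    mags = np.abs(eig)
    mags = mags[(mags > 0) & (mags < 1)]     # ignore integrators / unstable modes
    # Map each discrete pole magnitude |z| to a time constant: tau = -Ts / ln|z|
    tau_dom = np.max(-Ts / np.log(mags))     # slowest (dominant) time constant
    Np = int(np.ceil(settle_factor * tau_dom / Ts))   # cover ~4 time constants
    Nc = max(1, int(np.ceil(ctrl_fraction * Np)))     # shorter control horizon
    return Np, Nc

# Example: a single pole at z = 0.95 with Ts = 0.1 s gives tau ~ 1.95 s
Ts = 0.1
A = np.array([[0.95]])
print(horizons_from_model(A, Ts))            # → (78, 20)
```

The `settle_factor` and `ctrl_fraction` values are tuning knobs; the only firm idea is that the prediction horizon should span the dominant dynamics you want the controller to "see".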



Tags: optimization, control-theory, optimal-control, linear-control, system-identification






asked Dec 26 '18 at 13:01 by Daniel Mårtensson






















          2 Answers

I would say that there is no simple rule: make the horizon long enough to capture the important behavior.



I've never heard anyone say it is bad practice to use the same control and prediction horizon. Why complicate matters with two design choices?






answered Dec 26 '18 at 19:34 by Johan Löfberg
• If you add a real rollout policy like LQR, then you can technically have an infinite horizon. I believe that without this you also do not have a stability proof. But you do need to show that this policy does not violate any of the constraints. – Kwin van der Veen, Dec 26 '18 at 22:10










• A standard approach is to add a terminal penalty derived from the infinite-horizon cost (i.e. the solution to the Riccati equation). This at least guarantees stability in a non-trivial set around the origin, and although it does not guarantee stability in general, it is a sound way to bring the MPC controller closer to the infinite-horizon solution and to make the unconstrained response identical to the LQ response. This serves as the basis for most guaranteed-stability approaches. – Johan Löfberg, Dec 27 '18 at 8:49
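The terminal-penalty idea from the comment above can be sketched as follows, assuming SciPy is available. The system matrices here are illustrative assumptions, not from the question; the DARE solution P becomes the terminal weight x_N' P x_N added to the usual stage cost:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time system (assumed for the sketch)
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)           # state stage weight
R = np.array([[1.0]])   # input stage weight

# P solves the discrete algebraic Riccati equation: the infinite-horizon
# LQ cost-to-go. Use it as the terminal penalty in the MPC objective.
P = solve_discrete_are(A, B, Q, R)

# The corresponding unconstrained LQ gain (the "rollout policy" from the
# first comment): u = -K x
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Sanity check: the closed loop A - B K is stable
assert np.all(np.abs(np.linalg.eigvals(A - B @ K)) < 1.0)
print(P)
```

With this terminal cost, the MPC solution coincides with the LQ solution whenever no constraint is active over the horizon, which is exactly the property the comment describes.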




















In most cases, the prediction and control horizons are chosen by trial and error. They are not the best values, but as long as they work, people are happy.

If you are looking for a more structured method, it is simple: just optimize the horizons $N_c$ and $N_p$, e.g. with an evolutionary algorithm such as a genetic algorithm (GA).
Have a look at the following publication:




          Mohammadi, Arash, et al. "Optimizing model predictive control horizons using genetic algorithm for motion cueing algorithm." Expert Systems with Applications 92 (2018): 73-81.




          Available here and here.



Regarding whether you should use $N_c=N_p$: it depends. In my case, since $N_p$ is very long, increasing the control horizon slows down the computation. If the prediction horizon is short enough, there is no problem with that. Also, is the future reference signal variable and predictable? If yes, feel free to use a long $N_c$; but if your future reference signal is constant, maybe $N_c=3$ is enough.






answered Dec 27 '18 at 3:25 by Arash (edited Jan 3 at 10:03)
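The horizon-optimization loop this answer proposes can be sketched as follows. For a small candidate set, an exhaustive search illustrates the idea; a GA would replace the brute-force `min` with an evolutionary search when the candidate space is large. `closed_loop_cost` is a hypothetical stand-in: in practice you would simulate your MPC against the identified model and score tracking error plus computational cost.

```python
import itertools

def closed_loop_cost(Np, Nc):
    """Toy cost model (an assumption, for illustration only): tracking
    error shrinks with the prediction horizon, while the QP size (and
    hence solve time) grows with both horizons."""
    tracking = 100.0 / Np
    compute = 0.05 * Np * Nc
    return tracking + compute

# Candidate horizons, enforcing the usual constraint Nc <= Np
candidates = [(Np, Nc)
              for Np, Nc in itertools.product(range(5, 51, 5), range(1, 11))
              if Nc <= Np]

best = min(candidates, key=lambda h: closed_loop_cost(*h))
print(best)   # → (45, 1)
```

The structure is the point here, not the toy numbers: define a scalar closed-loop performance measure, then search over $(N_p, N_c)$ with whatever optimizer fits the size of the search space.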












