Fit exponential with constant














I have data which should fit an exponential function with a constant:



$$y=a\cdot\exp(b\cdot t) + c.$$



Now, I can fit an exponential without a constant by least squares, by taking the log of $y$ and making the whole equation linear. Is it possible to use least squares with a constant too (I can't seem to convert the above to linear form; maybe I am missing something here), or do I have to use a nonlinear fitting function like nlm in R?
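
(For reference, a minimal Python sketch of the log trick in the no-constant case; the function name is hypothetical. With $c\neq 0$ one would need $\log(y-c)$, but $c$ is unknown, which is exactly the difficulty.)

import numpy as np

def fit_exponential_no_offset(t, y):
    # log-linearize y = a*exp(b*t): log(y) = log(a) + b*t (requires y > 0)
    b, log_a = np.polyfit(t, np.log(y), 1)   # slope, intercept
    return np.exp(log_a), b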






















  • does $t$ take large negative values? is $b$ positive?
    – hHhh, Jun 24 '15 at 13:52












  • $t \ge 0$ and $b$ is negative for the dataset I have.
    – silencer, Jun 24 '15 at 13:57





































exponential-function regression






asked Jun 24 '15 at 13:23 by silencer; edited Jan 15 at 8:27 by M. Winter






















4 Answers



















The problem is intrinsically nonlinear since you want to minimize $$SSQ(a,b,c)=\sum_{i=1}^N\big(ae^{bt_i}+c-y_i\big)^2$$ and the nonlinear regression will require good (or at least reasonable and consistent) estimates for the three parameters.



But suppose that you assign a value to $b$; then, defining $z_i=e^{bt_i}$, the problem becomes linear $(y=az+c)$ and a linear regression will give the values of $a,c$ for the given $b$, as well as the sum of squares. Try a few values of $b$ until you see a minimum of $SSQ(b)$. For this approximate value of $b$ you have, from the linear regression, the corresponding $a$ and $c$, and you are ready to go with the nonlinear regression.



Another approach could be: assume a value of $c$ and rewrite the model as $$y-c=a e^{bt}$$ $$\log(y-c)=\alpha + bt,$$ which means that, defining $w_i=\log(y_i-c)$, the model is just $w=\alpha + bt$ and a linear regression will give $\alpha$ and $b$. From these, recompute $y_i^*=c+e^{\alpha + bt_i}$ and the corresponding sum of squares $SSQ(c)$. Again, trying different values of $c$ will show a minimum, and for the best value of $c$ you know the corresponding $b$ and $a=e^{\alpha}$, and you are ready to go with the nonlinear regression.



There is, for sure, a trial-and-error phase, but it is very fast.



Edit



You can even have an immediate estimate of parameter $b$ if you take three equally spaced points $t_1$, $t_2$, $t_3$ such that $t_2=\frac{t_1+t_3}2$, and the corresponding $y_i$'s. After simplification, $$\frac{ y_3-y_2}{ y_3-y_1}=\frac{1}{1+e^{\frac{1}{2} b (t_1-t_3)}},$$ from which you get the estimate of $b$ $$b=\frac{2}{t_1-t_3} \log \left(\frac{y_1-y_2}{y_2-y_3}\right),$$ from which $$a=\frac{y_1-y_3}{e^{b t_1}-e^{b t_3}},$$ $$c=y_1-ae^{bt_1}.$$
Using the data given on page 18 of JJacquelin's book, let us take (approximate values for the $x$'s) the three points $(-1,0.418)$, $(0,0.911)$, $(1,3.544)$. This immediately gives $b\approx 1.675$, $a\approx 0.606$, $c\approx 0.304$, which are extremely close to the end results by JJacquelin in his book for this specific example.



Having these estimates, just run the nonlinear regression.



This was a trick proposed by Yves Daoust here
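
(For illustration, a minimal Python sketch of this two-stage recipe; the function names are hypothetical and numpy/scipy are assumed for the final nonlinear polish. The three-point formula assumes a roughly uniform $t$ grid and that $y_1-y_2$ and $y_2-y_3$ have the same sign, which holds for clean monotone data.)

import numpy as np
from scipy.optimize import curve_fit

def three_point_start(t, y):
    # crude (a, b, c) from the first, middle, and last samples;
    # assumes the three points are roughly equally spaced in t
    m = len(t) // 2
    b = 2.0 / (t[0] - t[-1]) * np.log((y[0] - y[m]) / (y[m] - y[-1]))
    a = (y[0] - y[-1]) / (np.exp(b * t[0]) - np.exp(b * t[-1]))
    c = y[0] - a * np.exp(b * t[0])
    return a, b, c

def fit_exponential_offset(t, y):
    def f(t, a, b, c):                      # the model y = a*exp(b*t) + c
        return a * np.exp(b * t) + c
    p0 = three_point_start(t, y)            # stage 1: starting values
    popt, _ = curve_fit(f, t, y, p0=p0)     # stage 2: nonlinear least squares
    return popt                             # refined (a, b, c)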






answered Jun 25 '15 at 8:21 by Claude Leibovici






















    A direct method of fitting (no guessed initial values required, no iterative process) is shown below.
    For the theory, see the paper (pp. 16-17): https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales



    [image: worked numerical example of the direct fitting method]
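
    (A minimal Python sketch of that direct method, using the parameterization $y = a + b\,e^{cx}$ of the integral equation JJacquelin quotes in a comment below; numpy assumed, function name hypothetical. In the OP's notation $y=a e^{bt}+c$ the symbols are permuted: the offset is $c$ there and $a$ here.)

    import numpy as np

    def fit_exp_offset_direct(x, y):
        # Direct (non-iterative) fit of y = a + b*exp(c*x); x sorted ascending.
        # Since y' = c*(y - a), integrating from x1 gives
        #   y - y1 = -a*c*(x - x1) + c*S,   S = integral of y from x1 to x,
        # so a linear regression of (y - y1) on [x - x1, S] yields c; with c
        # known, a second linear regression gives a and b.
        S = np.concatenate(([0.0],
                            np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))
        M = np.column_stack([x - x[0], S])
        (_, c), *_ = np.linalg.lstsq(M, y - y[0], rcond=None)
        theta = np.exp(c * x)
        M2 = np.column_stack([np.ones_like(x), theta])
        (a, b), *_ = np.linalg.lstsq(M2, y, rcond=None)
        return a, b, c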






    answered Jun 24 '15 at 13:53 by JJacquelin













    • Fantastic. Unfortunately, I don't read French. Can you point me to the page that states the integral equation used to perform the transformation? I would love to learn more.
      – muaddib, Jun 24 '15 at 14:17






    • The integral equation is (6), page 16: $$y-y_1=-ac(x-x_1)+c\int_{x_1}^x y(u)\,du.$$ Presently, there is no available translation of the whole paper.
      – JJacquelin, Jun 24 '15 at 15:28












    • I am always impressed by your approaches!
      – Claude Leibovici, Jun 25 '15 at 13:34










    • Here's a numpy implementation of the above. It behaves a bit weirdly sometimes; no guarantees that it is accurate. hastebin.com/yuxoxotipo.py
      – Mingwei Samuel, Jul 24 '17 at 22:56






    • @JJacquelin This is very nice. I've started using the approach you provide in your paper(s) in a few places in my work. I've opened github.com/scipy/scipy/pull/9158; I hope the content (and citation) is to your liking.
      – Mad Physicist, Aug 20 '18 at 19:24




















    You can solve the problem by regressing on the derivative (finite differences) of the data. Formulate the problem as solving for $a, b, c$ in



    $$y=b\cdot e^{ax} + c.$$



    Taking the derivative and then the logarithm, you get:



    \begin{align}
    \frac{dy}{dx} &= ab\cdot e^{ax} \\
    \log\left(\frac{dy}{dx}\right) &= \log(ab\cdot e^{ax}) \\
    &= \log(ab) + \log(e^{ax}) \\
    &= \log(ab) + ax
    \end{align}



    You can then fit a linear model of the form $u = s x + t$, where $u=\log\left(\frac{dy}{dx}\right)$, $s=a$, and $t=\log(ab)$. After solving, you obtain $a = s$ and $b = \frac{e^t}{a}$. Finally, you can obtain $c$ by subtracting the reconstructed model from the original data.



    Note: this whole exercise will require you to sort your data set by $x$ values.



    Here is Python code to accomplish the task:



    import numpy as np

    def regress_exponential_with_offset(x, y):
        # sort values by x
        ind = np.argsort(x)
        x = x[ind]
        y = y[ind]

        # decaying exponentials need special treatment,
        # since we can't take the log of negative numbers
        neg = -1 if y[0] > y[-1] else 1
        dx = np.diff(x)
        dy = np.diff(y)
        dy_dx = dy / dx

        # keep only points where the sign-corrected slope is positive,
        # filtering out any remaining negative numbers
        v = x[:-1]
        u = neg * dy_dx
        ind = np.where(u > 0)[0]
        v = v[ind]
        u = u[ind]

        # linear regression in log space: log(u) = s*v + t
        u = np.log(u)
        s, t = np.polyfit(v, u, 1)
        a = s
        b = neg * np.exp(t) / a
        yy = np.exp(a * x) * b
        c = np.median(y - yy)   # offset from the median residual
        return a, b, c
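
    (A quick usage sketch; the synthetic data and seed are illustrative only. As the comments below point out, differencing amplifies noise, so keep that caveat in mind.)

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 5.0, 200)
    y = 2.0 * np.exp(-0.8 * x) + 0.5 + rng.normal(0.0, 0.01, x.size)
    a, b, c = regress_exponential_with_offset(x, y)
    # expect a ≈ -0.8, b ≈ 2.0, c ≈ 0.5, noise permitting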





    answered Jan 15 at 8:11 by gilsho









    • I am sorry but this is extremely dangerous to do (except if the data are strictly perfect).
      – Claude Leibovici, Jan 15 at 10:48










    • @ClaudeLeibovici Can you please elaborate? I've used this technique successfully on pretty noisy data.
      – gilsho, Jan 16 at 13:53










    • I did not want to answer as soon as I saw your comment, and, on reflection, I shall not answer now. But I am ready to have as long a discussion as required on this topic with you (chat room, e-mails, ...) if you agree to first ask the question "Is this correct or not?" at Cross Validated Stack Exchange. Cheers.
      – Claude Leibovici, Jan 17 at 2:15






    • @gilsho. The idea of using the derivative of a nonlinear function to transform a nonlinear regression into a linear one is not new. It is a good idea, but with important drawbacks in practice due to the hazards of numerical calculus. Such a method was described in the paper referenced below, but it is a very uncertain method if the data are scattered and/or noisy, as shown in that paper. I fully agree with Claude Leibovici's comment. In fact, it is much better to use the antiderivative than the derivative, because numerical integration is much more accurate and stable than numerical differentiation.
      – JJacquelin, Jan 17 at 5:30










    • For example, the use of an integral equation (but not a differential equation) in the case of $y=a \exp(bx)+c$ is described on pages 15-18 of the paper: fr.scribd.com/doc/14674814/Regressions-et-equations-integrales
      – JJacquelin, Jan 17 at 6:56






















    I tried to deduce the formula for $b$ in the case of equidistant $t_i$'s, but I got a different one: there is a reciprocal under the logarithm in my result. (Please check it; maybe I am not right.)

    Sorry, I see it now: the $t_3-t_1$ minus sign makes the argument of the log come out right.






    answered Jun 6 '18 at 13:36 by Greg Detzky; edited Jun 6 '18 at 13:46













