Saturday, 15 March 2014

python - scipy: minimize vs. minimize_scalar; return F versus return F**2; shouldn't make a difference? -


I have just found a behavior that I cannot understand. Am I missing something?

I have the following task:

  def my_cost_fun(x, a, b, c):  # x is a scalar; a, b, c are fixed arrays
      F = some_fun(x, a, b, c) - x
      return F

I solve for x either with the root finders:

  • optimize.fsolve(my_cost_fun, 0.2, args=(a, b, c))
  • optimize.brentq(my_cost_fun, -0.2, 0.2, args=(a, b, c))

    or with the minimize function:

    • optimize.minimize(my_cost_fun, 0.2, args=(a, b, c), method='L-BFGS-B', bounds=((0, a),))
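Since the question does not define some_fun, here is a minimal runnable sketch with a hypothetical stand-in for it, showing the three calls side by side (the stand-in function, the brentq bracket, and the bounds are all assumptions made for illustration):

```python
import numpy as np
from scipy import optimize

# Hypothetical stand-in for some_fun (the question does not define it):
# a smooth function of a scalar x with fixed parameters a, b, c.
def some_fun(x, a, b, c):
    return a * np.exp(-b * x) + c

def my_cost_fun(x, a, b, c):
    F = some_fun(x, a, b, c) - x  # a root of F is a fixed point of some_fun
    return F

a, b, c = 1.0, 2.0, 0.1

# Root finders: fsolve needs a starting guess; brentq needs a bracket
# [lo, hi] across which F changes sign.
root_fsolve = optimize.fsolve(my_cost_fun, 0.2, args=(a, b, c))[0]
root_brentq = optimize.brentq(my_cost_fun, -0.2, 1.2, args=(a, b, c))

# Minimizer: works on F**2, which is non-negative and zero exactly at a root.
res = optimize.minimize(lambda x: my_cost_fun(x[0], a, b, c) ** 2, 0.2,
                        method='L-BFGS-B', bounds=((0.0, 1.0),))

print(root_fsolve, root_brentq, res.x[0])
```

All three approaches should agree on the root for this stand-in; the timing differences the question reports depend on the actual some_fun.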
    The strange thing:
    • If I use return F:

      • the brentq method and fsolve give the same result and are the fastest (%timeit: ~250 µs per loop)
      • L-BFGS-B (and SLSQP and TNC) do not move away from x0 at all

    • If I use return F**2:

        • fsolve gives the right solution, but is slower: 1.2 ms for the fastest loop

        • L-BFGS-B also gives the right solution, but converges slowly: 1.5 ms for the fastest loop

      Can anyone explain why?

As I described in the comments:

Here is a possible explanation for why L-BFGS-B does not work when you use return F: if the value of F can be negative, then optimize.minimize tries to find the most negative value it can reach. It is searching for a minimum, not necessarily for a root. If you change it to return F**2 instead, then, since F**2 is always non-negative for a real-valued F, the minima of F**2 occur where F = 0, i.e. the minima are the roots.
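The point can be demonstrated with a tiny self-contained example (F here is a toy function chosen for illustration, not the question's cost function): minimizing F itself just runs to the most negative value the bounds allow, while minimizing F**2 stops at the root.

```python
import numpy as np
from scipy import optimize

# Toy function, not the question's some_fun: F(x) = x - 1 has its root at
# x = 1 but is unbounded below, so it has no interior minimum.
def F(x):
    x = np.asarray(x).ravel()[0]  # minimize passes x as a length-1 array
    return x - 1.0

# Minimizing F directly drives x to the lower bound of the box (the most
# negative reachable value of F), not to the root.
res_F = optimize.minimize(F, 0.2, method='L-BFGS-B', bounds=((0.0, 5.0),))

# Minimizing F**2 finds the root, since F**2 >= 0 with equality iff F = 0.
res_F2 = optimize.minimize(lambda x: F(x) ** 2, 0.2,
                           method='L-BFGS-B', bounds=((0.0, 5.0),))

print(res_F.x[0])   # ends up at the lower bound, far from the root
print(res_F2.x[0])  # ends up close to the root x = 1
```

With bounded solvers like L-BFGS-B the run on F at least terminates at a bound; with an unbounded method, minimizing a function that can decrease forever would simply diverge.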

This does not address your timing question, but that may be of secondary concern anyway. I would still be curious to study the timing with your particular some_fun(x, a, b, c), if you get a chance to post its definition.

