Conversation
Thanks! I started writing a minimal Zoom version to teach it to myself and come up with a structure / framework for how to implement its various flavours. I'll let you know once I have this! Can you rebase on

(force-pushed from 3ecc472 to 0bed437)
        lambda _y: fn(_y, args), state.y_eval, has_aux=True
    )

    if self.search._needs_grad_at_y_eval:
FYI this is verboten, accessing a private field.
How about just making it public? It's only accessed from minimisers.
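For illustration, a minimal sketch of what the public spelling might look like at the access site. The classes below are toy stand-ins, not the actual search or minimiser code, and `needs_grad_at_y_eval` is only the assumed public name for the field:

```python
from typing import ClassVar


class ToySearch:
    # Public counterpart of the `_needs_grad_at_y_eval` field discussed above.
    needs_grad_at_y_eval: ClassVar[bool] = True


class ToyMinimiser:
    def __init__(self, search):
        self.search = search

    def step_needs_grad(self) -> bool:
        # Reading a public attribute of a collaborator is fine; reading a
        # `_`-prefixed one from outside its class is what the review flags.
        return self.search.needs_grad_at_y_eval


assert ToyMinimiser(ToySearch()).step_needs_grad()
```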
    grad = lin_to_grad(lin_fn, state.y_eval, autodiff_mode=autodiff_mode)
    f_eval_info = FunctionInfo.EvalGrad(f_eval, grad)

    f_eval_info = cast(FunctionInfo.EvalGrad, f_eval_info)
Better to assert, which checks this at runtime as well as statically.
Changed to `assert isinstance(f_eval_info, FunctionInfo.EvalGrad)`.
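To illustrate the difference, a minimal sketch using stand-in classes rather than the real `FunctionInfo` types: `cast` is erased at runtime, while the `assert isinstance` form also fails immediately if the assumption is violated.

```python
from typing import cast


class Eval:
    def __init__(self, f):
        self.f = f


class EvalGrad(Eval):
    def __init__(self, f, grad):
        super().__init__(f)
        self.grad = grad


def grad_via_cast(f_eval_info: Eval):
    # `cast` only informs the static type checker; nothing is verified at
    # runtime, so a plain Eval slipping through would only fail later with an
    # unrelated AttributeError.
    f_eval_info = cast(EvalGrad, f_eval_info)
    return f_eval_info.grad


def grad_via_assert(f_eval_info: Eval):
    # `assert isinstance(...)` narrows the type for the checker *and* fails
    # loudly at runtime if the invariant is ever violated.
    assert isinstance(f_eval_info, EvalGrad)
    return f_eval_info.grad


assert grad_via_assert(EvalGrad(1.0, 2.0)) == 2.0
```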
    See [this documentation](./introduction.md) for more information.
    """

    _needs_grad_at_y_eval: ClassVar[bool]
Should this be an `AbstractClassVar`?
If so, then FYI this is a breaking change, since any other `AbstractSearch` subclasses out there will need to provide it.
Not sure. Zoom is the only one that uses it now, but I also added it to all other searches.
How about just defaulting to `False` in `AbstractSearch` and only overriding it in `Zoom`?
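A sketch of the defaulting option, assuming a plain `ClassVar` with a default rather than an `AbstractClassVar`; the class names are illustrative only:

```python
from typing import ClassVar


class AbstractSearch:
    # Non-breaking default: existing subclasses inherit False and need no change.
    needs_grad_at_y_eval: ClassVar[bool] = False


class Zoom(AbstractSearch):
    # Zoom is currently the only search that uses the gradient at y_eval, so it
    # is the only one that overrides the default.
    needs_grad_at_y_eval: ClassVar[bool] = True


class ThirdPartySearch(AbstractSearch):
    pass  # an unchanged downstream subclass keeps working


assert ThirdPartySearch.needs_grad_at_y_eval is False
assert Zoom.needs_grad_at_y_eval is True
```

The trade-off is that an `AbstractClassVar` would force every new search to make an explicit choice, whereas the default keeps the change non-breaking for existing subclasses.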
        grad = lin_to_grad(lin_fn, state.y_eval, autodiff_mode)
        f_eval_info = FunctionInfo.EvalGrad(f_eval, grad)
    else:
        f_eval_info = FunctionInfo.Eval(f_eval)
What if we need some other flavor of FunctionInfo later? I think picking a boolean flag here may paint us into a corner in the future. Perhaps a more general approach is required.
Do you mean that it might not just be grad that's required by a future search?
Maybe the search could have a method to set up `f_eval_info`? Or give a list of the minimum required info fields that need to be filled in here?
Then in `accepted`, instead of the flag, branch based on what fields `f_eval_info` already has? So here it could be `if not hasattr(f_eval_info, "grad")`.
Definitely 'no' to using `hasattr`, which I'm not a fan of. I generally argue that it should be clear from reading a piece of code what types it expects, which means using `isinstance` rather than `hasattr` to determine what is available :)
But yes, I'm thinking that a future search may need more than grad, in general any kind of `FunctionInfo`. Perhaps something like `info_needed_at_y_eval: AbstractClassVar[FunctionInfo]`? @johannahaffner may have stronger opinions than I do here though.
        lambda _y: fn(_y, args), state.y_eval, has_aux=True
    )

    if self.search._needs_grad_at_y_eval:
I am really coming to dislike the code duplication we have between these routines >.<
(Totally my fault a few years ago, no idea what the solution is.)
Reopening #143 as it was closed when its base branch was deleted on release.