Pipeline MHE #2
Conversation
Thanks Kuan-Han, this looks really good at first glance. I especially like the small size of the functions in the MHE constructor file. I intend to give this a detailed review relatively soon.

You're welcome, Robby! I really enjoyed reviewing your code and learning from it. Please let me know if you have any questions or comments.
Robbybp left a comment
Hi Kuan-Han, I think I will review this code in three stages:
- Make sure my data structures and functions are holding up to the task of using them for MHE. It seems like you are using the linker, helper, and time series data structures effectively, but I noticed that you could be using my functions for creating "cost expressions." I implemented this in a separate branch, kuanhanl#2, if you are curious.
- Add functionality to my data structures and functions that may be useful based on your code. This might be something like a class for "scalar data from a time-indexed variable" for things like objective weights. I'll use your MHE as a test case for this and open another PR into this branch when I have this functionality.
- Review the actual implementation of MHE here, e.g. the APIs to construct model components.
My motivation for these separate stages in this order is that I can do the first two independently (other than merging PRs into your MHE branch, which is not strictly necessary for me to work on these). Once these first two have been done and tested with both NMPC and MHE (although not necessarily merged into this branch), I can go ahead and work on a PR into IDAES with the data structures and utility functions. Then, when you have time to look at this again, you can make comments, review and merge my PRs, and address whatever comments I might have on the MHE API when I get around to it.
@@ -0,0 +1 @@
{"fs.pipeline.control_volume.flow_mass[*,1.0]": [553003.1393240632, 535472.4292275416], "fs.pipeline.control_volume.pressure[*,0]": [58.905880452751845, 58.92036532907178]}
(No newline at end of file)
Where does this come from? I never like having a "data file" in a repository without the script to generate it. You can see the get_control_inputs function in my PR, which creates a TimeSeriesData object with this info. This can be serialized if necessary with json.dump and TimeSeriesData.to_serializable.
These control inputs come from the results of run_pipeline_nmpc.py.
Yes, I think using TimeSeriesData to wrap these control inputs is more appropriate and easier for users to understand.
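The serialization route mentioned above (json.dump plus TimeSeriesData.to_serializable) can be sketched as follows. The TimeSeriesData class here is a minimal stand-in for the one in the nmpc_examples repository, written only to show the round trip; its attributes and constructor are assumptions, not the real API.

```python
import json

class TimeSeriesData:
    """Minimal stand-in for the TimeSeriesData class discussed above.
    Only the pieces needed to show the json.dump round trip are included;
    the real class in nmpc_examples has a different, richer interface."""
    def __init__(self, data, time_points):
        self.data = data              # dict: CUID string -> list of values
        self.time_points = time_points

    def to_serializable(self):
        # Return only built-in types so json.dump can handle them
        return {"data": dict(self.data), "time_points": list(self.time_points)}

# Control inputs in the shape of the JSON file under review
inputs = TimeSeriesData(
    {
        "fs.pipeline.control_volume.flow_mass[*,1.0]": [553003.14, 535472.43],
        "fs.pipeline.control_volume.pressure[*,0]": [58.91, 58.92],
    },
    time_points=[0.0, 2.0],
)

# Serialize alongside the script that generated the data
with open("control_inputs.json", "w") as f:
    json.dump(inputs.to_serializable(), f)

# Load it back for a sanity check
with open("control_inputs.json") as f:
    loaded = json.load(f)
```

The point of the suggestion is that the data file is then always produced by a script (e.g. the NMPC run) rather than committed by hand.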
#
# Load data into dynamic model
#
use_linker = False
Code fails if use_linker is set to True.
else:
    # If I want to use DynamicModelHelper:
    m_plant_helper = DynamicModelHelper(m_plant, m_plant.fs.time)
    m_plant_helper.load_scalar_data(scalar_data)
    m_plant_helper.load_data_at_time(initial_data)
The failure with use_linker had two causes:
- The DynamicModelHelper is used elsewhere
- The plant simulation doesn't converge if scalar data is not initialized
Yes, you're right. I saw that you construct the DynamicModelHelper for the plant and initialize the scalar data before the if statement. Thanks for fixing this!
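The ordering fix discussed in this exchange can be sketched as below: the scalar data is loaded unconditionally, before the use_linker branch, so both code paths start from an initialized model. All classes and names here are simplified stand-ins (a plain dict plays the model), not the real DynamicModelHelper or linker from the repository.

```python
class DynamicModelHelper:
    """Toy stand-in for the repository's DynamicModelHelper; a plain dict
    plays the role of the Pyomo model."""
    def __init__(self, model):
        self.model = model

    def load_scalar_data(self, data):
        # Set time-independent (scalar) values on the model
        self.model.update(data)

    def load_data_at_time(self, data):
        # Set time-indexed values at the initial time point
        self.model.update(data)

m_plant = {}
scalar_data = {"fs.pipeline.diameter": 0.92}                      # illustrative value
initial_data = {"fs.pipeline.control_volume.pressure[0]": 58.91}  # illustrative value

# Initialize scalar data before choosing a loading strategy, so the
# plant simulation converges regardless of which branch is taken.
m_plant_helper = DynamicModelHelper(m_plant)
m_plant_helper.load_scalar_data(scalar_data)

use_linker = False
if use_linker:
    pass  # a DynamicVarLinker-style object would copy values between models here
else:
    m_plant_helper.load_data_at_time(initial_data)
```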
nmpc_examples/mhe/mhe_constructor.py
Outdated
#TODO: This function is similar to "get_tracking_cost_from_constant_setpoint",
#Should I merge this one to that one?
As far as I can tell, this function is basically just doing what get_tracking_cost_from_constant_setpoint does, except it uses zero as the setpoint value for every variable, and specifies the weights in terms of "components" rather than the variables that appear in the cost expression. My approach to merge the two functions has been to just call get_tracking_cost... with a setpoint data structure with all zeros, then to require that weights are specified in terms of error variables. I don't think it should be this function's responsibility to assign the correct weight to the correct error variable. That should already be done. You can see my approach in my PR: kuanhanl#2.
Cool, I should have thought of setting the setpoint data to zeros.
Just a minor suggestion: shall we remove "Tracking" from line 69 of nmpc_examples/nmpc/cost_expressions.py, since MHE is using this function as well?
I also see you have a toggle between two different objective formulations for constructing measurement error costs. It looks good.
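The merge strategy described above can be illustrated with a toy version of the tracking-cost function: call it with an all-zero setpoint for every error variable, and specify the weights keyed by the error variables themselves. Plain Python numbers stand in for Pyomo variables and expressions, and the function name is illustrative, not the repository's exact API.

```python
def get_tracking_cost(variables, setpoints, weights):
    """Toy tracking cost: weighted sum of squared deviations from setpoints.
    Plain numbers stand in for Pyomo variables/expressions."""
    return sum(
        weights[name] * (variables[name] - setpoints[name]) ** 2
        for name in variables
    )

# Measurement-error cost as a special case of tracking cost: the setpoint
# is zero for every error variable, and weights are keyed by the error
# variables rather than by underlying model components.
error_vars = {"err_flow": 2.0, "err_pressure": -0.5}   # illustrative residuals
zero_setpoints = {name: 0.0 for name in error_vars}
weights = {"err_flow": 1.0, "err_pressure": 4.0}

cost = get_tracking_cost(error_vars, zero_setpoints, weights)
# 1.0 * 2.0**2 + 4.0 * (-0.5)**2 = 4.0 + 1.0 = 5.0
```

This keeps the weight-to-variable mapping out of the cost function, matching the point above that assigning the correct weight to the correct error variable should already be done by the caller.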
def test_small_mhe_simulation(self):
    simulation_data, estimate_data = run_mhe(
        simulation_horizon=4.0,
        sample_period=2.0,
    )
This test takes quite a while with the default estimator horizon (20 h, I think), so I shortened it. I end up getting the same answers either way. Does this make sense?
Yes, it does make sense in this case and this is related to your side question.
Side question: For MHE, should we be adding noise to the measurements? Also, in the example provided, our optimization problems converge with objective function values of zero. Is this expected, or a typical case?
@Robbybp To answer your side question: for a typical case or a real-world application, yes, we should add noise to the measurements, because measurements are seldom noise-free.
DNMY - patch to MHE with new data structure
Use existing functions and data structures wherever possible in MHE
Add an MHE example for the pipeline, including a test for it.