
Conversation


kuanhanl commented May 1, 2022

Add an MHE example for the pipeline, including a test for it.

kuanhanl changed the title from Pipline mhe to Pipline MHE on May 1, 2022

Robbybp commented May 2, 2022

Thanks Kuan-Han, this looks really good at first glance. I especially like the small size of the functions in the MHE constructor file. I intend to give this a detailed review relatively soon.


kuanhanl commented May 2, 2022

> Thanks Kuan-Han, this looks really good at first glance. I especially like the small size of the functions in the MHE constructor file. I intend to give this a detailed review relatively soon.

You're welcome, Robby! I really enjoyed reviewing your code and learning from it. Please let me know if you have any questions or comments.

Robbybp (Owner) left a comment


Hi Kuan-Han, I think I will review this code in three stages:

  1. Make sure my data structures and functions hold up to the task of being used for MHE. It seems like you are using the linker, helper, and time series data structures effectively, but I noticed that you could be using my functions for creating "cost expressions." I implemented this in a separate branch, kuanhanl#2, if you are curious.
  2. Add functionality to my data structures and functions that may be useful based on your code. This might be something like a class for "scalar data from a time-indexed variable" for things like objective weights. I'll use your MHE as a test case for this and open another PR into this branch when I have this functionality.
  3. Review the actual implementation of MHE here, e.g. the APIs to construct model components.

My motivation for these separate stages in this order is that I can do the first two independently (other than merging PRs into your MHE branch, which is not strictly necessary for me to work on these). Once these first two have been done and tested with both NMPC and MHE (although not necessarily merged into this branch), I can go ahead and work on a PR into IDAES with the data structures and utility functions. Then, when you have time to look at this again, you can make comments, review and merge my PRs, and address whatever comments I might have on the MHE API when I get around to it.

@@ -0,0 +1 @@
{"fs.pipeline.control_volume.flow_mass[*,1.0]": [553003.1393240632, 535472.4292275416], "fs.pipeline.control_volume.pressure[*,0]": [58.905880452751845, 58.92036532907178]} No newline at end of file
Robbybp (Owner):

Where does this come from? I never like having a "data file" in a repository without the script to generate it. You can see the get_control_inputs function in my PR, which creates a TimeSeriesData object with this info. This can be serialized if necessary with json.dump and TimeSeriesData.to_serializable.
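Something like the following is what I have in mind (the import path and the get_control_inputs signature here are a sketch, not the exact code in my PR):

```python
# Hypothetical sketch: regenerate the control-input file instead of committing
# it by hand. The import path and the get_control_inputs call are assumptions;
# to_serializable is the TimeSeriesData method mentioned above.
import json

from nmpc_examples.pipeline.run_pipeline_nmpc import get_control_inputs  # assumed path

control_inputs = get_control_inputs()  # TimeSeriesData of NMPC control inputs
with open("pipeline_control_inputs.json", "w") as fp:
    json.dump(control_inputs.to_serializable(), fp)
```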

kuanhanl (Author):

These control inputs come from the results of run_pipeline_nmpc.py.
Yes, I think using TimeSeriesData to wrap these control inputs is more appropriate and easier for users to understand.

#
# Load data into dynamic model
#
use_linker = False
Robbybp (Owner):

The code fails if use_linker is set to True.

Comment on lines 101 to 105
else:
# If I want to use DynamicModelHelper:
m_plant_helper = DynamicModelHelper(m_plant, m_plant.fs.time)
m_plant_helper.load_scalar_data(scalar_data)
m_plant_helper.load_data_at_time(initial_data)
Robbybp (Owner):

The failure with use_linker was for two reasons:

  1. The DynamicModelHelper is used elsewhere.
  2. The plant simulation doesn't converge if the scalar data is not initialized.

kuanhanl (Author):

Yes, you are right.
I saw that you construct the DynamicModelHelper for the plant and initialize the scalar data before the if statement.
Thanks for fixing this!

Comment on lines 218 to 219
#TODO: This function is similar to "get_tracking_cost_from_constant_setpoint",
#Should I merge this one to that one?
Robbybp (Owner):

As far as I can tell, this function is basically just doing what get_tracking_cost_from_constant_setpoint does, except it uses zero as the setpoint value for every variable, and specifies the weights in terms of "components" rather than the variables that appear in the cost expression. My approach to merge the two functions has been to just call get_tracking_cost... with a setpoint data structure with all zeros, then to require that weights are specified in terms of error variables. I don't think it should be this function's responsibility to assign the correct weight to the correct error variable. That should already be done. You can see my approach in my PR: kuanhanl#2.
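Roughly the idea, written in plain Pyomo rather than with the actual helper functions from this repository, so the names and structure below are only illustrative:

```python
# Illustrative Pyomo sketch, not the repository's API: a "tracking" cost with
# all-zero setpoints is just a squared-error cost, and the weights are keyed by
# the error variables that appear in the cost expression.
import pyomo.environ as pyo
from pyomo.common.collections import ComponentMap

m = pyo.ConcreteModel()
m.time = pyo.Set(initialize=[0, 1, 2])
m.error = pyo.Var(m.time, initialize=0.0)  # stand-in for measurement error variables

setpoint = {t: 0.0 for t in m.time}          # all-zero "setpoints"
weights = ComponentMap([(m.error, 10.0)])    # weight keyed by the error variable

def error_cost_rule(m, t):
    return weights[m.error] * (m.error[t] - setpoint[t]) ** 2

m.error_cost = pyo.Expression(m.time, rule=error_cost_rule)
m.estimation_objective = pyo.Objective(
    expr=sum(m.error_cost[t] for t in m.time), sense=pyo.minimize
)
```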

kuanhanl (Author):

Cool, I should have thought of setting the setpoint data to zeros.
Just a minor suggestion: shall we remove "Tracking" from line 69 of nmpc_examples/nmpc/cost_expressions.py, since MHE uses this function as well?

kuanhanl (Author):

I also see you have a toggle between two different objective formulations for constructing measurement error costs. It looks good.

Comment on lines 8 to 12
def test_small_mhe_simulation(self):
simulation_data, estimate_data = run_mhe(
simulation_horizon=4.0,
sample_period=2.0,
)
Robbybp (Owner):

This test takes quite a while with the default estimator horizon (20 h, I think), so I shortened it. I end up getting the same answers either way. Does this make sense?

kuanhanl (Author):

Yes, it does make sense in this case, and it is related to your side question.


Robbybp commented Jun 1, 2022

Side question: For MHE, should we be adding noise to the measurements? Also, in the example provided, our optimization problems converge with objective function values of zero. Is this expected, or a typical case?

kuanhanl (Author):

@Robbybp To answer your side question:
In this example, you can see that I directly transfer some outputs from the plant to the estimator as measurements, without adding any noise. It is therefore reasonable to expect the optimization problems to converge with objective values of zero, and for the estimation accuracy to be the same even if you shorten the horizon: the optimizer should be able to recover the exact true states from the true controls, the noise-free measurements, and the dynamic model.

For a typical case or a real-world application, yes, we should add noise to the measurements, because measurements are seldom noise-free.
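If we do add noise, a minimal sketch could look like the following (the names, values, and standard deviations are made up for illustration, not taken from this example):

```python
# Illustrative sketch only: perturb noise-free plant measurements with Gaussian
# noise before loading them into the estimator.
import numpy as np

rng = np.random.default_rng(seed=0)

def add_measurement_noise(measurements, stddevs):
    # measurements: name -> list of noise-free values; stddevs: name -> sigma
    return {
        name: [value + rng.normal(0.0, stddevs[name]) for value in values]
        for name, values in measurements.items()
    }

noisy = add_measurement_noise(
    {"outlet_pressure": [56.3, 55.8, 55.1]},
    {"outlet_pressure": 0.1},
)
```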
