Simple 0D Nanofiltration model #1536
base: main
Conversation
@declare_process_block_class("Nanofiltration0D")
class Nanofiltration0DData(UnitModelBlockData):
My high-level question before diving in more deeply is: why do we need another NF model class? Could we instead just modify the existing NF_ZO model and potentially rename it if needed? Or could we delete NF_ZO and replace it with this?
My concern is that all of the different NF files that we have now can already be confusing, and adding another separate NF file will add to that confusion. So it'd be best if we could consolidate.
After the fact, I am asking myself if it would have been better to merge them. In part I did this to ensure backward compatibility as this model makes a few additional assumptions (both to simplify and to enforce electroneutrality).
The main thing I think we need to do here is consolidate the two NF models (ZO and this new 0D). After thinking it over, I think the way to do this is to incorporate the new changes into the former ZO model (and rename ZO to 0D) but omit some of the new config options that deviate from "the standard".
I also had some questions and comments concerning electroneutrality.
assert stream[0].is_property_constructed(v.name)

initializer = stream.default_initializer()
initializer.initialize(stream)
What are your thoughts on asserting a particular record of the initializer? For example, we could use some of the Diagnostic Toolbox functionality to record numerical issues after having applied the initializer. The main idea being that we can leave some form of a benchmark behind in the test, and if we attempt to refine the initializer (or compare with another initializer) in the future, we'd have some sort of metric to compare against and try to improve on.
The first thing I'd want to know is how the outcome of this initializer compares with the old `initialize` method.
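One hypothetical way to leave that kind of benchmark behind, sketched with the IDAES Diagnostics Toolbox (the `stream` fixture name matches the test above; everything else is illustrative):

```python
from idaes.core.util.model_diagnostics import DiagnosticsToolbox

# Run the Initializer, then record/assert the numerical state of the block so
# future refinements to the Initializer have a baseline to compare against.
initializer = stream.default_initializer()
initializer.initialize(stream)

dt = DiagnosticsToolbox(stream[0])
dt.report_numerical_issues()       # summary in the test log for manual comparison
dt.assert_no_numerical_warnings()  # hard benchmark: fails if new warnings appear
```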
I am not quite sure what you would be asserting in this case. What is the outcome of an initialization in the general sense? One might say that it is converging the model to a given tolerance, but that is not quite true, as it only needs to get the model to the point where a full solver can take over (although many of our routines include that as the final step).
TBH, this thought was geared more towards the Scaler originally, but then I figured we should think about doing some form of check on Initializers as well, although it is not apparent what the best way to go about it would be, or whether it would be worthwhile for something like a property block.
One idea, even though it may not be the best approach, is to check the approximate number of iterations before convergence, which could be a pain to keep consistent between the different runners. Alternatively, we could set a low max_iter and check the state of the model. Now that I think about it, though, this probably makes more sense for a flowsheet or a complex model and not so much for a more basic model.
Overall, the main question I wanted to answer was: how can we compare two initializers/initialization methods at this level to decide whether one is superior (for the tested use case)?
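As a rough sketch of the max_iter idea (the model name `m`, solver choice, and option values are placeholders, not anything from this PR):

```python
from pyomo.environ import assert_optimal_termination
from idaes.core.solvers import get_solver

# Cap the iteration count so the test only passes if initialization left the
# model close enough that IPOPT can finish within a small budget.
solver = get_solver("ipopt")
solver.options["max_iter"] = 20  # deliberately low ceiling

results = solver.solve(m, tee=False)
assert_optimal_termination(results)
```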
That is not easy to define. For one, determining the number of iterations is difficult to start with (as the solvers do not report it), and sometimes more iterations are acceptable if they result in greater robustness across a range of conditions. That is perhaps the bigger point: it is hard to mark one initializer as superior to another without considering what each was designed to do (and this is ultimately why IDAES moved to class-based initializers, so that we could have multiple options if required).
One thing I can say, however, is that the number of iterations should be fairly consistent across runners if you have a well-scaled problem (but all bets are off if the scaling is not good).
set_scaling_factor(stream[0].flow_mass_phase_comp["Liq", "Cl_-"], 1e2)
set_scaling_factor(stream[0].flow_mass_phase_comp["Liq", "Mg_2+"], 1e3)

scaler.scale_model(stream[0])
Similar idea here, but probably easier to do. I am unsure which metric(s) would be best to "benchmark" against, but perhaps asserting the condition number, for example, would be one way of comparing against the performance of the former scaling approach as well as any new scalers (or changes to this one) that might be introduced later.
I am mainly looking for a way to avoid overlooking any potential degradation in model performance/stability as we continue to introduce these scaler classes. Maybe the ScalingProfiler can be used to some extent for this, but I don't think that could apply to comparisons against the old method-based approach for scaling.
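A possible sketch of the condition-number benchmark, assuming the IDAES `jacobian_cond` utility is a reasonable fit here (the threshold is an arbitrary placeholder to be tuned from a baseline run):

```python
from idaes.core.util.scaling import jacobian_cond

# Apply the new Scaler, then record the condition number of the scaled Jacobian
# as a benchmark against the old scaling approach or future Scaler changes.
scaler.scale_model(stream[0])
cond = jacobian_cond(stream[0], scaled=True)
assert cond < 1e10  # placeholder threshold
```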
For that you want to look at this: https://github.com/IDAES/idaes-compatibility/tree/main/src/idaes_compatibility/robustness
Rather than look at some numerical characteristic of the model, you want to look at how well the solver performed at solving it. Additionally, it cannot be a single point test, but needs to cover a range of conditions.
That would be perfect! I didn't know about this IPOPTConvergenceAnalysis class. Didn't explore its full functionality yet, but if I am understanding correctly, then I would say we should test a range of conditions to check how the scaler affects convergence.
I would even say that we can think about a way to apply that to probing how "good" an initializer is as well, granted we'd have to follow up the initialization with a solve if it wasn't baked in already.
I will note the plan is/was to move IPOPTConvergenceAnalysis to use the parameter sweep tool once it is ready (I tried to make it mostly compatible).
@declare_process_block_class("Nanofiltration0D")
class Nanofiltration0DData(UnitModelBlockData):
    """
    Zero order nanofiltration model.
Suggested change:
- Zero order nanofiltration model.
+ Zero dimensional nanofiltration model.
This brings me to my main comment on this PR: we should only have one NF file or the other between the ZO and 0D forms. So, I think it would make sense to get rid of the ZO formulation and incorporate anything from it that is not included in the 0D formulation here.
Actually, it may be better to just incorporate additions made in NF_0D here to the existing NF_ZO formulation (and rename ZO to 0D). I say this after observing deviations in config options and the relation of those deviations to the omission of a ControlVolume.
CONFIG.declare(
    "include_pressure_balance",
    ConfigValue(
        default=True,
        domain=Bool,
        description="Whether to include pressure balance for retentate side.",
    ),
)
CONFIG.declare(
    "has_retentate_pressure_drop",
    ConfigValue(
        default=False,
        domain=Bool,
        description="Whether to include pressure drop in the retentate side.",
    ),
)
CONFIG.declare(
    "include_temperature_equality",
    ConfigValue(
        default=True,
        domain=Bool,
        description="Whether to include temperature equality constraints",
        doc="""Argument indicating whether temperature equality should be
included. If this is False, no energy balance constraint will be
written.""",
    ),
These config options deviate from our typical "standard", and I suppose that is partly because you aren't using a ControlVolume0D in the model. Regardless, I would think that you could at least maintain options like "has_pressure_change". With regard to isothermal constraints, we have an isothermal ControlVolume with config options that enable what "include_temperature_equality" does here.
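For reference, a sketch of what the conventional option looks like in most of our unit models (the description/doc wording is illustrative, not copied from any particular file):

```python
CONFIG.declare(
    "has_pressure_change",
    ConfigValue(
        default=False,
        domain=Bool,
        description="Pressure change term construction flag",
        doc="""Indicates whether terms for pressure change should be
constructed. If False, no pressure drop variable or constraint is added
for the retentate side.""",
    ),
)
```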
I did debate this whilst writing the model, and decided not to be overly constrained by standards and instead focus on what made sense for the case at hand. Config option names are easily changed at least, but my initial feeling is that a control volume would be an unnecessary complication in this case.
If we didn't already have the NF_ZO model, then I would agree. Since we do though, I think the merging of the two should resolve this, and we can just keep the "standard" config options from the ZO formulation while adding your electroneutrality option.
CONFIG.declare(
    "electroneutrality_ion",
    ConfigValue(
        default="Cl_-",
        domain=str,
        description="Balancing ion to use to maintain electroneutrality",
        doc="""Name of ion to use to maintain electroneutrality in
permeate stream. If an ion is provided, the split fraction
for this ion will be solved implicitly to ensure electroneutrality
is maintained in the permeate stream. If None, user must provide
split fractions for all monovalent ions and electroneutrality is
not guaranteed.""",
    ),
)
I am thinking that you do not need this option because MCAS will handle electroneutrality for you with the "assert_electroneutrality" method, where you can choose your target ion for rebalancing if needed.
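For reference, a sketch of the MCAS route I had in mind (block names are placeholders; `assert_electroneutrality` and its `adjust_by_ion` argument are on the MCAS state block):

```python
# Check the feed composition up front and let MCAS adjust the chosen ion if the
# supplied concentrations are not electroneutral.
m.fs.feed[0].assert_electroneutrality(
    defined_state=True,    # the feed-side state is fully specified
    adjust_by_ion="Cl_-",  # target ion to rebalance if needed
)
```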
I think I see why you need this now. If one wants to fix split fractions for each ion, one would need to ensure electroneutrality is maintained in the permeate, and choosing the adjustable ion would allow for that to occur. Is that the main intention?
Another thought on this--suppose I have some performance data for my membrane, and I know what the rejection rate (or alternatively solute recovery) is for each ion that I am interested in and have characterized in my solution composition. There could be the case that electroneutrality is not satisfied due to some unmeasured constituent's contribution.
Is your solution to this case to essentially not define any electroneutrality ion, thereby disregarding electroneutrality altogether in the permeate? In that case, I wonder if we should log a warning for the user: we'd check whether electroneutrality would be satisfied in the permeate and warn if it would be violated. This way, a user who is already aware of the violation and expects it can wave off the warning, while a naive user who is making an unintentional error would get early notice that they should balance electroneutrality.
Perhaps this could be injected into the initialization routine (or ignored, because we don't know when the user might decide to fix/unfix split fractions).
Actually, my response in that case is that you need to do data reconciliation to get an electroneutral solution; otherwise your data is not correct.
The model does support `None` for the electroneutrality species to allow you to forgo that constraint (replacing it with a standard split fraction) for cases where the user wants to do that. As for doing a check on electroneutrality, to me that is a post-check the user should be doing, as there are too many edge cases that could occur.
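If a user does want that post-check, a minimal sketch (all block and attribute names are placeholders based on MCAS conventions such as `charge_comp` and `ion_set`):

```python
from pyomo.environ import value

# Net charge flow in the permeate; should be approximately zero if
# electroneutrality holds.
net_charge = sum(
    value(m.fs.properties.charge_comp[j])
    * value(permeate_state.flow_mol_phase_comp["Liq", j])
    for j in m.fs.properties.ion_set
)
if abs(net_charge) > 1e-8:
    print(f"Permeate not electroneutral: net charge flow = {net_charge:.3e}")
```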
"Solute recovery", ":math:`R_j`", "solute_recovery", [j], ":math:`\text{dimensionless}`" | ||
"Multivalent ion recovery", ":math:`R_{multi}`", "multivalent_recovery", None, ":math:`\text{dimensionless}`" |
I think solute rejection is more commonly used in practice. The ZO formulation includes rejection, and I think you could keep both when merging the two models.
I decided to only include one of rejection or recovery to avoid any chance of over-specifying the problem - it would be easy to switch to rejection if that would be a better choice. I would generally discourage having both, as someone will attempt to fix both of them and wonder why the model fails to solve.
In that case, we should move to rejection.
Rejection as in `1 - permeate_concentration/feed_concentration`. This should already be in the ZO model.
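Spelled out (notation mine, not from the model docs), that is the observed rejection for each solute $j$:

$$R_{j}^{\mathrm{rej}} = 1 - \frac{C_{j,\mathrm{permeate}}}{C_{j,\mathrm{feed}}}$$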
Fixes/Resolves: None

Summary/Motivation:

The existing Nanofiltration_ZO model does not preserve electroneutrality in the outlet streams. Additionally, the MCAS property package does not have a new-style `Initializer` or `Scaler`.

Changes proposed in this PR:

- `Nanofiltration0D` unit model with user-provided recovery fractions and an option to maintain electroneutrality; uses `BlockTriangularizationInitializer` by default.
- `MCASScaler` for use with the new scaling API.

Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: