OM test suite #1254

Open · Mathadon opened this issue Apr 1, 2022 · 56 comments

Mathadon commented Apr 1, 2022

@casella suggests including IDEAS in the OpenModelica test suite #1253 (comment). I think this is a good idea and I hope to be able to make some time for it. Can you comment on what exactly we'd have to do for this?

Mathadon self-assigned this Apr 1, 2022

casella commented Apr 1, 2022

Give me your consent and the URLs of the branch(es) that we need to test. That's it 😃

One typical option is to test the last stable release (mainly to check regressions caused by OpenModelica developments) and the master/main/development branch, to also get early feedback about regressions caused by changes to the library.

BTW, as soon as we see that the level of support of your library in OpenModelica is decent, we can flag it as "supported", and you can easily install it using the new Package Manager, which will also be available in OMEdit from the 1.19.0 release.


Mathadon commented Apr 1, 2022

I can't formally consent on behalf of KU Leuven but you have my personal consent :)

Let's stick to the master branch: https://github.com/open-ideas/IDEAS/tree/master We don't have a development branch.

Will we get notifications somehow if we break something?

Package manager sounds great :)

Thanks!

Edit: what is tested exactly? Syntax checks on each model? Are models that extend Modelica.Icons.Example simulated? Thanks!


casella commented Apr 1, 2022

> I can't formally consent on behalf of KU Leuven but you have my personal consent :)

We don't need formal consent. The idea is that we want to test libraries whose developers are willing to fix issues, so they don't make omc look bad for reasons that are not our responsibility. That looks to be the case here 😄

> Let's stick to the master branch: https://github.com/open-ideas/IDEAS/tree/master We don't have a development branch.

It would be good if you also had some stable release versions in the long term. But we can start with that, no problem.

> Will we get notifications somehow if we break something?

No, but you can check this page, which contains a general regression report for all the libraries that we check, updated more or less daily. On this page, instead, you can check how your library works with past released versions of OMC and with the current development version (master).

> Package manager sounds great :)

It is. It is causing us some delay in the release of 1.19.0, but it's going to be quite useful. See documentation here.

> Edit: what is tested exactly? Syntax checks on each model? Are models that extend Modelica.Icons.Example simulated?

We run each model that has an experiment(StopTime) annotation. I hope you already have them in the models that can be simulated; otherwise it seems like a good idea to add them, because the default StopTime = 1 is not appropriate in most cases anyway.
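For reference, this is the kind of annotation we look for - a minimal made-up sketch, where the model and all values are hypothetical:

model DummyExample "Minimal example that the testsuite would pick up"
  extends Modelica.Icons.Example;
  Real T(start=293.15, fixed=true) "Some state";
equation
  der(T) = (283.15 - T)/3600 "First-order cooling";
  annotation (experiment(StopTime=86400, Tolerance=1e-6));
end DummyExample;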


casella commented Apr 1, 2022

@sjoelund, could you please add https://github.com/open-ideas/IDEAS to the package manager? For the time being, I would only add the master version and mark it as "experimental", because we need to fix some issues with the library to get that to work with OMC. When this is stabilized, @Mathadon can make a new release, and then we can also add that.


Mathadon commented Apr 2, 2022

@casella I'm working on MSL 4.0 support atm. I can include OM fixes too in a future release of IDEAS v3.


casella commented Apr 2, 2022

Sounds good. As soon as we have the first test report, we can also look for other small issues.


Mathadon commented Apr 11, 2022

@casella I'm working on release 3.0.0, which includes MSL 4.0 support. If there is feedback from your end about OM support, it would be good to fix those issues before making the release. :)

edit: the release candidate is already on the master branch


casella commented Apr 11, 2022

> @casella I'm working on release 3.0.0, which includes MSL 4.0 support. If there is feedback from your end about OM support, it would be good to fix those issues before making the release. :)

Sure. I thought IDEAS was already included in the library testsuite, but it isn't. @sjoelund, why is this library included in the package manager, but not tested?


casella commented Apr 11, 2022

@Mathadon I understand you are not too interested in running version 2.2.2 of IDEAS in OMC, and are fine with focusing on getting the new 3.0.0 version (currently under development) to work. Is that correct?

Mathadon commented

That is correct :)


casella commented Apr 12, 2022

OK, so for the time being we'll test the development version (master/HEAD); then, when you release 3.0.0, we'll also test that. I'll keep you posted.

Mathadon commented

@casella I see that IDEAS has been added but the test results are empty:
https://libraries.openmodelica.org/branches/master/IDEAS/IDEAS.html

Should I add something on the library side to get the tests to run?


casella commented May 17, 2022

I'm checking with @sjoelund, I'll keep you posted.



casella commented May 18, 2022

Aha, I thought this was taken care of automatically by changing conf.json.

OK, let me take care of that, and also update the documentation 😄


casella commented May 23, 2022

@sjoelund I updated the documentation of the library testing configuration; I hope it is now complete. At least it worked in the case of IDEAS 😄


casella commented May 23, 2022

@Mathadon we finally managed to get IDEAS tested. As agreed, we test the last released version (currently 3.0.0), to look for OMC regressions, and the latest version on the master branch, to look for library regressions.

The reports with the results of the testing with the standard compiler configurations are here:

The library testsuite is run about once per day. All regressions with the default configuration are reported in a new file here: https://libraries.openmodelica.org/branches/history/master/; you can also get graphs with the trends. There are similar reports for the special configurations, e.g. https://libraries.openmodelica.org/branches/history/daemode/


casella commented May 23, 2022

Getting to the specifics, the results with the released version are not bad: 87% of the models simulate successfully.

BTW, we are releasing OMC 1.19.0 any time soon (we are completing the beta testing now); the new OMC has an integrated package manager that allows installing open-source libraries with a few clicks.

Currently the support level of the released 3.0.0 version of IDEAS is marked as "experimental", but in fact it's already quite good, so we could also mark it as "supported", meaning partial, though not yet full, support. Pre-release versions (i.e. the master branch) are marked as not supported, because they could be broken at any time by a bad commit to the library repo.

In the result reports, you can click on the model name to see the warnings or errors at compile time, and on the "sim" link to see the runtime output. I had a quick look; there are probably just a handful of common issues to address, some on the library side and some on the tool side.

Last, but not least, if you can provide us reference results in either .csv or .mat format, we could verify the simulation results. My suggestion is that you place them in a separate GitHub repository, and force-push the same commit every time you update them, so that the size of the git repo doesn't grow - nobody cares about past versions of those files.

Mathadon added a commit that referenced this issue May 23, 2022: …RectangularZoneTemplateInterface.mo for avoiding warnings about protected variables for #1254

Mathadon mentioned this issue May 23, 2022
Mathadon commented

@casella I fixed some (but not all) issues on our side through 0519fd2. However, it seems that there are some bugs in OM too:

https://libraries.openmodelica.org/branches/master/IDEAS_dev/files/IDEAS_dev_IDEAS.Buildings.Components.Examples.BuildingShadeExample.err

[/home/hudson/saved_omc/libraries/.openmodelica/libraries/IDEAS 3.0.0-master/Buildings/Components/Shading/Overhang.mo:30:3-31:57:writable] Error: Non-array modification ‘false‘ for array component ‘fixed‘, possibly due to missing ‘each‘.

for code

  parameter Modelica.Units.SI.Length tmpH[4](each fixed=false)
    "Height rectangular sections used for superposition";

This seems to be an incorrect error message, since the declaration does contain 'each'.

A similar error:

[/home/hudson/saved_omc/libraries/.openmodelica/libraries/IDEAS 3.0.0-master/Buildings/Components/Interfaces/PartialSurface.mo:10:3-12:44:writable] Error: Non-array modification ‘IDEAS.Types.Tilt.Wall‘ for array component ‘inc‘, possibly due to missing ‘each‘.

for this line of code

  parameter Modelica.Units.SI.Angle inc=sim.incOpts[incOpt]
    "Custom inclination (tilt) angle of the wall, default wall"
    annotation (Dialog(enable=incOpt == 4));

Looking forward to the new test results!

Mathadon added a commit that referenced this issue May 24, 2022: fixed typo in LinRectangularZoneTemplate.mo for #1254

casella commented May 30, 2022

> Great! my laptop died today

Sorry to hear that. RIP.

> so I might not have the means to follow this up as much anymore over the coming days. But I have the impression that the major issues on our end have been solved in the meantime :)

Yeah. As soon as @perost gets a grip on #9027, there will be a few more models running.


casella commented May 30, 2022

There are a few small things you can fix on your side upon resurrection of your laptop, e.g. #1284 and #1285. No big deal.


casella commented May 30, 2022

I'm also trying to improve the situation with testing of larger models, see OpenModelica/OpenModelica#9038.


Mathadon commented Jun 2, 2022

@casella the benchmark results report 14s computation time for IDEAS.Buildings.Validation.Tests.Case600. I ran the same model using Dymola (using dassl). Dymola took 17s using Evaluate=false and 11s using Evaluate=true.

OM statistics:

events
    2101 state events
       0 time events
solver: dassl
  346034 steps taken
  546403 calls of functionODE
   75591 evaluations of jacobian
   36506 error test failures
       0 convergence test failures
5.70351s time of jacobian evaluation

Dymola statistics:

   Number of result points                 : 12896
   Number of grid points                   : 8761
   Number of accepted steps                : 341350
   Number of f-evaluations (dynamics)      : 748367
   Number of crossing function evaluations : 364195
   Number of Jacobian-evaluations          : 75907
   Number of model time events             : 0
   Number of input time events             : 0
   Number of state events                  : 2067

The number of ODE calls is significantly higher in Dymola (748367 instead of 546403), yet the total computation time is quite close, which suggests that the cost per function evaluation is still lower in Dymola. I wonder where the additional function evaluations come from. Perhaps OM is more efficient at computing the Jacobian?

When using Dymola block timers and Evaluate=true:

Name of block               , Block, Total CPU[s], Mean[us]    ( Min[us]    to Max[us]    ),   Called
Entire model                :     0,       89.257,       45.29 (       2.00 to      123.00),  1970892
Outside of model            :     3,        5.005,        2.54 (       1.00 to     9494.00),  1970895
DynamicsSection             :    14,       76.982,       39.32 (      38.00 to      114.00),  1958007

The profiling strongly affects the model evaluation time, so it is hard to draw conclusions from this.

This is using an unloaded Intel Xeon E5-1650 v4 @ 3.60 GHz for Dymola and a (probably fully loaded) AMD Ryzen 5950X @ 3.4 GHz for the OM results, which further complicates the comparison.

Overall I'd say that OM is performing pretty well on this example!


casella commented Jun 2, 2022

@Mathadon have you tried profiling the model in OpenModelica? That said, I'd say the performance is comparable, which is of course good, considering that Dymola is a commercial tool with 15+ more years of development behind it than OMC 😃
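In case you want to try: omc has a built-in profiler that reports per-equation and per-function execution times. Something along these lines should work - a sketch from memory, so double-check the flag values against omc --help:

// OpenModelica script (.mos); flag values from memory
setCommandLineOptions("--profiling=all");
loadModel(Modelica);
loadModel(IDEAS);
simulate(IDEAS.Buildings.Validation.Tests.Case600);
// The profiling files generated next to the simulation results
// contain the per-equation timings.
getErrorString();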

In my experience the Ryzen is significantly faster than the Xeon. This may also be important. The best machines for this kind of duty are gaming PCs with fast clocks.

BTW, we are currently working on multirate solvers: @bernhardbachmann has written a first version of a general multirate solver that will be merged into master before the end of June. I believe this could significantly improve the performance on building models. If you are interested, I can keep you in the loop. At some point, we could run some ad-hoc experiments with our test infrastructure, where we test IDEAS and Buildings with those algorithms, to see how they work.


casella commented Jun 2, 2022

Another 2 models now simulate in IDEAS_dev, and more are coming soon with your latest commit #1286.

The current simulation success ratio is 614/642 = 95.6%. As soon as we manage to give the test processes a bit more memory, 15 more models in the testsuite should work, bringing the success ratio over 98%.


Mathadon commented Jun 2, 2022

@casella I didn't use OpenModelica myself, I just copied the results from your benchmarks :) So I did not try profiling the model in OpenModelica.

I'm indeed interested in the multirate solvers. Do they use the principles of quantized state solvers?


sjoelund commented Jun 2, 2022

> In my experience the Ryzen is significantly faster than the Xeon. This may also be important. The best machines for this kind of duty are gaming PCs with fast clocks.

And in Ryzen's case also with fast memory (which we don't have in this machine). Still, the machine should be 70-80% faster than a Xeon for single-thread workloads. It's in the right ballpark for performance but I suspect we could do better.


casella commented Jun 2, 2022

> @casella I didn't use OpenModelica myself, I just copied the results from your benchmarks :) So I did not try profiling the model in OpenModelica.

Aha, I guess you should, eventually 😄. Anyway, having automated tests of the entire library is already good.

Please note that the test infrastructure allocates only one thread to each model, to maximise parallelism. Simulation is essentially a single-threaded task, so those figures are meaningful for real-life use. On the contrary, the whole code generation process can be carried out with a significant amount of parallelism - on my 24-core server, I typically observe an 8X average speed-up when compiling large models, compared to single-thread. Hence, generating and compiling the code will be much faster (2 to 10 times) if you run a single model at a time on a good workstation.

> I'm indeed interested in the multirate solvers. Do they use the principles of quantized state solvers?

No, although that is another concept that makes sense in this context, and which is definitely interesting for us.

The basic idea here is that you make a global step with any integration method, using error control to make sure that a certain percentage of the states (e.g. 90% or 95%), but not all of them, are below the error threshold. Then the remaining 10% or 5% of the states are refined on a finer time grid, possibly with another method, using interpolated values for the other 90-95% of the states, which were already found to be good enough.

In this way, if you open a window in a large building, this triggers a fast transient in one room, but it doesn't force the entire system to be integrated with the small steps required by that local transient - this is only done for the few states involved in that room. This avoids computing and inverting large Jacobians in those small steps (when using implicit solvers), and also avoids computing the entire derivative vector for those small refined steps, which can save a lot of time if the ratio of fast to slow variables is small. This last feature is not yet implemented in our solver, but we plan to add it in the near future.

The really good thing is that this logic is fully adaptive and automatic: the algorithm figures out when and where to refine the steps on a certain subset of states. You don't have to partition the model statically a priori; you only select the two integration methods and the percentage of fast states for refinement - that's it.
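In symbols, one macro-step of size $H$ goes roughly like this (my own schematic notation, not the actual implementation): advance all states over $[t, t+H]$ and obtain per-state error estimates $e_i$; accept the slow set $S = \{i : e_i \le \mathrm{tol}\}$, typically 90-95% of the states; then re-integrate only the fast set $F = \{i : e_i > \mathrm{tol}\}$ on sub-steps $h \ll H$, using values of $x_S$ interpolated on $[t, t+H]$.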


Mathadon commented Jun 3, 2022

@casella the IDEAS.LIDEAS models access all state variables of the model, many of which are protected. I do not intend to unprotect all those state variables, so I will not fix this 'bug' for now. I could remove the experiment annotation, but then the BuildingsPy code checker would complain about that. So those models will not pass for now.


Mathadon commented Jun 3, 2022

> > In my experience the Ryzen is significantly faster than the Xeon. This may also be important. The best machines for this kind of duty are gaming PCs with fast clocks.
>
> And in Ryzen's case also with fast memory (which we don't have in this machine). Still, the machine should be 70-80% faster than a Xeon for single-thread workloads. It's in the right ballpark for performance but I suspect we could do better.

A few years ago I looked into OM and noticed that it codegens a separate C function for each equation. This adds a lot of overhead for function calls. Is this still the case?


sjoelund commented Jun 3, 2022

It still does, yes. C compilation time scales with the size of a function. In the past everything was in a single file, but that meant all the system RAM and many hours were consumed compiling the model. Simple equations should probably be merged to find some middle ground.


Mathadon commented Jun 3, 2022

Isn't simulation speed more important than compilation speed? I would group equations in groups of at most a few hundred and codegen separate functions for those. But perhaps you need the separate function calls for the debugger and/or sparse evaluation?

Edit: also, the extra compilation time is probably put to good use by emitting more efficient binaries, e.g. grouping multiple scalar operations into AVX processor instructions, etc.

Dymola and JModelica generate large functions so I don't see why OM would suddenly experience memory issues due to this.


casella commented Jun 3, 2022

@Mathadon for the last 10 years the main goal of OpenModelica has been to expand the coverage of existing libraries, until basically all existing valid open-source Modelica models run successfully. As you can see from this report, of the 13595 models we try to simulate in our library testsuite, the number of successful simulations went from 10081 (74%) in version 1.12, released 5 years ago, to 12271 (90%) in the current development version. So we've made good progress on that, also considering that some of those models fail because they are not fully compliant with the standard. We are now in a position where we can start worrying a bit more about performance, maybe starting next year 😅

BTW, at Politecnico we are working on a high-performance Modelica compiler, codenamed MARCO, that takes the output of the OMC frontend and generates code with a completely different workflow, leveraging LLVM-IR. In that case we are not focusing on coverage for the time being, but rather on going as fast as we can with the simulations, albeit with some restrictions on the models we can actually simulate. We are planning to go open source by the end of this year, stay tuned.


casella commented Jun 3, 2022

> @casella the IDEAS.LIDEAS models access all state variables of the model, many of which are protected. I do not intend to unprotect all those state variables, so I will not fix this 'bug' for now. I could remove the experiment annotation, but then the BuildingsPy code checker would complain about that. So those models will not pass for now.

The reason why protected sections were introduced in Modelica is precisely to prevent access to a model's variables from the outside. I'm not sure why you declare those variables protected if you want to access them from the outside. Is it because you are re-using models that were originally conceived for other purposes and you don't want to waste time editing them?

If that is the case, this seems to me a completely reasonable use case, for which we could add a flag to only issue a warning in case of protected access violation, and then turn it on for those tests.

There is also the possibility to set up the testing to ignore some sub-packages, but I would prefer to test as many of them as possible, if they are of interest to you.

What do you think?


casella commented Jun 3, 2022

> A few years ago I looked into OM and noticed that it codegens a separate C function for each equation. This adds a lot of overhead for function calls.

@sjoelund can't those functions be inlined by optimizing C compilers? In that case, the overhead would only be at compile time, not at run time.


sjoelund commented Jun 3, 2022

We made these changes back when GCC was being used. GCC is really slow in many of our use cases. Even simple functions like the ones setting start values are problematic. Consider a file generated by the following shell script:

N=300000
echo "int main() {" > a.c
echo "int x[$N+1],y[$N+1],z[$N+1],a[$N+1],b[$N+1],c[$N+1];" >> a.c
for X in `seq $N`; do
  for V in x y z a b c; do
    echo "$V[$X] = $X * 1.5;" >> a.c
  done
done
echo "}" >> a.c

It's not a very complicated program, but consider the compilation times:

  • clang -O3 a.c: 0m26.291s (clang 10.0.0)
  • gcc -O3 a.c: 127m33.918s (GCC 9.4.0)

So we disable optimization for some of the functions that we compile... Some OMC versions ago, we compiled everything with -O0 for this very reason.

For sure grouping some operations together would be beneficial, but it's not been high on the list of priorities. The gains are relatively minor compared to fixes in the backend such as having better tearing.

Note that compilation performance is such a big problem that we split the code generation into multiple files if there are many equations. C compilers have been getting better and better so it might be time to look into making some changes now.

> @sjoelund can't those functions be inlined by optimizing C compilers? In that case, the overhead would only be at compile time, not at run time.

Sometimes. Not when they are in a different file.


Mathadon commented Jun 3, 2022

> > @casella the IDEAS.LIDEAS models access all state variables of the model, many of which are protected. I do not intend to unprotect all those state variables, so I will not fix this 'bug' for now. I could remove the experiment annotation, but then the BuildingsPy code checker would complain about that. So those models will not pass for now.
>
> The reason why protected sections were introduced in Modelica is precisely to prevent access to a model's variables from the outside. I'm not sure why you declare those variables protected if you want to access them from the outside. Is it because you are re-using models that were originally conceived for other purposes and you don't want to waste time editing them?
>
> If that is the case, this seems to me a completely reasonable use case, for which we could add a flag to only issue a warning in case of protected access violation, and then turn it on for those tests.

The reason why I protected the sub-models: to avoid generating overly large result files, by hiding the innermost components from the end user.

The reason why I want to use the variables in those models: it's a verification model where a linearised (state-space) model is compared to the original, and I want to access the state values to make sure that the initial states are identical. So I need all state variables (see the sketch at the end of this comment).

So it's a special use case. Adding a flag to ignore the warning would be great.

> There is also the possibility to set up the testing to ignore some sub-packages, but I would prefer to test as many of them as possible, if they are of interest to you.
>
> What do you think?

I prefer the former option.
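Roughly, the verification pattern looks like this - a heavily simplified sketch with hypothetical component names and placeholder matrices, not the actual LIDEAS code:

model VerifyLinearisation
  "Sketch: compare a linearised state-space model to the original model"
  OriginalZone zon "Original nonlinear model (hypothetical component)";
  Modelica.Blocks.Sources.Constant uSet(k=1) "Common input signal";
  Modelica.Blocks.Continuous.StateSpace ssm(
    A=[-1,0; 0,-2], B=[1; 1], C=[1,0], D=[0],
    initType=Modelica.Blocks.Types.Init.NoInit)
    "Linearised model (placeholder matrices)";
initial equation
  // The linearised states must start from the same values as the original
  // ones, which sit in protected sections of the sub-models - exactly the
  // access that currently fails in OMC:
  ssm.x[1] = zon.airModel.vol.T;
  ssm.x[2] = zon.heaCap.T;
equation
  connect(uSet.y, ssm.u[1]);
end VerifyLinearisation;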


Mathadon commented Jun 3, 2022

> We made these changes back when GCC was being used. GCC is really slow in many of our use cases. Even simple functions like the ones setting start values are problematic. Consider a file generated by the following shell script:
>
> N=300000
> echo "int main() {" > a.c
> echo "int x[$N+1],y[$N+1],z[$N+1],a[$N+1],b[$N+1],c[$N+1];" >> a.c
> for X in `seq $N`; do
>   for V in x y z a b c; do
>     echo "$V[$X] = $X * 1.5;" >> a.c
>   done
> done
> echo "}" >> a.c
>
> It's not a very complicated program, but consider the compilation times:
>
>   • clang -O3 a.c: 0m26.291s (clang 10.0.0)
>   • gcc -O3 a.c: 127m33.918s (GCC 9.4.0)

OK, good point, but that's an extreme case :) Still, the run-time impact is there!

Consider

N=30000
echo "int main() {" > a.c
echo "int x[$N+1],y[$N+1],z[$N+1],a[$N+1],b[$N+1],c[$N+1];" >> a.c
for X in `seq $N`; do
  for V in x y z a b c; do
    echo "$V[$X] = $X * 1.5;" >> a.c
  done
done
echo "}" >> a.c
echo "void mul(int *x, int *y, int *z, int *a, int *b, int *c, int val) {*x=val*1.5;*y=val*1.5;*z=val*1.5;*a=val*1.5;*b=val*1.5;*c=val*1.5;}" > mul.c
echo "extern void mul(int *x, int *y, int *z, int *a, int *b, int *c, int val);" > a2.c
echo "int main() {" >> a2.c
echo "int x[$N+1],y[$N+1],z[$N+1],a[$N+1],b[$N+1],c[$N+1];" >> a2.c
for X in `seq $N`; do
  echo "mul(x+$X,y+$X,z+$X,a+$X,b+$X,c+$X,$X);" >> a2.c
done
echo "}" >> a2.c

and

time gcc -O2 mul.c a2.c -o fun
time gcc -O2 a.c -o a
time { for i in {1..100}; do ./fun ; done }
time { for i in {1..100}; do ./a ; done }

results in

Compilation:
real	1m22.670s
user	1m21.332s
sys	0m1.320s

real	1m40.775s
user	1m40.496s
sys	0m0.264s

Execution:

real	0m0.293s
user	0m0.020s
sys	0m0.028s

real	0m0.173s
user	0m0.028s
sys	0m0.024s

So slightly faster compilation but almost 2x slower execution.

This example is even worse:

N=30000
echo "int mul(int a, int b){ return a*b;}" > mul.c
echo "extern int mul(int a, int b);" > a3.c
echo "int main() {" >> a3.c
echo "int x[$N+1],y[$N+1],z[$N+1],a[$N+1],b[$N+1],c[$N+1];" >> a3.c
for X in `seq $N`; do
  for V in x y z a b c; do
    echo "$V[$X] = mul($X, 1.5);" >> a3.c
  done
done
echo "}" >> a3.c
time gcc -O2 mul.c a3.c -o test

real	2m12.327s
user	2m11.060s
sys	0m1.252s

time { for i in {1..100}; do ./test ; done }

real	0m0.392s
user	0m0.028s
sys	0m0.020s

more than 2x slower.

> So we disable optimization for some of the functions that we compile... Some OMC versions ago, we compiled everything with -O0 for this very reason.

I can see the reason for that but it also has a significant impact.

Simulation of the model IDEAS.Examples.PPD12.VentilationRBC using -O0, -O1 and -O2, respectively:

   -O0: CPU-time for integration : 13 seconds
   -O1: CPU-time for integration : 8.87 seconds
   -O2: CPU-time for integration : 8.9 seconds

> For sure grouping some operations together would be beneficial, but it's not been high on the list of priorities. The gains are relatively minor compared to fixes in the backend such as having better tearing.

If I had to guess, I'd say most of the performance gap with Dymola is due to code gen. So 50% faster code could be possible.

> Note that compilation performance is such a big problem that we split the code generation into multiple files if there are many equations. C compilers have been getting better and better so it might be time to look into making some changes now.
>
> > @sjoelund can't those functions be inlined by optimizing C compilers? In that case, the overhead would only be at compile time, not at run time.
>
> Sometimes. Not when they are in a different file.


casella commented Jun 3, 2022

> So it's a special use case. Adding a flag to ignore the warning would be great.

OK. Let's go for it, see OpenModelica/OpenModelica#9059.


sjoelund commented Jun 4, 2022

> The reason why I protected the sub-models: to avoid generating overly large result files, by hiding the innermost components from the end user.

Is the HideResult annotation not enough in this case?
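Something along these lines - an untested sketch with a made-up model:

model ZoneSketch "Sketch: hide an internal variable from the result file"
  Modelica.Units.SI.Temperature T(start=293.15, fixed=true)
    "Internal temperature, public but hidden from the result file"
    annotation (HideResult=true);
equation
  der(T) = (283.15 - T)/3600;
end ZoneSketch;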


casella commented Jun 4, 2022

@sjoelund it is, but you would have to apply it on each and every component.


Mathadon commented Jun 25, 2022

@casella my new laptop arrived and I've worked through my backlog! I have the impression that the major Modelica spec compliance issues in IDEAS have been fixed and that the remaining issues are mostly related to OM. I propose to keep this issue open to track compatibility progress, but I presume it's now mostly a matter of prioritising OM development?

Thanks!

Mathadon commented

> @sjoelund it is, but you would have to apply it on each and every component.

That indeed would be a bit of a hassle. :)


casella commented Jun 26, 2022

@Mathadon I agree, though I still see a couple of minor issues at the library level:

I guess you can fix them in the library, and then issue a patch release 3.0.1 that works better with OpenModelica than 3.0.0, and will work even better as we fix the remaining issues in the tool.

Most other failures in the testsuite are either due to insufficient memory allocated to the test process, see #9038, or to the issue with protected element access, #9059. We'll address them ASAP on our side.

Then there are a few models that fail at initialization; these should be investigated further, maybe later this fall, as we currently have to solve similar problems with other libraries (e.g. Buildings) at higher priority. It is possible that those other fixes will have a positive effect on IDEAS as well; we'll see that in the regression reports.

Mathadon commented

LIDEAS protected variable access should be fixed through cbd1414.


casella commented Jun 30, 2022

Good, we should see the results tomorrow here.


Mathadon commented Jul 14, 2022

Some more fixes pushed here: 974f7f6

Mathadon commented

More fixes pushed: 73ea45f
