OpenQuake Engine 3.7.0
[Michele Simionato (@micheles)]
- Hiding calculations that fail before the pre-execute phase (for instance,
because of missing files); they already give a clear error
- Added an early check on `truncation_level` in the presence of a correlation
model
[Guillaume Daniel (@guyomd)]
- Implemented Ameri (2017) GMPE
[Michele Simionato (@micheles)]
- Changed the ruptures CSV exporter to use commas instead of tabs
- Added a check forbidding `aggregate_by` for non-ebrisk calculators
- Introduced a task queue
- Removed the `cache_XXX.hdf5` files by using the SWMR mode of h5py
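The SWMR (single-writer/multiple-readers) change above lets readers open the main datastore while it is still being written, making the cache files unnecessary. A minimal sketch of h5py's SWMR mode, independent of the engine (file and dataset names are illustrative):

```python
import h5py
import numpy as np

def write_and_read(path):
    """One writer and one concurrent reader on the same HDF5 file."""
    # SWMR requires the latest file format and all objects to be
    # created before the mode is switched on
    with h5py.File(path, 'w', libver='latest') as writer:
        dset = writer.create_dataset('gmvs', shape=(0,),
                                     maxshape=(None,), dtype='f4')
        writer.swmr_mode = True  # from here on, readers may open the file
        dset.resize((3,))
        dset[:] = np.array([0.1, 0.2, 0.3], dtype='f4')
        dset.flush()  # make the appended data visible to readers
        # a reader opened with swmr=True sees the flushed data even
        # though the writer still holds the file open
        with h5py.File(path, 'r', swmr=True) as reader:
            return reader['gmvs'][:].tolist()

print(len(write_and_read('demo.hdf5')))  # → 3
```

Note that the writer must enable `swmr_mode` explicitly; readers opened without `swmr=True` would be locked out while the file is being written.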
[Kris Vanneste (@krisvanneste)]
- Updated the coefficients table of the atkinson_2015 GMPE to the actual
values in the paper.
[Michele Simionato (@micheles)]
- Added an `/extract/agg_curves` API to extract both absolute and relative
loss curves from an ebrisk calculation
- Changed `oq reset --yes` to remove oqdata/user only in single-user mode
- Now the engine automatically sorts the user-provided intensity_measure_types
- Optimized the aggregation by tag
- Fixed a bug with the binning when disaggregating around the date line
- Fixed a prefiltering bug with complex fault sources: in some cases, blocks
of ruptures were incorrectly discarded
- Changed the sampling algorithm for the GMPE logic trees: now it does
not require building the full tree in memory
not require building the full tree in memory - Raised clear errors for geometry files without quotes or with the wrong
header in the multi_risk calculator - Changed the realizations.csv exporter to export '[FromShakeMap]' instead
of '[FromFile]' when needed - Changed the agg_curves exporter to export all realizations in a single file
and all statistics in a single file - Added rlz_id, rup_id and year to the losses_by_event output for ebrisk
- Fixed a bug in the ruptures XML exporter: the multiplicity was multiplied
(incorrectly) by the number of realizations - Fixed the pre-header of the CSV outputs to get proper CSV files
- Replaced the 64 bit event IDs in event based and scenario calculations
with 32 bit integers, for the happiness of Excel users
[Daniele Viganò (@daniviga)]
- Numpy 1.16, Scipy 1.3 and h5py 2.9 are now required
[Michele Simionato (@micheles)]
- Changed the ebrisk calculator to read the CompositeRiskModel directly
from the datastore, which means 20x less data transfer for Canada
[Anirudh Rao (@raoanirudh)]
- Fixed a bug in the gmf CSV importer: the coordinates were being
sorted and new site_ids assigned even though the user-supplied sites
CSV file had site_ids defined
[Michele Simionato (@micheles)]
- Fixed a bug in the rupture CSV exporter: the boundaries of a GriddedRupture
were exported with lons and lats inverted
- Added some metadata to the CSV risk outputs
- Changed the distribution mechanism in ebrisk to reduce the slow tasks
[Graeme Weatherill (@g-weatherill)]
- Updates Kotha et al. (2019) GMPE to July 2019 coefficients
- Adds subclasses to Kotha et al. (2019) to implement polynomial site
response models and a geology+slope site response model
- Adds a QA test to exercise all of the SERA site response calculators
[Michele Simionato (@micheles)]
- Internal: there is no need to call `gsim.init()` anymore
[Graeme Weatherill (@g-weatherill)]
- Adds parametric GMPE for cratonic regions in Europe
[Michele Simionato (@micheles)]
- In the agglosses output of scenario_risk the losses were incorrectly
multiplied by the realization weight
- Removed the output `sourcegroups` and added the output `events`
[Graeme Weatherill (@g-weatherill)]
- Adds new meta ground motion models to undertake PSHA using design code
based amplification coefficients (Eurocode 8, Pitilakis et al., 2018)
- Adds the site amplification model of Sandikkaya & Dinsever (2018)
[Marco Pagani (@mmpagani)]
- Added a new rupture-site metric: the azimuth to the closest point on the
rupture
[Michele Simionato (@micheles)]
- Fixed a regression in disaggregation with nonparametric sources, which
were effectively discarded
- The site amplification has been disabled by default in the ShakeMap
calculator, since it is usually already taken into account by the USGS
calculator, since it is usually already taken into account by the USGS
[Daniele Viganò (@daniviga)]
- Deleted calculations are not removed from the database anymore
- Removed the 'oq dbserver restart' command since it was broken
[Richard Styron (@cossatot)]
- Fixed `YoungsCoppersmith1985MFD.from_total_moment_rate()`: due to numeric
errors it was producing incorrect seismicity rates
[Michele Simionato (@micheles)]
- Now we generate the output `disagg_by_src` during disaggregation even in the
case of multiple realizations
- Changed the way the random seed is set for BT and PM distributions
- The filenames generated by the disagg_by_src exporter now contain the site ID
and not longitude and latitude, consistently with the other exporters
- Accepted again meanLRs greater than 1 in vulnerability functions of kind LN
- Fixed a bug in event based with correlation and a filtered site collection
- Fixed the CSV exporter for the realizations in the case of scenarios
with parametric GSIMs
- Removed some misleading warnings for calculations with a site model
- Added a check for missing `risk_investigation_time` in ebrisk
- Drastically reduced (measured improvements of over 40x) the memory
occupation, data transfer and data storage for multi-site disaggregation
- Sites for which the disaggregation PoE cannot be reached are discarded
and a warning is printed, rather than killing the whole computation
- `oq show performance` can be called in the middle of a computation again
- Filtered out the far away distances and reduced the time spent in
saving the performance info by orders of magnitude in large disaggregations
- Reduced the data transfer by reading the data directly from the
datastore in disaggregation calculations
- Reduced the memory consumption by sending disaggregation tasks incrementally
- Added an extract API `disagg_layer`
- Moved `max_sites_disagg` from openquake.cfg into the job.ini
- Fixed a bug with the --config option: serialize_jobs could not be overridden
- Implemented insured losses
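Since `max_sites_disagg` moved from the global openquake.cfg into the per-calculation job.ini, it can now be set alongside the other disaggregation parameters. A hypothetical job.ini fragment (section layout and values are illustrative, not defaults taken from the engine):

```ini
[general]
description = Disaggregation example
calculation_mode = disaggregation

[disaggregation]
# formerly a global setting in openquake.cfg, now per-calculation
max_sites_disagg = 10
```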