Merge branch 'release/3.16.0' into feature/plugins

This commit is contained in:
Emily Soth 2025-05-22 12:53:44 -07:00
commit f0c5d67c80
13 changed files with 380 additions and 157 deletions

View File

@ -23,9 +23,9 @@
- Urban Flood Risk
- Urban Nature Access
- Urban Stormwater Retention
- Visitation: Recreation and Tourism
- Wave Energy
- Wind Energy
- Visitation: Recreation and Tourism
Workbench fixes/enhancements:
- Workbench
@ -70,6 +70,46 @@ Workbench
existing user-added metadata preserved)
(`#1774 <https://github.com/natcap/invest/issues/1774>`_).
Coastal Blue Carbon
===================
* Updated the Coastal Blue Carbon documentation to clarify what happens when a
class transitions from a state of accumulation or decay to a No Carbon Change
("NCC") state. (`#671 <https://github.com/natcap/invest/issues/671>`_).
HRA
===
* The intermediate simplified vectors will now inherit their geometry type from
the input vectors, rather than using ``ogr.wkbUnknown``
(`#1881 <https://github.com/natcap/invest/issues/1881>`_).
NDR
===
* Fixed a bug in the effective retention calculation where nodata pour point
pixels were mistakenly used as real data. The effect of this change is most
pronounced along stream edges and should not affect the overall pattern of
results. (`#1845 <https://github.com/natcap/invest/issues/1845>`_)
* ``stream.tif`` is now saved in the main output folder rather than the
intermediate folder (`#1864 <https://github.com/natcap/invest/issues/1864>`_).
Seasonal Water Yield
====================
* ``stream.tif`` is now saved in the main output folder rather than the
intermediate folder (`#1864 <https://github.com/natcap/invest/issues/1864>`_).
Urban Flood Risk
================
* The raster output ``Runoff_retention.tif`` has been renamed
``Runoff_retention_index.tif`` to clarify the difference between it and
``Runoff_retention_m3.tif``
(`#1837 <https://github.com/natcap/invest/issues/1837>`_).
Visitation: Recreation and Tourism
==================================
* User-day variables ``pr_PUD``, ``pr_TUD``, and ``avg_pr_UD`` are now
calculated and written to ``regression_data.gpkg`` even if the Compute
Regression option is not selected.
(`#1893 <https://github.com/natcap/invest/issues/1893>`_).
3.15.1 (2025-05-06)
-------------------

View File

@ -0,0 +1,153 @@
########################################################
# This was written and is maintained by:
# Kirk Bauer <kirk@kaybee.org>
#
# Please send all comments, suggestions, bug reports,
# etc, to kirk@kaybee.org.
#
########################################################
# NOTE:
# All these options are the defaults if you run logwatch with no
# command-line arguments. You can override all of these on the
# command-line.
# You can put comments anywhere you want to. They are effective for the
# rest of the line.
# this is in the format of <name> = <value>. Whitespace at the beginning
# and end of the lines is removed. Whitespace before and after the = sign
# is removed. Everything is case *insensitive*.
# Yes = True = On = 1
# No = False = Off = 0
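# For example, a one-off run can override these defaults from the shell
# (illustrative only; see `man logwatch` for the full option list):
#   logwatch --range today --detail Low --output stdout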
# You can override the default temp directory (/tmp) here
TmpDir = /var/cache/logwatch
# Output/Format Options
# By default Logwatch will print to stdout in text with no encoding.
# To make email the default, set Output = mail; to save to a file, set Output = file
Output = mail
# To make HTML the default format, set Format = html
Format = text
# To make Base64 the default encoding, set Encode = base64
# Encode = none is the same as Encode = 8bit.
# You can also specify 'Encode = 7bit', but only if all text is ASCII.
Encode = none
# Input Encoding
# Logwatch assumes that the input is in UTF-8 encoding. Defining CharEncoding
# will use iconv to convert text to the UTF-8 encoding. Set CharEncoding
# to an empty string to use the default current locale. If set to a valid
# encoding, the input characters are converted to UTF-8, discarding any
# illegal characters. Valid encodings are as used by the iconv program,
# and `iconv -l` lists valid character set encodings.
# Setting CharEncoding to UTF-8 simply discards illegal UTF-8 characters.
#CharEncoding = ""
# Default person to mail reports to. Can be a local account or a
# complete email address. The Output variable must be set to mail, or
# --output mail passed on the command line, to enable the mail feature.
MailTo = jdouglass@stanford.edu
# When using option --multiemail, it is possible to specify a different
# email recipient per host processed. For example, to send the report
# for hostname host1 to user@example.com, use:
#Mailto_host1 = user@example.com
# Multiple recipients can be specified by separating them with a space.
# Default person to mail reports from. Can be a local account or a
# complete email address.
MailFrom = logwatch@ncp-inkwell.stanford.edu
# If set, the results will be saved in <filename> instead of mailed
# or displayed. Be sure to also set Output = file.
#Filename = /tmp/logwatch
# Use archives? If set to 'Yes', the archives of logfiles
# (i.e. /var/log/messages.1 or /var/log/messages.1.gz) will
# be searched in addition to the /var/log/messages file.
# This usually will not do much if your range is set to just
# 'Yesterday' or 'Today'... it is probably best used with Range = All
# By default this is now set to Yes. To turn off Archives uncomment this.
#Archives = No
# The default time range for the report...
# The current choices are All, Today, Yesterday
Range = yesterday
# The default detail level for the report.
# This can either be Low, Med, High or a number.
# Low = 0
# Med = 5
# High = 10
Detail = High
# The 'Service' option expects either the name of a filter
# (in /usr/share/logwatch/scripts/services/*) or 'All'.
# The default service(s) to report on. This should be left as All for
# most people.
Service = All
# You can also disable certain services (when specifying all)
#Service = "-zz-network" # Prevents execution of zz-network service, which
# # prints useful network configuration info.
#Service = "-zz-sys" # Prevents execution of zz-sys service, which
# # prints useful system configuration info.
#Service = "-eximstats" # Prevents execution of eximstats service, which
# # is a wrapper for the eximstats program.
# If you only cared about FTP messages, you could use these 2 lines
# instead of the above:
#Service = ftpd-messages # Processes ftpd messages in /var/log/messages
#Service = ftpd-xferlog # Processes ftpd messages in /var/log/xferlog
# Maybe you only wanted reports on PAM messages, then you would use:
#Service = pam_pwdb # PAM_pwdb messages - usually quite a bit
#Service = pam # General PAM messages... usually not many
# You can also choose to use the 'LogFile' option. This will cause
# logwatch to analyze only that one logfile. For example:
#LogFile = messages
# will process /var/log/messages. This will run all the filters that
# process that logfile. This option is probably not too useful for
# most people. Setting 'Service' to 'All' above analyzes all LogFiles
# anyway.
#
# By default we assume that all Unix systems have sendmail or a sendmail-like MTA.
# The mailer code prints a header with To: From: and Subject:.
# At this point you can change the mailer to anything that can handle this output
# stream.
# TODO test variables in the mailer string to see if the To/From/Subject can be set
# from here without breaking anything. This would allow mail/mailx/nail etc. -mgt
mailer = "/usr/sbin/sendmail -t"
#
# With this option set to a comma-separated list of hostnames, only log entries
# for these particular hosts will be processed. This can allow a log host to
# process only its own logs, or Logwatch can be run once per set of hosts
# included in the logfiles.
# Example: HostLimit = hosta,hostb,myhost
#
# The default is to report on all log entries, regardless of their source host.
# Note that some logfiles do not include host information and will not be
# influenced by this setting.
#
#HostLimit = myhost
# Default Log Directory
# All log-files are assumed to be given relative to the LogDir directory.
# Multiple LogDir statements are possible. Additional configuration variables
# to set particular directories follow, so LogDir need not be set.
#LogDir = /var/log
#
# By default /var/adm is searched after LogDir.
#AppendVarAdmToLogDirs = 1
#
# By default /var/log is searched after LogDir and /var/adm.
#AppendVarLogToLogDirs = 1
#
# The current working directory can be searched after the above. Not set by
# default.
#AppendCWDToLogDirs = 0
# vi: shiftwidth=3 tabstop=3 et

View File

@ -43,6 +43,14 @@
- google-cloud-sdk
- google-cloud-cli
- yubikey-manager
- logwatch
- fwlogwatch
- sendmail
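# Deploy the customized logwatch configuration included in this commit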
- name: Configure logwatch
ansible.builtin.copy:
src: logwatch.conf
dest: /etc/logwatch/conf/logwatch.conf
- name: Add bookworm-backports repository
ansible.builtin.apt_repository:
@ -137,3 +145,4 @@
daemon_reload: true # reload in case there are any config changes
state: restarted
enabled: true

View File

@ -0,0 +1,44 @@
# ADR-0004: Remove Wind Energy Raster Outputs
Author: Megan Nissel
Science Lead: Rob Griffin
## Context
The Wind Energy model has three major data inputs required for all runs: a Wind Data Points CSV, containing Weibull parameters for each wind data point; a bathymetry raster; and a CSV of global wind energy infrastructure parameters. Each row in the Wind Data Points CSV represents a discrete geographic coordinate point. During a model run, the CSV is converted to a point vector and the data are then interpolated onto rasters.
When run without the valuation component, the model outputs the following:
- `density_W_per_m2.tif`: a raster representing power density (W/m^2) centered on a pixel.
- `harvested_energy_MWhr_per_yr.tif`: a raster representing the annual harvested energy from a farm centered on that pixel.
- `wind_energy_points.shp`: a vector (with points corresponding to those in the input Wind Data Points CSV) that summarizes the outputs of the two rasters.
When run with the valuation component, the model produces three additional rasters: `carbon_emissions_tons.tif`, `levelized_cost_price_per_kWh.tif`, and `npv.tif`. These values are not currently summarized in `wind_energy_points.shp`.
Users noticed that the raster outputs included data in areas outside those covered by the input wind data, a result of the model's method of interpolating the vector data onto the rasters. This led to a larger discussion about the validity of the interpolated raster results.
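As a rough illustration of the issue (a made-up sketch, not the model's actual interpolation routine; every name and value below is hypothetical), gridding scattered point data can assign values to cells far outside the area covered by the input points:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(50, 2))   # scattered "wind data" coordinates
values = rng.uniform(5, 9, size=50)         # e.g. a Weibull scale parameter
# The target grid extends well beyond the extent of the points.
xx, yy = np.meshgrid(np.linspace(-20, 30, 200), np.linspace(-20, 30, 200))
grid = griddata(points, values, (xx, yy), method='nearest')
# Every cell receives a value, even cells far from any observation.
print(np.isfinite(grid).all())  # True
```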
## Decision
Based on Rob's own use of the model and his review and evaluation of the problem, the consensus is that the model's current use of interpolation introduces too many potential violations of the model's constraints (e.g. interpolating over areas that are invalid due to ocean depth or distance from shore, or that fall outside the areas covered by the input wind speed data) and requires assumptions that may not be helpful for users. Rob therefore recommended removing the raster outputs entirely and retaining the associated values in the output `wind_energy_points.shp` vector.
As such, we have decided to move forward with removing the rasterized outputs:
- `carbon_emissions_tons.tif`
- `density_W_per_m2.tif`
- `harvested_energy_MWhr_per_yr.tif`
- `levelized_cost_price_per_kWh.tif`
- `npv.tif`
The model will need to be updated so that the valuation component also writes values to `wind_energy_points.shp`.
## Status
## Consequences
Once released, the model will no longer provide the rasterized outputs that it previously provided. Instead, values for each point will appear in `wind_energy_points.shp`. This vector will also contain valuation data if the model's valuation component is run.
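For downstream workflows, here is a minimal, hypothetical sketch (not part of InVEST) of reading the per-point results once the rasters are gone; the actual field names depend on the model run and are not listed here:

```python
from osgeo import ogr

vector = ogr.Open("wind_energy_points.shp")
layer = vector.GetLayer()
# Inspect which result fields are present in this run.
print([field.GetName() for field in layer.schema])
for feature in layer:
    point = feature.GetGeometryRef()
    print(point.GetX(), point.GetY(), feature.items())
vector = None
```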
## References
GitHub:
* [Pull Request](https://github.com/natcap/invest/pull/1898)
* [Discussion: Raster result values returned outside of wind data](https://github.com/natcap/invest/issues/1698)
* [User's Guide PR](https://github.com/natcap/invest.users-guide/pull/178)

View File

@ -93,8 +93,8 @@ here for several reasons:
"""
import logging
import os
import time
import shutil
import time
import numpy
import pandas
@ -103,12 +103,12 @@ import scipy.sparse
import taskgraph
from osgeo import gdal
from .. import utils
from .. import spec
from ..unit_registry import u
from .. import validation
from .. import gettext
from .. import spec
from .. import utils
from .. import validation
from ..model_metadata import MODEL_METADATA
from ..unit_registry import u
LOGGER = logging.getLogger(__name__)
@ -336,7 +336,14 @@ MODEL_SPEC = spec.build_model_spec({
"description": gettext("low carbon disturbance rate")
},
"NCC": {
"description": gettext("no change in carbon")
"description": gettext(
"no change in carbon. Defining 'NCC' for a "
"transition will halt any in-progress carbon "
"accumulation or emissions at the year of "
"transition, until the class transitions "
"again to a state of accumulation or "
"disturbance."
)
}
},
"about": gettext(

View File

@ -1639,14 +1639,12 @@ def _simplify(source_vector_path, tolerance, target_vector_path,
target_layer_name = os.path.splitext(
os.path.basename(target_vector_path))[0]
# Using wkbUnknown is important here because a user can provide a single
# vector with multiple geometry types. GPKG can handle whatever geom types
# we want it to use, but it will only be a conformant GPKG if and only if
# we set the layer type to ogr.wkbUnknown. Otherwise, the GPKG standard
# would expect that all geometries in a layer match the geom type of the
# layer and GDAL will raise a warning if that's not the case.
# Use the same geometry type from the source layer. This may be wkbUnknown
# if the layer contains multiple geometry types.
target_layer = target_vector.CreateLayer(
target_layer_name, source_layer.GetSpatialRef(), ogr.wkbUnknown)
target_layer_name,
srs=source_layer.GetSpatialRef(),
geom_type=source_layer.GetGeomType())
for field in source_layer.schema:
if field.GetName().lower() in preserve_columns:

View File

@ -177,6 +177,13 @@ void run_effective_retention(
neighbor.y < 0 or neighbor.y >= n_rows) {
continue;
}
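// Skip downslope neighbors whose effective retention is nodata (e.g. nodata
// pour point pixels) rather than treating them as real data (#1845).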
neighbor_effective_retention = (
effective_retention_raster.get(
neighbor.x, neighbor.y));
if (is_close(neighbor_effective_retention, effective_retention_nodata)) {
continue;
}
if (neighbor.direction % 2 == 1) {
step_size = cell_size * 1.41421356237;
} else {
@ -189,10 +196,6 @@ void run_effective_retention(
current_step_factor = 0;
}
neighbor_effective_retention = (
effective_retention_raster.get(
neighbor.x, neighbor.y));
// Case 1: downslope neighbor is a stream pixel
if (neighbor_effective_retention == STREAM_EFFECTIVE_RETENTION) {
intermediate_retention = (
@ -222,7 +225,6 @@ void run_effective_retention(
}
}
// search upslope to see if we need to push a cell on the stack
// for i in range(8):
up_neighbors = UpslopeNeighbors<T>(Pixel<T>(flow_dir_raster, global_col, global_row));
for (auto neighbor: up_neighbors) {
neighbor_outflow_dir = INFLOW_OFFSETS[neighbor.direction];

View File

@ -279,6 +279,7 @@ MODEL_SPEC = spec.build_model_spec({
"units": u.kilogram/u.hectare
}}
},
"stream.tif": spec_utils.STREAM,
"intermediate_outputs": {
"type": "directory",
"contents": {
@ -378,7 +379,6 @@ MODEL_SPEC = spec.build_model_spec({
"about": "Inverse of slope",
"bands": {1: {"type": "number", "units": u.none}}
},
"stream.tif": spec.STREAM,
"sub_load_n.tif": {
"about": "Nitrogen loads for subsurface transport",
"bands": {1: {
@ -483,6 +483,7 @@ _OUTPUT_BASE_FILES = {
'n_total_export_path': 'n_total_export.tif',
'p_surface_export_path': 'p_surface_export.tif',
'watershed_results_ndr_path': 'watershed_results_ndr.gpkg',
'stream_path': 'stream.tif'
}
INTERMEDIATE_DIR_NAME = 'intermediate_outputs'
@ -500,7 +501,6 @@ _INTERMEDIATE_BASE_FILES = {
's_accumulation_path': 's_accumulation.tif',
's_bar_path': 's_bar.tif',
's_factor_inverse_path': 's_factor_inverse.tif',
'stream_path': 'stream.tif',
'sub_load_n_path': 'sub_load_n.tif',
'surface_load_n_path': 'surface_load_n.tif',
'surface_load_p_path': 'surface_load_p.tif',

View File

@ -566,7 +566,7 @@ def execute(args):
prep_aoi_task.join()
# All the server communication happens in this task.
user_days_task = task_graph.add_task(
calc_user_days_task = task_graph.add_task(
func=_retrieve_user_days,
args=(file_registry['local_aoi_path'],
file_registry['compressed_aoi_path'],
@ -580,6 +580,15 @@ def execute(args):
file_registry['server_version']],
task_name='user-day-calculation')
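# Assemble the user-day response variables (pr_PUD, pr_TUD, avg_pr_UD) into
# regression_data.gpkg even when the regression is not computed (#1893).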
assemble_userday_variables_task = task_graph.add_task(
func=_assemble_regression_data,
args=(file_registry['pud_results_path'],
file_registry['tud_results_path'],
file_registry['regression_vector_path']),
target_path_list=[file_registry['regression_vector_path']],
dependent_task_list=[calc_user_days_task],
task_name='assemble userday variables')
if 'compute_regression' in args and args['compute_regression']:
# Prepare the AOI for geoprocessing.
prepare_response_polygons_task = task_graph.add_task(
@ -593,20 +602,11 @@ def execute(args):
assemble_predictor_data_task = _schedule_predictor_data_processing(
file_registry['local_aoi_path'],
file_registry['response_polygons_lookup'],
prepare_response_polygons_task,
[prepare_response_polygons_task, assemble_userday_variables_task],
args['predictor_table_path'],
file_registry['regression_vector_path'],
intermediate_dir, task_graph)
assemble_regression_data_task = task_graph.add_task(
func=_assemble_regression_data,
args=(file_registry['pud_results_path'],
file_registry['tud_results_path'],
file_registry['regression_vector_path']),
target_path_list=[file_registry['regression_vector_path']],
dependent_task_list=[assemble_predictor_data_task, user_days_task],
task_name='assemble predictor data')
# Compute the regression
coefficient_json_path = os.path.join(
intermediate_dir, 'predictor_estimates.json')
@ -626,16 +626,27 @@ def execute(args):
target_path_list=[file_registry['regression_coefficients'],
file_registry['regression_summary'],
coefficient_json_path],
dependent_task_list=[assemble_regression_data_task],
dependent_task_list=[assemble_predictor_data_task],
task_name='compute regression')
if ('scenario_predictor_table_path' in args and
args['scenario_predictor_table_path'] != ''):
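# Initialize the scenario results vector as a copy of the AOI, overwriting
# any existing output, so that scenario predictor fields can be written
# directly into it.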
driver = gdal.GetDriverByName('GPKG')
if os.path.exists(file_registry['scenario_results_path']):
driver.Delete(file_registry['scenario_results_path'])
aoi_vector = gdal.OpenEx(file_registry['local_aoi_path'])
target_vector = driver.CreateCopy(
file_registry['scenario_results_path'], aoi_vector)
target_layer = target_vector.GetLayer()
_rename_layer_from_parent(target_layer)
target_vector = target_layer = None
aoi_vector = None
utils.make_directories([scenario_dir])
build_scenario_data_task = _schedule_predictor_data_processing(
file_registry['local_aoi_path'],
file_registry['response_polygons_lookup'],
prepare_response_polygons_task,
[prepare_response_polygons_task],
args['scenario_predictor_table_path'],
file_registry['scenario_results_path'],
scenario_dir, task_graph)
@ -941,9 +952,8 @@ def _grid_vector(vector_path, grid_type, cell_size, out_grid_vector_path):
def _schedule_predictor_data_processing(
response_vector_path, response_polygons_pickle_path,
prepare_response_polygons_task,
predictor_table_path, target_predictor_vector_path,
working_dir, task_graph):
dependent_task_list, predictor_table_path,
target_predictor_vector_path, working_dir, task_graph):
"""Summarize spatial predictor data by polygons in the response vector.
Build a shapefile with geometry from the response vector, and tabular
@ -955,8 +965,7 @@ def _schedule_predictor_data_processing(
response_polygons_pickle_path (string): path to pickle that stores a
dict which maps each feature FID from ``response_vector_path`` to
its shapely geometry.
prepare_response_polygons_task (Taskgraph.Task object):
A Task needed for dependent_task_lists in this scope.
dependent_task_list (list): list of Taskgraph.Task objects.
predictor_table_path (string): path to a CSV file with three columns
'id', 'path' and 'type'. 'id' is the unique ID for that predictor
and must be less than 10 characters long. 'path' indicates the
@ -1028,7 +1037,7 @@ def _schedule_predictor_data_processing(
args=(predictor_type, response_polygons_pickle_path,
row['path'], predictor_target_path),
target_path_list=[predictor_target_path],
dependent_task_list=[prepare_response_polygons_task],
dependent_task_list=dependent_task_list,
task_name=f'predictor {predictor_id}'))
else:
predictor_target_path = os.path.join(
@ -1039,13 +1048,12 @@ def _schedule_predictor_data_processing(
args=(response_polygons_pickle_path,
row['path'], predictor_target_path),
target_path_list=[predictor_target_path],
dependent_task_list=[prepare_response_polygons_task],
dependent_task_list=dependent_task_list,
task_name=f'predictor {predictor_id}'))
# return predictor_task_list, predictor_json_list
assemble_predictor_data_task = task_graph.add_task(
func=_json_to_gpkg_table,
args=(response_vector_path, target_predictor_vector_path,
args=(target_predictor_vector_path,
predictor_json_list),
target_path_list=[target_predictor_vector_path],
dependent_task_list=predictor_task_list,
@ -1072,20 +1080,11 @@ def _prepare_response_polygons_lookup(
def _json_to_gpkg_table(
response_vector_path, predictor_vector_path,
predictor_json_list):
regression_vector_path, predictor_json_list):
"""Create a GeoPackage and a field with data from each json file."""
driver = gdal.GetDriverByName('GPKG')
if os.path.exists(predictor_vector_path):
driver.Delete(predictor_vector_path)
response_vector = gdal.OpenEx(
response_vector_path, gdal.OF_VECTOR | gdal.GA_Update)
predictor_vector = driver.CreateCopy(
predictor_vector_path, response_vector)
response_vector = None
layer = predictor_vector.GetLayer()
_rename_layer_from_parent(layer)
target_vector = gdal.OpenEx(
regression_vector_path, gdal.OF_VECTOR | gdal.GA_Update)
target_layer = target_vector.GetLayer()
predictor_id_list = []
for json_filename in predictor_json_list:
@ -1093,23 +1092,22 @@ def _json_to_gpkg_table(
predictor_id_list.append(predictor_id)
# Create a new field for the predictor
# Delete the field first if it already exists
field_index = layer.FindFieldIndex(
field_index = target_layer.FindFieldIndex(
str(predictor_id), 1)
if field_index >= 0:
layer.DeleteField(field_index)
target_layer.DeleteField(field_index)
predictor_field = ogr.FieldDefn(str(predictor_id), ogr.OFTReal)
layer.CreateField(predictor_field)
target_layer.CreateField(predictor_field)
with open(json_filename, 'r') as file:
predictor_results = json.load(file)
for feature_id, value in predictor_results.items():
feature = layer.GetFeature(int(feature_id))
feature = target_layer.GetFeature(int(feature_id))
feature.SetField(str(predictor_id), value)
layer.SetFeature(feature)
target_layer.SetFeature(feature)
layer = None
predictor_vector.FlushCache()
predictor_vector = None
target_layer = None
target_vector = None
def _raster_sum_mean(
@ -1391,7 +1389,7 @@ def _ogr_to_geometry_list(vector_path):
def _assemble_regression_data(
pud_vector_path, tud_vector_path, regression_vector_path):
pud_vector_path, tud_vector_path, target_vector_path):
"""Update the vector with the predictor data, adding response variables.
Args:
@ -1399,7 +1397,7 @@ def _assemble_regression_data(
layer with PUD_YR_AVG.
tud_vector_path (string): Path to the vector polygon
layer with TUD_YR_AVG.
regression_vector_path (string): The response polygons with predictor data.
target_vector_path (string): The response polygons with predictor data.
Fields will be added in order to compute the linear regression:
* pr_PUD
* pr_TUD
@ -1415,18 +1413,22 @@ def _assemble_regression_data(
tud_vector = gdal.OpenEx(
tud_vector_path, gdal.OF_VECTOR | gdal.GA_ReadOnly)
tud_layer = tud_vector.GetLayer()
target_vector = gdal.OpenEx(
regression_vector_path, gdal.OF_VECTOR | gdal.GA_Update)
driver = gdal.GetDriverByName('GPKG')
if os.path.exists(target_vector_path):
driver.Delete(target_vector_path)
target_vector = driver.CreateCopy(
target_vector_path, pud_vector)
target_layer = target_vector.GetLayer()
_rename_layer_from_parent(target_layer)
for field in target_layer.schema:
if field.name != POLYGON_ID_FIELD:
target_layer.DeleteField(
target_layer.FindFieldIndex(field.name, 1))
def _create_field(fieldname):
# Create a new field for the predictor
# Delete the field first if it already exists
field_index = target_layer.FindFieldIndex(
str(fieldname), 1)
if field_index >= 0:
target_layer.DeleteField(field_index)
field = ogr.FieldDefn(str(fieldname), ogr.OFTReal)
target_layer.CreateField(field)
@ -1689,7 +1691,7 @@ def _calculate_scenario(
scenario_results_path (string): path to desired output scenario
vector result which will be geometrically a copy of the input
AOI but contain the scenario predictor data fields as well as the
scenario esimated response.
scenario estimated response.
response_id (string): text ID of response variable to write to
the scenario result.
coefficient_json_path (string): path to json file with the pre-existing

View File

@ -383,6 +383,7 @@ MODEL_SPEC = spec.build_model_spec({
"units": u.millimeter/u.year
}}
},
"stream.tif": spec_utils.STREAM,
"P.tif": {
"about": gettext("The total precipitation across all months on this pixel."),
"bands": {1: {
@ -442,15 +443,6 @@ MODEL_SPEC = spec.build_model_spec({
"units": u.millimeter
}}
},
"stream.tif": {
"about": gettext(
"Stream network map generated from the input DEM and "
"Threshold Flow Accumulation. Values of 1 represent "
"streams, values of 0 are non-stream pixels."),
"bands": {1: {
"type": "integer"
}}
},
'Si.tif': {
"about": gettext("Map of the S_i factor derived from CN"),
"bands": {1: {"type": "number", "units": u.inch}}
@ -519,6 +511,7 @@ _OUTPUT_BASE_FILES = {
'l_sum_path': 'L_sum.tif',
'l_sum_avail_path': 'L_sum_avail.tif',
'qf_path': 'QF.tif',
'stream_path': 'stream.tif',
'b_sum_path': 'B_sum.tif',
'b_path': 'B.tif',
'vri_path': 'Vri.tif',
@ -529,7 +522,6 @@ _INTERMEDIATE_BASE_FILES = {
'aetm_path_list': ['aetm_%d.tif' % (x+1) for x in range(N_MONTHS)],
'flow_dir_path': 'flow_dir.tif',
'qfm_path_list': ['qf_%d.tif' % (x+1) for x in range(N_MONTHS)],
'stream_path': 'stream.tif',
'si_path': 'Si.tif',
'lulc_aligned_path': 'lulc_aligned.tif',
'dem_aligned_path': 'dem_aligned.tif',

View File

@ -120,7 +120,7 @@ MODEL_SPEC = spec.build_model_spec({
}
},
"outputs": {
"Runoff_retention.tif": {
"Runoff_retention_index.tif": {
"about": "Map of runoff retention index.",
"bands": {1: {
"type": "number",
@ -371,7 +371,7 @@ def execute(args):
# Generate Runoff Retention
runoff_retention_nodata = -9999
runoff_retention_raster_path = os.path.join(
args['workspace_dir'], f'Runoff_retention{file_suffix}.tif')
args['workspace_dir'], f'Runoff_retention_index{file_suffix}.tif')
runoff_retention_task = task_graph.add_task(
func=pygeoprocessing.raster_calculator,
args=([

View File

@ -152,10 +152,10 @@ class NDRTests(unittest.TestCase):
('p_surface_load', 41.826904),
('p_surface_export', 5.566120),
('n_surface_load', 2977.551270),
('n_surface_export', 274.020844),
('n_surface_export', 274.062129),
('n_subsurface_load', 28.558048),
('n_subsurface_export', 15.578484),
('n_total_export', 289.599314)]:
('n_total_export', 289.640609)]:
if not numpy.isclose(feature.GetField(field), value, atol=1e-2):
error_results[field] = (
'field', feature.GetField(field), value)
@ -226,12 +226,12 @@ class NDRTests(unittest.TestCase):
# results
expected_watershed_totals = {
'p_surface_load': 41.826904,
'p_surface_export': 5.870544,
'p_surface_export': 5.866880,
'n_surface_load': 2977.551270,
'n_surface_export': 274.020844,
'n_surface_export': 274.062129,
'n_subsurface_load': 28.558048,
'n_subsurface_export': 15.578484,
'n_total_export': 289.599314
'n_total_export': 289.640609
}
for field in expected_watershed_totals:
@ -306,12 +306,12 @@ class NDRTests(unittest.TestCase):
# results
for field, expected_value in [
('p_surface_load', 41.826904),
('p_surface_export', 4.915544),
('p_surface_export', 5.100640),
('n_surface_load', 2977.551914),
('n_surface_export', 320.082319),
('n_surface_export', 350.592891),
('n_subsurface_load', 28.558048),
('n_subsurface_export', 12.609187),
('n_total_export', 330.293407)]:
('n_total_export', 360.803969)]:
val = result_feature.GetField(field)
if not numpy.isclose(val, expected_value):
mismatch_list.append(
@ -361,12 +361,12 @@ class NDRTests(unittest.TestCase):
# results
for field, expected_value in [
('p_surface_load', 41.826904),
('p_surface_export', 5.870544),
('p_surface_export', 5.866880),
('n_surface_load', 2977.551270),
('n_surface_export', 274.020844),
('n_surface_export', 274.062129),
('n_subsurface_load', 28.558048),
('n_subsurface_export', 15.578484),
('n_total_export', 289.599314)]:
('n_total_export', 289.640609)]:
val = result_feature.GetField(field)
if not numpy.isclose(val, expected_value):
mismatch_list.append(

View File

@ -677,6 +677,40 @@ class TestRecClientServer(unittest.TestCase):
"""Delete workspace"""
shutil.rmtree(self.workspace_dir, ignore_errors=True)
def test_execute_no_regression(self):
"""Recreation test userday metrics exist if not computing regression."""
from natcap.invest.recreation import recmodel_client
args = {
'aoi_path': os.path.join(
SAMPLE_DATA, 'andros_aoi.shp'),
'compute_regression': False,
'start_year': recmodel_client.MIN_YEAR,
'end_year': recmodel_client.MAX_YEAR,
'grid_aoi': False,
'workspace_dir': self.workspace_dir,
'hostname': self.hostname,
'port': self.port,
}
recmodel_client.execute(args)
out_regression_vector_path = os.path.join(
args['workspace_dir'], 'regression_data.gpkg')
# These fields should exist even if `compute_regression` is False
expected_fields = ['pr_TUD', 'pr_PUD', 'avg_pr_UD']
# For convenience, assert the sums of the columns instead of all
# the individual values.
actual_sums = sum_vector_columns(
out_regression_vector_path, expected_fields)
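# Each pr_* field appears to be a per-polygon proportion of total user-days,
# hence each column is expected to sum to 1 across the AOI.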
expected_sums = {
'pr_TUD': 1.0,
'pr_PUD': 1.0,
'avg_pr_UD': 1.0
}
for key in expected_sums:
numpy.testing.assert_almost_equal(
actual_sums[key], expected_sums[key], decimal=3)
def test_all_metrics_local_server(self):
"""Recreation test with all but trivial predictor metrics."""
from natcap.invest.recreation import recmodel_client
@ -1259,64 +1293,6 @@ class RecreationClientRegressionTests(unittest.TestCase):
# andros_aoi.shp fits 71 hexes at 20000 meters cell size
self.assertEqual(n_features, 71)
def test_existing_regression_coef(self):
"""Recreation test regression coefficients handle existing output."""
from natcap.invest.recreation import recmodel_client
from natcap.invest import validation
# Initialize a TaskGraph
taskgraph_db_dir = os.path.join(
self.workspace_dir, '_taskgraph_working_dir')
n_workers = -1 # single process mode.
task_graph = taskgraph.TaskGraph(taskgraph_db_dir, n_workers)
response_vector_path = os.path.join(
self.workspace_dir, 'no_grid_vector_path.gpkg')
response_polygons_lookup_path = os.path.join(
self.workspace_dir, 'response_polygons_lookup.pickle')
recmodel_client._copy_aoi_no_grid(
os.path.join(SAMPLE_DATA, 'andros_aoi.shp'), response_vector_path)
predictor_table_path = os.path.join(SAMPLE_DATA, 'predictors.csv')
# make outputs to be overwritten
predictor_dict = recmodel_client.MODEL_SPEC.get_input(
'predictor_table_path').get_validated_dataframe(
predictor_table_path).to_dict(orient='index')
predictor_list = predictor_dict.keys()
tmp_working_dir = tempfile.mkdtemp(dir=self.workspace_dir)
empty_json_list = [
os.path.join(tmp_working_dir, x + '.json') for x in predictor_list]
out_coefficient_vector_path = os.path.join(
self.workspace_dir, 'out_coefficient_vector.shp')
_make_empty_files(
[out_coefficient_vector_path] + empty_json_list)
prepare_response_polygons_task = task_graph.add_task(
func=recmodel_client._prepare_response_polygons_lookup,
args=(response_vector_path,
response_polygons_lookup_path),
target_path_list=[response_polygons_lookup_path],
task_name='prepare response polygons for geoprocessing')
# build again to test against overwriting output
recmodel_client._schedule_predictor_data_processing(
response_vector_path, response_polygons_lookup_path,
prepare_response_polygons_task, predictor_table_path,
out_coefficient_vector_path, tmp_working_dir, task_graph)
# Copied over from a shapefile formerly in our test-data repo:
expected_values = {
'bonefish': 19.96503546104,
'airdist': 40977.89565353348,
'ports': 14.0,
'bathy': 1.17308099107
}
vector = gdal.OpenEx(out_coefficient_vector_path)
layer = vector.GetLayer()
for feature in layer:
for k, v in expected_values.items():
numpy.testing.assert_almost_equal(feature.GetField(k), v)
def test_predictor_table_absolute_paths(self):
"""Recreation test validation from full path."""
from natcap.invest.recreation import recmodel_client