Merge branch 'main' into feature/plugins

This commit is contained in:
Emily Soth 2025-05-12 13:36:19 -07:00
commit 5e9e739705
41 changed files with 2015 additions and 396 deletions

39
.github/ISSUE_TEMPLATE/01-bug-report.md vendored Normal file

@@ -0,0 +1,39 @@
---
name: Bug report
about: Did you find a problem in InVEST? Let us know!
title: ''
labels: ''
assignees: ''
type: 'Bug'
---
## Description
Briefly describe the problem.
## To replicate
Provide detailed steps to replicate the behavior.
1.
## Expected behavior
Describe what you expected to happen.
## Observed behavior
Describe what actually happened.
## Logfile(s)
If the problem occurred when running an InVEST model, please upload the logfile. (You can find this file in your workspace directory.)
If you ran the model from the Workbench, please also upload the Workbench logfile. (To find this file, navigate to the Workbench's `About` menu, select `Report a problem`, then select `Find my logs`).
## Screenshots
If applicable, add screenshots to help illustrate the problem.
## Environment
- Operating System:
- InVEST version number:
- InVEST interface (Workbench, CLI, or Python API):
## Additional context
Please provide any additional details that might help us understand and debug the issue, including what you think the cause might be, and any solutions or workarounds you have tried.


@@ -0,0 +1,27 @@
---
name: Feature request
about: Have an idea for a new InVEST feature? Let us know!
title: ''
labels: ''
assignees: ''
type: 'Feature'
---
## Background/Motivation
Provide any background information relevant to this request, such as how or why you came up with the idea.
## Description
Describe the desired end result.
## Benefits
Explain how the proposed addition or modification would be valuable to InVEST users.
## Mitigation of side effects
Are there any potential side effects of your proposed feature? How do you propose preventing or mitigating them?
## Alternatives
Have you considered any alternative solutions or features? Please describe them.
## Additional context
Please provide any additional details that might help us understand and evaluate your proposal.

5
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Question or troubleshooting support
url: https://community.naturalcapitalproject.org/
about: If you have a question related to using InVEST or need help troubleshooting a specific issue, please visit the NatCap Community Forum.


@@ -107,7 +107,7 @@ jobs:
twine upload \
--username="__token__" \
--password=${{ secrets.PYPI_NATCAP_INVEST_TOKEN }} \
artifacts/natcap.invest* artifacts/natcap_invest*
artifacts/natcap_invest*
- name: Roll back on failure
if: failure()
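The glob change above presumably tracks PEP 503/625-style name normalization: modern build backends write sdist and wheel filenames with the project name normalized, so the project `natcap.invest` now produces artifacts named `natcap_invest-...`. A stdlib-only sketch of that normalization (the helper name is hypothetical):

```python
import re

def normalize_dist_name(project_name: str) -> str:
    """Normalize a project name as build backends do for dist filenames.

    PEP 503 collapses runs of '-', '_', '.' into one separator and
    lowercases; filenames then use '_' as the separator (PEP 625).
    """
    normalized = re.sub(r'[-_.]+', '-', project_name).lower()
    return normalized.replace('-', '_')

print(normalize_dist_name('natcap.invest'))  # natcap_invest
```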

65
CONTRIBUTING.md Normal file

@@ -0,0 +1,65 @@
# Contributing to InVEST
## Asking Questions
Do you have a question about using InVEST or need help troubleshooting a specific issue? Please visit the [NatCap Community Forum](https://community.naturalcapitalproject.org/).
## Reporting Bugs
Did you find a bug in the InVEST software?
1. Search the [InVEST Open Issues on GitHub](https://github.com/natcap/invest/issues) to see if it has already been reported.
2. If you don't find an existing issue, create a new issue. Select “Bug report”, and provide as much detail as you can.
3. Do you know how to fix the bug? Learn how we handle code contributions in the [Contributing Code](#contributing-code) section of this guide.
## Requesting Features
A feature is any addition or modification that affects users of the InVEST software but does not directly address a specific bug. A feature may be very small (e.g., minor styling updates in a small component of the Workbench), very large (e.g., a new InVEST model), or somewhere in between.
If you want to suggest an InVEST feature:
1. Search the [InVEST Open Issues on GitHub](https://github.com/natcap/invest/issues) to see if it has already been suggested.
2. If you don't find an existing issue, create a new issue. Select “Feature request”, and provide as much detail as you can.
Every proposed feature, large or small, will be weighed in terms of its potential impact on InVEST users and the capacity of the NatCap team to implement and maintain the feature. Please note that major features, including new InVEST models, must go through a peer-review process with our Platform Steering Committee. In addition, requests for science-related changes to existing InVEST models, such as the modification of equations, or other changes to the way the model functions, will require a NatCap science review.
## Contributing Code
### Contributor License Agreement
Any code you contribute to InVEST will fall under the same license as InVEST. Please read the [InVEST Contributor License Agreement](https://natcap.github.io/invest-cla) and, if you agree to the terms, sign it. If you do not sign the Contributor License Agreement, you will not be able to contribute code to InVEST.
### Finding Open Issues
1. Browse the [InVEST Open Issues on GitHub](https://github.com/natcap/invest/issues). You may wish to search by keyword and/or filter by label, such as the `good first issue` label.
2. When you find something you want to work on, comment on the issue to let us know you're interested in working on it.
3. Wait for a member of the NatCap team to respond. One of the following will happen:
1. If it's OK for you to work on the issue, a member of the NatCap team will let you know (in a comment on the issue) and assign the issue to you. Proceed to step 4.
2. If there's some reason you shouldn't work on the issue (because the issue is invalid, or we are intentionally delaying work on it, or, oops, someone else is already working on it), a member of the NatCap team will let you know by leaving a comment on the issue. Return to step 1!
4. Review the [InVEST Contributor License Agreement](https://natcap.github.io/invest-cla), if you haven't already. If you do not sign the Contributor License Agreement, you will not be able to contribute code to InVEST.
### Forking the Repository and Creating a Branch
1. Fork the `natcap/invest` repository. Check [GitHub's “Fork a repository” guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo) for details, if needed.
2. On your fork, create a new branch off `main`. Give your branch a descriptive name with the form: `<category>/<issue-number>-<short-description>`. For example:
- `bugfix/12345-remove-moth-from-mainframe`,
- `feature/23456-new-ui-theme`, or
- `docs/34567-update-readme`.
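The naming convention above can be checked mechanically. This is an illustrative sketch (not an official NatCap tool), assuming the `<category>/<issue-number>-<short-description>` form with the three example categories:

```python
import re

# <category> '/' <issue number> '-' kebab-case description
BRANCH_PATTERN = re.compile(
    r'^(bugfix|feature|docs)/(\d+)-[a-z0-9]+(?:-[a-z0-9]+)*$')

def is_valid_branch_name(name: str) -> bool:
    """Return True if a branch name follows the convention above."""
    return BRANCH_PATTERN.match(name) is not None

print(is_valid_branch_name('bugfix/12345-remove-moth-from-mainframe'))  # True
print(is_valid_branch_name('my-cool-branch'))  # False
```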
### Working on Code Changes
1. Make sure you have followed the instructions in [Forking the Repository and Creating a Branch](#forking-the-repository-and-creating-a-branch).
2. Refer to the [InVEST README](./README.rst) for guidance in setting up your local development environment.
3. As you work, use commit messages that are descriptive and include the issue number (e.g., “Validate input LULC raster against biophysical table (#1234)”) rather than generic (e.g., “update code”). This will make it easier to distinguish between commits in the commit history, and it will make it easier for you to write a PR description.
4. Update automated tests as needed. If you are adding anything new, such as a new input parameter, a new output, more robust validation, or a larger feature, you should add tests to ensure the new behavior is covered.
5. Manually test your code changes to catch any issues the automated tests might have missed. Manual tests may include interacting with the Workbench UI to verify visual appearance as well as behavior, running a model (via the CLI or the Workbench) and verifying the outputs in a GIS, and/or other related activities.
6. Consistent code style keeps our codebase readable and helps us focus on the content of code changes. We follow the [PEP 8 Style Guide for Python Code](https://peps.python.org/pep-0008/) and [PEP 257 Python Docstring Conventions](https://peps.python.org/pep-0257/). When in doubt about how to format your code, please refer to existing InVEST code for examples.
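For reference, a small function formatted to both conventions might look like the following (a hypothetical example, not InVEST code): four-space indents and short lines per PEP 8, and a docstring with a one-line summary per PEP 257.

```python
def clamp(value, lower, upper):
    """Clamp ``value`` to the closed interval [``lower``, ``upper``].

    Args:
        value (float): the number to clamp.
        lower (float): the smallest allowed value.
        upper (float): the largest allowed value.

    Returns:
        ``lower`` if ``value < lower``, ``upper`` if ``value > upper``,
        otherwise ``value``.
    """
    return max(lower, min(value, upper))
```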
### Submitting a Pull Request
1. Make sure you have followed the instructions in [Working on Code Changes](#working-on-code-changes).
2. Once your code is ready to be reviewed, open a draft PR.
1. Select the appropriate target branch on the `natcap` fork. This should be the same branch you branched from when starting work on your PR. (For example: if you branched from `main`, the target branch should be `main`.) If you're unsure what the target branch should be, ask us in a comment on the relevant issue.
2. Double-check that your PR includes all the changes you want (and none that you don't).
3. Add a description of the changes you made. Be sure to reference the issue your PR is addressing.
4. If you have made any UI changes, include relevant screenshots and/or screen recordings.
5. Follow the checklist in the PR template. If you've completed an item, check its box.
6. Select “Create draft pull request”.
3. As soon as your (draft) PR is open, a series of automated checks (test runners, build processes, etc.) will begin. Wait for these checks to complete.
4. After you've completed all relevant items on the checklist, and all the automated checks are passing, mark your PR “Ready for review”.
5. Wait for someone to review your PR. One or more members of the NatCap team will review your PR. They may request some additional changes and/or ask clarifying questions about the changes you've made. You may browse our closed PRs for examples of our PR review process, including conversations about PR details.
6. After you receive a review on your PR, you may need to do one or more of the following:
1. Respond to reviewers' questions or comments.
2. Make additional changes, then request a “re-review” from your reviewer(s).
3. Repeat steps 5 and 6 as needed.
7. Once the NatCap team has determined your PR is ready to be merged, they will approve and merge your PR. Thanks for contributing to InVEST! 🎉


@@ -61,9 +61,57 @@
..
Unreleased Changes
------------------
Unreleased Changes
------------------
Workbench
=========
* Metadata is now generated for files when creating a datastack (with any
existing user-added metadata preserved)
(`#1774 <https://github.com/natcap/invest/issues/1774>`_).
3.15.1 (2025-05-06)
-------------------
General
=======
* Handle exceptions that can arise during ``validation.check_spatial_overlap``
when a layer's bounding box cannot be transformed to EPSG:4326.
(`#1849 <https://github.com/natcap/invest/issues/1849>`_).
Workbench
=========
* Fixed a bug that did not allow users to select a folder as the location
to extract a datastack archive.
(`#1879 <https://github.com/natcap/invest/issues/1879>`_).
* When a parameter from a previous model run is changed, the model status
indicator (e.g., the "Model Complete" notice) is cleared to help prevent
confusion about which parameters went into the most recent model run
(`#1655 <https://github.com/natcap/invest/issues/1655>`_).
Crop Production
===============
* Both the Percentile and Regression models now issue a warning if any LULC
code in the LULC raster is not present in the landcover to crop table, or if
any LULC code in the landcover to crop table is not present in the LULC
raster (`#925 <https://github.com/natcap/invest/issues/925>`_).
* The Regression model now correctly validates crop names against the existence
of a corresponding regression yield table
(`#1723 <https://github.com/natcap/invest/issues/1723>`_).
Seasonal Water Yield
====================
* Fixed bug in quickflow calculation where `NoData` values in precipitation
were being incorrectly converted to 0's
(`#1592 <https://github.com/natcap/invest/issues/1592>`_).
Wind Energy
===========
* Fixed a bug where the model would error if no AOI was provided when run from
the workbench or from a datastack file where the value for 'aoi_vector_path'
was an empty string. (`#1900 <https://github.com/natcap/invest/issues/1900>`_)
3.15.0 (2025-04-03)
-------------------


@@ -9,6 +9,6 @@ Project to develop this policy and any trademark licensing. Visit
naturalcapitalproject.stanford.edu/invest-trademark-and-logo-use-policy for the
complete trademark and logo policy.
Copyright (c) 2023, Natural Capital Project
Copyright (c) 2025, Natural Capital Project
------------------------------------------------------------------------------


@@ -16,26 +16,27 @@ _REQUIREMENTS = [req.split(';')[0].split('#')[0].strip() for req in
and len(req.strip()) > 0)]
include_dirs = [numpy.get_include(), 'src/natcap/invest/managed_raster']
# Commenting out for plugin development
# if platform.system() == 'Windows':
# compiler_args = ['/std:c++20']
# compiler_and_linker_args = []
# if 'NATCAP_INVEST_GDAL_LIB_PATH' not in os.environ:
# raise RuntimeError(
# 'env variable NATCAP_INVEST_GDAL_LIB_PATH is not defined. '
# 'This env variable is required when building on Windows. If '
# 'using conda to manage your gdal installation, you may set '
# 'NATCAP_INVEST_GDAL_LIB_PATH="$CONDA_PREFIX/Library".')
# library_dirs = [f'{os.environ["NATCAP_INVEST_GDAL_LIB_PATH"]}/lib']
# include_dirs.append(f'{os.environ["NATCAP_INVEST_GDAL_LIB_PATH"]}/include')
# else:
# compiler_args = [subprocess.run(
# ['gdal-config', '--cflags'], capture_output=True, text=True
# ).stdout.strip()]
# compiler_and_linker_args = ['-std=c++20']
# library_dirs = [subprocess.run(
# ['gdal-config', '--libs'], capture_output=True, text=True
# ).stdout.split()[0][2:]] # get the first argument which is the library path
if platform.system() == 'Windows':
compiler_args = ['/std:c++20']
compiler_and_linker_args = []
if 'NATCAP_INVEST_GDAL_LIB_PATH' not in os.environ:
raise RuntimeError(
'env variable NATCAP_INVEST_GDAL_LIB_PATH is not defined. '
'This env variable is required when building on Windows. If '
'using conda to manage your gdal installation, you may set '
'NATCAP_INVEST_GDAL_LIB_PATH="%CONDA_PREFIX%/Library".')
library_dirs = [os.path.join(
os.environ["NATCAP_INVEST_GDAL_LIB_PATH"].rstrip(), "lib")]
include_dirs.append(os.path.join(
os.environ["NATCAP_INVEST_GDAL_LIB_PATH"].rstrip(), "include"))
else:
compiler_args = [subprocess.run(
['gdal-config', '--cflags'], capture_output=True, text=True
).stdout.strip()]
compiler_and_linker_args = ['-std=c++20']
library_dirs = [subprocess.run(
['gdal-config', '--libs'], capture_output=True, text=True
).stdout.split()[0][2:]] # get the first argument which is the library path
class build_py(_build_py):
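The non-Windows branch above derives the GDAL library directory from `gdal-config --libs` output, which typically looks like `-L/usr/lib -lgdal`; taking the first token and slicing off the leading `-L` yields the path. A stdlib-only sketch of just that parsing, with the subprocess call stubbed out:

```python
def library_dir_from_gdal_config(libs_output: str) -> str:
    """Extract the library path from `gdal-config --libs` output.

    The first token is expected to be '-L<path>'; slicing off the first
    two characters leaves the bare path, mirroring the
    stdout.split()[0][2:] expression in setup.py above.
    """
    first_token = libs_output.split()[0]
    return first_token[2:]

print(library_dir_from_gdal_config('-L/usr/lib -lgdal'))  # /usr/lib
```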


@@ -495,7 +495,8 @@ def main(user_args=None):
try:
# If there's an exception from creating metadata
# I don't think we want to indicate a model failure
spec_utils.generate_metadata(model_module, parsed_datastack.args)
spec_utils.generate_metadata_for_outputs(
model_module, parsed_datastack.args)
except Exception as exc:
LOGGER.warning(
'Something went wrong while generating metadata', exc_info=exc)


@@ -617,6 +617,33 @@ def execute(args):
crop_to_landcover_df = validation.get_validated_dataframe(
args['landcover_to_crop_table_path'],
**MODEL_SPEC['args']['landcover_to_crop_table_path'])
lucodes_in_table = set(list(
crop_to_landcover_df[_EXPECTED_LUCODE_TABLE_HEADER]))
def update_unique_lucodes_in_raster(unique_codes, block):
unique_codes.update(numpy.unique(block))
return unique_codes
unique_lucodes_in_raster = pygeoprocessing.raster_reduce(
update_unique_lucodes_in_raster,
(args['landcover_raster_path'], 1),
set())
lucodes_missing_from_raster = lucodes_in_table.difference(
unique_lucodes_in_raster)
if lucodes_missing_from_raster:
LOGGER.warning(
"The following lucodes are in the landcover to crop table but "
f"aren't in the landcover raster: {lucodes_missing_from_raster}")
lucodes_missing_from_table = unique_lucodes_in_raster.difference(
lucodes_in_table)
if lucodes_missing_from_table:
LOGGER.warning(
"The following lucodes are in the landcover raster but aren't "
f"in the landcover to crop table: {lucodes_missing_from_table}")
bad_crop_name_list = []
for crop_name in crop_to_landcover_df.index:
crop_climate_bin_raster_path = os.path.join(
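The warning logic above amounts to two set differences between the codes seen in the raster and the codes listed in the table. A stdlib-only sketch with hypothetical data, replacing the blockwise `pygeoprocessing.raster_reduce` fold with a plain loop over blocks:

```python
def find_code_mismatches(table_codes, raster_blocks):
    """Return (missing_from_raster, missing_from_table) code sets."""
    raster_codes = set()
    # mimic raster_reduce: fold each raster block into one running set
    for block in raster_blocks:
        raster_codes.update(block)
    return (set(table_codes) - raster_codes,
            raster_codes - set(table_codes))

blocks = [[1, 1, 2], [2, 3, 9]]  # codes present in the raster
missing_raster, missing_table = find_code_mismatches({1, 2, 3, 4}, blocks)
print(missing_raster)  # {4}: in the table but not the raster
print(missing_table)   # {9}: in the raster but not the table
```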


@@ -511,32 +511,42 @@ def execute(args):
args['fertilization_rate_table_path'],
**MODEL_SPEC['args']['fertilization_rate_table_path'])
crop_lucodes = list(crop_to_landcover_df[_EXPECTED_LUCODE_TABLE_HEADER])
lucodes_in_table = set(list(
crop_to_landcover_df[_EXPECTED_LUCODE_TABLE_HEADER]))
unique_lucodes = numpy.array([])
for _, lu_band_data in pygeoprocessing.iterblocks(
(args['landcover_raster_path'], 1)):
unique_block = numpy.unique(lu_band_data)
unique_lucodes = numpy.unique(numpy.concatenate(
(unique_lucodes, unique_block)))
def update_unique_lucodes_in_raster(unique_codes, block):
unique_codes.update(numpy.unique(block))
return unique_codes
missing_lucodes = set(crop_lucodes).difference(
set(unique_lucodes))
if len(missing_lucodes) > 0:
unique_lucodes_in_raster = pygeoprocessing.raster_reduce(
update_unique_lucodes_in_raster,
(args['landcover_raster_path'], 1),
set())
lucodes_missing_from_raster = lucodes_in_table.difference(
unique_lucodes_in_raster)
if lucodes_missing_from_raster:
LOGGER.warning(
"The following lucodes are in the landcover to crop table but "
"aren't in the landcover raster: %s", missing_lucodes)
f"aren't in the landcover raster: {lucodes_missing_from_raster}")
lucodes_missing_from_table = unique_lucodes_in_raster.difference(
lucodes_in_table)
if lucodes_missing_from_table:
LOGGER.warning(
"The following lucodes are in the landcover raster but aren't "
f"in the landcover to crop table: {lucodes_missing_from_table}")
LOGGER.info("Checking that crops correspond to known types.")
for crop_name in crop_to_landcover_df.index:
crop_climate_bin_raster_path = os.path.join(
crop_regression_yield_table_path = os.path.join(
args['model_data_path'],
_EXTENDED_CLIMATE_BIN_FILE_PATTERN % crop_name)
if not os.path.exists(crop_climate_bin_raster_path):
_REGRESSION_TABLE_PATTERN % crop_name)
if not os.path.exists(crop_regression_yield_table_path):
raise ValueError(
"Expected climate bin map called %s for crop %s "
"specified in %s", crop_climate_bin_raster_path, crop_name,
args['landcover_to_crop_table_path'])
f"Expected regression yield table called "
f"{crop_regression_yield_table_path} for crop {crop_name} "
f"specified in {args['landcover_to_crop_table_path']}")
landcover_raster_info = pygeoprocessing.get_raster_info(
args['landcover_raster_path'])
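The corrected check above verifies that each crop named in the table has a corresponding regression yield table before the model runs. The pattern reduces to filtering crop names against the set of files that exist; a sketch with hypothetical file names (the real code builds paths from `_REGRESSION_TABLE_PATTERN` and checks `os.path.exists`):

```python
def missing_regression_tables(crop_names, existing_files):
    """Return crops with no '<crop>_regression_yield_table.csv' present.

    existing_files stands in for the set of filenames on disk under
    the model data directory.
    """
    return [crop for crop in crop_names
            if f'{crop}_regression_yield_table.csv' not in existing_files]

files_on_disk = {'wheat_regression_yield_table.csv'}
print(missing_regression_tables(['wheat', 'maize'], files_on_disk))  # ['maize']
```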


@@ -34,6 +34,7 @@ import warnings
from osgeo import gdal
from . import spec_utils
from . import utils
from . import validation
from . import models
@@ -351,8 +352,6 @@ def build_datastack_archive(args, model_id, datastack_path):
target_filepath = os.path.join(
data_dir, f'{key}_file')
shutil.copyfile(source_path, target_filepath)
LOGGER.debug(
f'File copied from {source_path} --> {target_filepath}')
target_arg_value = target_filepath
files_found[source_path] = target_arg_value
@@ -405,9 +404,20 @@ def build_datastack_archive(args, model_id, datastack_path):
# write parameters to a new json file in the temp workspace
param_file_uri = os.path.join(temp_workspace,
'parameters' + PARAMETER_SET_EXTENSION)
build_parameter_set(
parameter_set = build_parameter_set(
rewritten_args, model_id, param_file_uri, relative=True)
# write metadata for all files in args
keywords = [module.MODEL_SPEC['model_id'], 'InVEST']
for k, v in args.items():
if isinstance(v, str) and os.path.isfile(v):
this_arg_spec = module.MODEL_SPEC['args'][k]
# write metadata file to target location (in temp dir)
subdir = os.path.dirname(parameter_set['args'][k])
target_location = os.path.join(temp_workspace, subdir)
spec_utils.write_metadata_file(v, this_arg_spec, keywords,
out_workspace=target_location)
# Remove the handler before archiving the working dir (and the logfile)
archive_filehandler.close()
logging.getLogger().removeHandler(archive_filehandler)
@@ -486,7 +496,7 @@ def build_parameter_set(args, model_id, paramset_path, relative=False):
directory of ``paramset_path``.
Returns:
``None``
parameter dictionary saved in ``paramset_path``
Raises:
ValueError if creating a relative path fails.
@@ -536,6 +546,8 @@ def build_parameter_set(args, model_id, paramset_path, relative=False):
indent=4,
sort_keys=True))
return parameter_data
def extract_parameter_set(paramset_path):
"""Extract and return attributes from a parameter set.
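The change above makes `build_parameter_set` return the parameter dictionary it writes, so the caller (here, the metadata step) can reuse it without re-reading the JSON file. A minimal sketch of that write-and-return pattern, stdlib json only (field names here are illustrative, not the exact InVEST schema):

```python
import json
import os
import tempfile

def write_parameter_set(args, model_id, paramset_path):
    """Write args to a JSON parameter file and return the written dict."""
    parameter_data = {
        'model_id': model_id,
        'invest_version': 'unknown',  # placeholder for the real version
        'args': dict(args),
    }
    with open(paramset_path, 'w') as f:
        f.write(json.dumps(parameter_data, indent=4, sort_keys=True))
    return parameter_data

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'parameters.invest.json')
    data = write_parameter_set({'workspace_dir': 'out'}, 'carbon', path)
    # the caller can use the returned dict directly, no re-read needed
    print(data['args']['workspace_dir'])  # out
```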


@@ -12,6 +12,10 @@ import numpy
import pandas
import pygeoprocessing
import scipy.spatial
import shapely.errors
import shapely.geometry
import shapely.prepared
import shapely.wkb
import taskgraph
from osgeo import gdal
from osgeo import ogr
@@ -124,7 +128,8 @@ MODEL_SPEC = {
"about": spec_utils.LULC['about'] + " " + gettext(
"All values in this raster must "
"have corresponding entries in the Biophysical Table."),
"projected": True
"projected": True,
"projection_units": u.meter
},
"pools_to_calculate": {
"type": "option_string",
@@ -176,6 +181,8 @@ MODEL_SPEC = {
"Used only for the asymptotic model.")}
},
"geometries": spec_utils.POLYGONS,
"projected": True,
"projection_units": u.meter,
"required": "compute_forest_edge_effects",
"allowed": "compute_forest_edge_effects",
"about": gettext(
@@ -262,6 +269,13 @@ MODEL_SPEC = {
"bands": {1: {
"type": "number", "units": u.metric_ton/u.hectare
}}
},
"regression_model_params_clipped.shp": {
"about": (
"The Global Regression Models shapefile clipped "
"to the study area."),
"geometries": spec_utils.POLYGONS,
"fields": {}
}
}
},
@@ -420,6 +434,9 @@ def execute(args):
f'tropical_forest_edge_carbon_stocks{file_suffix}.tif')
output_file_registry['non_forest_mask'] = os.path.join(
intermediate_dir, f'non_forest_mask{file_suffix}.tif')
output_file_registry['tropical_forest_edge_clipped'] = os.path.join(
intermediate_dir,
f'regression_model_params_clipped{file_suffix}.shp')
# Map non-forest landcover codes to carbon biomasses
LOGGER.info('Calculating direct mapped carbon stocks')
@@ -456,15 +473,26 @@
output_file_registry['non_forest_mask']],
task_name='map_distance_from_forest_edge')
# Clip global regression model vector to LULC raster bounding box
LOGGER.info('Clipping global forest carbon edge regression models vector')
clip_forest_edge_carbon_vector_task = task_graph.add_task(
func=_clip_global_regression_models_vector,
args=(args['lulc_raster_path'],
args['tropical_forest_edge_carbon_model_vector_path'],
output_file_registry['tropical_forest_edge_clipped']),
target_path_list=[output_file_registry['tropical_forest_edge_clipped']],
task_name='clip_forest_edge_carbon_vector')
# Build spatial index for gridded global model for closest 3 points
LOGGER.info('Building spatial index for forest edge models.')
build_spatial_index_task = task_graph.add_task(
func=_build_spatial_index,
args=(args['lulc_raster_path'], intermediate_dir,
args['tropical_forest_edge_carbon_model_vector_path'],
output_file_registry['tropical_forest_edge_clipped'],
output_file_registry['spatial_index_pickle']),
target_path_list=[output_file_registry['spatial_index_pickle']],
task_name='build_spatial_index')
task_name='build_spatial_index',
dependent_task_list=[clip_forest_edge_carbon_vector_task])
# calculate the carbon edge effect on forests
LOGGER.info('Calculating forest edge carbon')
@@ -732,7 +760,8 @@ def _map_distance_from_tropical_forest_edge(
edge_distance_raster = gdal.OpenEx(edge_distance_path, gdal.GA_Update)
edge_distance_band = edge_distance_raster.GetRasterBand(1)
for offset_dict in pygeoprocessing.iterblocks((base_lulc_raster_path, 1), offset_only=True):
for offset_dict in pygeoprocessing.iterblocks((base_lulc_raster_path, 1),
offset_only=True):
# where LULC has nodata, overwrite edge distance with nodata value
lulc_block = lulc_band.ReadAsArray(**offset_dict)
distance_block = edge_distance_band.ReadAsArray(**offset_dict)
@@ -744,6 +773,93 @@
yoff=offset_dict['yoff'])
def _clip_global_regression_models_vector(
lulc_raster_path, source_vector_path, target_vector_path):
"""Clip the global carbon edge model shapefile
Clip the shapefile containing the global carbon edge model parameters
to the bounding box of the LULC raster (representing the target study area)
plus a buffer to account for the kd-tree lookup DISTANCE_UPPER_BOUND
Args:
lulc_raster_path (string): path to a raster that is used to define the
bounding box to use for clipping.
source_vector_path (string): a path to an OGR shapefile to be clipped.
target_vector_path (string): a path to an OGR shapefile to store the
clipped vector.
Returns:
None
"""
raster_info = pygeoprocessing.get_raster_info(lulc_raster_path)
vector_info = pygeoprocessing.get_vector_info(source_vector_path)
bbox_minx, bbox_miny, bbox_maxx, bbox_maxy = raster_info['bounding_box']
buffered_bb = [
bbox_minx - DISTANCE_UPPER_BOUND,
bbox_miny - DISTANCE_UPPER_BOUND,
bbox_maxx + DISTANCE_UPPER_BOUND,
bbox_maxy + DISTANCE_UPPER_BOUND,
]
# Reproject the LULC bounding box to the vector's projection for clipping
mask_bb = pygeoprocessing.transform_bounding_box(buffered_bb,
raster_info['projection_wkt'], vector_info['projection_wkt'])
shapely_mask = shapely.prepared.prep(shapely.geometry.box(*mask_bb))
base_vector = gdal.OpenEx(source_vector_path, gdal.OF_VECTOR)
base_layer = base_vector.GetLayer()
base_layer_defn = base_layer.GetLayerDefn()
base_geom_type = base_layer.GetGeomType()
target_driver = gdal.GetDriverByName('ESRI Shapefile')
target_vector = target_driver.Create(
target_vector_path, 0, 0, 0, gdal.GDT_Unknown)
target_layer = target_vector.CreateLayer(
base_layer_defn.GetName(), base_layer.GetSpatialRef(), base_geom_type)
target_layer.CreateFields(base_layer.schema)
target_layer.StartTransaction()
invalid_feature_count = 0
for feature in base_layer:
invalid = False
geometry = feature.GetGeometryRef()
try:
shapely_geom = shapely.wkb.loads(bytes(geometry.ExportToWkb()))
# Catch invalid geometries that cannot be loaded by Shapely;
# e.g. polygons with too few points for their type
except shapely.errors.ShapelyError:
invalid = True
else:
if shapely_geom.is_valid:
# Check for intersection rather than use gdal.Layer.Clip()
# to preserve the shape of the polygons (we use the centroid
# when constructing the kd-tree)
if shapely_mask.intersects(shapely_geom):
new_feature = ogr.Feature(target_layer.GetLayerDefn())
new_feature.SetGeometry(ogr.CreateGeometryFromWkb(
shapely_geom.wkb))
for field_name, field_value in feature.items().items():
new_feature.SetField(field_name, field_value)
target_layer.CreateFeature(new_feature)
else:
invalid = True
finally:
if invalid:
invalid_feature_count += 1
LOGGER.warning(
f"The geometry at feature {feature.GetFID()} is invalid "
"and will be skipped.")
target_layer.CommitTransaction()
if invalid_feature_count:
LOGGER.warning(
f"{invalid_feature_count} features in {source_vector_path} "
"were found to be invalid during clipping and were skipped.")
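The clipping step above buffers the LULC raster's bounding box by `DISTANCE_UPPER_BOUND` and keeps only features intersecting the buffered box. The bounding-box arithmetic can be sketched without GDAL or Shapely (helper names and the buffer value are assumptions, stdlib only):

```python
DISTANCE_UPPER_BOUND = 500  # assumed buffer distance, same units as the bbox

def buffered_bbox(bbox, buffer_dist=DISTANCE_UPPER_BOUND):
    """Expand a (minx, miny, maxx, maxy) box outward by buffer_dist."""
    minx, miny, maxx, maxy = bbox
    return (minx - buffer_dist, miny - buffer_dist,
            maxx + buffer_dist, maxy + buffer_dist)

def bbox_intersects(a, b):
    """True if two (minx, miny, maxx, maxy) boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

study_area = buffered_bbox((1000, 1000, 2000, 2000))
print(bbox_intersects(study_area, (2200, 1500, 2300, 1600)))  # True: inside buffer
print(bbox_intersects(study_area, (9000, 9000, 9100, 9100)))  # False
```

The real code additionally reprojects the buffered box into the vector's coordinate system and uses a prepared Shapely geometry so the per-feature intersection tests are fast.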
def _build_spatial_index(
base_raster_path, local_model_dir,
tropical_forest_edge_carbon_model_vector_path,
@@ -781,10 +897,9 @@ def _build_spatial_index(
lulc_projection_wkt = pygeoprocessing.get_raster_info(
base_raster_path)['projection_wkt']
with utils._set_gdal_configuration('OGR_ENABLE_PARTIAL_REPROJECTION', 'TRUE'):
pygeoprocessing.reproject_vector(
tropical_forest_edge_carbon_model_vector_path, lulc_projection_wkt,
carbon_model_reproject_path)
pygeoprocessing.reproject_vector(
tropical_forest_edge_carbon_model_vector_path, lulc_projection_wkt,
carbon_model_reproject_path)
model_vector = gdal.OpenEx(carbon_model_reproject_path)
model_layer = model_vector.GetLayer()
@@ -796,17 +911,14 @@
# put all the polygons in the kd_tree because it's fast and simple
for poly_feature in model_layer:
poly_geom = poly_feature.GetGeometryRef()
if poly_geom.IsValid():
poly_centroid = poly_geom.Centroid()
# put in row/col order since rasters are row/col indexed
kd_points.append([poly_centroid.GetY(), poly_centroid.GetX()])
poly_centroid = poly_geom.Centroid()
# put in row/col order since rasters are row/col indexed
kd_points.append([poly_centroid.GetY(), poly_centroid.GetX()])
theta_model_parameters.append([
poly_feature.GetField(feature_id) for feature_id in
['theta1', 'theta2', 'theta3']])
method_model_parameter.append(poly_feature.GetField('method'))
else:
LOGGER.warning(f'skipping invalid geometry {poly_geom}')
theta_model_parameters.append([
poly_feature.GetField(feature_id) for feature_id in
['theta1', 'theta2', 'theta3']])
method_model_parameter.append(poly_feature.GetField('method'))
method_model_parameter = numpy.array(
method_model_parameter, dtype=numpy.int32)
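The spatial index above stores each model polygon's centroid in (row, col), i.e. (y, x), order and later queries the nearest few points (the earlier comment mentions the closest 3). A brute-force stand-in for the kd-tree lookup, illustrative only and stdlib math:

```python
import math

def nearest_centroids(point_yx, centroids_yx, k=3):
    """Return indices of the k centroids closest to point_yx.

    Points are (y, x) pairs, matching the row/col ordering used when
    the kd-tree is built from polygon centroids.
    """
    distances = [
        (math.hypot(p[0] - point_yx[0], p[1] - point_yx[1]), i)
        for i, p in enumerate(centroids_yx)]
    return [i for _, i in sorted(distances)[:k]]

centroids = [(0, 0), (10, 0), (0, 10), (50, 50)]
print(nearest_centroids((1, 1), centroids, k=3))  # [0, 1, 2]
```

A real kd-tree (e.g. `scipy.spatial.cKDTree`) answers the same query in logarithmic rather than linear time per lookup.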


@@ -1117,8 +1117,8 @@ def _calculate_monthly_quick_flow(precip_path, n_events_path, stream_path,
valid_p_mask = ~pygeoprocessing.array_equals_nodata(p_im, p_nodata)
valid_n_mask = ~pygeoprocessing.array_equals_nodata(n_m, n_nodata)
# precip mask: both p_im and n_m are defined and greater than 0
precip_mask = valid_p_mask & valid_n_mask & (p_im > 0) & (n_m > 0)
stream_mask = stream == 1
precip_ok = valid_p_mask & valid_n_mask & (p_im > 0) & (n_m > 0)
on_stream = stream == 1
# stream_nodata is the only input that carries over nodata values from
# the aligned DEM.
valid_mask = (
@@ -1129,7 +1129,7 @@ def _calculate_monthly_quick_flow(precip_path, n_events_path, stream_path,
# QF is defined in terms of three cases:
#
# 1. Where there is no precipitation, QF = 0
# 1. Where precipitation <=0, QF = 0
# (even if stream or s_i is undefined)
#
# 2. Where there is precipitation and we're on a stream, QF = P
@@ -1183,14 +1183,15 @@ def _calculate_monthly_quick_flow(precip_path, n_events_path, stream_path,
# qf_im is the quickflow at pixel i on month m
qf_im = numpy.full(p_im.shape, TARGET_NODATA, dtype=numpy.float32)
# case 1: where there is no precipitation
qf_im[~precip_mask] = 0
# case 1: where precip or n are <=0 but not nodata
case_1_mask = ~precip_ok & valid_p_mask & valid_n_mask
qf_im[case_1_mask] = 0
# case 2: where there is precipitation and we're on a stream
qf_im[precip_mask & stream_mask] = p_im[precip_mask & stream_mask]
qf_im[precip_ok & on_stream] = p_im[precip_ok & on_stream]
# case 3: where there is precipitation and we're not on a stream
case_3_mask = valid_mask & precip_mask & ~stream_mask
case_3_mask = valid_mask & precip_ok & ~on_stream
# for consistent indexing, make a_im the same shape as the other
# arrays even though we only use it in case 3
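The three cases in the diff above can be summarized per pixel: QF = 0 where precipitation or the event count is ≤ 0, QF = P on stream pixels, and the full quickflow equation elsewhere. A per-pixel sketch of just the case selection (illustrative only; the real code works on numpy arrays with boolean masks and handles nodata):

```python
def quickflow_case(p_im, n_m, on_stream):
    """Return which quickflow case applies to one pixel.

    p_im: precipitation at pixel i in month m; n_m: number of rain
    events in month m; on_stream: whether the pixel is on a stream.
    Nodata handling is omitted for brevity.
    """
    if p_im <= 0 or n_m <= 0:
        return 'case 1: QF = 0'
    if on_stream:
        return 'case 2: QF = P'
    return 'case 3: full quickflow equation'

print(quickflow_case(0.0, 5, False))   # case 1: QF = 0
print(quickflow_case(12.0, 5, True))   # case 2: QF = P
print(quickflow_case(12.0, 5, False))  # case 3: full quickflow equation
```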


@ -13,6 +13,8 @@ from natcap.invest import utils
from . import gettext
from .unit_registry import u
from pydantic import ValidationError
LOGGER = logging.getLogger(__name__)
@@ -616,26 +618,54 @@ def describe_arg_from_name(module_name, *arg_keys):
return f'.. _{anchor_name}:\n\n{rst_description}'
def write_metadata_file(datasource_path, spec, lineage_statement, keywords_list):
"""Write a metadata sidecar file for an invest output dataset.
def write_metadata_file(datasource_path, spec, keywords_list,
lineage_statement='', out_workspace=None):
"""Write a metadata sidecar file for an invest dataset.
Create metadata for invest model inputs or outputs, taking care to
preserve existing human-modified attributes.
Note: We do not want to overwrite any existing metadata so if there is
invalid metadata for the datasource (i.e., doesn't pass geometamaker
validation in ``describe``), this function will NOT create new metadata.
Args:
datasource_path (str) - filepath to the invest output
spec (dict) - the invest specification for ``datasource_path``
lineage_statement (str) - string to describe origin of the dataset.
datasource_path (str) - filepath to the data to describe
spec (dict) - the invest specification for ``datasource_path``
keywords_list (list) - sequence of strings
lineage_statement (str, optional) - string to describe origin of
the dataset
out_workspace (str, optional) - where to write metadata if different
from data location
Returns:
None
"""
resource = geometamaker.describe(datasource_path)
def _get_key(key, resource):
"""Map name of actual key in yml from model_spec key name."""
names = {field.name.lower(): field.name
for field in resource.data_model.fields}
return names[key]
try:
resource = geometamaker.describe(datasource_path)
except ValidationError:
LOGGER.debug(
f"Skipping metadata creation for {datasource_path}, as invalid "
"metadata exists.")
return None
    # Don't want the function to fail just because metadata can't be
    # created for an invalid filetype
except ValueError as e:
LOGGER.debug(f"Skipping metadata creation for {datasource_path}: {e}")
return None
resource.set_lineage(lineage_statement)
# a pre-existing metadata doc could have keywords
words = resource.get_keywords()
resource.set_keywords(set(words + keywords_list))
if 'about' in spec:
if 'about' in spec and len(resource.get_description()) < 1:
resource.set_description(spec['about'])
attr_spec = None
if 'columns' in spec:
@@ -644,27 +674,38 @@ def write_metadata_file(datasource_path, spec, lineage_statement, keywords_list)
attr_spec = spec['fields']
if attr_spec:
for key, value in attr_spec.items():
about = value['about'] if 'about' in value else ''
units = format_unit(value['units']) if 'units' in value else ''
try:
resource.set_field_description(
key, description=about, units=units)
# field names in attr_spec are always lowercase, but the
# actual fieldname in the data could be any case because
# invest does not require case-sensitive fieldnames
yaml_key = _get_key(key, resource)
                # Field description only gets set if it's empty, i.e. ''
if len(resource.get_field_description(yaml_key)
.description.strip()) < 1:
about = value['about'] if 'about' in value else ''
resource.set_field_description(yaml_key, description=about)
# units only get set if empty
if len(resource.get_field_description(yaml_key)
.units.strip()) < 1:
units = format_unit(value['units']) if 'units' in value else ''
resource.set_field_description(yaml_key, units=units)
except KeyError as error:
# fields that are in the spec but missing
# from model results because they are conditional.
LOGGER.debug(error)
if 'bands' in spec:
for idx, value in spec['bands'].items():
try:
units = format_unit(spec['bands'][idx]['units'])
except KeyError:
units = ''
resource.set_band_description(idx, units=units)
if len(resource.get_band_description(idx).units) < 1:
try:
units = format_unit(spec['bands'][idx]['units'])
except KeyError:
units = ''
resource.set_band_description(idx, units=units)
resource.write()
resource.write(workspace=out_workspace)
def generate_metadata(model_module, args_dict):
def generate_metadata_for_outputs(model_module, args_dict):
"""Create metadata for all items in an invest model output workspace.
Args:
@@ -698,7 +739,7 @@ def generate_metadata(model_module, args_dict):
if os.path.exists(full_path):
try:
write_metadata_file(
full_path, spec_data, lineage_statement, keywords)
full_path, spec_data, keywords, lineage_statement)
except ValueError as error:
# Some unsupported file formats, e.g. html
LOGGER.debug(error)
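The updated call above reflects the new parameter order, where `lineage_statement` moved from required positional to optional keyword. A stub sketch (hypothetical body that just echoes its arguments) of why old-order callers need updating:

```python
# Stub of the new signature; the real function writes metadata via
# geometamaker, this stand-in only returns what it was passed.
def write_metadata_file(datasource_path, spec, keywords_list,
                        lineage_statement='', out_workspace=None):
    return (datasource_path, spec, keywords_list,
            lineage_statement, out_workspace)

# a caller updated to the new argument order
result = write_metadata_file(
    'output.tif', {'about': 'a raster'}, ['invest'], 'model run lineage')
```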


@@ -805,14 +805,23 @@ def check_spatial_overlap(spatial_filepaths_list,
return MESSAGES['NO_PROJECTION'].format(filepath=filepath)
if different_projections_ok:
bounding_box = pygeoprocessing.transform_bounding_box(
info['bounding_box'], info['projection_wkt'], wgs84_wkt)
try:
bounding_box = pygeoprocessing.transform_bounding_box(
info['bounding_box'], info['projection_wkt'], wgs84_wkt)
except (ValueError, RuntimeError) as err:
LOGGER.debug(err)
LOGGER.warning(
f'Skipping spatial overlap check for {filepath}. '
'Bounding box cannot be transformed to EPSG:4326')
continue
else:
bounding_box = info['bounding_box']
if all([numpy.isinf(coord) for coord in bounding_box]):
LOGGER.warning(
'Skipping infinite bounding box for file %s', filepath)
f'Skipping spatial overlap check for {filepath} '
f'because of infinite bounding box {bounding_box}')
continue
bounding_boxes.append(bounding_box)
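The filtering logic above now skips both untransformable and infinite bounding boxes before the overlap check. A self-contained sketch of the skip behavior with made-up data (the real code gets boxes from pygeoprocessing raster/vector info and may transform them to EPSG:4326 first):

```python
import numpy

# hypothetical candidate bounding boxes keyed by filepath
candidates = {
    'a.tif': [100.0, 40.0, 101.0, 41.0],
    'b.tif': [numpy.inf, numpy.inf, numpy.inf, numpy.inf],
}
bounding_boxes = []
for filepath, bounding_box in candidates.items():
    if all(numpy.isinf(coord) for coord in bounding_box):
        # mirrors the warning-and-skip branch in the diff
        continue
    bounding_boxes.append(bounding_box)
```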


@@ -776,7 +776,7 @@ def execute(args):
target_path_list=[wind_data_pickle_path],
task_name='compute_density_harvested_fields')
if 'aoi_vector_path' in args:
if 'aoi_vector_path' in args and args['aoi_vector_path'] != '':
LOGGER.info('AOI Provided')
aoi_vector_path = args['aoi_vector_path']
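The stricter condition above treats an AOI key that is present but empty the same as a missing key. As a small sketch (hypothetical helper name):

```python
def aoi_provided(args):
    # present-and-nonempty check, matching the diff's condition
    return 'aoi_vector_path' in args and args['aoi_vector_path'] != ''
```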


@@ -0,0 +1,395 @@
"""Helper module for generating test data related to Crop Production tests."""
import pandas
def sample_nutrient_df():
"""Generate a sample nutrient DataFrame for crops.
This function creates a small DataFrame containing example nutrient
and production data for crops such as corn and soybean. It can be
used for testing nutrient calculations and validating model outputs.
Returns:
pandas.DataFrame: Nutrient and production data indexed by crop name.
"""
return pandas.DataFrame([
{'crop': 'corn', 'area (ha)': 21.0, 'production_observed': 0.2,
'percentrefuse': 7, 'protein': 42., 'lipid': 8, 'energy': 476.,
'ca': 27.0, 'fe': 15.7, 'mg': 280.0, 'ph': 704.0, 'k': 1727.0,
'na': 2.0, 'zn': 4.9, 'cu': 1.9, 'fl': 8, 'mn': 2.9, 'se': 0.1,
'vita': 3.0, 'betac': 16.0, 'alphac': 2.30, 'vite': 0.8,
'crypto': 1.6, 'lycopene': 0.36, 'lutein': 63.0, 'betat': 0.5,
'gammat': 2.1, 'deltat': 1.9, 'vitc': 6.8, 'thiamin': 0.4,
'riboflavin': 1.8, 'niacin': 8.2, 'pantothenic': 0.9,
'vitb6': 1.4, 'folate': 385.0, 'vitb12': 2.0, 'vitk': 41.0},
{'crop': 'soybean', 'area (ha)': 5., 'production_observed': 4.,
'percentrefuse': 9, 'protein': 33., 'lipid': 2., 'energy': 99.,
'ca': 257., 'fe': 15.7, 'mg': 280., 'ph': 704.0, 'k': 197.0,
'na': 2., 'zn': 4.9, 'cu': 1.6, 'fl': 3., 'mn': 5.2, 'se': 0.3,
'vita': 3.0, 'betac': 16.0, 'alphac': 1.0, 'vite': 0.8,
'crypto': 0.6, 'lycopene': 0.3, 'lutein': 61.0, 'betat': 0.5,
'gammat': 2.3, 'deltat': 1.2, 'vitc': 3.0, 'thiamin': 0.42,
'riboflavin': 0.82, 'niacin': 12.2, 'pantothenic': 0.92,
'vitb6': 5.4, 'folate': 305., 'vitb12': 3., 'vitk': 42.},
]).set_index('crop')
def tabulate_regr_results_table():
"""Generate expected output for unit tests of tabulated regression results.
This function returns a DataFrame that represents the expected output
of the `tabulate_regression_results` function, with the parameters
defined in the unit test `test_tabulate_regression_results`.
Returns:
pandas.DataFrame: Expected tabulated results for regression output.
"""
return pandas.DataFrame([
{'crop': 'corn', 'area (ha)': 20.0,
'production_observed': 80.0, 'production_modeled': 40.0,
'protein_modeled': 15624000.0, 'protein_observed': 31248000.0,
'lipid_modeled': 2976000.0, 'lipid_observed': 5952000.0,
'energy_modeled': 177072000.0, 'energy_observed': 354144000.0,
'ca_modeled': 10044000.0, 'ca_observed': 20088000.0,
'fe_modeled': 5840400.0, 'fe_observed': 11680800.0,
'mg_modeled': 104160000.0, 'mg_observed': 208320000.0,
'ph_modeled': 261888000.0, 'ph_observed': 523776000.0,
'k_modeled': 642444000.0, 'k_observed': 1284888000.0,
'na_modeled': 744000.0, 'na_observed': 1488000.0,
'zn_modeled': 1822800.0, 'zn_observed': 3645600.0,
'cu_modeled': 706800.0, 'cu_observed': 1413600.0,
'fl_modeled': 2976000.0, 'fl_observed': 5952000.0,
'mn_modeled': 1078800.0, 'mn_observed': 2157600.0,
'se_modeled': 37200.0, 'se_observed': 74400.0,
'vita_modeled': 1116000.0, 'vita_observed': 2232000.0,
'betac_modeled': 5952000.0, 'betac_observed': 11904000.0,
'alphac_modeled': 855600.0, 'alphac_observed': 1711200.0,
'vite_modeled': 297600.0, 'vite_observed': 595200.0,
'crypto_modeled': 595200.0, 'crypto_observed': 1190400.0,
'lycopene_modeled': 133920.0, 'lycopene_observed': 267840.0,
'lutein_modeled': 23436000.0, 'lutein_observed': 46872000.0,
'betat_modeled': 186000.0, 'betat_observed': 372000.0,
'gammat_modeled': 781200.0, 'gammat_observed': 1562400.0,
'deltat_modeled': 706800.0, 'deltat_observed': 1413600.0,
'vitc_modeled': 2529600.0, 'vitc_observed': 5059200.0,
'thiamin_modeled': 148800.0, 'thiamin_observed': 297600.0,
'riboflavin_modeled': 669600.0, 'riboflavin_observed': 1339200.0,
'niacin_modeled': 3050400.0, 'niacin_observed': 6100800.0,
'pantothenic_modeled': 334800.0, 'pantothenic_observed': 669600.0,
'vitb6_modeled': 520800.0, 'vitb6_observed': 1041600.0,
'folate_modeled': 143220000.0, 'folate_observed': 286440000.0,
'vitb12_modeled': 744000.0, 'vitb12_observed': 1488000.0,
'vitk_modeled': 15252000.0, 'vitk_observed': 30504000.0},
{'crop': 'soybean', 'area (ha)': 40.0,
'production_observed': 120.0, 'production_modeled': 70.0,
'protein_modeled': 21021000.0, 'protein_observed': 36036000.0,
'lipid_modeled': 1274000.0, 'lipid_observed': 2184000.0,
'energy_modeled': 63063000.0, 'energy_observed': 108108000.0,
'ca_modeled': 163709000.0, 'ca_observed': 280644000.0,
'fe_modeled': 10000900.0, 'fe_observed': 17144400.0,
'mg_modeled': 178360000.0, 'mg_observed': 305760000.0,
'ph_modeled': 448448000.0, 'ph_observed': 768768000.0,
'k_modeled': 125489000.0, 'k_observed': 215124000.0,
'na_modeled': 1274000.0, 'na_observed': 2184000.0,
'zn_modeled': 3121300.0, 'zn_observed': 5350800.0,
'cu_modeled': 1019200.0, 'cu_observed': 1747200.0,
'fl_modeled': 1911000.0, 'fl_observed': 3276000.0,
'mn_modeled': 3312400.0, 'mn_observed': 5678400.0,
'se_modeled': 191100.0, 'se_observed': 327600.0,
'vita_modeled': 1911000.0, 'vita_observed': 3276000.0,
'betac_modeled': 10192000.0, 'betac_observed': 17472000.0,
'alphac_modeled': 637000.0, 'alphac_observed': 1092000.0,
'vite_modeled': 509600.0, 'vite_observed': 873600.0,
'crypto_modeled': 382200.0, 'crypto_observed': 655200.0,
'lycopene_modeled': 191100.0, 'lycopene_observed': 327600.0,
'lutein_modeled': 38857000.0, 'lutein_observed': 66612000.0,
'betat_modeled': 318500.0, 'betat_observed': 546000.0,
'gammat_modeled': 1465100.0, 'gammat_observed': 2511600.0,
'deltat_modeled': 764400.0, 'deltat_observed': 1310400.0,
'vitc_modeled': 1911000.0, 'vitc_observed': 3276000.0,
'thiamin_modeled': 267540.0, 'thiamin_observed': 458640.0,
'riboflavin_modeled': 522340.0, 'riboflavin_observed': 895440.0,
'niacin_modeled': 7771400.0, 'niacin_observed': 13322400.0,
'pantothenic_modeled': 586040.0, 'pantothenic_observed': 1004640.0,
'vitb6_modeled': 3439800.0, 'vitb6_observed': 5896800.0,
'folate_modeled': 194285000.0, 'folate_observed': 333060000.0,
'vitb12_modeled': 1911000.0, 'vitb12_observed': 3276000.0,
'vitk_modeled': 26754000.0, 'vitk_observed': 45864000.0}])
def tabulate_pctl_results_table():
"""Generate expected output for unit tests of tabulated percentile results.
This function returns the expected DataFrame output of the
`tabulate_percentile_results` function, with the parameters
defined in the unit test `test_tabulate_percentile_results`.
Returns:
pandas.DataFrame: Expected tabulated results for percentile output.
"""
return pandas.DataFrame({
"crop": ["corn", "soybean"], "area (ha)": [2, 4],
"production_observed": [4, 7], "production_25": [1.25, 2.25],
"production_50": [2.5, 4.5], "production_75": [3.75, 6.75],
"protein_25": [488250, 675675], "protein_50": [976500, 1351350],
"protein_75": [1464750, 2027025],
"protein_observed": [1562400, 2102100],
"lipid_25": [93000, 40950], "lipid_50": [186000, 81900],
"lipid_75": [279000, 122850], "lipid_observed": [297600, 127400],
"energy_25": [5533500, 2027025], "energy_50": [11067000, 4054050],
"energy_75": [16600500, 6081075],
"energy_observed": [17707200, 6306300],
"ca_25": [313875, 5262075], "ca_50": [627750, 10524150],
"ca_75": [941625, 15786225], "ca_observed": [1004400, 16370900],
"fe_25": [182512.5, 321457.5], "fe_50": [365025, 642915],
"fe_75": [547537.5, 964372.5], "fe_observed": [584040, 1000090],
"mg_25": [3255000, 5733000], "mg_50": [6510000, 11466000],
"mg_75": [9765000, 17199000], "mg_observed": [10416000, 17836000],
"ph_25": [8184000, 14414400], "ph_50": [16368000, 28828800],
"ph_75": [24552000, 43243200], "ph_observed": [26188800, 44844800],
"k_25": [20076375, 4033575], "k_50": [40152750, 8067150],
"k_75": [60229125, 12100725], "k_observed": [64244400, 12548900],
"na_25": [23250, 40950], "na_50": [46500, 81900],
"na_75": [69750, 122850], "na_observed": [74400, 127400],
"zn_25": [56962.5, 100327.5], "zn_50": [113925, 200655],
"zn_75": [170887.5, 300982.5], "zn_observed": [182280, 312130],
"cu_25": [22087.5, 32760], "cu_50": [44175, 65520],
"cu_75": [66262.5, 98280], "cu_observed": [70680, 101920],
"fl_25": [93000, 61425], "fl_50": [186000, 122850],
"fl_75": [279000, 184275], "fl_observed": [297600, 191100],
"mn_25": [33712.5, 106470], "mn_50": [67425, 212940],
"mn_75": [101137.5, 319410], "mn_observed": [107880, 331240],
"se_25": [1162.5, 6142.5], "se_50": [2325, 12285],
"se_75": [3487.5, 18427.5], "se_observed": [3720, 19110],
"vita_25": [34875, 61425], "vita_50": [69750, 122850],
"vita_75": [104625, 184275], "vita_observed": [111600, 191100],
"betac_25": [186000, 327600], "betac_50": [372000, 655200],
"betac_75": [558000, 982800], "betac_observed": [595200, 1019200],
"alphac_25": [26737.5, 20475], "alphac_50": [53475, 40950],
"alphac_75": [80212.5, 61425], "alphac_observed": [85560, 63700],
"vite_25": [9300, 16380], "vite_50": [18600, 32760],
"vite_75": [27900, 49140], "vite_observed": [29760, 50960],
"crypto_25": [18600, 12285], "crypto_50": [37200, 24570],
"crypto_75": [55800, 36855], "crypto_observed": [59520, 38220],
"lycopene_25": [4185, 6142.5], "lycopene_50": [8370, 12285],
"lycopene_75": [12555, 18427.5], "lycopene_observed": [13392, 19110],
"lutein_25": [732375, 1248975], "lutein_50": [1464750, 2497950],
"lutein_75": [2197125, 3746925], "lutein_observed": [2343600, 3885700],
"betat_25": [5812.5, 10237.5], "betat_50": [11625, 20475],
"betat_75": [17437.5, 30712.5], "betat_observed": [18600, 31850],
"gammat_25": [24412.5, 47092.5], "gammat_50": [48825, 94185],
"gammat_75": [73237.5, 141277.5], "gammat_observed": [78120, 146510],
"deltat_25": [22087.5, 24570], "deltat_50": [44175, 49140],
"deltat_75": [66262.5, 73710], "deltat_observed": [70680, 76440],
"vitc_25": [79050, 61425], "vitc_50": [158100, 122850],
"vitc_75": [237150, 184275], "vitc_observed": [252960, 191100],
"thiamin_25": [4650, 8599.5], "thiamin_50": [9300, 17199],
"thiamin_75": [13950, 25798.5], "thiamin_observed": [14880, 26754],
"riboflavin_25": [20925, 16789.5], "riboflavin_50": [41850, 33579],
"riboflavin_75": [62775, 50368.5],
"riboflavin_observed": [66960, 52234], "niacin_25": [95325, 249795],
"niacin_50": [190650, 499590], "niacin_75": [285975, 749385],
"niacin_observed": [305040, 777140],
"pantothenic_25": [10462.5, 18837], "pantothenic_50": [20925, 37674],
"pantothenic_75": [31387.5, 56511],
"pantothenic_observed": [33480, 58604],
"vitb6_25": [16275, 110565], "vitb6_50": [32550, 221130],
"vitb6_75": [48825, 331695], "vitb6_observed": [52080, 343980],
"folate_25": [4475625, 6244875], "folate_50": [8951250, 12489750],
"folate_75": [13426875, 18734625],
"folate_observed": [14322000, 19428500],
"vitb12_25": [23250, 61425], "vitb12_50": [46500, 122850],
"vitb12_75": [69750, 184275], "vitb12_observed": [74400, 191100],
"vitk_25": [476625, 859950], "vitk_50": [953250, 1719900],
"vitk_75": [1429875, 2579850], "vitk_observed": [1525200, 2675400]
})
def aggregate_regr_polygons_table():
"""Generate expected aggregated nutrient results by polygon (e.g., FID).
This function returns the expected DataFrame output of the
`aggregate_regression_results_to_polygons` function for use in the
`test_aggregate_regression_results_to_polygons` unit test. It
summarizes nutrient production values for multiple crops grouped by
polygon identifiers (FIDs).
Returns:
pandas.DataFrame: Aggregated modeled and observed values by FID.
"""
return pandas.DataFrame([
{"FID": 0, "corn_modeled": 10, "corn_observed": 40,
"soybean_modeled": 20, "soybean_observed": 50,
"protein_modeled": 9912000, "protein_observed": 30639000,
"lipid_modeled": 1108000, "lipid_observed": 3886000,
"energy_modeled": 62286000, "energy_observed": 222117000,
"ca_modeled": 49285000, "ca_observed": 126979000,
"fe_modeled": 4317500, "fe_observed": 12983900,
"mg_modeled": 77000000, "mg_observed": 231560000,
"ph_modeled": 193600000, "ph_observed": 582208000,
"k_modeled": 196465000, "k_observed": 732079000,
"na_modeled": 550000, "na_observed": 1654000,
"zn_modeled": 1347500, "zn_observed": 4052300,
"cu_modeled": 467900, "cu_observed": 1434800,
"fl_modeled": 1290000, "fl_observed": 4341000,
"mn_modeled": 1216100, "mn_observed": 3444800,
"se_modeled": 63900, "se_observed": 173700,
"vita_modeled": 825000, "vita_observed": 2481000,
"betac_modeled": 4400000, "betac_observed": 13232000,
"alphac_modeled": 395900, "alphac_observed": 1310600,
"vite_modeled": 220000, "vite_observed": 661600,
"crypto_modeled": 258000, "crypto_observed": 868200,
"lycopene_modeled": 88080, "lycopene_observed": 270420,
"lutein_modeled": 16961000, "lutein_observed": 51191000,
"betat_modeled": 137500, "betat_observed": 413500,
"gammat_modeled": 613900, "gammat_observed": 1827700,
"deltat_modeled": 395100, "deltat_observed": 1252800,
"vitc_modeled": 1178400, "vitc_observed": 3894600,
"thiamin_modeled": 113640, "thiamin_observed": 339900,
"riboflavin_modeled": 316640, "riboflavin_observed": 1042700,
"niacin_modeled": 2983000, "niacin_observed": 8601400,
"pantothenic_modeled": 251140, "pantothenic_observed": 753400,
"vitb6_modeled": 1113000, "vitb6_observed": 2977800,
"folate_modeled": 91315000, "folate_observed": 281995000,
"vitb12_modeled": 732000, "vitb12_observed": 2109000,
"vitk_modeled": 11457000, "vitk_observed": 34362000},
{"FID": 1, "corn_modeled": 40, "corn_observed": 80,
"soybean_modeled": 70, "soybean_observed": 120,
"protein_modeled": 36645000, "protein_observed": 67284000,
"lipid_modeled": 4250000, "lipid_observed": 8136000,
"energy_modeled": 240135000, "energy_observed": 462252000,
"ca_modeled": 173753000, "ca_observed": 300732000,
"fe_modeled": 15841300, "fe_observed": 28825200,
"mg_modeled": 282520000, "mg_observed": 514080000,
"ph_modeled": 710336000, "ph_observed": 1292544000,
"k_modeled": 767933000, "k_observed": 1500012000,
"na_modeled": 2018000, "na_observed": 3672000,
"zn_modeled": 4944100, "zn_observed": 8996400,
"cu_modeled": 1726000, "cu_observed": 3160800,
"fl_modeled": 4887000, "fl_observed": 9228000,
"mn_modeled": 4391200, "mn_observed": 7836000,
"se_modeled": 228300, "se_observed": 402000,
"vita_modeled": 3027000, "vita_observed": 5508000,
"betac_modeled": 16144000, "betac_observed": 29376000,
"alphac_modeled": 1492600, "alphac_observed": 2803200,
"vite_modeled": 807200, "vite_observed": 1468800,
"crypto_modeled": 977400, "crypto_observed": 1845600,
"lycopene_modeled": 325020, "lycopene_observed": 595440,
"lutein_modeled": 62293000, "lutein_observed": 113484000,
"betat_modeled": 504500, "betat_observed": 918000,
"gammat_modeled": 2246300, "gammat_observed": 4074000,
"deltat_modeled": 1471200, "deltat_observed": 2724000,
"vitc_modeled": 4440600, "vitc_observed": 8335200,
"thiamin_modeled": 416340, "thiamin_observed": 756240,
"riboflavin_modeled": 1191940, "riboflavin_observed": 2234640,
"niacin_modeled": 10821800, "niacin_observed": 19423200,
"pantothenic_modeled": 920840, "pantothenic_observed": 1674240,
"vitb6_modeled": 3960600, "vitb6_observed": 6938400,
"folate_modeled": 337505000, "folate_observed": 619500000,
"vitb12_modeled": 2655000, "vitb12_observed": 4764000,
"vitk_modeled": 42006000, "vitk_observed": 76368000}
], dtype=float)
def aggregate_pctl_polygons_table():
"""Generate expected aggregated nutrient results by polygon (e.g., FID).
This function returns the expected DataFrame output of the
`aggregate_to_polygons` function for use in the
`test_aggregate_to_polygons` unit test. The DataFrame contains observed
and modeled (at different percentiles) crop yields and nutrient totals,
aggregated by polygon (FID).
Returns:
pandas.DataFrame: A table with observed and percentile-based modeled
values for crop yield and nutrient totals, grouped by FID.
"""
data = [
[0, 0.25, 0.5, 0.75, 1, 0.5, 1, 1.5, 2, 247800, 495600, 743400,
991200, 27700, 55400, 83100, 110800, 1557150, 3114300, 4671450,
6228600, 1232125, 2464250, 3696375, 4928500, 107937.5, 215875,
323812.5, 431750, 1925000, 3850000, 5775000, 7700000, 4840000,
9680000, 14520000, 19360000, 4911625, 9823250, 14734875,
19646500, 13750, 27500, 41250, 55000, 33687.5, 67375,
101062.5, 134750, 11697.5, 23395, 35092.5, 46790, 32250,
64500, 96750, 129000, 30402.5, 60805, 91207.5, 121610, 1597.5,
3195, 4792.5, 6390, 20625, 41250, 61875, 82500, 110000, 220000,
330000, 440000, 9897.5, 19795, 29692.5, 39590, 5500, 11000,
16500, 22000, 6450, 12900, 19350, 25800, 2202, 4404, 6606,
8808, 424025, 848050, 1272075, 1696100, 3437.5, 6875, 10312.5,
13750, 15347.5, 30695, 46042.5, 61390, 9877.5, 19755, 29632.5,
39510, 29460, 58920, 88380, 117840, 2841, 5682, 8523, 11364,
7916, 15832, 23748, 31664, 74575, 149150, 223725, 298300,
6278.5, 12557, 18835.5, 25114, 27825, 55650, 83475, 111300,
2282875, 4565750, 6848625, 9131500, 18300, 36600, 54900,
73200, 286425, 572850, 859275, 1145700],
[1, 1.25, 2.5, 3.75, 4, 2.25, 4.5, 6.75, 7, 1163925, 2327850,
3491775, 3664500, 133950, 267900, 401850, 425000, 7560525,
15121050, 22681575, 24013500, 5575950, 11151900, 16727850,
17375300, 503970, 1007940, 1511910, 1584130, 8988000,
17976000, 26964000, 28252000, 22598400, 45196800, 67795200,
71033600, 24109950, 48219900, 72329850, 76793300, 64200,
128400, 192600, 201800, 157290, 314580, 471870, 494410,
54847.5, 109695, 164542.5, 172600, 154425, 308850, 463275,
488700, 140182.5, 280365, 420547.5, 439120, 7305, 14610,
21915, 22830, 96300, 192600, 288900, 302700, 513600, 1027200,
1540800, 1614400, 47212.5, 94425, 141637.5, 149260, 25680,
51360, 77040, 80720, 30885, 61770, 92655, 97740, 10327.5,
20655, 30982.5, 32502, 1981350, 3962700, 5944050, 6229300,
16050, 32100, 48150, 50450, 71505, 143010, 214515, 224630,
46657.5, 93315, 139972.5, 147120, 140475, 280950, 421425,
444060, 13249.5, 26499, 39748.5, 41634, 37714.5, 75429,
113143.5, 119194, 345120, 690240, 1035360, 1082180, 29299.5,
58599, 87898.5, 92084, 126840, 253680, 380520, 396060,
10720500, 21441000, 32161500, 33750500, 84675, 169350, 254025,
265500, 1336575, 2673150, 4009725, 4200600]
]
columns = [
"FID", "corn_25", "corn_50", "corn_75", "corn_observed",
"soybean_25", "soybean_50", "soybean_75", "soybean_observed",
"protein_25", "protein_50", "protein_75", "protein_observed",
"lipid_25", "lipid_50", "lipid_75", "lipid_observed",
"energy_25", "energy_50", "energy_75", "energy_observed",
"ca_25", "ca_50", "ca_75", "ca_observed", "fe_25", "fe_50",
"fe_75", "fe_observed", "mg_25", "mg_50", "mg_75",
"mg_observed", "ph_25", "ph_50", "ph_75", "ph_observed",
"k_25", "k_50", "k_75", "k_observed", "na_25", "na_50",
"na_75", "na_observed", "zn_25", "zn_50", "zn_75",
"zn_observed", "cu_25", "cu_50", "cu_75", "cu_observed",
"fl_25", "fl_50", "fl_75", "fl_observed", "mn_25", "mn_50",
"mn_75", "mn_observed", "se_25", "se_50", "se_75",
"se_observed", "vita_25", "vita_50", "vita_75",
"vita_observed", "betac_25", "betac_50", "betac_75",
"betac_observed", "alphac_25", "alphac_50", "alphac_75",
"alphac_observed", "vite_25", "vite_50", "vite_75",
"vite_observed", "crypto_25", "crypto_50", "crypto_75",
"crypto_observed", "lycopene_25", "lycopene_50", "lycopene_75",
"lycopene_observed", "lutein_25", "lutein_50", "lutein_75",
"lutein_observed", "betat_25", "betat_50", "betat_75",
"betat_observed", "gammat_25", "gammat_50", "gammat_75",
"gammat_observed", "deltat_25", "deltat_50", "deltat_75",
"deltat_observed", "vitc_25", "vitc_50", "vitc_75",
"vitc_observed", "thiamin_25", "thiamin_50", "thiamin_75",
"thiamin_observed", "riboflavin_25", "riboflavin_50",
"riboflavin_75", "riboflavin_observed", "niacin_25",
"niacin_50", "niacin_75", "niacin_observed", "pantothenic_25",
"pantothenic_50", "pantothenic_75", "pantothenic_observed",
"vitb6_25", "vitb6_50", "vitb6_75", "vitb6_observed",
"folate_25", "folate_50", "folate_75", "folate_observed",
"vitb12_25", "vitb12_50", "vitb12_75", "vitb12_observed",
"vitk_25", "vitk_50", "vitk_75", "vitk_observed"
]
return pandas.DataFrame(data, columns=columns, dtype=float)
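The helper above builds its expected table from row-wise data plus an explicit column list, with `dtype=float` so integer literals compare equal to float values read back in tests. A tiny illustration of the same pattern with made-up data:

```python
import pandas

data = [[0, 10, 40], [1, 40, 80]]
columns = ['FID', 'corn_modeled', 'corn_observed']
# dtype=float casts every integer literal to float64 on construction
df = pandas.DataFrame(data, columns=columns, dtype=float)
```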


@@ -18,7 +18,7 @@ gdal.UseExceptions()
def make_simple_raster(base_raster_path, fill_val, nodata_val):
"""Create a 10x10 raster on designated path with fill value.
"""Create a 10x10 int32 raster on designated path with fill value.
Args:
base_raster_path (str): the raster path for making the new raster.
@@ -26,32 +26,25 @@ def make_simple_raster(base_raster_path, fill_val, nodata_val):
nodata_val (int or None): for defining a band's nodata value.
Returns:
lulc_path (str): the path of the raster file.
None.
"""
srs = osr.SpatialReference()
srs.ImportFromEPSG(26910) # UTM Zone 10N
projection_wkt = srs.ExportToWkt()
# origin hand-picked for this epsg:
geotransform = [461261, 1.0, 0.0, 4923265, 0.0, -1.0]
origin = (461261, 4923265)
n = 10
gtiff_driver = gdal.GetDriverByName('GTiff')
new_raster = gtiff_driver.Create(
base_raster_path, n, n, 1, gdal.GDT_Int32, options=[
'TILED=YES', 'BIGTIFF=YES', 'COMPRESS=LZW',
'BLOCKXSIZE=16', 'BLOCKYSIZE=16'])
new_raster.SetProjection(projection_wkt)
new_raster.SetGeoTransform(geotransform)
new_band = new_raster.GetRasterBand(1)
array = numpy.empty((n, n))
array = numpy.empty((n, n), dtype=numpy.int32)
array.fill(fill_val)
new_band.WriteArray(array)
if nodata_val is not None:
new_band.SetNoDataValue(nodata_val)
new_raster.FlushCache()
new_band = None
new_raster = None
pixel_size = (1, -1)
pygeoprocessing.numpy_array_to_raster(
array, nodata_val, pixel_size, origin, projection_wkt,
base_raster_path)
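The explicit `dtype=numpy.int32` in the refactor above matters: `numpy.empty` defaults to float64, while the raster band is created as int32, so building the fill array with a matching dtype keeps the in-memory array and the band type consistent. A minimal demonstration:

```python
import numpy

n = 10
# explicit dtype, as in the fixed helper; .fill writes the fill value
array = numpy.empty((n, n), dtype=numpy.int32)
array.fill(5)
```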
def assert_raster_equal_value(base_raster_path, val_to_compare):


@@ -30,6 +30,7 @@ def _get_pixels_per_hectare(raster_path):
Returns:
A float representing the number of pixels per hectare.
"""
raster_info = pygeoprocessing.get_raster_info(raster_path)
pixel_area = abs(numpy.prod(raster_info['pixel_size']))
@@ -37,12 +38,14 @@ def _get_pixels_per_hectare(raster_path):
def make_aggregate_vector(path_to_shp):
"""
Generate shapefile with two overlapping polygons
"""Generate shapefile with two overlapping polygons.
Args:
path_to_shp (str): path to store watershed results vector
Outputs:
path_to_shp (str): path to store results vector
Returns:
None
"""
# (xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)
shapely_geometry_list = [
@@ -71,10 +74,13 @@ def make_aggregate_vector(path_to_shp):
def make_simple_raster(base_raster_path, array):
"""Create a raster on designated path with arbitrary values.
Args:
base_raster_path (str): the raster path for making the new raster.
Returns:
None.
None
"""
# UTM Zone 10N
srs = osr.SpatialReference()
@@ -90,31 +96,6 @@ def make_simple_raster(base_raster_path, array):
base_raster_path)
def create_nutrient_df():
"""Creates a nutrient DataFrame for testing."""
return pandas.DataFrame([
{'crop': 'corn', 'area (ha)': 21.0, 'production_observed': 0.2,
'percentrefuse': 7, 'protein': 42., 'lipid': 8, 'energy': 476.,
'ca': 27.0, 'fe': 15.7, 'mg': 280.0, 'ph': 704.0, 'k': 1727.0,
'na': 2.0, 'zn': 4.9, 'cu': 1.9, 'fl': 8, 'mn': 2.9, 'se': 0.1,
'vita': 3.0, 'betac': 16.0, 'alphac': 2.30, 'vite': 0.8,
'crypto': 1.6, 'lycopene': 0.36, 'lutein': 63.0, 'betat': 0.5,
'gammat': 2.1, 'deltat': 1.9, 'vitc': 6.8, 'thiamin': 0.4,
'riboflavin': 1.8, 'niacin': 8.2, 'pantothenic': 0.9,
'vitb6': 1.4, 'folate': 385.0, 'vitb12': 2.0, 'vitk': 41.0},
{'crop': 'soybean', 'area (ha)': 5., 'production_observed': 4.,
'percentrefuse': 9, 'protein': 33., 'lipid': 2., 'energy': 99.,
'ca': 257., 'fe': 15.7, 'mg': 280., 'ph': 704.0, 'k': 197.0,
'na': 2., 'zn': 4.9, 'cu': 1.6, 'fl': 3., 'mn': 5.2, 'se': 0.3,
'vita': 3.0, 'betac': 16.0, 'alphac': 1.0, 'vite': 0.8,
'crypto': 0.6, 'lycopene': 0.3, 'lutein': 61.0, 'betat': 0.5,
'gammat': 2.3, 'deltat': 1.2, 'vitc': 3.0, 'thiamin': 0.42,
'riboflavin': 0.82, 'niacin': 12.2, 'pantothenic': 0.92,
'vitb6': 5.4, 'folate': 305., 'vitb12': 3., 'vitk': 42.},
]).set_index('crop')
def _create_crop_rasters(output_dir, crop_names, file_suffix):
"""Creates raster files for test setup."""
_OBSERVED_PRODUCTION_FILE_PATTERN = os.path.join(
@@ -138,6 +119,33 @@ def _create_crop_pctl_rasters(output_dir, crop_names, file_suffix):
make_simple_raster(crop_production_raster_path, crop_array)
def _create_crop_pctl_rasters(output_dir, crop_names, file_suffix, pctls):
"""Creates crop percentile raster files for test setup."""
_OBSERVED_PRODUCTION_FILE_PATTERN = os.path.join(
'.', '%s_observed_production%s.tif')
_CROP_PRODUCTION_FILE_PATTERN = os.path.join(
'.', '%s_%s_production%s.tif')
for i, crop in enumerate(crop_names):
observed_yield_path = os.path.join(
output_dir,
_OBSERVED_PRODUCTION_FILE_PATTERN % (crop, file_suffix))
# Create arbitrary raster arrays
observed_array = numpy.array(
[[i, 1], [i*2, 3]], dtype=numpy.int16)
make_simple_raster(observed_yield_path, observed_array)
for pctl in pctls:
crop_production_raster_path = os.path.join(
output_dir,
_CROP_PRODUCTION_FILE_PATTERN % (crop, pctl, file_suffix))
crop_array = numpy.array(
[[i, 1], [i*3, 4]], dtype=numpy.int16) * float(pctl[-2:])/100
make_simple_raster(crop_production_raster_path, crop_array)
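The percentile scaling above relies on labels that end in the percentile number, so `pctl[-2:]` yields the digits to convert into a multiplier. A small sketch with an illustrative label and array:

```python
import numpy

base = numpy.array([[0, 1], [0, 4]], dtype=numpy.int16)
pctl = 'yield_25'  # hypothetical percentile label
# last two characters -> 25 -> scale by 0.25
scaled = base * float(pctl[-2:]) / 100
```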
class CropProductionTests(unittest.TestCase):
"""Tests for the Crop Production model."""
@@ -626,82 +634,10 @@ class CropProductionTests(unittest.TestCase):
"""Test `tabulate_regression_results`"""
from natcap.invest.crop_production_regression import \
tabulate_regression_results
from crop_production.data_helpers import sample_nutrient_df
from crop_production.data_helpers import tabulate_regr_results_table
def _create_expected_results():
"""Creates the expected results DataFrame."""
return pandas.DataFrame([
{'crop': 'corn', 'area (ha)': 20.0,
'production_observed': 80.0, 'production_modeled': 40.0,
'protein_modeled': 15624000.0, 'protein_observed': 31248000.0,
'lipid_modeled': 2976000.0, 'lipid_observed': 5952000.0,
'energy_modeled': 177072000.0, 'energy_observed': 354144000.0,
'ca_modeled': 10044000.0, 'ca_observed': 20088000.0,
'fe_modeled': 5840400.0, 'fe_observed': 11680800.0,
'mg_modeled': 104160000.0, 'mg_observed': 208320000.0,
'ph_modeled': 261888000.0, 'ph_observed': 523776000.0,
'k_modeled': 642444000.0, 'k_observed': 1284888000.0,
'na_modeled': 744000.0, 'na_observed': 1488000.0,
'zn_modeled': 1822800.0, 'zn_observed': 3645600.0,
'cu_modeled': 706800.0, 'cu_observed': 1413600.0,
'fl_modeled': 2976000.0, 'fl_observed': 5952000.0,
'mn_modeled': 1078800.0, 'mn_observed': 2157600.0,
'se_modeled': 37200.0, 'se_observed': 74400.0,
'vita_modeled': 1116000.0, 'vita_observed': 2232000.0,
'betac_modeled': 5952000.0, 'betac_observed': 11904000.0,
'alphac_modeled': 855600.0, 'alphac_observed': 1711200.0,
'vite_modeled': 297600.0, 'vite_observed': 595200.0,
'crypto_modeled': 595200.0, 'crypto_observed': 1190400.0,
'lycopene_modeled': 133920.0, 'lycopene_observed': 267840.0,
'lutein_modeled': 23436000.0, 'lutein_observed': 46872000.0,
'betat_modeled': 186000.0, 'betat_observed': 372000.0,
'gammat_modeled': 781200.0, 'gammat_observed': 1562400.0,
'deltat_modeled': 706800.0, 'deltat_observed': 1413600.0,
'vitc_modeled': 2529600.0, 'vitc_observed': 5059200.0,
'thiamin_modeled': 148800.0, 'thiamin_observed': 297600.0,
'riboflavin_modeled': 669600.0, 'riboflavin_observed': 1339200.0,
'niacin_modeled': 3050400.0, 'niacin_observed': 6100800.0,
'pantothenic_modeled': 334800.0, 'pantothenic_observed': 669600.0,
'vitb6_modeled': 520800.0, 'vitb6_observed': 1041600.0,
'folate_modeled': 143220000.0, 'folate_observed': 286440000.0,
'vitb12_modeled': 744000.0, 'vitb12_observed': 1488000.0,
'vitk_modeled': 15252000.0, 'vitk_observed': 30504000.0},
{'crop': 'soybean', 'area (ha)': 40.0,
'production_observed': 120.0, 'production_modeled': 70.0,
'protein_modeled': 21021000.0, 'protein_observed': 36036000.0,
'lipid_modeled': 1274000.0, 'lipid_observed': 2184000.0,
'energy_modeled': 63063000.0, 'energy_observed': 108108000.0,
'ca_modeled': 163709000.0, 'ca_observed': 280644000.0,
'fe_modeled': 10000900.0, 'fe_observed': 17144400.0,
'mg_modeled': 178360000.0, 'mg_observed': 305760000.0,
'ph_modeled': 448448000.0, 'ph_observed': 768768000.0,
'k_modeled': 125489000.0, 'k_observed': 215124000.0,
'na_modeled': 1274000.0, 'na_observed': 2184000.0,
'zn_modeled': 3121300.0, 'zn_observed': 5350800.0,
'cu_modeled': 1019200.0, 'cu_observed': 1747200.0,
'fl_modeled': 1911000.0, 'fl_observed': 3276000.0,
'mn_modeled': 3312400.0, 'mn_observed': 5678400.0,
'se_modeled': 191100.0, 'se_observed': 327600.0,
'vita_modeled': 1911000.0, 'vita_observed': 3276000.0,
'betac_modeled': 10192000.0, 'betac_observed': 17472000.0,
'alphac_modeled': 637000.0, 'alphac_observed': 1092000.0,
'vite_modeled': 509600.0, 'vite_observed': 873600.0,
'crypto_modeled': 382200.0, 'crypto_observed': 655200.0,
'lycopene_modeled': 191100.0, 'lycopene_observed': 327600.0,
'lutein_modeled': 38857000.0, 'lutein_observed': 66612000.0,
'betat_modeled': 318500.0, 'betat_observed': 546000.0,
'gammat_modeled': 1465100.0, 'gammat_observed': 2511600.0,
'deltat_modeled': 764400.0, 'deltat_observed': 1310400.0,
'vitc_modeled': 1911000.0, 'vitc_observed': 3276000.0,
'thiamin_modeled': 267540.0, 'thiamin_observed': 458640.0,
'riboflavin_modeled': 522340.0, 'riboflavin_observed': 895440.0,
'niacin_modeled': 7771400.0, 'niacin_observed': 13322400.0,
'pantothenic_modeled': 586040.0, 'pantothenic_observed': 1004640.0,
'vitb6_modeled': 3439800.0, 'vitb6_observed': 5896800.0,
'folate_modeled': 194285000.0, 'folate_observed': 333060000.0,
'vitb12_modeled': 1911000.0, 'vitb12_observed': 3276000.0,
'vitk_modeled': 26754000.0, 'vitk_observed': 45864000.0}])
nutrient_df = create_nutrient_df()
nutrient_df = sample_nutrient_df()
pixel_area_ha = 10
workspace_dir = self.workspace_dir
@ -728,7 +664,7 @@ class CropProductionTests(unittest.TestCase):
# Read only the first 2 crops' data (skipping total area)
actual_result_table = pandas.read_csv(target_table_path, nrows=2,
header=0)
expected_result_table = _create_expected_results()
expected_result_table = tabulate_regr_results_table()
# Compare expected vs actual
pandas.testing.assert_frame_equal(actual_result_table,
@ -738,82 +674,9 @@ class CropProductionTests(unittest.TestCase):
"""Test `aggregate_regression_results_to_polygons`"""
from natcap.invest.crop_production_regression import \
aggregate_regression_results_to_polygons
def _create_expected_agg_table():
"""Create expected output results"""
# Define the new values manually
return pandas.DataFrame([
{"FID": 0, "corn_modeled": 10, "corn_observed": 40,
"soybean_modeled": 20, "soybean_observed": 50,
"protein_modeled": 9912000, "protein_observed": 30639000,
"lipid_modeled": 1108000, "lipid_observed": 3886000,
"energy_modeled": 62286000, "energy_observed": 222117000,
"ca_modeled": 49285000, "ca_observed": 126979000,
"fe_modeled": 4317500, "fe_observed": 12983900,
"mg_modeled": 77000000, "mg_observed": 231560000,
"ph_modeled": 193600000, "ph_observed": 582208000,
"k_modeled": 196465000, "k_observed": 732079000,
"na_modeled": 550000, "na_observed": 1654000,
"zn_modeled": 1347500, "zn_observed": 4052300,
"cu_modeled": 467900, "cu_observed": 1434800,
"fl_modeled": 1290000, "fl_observed": 4341000,
"mn_modeled": 1216100, "mn_observed": 3444800,
"se_modeled": 63900, "se_observed": 173700,
"vita_modeled": 825000, "vita_observed": 2481000,
"betac_modeled": 4400000, "betac_observed": 13232000,
"alphac_modeled": 395900, "alphac_observed": 1310600,
"vite_modeled": 220000, "vite_observed": 661600,
"crypto_modeled": 258000, "crypto_observed": 868200,
"lycopene_modeled": 88080, "lycopene_observed": 270420,
"lutein_modeled": 16961000, "lutein_observed": 51191000,
"betat_modeled": 137500, "betat_observed": 413500,
"gammat_modeled": 613900, "gammat_observed": 1827700,
"deltat_modeled": 395100, "deltat_observed": 1252800,
"vitc_modeled": 1178400, "vitc_observed": 3894600,
"thiamin_modeled": 113640, "thiamin_observed": 339900,
"riboflavin_modeled": 316640, "riboflavin_observed": 1042700,
"niacin_modeled": 2983000, "niacin_observed": 8601400,
"pantothenic_modeled": 251140, "pantothenic_observed": 753400,
"vitb6_modeled": 1113000, "vitb6_observed": 2977800,
"folate_modeled": 91315000, "folate_observed": 281995000,
"vitb12_modeled": 732000, "vitb12_observed": 2109000,
"vitk_modeled": 11457000, "vitk_observed": 34362000},
{"FID": 1, "corn_modeled": 40, "corn_observed": 80,
"soybean_modeled": 70, "soybean_observed": 120,
"protein_modeled": 36645000, "protein_observed": 67284000,
"lipid_modeled": 4250000, "lipid_observed": 8136000,
"energy_modeled": 240135000, "energy_observed": 462252000,
"ca_modeled": 173753000, "ca_observed": 300732000,
"fe_modeled": 15841300, "fe_observed": 28825200,
"mg_modeled": 282520000, "mg_observed": 514080000,
"ph_modeled": 710336000, "ph_observed": 1292544000,
"k_modeled": 767933000, "k_observed": 1500012000,
"na_modeled": 2018000, "na_observed": 3672000,
"zn_modeled": 4944100, "zn_observed": 8996400,
"cu_modeled": 1726000, "cu_observed": 3160800,
"fl_modeled": 4887000, "fl_observed": 9228000,
"mn_modeled": 4391200, "mn_observed": 7836000,
"se_modeled": 228300, "se_observed": 402000,
"vita_modeled": 3027000, "vita_observed": 5508000,
"betac_modeled": 16144000, "betac_observed": 29376000,
"alphac_modeled": 1492600, "alphac_observed": 2803200,
"vite_modeled": 807200, "vite_observed": 1468800,
"crypto_modeled": 977400, "crypto_observed": 1845600,
"lycopene_modeled": 325020, "lycopene_observed": 595440,
"lutein_modeled": 62293000, "lutein_observed": 113484000,
"betat_modeled": 504500, "betat_observed": 918000,
"gammat_modeled": 2246300, "gammat_observed": 4074000,
"deltat_modeled": 1471200, "deltat_observed": 2724000,
"vitc_modeled": 4440600, "vitc_observed": 8335200,
"thiamin_modeled": 416340, "thiamin_observed": 756240,
"riboflavin_modeled": 1191940, "riboflavin_observed": 2234640,
"niacin_modeled": 10821800, "niacin_observed": 19423200,
"pantothenic_modeled": 920840, "pantothenic_observed": 1674240,
"vitb6_modeled": 3960600, "vitb6_observed": 6938400,
"folate_modeled": 337505000, "folate_observed": 619500000,
"vitb12_modeled": 2655000, "vitb12_observed": 4764000,
"vitk_modeled": 42006000, "vitk_observed": 76368000}
], dtype=float)
from crop_production.data_helpers import sample_nutrient_df
from crop_production.data_helpers import \
aggregate_regr_polygons_table
workspace = self.workspace_dir
@ -829,7 +692,7 @@ class CropProductionTests(unittest.TestCase):
landcover_raster_projection = spatial_ref.ExportToWkt()
crop_names = ['corn', 'soybean']
nutrient_df = create_nutrient_df()
nutrient_df = sample_nutrient_df()
output_dir = os.path.join(workspace, "OUTPUT")
os.makedirs(output_dir, exist_ok=True)
file_suffix = 'test'
@ -852,11 +715,91 @@ class CropProductionTests(unittest.TestCase):
actual_aggregate_table = pandas.read_csv(aggregate_table_path,
dtype=float)
expected_aggregate_table = _create_expected_agg_table()
expected_aggregate_table = aggregate_regr_polygons_table()
pandas.testing.assert_frame_equal(
actual_aggregate_table, expected_aggregate_table)
def test_aggregate_to_polygons(self):
"""Test `aggregate_to_polygons`"""
from natcap.invest.crop_production_percentile import \
aggregate_to_polygons
from crop_production.data_helpers import sample_nutrient_df
from crop_production.data_helpers import aggregate_pctl_polygons_table
workspace = self.workspace_dir
base_aggregate_vector_path = os.path.join(workspace,
"agg_vector.shp")
make_aggregate_vector(base_aggregate_vector_path)
target_aggregate_vector_path = os.path.join(workspace,
"agg_vector_prj.shp")
spatial_ref = osr.SpatialReference()
spatial_ref.ImportFromEPSG(26910)
landcover_raster_projection = spatial_ref.ExportToWkt()
crop_names = ['corn', 'soybean']
nutrient_df = sample_nutrient_df()
yield_percentile_headers = ['25', '50', '75']
pixel_area_ha = 1
output_dir = os.path.join(workspace, "OUTPUT")
os.makedirs(output_dir, exist_ok=True)
file_suffix = 'v1'
target_aggregate_table_path = os.path.join(output_dir,
"results.csv")
_create_crop_pctl_rasters(output_dir, crop_names, file_suffix,
yield_percentile_headers)
aggregate_to_polygons(
base_aggregate_vector_path, target_aggregate_vector_path,
landcover_raster_projection, crop_names, nutrient_df,
yield_percentile_headers, pixel_area_ha, output_dir,
file_suffix, target_aggregate_table_path)
actual_aggregate_pctl_table = pandas.read_csv(
target_aggregate_table_path, dtype=float)
expected_aggregate_pctl_table = aggregate_pctl_polygons_table()
pandas.testing.assert_frame_equal(
actual_aggregate_pctl_table, expected_aggregate_pctl_table)
def test_tabulate_percentile_results(self):
"""Test `tabulate_results"""
from natcap.invest.crop_production_percentile import \
tabulate_results
from crop_production.data_helpers import sample_nutrient_df
from crop_production.data_helpers import tabulate_pctl_results_table
nutrient_df = sample_nutrient_df()
output_dir = os.path.join(self.workspace_dir, "output")
os.makedirs(output_dir, exist_ok=True)
yield_percentile_headers = ['yield_25', 'yield_50', 'yield_75']
crop_names = ['corn', 'soybean']
pixel_area_ha = 1
landcover_raster_path = os.path.join(self.workspace_dir,
"landcover.tif")
landcover_nodata = -1
make_simple_raster(landcover_raster_path,
numpy.array([[1, 4], [2, 2]], dtype=numpy.int16))
file_suffix = 'test'
target_table_path = os.path.join(output_dir, "result_table.csv")
_create_crop_pctl_rasters(output_dir, crop_names, file_suffix,
yield_percentile_headers)
tabulate_results(nutrient_df, yield_percentile_headers,
crop_names, pixel_area_ha,
landcover_raster_path, landcover_nodata,
output_dir, file_suffix, target_table_path)
actual_table = pandas.read_csv(target_table_path, nrows=2)
expected_table = tabulate_pctl_results_table()
pandas.testing.assert_frame_equal(actual_table, expected_table,
check_dtype=False)
class CropValidationTests(unittest.TestCase):
"""Tests for the Crop Productions' MODEL_SPEC and validation."""

View File

@ -231,6 +231,60 @@ class DatastackArchiveTests(unittest.TestCase):
self.assertEqual(len(archived_params), 1) # sanity check
def test_datastack_metadata(self):
"""Test correct metadata is created for datastack
Copy files into a temp directory, create metadata for 1 file
"""
from natcap.invest import datastack
import geometamaker
params = {
'raster': os.path.join(DATA_DIR, "landcover.tif"),
'simple_table': os.path.join(DATA_DIR, "carbon_pools_samp.csv"),
}
# Copy params into new dir
temp_dir = os.path.join(self.workspace, "temp_dir")
os.mkdir(temp_dir)
for name, f in params.items():
shutil.copyfile(f, os.path.join(temp_dir, os.path.basename(f)))
params = {k: os.path.join(temp_dir, os.path.basename(f))
for k, f in params.items()}
# generate custom metadata for 1 file before building datastack
resource = geometamaker.describe(params['raster'])
resource.set_description("foo")
resource.set_keywords(["bar"])
resource.write()
archive_path = os.path.join(self.workspace, 'archive.invs.tar.gz')
datastack.build_datastack_archive(
params, 'test_datastack_modules.archive_extraction', archive_path)
# extract the archive
out_directory = os.path.join(self.workspace, 'extracted_archive')
datastack._tarfile_safe_extract(archive_path, out_directory)
# validate metadata in directory to ensure 2 yamls exist
files, messages = geometamaker.validate_dir(out_directory,
recursive=True)
self.assertEqual(len(files), 2)
self.assertFalse(any(messages))
# test that custom description and keyword are not overwritten and new
# keywords are added
raster_path = os.path.join(out_directory, "data",
"raster_raster", "landcover.tif")
resource = geometamaker.describe(raster_path)
self.assertEqual(resource.get_description(), "foo")
self.assertCountEqual(resource.get_keywords(),
["archive_extraction_model", "InVEST", "bar"])
def test_nonspatial_files(self):
"""Datastack: test nonspatial files."""
from natcap.invest import datastack

View File

@ -1,4 +1,5 @@
MODEL_SPEC = {
'model_id': 'archive_extraction_model',
'args': {
'blank': {'type': 'freestyle_string'},
'a': {'type': 'integer'},
@ -6,7 +7,7 @@ MODEL_SPEC = {
'c': {'type': 'freestyle_string'},
'foo': {'type': 'file'},
'bar': {'type': 'file'},
'data_dir': {'type': 'directory'},
'data_dir': {'type': 'directory', 'contents': {}},
'raster': {'type': 'raster'},
'vector': {'type': 'vector'},
'simple_table': {'type': 'csv'},

View File

@ -1,4 +1,5 @@
MODEL_SPEC = {
'model_id': 'duplicate_filepaths_model',
'args': {
'foo': {'type': 'file'},
'bar': {'type': 'file'},

View File

@ -1,6 +1,7 @@
MODEL_SPEC = {
'model_id': 'nonspatial_model',
'args': {
'some_file': {'type': 'file'},
'data_dir': {'type': 'directory'},
'data_dir': {'type': 'directory', 'contents': {}},
}
}

View File

@ -1,4 +1,5 @@
MODEL_SPEC = {
'model_id': 'raster_model',
'args': {
'raster': {'type': 'raster'},
}

View File

@ -1,9 +1,10 @@
MODEL_SPEC = {
'model_id': 'simple_model',
'args': {
'a': {'type': 'integer'},
'b': {'type': 'freestyle_string'},
'c': {'type': 'freestyle_string'},
'd': {'type': 'freestyle_string'},
'workspace_dir': {'type': 'directory'},
'workspace_dir': {'type': 'directory', 'contents': {}},
}
}

View File

@ -1,4 +1,5 @@
MODEL_SPEC = {
'model_id': 'ui_parameters_model',
'args': {
'foo': {'type': 'freestyle_string'},
'bar': {'type': 'freestyle_string'},

View File

@ -1,4 +1,5 @@
MODEL_SPEC = {
'model_id': 'vector_model',
'args': {
'vector': {'type': 'vector'},
}

View File

@ -1,13 +1,18 @@
"""Module for Regression Testing the InVEST Forest Carbon Edge model."""
import unittest
import tempfile
import shutil
import os
import pickle
import shutil
import tempfile
import unittest
from osgeo import gdal
import numpy
import pandas
import pygeoprocessing
import shapely
from osgeo import gdal
from osgeo import ogr
from osgeo import osr
from shapely.geometry import Polygon
gdal.UseExceptions()
REGRESSION_DATA = os.path.join(
@ -15,13 +20,68 @@ REGRESSION_DATA = os.path.join(
'forest_carbon_edge_effect')
def make_simple_vector(path_to_shp):
"""
Generate a shapefile with one rectangular polygon.
Args:
path_to_shp (str): path to target shapefile
Returns:
None
"""
# (xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax), (xmin, ymin)
shapely_geometry_list = [
Polygon([(461251, 4923195), (461501, 4923195),
(461501, 4923445), (461251, 4923445),
(461251, 4923195)])
]
srs = osr.SpatialReference()
srs.ImportFromEPSG(26910)
projection_wkt = srs.ExportToWkt()
vector_format = "ESRI Shapefile"
fields = {"id": ogr.OFTReal}
attribute_list = [{"id": 0}]
pygeoprocessing.shapely_geometry_to_vector(shapely_geometry_list,
path_to_shp, projection_wkt,
vector_format, fields,
attribute_list)
def make_simple_raster(base_raster_path, array, nodata_val=-1):
"""Create a raster on designated path.
Args:
base_raster_path (str): the raster path for the new raster.
array (array): numpy array to convert to tif.
nodata_val (int or None): for defining a raster's nodata value.
Returns:
None
"""
srs = osr.SpatialReference()
srs.ImportFromEPSG(26910) # UTM Zone 10N
projection_wkt = srs.ExportToWkt()
# origin hand-picked for this epsg:
origin = (461261, 4923265)
pixel_size = (1, -1)
pygeoprocessing.numpy_array_to_raster(
array, nodata_val, pixel_size, origin, projection_wkt,
base_raster_path)
class ForestCarbonEdgeTests(unittest.TestCase):
"""Tests for the Forest Carbon Edge Model."""
def setUp(self):
"""Overriding setUp function to create temp workspace directory."""
# this lets us delete the workspace after it's done no matter the
# the rest result
# test result
self.workspace_dir = tempfile.mkdtemp()
def tearDown(self):
@ -243,6 +303,374 @@ class ForestCarbonEdgeTests(unittest.TestCase):
actual_message = str(cm.exception)
self.assertTrue(expected_message in actual_message, actual_message)
def test_combine_carbon_maps(self):
"""Test `combine_carbon_maps`"""
from natcap.invest.forest_carbon_edge_effect import combine_carbon_maps
# note that NODATA_VALUE = -1
carbon_arr1 = numpy.array([[7, 2, -1], [0, -2, -1]])
carbon_arr2 = numpy.array([[-1, 900, -1], [1, 20, 0]])
expected_output = numpy.array([[7, 902, -1], [1, 18, 0]])
actual_output = combine_carbon_maps(carbon_arr1, carbon_arr2)
numpy.testing.assert_allclose(actual_output, expected_output)
def test_aggregate_carbon_map(self):
"""Test `_aggregate_carbon_map`"""
from natcap.invest.forest_carbon_edge_effect import \
_aggregate_carbon_map
aoi_vector_path = os.path.join(self.workspace_dir, "aoi.shp")
carbon_map_path = os.path.join(self.workspace_dir, "carbon.tif")
target_vector_path = os.path.join(self.workspace_dir,
"agg_carbon.shp")
# make data
make_simple_vector(aoi_vector_path)
carbon_array = numpy.array(([1, 2, 3], [4, 5, 6]))
make_simple_raster(carbon_map_path, carbon_array)
_aggregate_carbon_map(aoi_vector_path, carbon_map_path,
target_vector_path)
# Validate fields in the agg carbon results vector
with gdal.OpenEx(target_vector_path,
gdal.OF_VECTOR | gdal.GA_Update) as ws_ds:
ws_layer = ws_ds.GetLayer()
for field_name, expected_value in zip(['c_sum', 'c_ha_mean'],
[0.0021, 0.000336]):
actual_values = [ws_feat.GetField(field_name)
for ws_feat in ws_layer][0]
error_msg = f"Error with {field_name} in agg_carbon.shp"
self.assertEqual(actual_values, expected_value, msg=error_msg)
def test_calculate_lulc_carbon_map(self):
"""Test `_calculate_lulc_carbon_map`"""
from natcap.invest.forest_carbon_edge_effect import \
_calculate_lulc_carbon_map
# Make synthetic data
lulc_raster_path = os.path.join(self.workspace_dir, "lulc.tif")
lulc_array = numpy.array([[1, 2, 3], [3, 2, 1]], dtype=numpy.int16)
make_simple_raster(lulc_raster_path, lulc_array)
biophysical_table_path = os.path.join(self.workspace_dir,
"biophysical_table.csv")
data = {"lucode": [1, 2, 3]}
df = pandas.DataFrame(data).set_index("lucode")
df["is_tropical_forest"] = [0, 1, 0]
df["c_above"] = [100, 500, 200]
df.to_csv(biophysical_table_path)
carbon_pool_type = 'c_above'
ignore_tropical_type = False
compute_forest_edge_effects = True
carbon_map_path = os.path.join(self.workspace_dir, "output_carbon.tif")
_calculate_lulc_carbon_map(
lulc_raster_path, biophysical_table_path, carbon_pool_type,
ignore_tropical_type, compute_forest_edge_effects,
carbon_map_path)
actual_output = pygeoprocessing.raster_to_numpy_array(carbon_map_path)
expected_output = numpy.array([[100, 500, 200], [200, 500, 100]])
numpy.testing.assert_allclose(actual_output, expected_output)
def test_map_distance_from_tropical_forest_edge(self):
"""Test `_map_distance_from_tropical_forest_edge`"""
from natcap.invest.forest_carbon_edge_effect import \
_map_distance_from_tropical_forest_edge
# Make synthetic data
base_lulc_raster_path = os.path.join(self.workspace_dir, "lulc.tif")
lulc_array = numpy.array([
[2, 2, 3, 3, 3, 2, 2],
[2, 1, 1, 1, 1, 1, 2],
[3, 1, 1, 1, 1, 1, 3],
[2, 1, 1, 1, 1, 1, 2],
[2, 2, 3, 3, 3, 2, 2]
], dtype=numpy.int16)
make_simple_raster(base_lulc_raster_path, lulc_array)
biophysical_table_path = os.path.join(self.workspace_dir,
"biophysical_table.csv")
data = {"lucode": [1, 2, 3]}
df = pandas.DataFrame(data).set_index("lucode")
df["is_tropical_forest"] = [1, 0, 0]
df["c_above"] = [100, 500, 200]
df.to_csv(biophysical_table_path)
target_edge_distance_path = os.path.join(self.workspace_dir,
"edge_distance.tif")
target_mask_path = os.path.join(self.workspace_dir,
"non_forest_mask.tif")
_map_distance_from_tropical_forest_edge(
base_lulc_raster_path, biophysical_table_path,
target_edge_distance_path, target_mask_path)
# check forest mask
actual_output = pygeoprocessing.raster_to_numpy_array(target_mask_path)
expected_output = numpy.array([
[1, 1, 1, 1, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 1],
[1, 1, 1, 1, 1, 1, 1]
], dtype=numpy.int16)
numpy.testing.assert_allclose(actual_output, expected_output)
# check edge distance map
actual_output = pygeoprocessing.raster_to_numpy_array(
target_edge_distance_path)
expected_output = numpy.array([
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 1, 0],
[0, 1, 2, 2, 2, 1, 0],
[0, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0]
], dtype=numpy.int16)
numpy.testing.assert_allclose(actual_output, expected_output)
def test_calculate_tropical_forest_edge_carbon_map(self):
"""Test `_calculate_tropical_forest_edge_carbon_map`"""
from natcap.invest.forest_carbon_edge_effect import \
_calculate_tropical_forest_edge_carbon_map
from scipy.spatial import cKDTree
edge_dist_array = numpy.array([
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 1, 0],
[0, 1, 2, 2, 2, 1, 0],
[0, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0]
], dtype=numpy.int16)
edge_distance_path = os.path.join(self.workspace_dir, "edge_dist.tif")
make_simple_raster(edge_distance_path, edge_dist_array)
spatial_index_pickle_path = os.path.join(self.workspace_dir,
"spatial_index.pkl")
def _create_spatial_index_pickle(spatial_index_pickle_path,
raster_path):
"""
Create and save a KD-tree.
This function reads the spatial extent and resolution from a raster
file, then generates a grid of sample points in geographic space
(one point every other row, all columns). It builds a KD-tree from
these points for fast spatial lookup, along with synthetic theta
and method model parameters, and saves the result as a pickle file.
Args:
spatial_index_pickle_path (str): Path to save the pickle file.
raster_path (string): Path to the raster used to extract
spatial metadata.
Returns:
None
"""
# Get origin and pixel_size
raster_info = pygeoprocessing.get_raster_info(raster_path)
gt = raster_info['geotransform']  # origin: (461261, 4923265)
origin_x, origin_y = gt[0], gt[3]
pixel_size_x, pixel_size_y = raster_info['pixel_size']
cols, rows = raster_info['raster_size']
# only create a point every other row and every col (in raster)
row_col_pairs = [(r, c) for r in range(0, rows, 2) for c in range(cols)]
# Get spatial coordinates
points = []
for row, col in row_col_pairs:
x = origin_x + col * pixel_size_x
y = origin_y + row * pixel_size_y
points.append((y, x))
# note: row → y, col → x (so KD-tree works with (lat, lon))
theta_model_parameters = numpy.linspace(
100, 200, len(points)*3).reshape(len(points), 3) # Nx3
method_model_parameter = numpy.ones((len(points),))
# change method for one area
method_model_parameter[0] = 3
kd_tree = cKDTree(points)
# Save the data as a tuple in a pickle file
with open(spatial_index_pickle_path, 'wb') as f:
pickle.dump((kd_tree, theta_model_parameters,
method_model_parameter), f)
_create_spatial_index_pickle(spatial_index_pickle_path,
edge_distance_path)
n_nearest_model_points = 6
biomass_to_carbon_conversion_factor = 10
tropical_forest_edge_carbon_map_path = os.path.join(self.workspace_dir,
"output.tif")
_calculate_tropical_forest_edge_carbon_map(
edge_distance_path, spatial_index_pickle_path,
n_nearest_model_points, biomass_to_carbon_conversion_factor,
tropical_forest_edge_carbon_map_path)
actual_output = pygeoprocessing.raster_to_numpy_array(
tropical_forest_edge_carbon_map_path)
actual_output_sum = actual_output.sum()
# Comparing sums as 1 value in the output arrays is ~8% different
# on windows vs. mac, which seemed too high a relative tolerance to set
numpy.testing.assert_allclose(actual_output_sum, 3659.7915, rtol=0.009)
# Spot checking several values too
self.assertAlmostEqual(actual_output[1, 2], 142.47273)
self.assertAlmostEqual(actual_output[3, 5], 274.47815)
self.assertAlmostEqual(actual_output[0, 1], -1)
def test_invalid_geometries(self):
"""Forest Carbon: Check handling of invalid vector geometries."""
from natcap.invest import forest_carbon_edge_effect
srs = osr.SpatialReference()
srs.ImportFromEPSG(26910)
projection_wkt = srs.ExportToWkt()
lulc_matrix = numpy.ones((5,5), dtype=numpy.int8)
lulc_raster_path = os.path.join(self.workspace_dir, 'lulc.tif')
pygeoprocessing.numpy_array_to_raster(
lulc_matrix, 255, (1, -1), (461262, 4923269), projection_wkt,
lulc_raster_path)
fields_polys = {'geom_id': ogr.OFTInteger}
attrs_polys = [
{'geom_id': 1},
{'geom_id': 2},
{'geom_id': 3}
]
# valid polygon
valid_poly = Polygon([
(461265, 4923269), (461267, 4923269), (461267, 4923267),
(461265, 4923267), (461265, 4923269)])
self.assertTrue(shapely.is_valid(valid_poly))
# invalid bowtie polygon
invalid_bowtie_poly = Polygon([
(461262, 4923267), (461264, 4923267), (461262, 4923265),
(461264, 4923265), (461262, 4923267)])
self.assertFalse(shapely.is_valid(invalid_bowtie_poly))
# invalid bowtie polygon with vertex in the middle
invalid_alt_bowtie_poly = Polygon([
(461262, 4923267), (461264, 4923267), (461263, 4923266),
(461264, 4923265), (461262, 4923265), (461263, 4923266),
(461262, 4923267)])
self.assertFalse(shapely.is_valid(invalid_alt_bowtie_poly))
test_vector_path = os.path.join(self.workspace_dir, 'vector.gpkg')
pygeoprocessing.shapely_geometry_to_vector(
[valid_poly, invalid_bowtie_poly,
invalid_alt_bowtie_poly],
test_vector_path, projection_wkt, 'GPKG',
fields=fields_polys,
attribute_list=attrs_polys)
test_vector = gdal.OpenEx(
test_vector_path, gdal.OF_VECTOR | gdal.GA_ReadOnly)
test_layer = test_vector.GetLayer()
self.assertEqual(test_layer.GetFeatureCount(), 3)
test_layer = None
test_vector = None
target_vector_path = os.path.join(
self.workspace_dir, 'checked_geometries.gpkg')
forest_carbon_edge_effect._clip_global_regression_models_vector(
lulc_raster_path, test_vector_path, target_vector_path)
target_vector = gdal.OpenEx(target_vector_path, gdal.OF_VECTOR)
target_layer = target_vector.GetLayer()
self.assertEqual(
target_layer.GetFeatureCount(), 1)
# valid_polygon geom_id == 1
self.assertEqual(target_layer.GetFeature(0).GetField('geom_id'), 1)
target_layer = None
target_vector = None
def test_vector_clipping_buffer(self):
"""Forest Carbon: Check DISTANCE_UPPER_BOUND buffer."""
from natcap.invest import forest_carbon_edge_effect
srs = osr.SpatialReference()
srs.ImportFromEPSG(26910)
projection_wkt = srs.ExportToWkt()
lulc_matrix = numpy.ones((5,5), dtype=numpy.int8)
lulc_raster_path = os.path.join(self.workspace_dir, 'lulc.tif')
pygeoprocessing.numpy_array_to_raster(
lulc_matrix, 255, (1000, -1000), (461261, 4923270), projection_wkt,
lulc_raster_path)
# get LULC bounding box; will be used to construct polygons
raster_info = pygeoprocessing.get_raster_info(lulc_raster_path)
bbox_minx, bbox_miny, bbox_maxx, bbox_maxy = raster_info['bounding_box']
fields_polys = {'geom_id': ogr.OFTInteger}
attrs_polys = [
{'geom_id': 1},
{'geom_id': 2},
{'geom_id': 3},
{'geom_id': 4}
]
# polygon that intersects with the lulc
in_bbox_poly = shapely.geometry.box(
bbox_maxx - 150, bbox_maxy - 150,
bbox_maxx - 50, bbox_maxy - 50)
self.assertTrue(shapely.is_valid(in_bbox_poly))
# polygon outside of lulc bb, but within buffer distance
in_buffer_poly = shapely.geometry.box(
bbox_maxx + 50, bbox_maxy + 50,
bbox_maxx + 150, bbox_maxy + 150)
self.assertTrue(shapely.is_valid(in_buffer_poly))
buffer = forest_carbon_edge_effect.DISTANCE_UPPER_BOUND
# polygon on the edge of the buffer
buffer_edge_poly = shapely.geometry.box(
bbox_maxx + buffer + 50, bbox_maxy + buffer + 50,
bbox_maxx + buffer - 50, bbox_maxy + buffer - 50)
self.assertTrue(shapely.is_valid(buffer_edge_poly))
# polygon outside of buffer distance
out_of_buffer_poly = shapely.geometry.box(
bbox_maxx + buffer + 50, bbox_maxy + buffer + 50,
bbox_maxx + buffer + 150, bbox_maxy + buffer + 150)
self.assertTrue(shapely.is_valid(out_of_buffer_poly))
test_vector_path = os.path.join(self.workspace_dir, 'vector.gpkg')
pygeoprocessing.shapely_geometry_to_vector(
[in_bbox_poly, in_buffer_poly,
buffer_edge_poly, out_of_buffer_poly],
test_vector_path, projection_wkt, 'GPKG',
fields=fields_polys,
attribute_list=attrs_polys)
target_vector_path = os.path.join(
self.workspace_dir, 'checked_geometries.gpkg')
forest_carbon_edge_effect._clip_global_regression_models_vector(
lulc_raster_path, test_vector_path, target_vector_path)
vector = gdal.OpenEx(
target_vector_path, gdal.OF_VECTOR | gdal.GA_ReadOnly)
layer = vector.GetLayer()
self.assertEqual(layer.GetFeatureCount(), 3)
feat_ids = [feat.GetField('geom_id') for feat in layer]
self.assertEqual(feat_ids, [1, 2, 3])
@staticmethod
def _test_same_files(base_list_path, directory_path):
"""Assert files in `base_list_path` are in `directory_path`.

View File

@ -6,8 +6,9 @@ import tempfile
import unittest
import numpy
import pandas
import pygeoprocessing
from osgeo import gdal
from osgeo import gdal, gdal_array
from osgeo import ogr
from osgeo import osr
from shapely.geometry import Polygon
@ -16,18 +17,26 @@ gdal.UseExceptions()
def make_raster_from_array(
base_array, base_raster_path, nodata_val=-1, gdal_type=gdal.GDT_Int32):
base_array, base_raster_path, nodata_val=-1, gdal_type=None,
pixel_size=1):
"""Make a raster from an array on a designated path.
Args:
base_array (numpy.ndarray): the 2D array for making the raster.
base_raster_path (str): the path for the raster to be created.
nodata_val (int or float): nodata value for the raster.
gdal_type (gdal datatype; int): gdal datatype for the raster.
base_raster_path (str): the path for the raster to be created.
pixel_size (int): pixel size for the raster.
Returns:
None.
"""
if gdal_type is None:
numpy_dtype = base_array.dtype
else:
numpy_dtype = gdal_array.GDALTypeCodeToNumericTypeCode(gdal_type)
base_array = base_array.astype(numpy_dtype)
# Projection to use for generated sample data: UTM Zone 10N
srs = osr.SpatialReference()
srs.ImportFromEPSG(26910)
@ -35,7 +44,8 @@ def make_raster_from_array(
origin = (1180000, 690000)
pygeoprocessing.numpy_array_to_raster(
base_array, nodata_val, (1, -1), origin, project_wkt, base_raster_path)
base_array, nodata_val, (pixel_size, -pixel_size),
origin, project_wkt, base_raster_path)
def make_access_shp(access_shp_path):
@ -2332,3 +2342,63 @@ class HabitatQualityTests(unittest.TestCase):
with self.assertRaises(TypeError):
habitat_quality.execute(args)
def test_compute_rarity_operation(self):
"""Test `_compute_rarity_operation`"""
from natcap.invest.habitat_quality import _compute_rarity_operation
base_lulc_path_band = (os.path.join(self.workspace_dir, "base_lulc.tif"), 1)
lulc_path_band = (os.path.join(self.workspace_dir, "fut_lulc.tif"), 1)
new_cover_path = (os.path.join(self.workspace_dir, "new_cover.tif"), 1)
rarity_raster_path = os.path.join(self.workspace_dir, "out_rarity.tif")
rarity_csv_path = os.path.join(self.workspace_dir, "out_rarity.csv")
lulc_array = numpy.array([[1, 3, 1, 4], [1, 3, 3, 3]])
fut_array = numpy.array([[2, 1, 3, 3], [1, 3, 4, 4]])
make_raster_from_array(lulc_array, base_lulc_path_band[0])
make_raster_from_array(fut_array, lulc_path_band[0], pixel_size=2)
_compute_rarity_operation(
base_lulc_path_band, lulc_path_band, new_cover_path,
rarity_raster_path, rarity_csv_path)
actual_rarity = pandas.read_csv(rarity_csv_path)['rarity_value']
expected_rarity = [0.27272727, 0, 0.25, 0.1111111111]
numpy.testing.assert_allclose(actual_rarity, expected_rarity)
def test_raster_values_in_bounds(self):
"""Test `_raster_values_in_bounds`"""
from natcap.invest.habitat_quality import _raster_values_in_bounds
raster_path_band = (os.path.join(self.workspace_dir, "ras.tif"), 1)
lower_bound = 1
upper_bound = 50
# create array with a value (0) outside of range(lower, upper)
array = numpy.array([[1, 0], [50, 4]])
make_raster_from_array(array, raster_path_band[0])
values_valid = _raster_values_in_bounds(
raster_path_band, lower_bound, upper_bound)
self.assertFalse(values_valid)
def test_decay_distance(self):
"""Test `_decay_distance`"""
from natcap.invest.habitat_quality import _decay_distance
dist_raster_path = os.path.join(self.workspace_dir, "dist.tif")
max_dist = 20
decay_type = 'linear'
target_path = os.path.join(self.workspace_dir, "output.tif")
dist_array = numpy.array([[0, 21, 1], [2, 50, 600]], dtype=float)
make_raster_from_array(dist_array, dist_raster_path)
_decay_distance(dist_raster_path, max_dist, decay_type, target_path)
actual_output = pygeoprocessing.raster_to_numpy_array(target_path)
expected_output = numpy.array([[1.0, 0.0, 0.95],
[0.9, 0.0, 0.0]])
numpy.testing.assert_allclose(actual_output, expected_output)

View File

@ -5,6 +5,7 @@ import tempfile
import unittest
import numpy
import pandas
import pygeoprocessing
import shapely.geometry
from osgeo import gdal
@ -46,6 +47,27 @@ EXPECTED_FILE_LIST = [
'intermediate_outputs/reprojected_farm_vector.shx']
def make_simple_raster(base_raster_path, array):
"""Create a raster on designated path with arbitrary values.
Args:
base_raster_path (str): the raster path for making the new raster.
Returns:
None.
"""
# UTM Zone 10N
srs = osr.SpatialReference()
srs.ImportFromEPSG(26910)
projection_wkt = srs.ExportToWkt()
origin = (461251, 4923245)
pixel_size = (30, -30)
no_data = -1
pygeoprocessing.numpy_array_to_raster(
array, no_data, pixel_size, origin, projection_wkt,
base_raster_path)
class PollinationTests(unittest.TestCase):
"""Tests for the Pollination model."""
@ -479,6 +501,143 @@ class PollinationTests(unittest.TestCase):
"The following files were expected but not found: " +
'\n'.join(missing_files))
def test_parse_scenario_variables(self):
"""Test `_parse_scenario_variables`"""
from natcap.invest.pollination import _parse_scenario_variables
def _create_guild_table_csv(output_path):
data = {"species": ["Bee_A", "Bee_B", "Butterfly_C"]}
df = pandas.DataFrame(data).set_index("species")
df["nesting_suitability_soil_index"] = [0.8, 0.5, 0.2]
df["nesting_suitability_wood_index"] = [0.3, 0.2, 0.4]
df["foraging_activity_spring_index"] = [1.0, 0.8, 0.6]
df["foraging_activity_summer_index"] = [.9, 0.6, 0.3]
df["alpha"] = [150, 200, 120]
df["relative_abundance"] = [0.4, 0.3, 0.2]
df.to_csv(output_path)
def _create_biophysical_table(output_path):
data = {"lucode": [100, 200, 300]}
df = pandas.DataFrame(data).set_index("lucode")
df["nesting_soil_availability_index"] = [0.7, 0.4, 0.6]
df["nesting_wood_availability_index"] = [0.1, 0.2, 0.6]
df["floral_resources_spring_index"] = [0.3, 0.3, 0.8]
df["floral_resources_summer_index"] = [0.9, 0.6, 0.3]
df.to_csv(output_path)
def _create_farm_vector(output_path):
from shapely import Polygon
shapely_geometry_list = [
Polygon([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]),
Polygon([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)])
]
srs = osr.SpatialReference()
srs.ImportFromEPSG(26910)
projection_wkt = srs.ExportToWkt()
fields = {"crop_type": ogr.OFTString, "half_sat": ogr.OFTReal,
"season": ogr.OFTString, "fr_spring": ogr.OFTReal,
"fr_summer": ogr.OFTReal,
"n_soil": ogr.OFTReal, "n_wood": ogr.OFTReal,
"p_dep": ogr.OFTReal,
"p_managed": ogr.OFTReal}
attribute_list = [{
"crop_type": "barley", "half_sat": 0.5, "season": "spring",
"fr_spring": 0.8, "fr_summer": 0.3, "n_soil": 0.7,
"n_wood": 0.5, "p_dep": 0.9, "p_managed": 0.4},
{"crop_type": "almonds", "half_sat": 0.7, "season": "summer",
"fr_spring": 0.4, "fr_summer": 0.9, "n_soil": 0.5,
"n_wood": 0.6, "p_dep": 0.8, "p_managed": 0.6}]
pygeoprocessing.shapely_geometry_to_vector(
shapely_geometry_list, output_path, projection_wkt,
"ESRI Shapefile", fields, attribute_list,
ogr_geom_type=ogr.wkbPolygon)
def _generate_output_dict():
return {'season_list': ['spring', 'summer'],
'substrate_list': ['soil', 'wood'],
'species_list': ['bee_a', 'bee_b', 'butterfly_c'],
'alpha_value': {'bee_a': 150., 'bee_b': 200., 'butterfly_c': 120.},
'species_abundance': {'bee_a': 0.44444444444444453,
'bee_b': 0.33333333333333337,
'butterfly_c': 0.22222222222222227},
'species_foraging_activity': {
('bee_a', 'spring'): 0.5263157894736842,
('bee_a', 'summer'): 0.4736842105263158,
('bee_b', 'spring'): 0.5714285714285715,
('bee_b', 'summer'): 0.4285714285714286,
('butterfly_c', 'spring'): 0.6666666666666667,
('butterfly_c', 'summer'): 0.33333333333333337},
'landcover_substrate_index': {
'soil': {100: 0.7, 200: 0.4, 300: 0.6},
'wood': {100: 0.1, 200: 0.2, 300: 0.6}},
'landcover_floral_resources': {
'spring': {100: 0.3, 200: 0.3, 300: 0.8},
'summer': {100: 0.9, 200: 0.6, 300: 0.3}},
'species_substrate_index': {
'bee_a': {'soil': 0.8, 'wood': 0.3},
'bee_b': {'soil': 0.5, 'wood': 0.2},
'butterfly_c': {'soil': 0.2, 'wood': 0.4}},
'foraging_activity_index': {
('bee_a', 'spring'): 1.0, ('bee_a', 'summer'): 0.9,
('bee_b', 'spring'): 0.8, ('bee_b', 'summer'): 0.6,
('butterfly_c', 'spring'): 0.6,
('butterfly_c', 'summer'): 0.3}}
args = {'guild_table_path':
os.path.join(self.workspace_dir, "guild_table.csv"),
'landcover_biophysical_table_path':
os.path.join(self.workspace_dir, "biophysical_table.csv"),
'farm_vector_path': os.path.join(self.workspace_dir, "farm.shp")}
_create_guild_table_csv(args['guild_table_path'])
_create_biophysical_table(args['landcover_biophysical_table_path'])
_create_farm_vector(args['farm_vector_path'])
actual_dict = _parse_scenario_variables(args)
expected_dict = _generate_output_dict()
self.assertDictEqual(actual_dict, expected_dict)
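The fractional values in the expected dictionary follow from normalizing the guild table columns; for example (a sketch of the arithmetic only, not the model's internal code):

```python
# Species abundance: each relative_abundance divided by the column total.
relative_abundance = {'bee_a': 0.4, 'bee_b': 0.3, 'butterfly_c': 0.2}
total = sum(relative_abundance.values())
species_abundance = {
    species: abundance / total
    for species, abundance in relative_abundance.items()}
# species_abundance['bee_a'] is approximately 0.4444

# Foraging activity: normalized per species across its seasons.
bee_a_activity = {'spring': 1.0, 'summer': 0.9}
season_total = sum(bee_a_activity.values())
foraging = {
    season: value / season_total
    for season, value in bee_a_activity.items()}
# foraging['spring'] is approximately 0.5263
```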
def test_calculate_habitat_nesting_index(self):
"""Test `_calculate_habitat_nesting_index`"""
from natcap.invest.pollination import _calculate_habitat_nesting_index
substrate_path_map = {
"wood": os.path.join(self.workspace_dir, "wood.tif"),
"soil": os.path.join(self.workspace_dir, "soil.tif")
}
make_simple_raster(substrate_path_map["wood"],
numpy.array([[5, 6], [3, 2]]))
make_simple_raster(substrate_path_map["soil"],
numpy.array([[2, 12], [1, 0]]))
species_substrate_index_map = {"wood": 0.8, "soil": 0.5}
target_habitat_nesting_index_path = os.path.join(
self.workspace_dir, "habitat_nesting.tif")
_calculate_habitat_nesting_index(
substrate_path_map, species_substrate_index_map,
target_habitat_nesting_index_path)
# read habitat nesting tif
habitat_raster = gdal.Open(target_habitat_nesting_index_path)
band = habitat_raster.GetRasterBand(1)
actual_array = band.ReadAsArray()
expected_array = numpy.array([[4, 6], [2.4, 1.6]])
numpy.testing.assert_allclose(actual_array, expected_array)
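The expected array here is consistent with taking, per pixel, the maximum over substrates of (substrate raster value × species substrate index); a sketch under that assumption:

```python
import numpy

wood = numpy.array([[5, 6], [3, 2]], dtype=float)
soil = numpy.array([[2, 12], [1, 0]], dtype=float)
species_substrate_index = {'wood': 0.8, 'soil': 0.5}

# Per-pixel maximum of the weighted substrate rasters.
habitat_nesting = numpy.maximum(
    wood * species_substrate_index['wood'],
    soil * species_substrate_index['soil'])
# values: [[4.0, 6.0], [2.4, 1.6]], matching the expected array
```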
class PollinationValidationTests(unittest.TestCase):
"""Tests for the Pollination Model MODEL_SPEC and validation."""


@ -65,7 +65,7 @@ def make_simple_shp(base_shp_path, origin):
data_source = None
def make_raster_from_array(base_array, base_raster_path):
def make_raster_from_array(base_array, base_raster_path, nodata=-1):
"""Make a raster from an array on a designated path.
Args:
@ -82,7 +82,7 @@ def make_raster_from_array(base_array, base_raster_path):
# Each pixel is 1x1 m
pygeoprocessing.numpy_array_to_raster(
base_array, -1, (1, -1), (1180000, 690000), project_wkt,
base_array, nodata, (1, -1), (1180000, 690000), project_wkt,
base_raster_path)
@ -1267,6 +1267,52 @@ class SeasonalWaterYieldRegressionTests(unittest.TestCase):
pygeoprocessing.raster_to_numpy_array(output_path),
expected_quickflow_array, atol=1e-5)
def test_monthly_quickflow_nodata_propagation(self):
"""Test correct nodata propagation in `_calculate_monthly_quick_flow`
This test checks that:
1. If n=nodata: output is nodata
2. If precip=nodata: output is nodata
3. If precip<0 and not nodata & n is valid: output is 0
4. If precip and n are valid & stream=1 & SI=nodata: output is valid
5. If precip and n are valid & stream=nodata: output is nodata
"""
from natcap.invest.seasonal_water_yield import seasonal_water_yield
# Test a variety of valid/nodata combinations across the input layers
precip_array = numpy.array([[-1, -6, 32767, 32767],
[5, 6, 30, 8]], dtype=numpy.float32)
n_events_array = numpy.array([[-1, 1, -8, 8],
[-1, 6, 2, 9]], dtype=numpy.float32)
si_array = numpy.array([[1, -1, 3, 4],
[5, -1, 7, 8]], dtype=numpy.float32)
stream_mask = numpy.array([[1, -1, 1, 1],
[1, 1, 0, -1]], dtype=numpy.float32)
expected_quickflow_array = numpy.array([[-1, 0, -1, -1],
[-1, 6, 0.382035, -1]])
precip_path = os.path.join(self.workspace_dir, 'precip.tif')
si_path = os.path.join(self.workspace_dir, 'si.tif')
n_events_path = os.path.join(self.workspace_dir, 'n_events.tif')
stream_path = os.path.join(self.workspace_dir, 'stream.tif')
output_path = os.path.join(self.workspace_dir, 'quickflow.tif')
# write all the test arrays to raster files
for array, path in [(n_events_array, n_events_path),
(si_array, si_path),
(stream_mask, stream_path)]:
# define a nodata value for intermediate outputs
make_raster_from_array(array, path)
# Ensure positive nodata value for precip is handled correctly
make_raster_from_array(precip_array, precip_path, nodata=32767)
seasonal_water_yield._calculate_monthly_quick_flow(
precip_path, n_events_path, stream_path, si_path, output_path)
numpy.testing.assert_allclose(
pygeoprocessing.raster_to_numpy_array(output_path),
expected_quickflow_array, atol=1e-6)
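The expected nodata pattern can be reproduced with plain numpy masks mirroring the five rules in the docstring (a sketch of the masking logic only; the quickflow equation for the remaining valid pixels is elided):

```python
import numpy

nodata = -1           # nodata of the intermediate rasters in this test
precip_nodata = 32767
precip = numpy.array([[-1, -6, 32767, 32767], [5, 6, 30, 8]], dtype=float)
n_events = numpy.array([[-1, 1, -8, 8], [-1, 6, 2, 9]], dtype=float)
stream = numpy.array([[1, -1, 1, 1], [1, 1, 0, -1]], dtype=float)

precip_valid = precip != precip_nodata
n_valid = n_events != nodata
stream_valid = stream != nodata
# Rule 3: valid negative precipitation clamps the output to 0.
zero_mask = precip_valid & n_valid & (precip < 0)
# Rules 1, 2, 5: any other missing input propagates nodata.
nodata_mask = ~precip_valid | ~n_valid | (~stream_valid & ~zero_mask)
# nodata_mask matches the -1 positions in expected_quickflow_array.
```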
def test_local_recharge_undefined_nodata(self):
"""Test `calculate_local_recharge` with undefined nodata values"""
from natcap.invest.seasonal_water_yield import \


@ -338,7 +338,7 @@ class TestMetadataFromSpec(unittest.TestCase):
"""Override tearDown function to remove temporary directory."""
shutil.rmtree(self.workspace_dir)
def test_write_metadata(self):
def test_write_metadata_for_outputs(self):
"""Test writing metadata for an invest output workspace."""
# An example invest output spec
@ -389,7 +389,7 @@ class TestMetadataFromSpec(unittest.TestCase):
args_dict = {'workspace_dir': self.workspace_dir}
spec_utils.generate_metadata(model_module, args_dict)
spec_utils.generate_metadata_for_outputs(model_module, args_dict)
files, messages = geometamaker.validate_dir(
self.workspace_dir, recursive=True)
self.assertEqual(len(files), 2)


@ -107,6 +107,36 @@ class SpatialOverlapTest(unittest.TestCase):
expected = validation.MESSAGES['NO_PROJECTION'].format(filepath=filepath_2)
self.assertEqual(error_msg, expected)
def test_check_overlap_unable_to_transform(self):
"""Validation: check overlap when layer cannot transform to EPSG:4326."""
from natcap.invest import validation
driver = gdal.GetDriverByName('GTiff')
filepath_1 = os.path.join(self.workspace_dir, 'raster_1.tif')
filepath_2 = os.path.join(self.workspace_dir, 'raster_2.tif')
raster_1 = driver.Create(filepath_1, 3, 3, 1, gdal.GDT_Int32)
wgs84_srs = osr.SpatialReference()
wgs84_srs.ImportFromEPSG(4326)
raster_1.SetProjection(wgs84_srs.ExportToWkt())
raster_1.SetGeoTransform([1, 1, 0, 1, 0, 1])
raster_1 = None
# set up a raster with an outside-the-globe extent
raster_2 = driver.Create(filepath_2, 3, 3, 1, gdal.GDT_Int32)
eckert_srs = osr.SpatialReference()
proj4str = '+proj=eck4 +lon_0=0 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs +type=crs'
eckert_srs.ImportFromProj4(proj4str)
raster_2.SetProjection(eckert_srs.ExportToWkt())
raster_2.SetGeoTransform(
(-10000000.0, 19531.25, 0.0, 15000000.0, 0.0, -4882.8125))
raster_2 = None
# The improper raster should be skipped by validation, thus no errors
error_msg = validation.check_spatial_overlap(
[filepath_1, filepath_2], different_projections_ok=True)
self.assertEqual(None, error_msg)
@unittest.skip("skipping due to unresolved projection comparison question")
def test_different_projections_not_ok(self):
"""Validation: different projections not allowed by default.


@ -57,6 +57,41 @@ The order of groups should be as follows:
<!-- Unreleased Changes
------------------ -->
<section id="section-1">
<h1>3.15.1 (2025-05-06)</h1>
<section id="general">
<h2>General</h2>
<ul>
<li>Handle exceptions that can arise during <code>validation.check_spatial_overlap</code> when a layer's bounding box cannot be transformed to EPSG:4326. (<a href="https://github.com/natcap/invest/issues/1849">#1849</a>).</li>
</ul>
</section>
<section id="workbench">
<h2>Workbench</h2>
<ul>
<li>Fixed a bug that prevented users from selecting a folder as the location to extract a datastack archive. (<a href="https://github.com/natcap/invest/issues/1879">#1879</a>).</li>
<li>When a parameter from a previous model run is changed, the model status indicator (e.g., the "Model Complete" notice) is cleared to help prevent confusion about which parameters went into the most recent model run (<a href="https://github.com/natcap/invest/issues/1655">#1655</a>).</li>
</ul>
</section>
<section id="crop-production">
<h2>Crop Production</h2>
<ul>
<li>Both the Percentile and Regression models now issue a warning if any LULC code in the LULC raster is not present in the landcover to crop table, or if any LULC code in the landcover to crop table is not present in the LULC raster (<a href="https://github.com/natcap/invest/issues/925">#925</a>).</li>
<li>The Regression model now correctly validates crop names against the existence of a corresponding regression yield table (<a href="https://github.com/natcap/invest/issues/1723">#1723</a>).</li>
</ul>
</section>
<section id="seasonal-water-yield">
<h2>Seasonal Water Yield</h2>
<ul>
<li>Fixed a bug in the quickflow calculation where <cite>NoData</cite> values in precipitation were incorrectly converted to 0s (<a href="https://github.com/natcap/invest/issues/1592">#1592</a>).</li>
</ul>
</section>
<section id="wind-energy">
<h2>Wind Energy</h2>
<ul>
<li>Fixed a bug where the model would error if no AOI was provided when run from the Workbench, or from a datastack file in which the value for 'aoi_vector_path' was an empty string. (<a href="https://github.com/natcap/invest/issues/1900">#1900</a>)</li>
</ul>
</section>
</section>
<section id="section-2">
<h1>3.15.0 (2025-04-03)</h1>
<section id="highlights">
<h2>Highlights</h2>
@ -75,7 +110,7 @@ The order of groups should be as follows:
<li>InVEST model outputs now include metadata. Open the '.yml' files in a text editor to read and add to the metadata.</li>
</ul>
</section>
<section id="general">
<section id="general-1">
<h2>General</h2>
<ul>
<li>Fixed an issue where a user's PROJ_DATA environment variable could trigger a RuntimeError about a missing proj.db file. <a href="https://github.com/natcap/invest/issues/1742">https://github.com/natcap/invest/issues/1742</a></li>
@ -87,7 +122,7 @@ The order of groups should be as follows:
<li>Removed the warning about <code>gdal.UseExceptions()</code>. Python API users should still call <code>gdal.UseExceptions()</code>, but no longer need to do so before importing <code>natcap.invest</code>. <a href="https://github.com/natcap/invest/issues/1702">https://github.com/natcap/invest/issues/1702</a></li>
</ul>
</section>
<section id="workbench">
<section id="workbench-1">
<h2>Workbench</h2>
<ul>
<li>Auto-scrolling of log output is halted on user-initiated scrolling, enabling easier inspection of log output while a model is running (<a href="https://github.com/natcap/invest/issues/1533">InVEST #1533</a>).</li>
@ -116,7 +151,7 @@ The order of groups should be as follows:
<li>The <code>code</code> column in the model's biophysical table input, as well as the <code>code</code> column in the preprocessor's LULC lookup table input and <code>carbon_pool_transient_template</code> output, have been renamed <code>lucode</code>, for consistency with other InVEST models (<a href="https://github.com/natcap/invest/issues/1249">InVEST #1249</a>).</li>
</ul>
</section>
<section id="crop-production">
<section id="crop-production-1">
<h2>Crop Production</h2>
<ul>
<li>Raster outputs that previously contained per-pixel values (e.g., t/pixel) now contain per-hectare values (e.g., t/ha). This change affects both the Percentile and Regression models (<a href="https://github.com/natcap/invest/issues/1270">InVEST #1270</a>).</li>
@ -164,7 +199,7 @@ The order of groups should be as follows:
<li>D8 routing is now supported in addition to MFD (<a href="https://github.com/natcap/invest/issues/1440">#1440</a>).</li>
</ul>
</section>
<section id="seasonal-water-yield">
<section id="seasonal-water-yield-1">
<h2>Seasonal Water Yield</h2>
<ul>
<li>D8 routing is now supported in addition to MFD (<a href="https://github.com/natcap/invest/issues/1440">#1440</a>).</li>
@ -192,14 +227,14 @@ The order of groups should be as follows:
<li>Regression coefficients are still listed in a summary text file, and are now also included in a tabular output: "regression_coefficients.csv".</li>
</ul>
</section>
<section id="wind-energy">
<section id="wind-energy-1">
<h2>Wind Energy</h2>
<ul>
<li>Fixed a bug that could cause the Workbench to crash when running the Wind Energy model with <code>Taskgraph</code> logging set to <code>DEBUG</code> (<a href="https://github.com/natcap/invest/issues/1497">InVEST #1497</a>).</li>
</ul>
</section>
</section>
<section id="section-2">
<section id="section-3">
<h1>3.14.3 (2024-12-19)</h1>
<ul>
<li>
@ -309,7 +344,7 @@ The order of groups should be as follows:
</li>
</ul>
</section>
<section id="section-3">
<section id="section-4">
<h1>3.14.2 (2024-05-29)</h1>
<ul>
<li>
@ -430,7 +465,7 @@ The order of groups should be as follows:
</li>
</ul>
</section>
<section id="section-4">
<section id="section-5">
<h1>3.14.1 (2023-12-18)</h1>
<ul>
<li>
@ -561,7 +596,7 @@ The order of groups should be as follows:
</li>
</ul>
</section>
<section id="section-5">
<section id="section-6">
<h1>3.14.0 (2023-09-08)</h1>
<ul>
<li>
@ -766,7 +801,7 @@ The order of groups should be as follows:
</li>
</ul>
</section>
<section id="section-6">
<section id="section-7">
<h1>3.13.0 (2023-03-17)</h1>
<ul>
<li>
@ -899,7 +934,7 @@ The order of groups should be as follows:
</li>
</ul>
</section>
<section id="section-7">
<section id="section-8">
<h1>3.12.1 (2022-12-16)</h1>
<ul>
<li>
@ -972,7 +1007,7 @@ The order of groups should be as follows:
</li>
</ul>
</section>
<section id="section-8">
<section id="section-9">
<h1>3.12.0 (2022-08-31)</h1>
<ul>
<li>
@ -1096,7 +1131,7 @@ The order of groups should be as follows:
</li>
</ul>
</section>
<section id="section-9">
<section id="section-10">
<h1>3.11.0 (2022-05-24)</h1>
<ul>
<li>
@ -1194,7 +1229,7 @@ The order of groups should be as follows:
</li>
</ul>
</section>
<section id="section-10">
<section id="section-11">
<h1>3.10.2 (2022-02-08)</h1>
<ul>
<li>
@ -1296,7 +1331,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-11">
<section id="section-12">
<h1>3.10.1 (2022-01-06)</h1>
<ul>
<li>
@ -1311,7 +1346,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-12">
<section id="section-13">
<h1>3.10.0 (2022-01-04)</h1>
<ul>
<li>
@ -1495,7 +1530,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-13">
<section id="section-14">
<h1>3.9.2 (2021-10-29)</h1>
<ul>
<li>
@ -1556,7 +1591,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-14">
<section id="section-15">
<h1>3.9.1 (2021-09-22)</h1>
<ul>
<li>
@ -1705,7 +1740,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-15">
<section id="section-16">
<h1>3.9.0 (2020-12-11)</h1>
<ul>
<li>
@ -1893,7 +1928,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-16">
<section id="section-17">
<h1>3.8.9 (2020-09-15)</h1>
<ul>
<li>
@ -1918,7 +1953,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-17">
<section id="section-18">
<h1>3.8.8 (2020-09-04)</h1>
<ul>
<li>
@ -2018,7 +2053,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-18">
<section id="section-19">
<h1>3.8.7 (2020-07-17)</h1>
<ul>
<li>
@ -2057,7 +2092,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-19">
<section id="section-20">
<h1>3.8.6 (2020-07-03)</h1>
<ul>
<li>
@ -2072,7 +2107,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-20">
<section id="section-21">
<h1>3.8.5 (2020-06-26)</h1>
<ul>
<li>
@ -2123,7 +2158,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-21">
<section id="section-22">
<h1>3.8.4 (2020-06-05)</h1>
<ul>
<li>
@ -2158,7 +2193,7 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-22">
<section id="section-23">
<h1>3.8.3 (2020-05-29)</h1>
<ul>
<li>
@ -2173,13 +2208,13 @@ setuptools_scm</code> from the project root.</li>
</li>
</ul>
</section>
<section id="section-23">
<section id="section-24">
<h1>3.8.2 (2020-05-15)</h1>
<ul>
<li>InVEST's CSV encoding requirements are now described in the validation error message displayed when a CSV cannot be opened.</li>
</ul>
</section>
<section id="section-24">
<section id="section-25">
<h1>3.8.1 (2020-05-08)</h1>
<ul>
<li>Fixed a compilation issue on Mac OS X Catalina.</li>
@ -2203,7 +2238,7 @@ setuptools_scm</code> from the project root.</li>
<li>Update api-docs conf file to mock sdr.sdr_core and to use updated unittest mock</li>
</ul>
</section>
<section id="section-25">
<section id="section-26">
<h1>3.8.0 (2020-02-07)</h1>
<ul>
<li>Created a sub-directory for the sample data in the installation directory.</li>
@ -2259,7 +2294,7 @@ setuptools_scm</code> from the project root.</li>
<li>Added a new InVEST model: Urban Cooling Model.</li>
</ul>
</section>
<section id="section-26">
<section id="section-27">
<h1>3.7.0 (2019-05-09)</h1>
<ul>
<li>Refactoring Coastal Vulnerability (CV) model. CV now uses TaskGraph and Pygeoprocessing &gt;=1.6.1. The model is now largely vector-based instead of raster-based. Fewer input datasets are required for the same functionality. Runtime in synchronous mode is similar to previous versions, but runtime can be reduced with multiprocessing. CV also supports avoided recomputation for successive runs in the same workspace, even if a different file suffix is used. Output vector files are in CSV and geopackage formats.</li>
@ -2280,7 +2315,7 @@ setuptools_scm</code> from the project root.</li>
<li>Adding encoding='utf-8-sig' to pandas.read_csv() so that utils.build_lookup_from_csv() properly reads CSV files encoded with UTF-8 BOM (byte-order mark).</li>
</ul>
</section>
<section id="section-27">
<section id="section-28">
<h1>3.6.0 (2019-01-30)</h1>
<ul>
<li>Correcting an issue with the InVEST Carbon Storage and Sequestration model where filepaths containing non-ASCII characters would cause the model's report generation to crash. The output report is now a UTF-8 document.</li>
@ -2310,7 +2345,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixing a case where a zero discount rate and rate of change in the carbon model would cause a divide by zero error.</li>
</ul>
</section>
<section id="section-28">
<section id="section-29">
<h1>3.5.0 (2018-08-14)</h1>
<ul>
<li>Bumped pygeoprocessing requirement to <code>pygeoprocessing&gt;=1.2.3</code>.</li>
@ -2332,7 +2367,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixed an issue in the model data of the crop production model where some crops were using incorrect climate bin rasters. Since the error was in the data and not the code, users will need to download the most recent version of InVEST's crop model data during the installation step to get the fix.</li>
</ul>
</section>
<section id="section-29">
<section id="section-30">
<h1>3.4.4 (2018-03-26)</h1>
<ul>
<li>InVEST now requires GDAL 2.0.0 and has been tested up to GDAL 2.2.3. Any API users of InVEST will need to use GDAL version &gt;= 2.0. When upgrading GDAL we noticed slight differences in our test suite, including numerical raster differences, geometry transforms, and occasionally a single-pixel difference when using <cite>gdal.RasterizeLayer</cite>. Each of these differences in the InVEST test suite is within a reasonable numerical tolerance and we have updated our regression test suite appropriately. Users comparing runs between previous versions of InVEST may also notice reasonable numerical differences between runs.</li>
@ -2342,7 +2377,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixed a broken link to local and online user documentation from the Seasonal Water Yield model from the model's user interface.</li>
</ul>
</section>
<section id="section-30">
<section id="section-31">
<h1>3.4.3 (2018-03-26)</h1>
<ul>
<li>Fixed a critical issue in the carbon model UI that would incorrectly state the user needed a "REDD Priority Raster" when none was required.</li>
@ -2350,7 +2385,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixed an issue in wind energy UI that was incorrectly validating most of the inputs.</li>
</ul>
</section>
<section id="section-31">
<section id="section-32">
<h1>3.4.2 (2017-12-15)</h1>
<ul>
<li>Fixed a cross-platform issue with the UI where logfiles could not be dropped onto UI windows.</li>
@ -2360,7 +2395,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixing an issue with the <code>FileSystemRunDialog</code> where pressing the 'X' button in the corner of the window would close the window, but not reset its state. The window's state is now reset whenever the window is closed (and the window cannot be closed when the model is running)</li>
</ul>
</section>
<section id="section-32">
<section id="section-33">
<h1>3.4.1 (2017-12-11)</h1>
<ul>
<li>In the Coastal Blue Carbon model, the <code>interest_rate</code> parameter has been renamed to <code>inflation_rate</code>.</li>
@ -2368,7 +2403,7 @@ setuptools_scm</code> from the project root.</li>
<li>Added better error checking to the SDR model for missing <cite>ws_id</cite> and invalid <cite>ws_id</cite> values such as <cite>None</cite> or some non-integer value. Also added tests for the <cite>SDR</cite> validation module.</li>
</ul>
</section>
<section id="section-33">
<section id="section-34">
<h1>3.4.0 (2017-12-03)</h1>
<ul>
<li>Fixed an issue with most InVEST models where the suffix was not being reflected in the output filenames. This was due to a bug in the InVEST UI, where the suffix args key was assumed to be <code>'suffix'</code>. Instances of <code>InVESTModel</code> now accept a keyword argument to define the suffix args key.</li>
@ -2400,7 +2435,7 @@ setuptools_scm</code> from the project root.</li>
<li>Updated the erodibility sample raster that ships with InVEST for the SDR model. The old version was in US units, in this version we convert to SI units as the model requires, and clipped the raster to the extents of the other stack to save disk space.</li>
</ul>
</section>
<section id="section-34">
<section id="section-35">
<h1>3.3.3 (2017-02-06)</h1>
<ul>
<li>Fixed an issue in the UI where the carbon model wouldn't accept negative numbers in the price increase of carbon.</li>
@ -2420,7 +2455,7 @@ setuptools_scm</code> from the project root.</li>
<li>Updated branding and usability of the InVEST installer for Windows, and the Mac Disk Image (.dmg).</li>
</ul>
</section>
<section id="section-35">
<section id="section-36">
<h1>3.3.2 (2016-10-17)</h1>
<ul>
<li>Partial test coverage for HRA model.</li>
@ -2459,7 +2494,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixed an issue in SDR that reported runtime overflow errors during normal processing even though the model completed without other errors.</li>
</ul>
</section>
<section id="section-36">
<section id="section-37">
<h1>3.3.1 (2016-06-13)</h1>
<ul>
<li>Refactored API documentation for readability, organization by relevant topics, and to allow docs to build on <a href="http://invest.readthedocs.io">invest.readthedocs.io</a>,</li>
@ -2485,7 +2520,7 @@ setuptools_scm</code> from the project root.</li>
<li>Updated Crop Production model to add a simplified UI, faster runtime, and more testing.</li>
</ul>
</section>
<section id="section-37">
<section id="section-38">
<h1>3.3.0 (2016-03-14)</h1>
<ul>
<li>Refactored Wind Energy model to use a CSV input for wind data instead of a Binary file.</li>
@ -2539,7 +2574,7 @@ setuptools_scm</code> from the project root.</li>
<li>Documentation to the GLOBIO code base including the large docstring for 'execute'.</li>
</ul>
</section>
<section id="section-38">
<section id="section-39">
<h1>3.2.0 (2015-05-31)</h1>
<p>InVEST 3.2.0 is a major release with the addition of several experimental models and tools as well as an upgrade to the PyGeoprocessing core:</p>
<ul>
@ -2552,11 +2587,11 @@ setuptools_scm</code> from the project root.</li>
<li>Miscellaneous performance patches and bug fixes.</li>
</ul>
</section>
<section id="section-39">
<section id="section-40">
<h1>3.1.3 (2015-04-23)</h1>
<p>InVEST 3.1.3 is a hotfix release patching a memory blocking issue resolved in PyGeoprocessing version 0.2.1. Users might have experienced slow runtimes on SDR or other routed models.</p>
</section>
<section id="section-40">
<section id="section-41">
<h1>3.1.2 (2015-04-15)</h1>
<p>InVEST 3.1.2 is a minor release patching issues mostly related to the freshwater routing models and signed GDAL Byte datasets.</p>
<ul>
@ -2572,7 +2607,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixed an issue in the Blue Carbon model that prevented the report from being generated in the outputs file.</li>
</ul>
</section>
<section id="section-41">
<section id="section-42">
<h1>3.1.1 (2015-03-13)</h1>
<p>InVEST 3.1.1 is a major performance and memory bug patch to the InVEST toolsuite. We recommend all users upgrade to this version.</p>
<ul>
@ -2595,7 +2630,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixed a bug in Habitat Quality where the future output "quality_out_f.tif" was not reflecting the habitat value given in the sensitivity table for the specified landcover types.</li>
</ul>
</section>
<section id="section-42">
<section id="section-43">
<h1>3.1.0 (2014-11-19)</h1>
<p>InVEST 3.1.0 (<a href="http://www.naturalcapitalproject.org/download.html">http://www.naturalcapitalproject.org/download.html</a>) is a major software and science milestone that includes an overhauled sedimentation model, long-awaited fixes to exponential decay routines in habitat quality and pollination, and a massive update to the underlying hydrological routing routines. The updated sediment model, called SDR (sediment delivery ratio), is part of our continuing effort to improve the science and capabilities of the InVEST tool suite. The SDR model inputs are backwards compatible with the InVEST 3.0.1 sediment model, adding two global calibration parameters and removing the need for the retention efficiency parameter in the biophysical table; most users can run SDR directly with the data they have prepared for previous versions. The biophysical differences between the models are described in a section within the SDR user's guide and represent a superior representation of the hydrological connectivity of the watershed, biophysical parameters that are independent of cell size, and a more accurate representation of sediment retention on the landscape. Other InVEST improvements include standard bug fixes, performance improvements, and usability features, which are described in part below:</p>
<ul>
@ -2636,7 +2671,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixed an issue where the data type of the nodata value in a raster might be different than the values in the raster. This was common in the case of 64 bit floating point values as nodata when the underlying raster was 32 bit. Now nodata values are cast to the underlying types which improves the reliability of many of the InVEST models.</li>
</ul>
</section>
<section id="section-43">
<section id="section-44">
<h1>3.0.1 (2014-05-19)</h1>
<ul>
<li>Blue Carbon model released.</li>
@ -2658,7 +2693,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixed an issue in Marine Water Quality where the UV points were supposed to be optional, but instead raised an exception when not passed in.</li>
</ul>
</section>
<section id="section-44">
<section id="section-45">
<h1>3.0.0 (2014-03-23)</h1>
<p>The 3.0.0 release of InVEST represents a shift away from the ArcGIS to the InVEST standalone computational platform. The only exception to this shift is the marine coastal protection tier 1 model which is still supported in an ArcGIS toolbox and has no InVEST 3.0 standalone at the moment. Specific changes are detailed below</p>
<ul>
@ -2677,7 +2712,7 @@ setuptools_scm</code> from the project root.</li>
<li>Numerous other minor bug fixes and performance enhancements.</li>
</ul>
</section>
<section id="section-45">
<section id="section-46">
<h1>2.6.0 (2013-12-16)</h1>
<p>The 2.6.0 release of InVEST removes most of the old InVEST models from the Arc toolbox in favor of the new InVEST standalone models. While we have been developing standalone equivalents for the InVEST Arc models since version 2.3.0, this is the first release in which we removed support for the deprecated ArcGIS versions after an internal review of correctness, performance, and stability on the standalones. Additionally, this is one of the last milestones before the InVEST 3.0.0 release later next year which will transition InVEST models away from strict ArcGIS dependence to a standalone form.</p>
<p>Specifically, support for the following models has been moved from the ArcGIS toolbox to their Windows-based standalones: (1) hydropower/water yield, (2) finfish aquaculture, (3) coastal protection tier 0/coastal vulnerability, (4) wave energy, (5) carbon, (6) habitat quality/biodiversity, (7) pollination, (8) timber, and (9) overlap analysis. Additionally, documentation references to ArcGIS for those models have been replaced with instructions for launching standalone InVEST models from the Windows start menu.</p>
@ -2700,7 +2735,7 @@ setuptools_scm</code> from the project root.</li>
<li>Changing support from <a href="mailto:richsharp@stanford.edu">richsharp@stanford.edu</a> to the user support forums at <a href="http://ncp-yamato.stanford.edu/natcapforums">http://ncp-yamato.stanford.edu/natcapforums</a>.</li>
</ul>
</section>
<section id="section-46">
<section id="section-47">
<h1>2.5.6 (2013-09-06)</h1>
<p>The 2.5.6 release of InVEST addresses minor bugs, performance tweaks, and new functionality in the InVEST standalone models, including:</p>
<ul>
@ -2726,7 +2761,7 @@ setuptools_scm</code> from the project root.</li>
<li>Added an infrastructure feature so that temporary files are created in the user's workspace rather than at the system level folder. This lets users work in a secondary workspace on a USB attached hard drive and use the space of that drive, rather than the primary operating system drive.</li>
</ul>
</section>
<section id="section-47">
<section id="section-48">
<h1>2.5.5 (2013-08-06)</h1>
<p>The 2.5.5 release of InVEST addresses minor bugs, performance tweaks, and new functionality in the InVEST standalone models, including:</p>
<blockquote>
@ -2820,7 +2855,7 @@ setuptools_scm</code> from the project root.</li>
</ul>
</blockquote>
</section>
<section id="section-48">
<section id="section-49">
<h1>2.5.4 (2013-06-07)</h1>
<p>This is a minor release of InVEST that addresses numerous minor bugs and performance tweaks in the InVEST 3.0 models. Including:</p>
<blockquote>
@ -2870,15 +2905,15 @@ setuptools_scm</code> from the project root.</li>
</ul>
</blockquote>
</section>
<section id="section-49">
<section id="section-50">
<h1>2.5.3 (2013-03-21)</h1>
<p>This is a minor release of InVEST that fixes an issue with the HRA model that caused ArcGIS versions of the model to fail when calculating habitat maps for risk hotspots. This upgrade is strongly recommended for users of InVEST 2.5.1 or 2.5.2.</p>
</section>
<section id="section-50">
<section id="section-51">
<h1>2.5.2 (2013-03-17)</h1>
<p>This is a minor release of InVEST that fixes an issue with the HRA sample data that caused ArcGIS versions of the model to fail on the training data. There is no need to upgrade for most users unless you are doing InVEST training.</p>
</section>
<section id="section-51">
<section id="section-52">
<h1>2.5.1 (2013-03-12)</h1>
<p>This is a minor release of InVEST that does not add any new models, but does add additional functionality, stability, and increased performance to one of the InVEST 3.0 standalones:</p>
<blockquote>
@ -2897,7 +2932,7 @@ setuptools_scm</code> from the project root.</li>
</blockquote>
<p>Additionally, this minor release fixes a bug in the InVEST user interface where collapsible containers became entirely non-interactive.</p>
</section>
<section id="section-52">
<section id="section-53">
<h1>2.5.0 (2013-03-08)</h1>
<p>This is a major release of InVEST that includes new standalone versions (ArcGIS is not required) of our models, as well as additional functionality, stability, and increased performance for many of the existing models. This release is timed to support our group's annual training event at Stanford University. We expect to release InVEST 2.5.1 a couple of weeks after to address any software issues that arise during the training. See the release notes below for details of the release, and please contact <a href="mailto:richsharp@stanford.edu">richsharp@stanford.edu</a> for any issues relating to software:</p>
<blockquote>
@ -2984,7 +3019,7 @@ setuptools_scm</code> from the project root.</li>
</blockquote>
</blockquote>
</section>
<section id="section-53">
<section id="section-54">
<h1>2.4.5 (2013-02-01)</h1>
<p>This is a minor release of InVEST that does not add any new models, but does add additional functionality, stability, and increased performance to many of the InVEST 3.0 standalones:</p>
<blockquote>
@ -3044,7 +3079,7 @@ setuptools_scm</code> from the project root.</li>
</ul>
</blockquote>
</section>
<section id="section-54">
<section id="section-55">
<h1>2.4.4 (2012-10-24)</h1>
<ul>
<li>Fixes memory errors experienced by some users in the Carbon Valuation 3.0 Beta model.</li>
@ -3052,7 +3087,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixes an issue importing packages for some officially-unreleased InVEST models.</li>
</ul>
</section>
<section id="section-55">
<section id="section-56">
<h1>2.4.3 (2012-10-19)</h1>
<ul>
<li>Fixed a minor issue with hydropower output valuation rasters whose statistics were not pre-calculated. This would cause the range in ArcGIS to show the rasters at -3e38 to 3e38.</li>
@ -3063,20 +3098,20 @@ setuptools_scm</code> from the project root.</li>
<li>Added a feature to all InVEST 3.0 models to list disk usage before and after each run and in most cases report a low free space error if relevant.</li>
</ul>
</section>
<section id="section-56">
<section id="section-57">
<h1>2.4.2 (2012-10-15)</h1>
<ul>
<li>Fixed an issue with the ArcMap document where the paths to default data were not saved as relative paths. This caused the default data in the document to not be found by ArcGIS.</li>
<li>Introduced some more memory-efficient processing for Biodiversity 3.0 Beta. This fixes an out-of-memory issue encountered by some users when using very large raster datasets as inputs.</li>
</ul>
</section>
<section id="section-57">
<section id="section-58">
<h1>2.4.1 (2012-10-08)</h1>
<ul>
<li>Fixed a compatibility issue with ArcGIS 9.3 where the ArcMap and ArcToolbox were unable to be opened by Arc 9.3.</li>
</ul>
</section>
<section id="section-58">
<section id="section-59">
<h1>2.4.0 (2012-10-05)</h1>
<p>Changes in InVEST 2.4.0</p>
<p>General:</p>
@ -3156,7 +3191,7 @@ setuptools_scm</code> from the project root.</li>
<li>Fixed a visualization bug with wave energy where output rasters did not have the min/max/stdev calculations on them. This made the default visualization in arc be a gray blob.</li>
</ul>
</section>
<section id="section-59">
<section id="section-60">
<h1>2.3.0 (2012-08-02)</h1>
<p>Changes in InVEST 2.3.0</p>
<p>General:</p>
@ -3222,7 +3257,7 @@ setuptools_scm</code> from the project root.</li>
<li>Other minor bug fixes and runtime performance tweaks in the 3.0 framework.</li>
</ul>
</section>
<section id="section-60">
<section id="section-61">
<h1>2.2.2 (2012-03-03)</h1>
<p>Changes in InVEST 2.2.2</p>
<p>General:</p>
@ -3243,14 +3278,14 @@ setuptools_scm</code> from the project root.</li>
<dd>toolbox if the workspace name is too long.</dd>
</dl>
</section>
<section id="section-61">
<section id="section-62">
<h1>2.2.1 (2012-01-26)</h1>
<p>Changes in InVEST 2.2.1</p>
<p>General:</p>
<p>This is a minor release which fixes the following defects:</p>
<ul>
<li>A variety of miscellaneous bugs were fixed that were causing crashes of the Coastal Protection model in Arc 9.3.</li>
<li>Fixed an issue in the Pollination model that was looking for an InVEST1005 directory.</li>
<li>The InVEST "models only" release had an entry for the InVEST 3.0 Beta tools, but was missing the underlying runtime. This has been added to the models-only 2.2.1 release at the cost of a larger installer.</li>
<li>The default InVEST ArcMap document wouldn't open in ArcGIS 9.3. It can now be opened by Arc 9.3 and above.</li>
<li>Minor updates to the Coastal Protection user's guide.</li>
</ul>
</section>
<section id="section-62">
<section id="section-63">
<h1>2.2.0 (2011-12-22)</h1>
<p>In this release we include updates to the habitat risk assessment model, updates to Coastal Vulnerability Tier 0 (previously named Coastal Protection), and a new tier 1 Coastal Vulnerability tool. Additionally, we are releasing a beta version of our 3.0 platform that includes the terrestrial timber and carbon models.</p>
<p>See the "Marine Models" and "InVEST 3.0 Beta" sections below for more details.</p>
@ -3304,7 +3339,7 @@ setuptools_scm</code> from the project root.</li>
</dd>
</dl>
</section>
<section id="section-63">
<section id="section-64">
<h1>2.1.1 (2011-10-17)</h1>
<p>Changes in InVEST 2.1.1</p>
<p>General:</p>
View File
@ -1,6 +1,6 @@
{
"name": "invest-workbench",
"version": "3.15.0",
"version": "3.15.1",
"description": "Models that map and value the goods and services from nature that sustain and fulfill human life",
"main": "build/main/main.js",
"homepage": "./",
View File
@ -318,6 +318,8 @@ class InvestTab extends React.Component {
executeClicked={executeClicked}
switchTabs={this.switchTabs}
investList={investList}
tabID={tabID}
updateJobProperties={this.props.updateJobProperties}
/>
</TabPane>
<TabPane
View File
@ -261,7 +261,7 @@ export default function ArgInput(props) {
name={argkey}
value={value}
onChange={handleChange}
onFocus={handleChange}
onFocus={handleFocus}
disabled={!enabled}
isValid={enabled && isValid}
custom
View File
@ -274,18 +274,28 @@ class SetupTab extends React.Component {
try {
if (filepath.endsWith('gz')) { // .tar.gz, .tgz
const extractLocation = await ipcRenderer.invoke(
ipcMainChannels.SHOW_SAVE_DIALOG,
{ title: t('Choose location to extract archive') }
ipcMainChannels.SHOW_OPEN_DIALOG,
{
title: t('Choose location to extract archive'),
properties: ['openDirectory'],
}
);
if (extractLocation.filePath) {
datastack = await fetchDatastackFromFile({
model_id: modelID,
filepath: filepath,
extractPath: extractLocation.filePath,
});
} else {
return;
}
if (extractLocation.filePaths.length) {
const directoryPath = extractLocation.filePaths[0];
const writable = await ipcRenderer.invoke(
ipcMainChannels.CHECK_FILE_PERMISSIONS, directoryPath);
if (writable) {
datastack = await fetchDatastackFromFile({
model_id: modelID,
filepath: filepath,
extractPath: directoryPath,
});
} else {
alert( // eslint-disable-line no-alert
`${t('Permission denied extracting files to:')}\n${directoryPath}`
);
}
} else { return; } // dialog closed without selection
} else {
datastack = await fetchDatastackFromFile(
{ model_id: modelID, filepath: filepath }
@ -349,6 +359,7 @@ class SetupTab extends React.Component {
* Updating means:
* 1) setting the value
* 2) toggling the enabled/disabled/hidden state of any dependent args
* 3) clearing job status in case args are being updated after the model has been run.
*
* @param {string} key - the invest argument key
* @param {string} value - the invest argument value
@ -361,6 +372,9 @@ class SetupTab extends React.Component {
this.setState({
argsValues: argsValues,
}, () => {
this.props.updateJobProperties(this.props.tabID, {
status: undefined, // Clear job status to hide model status indicator.
});
this.debouncedValidate();
this.debouncedArgsEnabled();
if (uiSpec.dropdown_functions) {
@ -677,4 +691,6 @@ SetupTab.propTypes = {
type: PropTypes.string,
}).isRequired,
t: PropTypes.func.isRequired,
tabID: PropTypes.string.isRequired,
updateJobProperties: PropTypes.func.isRequired,
};
View File
@ -37,7 +37,6 @@ function renderInvestTab(job = DEFAULT_JOB) {
<InvestTab
job={job}
tabID={tabID}
investSettings={{ nWorkers: '-1', loggingLevel: 'INFO', taskgraphLoggingLevel: 'ERROR' }}
saveJob={() => {}}
updateJobProperties={() => {}}
investList={{ foo: { modelTitle: 'Foo Model' } }}
@ -117,6 +116,9 @@ describe('Run status Alert renders with status from a recent run', () => {
describe('Open Workspace button', () => {
const spec = {
pyname: 'natcap.invest.foo',
model_name: 'Foo Model',
userguide: 'foo.html',
args: {},
ui_spec: { order: [] },
};
@ -483,6 +485,45 @@ describe('Sidebar Buttons', () => {
expect(input2).toHaveValue(mockDatastack.args.port);
});
test('Load parameters from datastack: tgz asks for extract location', async () => {
const mockDatastack = {
module_name: spec.pyname,
args: {
workspace: 'myworkspace',
port: '9999',
},
};
fetchDatastackFromFile.mockResolvedValue(mockDatastack);
const mockDialogData = {
canceled: false,
filePaths: ['foo.tgz'],
};
ipcRenderer.invoke.mockImplementation((channel, options) => {
if (channel === ipcMainChannels.SHOW_OPEN_DIALOG) {
return Promise.resolve(mockDialogData);
}
if (channel === ipcMainChannels.CHECK_FILE_PERMISSIONS) {
return Promise.resolve(true);
}
return Promise.resolve(undefined);
});
const job = new InvestJob({
modelRunName: 'carbon',
modelHumanName: 'Carbon Model',
argsValues: {},
});
const { findByText, findByLabelText } = renderInvestTab(job);
const loadButton = await findByText('Load parameters from file');
await userEvent.click(loadButton);
const input1 = await findByLabelText((content) => content.startsWith(spec.args.workspace.name));
expect(input1).toHaveValue(mockDatastack.args.workspace);
const input2 = await findByLabelText((content) => content.startsWith(spec.args.port.name));
expect(input2).toHaveValue(mockDatastack.args.port);
});
test('Load parameters from file does nothing when canceled', async () => {
// callback data if the OS dialog was canceled
const mockDialogData = {
View File
@ -83,6 +83,8 @@ function renderSetupFromSpec(baseSpec, uiSpec, initValues = undefined) {
sidebarFooterElementId="foo"
executeClicked={false}
switchTabs={() => {}}
tabID={'999'}
updateJobProperties={() => {}}
/>
);
return utils;