
Conversation


@PC-FSU PC-FSU commented Oct 3, 2024

PR Description: Add Continuous Boyce Index Calculation and Test Cases

Summary:

In this pull request, I have added functionality to calculate the continuous Boyce index as described in Hirzel et al. (2006). This method provides a reliable way to evaluate habitat suitability models, specifically for presence-only data. Along with the implementation, I have also added test cases to ensure the correctness and robustness of the new function.

Key Updates:

  1. Boyce Index Calculation:

    • Implemented a function to compute the continuous Boyce index based on intervals of habitat suitability values.
    • This includes handling both sliding window and fixed-bin approaches, as well as ensuring compatibility with NumPy, Pandas, and GeoPandas data structures.
    • Added logic to handle edge cases like empty arrays, NaN values, and invalid inputs (e.g., non-1D arrays).
  2. Test Cases:

    • Created test cases to validate the behavior of the Boyce index calculation under various scenarios (e.g., different bin sizes, presence of NaN values).
    • Ensured that the function calculates the correct predicted-to-expected (P/E) ratio and returns accurate Spearman correlation coefficients.
  3. Notebook Update:

    • Updated the notebook WorkingWithGeospatialData.ipynb to include a detailed example demonstrating how to use the continuous Boyce index function.

Testing:

The test cases ensure that the continuous Boyce index function works as expected. They cover:

  • Different binning strategies.
  • Varying habitat suitability ranges.
  • Handling of NaN values in input data.

This PR enhances the project by providing a robust and well-tested method to evaluate habitat suitability models using presence-only data, with clear examples in the updated notebook.
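For context, the computation the PR describes (intervals of suitability, a predicted-to-expected ratio per interval, and a Spearman correlation) can be sketched roughly as follows. This is a hedged sketch with hypothetical names and a fixed-bin strategy only, not the PR's actual implementation:

```python
import numpy as np
from scipy.stats import spearmanr

def continuous_boyce_index(ypred_presence, ypred_background, nbins=10):
    """Sketch of the continuous Boyce index (Hirzel et al. 2006, Eq. 4).

    For each suitability interval, compute the ratio of the predicted
    frequency (presence-point predictions in the interval) to the expected
    frequency (background-point predictions in the interval), then
    correlate that P/E ratio with interval rank.
    """
    lo, hi = ypred_background.min(), ypred_background.max()
    edges = np.linspace(lo, hi, nbins + 1)
    intervals = np.column_stack((edges[:-1], edges[1:]))
    f_ratios = []
    for low, high in intervals:
        # predicted frequency: share of presence-point predictions in the interval
        p = np.mean((ypred_presence >= low) & (ypred_presence <= high))
        # expected frequency: share of background-point predictions in the interval
        e = np.mean((ypred_background >= low) & (ypred_background <= high))
        f_ratios.append(p / e if e > 0 else np.nan)
    f_ratios = np.asarray(f_ratios)
    mids = intervals.mean(axis=1)
    valid = ~np.isnan(f_ratios)
    # the index is the rank correlation between the P/E ratio and suitability
    corr, _ = spearmanr(f_ratios[valid], mids[valid])
    return f_ratios, corr, intervals
```

A good model assigns high suitability where presences concentrate, so the P/E ratio should increase monotonically with suitability and the correlation should approach 1.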

Owner

@earth-chris earth-chris left a comment


Thank you for submitting this PR, @PC-FSU. Including this metric is a nice addition to the package. I have a few comments and requests.

  • Thank you very much for the detailed docstring and jupyter examples.
  • Thank you very very much for including tests.
  • Please apply proper code formatting. To do this, review the contributing guidelines and set up a dev environment with pre-commit installed. Then run pre-commit run --all to apply formatting.
  • To simplify the code, and to better align with other sklearn metrics, I'd recommend breaking the boyce_index function into smaller components. One to compute the intervals, one for the p/e ratio, one for plotting, and one for returning the correlation coefficient. As a user, I'd expect a single value returned from the boyce_index function.
  • I haven't had much time to evaluate the scientific merit of the code yet, so I'll likely provide another review after addressing the key software comments I've made. But what strikes me at the moment is that:
    • The relationships between nclass, window and res are not super clear to me, and the results appear very sensitive to these parameters.
    • It is very easy to return nan values, despite the many nan checks that are applied. This makes me think that something is not sufficiently robust.

Thanks for your submission, and I'll look forward to your updates.

# implement Boyce index as described in https://www.whoi.edu/cms/files/hirzel_etal_2006_53457.pdf (Eq. 4)


def boycei(interval, obs, fit):
Owner


I might prefer renaming these functions to boyce_index and continuous_boyce_index to differentiate them (boycei/boyce_index are easily confused).

Author


Done as per suggestion.

Args:
interval (tuple or list): Two elements representing the lower and upper bounds of the interval.
obs (numpy.ndarray): Observed suitability values (i.e., predictions at presence points).
fit (numpy.ndarray): Suitability values (e.g., from a raster), i.e., predictions at presence + background points.
Owner


To better align with sklearn API design, prefer renaming the variables and ordering them as boyce_index(yobs, ypred, interval).

Author


Done as per suggestion.

return fi


def boyce_index(fit, obs, nclass=0, window="default", res=100, PEplot=False):
Owner


please rename and reorder fit, obs as yobs, ypred

Author


Done as per suggestion.



# Remove NaNs from fit
fit = fit[~np.isnan(fit)]
Owner


remove duplicate code

Author


Done as per suggestion.

Comment on lines 135 to 136
print(vec_mov)
print(intervals)
Owner


remove debug print statements

Author


removed.

vec_mov = np.linspace(mini, maxi, num=nclass + 1)
intervals = np.column_stack((vec_mov[:-1], vec_mov[1:]))
else:
raise ValueError("Invalid nclass value.")
Owner


A comment or two in this section would be useful to elucidate the different methods for computing the intervals.

Author


I refactored the code: intervals are now calculated in a new function, and it turned out res was not required at all. I also updated the argument names to more logical ones.

corr, _ = spearmanr(f_valid, intervals_mid)


if PEplot:
Owner


I would prefer to keep plotting as a separate function, which makes for cleaner code.

Author


Done



# Remove NaNs
valid = ~np.isnan(f)
Owner


There seems to be a lot of NaN checking. If the NaNs are removed from the initial arrays, what would lead to 'invalid' values? Are we sure this isn't covering up some other issue in the calculations?

Author


Removed the extra NaN checking.


results = {
'F.ratio': f,
'Spearman.cor': round(corr, 3) if not np.isnan(corr) else np.nan,
Owner


Another suspicious NaN check here. Also, it seems unnecessary to round the correlation coefficient.

predicted = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])

# Observed presence suitability scores (e.g., predictions at presence points)
observed = np.array([0.3, 0.7, 0.8, 0.9])
Owner


Based on this example, it's not clear to me why users would want to pass in arrays of different lengths (where predicted and observed are not matching: one is presence + background, the other is just presence).

Author


So I misinterpreted the equation. For the P/E ratio, you need the predicted frequency for each habitat suitability interval, which requires only presence data. The expected frequency is calculated from the random distribution alone, i.e., at the background points, and doesn't require predictions at the presence data. I changed the code and docs to reflect this.
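To make the corrected interpretation concrete, here is a tiny numeric sketch with made-up values: the predicted frequency uses only the presence-point predictions, and the expected frequency uses only the background-point predictions.

```python
import numpy as np

# hypothetical model predictions at presence points and at background points
ypred_presence = np.array([0.6, 0.7, 0.9])
ypred_background = np.array([0.1, 0.2, 0.4, 0.5, 0.6, 0.8])

low, high = 0.5, 1.0  # one habitat suitability interval

# predicted frequency P: fraction of presence predictions in the interval
predicted = np.mean((ypred_presence >= low) & (ypred_presence <= high))    # 1.0
# expected frequency E: fraction of background predictions in the interval
expected = np.mean((ypred_background >= low) & (ypred_background <= high))  # 0.5
pe_ratio = predicted / expected  # 2.0: presences over-represented at high suitability
```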

…e made at presence and background points, not presence and combined presence + background (Thanks for pointing that out, I misinterpreted the paper). Removed redundant NaN checks. Updated test cases, and example notebook.
Author

PC-FSU commented Oct 18, 2024

Any updates?

@PC-FSU PC-FSU requested a review from earth-chris November 18, 2024 01:43
Comment on lines 125 to 126
nbins (int | list, optional): Number of classes or a list of class thresholds. Defaults to 0.
bin_size (float | str, optional): Width of the bin. Defaults to 'default', which sets the width as 1/10th of the fit range.
Owner


The interaction between these parameters isn't super clear to me. The default behavior sets nbins to 10. Setting 0 here doesn't mean that zero bins are estimated; instead, it behaves as if no bins were passed at all, which depends on the 'default' parameter.

I might recommend one of the following:

  • set the defaults as nbins=10 and bin_size=None, support passing None for each, and throw an error if both are passed (since they are mutually exclusive)
  • drop the bin_size argument and just go with nbins.
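The first option could look something like this sketch (a hypothetical helper, shown with both parameters defaulting to None so the fallback to 10 bins is explicit; not the final API):

```python
import numpy as np

def compute_intervals(ypred, nbins=None, bin_size=None):
    """Sketch: mutually exclusive binning parameters, defaulting to 10 bins."""
    if nbins is not None and bin_size is not None:
        raise ValueError("nbins and bin_size are mutually exclusive; pass only one.")
    lo, hi = float(np.min(ypred)), float(np.max(ypred))
    if bin_size is not None:
        nbins = int(np.ceil((hi - lo) / bin_size))
    elif nbins is None:
        nbins = 10  # default behavior when neither parameter is passed
    edges = np.linspace(lo, hi, nbins + 1)
    return np.column_stack((edges[:-1], edges[1:]))
```

Raising on the ambiguous case keeps the parameter interaction from silently depending on a sentinel like 0 or 'default'.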

mini, maxi = range

if isinstance(bin_size, float):
nbins = (maxi - mini) / bin_size
Owner


I get the following warning that appears to be driven by floating point precision. I don't think it's a problem, just flagging it:

ela.evaluate.continuous_boyce_index(y, ypred, bin_size=0.25)
/home/cba/src/elapid/elapid/evaluate.py:59: UserWarning: bin_size has been adjusted to nearest appropriate size using ceil, as range/bin_size : 0.9999999999983606 / 0.25 is not an integer.
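The warning stems from ordinary floating point error: the suitability range comes out as 0.9999999999983606 rather than exactly 1.0, so range/bin_size misses an integer by a hair. One possible fix (a sketch, not the package's code) is to snap to the nearest integer when within tolerance before falling back to ceil:

```python
import numpy as np

# the observed suitability range, off from 1.0 by ordinary float error
span = 0.9999999999983606
bin_size = 0.25

raw = span / bin_size  # ~3.99999999999..., not exactly an integer
# snap to the nearest integer when within tolerance; this avoids both the
# warning and the off-by-one a bare ceil can produce when the error lands
# on the high side (e.g. 4.0000000000004 would ceil to 5)
nbins = int(round(raw)) if np.isclose(raw, round(raw)) else int(np.ceil(raw))
```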

Comment on lines 183 to 189
results = {
"F.ratio": f_scores,
"Spearman.cor": corr,
"HS": intervals,
}

return results
Owner


My personal preference is to avoid returning a dictionary and instead return the three values as a tuple, so users can write:

f_ratio, cor, hs = ela.evaluate.continuous_boyce_index(presence, background)

Owner

@earth-chris earth-chris left a comment


Looks much better, @PC-FSU, thank you for the improvements! One more round and I think it's there. There are a few small updates I've requested in code, but I'll make one more here regarding the notebook.

The example provided uses some dummy array data, which does not provide much insight into what the index does in a real-life context. Would you please use modeled predictions from the notebook to demonstrate what it can tell you beyond what you see from the AUC results?

Comment on lines 111 to 112
yobs: Union[np.ndarray, pd.Series, gpd.GeoSeries],
ypred: Union[np.ndarray, pd.Series, gpd.GeoSeries],
Owner


I think I had misunderstood the original implementation, and it looks like these are both actually predicted values, just predictions at different locations (ypred at presence and background sites).

I might prefer we rename these variables to be a bit more clear (like ypred_observed and ypred_background).

@earth-chris
Owner

@PC-FSU let me know if you plan to finish up this contribution. If not, I'll go ahead and merge it and make the remaining updates. Cheers,

Author

PC-FSU commented Mar 22, 2025

@earth-chris Please give me some time. I will be able to push it after 2nd April.

Author

PC-FSU commented May 10, 2025

I've added the suggestions you mentioned. I also spent some time thinking about how to improve the explanation of the Boyce Index in the notebook, but I couldn't come up with a concise way to do it. Could you clarify what you had in mind? The example in the notebook feels fairly limited.

@earth-chris earth-chris added the science More analysis-focused than software-focused label Sep 17, 2025
@earth-chris earth-chris self-assigned this Sep 17, 2025
@earth-chris
Owner

I've reviewed this PR again and want to document the next steps.

PC-FSU has contributed a large PR with several pieces. There are still a number of changes I want to make to simplify these new features, however. I think this will best be done by merging this PR and then submitting the new changes. These include:

  • update the CBI call to accept yobs, ypred as the inputs instead of having users pre-index the background/presence points
  • update the plotting function to compute the CBI values then make the plot, which currently requires x/y values that are the outputs from the CBI function (the mean intervals and f scores, respectively).
  • remove some redundant and verbose type checking and instead simply use .asarray() to enforce type harmonization
  • remove the use of a string 'default' option and instead set a numerical default
  • only import the cbi and plotting functions to the main API
  • add the evaluate.py module to the docs
  • add clearer usage examples in the simple maxent model notebook (instead of the overloaded geospatial data notebook)
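The type-harmonization bullet can be as small as the following sketch (assuming numeric 1-D inputs; np.asarray handles plain ndarrays and pandas Series alike, so the per-type branches become unnecessary):

```python
import numpy as np

def to_1d_array(values):
    """Hypothetical helper: coerce array-like input to a 1-D float array, dropping NaNs."""
    arr = np.asarray(values, dtype=float)
    if arr.ndim != 1:
        raise ValueError(f"expected 1-D input, got shape {arr.shape}")
    return arr[~np.isnan(arr)]
```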
