CIMA experiments (#45)
* add CIMA pairing
* prepare CIMA experiments
* add notebook CIMA scope
* add notebook w. scope compare
* fix for too large images
* fix evaluation
* fix fig export
* add STD measures
* add JSON results
* update readme & add ref.
* update shell experiments
* drop enlighten
* minor rename
Borda committed Apr 9, 2020
1 parent 99b324f commit f9b62fb
Showing 26 changed files with 2,203 additions and 78 deletions.
1 change: 1 addition & 0 deletions .travis.yml
@@ -76,6 +76,7 @@ before_install:
fi

install:
- pip install "setuptools<46" -U # v46 crashes openslide-python install
- pip install -r requirements.txt
- pip install -r ./tests/requirements.txt
- pip --version ; pip list
1 change: 1 addition & 0 deletions README.md
@@ -320,6 +320,7 @@ The project is using the standard [BSD license](http://opensource.org/licenses/B
For complete references see [bibtex](docs/references.bib).
1. Borovec, J., Munoz-Barrutia, A., & Kybic, J. (2018). **[Benchmarking of image registration methods for differently stained histological slides](https://www.researchgate.net/publication/325019076_Benchmarking_of_image_registration_methods_for_differently_stained_histological_slides)**. In IEEE International Conference on Image Processing (ICIP) (pp. 3368–3372), Athens. [DOI: 10.1109/ICIP.2018.8451040](https://doi.org/10.1109/ICIP.2018.8451040)
2. Borovec, J. (2019). BIRL: **Benchmark on Image Registration methods with Landmark validation**. arXiv preprint [arXiv:1912.13452.](https://arxiv.org/abs/1912.13452)
## Appendix - Useful information
1 change: 1 addition & 0 deletions appveyor.yml
@@ -62,6 +62,7 @@ install:
# the parent CMD process).
- "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"
- python -m pip install --upgrade pip
- pip install "setuptools<46" -U # v46 crashes openslide-python install
- pip install -r requirements.txt
- pip install -r ./tests/requirements.txt
- pip install tox
6 changes: 6 additions & 0 deletions birl/benchmark.py
@@ -455,6 +455,9 @@ def _perform_registration(self, df_row):
row = self.__images_preprocessing(row)
row[self.COL_TIME_PREPROC] = (time.time() - time_start) / 60.
row = self._prepare_img_registration(row)
# if the pre-processing failed, return None
if not row:
return None

# measure execution time
time_start = time.time()
@@ -468,6 +471,9 @@ def _perform_registration(self, df_row):
row = self.__remove_pproc_images(row)

row = self._parse_regist_results(row)
# if the post-processing failed, return None
if not row:
return None
row = self._clear_after_registration(row)

if self.params.get('visual', False):
16 changes: 10 additions & 6 deletions birl/utilities/drawing.py
@@ -324,11 +324,13 @@ def __init__(self, df, steps=5, fig=None, rect=None, fill_alpha=0.05, colors='ni
for i, (idx, row) in enumerate(self.data.iterrows()):
self.__draw_curve(idx, row, fill_alpha, color=colors[i], *args, **kwargs)

self._labels = []
for ax in self.axes:
for theta, label in zip(ax.get_xticks(), ax.get_xticklabels()):
self.__realign_polar_xtick(ax, theta, label)
self._labels.append(label)

self.ax.legend(loc='center left', bbox_to_anchor=(1.2, 0.7))
self._legend = self.ax.legend(loc='center left', bbox_to_anchor=(1.2, 0.7))

@classmethod
def __ax_set_invisible(self, ax):
@@ -490,7 +492,7 @@ def draw_matrix_user_ranking(df_stat, higher_better=False, fig=None, cmap='tab20
ranking = compute_matrix_user_ranking(df_stat, higher_better)

if fig is None:
fig, _ = plt.subplots(figsize=np.array(df_stat.as_matrix().shape[::-1]) * 0.35)
fig, _ = plt.subplots(figsize=np.array(df_stat.values.shape[::-1]) * 0.35)
ax = fig.gca()
arange = np.linspace(-0.5, len(df_stat) - 0.5, len(df_stat) + 1)
norm = plt_colors.BoundaryNorm(arange, len(df_stat))
@@ -513,7 +515,7 @@ def draw_scatter_double_scale(df, colors='nipy_spectral',
figsize=None,
legend_style=None,
plot_style=None,
x_spread=(0.3, 5)):
x_spread=(0.4, 5)):
"""Draw a scatter with double scales on left and right
:param DF df: dataframe
@@ -531,7 +533,7 @@ def draw_scatter_double_scale(df, colors='nipy_spectral',
>>> df = pd.DataFrame(np.random.random((10, 3)), columns=['col1', 'col2', 'col3'])
>>> fig, axs = draw_scatter_double_scale(df, ax_decs={'name': None}, xlabel='X')
>>> axs # doctest: +ELLIPSIS
(<...>, None)
{...}
>>> # just the selected columns
>>> fig, axs = draw_scatter_double_scale(df, ax_decs={'name1': ['col1', 'col2'],
... 'name2': ['col3']})
@@ -602,5 +604,7 @@ def draw_scatter_double_scale(df, colors='nipy_spectral',
# legend - https://matplotlib.org/3.1.1/gallery/text_labels_and_annotations/custom_legends.html
if legend_style is None:
legend_style = dict(loc='upper center', bbox_to_anchor=(1.25, 1.0), ncol=1)
ax1.legend(idx_names, **legend_style)
return fig, (ax1, ax2)
lgd = ax1.legend(idx_names, **legend_style)

extras = {'ax1': ax1, 'ax2': ax2, 'legend': lgd}
return fig, extras
2 changes: 1 addition & 1 deletion birl/utilities/evaluate.py
@@ -219,7 +219,7 @@ def compute_matrix_user_ranking(df_stat, higher_better=False):
[ 0., 2., 1.],
[ 4., 4., 2.]])
"""
ranking = np.zeros(df_stat.as_matrix().shape)
ranking = np.zeros(df_stat.values.shape)
nan = -np.inf if higher_better else np.inf
for i, col in enumerate(df_stat.columns):
vals = [v if not np.isnan(v) else nan for v in df_stat[col]]
2 changes: 1 addition & 1 deletion bm_ANHIR/Dockerfile
@@ -47,7 +47,7 @@ ENV PATH="/home/evaluator/.local/bin:${PATH}"
COPY --chown=evaluator:evaluator ./evaluate_submission.py /opt/evaluation/
COPY --chown=evaluator:evaluator ./dataset_ANHIR/dataset_medium.csv /opt/evaluation/dataset.csv
COPY --chown=evaluator:evaluator ./dataset_ANHIR/computer-performances_cmpgrid-71.json /opt/evaluation/computer-performances.json
COPY --chown=evaluator:evaluator ./dataset_ANHIR/landmarks_user /opt/evaluation/lnds_provided
COPY --chown=evaluator:evaluator dataset_ANHIR/landmarks_user_phase1 /opt/evaluation/lnds_provided
COPY --chown=evaluator:evaluator ./dataset_ANHIR/landmarks_all /opt/evaluation/lnds_reference

# Define execution
29 changes: 20 additions & 9 deletions bm_ANHIR/evaluate_submission.py
@@ -209,18 +209,25 @@ def parse_landmarks(idx_row):
# 'reference landmarks': np.round(lnds_ref, 1).tolist(),
# 'warped landmarks': np.round(lnds_warp, 1).tolist(),
'matched-landmarks': match_lnds,
'Robustness': row.get(ImRegBenchmark.COL_ROBUSTNESS, 0),
'Norm-Time_minutes': row.get(COL_NORM_TIME, None),
'Robustness': np.round(row.get(ImRegBenchmark.COL_ROBUSTNESS, 0), 3),
'Norm-Time_minutes': np.round(row.get(COL_NORM_TIME, None), 5),
'Status': row.get(ImRegBenchmark.COL_STATUS, None),
}

def _round_val(row, col):
dec = 5 if col.startswith('rTRE') else 2
return np.round(row[col], dec)

# copy all columns with Affine statistic
item.update({col.replace(' ', '-'): row[col] for col in row if 'affine' in col.lower()})
item.update({col.replace(' ', '-'): _round_val(row, col)
for col in row if 'affine' in col.lower()})
# copy all columns with rTRE, TRE and Overlap
# item.update({col.replace(' (final)', '').replace(' ', '-'): row[col]
# for col in row if '(final)' in col})
item.update({col.replace(' (elastic)', '_elastic').replace(' ', '-'): row[col]
item.update({col.replace(' (elastic)', '_elastic').replace(' ', '-'): _round_val(row, col)
for col in row if 'TRE' in col})
return idx, item
# later, JSON keys have to be str only
return str(idx), item


def compute_scores(df_experiments, min_landmarks=1.):
@@ -271,6 +278,7 @@ def _compute_scores_general(df_experiments, df_expt_robust):
# parse specific metrics
scores = {
'Average-Robustness': np.mean(df_experiments[ImRegBenchmark.COL_ROBUSTNESS]),
'STD-Robustness': np.std(df_experiments[ImRegBenchmark.COL_ROBUSTNESS]),
'Median-Robustness': np.median(df_experiments[ImRegBenchmark.COL_ROBUSTNESS]),
'Average-Rank-Median-rTRE': np.nan,
'Average-Rank-Max-rTRE': np.nan,
@@ -280,15 +288,18 @@
('Max-rTRE', 'rTRE Max'),
('Average-rTRE', 'rTRE Mean'),
('Norm-Time', COL_NORM_TIME)]:
scores['Average-' + name] = np.nanmean(df_experiments[col])
scores['Average-' + name + '-Robust'] = np.nanmean(df_expt_robust[col])
scores['Median-' + name] = np.median(df_experiments[col])
scores['Median-' + name + '-Robust'] = np.median(df_expt_robust[col])
for df, suffix in [(df_experiments, ''), (df_expt_robust, '-Robust')]:
scores['Average-' + name + suffix] = np.nanmean(df[col])
scores['STD-' + name + suffix] = np.nanstd(df[col])
scores['Median-' + name + suffix] = np.median(df[col])
return scores


def _compute_scores_state_tissue(df_experiments):
scores = {}
if ImRegBenchmark.COL_STATUS not in df_experiments.columns:
logging.warning('experiments (table) is missing "%s" column', ImRegBenchmark.COL_STATUS)
df_experiments[ImRegBenchmark.COL_STATUS] = 'any'
# filter all statuses in the experiments
statuses = df_experiments[ImRegBenchmark.COL_STATUS].unique()
# parse metrics according to TEST and TRAIN case
32 changes: 32 additions & 0 deletions bm_CIMA/README.md
@@ -0,0 +1,32 @@
# Experimentation with the CIMA dataset

This section is strictly limited to image registration experiments on the [CIMA dataset](http://cmp.felk.cvut.cz/~borovji3/?page=dataset).

## Structure

- **Datasets**: the particular dataset settings are described by image/landmark pairings in CSV tables called `dataset_CIMA_<scope>.csv`
- **Script**: the execution script is `run-SOTA-experiments.sh` and it performs all experiments
- **Results**: the experimental results are exported and zipped per dataset scope; the archives are named `results_size-<scope>.zip`


## Usage

**Reproduce statistics**

You need to unzip the results for each dataset scale into a separate folder (e.g. one with the same name as the archive).
Then run the [scope notebook](../notebooks/CIMA_SOTA-results_scope.ipynb) to show results for a particular dataset scope, or the [comparing notebook](../notebooks/CIMA_SOTA-results_comparing.ipynb) to compare statistics across the two scopes.
Note that when using the attached JSON results you do not need to run the cells that parse the raw benchmark results.
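A minimal shell sketch of this workflow, assuming the two scopes are named `10k` and `full` (the target folder names are only illustrative):

```bash
# unzip each scope's results into its own folder (folder names are illustrative)
unzip results_size-10k.zip -d results_size-10k
unzip results_size-full.zip -d results_size-full

# then open the notebook summarising a single scope (or the comparing notebook)
jupyter notebook ../notebooks/CIMA_SOTA-results_scope.ipynb
```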

**Add your own method to the statistics**

You need to run your benchmark on the particular dataset scope; the image pairings are:
- [10k scope](dataset_CIMA_10k.csv)
- [full scope](dataset_CIMA_full.csv)

Then you can parse just the new results with the [evaluation script](../bm_ANHIR/evaluate_submission.py) or execute the parsing cells at the beginning of the [scope notebook](../notebooks/CIMA_SOTA-results_scope.ipynb); a rough command sketch follows.
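As a rough sketch only: the wrapper name and flags below are placeholders rather than the actual CLI, so check each script's `--help` for its real arguments.

```bash
# run your own registration benchmark wrapper on the chosen pairing table
# (script name and flag names are placeholders)
python my_benchmark_wrapper.py --path_table dataset_CIMA_10k.csv --path_out ./my-results

# inspect the evaluation script's expected arguments before parsing the new results
python ../bm_ANHIR/evaluate_submission.py --help
```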


## References

For complete references see [bibtex](../docs/references.bib).
1. Borovec, J. (2019). **BIRL: Benchmark on Image Registration methods with Landmark validation**. arXiv preprint [arXiv:1912.13452.](https://arxiv.org/abs/1912.13452)