Merge pull request #38 from MStarmans91/development
Release version 3.3.0
MStarmans91 authored Jul 28, 2020
2 parents 6728e5c + 9453bcb commit f7b032d
Showing 215 changed files with 10,687 additions and 13,370 deletions.
12 changes: 11 additions & 1 deletion .travis.yml
@@ -4,6 +4,11 @@ dist: bionic
python:
- 3.7

# blocklist
branches:
except:
- optimization-development

# ===== Build for each OS ======
matrix:
include:
@@ -23,8 +28,12 @@ matrix:
- git clone --single-branch --branch develop https://github.com/MStarmans91/WORCTutorial
# 'python' points to Python 2.7 on macOS but points to Python 3.8 on Linux and Windows
# 'python3' is a 'command not found' error on Windows but 'py' works on Windows only.
script:
- python WORCTutorial/WORCTutorialSimple.py
- fastr trace "C:\Users\travis\AppData\Local\Temp\WORC_Example_STWStrategyHN\__sink_data__.json" --sinks features_train_CT_0_predict --samples HN1331
- fastr trace "C:\Users\travis\AppData\Local\Temp\WORC_Example_STWStrategyHN\__sink_data__.json" --sinks classification --samples all
- fastr trace "C:\Users\travis\AppData\Local\Temp\GS\DEBUG_0\tmp\__sink_data__.json" --sinks output --samples id_0__0__0
- name: "Linux"
before_install:
- sudo apt-get -qq update
@@ -41,6 +50,7 @@ matrix:
- python WORCTutorial/WORCTutorialSimple.py
- fastr trace /tmp/WORC_Example_STWStrategyHN/__sink_data__.json --sinks features_train_CT_0_predict --samples HN1331
- fastr trace /tmp/WORC_Example_STWStrategyHN/__sink_data__.json --sinks classification --samples all
- fastr trace /tmp/GS/DEBUG_0/tmp/__sink_data__.json --sinks output --samples id_0__0__0

notifications:
slack:
44 changes: 43 additions & 1 deletion CHANGELOG
@@ -6,6 +6,48 @@ All notable changes to this project will be documented in this file.
The format is based on `Keep a Changelog <http://keepachangelog.com/>`_
and this project adheres to `Semantic Versioning <http://semver.org/>`_

3.3.0 - 2020-07-28
------------------

Added
~~~~~~~
- Graphviz visualization of the network is now nicely grouped.
- Properly integrated ObjectSampler: various resampling options are now available.
- Verbose option for the fit and score tool.
- Validator for PyRadiomics output.
- FAQ section in the documentation.

Changed
~~~~~~~
- Upgraded to new versions of sklearn (0.23.1) and imbalanced-learn (0.7.0).
- Changed some defaults, based on computation time.
- Do not skip the workflow if feature selection selects zero features;
  instead, disable the feature selection.
- Do not skip the workflow if resampling is unsuccessful;
  instead, disable the resampling.
- Default scaling now includes not only Z-score, but also MinMax and Robust.
- Renamed the plot SVM function and all functions using it, as we now
  use all kinds of estimators.
- The L1 penalty does not work with the new standard LR solver, so it was removed.
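The scaling change above — Z-score alongside MinMax and Robust — can be illustrated with scikit-learn's scalers. This is a minimal sketch of the three options named in the changelog; how WORC itself wires them in may differ.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 900.0]])

scalers = {
    'z_score': StandardScaler(),  # zero mean, unit variance per feature
    'minmax': MinMaxScaler(),     # rescale each feature to [0, 1]
    'robust': RobustScaler(),     # center on median, scale by IQR
}

scaled = {name: scaler.fit_transform(X) for name, scaler in scalers.items()}
```

Robust scaling is the interesting addition for radiomics features, since it is far less sensitive to outlier feature values than plain Z-scoring.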

Fixed
~~~~~
- Bug when using both elastix and segmentix.
- Bug when using elastix in a train-test workflow.
- IMPORTANT: Previously, all methods except the machine learning were fit on
  both the training and validation set together in fitandscore. This led
  to overfitting on the validation set. Now, these are properly split.
- Bugfix in Evaluate standalone for the decomposition tool.
- Apply imputation in decomposition if NaNs are detected.
- In the facade ConfigBuilder, an error is raised when incorrect
  overrides are given.
- Bugfix in statistical feature test plotting.
- Bugfix in Evaluate when using ComBat.
- Bugfix in the feature converter of PyRadiomics when using 2D images.
- Catch Graphviz errors.
- Bug in ICC.
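The IMPORTANT fix above boils down to fitting preprocessing steps on the training part only, and merely transforming the validation part. A minimal sketch of the correct pattern follows; it is illustrative only, not the actual fitandscore code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.rand(40, 5)
y = np.array([0, 1] * 20)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Correct: fit the scaler on the training split only...
scaler = StandardScaler().fit(X_train)
clf = LogisticRegression().fit(scaler.transform(X_train), y_train)

# ...and only transform (never fit) the validation split, so no
# information from the validation samples leaks into the preprocessing.
val_score = clf.score(scaler.transform(X_val), y_val)
```

Fitting the scaler on train and validation together would leak validation statistics into the model, which is exactly the overfitting this release fixes.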


3.2.2 - 2020-07-14
------------------

@@ -431,6 +473,6 @@ Fixed
- For multiple modalities, add only optional sources like metadata when present.

1.0.0rc1 - 2017-05-08
------------------
---------------------

First release
4 changes: 2 additions & 2 deletions README.md
@@ -1,11 +1,11 @@
# WORC v3.2.2
# WORC v3.3.0
## Workflow for Optimal Radiomics Classification

## Information

| Linux | Windows | Documentation | PyPi |Citing WORC |
|--------------------------------|-------------------------------|-------------------------------|-------------------------------|---------------------|
| [![][tci-linx]][tci-linx-lnk] | [![][tci-wind]][tci-wind-lnk] | [![][doc]][doc-lnk] | [![][pypi]][pypi-lnk] | [![][DOI]][DOI-lnk] |

[tci-linx]: https://travis-ci.com/MStarmans91/WORC.svg?token=qyvaeq7Cpwu7hJGB98Gp&branch=master&job=1
[tci-linx-lnk]: https://travis-ci.com/MStarmans91/WORC
2 changes: 1 addition & 1 deletion README.rst
@@ -1,4 +1,4 @@
WORC v3.2.2
WORC v3.3.0
===========

Workflow for Optimal Radiomics Classification
59 changes: 34 additions & 25 deletions WORC/IOparser/config_io_classifier.py
@@ -22,14 +22,14 @@


def load_config(config_file_path):
"""Load the config ini, parse settings to WORC.

Args:
    config_file_path (String): path of the .ini config file

Returns:
    settings_dict (dict): dict with the loaded settings
"""
if not os.path.exists(config_file_path):
e = f'File {config_file_path} does not exist!'
@@ -42,7 +42,7 @@ def load_config(config_file_path):
'Labels': dict(), 'HyperOptimization': dict(),
'Classification': dict(), 'SelectFeatGroup': dict(),
'Featsel': dict(), 'FeatureScaling': dict(),
'SampleProcessing': dict(), 'Imputation': dict(),
'Resampling': dict(), 'Imputation': dict(),
'Ensemble': dict(), 'Bootstrap': dict(),
'FeatPreProcess': dict(), 'Evaluation': dict()}

@@ -58,6 +58,13 @@ def load_config(config_file_path):
settings_dict['General']['tempsave'] =\
settings['General'].getboolean('tempsave')

# Feature Scaling
settings_dict['FeatureScaling']['scale_features'] =\
settings['FeatureScaling'].getboolean('scale_features')
settings_dict['FeatureScaling']['scaling_method'] =\
str(settings['FeatureScaling']['scaling_method'])

# Feature selection
settings_dict['Featsel']['Variance'] =\
settings['Featsel'].getfloat('Variance')

@@ -130,6 +137,30 @@ def load_config(config_file_path):
[str(item).strip() for item in
settings['SelectFeatGroup'][key].split(',')]

# Settings for sample processing, i.e. oversampling, undersampling, etc.
settings_dict['Resampling']['Use'] =\
settings['Resampling'].getfloat('Use')

settings_dict['Resampling']['Method'] =\
[str(item).strip() for item in
settings['Resampling']['Method'].split(',')]

settings_dict['Resampling']['sampling_strategy'] =\
[str(item).strip() for item in
settings['Resampling']['sampling_strategy'].split(',')]

settings_dict['Resampling']['n_neighbors'] =\
[int(str(item).strip()) for item in
settings['Resampling']['n_neighbors'].split(',')]

settings_dict['Resampling']['k_neighbors'] =\
[int(str(item).strip()) for item in
settings['Resampling']['k_neighbors'].split(',')]

settings_dict['Resampling']['threshold_cleaning'] =\
[float(str(item).strip()) for item in
settings['Resampling']['threshold_cleaning'].split(',')]

# Classification options
settings_dict['Classification']['fastr'] =\
settings['Classification'].getboolean('fastr')
@@ -257,28 +288,6 @@ def load_config(config_file_path):
settings_dict['HyperOptimization']['ranking_score'] = \
str(settings['HyperOptimization']['ranking_score'])

settings_dict['FeatureScaling']['scale_features'] =\
settings['FeatureScaling'].getboolean('scale_features')
settings_dict['FeatureScaling']['scaling_method'] =\
str(settings['FeatureScaling']['scaling_method'])

# Settings for sample processing, i.e. oversampling, undersampling etc
settings_dict['SampleProcessing']['SMOTE'] =\
[str(item).strip() for item in
settings['SampleProcessing']['SMOTE'].split(',')]

settings_dict['SampleProcessing']['SMOTE_ratio'] =\
[int(str(item).strip()) for item in
settings['SampleProcessing']['SMOTE_ratio'].split(',')]

settings_dict['SampleProcessing']['SMOTE_neighbors'] =\
[int(str(item).strip()) for item in
settings['SampleProcessing']['SMOTE_neighbors'].split(',')]

settings_dict['SampleProcessing']['Oversampling'] =\
[str(item).strip() for item in
settings['SampleProcessing']['Oversampling'].split(',')]

# Settings for ensembling
settings_dict['Ensemble']['Use'] =\
settings['Ensemble'].getint('Use')
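The new `Resampling` parsing in the diff above can be exercised with a small self-contained snippet. The section and key names match the parsing code in `config_io_classifier.py`; the .ini values themselves are made up for illustration:

```python
import configparser

# Hypothetical [Resampling] fragment; the key names follow load_config,
# the method names and values are illustrative only.
ini_text = """
[Resampling]
Use = 0.20
Method = RandomUnderSampling, SMOTE
sampling_strategy = auto, minority
n_neighbors = 3, 12
k_neighbors = 5, 15
threshold_cleaning = 0.25, 0.5
"""

settings = configparser.ConfigParser()
settings.read_string(ini_text)

# Mirror the parsing style used by load_config: comma-separated strings
# become lists of typed values.
use = settings['Resampling'].getfloat('Use')
methods = [item.strip() for item in
           settings['Resampling']['Method'].split(',')]
n_neighbors = [int(item.strip()) for item in
               settings['Resampling']['n_neighbors'].split(',')]
```

Parsing each key into a list lets the hyperparameter search sample from a range of resampling configurations rather than a single fixed one.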