chore: added pre-commit, improved typing
maxmekiska committed May 26, 2024
1 parent 6edf94d commit 9bcac4a
Showing 17 changed files with 235 additions and 202 deletions.
.github/workflows/main.yml (2 changes: 1 addition & 1 deletion)
@@ -19,4 +19,4 @@ jobs:
- name: Install tox and any other packages
run: pip install tox
- name: Run tox
-run: tox -e py
+run: tox -e py
.gitignore (2 changes: 1 addition & 1 deletion)
@@ -22,4 +22,4 @@ share/python-wheels/
*.egg
.tox
env/
-.coverage
+.coverage
.pre-commit-config.yaml (8 changes: 8 additions & 0 deletions)
@@ -0,0 +1,8 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
hooks:
- id: check-yaml
- id: detect-private-key
- id: end-of-file-fixer
- id: trailing-whitespace
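The last two hooks account for most of the whitespace-only line pairs in the diffs below: `trailing-whitespace` strips spaces at the end of lines, and `end-of-file-fixer` makes every file end with exactly one newline. A rough Python equivalent of their combined effect on a file's text (the real hooks are more careful about encodings, markdown line breaks, and edge cases):

```python
# Rough sketch of what the trailing-whitespace and end-of-file-fixer hooks do
# to a file's contents; illustration only, not the hooks' actual code.
def clean(text: str) -> str:
    lines = [line.rstrip() for line in text.splitlines()]  # drop trailing spaces
    return "\n".join(lines).rstrip("\n") + "\n"            # exactly one final newline

print(repr(clean("run: tox -e py ")))  # 'run: tox -e py\n'
print(repr(clean(".coverage\n\n\n")))  # '.coverage\n'
```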
CHANGELOG.md (15 changes: 9 additions & 6 deletions)
@@ -89,12 +89,12 @@ activation function type
- removed encoder-decoder architectures
- improved layer configuration via dictionary input
- split data argument into target and feature numpy arrays

### 2.0.1

-- fix: removed dead pandas imports
+- fix: removed dead pandas imports
- chore: added tensorflow as base requirement

### 2.1.0

- feat!: moved data preparation out of the predictor class; sub_seq, steps_past, steps_future now need to be defined in each model method
@@ -107,9 +107,12 @@ activation function type
- chore!: removed python 3.8 support to accommodate tensorflow and keras dependencies
- chore: increased major to 3.0.0 to align with keras major
- feat: added evaluate_model method to test model performance on test data
-- refactor!: removed validation split from `fit_model`. Control validation and test split via the evaluation_split and validation_split class parameters
+- refactor!: removed validation split from `fit_model`. Control validation and test split via the evaluation_split and validation_split class parameters

### 3.1.0

- feat: added optional `batch_size` parameter to `fit_model`
-- refactor!: train, test, validation split default change
+- feat: added Tensor Board to `evaluate_model`
+- refactor!: train, test, validation split default change
+- chore: added pre-commit checks
+- refactor: added, improved typing
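The 3.0.0 and 3.1.0 entries above hand control of the train/validation/test partition to the `evaluation_split` and `validation_split` class parameters (with changed defaults in 3.1.0). As a self-contained sketch of how such a two-stage split can carve up a series (an assumption about the mechanics for illustration, not imbrium's actual implementation), consider:

```python
# Illustrative two-stage split driven by evaluation_split and validation_split.
# imbrium's internal split logic and its 3.1.0 defaults may differ.
import numpy as np

target = np.arange(1000, dtype=float)

evaluation_split = 0.1  # fraction held out for final model evaluation
validation_split = 0.2  # fraction of the remainder used for validation

n_eval = int(len(target) * evaluation_split)
train_val, evaluation = target[:-n_eval], target[-n_eval:]

n_val = int(len(train_val) * validation_split)
train, validation = train_val[:-n_val], train_val[-n_val:]

print(len(train), len(validation), len(evaluation))  # 720 180 100
```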
README.md (34 changes: 17 additions & 17 deletions)
@@ -1,5 +1,5 @@
# imbrium [![Downloads](https://pepy.tech/badge/imbrium)](https://pepy.tech/project/imbrium) [![PyPi](https://img.shields.io/pypi/v/imbrium.svg?color=blue)](https://pypi.org/project/imbrium/) [![GitHub license](https://img.shields.io/github/license/maxmekiska/Imbrium?color=black)](https://github.com/maxmekiska/Imbrium/blob/main/LICENSE) [![PyPI pyversions](https://img.shields.io/pypi/pyversions/imbrium.svg)](https://pypi.python.org/project/imbrium/)

## Status

| Build | Status|
@@ -34,7 +34,7 @@ imbrium is a deep learning library that specializes in time series forecasting.

imbrium is designed to simplify the application of deep learning models for time series forecasting. The library offers a variety of pre-built architectures. The user retains full control over the configuration of each layer, including the number of neurons, the type of activation function, loss function, optimizer, and metrics applied. This allows for the flexibility to adapt the architecture to the specific needs of the forecast task at hand. Imbrium also offers a user-friendly interface for training and evaluating these models, making it easy to quickly iterate and test different configurations.

-imbrium uses the sliding window approach to generate forecasts. The sliding window approach in time series forecasting involves moving a fixed-size window - `steps_past` - through historical data, using the data within the window as input features. The next data points outside the window are used as the target variables - `steps_future`. This method allows the model to learn sequential patterns and trends in the data, enabling accurate predictions for future points in the time series.
+imbrium uses the sliding window approach to generate forecasts. The sliding window approach in time series forecasting involves moving a fixed-size window - `steps_past` - through historical data, using the data within the window as input features. The next data points outside the window are used as the target variables - `steps_future`. This method allows the model to learn sequential patterns and trends in the data, enabling accurate predictions for future points in the time series.
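As a standalone sketch of the sliding-window framing described above (an illustration of the idea, not imbrium's internal data preparation), each input row holds `steps_past` consecutive values and each target row holds the `steps_future` values that follow that window:

```python
# Build (X, y) pairs with a sliding window; illustration only.
import numpy as np

def sliding_window(series: np.ndarray, steps_past: int, steps_future: int):
    X, y = [], []
    for start in range(len(series) - steps_past - steps_future + 1):
        X.append(series[start : start + steps_past])
        y.append(series[start + steps_past : start + steps_past + steps_future])
    return np.array(X), np.array(y)

series = np.arange(10, dtype=float)
X, y = sliding_window(series, steps_past=4, steps_future=2)
print(X.shape, y.shape)  # (5, 4) (5, 2)
```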


### Get started with imbrium
@@ -58,12 +58,12 @@ Note, if you choose to use TensorBoard, run the following command to display the
from imbrium import PureUni

# create a PureUni object (numpy array expected)
-predictor = PureUni(target = target_numpy_array, evaluation_split = 0.1, validation_split = 0.2)
+predictor = PureUni(target = target_numpy_array, evaluation_split = 0.1, validation_split = 0.2)

# the following models are available for a PureUni object:

# create and fit a multi-layer perceptron model
-predictor.create_fit_mlp(
+predictor.create_fit_mlp(
steps_past,
steps_future,
optimizer = "adam",
@@ -225,7 +225,7 @@ predictor = create_fit_cnn(
board = False,
)

-# create and fit a gated recurrent unit neural network
+# create and fit a gated recurrent unit neural network
predictor.create_fit_gru(
steps_past,
steps_future,
@@ -400,7 +400,7 @@ predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
-```
+```

</details>

@@ -413,12 +413,12 @@ predictor.retrieve(location)
from imbrium import PureMulti

# create a PureMulti object (numpy array expected)
-predictor = PureMulti(target = target_numpy_array, features = features_numpy_array, evaluation_split = 0.1, validation_split = 0.2)
+predictor = PureMulti(target = target_numpy_array, features = features_numpy_array, evaluation_split = 0.1, validation_split = 0.2)

# the following models are available for a PureMulti object:

# create and fit a multi-layer perceptron model
-predictor.create_fit_mlp(
+predictor.create_fit_mlp(
steps_past,
steps_future,
optimizer = "adam",
@@ -580,7 +580,7 @@ predictor = create_fit_cnn(
board = False,
)

-# create and fit a gated recurrent unit neural network
+# create and fit a gated recurrent unit neural network
predictor.create_fit_gru(
steps_past,
steps_future,
@@ -755,7 +755,7 @@ predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
-```
+```
</details>

<details>
@@ -766,7 +766,7 @@ predictor.retrieve(location)
from imbrium import HybridUni

# create a HybridUni object (numpy array expected)
-predictor = HybridUni(target = target_numpy_array, evaluation_split = 0.1, validation_split = 0.2)
+predictor = HybridUni(target = target_numpy_array, evaluation_split = 0.1, validation_split = 0.2)

# the following models are available for a HybridUni object:
# create and fit a convolutional recurrent neural network
@@ -887,7 +887,7 @@ predictor.create_fit_cnnlstm(
board = False,
)

-# create and fit a convolutional gated recurrent unit neural network
+# create and fit a convolutional gated recurrent unit neural network
predictor.create_fit_cnngru(
sub_seq,
steps_past,
@@ -1158,7 +1158,7 @@ predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
-```
+```

</details>

@@ -1171,7 +1171,7 @@ predictor.retrieve(location)
from imbrium import HybridMulti

# create a HybridMulti object (numpy array expected)
-predictor = HybridMulti(target = target_numpy_array, features = features_numpy_array, evaluation_split = 0.1, validation_split = 0.2)
+predictor = HybridMulti(target = target_numpy_array, features = features_numpy_array, evaluation_split = 0.1, validation_split = 0.2)

# the following models are available for a HybridMulti object:
# create and fit a convolutional recurrent neural network
@@ -1292,7 +1292,7 @@ predictor.create_fit_cnnlstm(
board = False,
)

-# create and fit a convolutional gated recurrent unit neural network
+# create and fit a convolutional gated recurrent unit neural network
predictor.create_fit_cnngru(
sub_seq,
steps_past,
@@ -1563,7 +1563,7 @@ predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
-```
+```
</details>

</details>
@@ -1602,4 +1602,4 @@ perceptron-models-for-time-series-forecasting/.
</details>


-</details>
+</details>
imbrium/blueprints/abstract_multivariate.py (22 changes: 12 additions & 10 deletions)
@@ -1,6 +1,6 @@
from abc import ABC, abstractmethod

-from numpy import array
+import numpy as np


class MultiVariateMultiStep(ABC):
@@ -11,8 +11,8 @@ class MultiVariateMultiStep(ABC):
@abstractmethod
def __init__(
self,
-target: array = array([]),
-features: array = array([]),
+target: np.ndarray = np.array([]),
+features: np.ndarray = np.array([]),
evaluation_split: float = 0.2,
validation_split: float = 0.2,
):
@@ -27,12 +27,12 @@ def set_model_id(self, name: str):

@property
@abstractmethod
-def get_target(self) -> array:
+def get_target(self) -> np.ndarray:
pass

@property
@abstractmethod
-def get_target_shape(self) -> array:
+def get_target_shape(self) -> np.ndarray:
pass

@property
@@ -42,7 +42,7 @@ def get_model_id(self) -> str:

@property
@abstractmethod
-def get_X_input(self) -> array:
+def get_X_input(self) -> np.ndarray:
pass

@property
@@ -52,7 +52,7 @@ def get_X_input_shape(self) -> tuple:

@property
@abstractmethod
-def get_y_input(self) -> array:
+def get_y_input(self) -> np.ndarray:
pass

@property
@@ -79,14 +79,16 @@ def get_metrics(self) -> str:
def fit_model(
self,
epochs: int,
+board: bool = False,
+batch_size=None,
show_progress: int = 1,
validation_split: float = 0.20,
**callback_setting: dict
):
pass

@abstractmethod
-def evaluate_model(self):
+def evaluate_model(self, board: bool = False):
pass

@abstractmethod
@@ -104,11 +106,11 @@ def show_evaluation(self):
@abstractmethod
def predict(
self,
-data: array,
+data: np.ndarray,
sub_seq: int = None,
steps_past: int = None,
steps_future: int = None,
-) -> array:
+) -> np.ndarray:
pass

@abstractmethod
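The switch above from `from numpy import array` to `import numpy as np` is the typing improvement named in the commit message: `numpy.array` is a factory function rather than a class, so annotations such as `-> array` never described a valid type, whereas `np.ndarray` is the actual class that `np.array(...)` returns. A minimal sketch of the difference:

```python
# Why np.ndarray is the correct annotation: numpy.array is a function, not a type.
import numpy as np
from numpy import array

def old_style() -> array:       # runs, but mypy rejects a function used as a type
    return array([1.0, 2.0])

def new_style() -> np.ndarray:  # np.ndarray is the real array class
    return np.array([1.0, 2.0])

print(isinstance(new_style(), np.ndarray))       # True
print(callable(array), isinstance(array, type))  # True False
```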
imbrium/blueprints/abstract_univariate.py (20 changes: 11 additions & 9 deletions)
@@ -1,6 +1,6 @@
from abc import ABC, abstractmethod

-from numpy import array
+import numpy as np


class UniVariateMultiStep(ABC):
@@ -11,8 +11,8 @@ class UniVariateMultiStep(ABC):
@abstractmethod
def __init__(
self,
-target: array = array([]),
-features: array = array([]),
+target: np.ndarray = np.array([]),
+features: np.ndarray = np.array([]),
evaluation_split: float = 0.2,
validation_split: float = 0.2,
):
@@ -27,12 +27,12 @@ def set_model_id(self, name: str):

@property
@abstractmethod
-def get_target(self) -> array:
+def get_target(self) -> np.ndarray:
pass

@property
@abstractmethod
-def get_target_shape(self) -> array:
+def get_target_shape(self) -> np.ndarray:
pass

@property
@@ -42,7 +42,7 @@ def get_model_id(self) -> str:

@property
@abstractmethod
-def get_X_input(self) -> array:
+def get_X_input(self) -> np.ndarray:
pass

@property
@@ -52,7 +52,7 @@ def get_X_input_shape(self) -> tuple:

@property
@abstractmethod
-def get_y_input(self) -> array:
+def get_y_input(self) -> np.ndarray:
pass

@property
@@ -79,14 +79,16 @@ def get_metrics(self) -> str:
def fit_model(
self,
epochs: int,
+board: bool = False,
+batch_size=None,
show_progress: int = 1,
validation_split: float = 0.20,
**callback_setting: dict
):
pass

@abstractmethod
-def evaluate_model(self):
+def evaluate_model(self, board: bool = False):
pass

@abstractmethod
@@ -102,7 +104,7 @@ def show_evaluation(self):
pass

@abstractmethod
-def predict(self, data: array) -> array:
+def predict(self, data: np.ndarray) -> np.ndarray:
pass

@abstractmethod
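The abstract `fit_model` signature above gains `board` and `batch_size` alongside the existing `show_progress`, `validation_split`, and `**callback_setting` arguments, and `evaluate_model` now also accepts `board`. The toy class below only mirrors the `fit_model` signature to show what a call carries; the keywords routed through `**callback_setting` (here `monitor` and `patience`) are assumptions for illustration, and imbrium's concrete predictor classes implement the real Keras training logic.

```python
# Toy mirror of the abstract fit_model signature shown above; it just echoes its
# arguments. A real implementation would forward them to the underlying Keras
# training loop and, presumably, attach a TensorBoard callback when board=True.
class ToyPredictor:
    def fit_model(
        self,
        epochs: int,
        board: bool = False,
        batch_size=None,
        show_progress: int = 1,
        validation_split: float = 0.20,
        **callback_setting: dict,
    ):
        print(
            f"epochs={epochs}, board={board}, batch_size={batch_size}, "
            f"validation_split={validation_split}, callbacks={callback_setting}"
        )

ToyPredictor().fit_model(epochs=10, board=True, batch_size=32,
                         monitor="val_loss", patience=3)
```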