Commit

Merge pull request #28 from cmusso86/update_release
Update release
cmusso86 authored Jul 6, 2024
2 parents 986e90d + 849c859 commit 9e48ed2
Showing 7 changed files with 18 additions and 34 deletions.
4 changes: 4 additions & 0 deletions CRAN-SUBMISSION
@@ -0,0 +1,4 @@
Version: 0.3.0
Date: 2024-07-06 00:14:40 UTC
SHA:
bac22742080b29022aa2f2aee0e83d842a497558
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -1,6 +1,6 @@
Package: recalibratiNN
Title: Quantile Recalibration for Regression Models
Version: 0.2.1
Version: 0.3.0
Authors@R:
c(person(given = "Carolina",
family = "Musso",
9 changes: 8 additions & 1 deletion NEWS.md
@@ -1,8 +1,15 @@
# recalibratiNN 0.3.0

# recalibratiNN 0.2.0

* Resubmission after correction of doi.

# recalibratiNN 0.2.1

Substantial improvement in the documentation of the functions and correction of typos.
* Substantial improvement in the documentation of the functions and correction of typos.
Nothing has changed in the code.

* Includes a vignette with an application to a Neural Network.



2 changes: 1 addition & 1 deletion README.Rmd
@@ -8,7 +8,7 @@ output: github_document
knitr::opts_chunk$set(
collapse = TRUE,
warning = F,
message = F,
message = F ,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "80%",
27 changes: 0 additions & 27 deletions README.md
@@ -46,33 +46,6 @@ download.
``` r
if(!require(pacman)) install.packages("pacman")
pacman::p_load_current_gh("cmusso86/recalibratiNN")
#> crayon (1.5.2 -> 1.5.3) [CRAN]
#> cli (3.6.2 -> 3.6.3) [CRAN]
#>
#> The downloaded binary packages are in
#> /var/folders/rp/h9_9qkdd7c57z9_hytk4306h0000gn/T//Rtmpx2IcOw/downloaded_packages
#> ── R CMD build ─────────────────────────────────────────────────────────────────
#> checking for file ‘/private/var/folders/rp/h9_9qkdd7c57z9_hytk4306h0000gn/T/Rtmpx2IcOw/remotes17f90582977a6/cmusso86-recalibratiNN-94d02e4/DESCRIPTION’ ... ✔ checking for file ‘/private/var/folders/rp/h9_9qkdd7c57z9_hytk4306h0000gn/T/Rtmpx2IcOw/remotes17f90582977a6/cmusso86-recalibratiNN-94d02e4/DESCRIPTION’
#> ─ preparing ‘recalibratiNN’:
#> checking DESCRIPTION meta-information ... ✔ checking DESCRIPTION meta-information
#> ─ installing the package to process help pages
#> Loading required namespace: recalibratiNN
#> ─ saving partial Rd database
#> ─ checking for LF line-endings in source and make files and shell scripts
#> ─ checking for empty or unneeded directories
#> NB: this package now depends on R (>= 3.5.0)
#> WARNING: Added dependency on R >= 3.5.0 because serialized objects in
#> serialize/load version 3 cannot be read in older versions of R.
#> File(s) containing such objects:
#> ‘recalibratiNN/inst/extdata/mse_cal.rds’
#> ‘recalibratiNN/inst/extdata/y_hat_cal.rds’
#> ‘recalibratiNN/inst/extdata/y_hat_test.rds’
#> ‘recalibratiNN/vignettes/mse_cal.rds’
#> ‘recalibratiNN/vignettes/y_hat_cal.rds’
#> ‘recalibratiNN/vignettes/y_hat_test.rds’
#> ─ building ‘recalibratiNN_0.2.1.tar.gz’
#>
#>
```

## Understanding calibration/miscalibration
6 changes: 3 additions & 3 deletions cran-comments.md
@@ -2,8 +2,8 @@

0 errors | 0 warnings | 1 note

As soon as it was available on CRAN, other authors noticed many typos in the documentation that needed correction. Also, the description field of each function was improved. Nothing was changed in the code.
Improvement in the documentation of the functions and correction of typos.

I also included a sentence in the main DESCRIPTION file to emphasize the theoretical background.
Inclusion of a vignette with an application to a Neural Network.

I hope it is alright. Thanks for your patience.
Inclusion of a new reference (my undergraduate thesis) that provides more detailed documentation of the functions. I also included the link to this work in the DESCRIPTION file. However, as it does not have a DOI, I don't know if I did it correctly.
2 changes: 1 addition & 1 deletion vignettes/simple_mlp.Rmd
@@ -96,7 +96,7 @@ y_test <- y[(split2*n+1):n]
```

Now, this toy model was trained using the Keras framework with a TensorFlow backend. The ANN architecture consists of 3 hidden layers with ReLU activation functions and dropout for regularization as follows:
Now, this toy model was trained using the Keras framework with a TensorFlow backend. The architecture is an ANN with 3 fully connected hidden layers with ReLU activation functions. The first two layers are each followed by a dropout layer for regularization, and the final hidden layer is followed by a batch normalization layer. The output layer has a linear activation function to predict the mean of the response variable. The model was trained using the Adam optimizer with the mean squared error as the loss function. If we want to interpret the results probabilistically, then by using the MSE we are assuming a Gaussian distribution of the response variable (since minimizing it is equivalent to maximum likelihood estimation under that assumption). Training was stopped early using the EarlyStopping callback with a patience of 20 epochs. The code below trains the model and predicts the response variable for the calibration and test sets.
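To make the parenthetical remark about the loss precise: writing $f_{\theta}(x_i)$ for the network's predicted mean and treating the noise variance $\sigma^2$ as fixed (notation introduced here for illustration, not taken from the vignette), maximizing the Gaussian log-likelihood selects the same parameters as minimizing the MSE:

$$
\hat{\theta}_{\mathrm{MLE}}
= \arg\max_{\theta} \sum_{i=1}^{n} \log \mathcal{N}\!\left(y_i \mid f_{\theta}(x_i),\, \sigma^2\right)
= \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \left(y_i - f_{\theta}(x_i)\right)^2 .
$$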

```{r, eval=F}
model_nn <- keras_model_sequential()
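The vignette chunk above is truncated in this diff view, so here is a minimal sketch, under stated assumptions, of the architecture the paragraph describes. Only the overall structure (three fully connected ReLU layers, dropout after the first two, batch normalization after the third, a linear output, Adam with MSE loss, and early stopping with a patience of 20 epochs) comes from the text; the layer widths, dropout rate, epoch budget, validation split, and the `x_train`/`y_train`/`x_cal`/`x_test` objects are placeholders, not the vignette's actual values.

```r
library(keras)

# Sketch of the architecture described above. Layer widths (64), the dropout
# rate (0.2), the epoch budget, and the x_train/y_train/x_cal/x_test objects
# are illustrative placeholders, not the vignette's actual values.
model_nn <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu",
              input_shape = ncol(x_train)) %>%      # 1st hidden layer
  layer_dropout(rate = 0.2) %>%                     # dropout after 1st layer
  layer_dense(units = 64, activation = "relu") %>%  # 2nd hidden layer
  layer_dropout(rate = 0.2) %>%                     # dropout after 2nd layer
  layer_dense(units = 64, activation = "relu") %>%  # 3rd hidden layer
  layer_batch_normalization() %>%                   # batch norm after 3rd layer
  layer_dense(units = 1, activation = "linear")     # linear output (predicted mean)

# Adam optimizer with MSE loss, matching the Gaussian interpretation above.
model_nn %>% compile(
  optimizer = optimizer_adam(),
  loss = "mse"
)

# Early stopping with a patience of 20 epochs, as in the text.
history <- model_nn %>% fit(
  x_train, y_train,
  epochs = 500,
  validation_split = 0.2,
  callbacks = list(callback_early_stopping(patience = 20,
                                           restore_best_weights = TRUE)),
  verbose = 0
)

# Predictions for the calibration and test sets.
y_hat_cal  <- predict(model_nn, x_cal)
y_hat_test <- predict(model_nn, x_test)
```

Setting `restore_best_weights = TRUE` makes the callback roll the model back to the epoch with the best validation loss rather than keeping the weights from the final epoch.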
