diff --git a/.github/workflows/pkgdown.yaml b/.github/workflows/pkgdown.yaml new file mode 100644 index 0000000..83fa4ef --- /dev/null +++ b/.github/workflows/pkgdown.yaml @@ -0,0 +1,51 @@ +# Workflow derived from https://github.com/r-lib/actions/tree/v2/examples +# Need help debugging build failures? Start at https://github.com/r-lib/actions#where-to-find-help +on: + push: + branches: [main, develop] + release: + types: [published] + workflow_dispatch: + +name: pkgdown + +jobs: + pkgdown: + runs-on: ubuntu-latest + # Only restrict concurrency for non-PR jobs + concurrency: + group: pkgdown-${{ github.event_name != 'pull_request' || github.run_id }} + env: + GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }} + steps: + - uses: actions/checkout@v4 + + - uses: r-lib/actions/setup-pandoc@v2 + + - uses: r-lib/actions/setup-r@v2 + with: + use-public-rspm: true + + - uses: r-lib/actions/setup-r-dependencies@v2 + with: + cache: always + extra-packages: any::pkgdown, ohdsi/OhdsiRTools + needs: website + + - uses: lycheeverse/lychee-action@v2 + with: + args: --base . --verbose --no-progress --accept '100..=103, 200..=299, 403' './**/*.md' './**/*.Rmd' + + - name: Build site + run: Rscript -e 'pkgdown::build_site_github_pages(new_process = FALSE, install = TRUE)' + + - name: Fix Hades Logo + run: Rscript -e 'OhdsiRTools::fixHadesLogo()' + + - name: Deploy to GitHub pages 🚀 + if: github.event_name != 'pull_request' + uses: JamesIves/github-pages-deploy-action@v4 + with: + clean: false + branch: gh-pages + folder: docs diff --git a/_pkgdown.yml b/_pkgdown.yml index 5e3ef84..4735fda 100644 --- a/_pkgdown.yml +++ b/_pkgdown.yml @@ -1,6 +1,12 @@ template: + bootstrap: 5 params: bootswatch: cosmo + light-switch: false + +development: + mode: auto + development: docs/dev home: links: @@ -9,29 +15,27 @@ home: navbar: structure: - left: - - home - - intro - - reference - - articles - - news + left: [home, intro, reference, articles, news] right: [hades, github] components: home: icon: fa-home fa-lg href: index.html reference: + icon: fa-info-circle fa-lg text: Reference href: reference/index.html intro: + icon: fa-download fa-lg text: Get started href: articles/InstallationGuide.html news: + icon: fa-newspaper-o fa-lg text: Changelog href: news/index.html github: icon: fa-github fa-lg - href: https://github.com/OHDSI/PatientLevelPrediction + href: https://github.com/OHDSI/Characterization hades: text: hadesLogo href: https://ohdsi.github.io/Hades diff --git a/docs/404.html b/docs/404.html deleted file mode 100644 index 47c5d24..0000000 --- a/docs/404.html +++ /dev/null @@ -1,133 +0,0 @@ - - - - - - - -Page not found (404) • Characterization - - - - - - - - - - - -
-
- - - - -
-
- - -Content not found. Please use links in the navbar. - -
- - - -
- - - - -
- - - - - - - - diff --git a/docs/articles/InstallationGuide.html b/docs/articles/InstallationGuide.html deleted file mode 100644 index 4c56168..0000000 --- a/docs/articles/InstallationGuide.html +++ /dev/null @@ -1,248 +0,0 @@ - - - - - - - -Characterization Installation Guide • Characterization - - - - - - - - - - - - -
-
- - - - -
-
- - - - - -
-

Introduction -

-

This vignette describes how to install the Observational Health Data Sciences and Informatics (OHDSI) Characterization package under Windows, Mac, and Linux.

-
-
-

Software Prerequisites -

-
-

Windows Users -

-

Under Windows the OHDSI Characterization package requires -installing:

- -
-
-

Mac/Linux Users -

-

Under Mac and Linux the OHDSI Characterization package requires -installing:

- -
-
-
-

Installing the Package -

-

The preferred way to install the package is by using -remotes, which will automatically install the latest -release and all the latest dependencies.

-

If you do not want the official release you could install the -bleeding edge version of the package (latest develop branch).

-

Note that the latest develop branch could contain bugs; please report them to us if you experience problems.

-
-

Installing Characterization using remotes -

-

To install using remotes run:

-
-install.packages("remotes")
-remotes::install_github("OHDSI/Characterization")
-

When installing, make sure to close any other RStudio sessions that are using Characterization or any of its dependencies. Keeping RStudio sessions open can cause locks that prevent the package from installing.

-
-
-
-

Installation issues -

-

Installation issues should be posted in our issue tracker: http://github.com/OHDSI/Characterization/issues

-

The list below provides solutions for some common issues:

-
    -
  1. If you get an error when trying to install a package in R saying ‘Dependency X not available …’, this can sometimes be fixed by running install.packages('X') and, once that completes, reinstalling the package that had the error.

  2. Installing packages from GitHub with remotes can be affected by having multiple R sessions open: a session with a library loaded can lock that library, which can prevent installing a package that depends on it.

-
-
-

Acknowledgments -

-

Considerable work has been dedicated to providing the Characterization package.

-
-citation("Characterization")
-
## 
-## To cite package 'Characterization' in publications use:
-## 
-##   Reps J, Ryan P, Knoll C (2024). _Characterization: Characterizations
-##   of Cohorts_. https://ohdsi.github.io/Characterization,
-##   https://github.com/OHDSI/Characterization.
-## 
-## A BibTeX entry for LaTeX users is
-## 
-##   @Manual{,
-##     title = {Characterization: Characterizations of Cohorts},
-##     author = {Jenna Reps and Patrick Ryan and Chris Knoll},
-##     year = {2024},
-##     note = {https://ohdsi.github.io/Characterization, https://github.com/OHDSI/Characterization},
-##   }
-
-
- - - -
- - - - -
- - - - - - - - diff --git a/docs/articles/Specification.html b/docs/articles/Specification.html deleted file mode 100644 index 142b1a5..0000000 --- a/docs/articles/Specification.html +++ /dev/null @@ -1,1998 +0,0 @@ - - - - - - - -Characterization Package Specification • Characterization - - - - - - - - - - - - -
-
- - - - -
-
- - - - - -
-

Time-to-event -

-
-

Inputs -

-

A vector of targetIds and a vector of outcomeIds

-
-
-

Output -

-

A summary data.frame with counts of how often an outcome occurred within a time period relative to the first target index date, for each combination of target and outcome. The counts are stratified by whether the outcome was the first or a subsequent event, and by the timing category of when the outcome occurred (before first target exposure, during first target exposure, during a subsequent target exposure, between target exposures, and after last target exposure).

-
-
-

Worked Example -

-
-

Example Inputs -

-

Here we consider the inputs are:

-
-targetIds <- c(1)
-outcomeIds <- c(2) 
-

Suppose we have five patients; the target cohort (the dates each of the five patients is exposed to a drug) is in Table 1 and the outcome cohort (the dates each of the five patients has the outcome) is in Table 2. This is also illustrated in Figures 1 and 2.

-
-
-

Example Data Image -

-
Figure 1 - Example data for five patients with dates.
Figure 2 - Example data for five patients with timing.
-
-
-
-

Example Data Table -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example time-to-event data with dates.
patientId | cohortDefinitionId | cohortStartDate | cohortEndDate
1 | 1 | 2001-01-20 | 2001-01-25
1 | 1 | 2001-10-20 | 2001-12-05
2 | 1 | 2005-09-10 | 2005-09-15
3 | 1 | 2004-04-02 | 2004-05-17
4 | 1 | 2002-03-03 | 2002-06-12
4 | 1 | 2003-02-01 | 2003-02-30
4 | 1 | 2003-08-04 | 2003-08-24
5 | 1 | 2005-02-01 | 2005-10-08
5 | 1 | 2007-04-03 | 2007-05-03
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example time-to-event data with timing.
patientId | cohortDefinitionId | cohortStartDate | cohortEndDate
1 | 2 | 1999-10-03 | 1999-10-08
1 | 2 | 2001-10-30 | 2001-11-07
3 | 2 | 2004-05-16 | 2004-05-18
4 | 2 | 2002-06-03 | 2002-06-14
4 | 2 | 2003-02-20 | 2003-03-01
5 | 2 | 2006-07-21 | 2006-08-03
5 | 2 | 2008-01-01 | 2008-01-09
-

For all rows in the outcome table, we calculate the time between the patient's first exposure in the target cohort and the outcome date (time-to-event), and we classify the 'type' as the timing of when the outcome occurs: 'before first exposure' means the outcome occurs before the patient is observed in the target cohort, 'during first' means the outcome occurs during the first target cohort exposure, 'between eras' means the outcome occurs between target exposures, 'during subsequent' means the outcome occurs during a non-first target exposure, and 'after last exposure' means the outcome occurs after the end date of the patient's last exposure in the target cohort. The outcome type is classified by whether the outcome is the 'first occurrence' or a 'subsequent occurrence'. Let's consider patient 1: he has the outcome twice. The first outcome occurs 475 days before his first target exposure and his second outcome occurs 283 days after his first target exposure; the second outcome occurs during a subsequent target exposure era (not the first). Patient 2 does not have the outcome so does not contribute to the time-to-event. Patient 3 has her first (and only) outcome during the first exposure to the drug, 44 days after she started the drug for the first time. Patient 4 has the outcome twice, 92 days and 354 days after the first exposure to the drug; the first is during the first exposure to the drug and the subsequent one is during her second (subsequent) exposure. Patient 5 has the outcome twice, 535 days and 1064 days after his first exposure to the drug; the first occurs between drug exposure eras and the subsequent outcome occurs after the last exposure era. This is summarized in Table 3.

- - -------- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Table 3: Time-to-event intermediate summary.
patientId | outcomeDate | firstExposureDate | timeToEvent | type | outcomeType
1 | 1999-10-03 | 2001-01-20 | -475 | Before first exposure | First
1 | 2001-10-30 | 2001-01-20 | 283 | During subsequent | Subsequent
3 | 2004-05-16 | 2004-04-02 | 44 | During first | First
4 | 2002-06-03 | 2002-03-03 | 92 | During first | First
4 | 2003-02-20 | 2002-03-03 | 354 | During subsequent | Subsequent
5 | 2006-07-21 | 2005-02-01 | 535 | Between eras | First
5 | 2008-01-01 | 2005-02-01 | 1064 | After last exposure | Subsequent
-
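The intermediate summary in Table 3 can be reproduced with a few lines of R. The sketch below is only an illustration of the calculation (not the package's implementation) and assumes the example cohorts from Tables 1 and 2 are held in data.frames named target and outcome:

# Example target and outcome cohorts from Tables 1 and 2 (start dates only)
target <- data.frame(
  patientId = c(1, 1, 2, 3, 4, 4, 4, 5, 5),
  cohortStartDate = as.Date(c(
    "2001-01-20", "2001-10-20", "2005-09-10", "2004-04-02",
    "2002-03-03", "2003-02-01", "2003-08-04", "2005-02-01", "2007-04-03"
  ))
)
outcome <- data.frame(
  patientId = c(1, 1, 3, 4, 4, 5, 5),
  outcomeDate = as.Date(c(
    "1999-10-03", "2001-10-30", "2004-05-16", "2002-06-03",
    "2003-02-20", "2006-07-21", "2008-01-01"
  ))
)

# First target exposure per patient
firstExposure <- aggregate(cohortStartDate ~ patientId, data = target, FUN = min)
names(firstExposure)[2] <- "firstExposureDate"

# Days between each outcome and the patient's first target exposure
tte <- merge(outcome, firstExposure, by = "patientId")
tte$timeToEvent <- as.numeric(tte$outcomeDate - tte$firstExposureDate)
tte
# patient 1: -475 and 283; patient 3: 44; patient 4: 92 and 354; patient 5: 535 and 1064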

The time-to-event output aggregates the summary table into three -different perspectives:

-
    -
  • 1-day aggregate – this calculates the total number of patients -that have the outcome at each time-to-event day grouped by type and -outcome type. Only looks at outcomes between -100 days and 100 days for -the time-to-event.

  • -
  • 30-day aggregate – this calculates the total number of patients -that have the outcome at each 30-day sliding window for time-to-event -(e.g., 0-29, 30-59, 60-89, etc.) grouped by type and outcome type. Only -looks at outcomes between -1095 days and 1095 days for the -time-to-event.

  • -
  • 365-day aggregate – this calculates the total number of patients -that have the outcome at each 365-day sliding window for time-to-event -(e.g., 0-365, 366-730, 731-1095, etc.) grouped by type and outcome type. -Only looks at outcomes between -1095 days and 1095 days for the -time-to-event.

  • -
-

The summary results that would be output by time-to-event are -displayed in Table 4.

- - -------- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Table 4: Time-to-event output.
timeType | Type | outcomeType | timeStart | timeEnd | count
1-day | During first | First | 44 | 44 | 1
1-day | During first | First | 92 | 92 | 1
30-day | Before first exposure | First | -481 | -450 | 1
30-day | During first | First | 31 | 60 | 1
30-day | During first | First | 91 | 120 | 1
30-day | During subsequent | Subsequent | 271 | 300 | 1
30-day | During subsequent | Subsequent | 331 | 360 | 1
30-day | Between eras | First | 511 | 540 | 1
30-day | After last exposure | Subsequent | 1051 | 1080 | 1
365-day | Before first exposure | First | -731 | -365 | 1
365-day | During first | First | 1 | 365 | 2
365-day | During subsequent | Subsequent | 1 | 365 | 2
365-day | Between eras | First | 366 | 730 | 1
365-day | After last exposure | Subsequent | 731 | 1095 | 1
-
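The 30-day aggregation can be sketched in the same way: assign each time-to-event to a 30-day window and count distinct patients per window, timing type and outcome type. The window boundaries used below are an assumption for illustration; the package's exact boundaries (particularly for negative times) may differ slightly.

# Intermediate summary from Table 3
tte <- data.frame(
  patientId   = c(1, 1, 3, 4, 4, 5, 5),
  timeToEvent = c(-475, 283, 44, 92, 354, 535, 1064),
  type        = c("Before first exposure", "During subsequent", "During first",
                  "During first", "During subsequent", "Between eras",
                  "After last exposure"),
  outcomeType = c("First", "Subsequent", "First", "First",
                  "Subsequent", "First", "Subsequent")
)

# Assumed 30-day windows: days (w - 29) to w, e.g. day 283 falls in 271 to 300
tte$timeEnd <- ceiling(tte$timeToEvent / 30) * 30
tte$timeStart <- tte$timeEnd - 29

# Count distinct patients per window, timing type and outcome type (within +/- 1095 days)
aggregate(patientId ~ timeStart + timeEnd + type + outcomeType,
          data = subset(tte, abs(timeToEvent) <= 1095),
          FUN = function(x) length(unique(x)))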
-
-
-
-

Dechallenge-rechallenge -

-
-

Inputs -

-

A vector of targetIds, a vector of outcomeIds, an integer -dechallengeStopInterval and an integer dechallengeEvaluationWindow.

-
-
-

Output -

-

A summary data.frame with the number of dechallenge and rechallenge attempts per target and outcome combination, plus the number of dechallenge/rechallenge attempts that were successes and failures.

-
-
-

Worked Example -

-

The dechallenge-rechallenge analysis finds out how often a patient stops the drug due to the occurrence of an outcome and whether the outcome then stops, and it then looks at whether people who are re-exposed have the outcome start again. In observational data we infer these situations by finding cases where a patient has the outcome recorded during a drug exposure and appears to stop the drug within <dechallenge stop interval days – default 30 days> after the outcome occurs. For patients who have a dechallenge, we then determine whether it is a success (the outcome stops) or a failure (the outcome continues). This is determined by seeing whether the outcome starts within <dechallenge evaluation window days – default 30 days> after the exposure ends (an outcome starting is a dechallenge failure, otherwise it is a success). For patients who had a dechallenge, we then look at whether they have another exposure (more than dechallenge evaluation window days after the dechallenge exposure era end), which is a rechallenge; this is classed as a success if the outcome does not start during the rechallenge exposure era and a failure if the outcome does occur during the rechallenge exposure era.

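The logic above can be illustrated for a single exposure era with a short sketch. This mirrors the description rather than the package's SQL implementation and uses illustrative dates:

dechallengeStopInterval <- 30      # exposure must end within this many days after the outcome
dechallengeEvaluationWindow <- 30  # a new outcome within this many days of exposure end = failed dechallenge

# One exposure era and one outcome for the same patient (illustrative dates)
exposureStart <- as.Date("2001-10-20")
exposureEnd   <- as.Date("2001-12-05")
outcomeStart  <- as.Date("2001-11-30")

# Dechallenge: the outcome occurs during the exposure and the exposure ends soon after
isDechallenge <- outcomeStart >= exposureStart &
  outcomeStart <= exposureEnd &
  as.numeric(exposureEnd - outcomeStart) <= dechallengeStopInterval

# Failed dechallenge: another outcome starts shortly after the exposure ends
nextOutcomeStart <- as.Date(NA)    # this patient has no further outcome
dechallengeFailed <- isDechallenge &
  !is.na(nextOutcomeStart) &
  as.numeric(nextOutcomeStart - exposureEnd) <= dechallengeEvaluationWindow

isDechallenge      # TRUE
dechallengeFailed  # FALSE, so this dechallenge is a success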
-
-

Example Inputs -

-

Here we consider the inputs are:

-
-targetIds <- c(1)
-outcomeIds <- c(2) 
-dechallengeStopInterval <- 30
-dechallengeEvaluationWindow <- 31
-
-
-

Example Data Plot -

-
Figure 3 - Example data for five patients with dates.
-
-
-
-

Example Data Table -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example dechallenge-rechallenge data with dates.
patientId | cohortDefinitionId | cohortStartDate | cohortEndDate
1 | 1 | 2001-01-20 | 2001-01-25
1 | 1 | 2001-10-20 | 2001-12-05
2 | 1 | 2005-09-10 | 2005-09-15
2 | 1 | 2006-03-04 | 2006-03-21
2 | 1 | 2006-05-03 | 2006-05-05
3 | 1 | 2004-04-02 | 2004-05-17
4 | 1 | 2002-03-03 | 2002-06-12
4 | 1 | 2003-02-01 | 2003-02-30
4 | 1 | 2003-08-04 | 2003-08-24
5 | 1 | 2005-02-01 | 2005-10-08
5 | 1 | 2007-04-03 | 2007-05-03
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example time-to-event data with timing.
patientId | cohortDefinitionId | cohortStartDate | cohortEndDate
1 | 2 | 1999-10-03 | 1999-10-08
1 | 2 | 2001-10-30 | 2001-11-07
3 | 2 | 2004-05-16 | 2004-05-18
4 | 2 | 2002-06-03 | 2002-06-14
4 | 2 | 2003-02-20 | 2003-03-01
5 | 2 | 2006-07-21 | 2006-08-03
5 | 2 | 2008-01-01 | 2008-01-09
-

Let's consider the five patients in Table 5 and Table 6, with 30 days for the dechallenge stop interval and 31 days for the dechallenge evaluation window. First, find all cases where the outcome occurs during any exposure era and the exposure ends within 30 days after the outcome start. These are the dechallenges. Then investigate whether a new outcome starts within 31 days of the exposure era ending. These are the failed dechallenges; otherwise the dechallenge is a success. Next, for the dechallenges, find any drug exposures that occur more than 31 days after the dechallenge exposure era end. These are rechallenges. For each rechallenge, determine whether the outcome starts within 31 days of the rechallenge exposure era start. If an outcome occurs, the rechallenge is a failure, otherwise it is a success.

-
-
-

Intermediary Table -

- - ---------- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Dechallenge-rechallenge summary table showing each dechallenge. Only some patients with a dechallenge will have a rechallenge.
patientId | outcomeDate | exposureEnd | outcomeAfter | futureExposure | futureOutcome | dechallengeType | rechallengeType
1 | 2001-11-30 | 2001-12-05 | - | - | - | Success | -
2 | 2006-03-10 | 2006-03-21 | - | 2006-05-03 | - | Success | Success
3 | 2004-05-16 | 2004-05-17 | 2004-01-12 | - | - | Fail | -
4 | 2002-06-03 | 2002-06-12 | - | 2003-01-01 | 2003-02-20 | Success | Fail
-
-
-

Intermediary Plots -

-
Figure 4 - Example data for five patients with dechallenges highlighted and labelled.
Figure 5 - Example data for five patients with rechallenges highlighted and labelled.
-
-
-
-

Summary -

-

We would then summarize the results by saying there were 4 dechallenges, 3 of which were successes and 1 of which was a failure. Two patients had rechallenges, 1 being a failure and 1 being a success; see Table 8 for the example output for one target and outcome.

- - -------- - - - - - - - - - - - - - - - - -
Dechallenge-rechallenge output.
dechallengeAttempts | dechallengeSuccess | dechallengeFailure | rechallengeAttempts | rechallengeSuccess | rechallengeFailure
4 | 3 | 1 | 2 | 1 | 1
-

Note: The way an outcome and exposure phenotype are designed can make it impossible or unlikely to see a dechallenge failure. For example, if an outcome is designed with a 365-day washout window, then another outcome cannot occur within 365 days of a previous outcome. As a dechallenge failure is the outcome occurring within the dechallenge evaluation window days after the exposure ends (and the exposure must end within the stop interval days of the outcome to be a dechallenge), using the defaults for these values means a dechallenge failure requires an outcome to be possible within 60 days of the dechallenge outcome, which is impossible with a 365-day washout window.

-
-
-
-
-

Aggregate Covariates -

-
-

Inputs -

-

A vector of targetIds, plus the minimum prior observation required for the target cohorts and the covariate settings (covariateSettings) specifying which features to extract.

-
-
-

Outputs -

-

For each target cohort, restricted to the first occurrence per patient and to patients with the minimum prior observation at index, the mean value of each feature of interest is extracted into a data.frame.

-
-
-

Worked Example -

-

The aggregate covariates analysis calculates the mean value of a feature within a cohort of patients. In this analysis we restrict to the first occurrence in the cohort with a minimum prior observation in days specified by the user (default 365 days). This restriction is implemented because otherwise a patient could contribute multiple times to the mean value, which makes interpretation difficult.

-
-

Example Inputs -

-

Here we consider the inputs are:

-
-minPriorObservation <- 365
-covariateSettings <- FeatureExtraction::createCovariateSettings(
-  useDemographicsAge = T, 
-  useDemographicsGender = T,
-  useConditionOccurrenceAnyTimePrior = T, includedCovariateConceptIds = c(201820)
-  )
-
-
-

Example Data -

-

Let's assume we have two cohorts (ids 1 and 2). The first cohort contains five patients who have >365 days prior observation at index and the second contains three patients who have >365 days prior observation at index.

-

The patients' features are displayed in Table 7, containing each patient's age at index, whether they have diabetes any time prior to index, and their sex.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example patient level feature data.
patientId | cohortId | feature | value
1 | 1 | age | 50
1 | 1 | sex | Male
1 | 1 | diabetes | Yes
2 | 1 | age | 18
2 | 1 | sex | Female
2 | 1 | diabetes | No
3 | 1 | age | 22
3 | 1 | sex | Male
3 | 1 | diabetes | No
4 | 1 | age | 40
4 | 1 | sex | Male
4 | 1 | diabetes | No
5 | 1 | age | 70
5 | 1 | sex | Female
5 | 1 | diabetes | Yes
1 | 2 | age | 24
1 | 2 | sex | Female
1 | 2 | diabetes | No
2 | 2 | age | 35
2 | 2 | sex | Female
2 | 2 | diabetes | No
3 | 2 | age | 31
3 | 2 | sex | Female
3 | 2 | diabetes | No
-
-
-

Results -

-

We calculate the mean values for each feature per cohort:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example aggregate features for two example cohorts.
cohortId | feature | mean
1 | Age | 40.0
1 | Sex: Male | 0.6
1 | Diabetes: Yes | 0.4
2 | Age | 30.0
2 | Sex: Male | 0.0
2 | Diabetes: Yes | 0.0
-
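These means can be reproduced from the patient-level data with a short sketch (for illustration only; in the package the aggregation is done via FeatureExtraction). Here the Table 7 features are recoded as numeric columns:

# Patient-level features from Table 7, recoded so every feature is 0/1 or numeric
features <- rbind(
  data.frame(cohortId = 1, patientId = 1:5,
             age = c(50, 18, 22, 40, 70),
             sexMale = c(1, 0, 1, 1, 0),
             diabetes = c(1, 0, 0, 0, 1)),
  data.frame(cohortId = 2, patientId = 1:3,
             age = c(24, 35, 31),
             sexMale = c(0, 0, 0),
             diabetes = c(0, 0, 0))
)

# Mean value of every feature per cohort
aggregate(cbind(age, sexMale, diabetes) ~ cohortId, data = features, FUN = mean)
# cohort 1: age 40, male 0.6, diabetes 0.4; cohort 2: age 30, male 0.0, diabetes 0.0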

The database and cohort comparison implements the aggregate covariate analysis for all target and outcome ids fed into Characterization, across all available OMOP CDM databases, and then lets users compare the mean values of the features between databases for the same cohort or across different cohorts within the same database. The standardized mean difference is calculated between two cohorts where possible; per feature this is: abs(mean value in cohort 1 - mean value in cohort 2) divided by the square root of ((standard deviation of values in cohort 1 squared plus standard deviation of values in cohort 2 squared)/2).

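A sketch of that standardized mean difference as an R helper; for a binary feature the standard deviation can be derived from the proportion. This illustrates the formula rather than the package's exact code:

# Standardized mean difference between two cohorts for one feature
smd <- function(mean1, sd1, mean2, sd2) {
  abs(mean1 - mean2) / sqrt((sd1^2 + sd2^2) / 2)
}

# For a binary feature with proportion p, sd = sqrt(p * (1 - p))
binarySd <- function(p) sqrt(p * (1 - p))

# Example: proportion male in the two cohorts above (0.6 vs 0.0)
smd(0.6, binarySd(0.6), 0.0, binarySd(0.0))
# 1.73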
-
-
-
-
-

Risk Factor Analysis -

-
-

Inputs -

-

A vector of targetIds and outcomeIds plus the minimum prior -observation required for the target cohorts, the outcome washout days -for the outcomes, settings for the time-at-risk and covariate settings -specifying which features to extract.

-
-
-

Outputs -

-

For each target and outcome combination we run the aggregate covariate analysis for the special case of comparing cohort 1 (patients in the target cohort for the first time, with 365 days prior observation, who go on to have the first occurrence of the outcome within the outcome washout days during some time-at-risk relative to the target cohort index) vs cohort 2 (patients in the target cohort for the first time, with 365 days prior observation, who do not go on to have the first occurrence of the outcome during that time-at-risk).

-
-
-

Worked Example -

-

Let's consider an example with a time-at-risk of target cohort start + 1 to target cohort start + 180.

-
-

Example Inputs -

-
-targetId <- 1
-outcomeId <- 2
-minPriorObservation <- 365
-outcomeWashoutDays <- 365
-riskWindowStart <- 1
-startAnchor <- 'cohort start'
-riskWindowEnd <- 180
-endAnchor <- 'cohort start'
-covariateSettings <- FeatureExtraction::createCovariateSettings(
-  useDemographicsAge = T, 
-  useDemographicsGender = T,
-  useConditionOccurrenceAnyTimePrior = T, includedCovariateConceptIds = c(201820)
-  )
-
-
-

Example Data -

- - ------- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example target cohort.
patientId | targetCohortId | cohortStartDate | cohortEndDate | observationStart
1 | 1 | 2001-01-20 | 2001-01-25 | 2000-02-01
1 | 1 | 2001-10-20 | 2001-12-05 | 2000-02-01
2 | 1 | 2005-09-10 | 2005-09-15 | 2001-02-01
3 | 1 | 2004-04-02 | 2004-05-17 | 2001-02-01
4 | 1 | 2002-03-03 | 2002-06-12 | 2001-02-01
4 | 1 | 2003-02-01 | 2003-02-30 | 2001-02-01
4 | 1 | 2003-08-04 | 2003-08-24 | 2001-02-01
5 | 1 | 2005-02-01 | 2005-10-08 | 2001-02-01
5 | 1 | 2007-04-03 | 2007-05-03 | 2001-02-01
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example outcome cohort.
patientId | targetCohortId | cohortStartDate | cohortEndDate
1 | 2 | 1999-10-03 | 1999-10-08
1 | 2 | 2001-10-30 | 2001-11-07
3 | 2 | 2004-05-16 | 2004-05-18
4 | 2 | 2002-06-03 | 2002-06-14
4 | 2 | 2003-02-20 | 2003-03-01
5 | 2 | 2006-07-21 | 2006-08-03
5 | 2 | 2008-01-01 | 2008-01-09
-
-
-

Intermediary Tables

-

First, we find each patient's first target exposure with 365 days prior observation. Patient 1 is removed as they are exposed for the first time with less than 365 days prior observation. The non-first exposures of patients 4 and 5 are removed. This leaves:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example target cohort meeting risk factor inclusion criteria.
patientId | targetCohortId | cohortStartDate | cohortEndDate
2 | 1 | 2005-09-10 | 2005-09-15
3 | 1 | 2004-04-02 | 2004-05-17
4 | 1 | 2002-03-03 | 2002-06-12
5 | 1 | 2005-02-01 | 2005-10-08
-
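A sketch of that eligibility step, assuming the example target cohort above is a data.frame called target (rows ordered by cohortStartDate within patient); this is illustrative only:

minPriorObservation <- 365

target <- data.frame(
  patientId = c(1, 1, 2, 3, 4, 4, 4, 5, 5),
  cohortStartDate = as.Date(c(
    "2001-01-20", "2001-10-20", "2005-09-10", "2004-04-02",
    "2002-03-03", "2003-02-01", "2003-08-04", "2005-02-01", "2007-04-03"
  )),
  observationStart = as.Date(c(
    "2000-02-01", "2000-02-01", "2001-02-01", "2001-02-01",
    "2001-02-01", "2001-02-01", "2001-02-01", "2001-02-01", "2001-02-01"
  ))
)

# Keep only each patient's first exposure ...
first <- target[!duplicated(target$patientId), ]

# ... and require at least minPriorObservation days of observation before index
eligible <- first[as.numeric(first$cohortStartDate - first$observationStart) >=
                    minPriorObservation, ]
eligible$patientId  # 2 3 4 5 (patient 1 has only 354 days prior observation)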

We then label the patients in the target cohort according to whether or not the outcome occurs between 1 day and 180 days after index:

- - ------- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example target cohort meeting risk factor inclusion criteria.
patientId | targetCohortId | cohortStartDate | cohortEndDate | labels
2 | 1 | 2005-09-10 | 2005-09-15 | Non-outcome
3 | 1 | 2004-04-02 | 2004-05-17 | Outcome
4 | 1 | 2002-03-03 | 2002-06-12 | Outcome
5 | 1 | 2005-02-01 | 2005-10-08 | Non-outcome
-

Note: we also remove patients in the target cohort who have the outcome during the outcome washout days prior to target index. In this example, nobody had the outcome prior, so this was not observed.

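Continuing the sketch, the labelling can be done by checking, for each eligible patient, whether an outcome falls inside the time-at-risk (index + 1 to index + 180 days here); again this is illustrative only:

riskWindowStart <- 1
riskWindowEnd <- 180

outcome <- data.frame(
  patientId = c(1, 1, 3, 4, 4, 5, 5),
  outcomeDate = as.Date(c(
    "1999-10-03", "2001-10-30", "2004-05-16", "2002-06-03",
    "2003-02-20", "2006-07-21", "2008-01-01"
  ))
)

hasOutcomeInTar <- sapply(seq_len(nrow(eligible)), function(i) {
  index <- eligible$cohortStartDate[i]
  any(outcome$patientId == eligible$patientId[i] &
        outcome$outcomeDate >= index + riskWindowStart &
        outcome$outcomeDate <= index + riskWindowEnd)
})
eligible$label <- ifelse(hasOutcomeInTar, "Outcome", "Non-outcome")
# patients 3 and 4 are labelled Outcome; patients 2 and 5 are Non-outcome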
-

If the features for these four patients are:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example patient level feature data.
patientId | cohortId | feature | value
2 | Non-outcome | age | 50
2 | Non-outcome | sex | Male
2 | Non-outcome | diabetes | Yes
3 | Outcome | age | 18
3 | Outcome | sex | Female
3 | Outcome | diabetes | No
4 | Outcome | age | 22
4 | Outcome | sex | Male
4 | Outcome | diabetes | No
5 | Non-outcome | age | 40
5 | Non-outcome | sex | Male
5 | Non-outcome | diabetes | No
-
-
-

Results -

-

We calculate the mean values for each feature per non-outcome and -outcome cohort:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example aggregate features for risk factor analysis.
cohortId | feature | mean
Outcome | Age | 20.0
Outcome | Sex: Male | 0.5
Outcome | Diabetes: Yes | 0.0
Non-outcome | Age | 45.0
Non-outcome | Sex: Male | 1.0
Non-outcome | Diabetes: Yes | 0.5
-

We can then calculate the standardized mean difference between the outcome and non-outcome cohorts; per feature this is: abs(mean value in outcome cohort - mean value in non-outcome cohort) divided by the square root of ((standard deviation of values in outcome cohort squared plus standard deviation of values in non-outcome cohort squared)/2).

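For example, applying the smd helper sketched in the aggregate covariates section above to the Diabetes feature (mean 0.0 in the outcome cohort vs 0.5 in the non-outcome cohort, with binary standard deviations sqrt(p * (1 - p))):

smd(0.0, sqrt(0.0 * 1.0), 0.5, sqrt(0.5 * 0.5))
# abs(0 - 0.5) / sqrt((0 + 0.25) / 2) = 1.41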
-
-
-
-
-

Case Series -

-

The case series analysis looks at the patients in a target cohort who have the outcome during a specified time-at-risk and calculates the aggregate covariates at three different time periods: shortly before target index, between target index and outcome index, and shortly after outcome index.

-
-

Inputs -

-

A vector of targetIds and outcomeIds plus the minimum prior -observation required for the target cohorts, the outcome washout days -for the outcomes, settings for the time-at-risk and covariate settings -specifying which features to extract.

-

In addition, you need to specify how long before target index to extract the 'before index' features (preTargetIndexDays) and how long after outcome index to extract the 'after index' features (postOutcomeIndexDays).

-
-
-

Outputs -

-

For each target and outcome combination we run the aggregate covariate analysis for the patients in the target cohort (with the minimum prior observation days before index) who have the outcome (for the first time within the outcome washout days). We use three different time periods for feature extraction:

-
    -
  • (before) preTargetIndexDays before target index up to target -index
  • -
  • (during) between target index plus 1 day to outcome index
  • -
  • (after) 1 day after outcome index to outcome index plus -postOutcomeIndexDays
  • -
-
-
-

Worked Example -

-

In this example we look at how often diabetes is recorded for the cases (people in the target cohort who have the outcome within 180 days of target index) in the year before target index, between target index and outcome index, and in the year after outcome index.

-
-

Example Inputs -

-

Here we consider the inputs are:

-
-targetId <- 1
-outcomeId <- 2
-minPriorObservation <- 365
-outcomeWashoutDays <- 365
-preTargetIndexDays <- 365
-postOutcomeIndexDays <- 365
-riskWindowStart <- 1
-startAnchor <- 'cohort start'
-riskWindowEnd <- 180
-endAnchor <- 'cohort start'
-covariateSettings <- FeatureExtraction::createCovariateSettings(
-  useConditionOccurrenceAnyTimePrior = T, includedCovariateConceptIds = c(201820)
-  )
-
-
-

Example Data -

- - ------- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example target cohort.
patientId | targetCohortId | cohortStartDate | cohortEndDate | observationStart
1 | 1 | 2001-01-20 | 2001-01-25 | 2000-02-01
1 | 1 | 2001-10-20 | 2001-12-05 | 2000-02-01
2 | 1 | 2005-09-10 | 2005-09-15 | 2001-02-01
3 | 1 | 2004-04-02 | 2004-05-17 | 2001-02-01
4 | 1 | 2002-03-03 | 2002-06-12 | 2001-02-01
4 | 1 | 2003-02-01 | 2003-02-30 | 2001-02-01
4 | 1 | 2003-08-04 | 2003-08-24 | 2001-02-01
5 | 1 | 2005-02-01 | 2005-10-08 | 2001-02-01
5 | 1 | 2007-04-03 | 2007-05-03 | 2001-02-01
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example outcome cohort.
patientId | targetCohortId | cohortStartDate | cohortEndDate
1 | 2 | 1999-10-03 | 1999-10-08
1 | 2 | 2001-10-30 | 2001-11-07
3 | 2 | 2004-05-16 | 2004-05-18
4 | 2 | 2002-06-03 | 2002-06-14
4 | 2 | 2003-02-20 | 2003-03-01
5 | 2 | 2006-07-21 | 2006-08-03
5 | 2 | 2008-01-01 | 2008-01-09
-
-
-

Intermediary Tables

-

First, we find each patient's first target exposure with 365 days prior observation. Patient 1 is removed as they are exposed for the first time with less than 365 days prior observation. The non-first exposures of patients 4 and 5 are removed. This leaves:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example target cohort meeting risk factor inclusion criteria.
patientId | targetCohortId | cohortStartDate | cohortEndDate
2 | 1 | 2005-09-10 | 2005-09-15
3 | 1 | 2004-04-02 | 2004-05-17
4 | 1 | 2002-03-03 | 2002-06-12
5 | 1 | 2005-02-01 | 2005-10-08
-

We then find the patients in the target cohort with the outcome occurring between 1 day and 180 days after index:

- - - - - - - - - - - - - - - - - - - - - - - - - -
Example target cohort meeting case inclusion criteria.
patientId | targetCohortId | cohortStartDate | cohortEndDate | labels
3 | 1 | 2004-04-02 | 2004-05-17 | Outcome
4 | 1 | 2002-03-03 | 2002-06-12 | Outcome
-

Note: we also remove patients in the target cohort who have the outcome during the outcome washout days prior to target index. In this example, nobody had the outcome prior, so this was not observed.

-

Now we define the before (365 days before target index up to target index), during (target index plus 1 day to outcome index) and after (outcome index plus 1 day to outcome index plus 365 days) windows:

- - ------------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example cases with before/during/after dates.
patientId | targetCohortId | targetStartDate | outcomeStartDate | beforeStartDate | beforeEndDate | duringStartDate | duringEndDate | afterStartDate | afterEndDate
3 | 1 | 2004-04-02 | 2004-05-16 | 2003-04-03 | 2004-04-02 | 2004-04-03 | 2004-05-16 | 2004-05-17 | 2005-05-16
4 | 1 | 2002-03-03 | 2002-06-03 | 2001-03-03 | 2002-03-03 | 2002-03-04 | 2002-06-03 | 2002-06-04 | 2003-06-03
-
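The before/during/after windows in that table can be derived directly from the target and outcome index dates; a minimal sketch assuming a data.frame cases holding the two cases above:

preTargetIndexDays <- 365
postOutcomeIndexDays <- 365

cases <- data.frame(
  patientId = c(3, 4),
  targetStartDate = as.Date(c("2004-04-02", "2002-03-03")),
  outcomeStartDate = as.Date(c("2004-05-16", "2002-06-03"))
)

cases$beforeStartDate <- cases$targetStartDate - preTargetIndexDays
cases$beforeEndDate   <- cases$targetStartDate
cases$duringStartDate <- cases$targetStartDate + 1
cases$duringEndDate   <- cases$outcomeStartDate
cases$afterStartDate  <- cases$outcomeStartDate + 1
cases$afterEndDate    <- cases$outcomeStartDate + postOutcomeIndexDays
cases  # reproduces the before/during/after dates shown above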

If the features for these two patients at the three time periods -are:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Example patient level case feature data.
patientId | feature | timePeriod | value
3 | diabetes | before | No
3 | diabetes | during | No
3 | diabetes | after | Yes
4 | diabetes | before | Yes
4 | diabetes | during | Yes
4 | diabetes | after | Yes
-
-
-

Results -

-

Finally, we aggregate over the time periods:

- - - - - - - - - - - - - - - - - - - - - - - - -
Example patient level case feature data.
feature | timePeriod | value
diabetes: Yes | before | 0.5
diabetes: Yes | during | 0.5
diabetes: Yes | after | 1.0
-
-
-
-
- - - -
- - - - -
- - - - - - - - diff --git a/docs/articles/UsingCharacterizationPackage.html b/docs/articles/UsingCharacterizationPackage.html deleted file mode 100644 index 33b44c6..0000000 --- a/docs/articles/UsingCharacterizationPackage.html +++ /dev/null @@ -1,3054 +0,0 @@ - - - - - - - -Using Characterization Package • Characterization - - - - - - - - - - - - -
-
- - - - -
-
-
- - - - - -
-

Introduction -

-

This vignette describes how you can use the Characterization package -for various descriptive studies using OMOP CDM data. The -Characterization package currently contains three different types of -analyses:

-
    -
  • Aggregate Covariates: this returns the mean feature value for a set -of features specified by the user for i) the Target cohort population, -ii) the Outcome cohort population, iii) the Target population patients -who had the outcome during some user specified time-at-risk and iv) the -Target population patients who did not have the outcome during some user -specified time-at-risk.
  • -
  • DechallengeRechallenge: this is mainly aimed at investigating whether a drug and an event are causally related, by seeing whether the drug is stopped close in time to the event occurrence (a dechallenge) and then whether the drug is restarted (a rechallenge) and, if so, whether the event starts again (a failed rechallenge). In this analysis, the Target cohorts are the drug users of interest and the Outcome cohorts are the medical events you wish to see whether the drug may cause. The user must also specify how close in time a drug must be stopped after the outcome to be considered a dechallenge and how close in time an Outcome must occur after restarting the drug to be considered a failed rechallenge.
  • -
  • Time-to-event: this returns descriptive results showing the timing -between the target cohort and outcome. This can help identify whether -the outcome often precedes the target cohort or whether it generally -comes after.
  • -
-
-
-

Setup -

-

In this vignette we will show working examples using the -Eunomia R package that contains simulated data. Run the -following code to install the Eunomia R package:

-
-install.packages("remotes")
-remotes::install_github("ohdsi/Eunomia")
-

Eunomia can be used to create a temporary SQLite database with the simulated data. The function getEunomiaConnectionDetails creates connection details for a SQLite database in a temporary location. The function createCohorts then populates the temporary SQLite database with the simulated cohorts ready to be used.

-
-connectionDetails <- Eunomia::getEunomiaConnectionDetails()
-Eunomia::createCohorts(connectionDetails = connectionDetails)
-
## Connecting using SQLite driver
-
## Creating cohort: Celecoxib
-## 
-  |                                                                            
-  |                                                                      |   0%
-  |                                                                            
-  |======================================================================| 100%
-
## Executing SQL took 0.00557 secs
-
## Creating cohort: Diclofenac
-## 
-  |                                                                            
-  |                                                                      |   0%
-  |                                                                            
-  |======================================================================| 100%
-
## Executing SQL took 0.00487 secs
-
## Creating cohort: GiBleed
-## 
-  |                                                                            
-  |                                                                      |   0%
-  |                                                                            
-  |======================================================================| 100%
-
## Executing SQL took 0.00951 secs
-
## Creating cohort: NSAIDs
-## 
-  |                                                                            
-  |                                                                      |   0%
-  |                                                                            
-  |======================================================================| 100%
-
## Executing SQL took 0.0507 secs
-
## Cohorts created in table main.cohort
-
##   cohortId       name
-## 1        1  Celecoxib
-## 2        2 Diclofenac
-## 3        3    GiBleed
-## 4        4     NSAIDs
-##                                                                                        description
-## 1    A simplified cohort definition for new users of celecoxib, designed specifically for Eunomia.
-## 2    A simplified cohort definition for new users ofdiclofenac, designed specifically for Eunomia.
-## 3 A simplified cohort definition for gastrointestinal bleeding, designed specifically for Eunomia.
-## 4       A simplified cohort definition for new users of NSAIDs, designed specifically for Eunomia.
-##   count
-## 1  1844
-## 2   850
-## 3   479
-## 4  2694
-

We also need to have the Characterization package installed and loaded.

-
-remotes::install_github("ohdsi/FeatureExtraction")
-remotes::install_github("ohdsi/Characterization")
- -
## 
-## Attaching package: 'dplyr'
-
## The following objects are masked from 'package:stats':
-## 
-##     filter, lag
-
## The following objects are masked from 'package:base':
-## 
-##     intersect, setdiff, setequal, union
-
-
-

Examples -

-
-

Aggregate Covariates

-

To run an ‘Aggregate Covariate’ analysis you need to create a setting -object using createAggregateCovariateSettings. This -requires specifying:

-
    -
  • one or more targetIds (these must be pre-generated in a cohort -table)
  • -
  • one or more outcomeIds (these must be pre-generated in a cohort -table)
  • -
  • the covariate settings using -FeatureExtraction::createCovariateSettings or by creating -your own custom feature extraction code.
  • -
  • the time-at-risk settings
  • -
  • riskWindowStart
  • -
  • startAnchor
  • -
  • riskWindowEnd
  • -
  • endAnchor
  • -
-

Using the Eunomia data, where we previously generated four cohorts, we can use cohort ids 1, 2 and 4 as the targetIds and cohort id 3 as the outcomeIds:

-
-exampleTargetIds <- c(1, 2, 4)
-exampleOutcomeIds <- 3
-

If we want to get information on the sex assigned at birth, age at -index and Charlson Comorbidity index we can create the settings using -FeatureExtraction::createCovariateSettings:

-
-exampleCovariateSettings <- FeatureExtraction::createCovariateSettings(
-  useDemographicsGender = T,
-  useDemographicsAge = T,
-  useCharlsonIndex = T
-)
-

If we want to create the aggregate features for all our target cohorts, our outcome cohort, and each target cohort restricted to those with a record of the outcome from 1 day after the target cohort start date until 365 days after the target cohort start date, excluding mean values below 0.01, we can run:

-
-exampleAggregateCovariateSettings <- createAggregateCovariateSettings(
-  targetIds = exampleTargetIds,
-  outcomeIds = exampleOutcomeIds,
-  riskWindowStart = 1, startAnchor = "cohort start",
-  riskWindowEnd = 365, endAnchor = "cohort start",
-  covariateSettings = exampleCovariateSettings,
-  minCharacterizationMean = 0.01
-)
-

Next we need to use the exampleAggregateCovariateSettings as the settings for computeAggregateCovariateAnalyses. We use the Eunomia connectionDetails; in Eunomia the OMOP CDM data and the cohort table are in the 'main' schema, and the cohort table name is 'cohort'. The following code will apply the aggregate covariates analysis using the previously specified settings on the simulated Eunomia data:

-
-agc <- computeAggregateCovariateAnalyses(
-  connectionDetails = connectionDetails,
-  cdmDatabaseSchema = "main",
-  cdmVersion = 5,
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  aggregateCovariateSettings = exampleAggregateCovariateSettings,
-  databaseId = "Eunomia",
-  runId = 1
-)
-

If you would like to save the results you can use the function -saveAggregateCovariateAnalyses and this can then be loaded -using loadAggregateCovariateAnalyses.

-

The results are Andromeda objects that can be viewed using dplyr. There are four tables:

-
    -
  • covariates:
  • -
-
-agc$covariates %>%
-  collect() %>%
-  kableExtra::kbl()
-
## Warning in !is.null(rmarkdown::metadata$output) && rmarkdown::metadata$output
-## %in% : 'length(x) = 3 > 1' in coercion to 'logical(1)'
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-databaseId - -runId - -cohortDefinitionId - -covariateId - -sumValue - -averageValue -
-Eunomia - -1 - -170096739 - -8507001 - -57 - -0.4596774 -
-Eunomia - -1 - -170096739 - -8532001 - -67 - -0.5403226 -
-Eunomia - -1 - -1421450349 - -8507001 - -237 - -0.4947808 -
-Eunomia - -1 - -1421450349 - -8532001 - -242 - -0.5052192 -
-Eunomia - -1 - -1498088760 - -8507001 - -1289 - -0.4901141 -
-Eunomia - -1 - -1498088760 - -8532001 - -1341 - -0.5098859 -
-Eunomia - -1 - -2038349030 - -8507001 - -237 - -0.4947808 -
-Eunomia - -1 - -2038349030 - -8532001 - -242 - -0.5052192 -
-Eunomia - -1 - -2038795861 - -8507001 - -894 - -0.4966667 -
-Eunomia - -1 - -2038795861 - -8532001 - -906 - -0.5033333 -
-Eunomia - -1 - -2246615035 - -8507001 - -395 - -0.4759036 -
-Eunomia - -1 - -2246615035 - -8532001 - -435 - -0.5240964 -
-Eunomia - -1 - -3945088378 - -8507001 - -180 - -0.5070423 -
-Eunomia - -1 - -3945088378 - -8532001 - -175 - -0.4929577 -
-Eunomia - -1 - -1810054421 - -8507001 - -237 - -0.4947808 -
-Eunomia - -1 - -1810054421 - -8532001 - -242 - -0.5052192 -
-Eunomia - -1 - -3451257159 - -8507001 - -57 - -0.4596774 -
-Eunomia - -1 - -3451257159 - -8532001 - -67 - -0.5403226 -
-Eunomia - -1 - -4205858076 - -8507001 - -180 - -0.5070423 -
-Eunomia - -1 - -4205858076 - -8532001 - -175 - -0.4929577 -
-Eunomia - -1 - -1639865637 - -8507001 - -57 - -0.4596774 -
-Eunomia - -1 - -1639865637 - -8532001 - -67 - -0.5403226 -
-Eunomia - -1 - -2896235937 - -8507001 - -237 - -0.4947808 -
-Eunomia - -1 - -2896235937 - -8532001 - -242 - -0.5052192 -
-Eunomia - -1 - -3648795733 - -8507001 - -180 - -0.5070423 -
-Eunomia - -1 - -3648795733 - -8532001 - -175 - -0.4929577 -
-
    -
  • covariatesContinuous:
  • -
-
-agc$covariatesContinuous %>%
-  collect() %>%
-  kableExtra::kbl()
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-databaseId - -runId - -cohortDefinitionId - -covariateId - -countValue - -minValue - -maxValue - -averageValue - -standardDeviation - -medianValue - -p10Value - -p25Value - -p75Value - -p90Value -
-Eunomia - -1 - -170096739 - -1901 - -41 - -0 - -2 - -0.4274194 - -0.4606464 - -0 - -0 - -0 - -1 - -1 -
-Eunomia - -1 - -3945088378 - -1901 - -275 - -0 - -2 - -0.9549296 - -0.4233403 - -1 - -0 - -1 - -1 - -2 -
-Eunomia - -1 - -2246615035 - -1901 - -296 - -0 - -3 - -0.4024096 - -0.3450454 - -0 - -0 - -0 - -1 - -1 -
-Eunomia - -1 - -2038349030 - -1901 - -316 - -0 - -2 - -0.8183716 - -0.4280688 - -1 - -0 - -0 - -1 - -2 -
-Eunomia - -1 - -1421450349 - -1901 - -316 - -0 - -2 - -0.8204593 - -0.4299773 - -1 - -0 - -0 - -1 - -2 -
-Eunomia - -1 - -2038795861 - -1901 - -935 - -0 - -2 - -0.6144444 - -0.3867813 - -1 - -0 - -0 - -1 - -1 -
-Eunomia - -1 - -1498088760 - -1901 - -1231 - -0 - -3 - -0.5475285 - -0.3777510 - -0 - -0 - -0 - -1 - -1 -
-Eunomia - -1 - -170096739 - -1002 - -124 - -32 - -46 - -38.8709677 - -3.4000663 - -39 - -34 - -36 - -41 - -44 -
-Eunomia - -1 - -3945088378 - -1002 - -355 - -32 - -46 - -38.7746479 - -3.2746121 - -39 - -35 - -36 - -41 - -43 -
-Eunomia - -1 - -2038349030 - -1002 - -479 - -32 - -46 - -38.7995825 - -3.3042257 - -39 - -34 - -36 - -41 - -44 -
-Eunomia - -1 - -1421450349 - -1002 - -479 - -32 - -47 - -38.9206681 - -3.2884308 - -39 - -35 - -36 - -41 - -44 -
-Eunomia - -1 - -2246615035 - -1002 - -830 - -31 - -46 - -38.5746988 - -3.2910429 - -39 - -34 - -36 - -41 - -43 -
-Eunomia - -1 - -2038795861 - -1002 - -1800 - -31 - -47 - -38.6450000 - -3.3212435 - -39 - -34 - -36 - -41 - -43 -
-Eunomia - -1 - -1498088760 - -1002 - -2630 - -31 - -47 - -38.6228137 - -3.3112779 - -39 - -34 - -36 - -41 - -43 -
-Eunomia - -1 - -3451257159 - -1901 - -41 - -0 - -2 - -0.4274194 - -0.4606464 - -0 - -0 - -0 - -1 - -1 -
-Eunomia - -1 - -4205858076 - -1901 - -275 - -0 - -2 - -0.9549296 - -0.4233403 - -1 - -0 - -1 - -1 - -2 -
-Eunomia - -1 - -1810054421 - -1901 - -316 - -0 - -2 - -0.8183716 - -0.4280688 - -1 - -0 - -0 - -1 - -2 -
-Eunomia - -1 - -3451257159 - -1002 - -124 - -32 - -46 - -38.8709677 - -3.4000663 - -39 - -34 - -36 - -41 - -44 -
-Eunomia - -1 - -4205858076 - -1002 - -355 - -32 - -46 - -38.7746479 - -3.2746121 - -39 - -35 - -36 - -41 - -43 -
-Eunomia - -1 - -1810054421 - -1002 - -479 - -32 - -46 - -38.7995825 - -3.3042257 - -39 - -34 - -36 - -41 - -44 -
-Eunomia - -1 - -1639865637 - -1901 - -41 - -0 - -2 - -0.4274194 - -0.4606464 - -0 - -0 - -0 - -1 - -1 -
-Eunomia - -1 - -3648795733 - -1901 - -277 - -0 - -2 - -0.9633803 - -0.4245513 - -1 - -0 - -1 - -1 - -2 -
-Eunomia - -1 - -2896235937 - -1901 - -318 - -0 - -2 - -0.8246347 - -0.4290528 - -1 - -0 - -0 - -1 - -2 -
-Eunomia - -1 - -1639865637 - -1002 - -124 - -32 - -47 - -38.9758065 - -3.4226973 - -39 - -34 - -36 - -41 - -44 -
-Eunomia - -1 - -3648795733 - -1002 - -355 - -32 - -46 - -38.9014085 - -3.2449654 - -39 - -35 - -36 - -41 - -43 -
-Eunomia - -1 - -2896235937 - -1002 - -479 - -32 - -47 - -38.9206681 - -3.2884308 - -39 - -35 - -36 - -41 - -44 -
-
    -
  • covariateRef:
  • -
-
-agc$covariateRef %>%
-  collect() %>%
-  kableExtra::kbl()
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-databaseId - -runId - -covariateId - -covariateName - -analysisId - -conceptId -
-Eunomia - -1 - -8507001 - -gender = MALE - -1 - -8507 -
-Eunomia - -1 - -8532001 - -gender = FEMALE - -1 - -8532 -
-Eunomia - -1 - -1901 - -Charlson index - Romano adaptation - -901 - -0 -
-Eunomia - -1 - -1002 - -age in years - -2 - -0 -
-
    -
  • analysisRef:
  • -
-
-agc$analysisRef %>%
-  collect() %>%
-  kableExtra::kbl()
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-databaseId - -runId - -analysisId - -analysisName - -domainId - -startDay - -endDay - -isBinary - -missingMeansZero -
-Eunomia - -1 - -1 - -DemographicsGender - -Demographics - -NA - -NA - -Y - -NA -
-Eunomia - -1 - -901 - -CharlsonIndex - -Condition - -NA - -0 - -N - -Y -
-Eunomia - -1 - -2 - -DemographicsAge - -Demographics - -NA - -NA - -N - -Y -
-
-
-

Dechallenge Rechallenge -

-

To run a ‘Dechallenge Rechallenge’ analysis you need to create a -setting object using createDechallengeRechallengeSettings. -This requires specifying:

-
    -
  • one or more targetIds (these must be pre-generated in a cohort -table)
  • -
  • one or more outcomeIds (these must be pre-generated in a cohort -table)
  • -
  • dechallengeStopInterval
  • -
  • dechallengeEvaluationWindow
  • -
-

Using the Eunomia data, where we previously generated four cohorts, we can use cohort ids 1, 2 and 4 as the targetIds and cohort id 3 as the outcomeIds:

-
-exampleTargetIds <- c(1, 2, 4)
-exampleOutcomeIds <- 3
-

If we want to run the dechallenge-rechallenge analysis for all our target cohorts and our outcome cohort with a 30-day dechallengeStopInterval and a 31-day dechallengeEvaluationWindow:

-
-exampleDechallengeRechallengeSettings <- createDechallengeRechallengeSettings(
-  targetIds = exampleTargetIds,
-  outcomeIds = exampleOutcomeIds,
-  dechallengeStopInterval = 30,
-  dechallengeEvaluationWindow = 31
-)
-

We can then run the analysis on the Eunomia data using -computeDechallengeRechallengeAnalyses and the settings -previously specified:

-
-dc <- computeDechallengeRechallengeAnalyses(
-  connectionDetails = connectionDetails,
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  dechallengeRechallengeSettings = exampleDechallengeRechallengeSettings,
-  databaseId = "Eunomia"
-)
-
## Inputs checked
-
## Connecting using SQLite driver
-## Computing dechallenge rechallenge results
-
## 
-  |======================================================================| 100%
-
## Executing SQL took 0.00665 secs
-## Computing dechallenge rechallenge for 3 target ids and 1outcome ids took 0.0583 secs
-

If you would like to save the results you can use the function -saveDechallengeRechallengeAnalyses and this can then be -loaded using loadDechallengeRechallengeAnalyses.

-

The results are Andromeda objects that can be viewed using dplyr. There is just one table, named dechallengeRechallenge:

-
-dc$dechallengeRechallenge %>%
-  collect() %>%
-  kableExtra::kbl()
- - - - - - - - - - - - - - - - - - - - - - - - -
-databaseId - -dechallengeStopInterval - -dechallengeEvaluationWindow - -targetCohortDefinitionId - -outcomeCohortDefinitionId - -numExposureEras - -numPersonsExposed - -numCases - -dechallengeAttempt - -dechallengeFail - -dechallengeSuccess - -rechallengeAttempt - -rechallengeFail - -rechallengeSuccess - -pctDechallengeAttempt - -pctDechallengeSuccess - -pctDechallengeFail - -pctRechallengeAttempt - -pctRechallengeSuccess - -pctRechallengeFail -
-

Next it is possible to compute and extract the failed rechallenge cases:

-
-failed <- computeRechallengeFailCaseSeriesAnalyses(
-  connectionDetails = connectionDetails,
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  dechallengeRechallengeSettings = exampleDechallengeRechallengeSettings,
-  outcomeDatabaseSchema = "main",
-  outcomeTable = "cohort",
-  databaseId = "Eunomia"
-)
-
## Inputs checked
-
## Connecting using SQLite driver
-## Computing dechallenge rechallenge results
-
## 
-  |======================================================================| 100%
-
## Executing SQL took 0.0953 secs
-## Computing dechallenge failed case series for 3 target IDs and 1 outcome IDs took 0.141 secs
-

The results are Andromeda objects that can be viewed using dplyr. There is just one table, named rechallengeFailCaseSeries:

-
-failed$rechallengeFailCaseSeries %>%
-  collect() %>%
-  kableExtra::kbl()
- - - - - - - - - - - - - - - - - - - - - -
-databaseId - -dechallengeStopInterval - -dechallengeEvaluationWindow - -targetCohortDefinitionId - -outcomeCohortDefinitionId - -personKey - -subjectId - -dechallengeExposureNumber - -dechallengeExposureStartDateOffset - -dechallengeExposureEndDateOffset - -dechallengeOutcomeNumber - -dechallengeOutcomeStartDateOffset - -rechallengeExposureNumber - -rechallengeExposureStartDateOffset - -rechallengeExposureEndDateOffset - -rechallengeOutcomeNumber - -rechallengeOutcomeStartDateOffset -
-
-
-

Time to Event -

-

To run a ‘Time-to-event’ analysis you need to create a setting object -using createTimeToEventSettings. This requires -specifying:

-
    -
  • one or more targetIds (these must be pre-generated in a cohort -table)
  • -
  • one or more outcomeIds (these must be pre-generated in a cohort -table)
  • -
-
-exampleTimeToEventSettings <- createTimeToEventSettings(
-  targetIds = exampleTargetIds,
-  outcomeIds = exampleOutcomeIds
-)
-

We can then run the analysis on the Eunomia data using -computeTimeToEventAnalyses and the settings previously -specified:

-
-tte <- computeTimeToEventAnalyses(
-  connectionDetails = connectionDetails,
-  cdmDatabaseSchema = "main",
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  timeToEventSettings = exampleTimeToEventSettings,
-  databaseId = "Eunomia"
-)
-
## Connecting using SQLite driver
-## Uploading #cohort_settings
-## 
-## Inserting data took 0.00319 secs
-## Computing time to event results
-
## 
-  |======================================================================| 100%
-
## Executing SQL took 0.0416 secs
-## Computing time-to-event for T-O pairs took 0.146 secs
-

If you would like to save the results, you can use the function saveTimeToEventAnalyses; the saved results can later be loaded using loadTimeToEventAnalyses.

-
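A minimal sketch of saving and re-loading the time-to-event result (the argument names result and fileName are assumptions based on the function names mentioned above, not confirmed against the package documentation):

-
-saveTimeToEventAnalyses(
-  result = tte,
-  fileName = file.path(tempdir(), "tte.zip")
-)
-
-tte <- loadTimeToEventAnalyses(
-  fileName = file.path(tempdir(), "tte.zip")
-)
-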

The results are Andromeda objects that we can view using dplyr. There is just one table, named timeToEvent:

-
-tte$timeToEvent %>%
-  collect() %>%
-  top_n(10) %>%
-  kableExtra::kbl()
-
## Selecting by timeScale
-databaseId | targetCohortDefinitionId | outcomeCohortDefinitionId | outcomeType | targetOutcomeType | timeToEvent | numEvents | timeScale
-Eunomia | 1 | 3 | first | After last target end | 30 | 109 | per 30-day
-Eunomia | 1 | 3 | first | After last target end | 60 | 114 | per 30-day
-Eunomia | 1 | 3 | first | After last target end | 90 | 132 | per 30-day
-Eunomia | 2 | 3 | first | After last target end | 30 | 46 | per 30-day
-Eunomia | 2 | 3 | first | After last target end | 60 | 39 | per 30-day
-Eunomia | 2 | 3 | first | After last target end | 90 | 39 | per 30-day
-Eunomia | 4 | 3 | first | After last target end | 30 | 155 | per 30-day
-Eunomia | 4 | 3 | first | After last target end | 60 | 153 | per 30-day
-Eunomia | 4 | 3 | first | After last target end | 90 | 171 | per 30-day
-Eunomia | 1 | 3 | first | After last target end | 365 | 355 | per 365-day
-Eunomia | 2 | 3 | first | After last target end | 365 | 124 | per 365-day
-Eunomia | 4 | 3 | first | After last target end | 365 | 479 | per 365-day
-
-
-
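As a small illustrative follow-up query (using only the column names shown in the table above), the Andromeda table can also be filtered to a single target cohort and time scale before collecting it into R memory:

-
-tte$timeToEvent %>%
-  dplyr::filter(
-    targetCohortDefinitionId == 1,
-    timeScale == "per 30-day"
-  ) %>%
-  dplyr::collect()
-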

Run Multiple -

-

If you want to run multiple analyses (of the three types previously shown), you can use createCharacterizationSettings. You need to input a list of each type of settings (or NULL if you do not want to run that type of analysis). To run all the analyses previously shown in one function call:

-
-characterizationSettings <- createCharacterizationSettings(
-  timeToEventSettings = list(
-    exampleTimeToEventSettings
-  ),
-  dechallengeRechallengeSettings = list(
-    exampleDechallengeRechallengeSettings
-  ),
-  aggregateCovariateSettings = list(
-    exampleAggregateCovariateSettings
-  )
-)
-
-# save the settings using
-saveCharacterizationSettings(
-  settings = characterizationSettings,
-  saveDirectory = file.path(tempdir(), "saveSettings")
-)
-
-# the settings can be loaded
-characterizationSettings <- loadCharacterizationSettings(
-  saveDirectory = file.path(tempdir(), "saveSettings")
-)
-
-runCharacterizationAnalyses(
-  connectionDetails = connectionDetails,
-  cdmDatabaseSchema = "main",
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  outcomeDatabaseSchema = "main",
-  outcomeTable = "cohort",
-  characterizationSettings = characterizationSettings,
-  saveDirectory = file.path(tempdir(), "example"),
-  tablePrefix = "c_",
-  databaseId = "1"
-)
-

This will create a SQLite database with all the analyses saved into the saveDirectory. You can then export the results as csv files using:

-
-connectionDetailsT <- DatabaseConnector::createConnectionDetails(
-  dbms = "sqlite",
-  server = file.path(tempdir(), "example", "sqliteCharacterization", "sqlite.sqlite")
-)
-
-exportDatabaseToCsv(
-  connectionDetails = connectionDetailsT,
-  resultSchema = "main",
-  targetDialect = "sqlite",
-  tablePrefix = "c_",
-  saveDirectory = file.path(tempdir(), "csv")
-)
-
-
-
- - - -
- - - - -
- - - - - - - - diff --git a/docs/articles/UsingCharacterizationPackage_files/kePrint-0.0.1/kePrint.js b/docs/articles/UsingCharacterizationPackage_files/kePrint-0.0.1/kePrint.js deleted file mode 100644 index e6fbbfc..0000000 --- a/docs/articles/UsingCharacterizationPackage_files/kePrint-0.0.1/kePrint.js +++ /dev/null @@ -1,8 +0,0 @@ -$(document).ready(function(){ - if (typeof $('[data-toggle="tooltip"]').tooltip === 'function') { - $('[data-toggle="tooltip"]').tooltip(); - } - if ($('[data-toggle="popover"]').popover === 'function') { - $('[data-toggle="popover"]').popover(); - } -}); diff --git a/docs/articles/UsingCharacterizationPackage_files/lightable-0.0.1/lightable.css b/docs/articles/UsingCharacterizationPackage_files/lightable-0.0.1/lightable.css deleted file mode 100644 index 3be3be9..0000000 --- a/docs/articles/UsingCharacterizationPackage_files/lightable-0.0.1/lightable.css +++ /dev/null @@ -1,272 +0,0 @@ -/*! - * lightable v0.0.1 - * Copyright 2020 Hao Zhu - * Licensed under MIT (https://github.com/haozhu233/kableExtra/blob/master/LICENSE) - */ - -.lightable-minimal { - border-collapse: separate; - border-spacing: 16px 1px; - width: 100%; - margin-bottom: 10px; -} - -.lightable-minimal td { - margin-left: 5px; - margin-right: 5px; -} - -.lightable-minimal th { - margin-left: 5px; - margin-right: 5px; -} - -.lightable-minimal thead tr:last-child th { - border-bottom: 2px solid #00000050; - empty-cells: hide; - -} - -.lightable-minimal tbody tr:first-child td { - padding-top: 0.5em; -} - -.lightable-minimal.lightable-hover tbody tr:hover { - background-color: #f5f5f5; -} - -.lightable-minimal.lightable-striped tbody tr:nth-child(even) { - background-color: #f5f5f5; -} - -.lightable-classic { - border-top: 0.16em solid #111111; - border-bottom: 0.16em solid #111111; - width: 100%; - margin-bottom: 10px; - margin: 10px 5px; -} - -.lightable-classic tfoot tr td { - border: 0; -} - -.lightable-classic tfoot tr:first-child td { - border-top: 0.14em solid #111111; -} - -.lightable-classic caption { - color: #222222; -} - -.lightable-classic td { - padding-left: 5px; - padding-right: 5px; - color: #222222; -} - -.lightable-classic th { - padding-left: 5px; - padding-right: 5px; - font-weight: normal; - color: #222222; -} - -.lightable-classic thead tr:last-child th { - border-bottom: 0.10em solid #111111; -} - -.lightable-classic.lightable-hover tbody tr:hover { - background-color: #F9EEC1; -} - -.lightable-classic.lightable-striped tbody tr:nth-child(even) { - background-color: #f5f5f5; -} - -.lightable-classic-2 { - border-top: 3px double #111111; - border-bottom: 3px double #111111; - width: 100%; - margin-bottom: 10px; -} - -.lightable-classic-2 tfoot tr td { - border: 0; -} - -.lightable-classic-2 tfoot tr:first-child td { - border-top: 3px double #111111; -} - -.lightable-classic-2 caption { - color: #222222; -} - -.lightable-classic-2 td { - padding-left: 5px; - padding-right: 5px; - color: #222222; -} - -.lightable-classic-2 th { - padding-left: 5px; - padding-right: 5px; - font-weight: normal; - color: #222222; -} - -.lightable-classic-2 tbody tr:last-child td { - border-bottom: 3px double #111111; -} - -.lightable-classic-2 thead tr:last-child th { - border-bottom: 1px solid #111111; -} - -.lightable-classic-2.lightable-hover tbody tr:hover { - background-color: #F9EEC1; -} - -.lightable-classic-2.lightable-striped tbody tr:nth-child(even) { - background-color: #f5f5f5; -} - -.lightable-material { - min-width: 100%; - white-space: nowrap; - table-layout: fixed; - 
font-family: Roboto, sans-serif; - border: 1px solid #EEE; - border-collapse: collapse; - margin-bottom: 10px; -} - -.lightable-material tfoot tr td { - border: 0; -} - -.lightable-material tfoot tr:first-child td { - border-top: 1px solid #EEE; -} - -.lightable-material th { - height: 56px; - padding-left: 16px; - padding-right: 16px; -} - -.lightable-material td { - height: 52px; - padding-left: 16px; - padding-right: 16px; - border-top: 1px solid #eeeeee; -} - -.lightable-material.lightable-hover tbody tr:hover { - background-color: #f5f5f5; -} - -.lightable-material.lightable-striped tbody tr:nth-child(even) { - background-color: #f5f5f5; -} - -.lightable-material.lightable-striped tbody td { - border: 0; -} - -.lightable-material.lightable-striped thead tr:last-child th { - border-bottom: 1px solid #ddd; -} - -.lightable-material-dark { - min-width: 100%; - white-space: nowrap; - table-layout: fixed; - font-family: Roboto, sans-serif; - border: 1px solid #FFFFFF12; - border-collapse: collapse; - margin-bottom: 10px; - background-color: #363640; -} - -.lightable-material-dark tfoot tr td { - border: 0; -} - -.lightable-material-dark tfoot tr:first-child td { - border-top: 1px solid #FFFFFF12; -} - -.lightable-material-dark th { - height: 56px; - padding-left: 16px; - padding-right: 16px; - color: #FFFFFF60; -} - -.lightable-material-dark td { - height: 52px; - padding-left: 16px; - padding-right: 16px; - color: #FFFFFF; - border-top: 1px solid #FFFFFF12; -} - -.lightable-material-dark.lightable-hover tbody tr:hover { - background-color: #FFFFFF12; -} - -.lightable-material-dark.lightable-striped tbody tr:nth-child(even) { - background-color: #FFFFFF12; -} - -.lightable-material-dark.lightable-striped tbody td { - border: 0; -} - -.lightable-material-dark.lightable-striped thead tr:last-child th { - border-bottom: 1px solid #FFFFFF12; -} - -.lightable-paper { - width: 100%; - margin-bottom: 10px; - color: #444; -} - -.lightable-paper tfoot tr td { - border: 0; -} - -.lightable-paper tfoot tr:first-child td { - border-top: 1px solid #00000020; -} - -.lightable-paper thead tr:last-child th { - color: #666; - vertical-align: bottom; - border-bottom: 1px solid #00000020; - line-height: 1.15em; - padding: 10px 5px; -} - -.lightable-paper td { - vertical-align: middle; - border-bottom: 1px solid #00000010; - line-height: 1.15em; - padding: 7px 5px; -} - -.lightable-paper.lightable-hover tbody tr:hover { - background-color: #F9EEC1; -} - -.lightable-paper.lightable-striped tbody tr:nth-child(even) { - background-color: #00000008; -} - -.lightable-paper.lightable-striped tbody td { - border: 0; -} - diff --git a/docs/articles/UsingPackage.html b/docs/articles/UsingPackage.html deleted file mode 100644 index ecc3beb..0000000 --- a/docs/articles/UsingPackage.html +++ /dev/null @@ -1,511 +0,0 @@ - - - - - - - -Using Characterization Package • Characterization - - - - - - - - - - - - -
-
- - - - -
-
- - - - - -
-

Introduction -

-

This vignette describes how you can use the Characterization package -for various descriptive studies using OMOP CDM data. The -Characterization package currently contains three different types of -analyses:

-
    -
  • Aggregate Covariates: this returns the mean feature value for a set -of features specified by the user for i) the Target cohort population, -ii) the Outcome cohort population, iii) the Target population patients -who had the outcome during some user specified time-at-risk and iv) the -Target population patients who did not have the outcome during some user -specified time-at-risk.
  • -
  • DechallengeRechallenge: this is mainly aimed at investigating whether a drug and event are causally related by seeing whether the drug is stopped close in time to the event occurrence (dechallenge) and then whether the drug is restarted (a rechallenge occurs) and, if so, whether the event starts again (a failed rechallenge). In this analysis, the Target cohorts are the drug users of interest and the Outcome cohorts are the medical events you wish to see whether the drug may cause. The user must also specify how close in time a drug must be stopped after the outcome to be considered a dechallenge and how close in time an Outcome must occur after restarting the drug to be considered a failed rechallenge.
  • -
  • Time-to-event: this returns descriptive results showing the timing -between the target cohort and outcome. This can help identify whether -the outcome often precedes the target cohort or whether it generally -comes after.
  • -
-
-
-

Setup -

-

In this vignette we will show working examples using the -Eunomia R package that contains simulated data. Run the -following code to install the Eunomia R package:

-
-install.packages("remotes")
-remotes::install_github("ohdsi/Eunomia")
-

Eunomia can be used to create a temporary SQLite database with the simulated data. The function getEunomiaConnectionDetails creates a SQLite connection to a temporary location. The function createCohorts then populates the temporary SQLite database with the simulated data, ready to be used.

-
-connectionDetails <- Eunomia::getEunomiaConnectionDetails()
-Eunomia::createCohorts(connectionDetails = connectionDetails)
-
## Connecting using SQLite driver
-
## Creating cohort: Celecoxib
-## 
-  |                                                                            
-  |                                                                      |   0%
-  |                                                                            
-  |======================================================================| 100%
-
## Executing SQL took 0.0129 secs
-
## Creating cohort: Diclofenac
-## 
-  |                                                                            
-  |                                                                      |   0%
-  |                                                                            
-  |======================================================================| 100%
-
## Executing SQL took 0.00521 secs
-
## Creating cohort: GiBleed
-## 
-  |                                                                            
-  |                                                                      |   0%
-  |                                                                            
-  |======================================================================| 100%
-
## Executing SQL took 0.00938 secs
-
## Creating cohort: NSAIDs
-## 
-  |                                                                            
-  |                                                                      |   0%
-  |                                                                            
-  |======================================================================| 100%
-
## Executing SQL took 0.05 secs
-
## Cohorts created in table main.cohort
-
##   cohortId       name
-## 1        1  Celecoxib
-## 2        2 Diclofenac
-## 3        3    GiBleed
-## 4        4     NSAIDs
-##                                                                                        description
-## 1    A simplified cohort definition for new users of celecoxib, designed specifically for Eunomia.
-## 2    A simplified cohort definition for new users ofdiclofenac, designed specifically for Eunomia.
-## 3 A simplified cohort definition for gastrointestinal bleeding, designed specifically for Eunomia.
-## 4       A simplified cohort definition for new users of NSAIDs, designed specifically for Eunomia.
-##   count
-## 1  1844
-## 2   850
-## 3   479
-## 4  2694
-

We also need to have the Characterization package installed and loaded:

-
-remotes::install_github("ohdsi/FeatureExtraction")
-remotes::install_github("ohdsi/Characterization", ref = "new_approach")
-
-library(Characterization)
-library(dplyr)
-
## 
-## Attaching package: 'dplyr'
-
## The following objects are masked from 'package:stats':
-## 
-##     filter, lag
-
## The following objects are masked from 'package:base':
-## 
-##     intersect, setdiff, setequal, union
-
-
-

Examples -

-
-

Aggregate Covariates -

-

To run an ‘Aggregate Covariate’ analysis you need to create a setting -object using createAggregateCovariateSettings. This -requires specifying:

-
    -
  • one or more targetIds (these must be pre-generated in a cohort -table)
  • -
  • one or more outcomeIds (these must be pre-generated in a cohort -table)
  • -
  • the covariate settings using -FeatureExtraction::createCovariateSettings or by creating -your own custom feature extraction code.
  • -
  • the time-at-risk settings
  • -
  • riskWindowStart
  • -
  • startAnchor
  • -
  • riskWindowEnd
  • -
  • endAnchor
  • -
-

Using the Eunomia data where we previously generated four cohorts, we can use cohort ids 1, 2 and 4 as the targetIds and cohort id 3 as the outcomeIds:

-
-exampleTargetIds <- c(1, 2, 4)
-exampleOutcomeIds <- 3
-

If we want to get information on the sex, age at index and Charlson -Comorbidity index we can create the settings using -FeatureExtraction::createCovariateSettings:

-
-exampleCovariateSettings <- FeatureExtraction::createCovariateSettings(
-  useDemographicsGender = T,
-  useDemographicsAge = T,
-  useCharlsonIndex = T
-)
-

There is an additional covariate setting required that is calculated for the cases (patients in the target cohort who have the outcome during the time-at-risk). This is called caseCovariateSettings and should be created using the createDuringCovariateSettings function. The user can pick conditions, drugs, measurements, procedures and observations. In this example, we just include condition era groups based on the vocabulary hierarchy. We also need to specify two related variables: casePreTargetDuration, which is the number of days before target index to extract features for the cases (answers what happens shortly before the target index), and casePostOutcomeDuration, which is the number of days after the outcome date to extract features for the cases (answers what happens after the outcome). The case covariates are also extracted between target index and outcome (answers the question of what happens during target exposure).

-
-caseCovariateSettings <- Characterization::createDuringCovariateSettings(
-  useConditionGroupEraDuring = T
-)
-

If we want to create the aggregate features for all our target cohorts, our outcome cohort and each target cohort restricted to those with a record of the outcome 1 day after target cohort start date until 365 days after target cohort end date, with an outcome washout of 9999 (meaning we only include outcomes that are the first occurrence in the past 9999 days), and only include targets or outcomes where the patient was observed for 365 days or more prior, we can run:

-
-exampleAggregateCovariateSettings <- createAggregateCovariateSettings(
-  targetIds = exampleTargetIds,
-  outcomeIds = exampleOutcomeIds,
-  riskWindowStart = 1, startAnchor = "cohort start",
-  riskWindowEnd = 365, endAnchor = "cohort start",
-  outcomeWashoutDays = 9999,
-  minPriorObservation = 365,
-  covariateSettings = exampleCovariateSettings,
-  caseCovariateSettings = caseCovariateSettings,
-  casePreTargetDuration = 90,
-  casePostOutcomeDuration = 90
-)
-

Next we need to use the exampleAggregateCovariateSettings as the settings for computeAggregateCovariateAnalyses. We need to use the Eunomia connectionDetails; in Eunomia the OMOP CDM data and cohort table are in the 'main' schema, and the cohort table name is 'cohort'. The following code will apply the aggregate covariates analysis using the previously specified settings on the simulated Eunomia data. We can specify minCharacterizationMean to exclude covariates with mean values below 0.01, and we must specify the outputDirectory where the csv results will be written.

-
-runCharacterizationAnalyses(
-  connectionDetails = connectionDetails,
-  cdmDatabaseSchema = "main",
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  outcomeDatabaseSchema = "main",
-  outcomeTable = "cohort",
-  characterizationSettings = createCharacterizationSettings(
-    aggregateCovariateSettings = exampleAggregateCovariateSettings
-  ),
-  databaseId = "Eunomia",
-  runId = 1,
-  minCharacterizationMean = 0.01,
-  outputDirectory = file.path(getwd(), "example_char", "results"),
-  executionPath = file.path(getwd(), "example_char", "execution"),
-  minCellCount = 10,
-  incremental = F,
-  threads = 1
-)
-

You can then see the results in the location -file.path(getwd(), 'example_char', 'results') where you -will find csv files.

-
-
-

Dechallenge Rechallenge -

-

To run a ‘Dechallenge Rechallenge’ analysis you need to create a -setting object using createDechallengeRechallengeSettings. -This requires specifying:

-
    -
  • one or more targetIds (these must be pre-generated in a cohort -table)
  • -
  • one or more outcomeIds (these must be pre-generated in a cohort -table)
  • -
  • dechallengeStopInterval
  • -
  • dechallengeEvaluationWindow
  • -
-

Using the Eunomia data where we previously generated four cohorts, we can use cohort ids 1, 2 and 4 as the targetIds and cohort id 3 as the outcomeIds:

-
-exampleTargetIds <- c(1, 2, 4)
-exampleOutcomeIds <- 3
-

If we want to create the dechallenge rechallenge analysis for all our target cohorts and our outcome cohort with a 30-day dechallengeStopInterval and a 31-day dechallengeEvaluationWindow:

-
-exampleDechallengeRechallengeSettings <- createDechallengeRechallengeSettings(
-  targetIds = exampleTargetIds,
-  outcomeIds = exampleOutcomeIds,
-  dechallengeStopInterval = 30,
-  dechallengeEvaluationWindow = 31
-)
-

We can then run the analysis on the Eunomia data using computeDechallengeRechallengeAnalyses and the settings previously specified, with minCellCount censoring values below the specified threshold:

-
-dc <- computeDechallengeRechallengeAnalyses(
-  connectionDetails = connectionDetails,
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  settings = exampleDechallengeRechallengeSettings,
-  databaseId = "Eunomia",
-  outputFolder = file.path(getwd(), "example_char", "results"),
-  minCellCount = 5
-)
-

Next, it is possible to compute the failed rechallenge cases:

-
-failed <- computeRechallengeFailCaseSeriesAnalyses(
-  connectionDetails = connectionDetails,
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  settings = exampleDechallengeRechallengeSettings,
-  outcomeDatabaseSchema = "main",
-  outcomeTable = "cohort",
-  databaseId = "Eunomia",
-  outputFolder = file.path(getwd(), "example_char", "results"),
-  minCellCount = 5
-)
-
-
-

Time to Event -

-

To run a ‘Time-to-event’ analysis you need to create a setting object -using createTimeToEventSettings. This requires -specifying:

-
    -
  • one or more targetIds (these must be pre-generated in a cohort -table)
  • -
  • one or more outcomeIds (these must be pre-generated in a cohort -table)
  • -
-
-exampleTimeToEventSettings <- createTimeToEventSettings(
-  targetIds = exampleTargetIds,
-  outcomeIds = exampleOutcomeIds
-)
-

We can then run the analysis on the Eunomia data using -computeTimeToEventAnalyses and the settings previously -specified:

-
-tte <- computeTimeToEventAnalyses(
-  connectionDetails = connectionDetails,
-  cdmDatabaseSchema = "main",
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  settings = exampleTimeToEventSettings,
-  databaseId = "Eunomia",
-  outputFolder = file.path(getwd(), "example_char", "results"),
-  minCellCount = 5
-)
-
-
-

Run Multiple -

-

If you want to run multiple analyses (of the three types previously shown), you can use createCharacterizationSettings. You need to input a list of each type of settings (or NULL if you do not want to run that type of analysis). To run all the analyses previously shown in one function call:

-
-characterizationSettings <- createCharacterizationSettings(
-  timeToEventSettings = list(
-    exampleTimeToEventSettings
-  ),
-  dechallengeRechallengeSettings = list(
-    exampleDechallengeRechallengeSettings
-  ),
-  aggregateCovariateSettings = exampleAggregateCovariateSettings
-)
-
-# save the settings using
-saveCharacterizationSettings(
-  settings = characterizationSettings,
-  saveDirectory = file.path(tempdir(), "saveSettings")
-)
-
-# the settings can be loaded
-characterizationSettings <- loadCharacterizationSettings(
-  saveDirectory = file.path(tempdir(), "saveSettings")
-)
-
-runCharacterizationAnalyses(
-  connectionDetails = connectionDetails,
-  cdmDatabaseSchema = "main",
-  targetDatabaseSchema = "main",
-  targetTable = "cohort",
-  outcomeDatabaseSchema = "main",
-  outcomeTable = "cohort",
-  characterizationSettings = characterizationSettings,
-  outputDirectory = file.path(tempdir(), "example", "results"),
-  executionPath = file.path(tempdir(), "example", "execution"),
-  csvFilePrefix = "c_",
-  databaseId = "1",
-  incremental = F,
-  minCharacterizationMean = 0.01,
-  minCellCount = 5
-)
-

This will create csv files with the results in the outputDirectory. You can run the following code to view the results in a shiny app:

-
-viewCharacterization(
-  resultFolder = file.path(tempdir(), "example", "results"),
-  cohortDefinitionSet = NULL
-)
-
-
-
- - - -
- - - - -
- - - - - - - - diff --git a/docs/articles/UsingPackage_files/kePrint-0.0.1/kePrint.js b/docs/articles/UsingPackage_files/kePrint-0.0.1/kePrint.js deleted file mode 100644 index e6fbbfc..0000000 --- a/docs/articles/UsingPackage_files/kePrint-0.0.1/kePrint.js +++ /dev/null @@ -1,8 +0,0 @@ -$(document).ready(function(){ - if (typeof $('[data-toggle="tooltip"]').tooltip === 'function') { - $('[data-toggle="tooltip"]').tooltip(); - } - if ($('[data-toggle="popover"]').popover === 'function') { - $('[data-toggle="popover"]').popover(); - } -}); diff --git a/docs/articles/UsingPackage_files/lightable-0.0.1/lightable.css b/docs/articles/UsingPackage_files/lightable-0.0.1/lightable.css deleted file mode 100644 index 3be3be9..0000000 --- a/docs/articles/UsingPackage_files/lightable-0.0.1/lightable.css +++ /dev/null @@ -1,272 +0,0 @@ -/*! - * lightable v0.0.1 - * Copyright 2020 Hao Zhu - * Licensed under MIT (https://github.com/haozhu233/kableExtra/blob/master/LICENSE) - */ - -.lightable-minimal { - border-collapse: separate; - border-spacing: 16px 1px; - width: 100%; - margin-bottom: 10px; -} - -.lightable-minimal td { - margin-left: 5px; - margin-right: 5px; -} - -.lightable-minimal th { - margin-left: 5px; - margin-right: 5px; -} - -.lightable-minimal thead tr:last-child th { - border-bottom: 2px solid #00000050; - empty-cells: hide; - -} - -.lightable-minimal tbody tr:first-child td { - padding-top: 0.5em; -} - -.lightable-minimal.lightable-hover tbody tr:hover { - background-color: #f5f5f5; -} - -.lightable-minimal.lightable-striped tbody tr:nth-child(even) { - background-color: #f5f5f5; -} - -.lightable-classic { - border-top: 0.16em solid #111111; - border-bottom: 0.16em solid #111111; - width: 100%; - margin-bottom: 10px; - margin: 10px 5px; -} - -.lightable-classic tfoot tr td { - border: 0; -} - -.lightable-classic tfoot tr:first-child td { - border-top: 0.14em solid #111111; -} - -.lightable-classic caption { - color: #222222; -} - -.lightable-classic td { - padding-left: 5px; - padding-right: 5px; - color: #222222; -} - -.lightable-classic th { - padding-left: 5px; - padding-right: 5px; - font-weight: normal; - color: #222222; -} - -.lightable-classic thead tr:last-child th { - border-bottom: 0.10em solid #111111; -} - -.lightable-classic.lightable-hover tbody tr:hover { - background-color: #F9EEC1; -} - -.lightable-classic.lightable-striped tbody tr:nth-child(even) { - background-color: #f5f5f5; -} - -.lightable-classic-2 { - border-top: 3px double #111111; - border-bottom: 3px double #111111; - width: 100%; - margin-bottom: 10px; -} - -.lightable-classic-2 tfoot tr td { - border: 0; -} - -.lightable-classic-2 tfoot tr:first-child td { - border-top: 3px double #111111; -} - -.lightable-classic-2 caption { - color: #222222; -} - -.lightable-classic-2 td { - padding-left: 5px; - padding-right: 5px; - color: #222222; -} - -.lightable-classic-2 th { - padding-left: 5px; - padding-right: 5px; - font-weight: normal; - color: #222222; -} - -.lightable-classic-2 tbody tr:last-child td { - border-bottom: 3px double #111111; -} - -.lightable-classic-2 thead tr:last-child th { - border-bottom: 1px solid #111111; -} - -.lightable-classic-2.lightable-hover tbody tr:hover { - background-color: #F9EEC1; -} - -.lightable-classic-2.lightable-striped tbody tr:nth-child(even) { - background-color: #f5f5f5; -} - -.lightable-material { - min-width: 100%; - white-space: nowrap; - table-layout: fixed; - font-family: Roboto, sans-serif; - border: 1px solid #EEE; - border-collapse: collapse; - margin-bottom: 
10px; -} - -.lightable-material tfoot tr td { - border: 0; -} - -.lightable-material tfoot tr:first-child td { - border-top: 1px solid #EEE; -} - -.lightable-material th { - height: 56px; - padding-left: 16px; - padding-right: 16px; -} - -.lightable-material td { - height: 52px; - padding-left: 16px; - padding-right: 16px; - border-top: 1px solid #eeeeee; -} - -.lightable-material.lightable-hover tbody tr:hover { - background-color: #f5f5f5; -} - -.lightable-material.lightable-striped tbody tr:nth-child(even) { - background-color: #f5f5f5; -} - -.lightable-material.lightable-striped tbody td { - border: 0; -} - -.lightable-material.lightable-striped thead tr:last-child th { - border-bottom: 1px solid #ddd; -} - -.lightable-material-dark { - min-width: 100%; - white-space: nowrap; - table-layout: fixed; - font-family: Roboto, sans-serif; - border: 1px solid #FFFFFF12; - border-collapse: collapse; - margin-bottom: 10px; - background-color: #363640; -} - -.lightable-material-dark tfoot tr td { - border: 0; -} - -.lightable-material-dark tfoot tr:first-child td { - border-top: 1px solid #FFFFFF12; -} - -.lightable-material-dark th { - height: 56px; - padding-left: 16px; - padding-right: 16px; - color: #FFFFFF60; -} - -.lightable-material-dark td { - height: 52px; - padding-left: 16px; - padding-right: 16px; - color: #FFFFFF; - border-top: 1px solid #FFFFFF12; -} - -.lightable-material-dark.lightable-hover tbody tr:hover { - background-color: #FFFFFF12; -} - -.lightable-material-dark.lightable-striped tbody tr:nth-child(even) { - background-color: #FFFFFF12; -} - -.lightable-material-dark.lightable-striped tbody td { - border: 0; -} - -.lightable-material-dark.lightable-striped thead tr:last-child th { - border-bottom: 1px solid #FFFFFF12; -} - -.lightable-paper { - width: 100%; - margin-bottom: 10px; - color: #444; -} - -.lightable-paper tfoot tr td { - border: 0; -} - -.lightable-paper tfoot tr:first-child td { - border-top: 1px solid #00000020; -} - -.lightable-paper thead tr:last-child th { - color: #666; - vertical-align: bottom; - border-bottom: 1px solid #00000020; - line-height: 1.15em; - padding: 10px 5px; -} - -.lightable-paper td { - vertical-align: middle; - border-bottom: 1px solid #00000010; - line-height: 1.15em; - padding: 7px 5px; -} - -.lightable-paper.lightable-hover tbody tr:hover { - background-color: #F9EEC1; -} - -.lightable-paper.lightable-striped tbody tr:nth-child(even) { - background-color: #00000008; -} - -.lightable-paper.lightable-striped tbody td { - border: 0; -} - diff --git a/docs/articles/images/dcrc_dechal_summary.png b/docs/articles/images/dcrc_dechal_summary.png deleted file mode 100644 index 4a2a794..0000000 Binary files a/docs/articles/images/dcrc_dechal_summary.png and /dev/null differ diff --git a/docs/articles/images/dcrc_example_data.png b/docs/articles/images/dcrc_example_data.png deleted file mode 100644 index 996e99e..0000000 Binary files a/docs/articles/images/dcrc_example_data.png and /dev/null differ diff --git a/docs/articles/images/dcrc_rechal_summary.png b/docs/articles/images/dcrc_rechal_summary.png deleted file mode 100644 index c611721..0000000 Binary files a/docs/articles/images/dcrc_rechal_summary.png and /dev/null differ diff --git a/docs/articles/images/tte_example_data.png b/docs/articles/images/tte_example_data.png deleted file mode 100644 index 996e99e..0000000 Binary files a/docs/articles/images/tte_example_data.png and /dev/null differ diff --git a/docs/articles/images/tte_example_data_with_times.png 
b/docs/articles/images/tte_example_data_with_times.png deleted file mode 100644 index 0e4a84e..0000000 Binary files a/docs/articles/images/tte_example_data_with_times.png and /dev/null differ diff --git a/docs/articles/index.html b/docs/articles/index.html deleted file mode 100644 index 7e54a77..0000000 --- a/docs/articles/index.html +++ /dev/null @@ -1,108 +0,0 @@ - -Articles • Characterization - - -
-
- - - -
- -
- - -
- - - - - - - - diff --git a/docs/authors.html b/docs/authors.html deleted file mode 100644 index 9bb8b04..0000000 --- a/docs/authors.html +++ /dev/null @@ -1,133 +0,0 @@ - -Authors and Citation • Characterization - - -
-
- - - -
-
-
- - - -
  • -

    Jenna Reps. Author, maintainer. -

    -
  • -
  • -

    Patrick Ryan. Author. -

    -
  • -
  • -

    Chris Knoll. Author. -

    -
  • -
-
-
-

Citation

- Source: DESCRIPTION -
-
- - -

Reps J, Ryan P, Knoll C (2024). -Characterization: Characterizations of Cohorts. -https://ohdsi.github.io/Characterization, https://github.com/OHDSI/Characterization. -

-
@Manual{,
-  title = {Characterization: Characterizations of Cohorts},
-  author = {Jenna Reps and Patrick Ryan and Chris Knoll},
-  year = {2024},
-  note = {https://ohdsi.github.io/Characterization, https://github.com/OHDSI/Characterization},
-}
- -
- -
- - - -
- - - - - - - - diff --git a/docs/bootstrap-toc.css b/docs/bootstrap-toc.css deleted file mode 100644 index 5a85941..0000000 --- a/docs/bootstrap-toc.css +++ /dev/null @@ -1,60 +0,0 @@ -/*! - * Bootstrap Table of Contents v0.4.1 (http://afeld.github.io/bootstrap-toc/) - * Copyright 2015 Aidan Feldman - * Licensed under MIT (https://github.com/afeld/bootstrap-toc/blob/gh-pages/LICENSE.md) */ - -/* modified from https://github.com/twbs/bootstrap/blob/94b4076dd2efba9af71f0b18d4ee4b163aa9e0dd/docs/assets/css/src/docs.css#L548-L601 */ - -/* All levels of nav */ -nav[data-toggle='toc'] .nav > li > a { - display: block; - padding: 4px 20px; - font-size: 13px; - font-weight: 500; - color: #767676; -} -nav[data-toggle='toc'] .nav > li > a:hover, -nav[data-toggle='toc'] .nav > li > a:focus { - padding-left: 19px; - color: #563d7c; - text-decoration: none; - background-color: transparent; - border-left: 1px solid #563d7c; -} -nav[data-toggle='toc'] .nav > .active > a, -nav[data-toggle='toc'] .nav > .active:hover > a, -nav[data-toggle='toc'] .nav > .active:focus > a { - padding-left: 18px; - font-weight: bold; - color: #563d7c; - background-color: transparent; - border-left: 2px solid #563d7c; -} - -/* Nav: second level (shown on .active) */ -nav[data-toggle='toc'] .nav .nav { - display: none; /* Hide by default, but at >768px, show it */ - padding-bottom: 10px; -} -nav[data-toggle='toc'] .nav .nav > li > a { - padding-top: 1px; - padding-bottom: 1px; - padding-left: 30px; - font-size: 12px; - font-weight: normal; -} -nav[data-toggle='toc'] .nav .nav > li > a:hover, -nav[data-toggle='toc'] .nav .nav > li > a:focus { - padding-left: 29px; -} -nav[data-toggle='toc'] .nav .nav > .active > a, -nav[data-toggle='toc'] .nav .nav > .active:hover > a, -nav[data-toggle='toc'] .nav .nav > .active:focus > a { - padding-left: 28px; - font-weight: 500; -} - -/* from https://github.com/twbs/bootstrap/blob/e38f066d8c203c3e032da0ff23cd2d6098ee2dd6/docs/assets/css/src/docs.css#L631-L634 */ -nav[data-toggle='toc'] .nav > .active > ul { - display: block; -} diff --git a/docs/bootstrap-toc.js b/docs/bootstrap-toc.js deleted file mode 100644 index 1cdd573..0000000 --- a/docs/bootstrap-toc.js +++ /dev/null @@ -1,159 +0,0 @@ -/*! 
- * Bootstrap Table of Contents v0.4.1 (http://afeld.github.io/bootstrap-toc/) - * Copyright 2015 Aidan Feldman - * Licensed under MIT (https://github.com/afeld/bootstrap-toc/blob/gh-pages/LICENSE.md) */ -(function() { - 'use strict'; - - window.Toc = { - helpers: { - // return all matching elements in the set, or their descendants - findOrFilter: function($el, selector) { - // http://danielnouri.org/notes/2011/03/14/a-jquery-find-that-also-finds-the-root-element/ - // http://stackoverflow.com/a/12731439/358804 - var $descendants = $el.find(selector); - return $el.filter(selector).add($descendants).filter(':not([data-toc-skip])'); - }, - - generateUniqueIdBase: function(el) { - var text = $(el).text(); - var anchor = text.trim().toLowerCase().replace(/[^A-Za-z0-9]+/g, '-'); - return anchor || el.tagName.toLowerCase(); - }, - - generateUniqueId: function(el) { - var anchorBase = this.generateUniqueIdBase(el); - for (var i = 0; ; i++) { - var anchor = anchorBase; - if (i > 0) { - // add suffix - anchor += '-' + i; - } - // check if ID already exists - if (!document.getElementById(anchor)) { - return anchor; - } - } - }, - - generateAnchor: function(el) { - if (el.id) { - return el.id; - } else { - var anchor = this.generateUniqueId(el); - el.id = anchor; - return anchor; - } - }, - - createNavList: function() { - return $(''); - }, - - createChildNavList: function($parent) { - var $childList = this.createNavList(); - $parent.append($childList); - return $childList; - }, - - generateNavEl: function(anchor, text) { - var $a = $(''); - $a.attr('href', '#' + anchor); - $a.text(text); - var $li = $('
  • '); - $li.append($a); - return $li; - }, - - generateNavItem: function(headingEl) { - var anchor = this.generateAnchor(headingEl); - var $heading = $(headingEl); - var text = $heading.data('toc-text') || $heading.text(); - return this.generateNavEl(anchor, text); - }, - - // Find the first heading level (`

    `, then `

    `, etc.) that has more than one element. Defaults to 1 (for `

    `). - getTopLevel: function($scope) { - for (var i = 1; i <= 6; i++) { - var $headings = this.findOrFilter($scope, 'h' + i); - if ($headings.length > 1) { - return i; - } - } - - return 1; - }, - - // returns the elements for the top level, and the next below it - getHeadings: function($scope, topLevel) { - var topSelector = 'h' + topLevel; - - var secondaryLevel = topLevel + 1; - var secondarySelector = 'h' + secondaryLevel; - - return this.findOrFilter($scope, topSelector + ',' + secondarySelector); - }, - - getNavLevel: function(el) { - return parseInt(el.tagName.charAt(1), 10); - }, - - populateNav: function($topContext, topLevel, $headings) { - var $context = $topContext; - var $prevNav; - - var helpers = this; - $headings.each(function(i, el) { - var $newNav = helpers.generateNavItem(el); - var navLevel = helpers.getNavLevel(el); - - // determine the proper $context - if (navLevel === topLevel) { - // use top level - $context = $topContext; - } else if ($prevNav && $context === $topContext) { - // create a new level of the tree and switch to it - $context = helpers.createChildNavList($prevNav); - } // else use the current $context - - $context.append($newNav); - - $prevNav = $newNav; - }); - }, - - parseOps: function(arg) { - var opts; - if (arg.jquery) { - opts = { - $nav: arg - }; - } else { - opts = arg; - } - opts.$scope = opts.$scope || $(document.body); - return opts; - } - }, - - // accepts a jQuery object, or an options object - init: function(opts) { - opts = this.helpers.parseOps(opts); - - // ensure that the data attribute is in place for styling - opts.$nav.attr('data-toggle', 'toc'); - - var $topContext = this.helpers.createChildNavList(opts.$nav); - var topLevel = this.helpers.getTopLevel(opts.$scope); - var $headings = this.helpers.getHeadings(opts.$scope, topLevel); - this.helpers.populateNav($topContext, topLevel, $headings); - } - }; - - $(function() { - $('nav[data-toggle="toc"]').each(function(i, el) { - var $nav = $(el); - Toc.init($nav); - }); - }); -})(); diff --git a/docs/docsearch.css b/docs/docsearch.css deleted file mode 100644 index e5f1fe1..0000000 --- a/docs/docsearch.css +++ /dev/null @@ -1,148 +0,0 @@ -/* Docsearch -------------------------------------------------------------- */ -/* - Source: https://github.com/algolia/docsearch/ - License: MIT -*/ - -.algolia-autocomplete { - display: block; - -webkit-box-flex: 1; - -ms-flex: 1; - flex: 1 -} - -.algolia-autocomplete .ds-dropdown-menu { - width: 100%; - min-width: none; - max-width: none; - padding: .75rem 0; - background-color: #fff; - background-clip: padding-box; - border: 1px solid rgba(0, 0, 0, .1); - box-shadow: 0 .5rem 1rem rgba(0, 0, 0, .175); -} - -@media (min-width:768px) { - .algolia-autocomplete .ds-dropdown-menu { - width: 175% - } -} - -.algolia-autocomplete .ds-dropdown-menu::before { - display: none -} - -.algolia-autocomplete .ds-dropdown-menu [class^=ds-dataset-] { - padding: 0; - background-color: rgb(255,255,255); - border: 0; - max-height: 80vh; -} - -.algolia-autocomplete .ds-dropdown-menu .ds-suggestions { - margin-top: 0 -} - -.algolia-autocomplete .algolia-docsearch-suggestion { - padding: 0; - overflow: visible -} - -.algolia-autocomplete .algolia-docsearch-suggestion--category-header { - padding: .125rem 1rem; - margin-top: 0; - font-size: 1.3em; - font-weight: 500; - color: #00008B; - border-bottom: 0 -} - -.algolia-autocomplete .algolia-docsearch-suggestion--wrapper { - float: none; - padding-top: 0 -} - -.algolia-autocomplete 
.algolia-docsearch-suggestion--subcategory-column { - float: none; - width: auto; - padding: 0; - text-align: left -} - -.algolia-autocomplete .algolia-docsearch-suggestion--content { - float: none; - width: auto; - padding: 0 -} - -.algolia-autocomplete .algolia-docsearch-suggestion--content::before { - display: none -} - -.algolia-autocomplete .ds-suggestion:not(:first-child) .algolia-docsearch-suggestion--category-header { - padding-top: .75rem; - margin-top: .75rem; - border-top: 1px solid rgba(0, 0, 0, .1) -} - -.algolia-autocomplete .ds-suggestion .algolia-docsearch-suggestion--subcategory-column { - display: block; - padding: .1rem 1rem; - margin-bottom: 0.1; - font-size: 1.0em; - font-weight: 400 - /* display: none */ -} - -.algolia-autocomplete .algolia-docsearch-suggestion--title { - display: block; - padding: .25rem 1rem; - margin-bottom: 0; - font-size: 0.9em; - font-weight: 400 -} - -.algolia-autocomplete .algolia-docsearch-suggestion--text { - padding: 0 1rem .5rem; - margin-top: -.25rem; - font-size: 0.8em; - font-weight: 400; - line-height: 1.25 -} - -.algolia-autocomplete .algolia-docsearch-footer { - width: 110px; - height: 20px; - z-index: 3; - margin-top: 10.66667px; - float: right; - font-size: 0; - line-height: 0; -} - -.algolia-autocomplete .algolia-docsearch-footer--logo { - background-image: url("data:image/svg+xml;utf8,"); - background-repeat: no-repeat; - background-position: 50%; - background-size: 100%; - overflow: hidden; - text-indent: -9000px; - width: 100%; - height: 100%; - display: block; - transform: translate(-8px); -} - -.algolia-autocomplete .algolia-docsearch-suggestion--highlight { - color: #FF8C00; - background: rgba(232, 189, 54, 0.1) -} - - -.algolia-autocomplete .algolia-docsearch-suggestion--text .algolia-docsearch-suggestion--highlight { - box-shadow: inset 0 -2px 0 0 rgba(105, 105, 105, .5) -} - -.algolia-autocomplete .ds-suggestion.ds-cursor .algolia-docsearch-suggestion--content { - background-color: rgba(192, 192, 192, .15) -} diff --git a/docs/docsearch.js b/docs/docsearch.js deleted file mode 100644 index b35504c..0000000 --- a/docs/docsearch.js +++ /dev/null @@ -1,85 +0,0 @@ -$(function() { - - // register a handler to move the focus to the search bar - // upon pressing shift + "/" (i.e. "?") - $(document).on('keydown', function(e) { - if (e.shiftKey && e.keyCode == 191) { - e.preventDefault(); - $("#search-input").focus(); - } - }); - - $(document).ready(function() { - // do keyword highlighting - /* modified from https://jsfiddle.net/julmot/bL6bb5oo/ */ - var mark = function() { - - var referrer = document.URL ; - var paramKey = "q" ; - - if (referrer.indexOf("?") !== -1) { - var qs = referrer.substr(referrer.indexOf('?') + 1); - var qs_noanchor = qs.split('#')[0]; - var qsa = qs_noanchor.split('&'); - var keyword = ""; - - for (var i = 0; i < qsa.length; i++) { - var currentParam = qsa[i].split('='); - - if (currentParam.length !== 2) { - continue; - } - - if (currentParam[0] == paramKey) { - keyword = decodeURIComponent(currentParam[1].replace(/\+/g, "%20")); - } - } - - if (keyword !== "") { - $(".contents").unmark({ - done: function() { - $(".contents").mark(keyword); - } - }); - } - } - }; - - mark(); - }); -}); - -/* Search term highlighting ------------------------------*/ - -function matchedWords(hit) { - var words = []; - - var hierarchy = hit._highlightResult.hierarchy; - // loop to fetch from lvl0, lvl1, etc. 
- for (var idx in hierarchy) { - words = words.concat(hierarchy[idx].matchedWords); - } - - var content = hit._highlightResult.content; - if (content) { - words = words.concat(content.matchedWords); - } - - // return unique words - var words_uniq = [...new Set(words)]; - return words_uniq; -} - -function updateHitURL(hit) { - - var words = matchedWords(hit); - var url = ""; - - if (hit.anchor) { - url = hit.url_without_anchor + '?q=' + escape(words.join(" ")) + '#' + hit.anchor; - } else { - url = hit.url + '?q=' + escape(words.join(" ")); - } - - return url; -} diff --git a/docs/index.html b/docs/index.html deleted file mode 100644 index baa2c26..0000000 --- a/docs/index.html +++ /dev/null @@ -1,318 +0,0 @@ - - - - - - - -Characterizations of Cohorts • Characterization - - - - - - - - - - - - -
    -
    - - - - -
    -
    - -
    - -

    Build Status codecov.io

    -

    Characterization is part of HADES.

    -
    -
    -

    Introduction -

    -

    Characterization is an R package for performing characterization of a target and a comparator cohort.

    -
    -
    -

    Features -

    -
      -
    • Compute time to event
    • -
    • Compute dechallenge and rechallenge
    • -
    • Compute characterization of the target cohort with and without occurrence of an outcome cohort during some time at risk
    • -
    • Run multiple characterization analyses efficiently
    • -
    • Upload results to a database
    • -
    • Export results as csv files
    • -
    -
    -
    -

    Examples -

    -
    -
    -library(Eunomia)
    -library(Characterization)
    -
    -connectionDetails <- Eunomia::getEunomiaConnectionDetails()
    -Eunomia::createCohorts(connectionDetails = connectionDetails)
    -
    -targetIds <- c(1,2,4)
    -  outcomeIds <- c(3)
    -
    -  timeToEventSettings1 <- createTimeToEventSettings(
    -    targetIds = 1,
    -    outcomeIds = c(3,4)
    -  )
    -  timeToEventSettings2 <- createTimeToEventSettings(
    -    targetIds = 2,
    -    outcomeIds = c(3,4)
    -  )
    -
    -  dechallengeRechallengeSettings <- createDechallengeRechallengeSettings(
    -    targetIds = targetIds,
    -    outcomeIds = outcomeIds,
    -    dechallengeStopInterval = 30,
    -    dechallengeEvaluationWindow = 31
    -  )
    -
    -  aggregateCovariateSettings1 <- createAggregateCovariateSettings(
    -    targetIds = targetIds,
    -    outcomeIds = outcomeIds,
    -    riskWindowStart = 1,
    -    startAnchor = 'cohort start',
    -    riskWindowEnd = 365,
    -    endAnchor = 'cohort start',
    -    covariateSettings = FeatureExtraction::createCovariateSettings(
    -      useDemographicsGender = T,
    -      useDemographicsAge = T,
    -      useDemographicsRace = T
    -    )
    -  )
    -
    -  aggregateCovariateSettings2 <- createAggregateCovariateSettings(
    -    targetIds = targetIds,
    -    outcomeIds = outcomeIds,
    -    riskWindowStart = 1,
    -    startAnchor = 'cohort start',
    -    riskWindowEnd = 365,
    -    endAnchor = 'cohort start',
    -    covariateSettings = FeatureExtraction::createCovariateSettings(
    -      useConditionOccurrenceLongTerm = T
    -    )
    -  )
    -
    -  characterizationSettings <- createCharacterizationSettings(
    -    timeToEventSettings = list(
    -      timeToEventSettings1,
    -      timeToEventSettings2
    -      ),
    -    dechallengeRechallengeSettings = list(
    -      dechallengeRechallengeSettings
    -    ),
    -    aggregateCovariateSettings = list(
    -      aggregateCovariateSettings1,
    -      aggregateCovariateSettings2
    -      )
    -  )
    -  
    -runCharacterizationAnalyses(
    -  connectionDetails = connectionDetails,
    -  cdmDatabaseSchema = 'main',
    -  targetDatabaseSchema = 'main',
    -  targetTable = 'cohort',
    -  outcomeDatabaseSchema = 'main',
    -  outcomeTable = 'cohort',
    -  characterizationSettings = characterizationSettings,   
    -  outputDirectory = file.path(tempdir(), 'example', 'results'),
    -  executionPath = file.path(tempdir(), 'example', 'execution'),
    -  csvFilePrefix = 'c_',
    -  databaseId = 'Eunomia'
    -)
    -
    -
    -

    Technology -

    -

    Characterization is an R package.

    -
    -
    -

    System Requirements -

    -

    Requires R (version 4.0.0 or higher). Libraries used in Characterization require Java.

    -
    -
    -

    Installation -

    -
      -
    1. See the instructions here for configuring your R environment, including Java.

    2. -
    3. In R, use the following commands to download and install Characterization:

    4. -
    -
    -install.packages("remotes")
    -remotes::install_github("ohdsi/Characterization")
    -
    -
    -

    User Documentation -

    -

    Documentation can be found on the package website.

    -
    -
    -

    Support -

    - -
    -
    -

    Contributing -

    -

    Read here how you can contribute to this package.

    -
    -
    -

    License -

    -

    Characterization is licensed under Apache License 2.0

    -
    -
    -

    Development -

    -

    Characterization is being developed in R Studio.

    -
    - -
    - - -
    - - -
    - -
    -

    -

    Site built with pkgdown 2.0.7.

    -
    - -
    -
    - - - - - - - - diff --git a/docs/link.svg b/docs/link.svg deleted file mode 100644 index 88ad827..0000000 --- a/docs/link.svg +++ /dev/null @@ -1,12 +0,0 @@ - - - - - - diff --git a/docs/news/index.html b/docs/news/index.html deleted file mode 100644 index 5fb15d8..0000000 --- a/docs/news/index.html +++ /dev/null @@ -1,176 +0,0 @@ - -Changelog • Characterization - - -
    -
    - - - -
    -
    - - -
    - -
    • risk factors and case series now restrict to first outcome only.
    • -
    • added documentation describing the different analyses with examples.
    • -
    -
    - -
    • edited cohort_type in results to varchar(12)
    • -
    • fixed setting id being messed up by readr loading
    • -
    -
    - -
    • added tests for all HADES supported dbms
    • -
    • updated minCellCount censoring
    • -
    • fixed issues with incremental
    • -
    • made the code more modular to enable new characterizations to be added
    • -
    • added job optimization code to optimize the distribution of jobs
    • -
    • fixed tests and made minor bug updates
    • -
    -
    - -
    • Added parallelization in the aggregate covariates analysis
    • -
    • Extract all results as csv files
    • -
    • Removed sqlite result creation
    • -
    • now using ResultModelManager to upload results into database
    • -
    -
    - -
    • Removed DatabaseConnector from Remotes in DESCRIPTION. Fixes GitHub issue 38.
    • -
    • Added check to covariateSettings input in createAggregateCovariateSettings to error if temporal is T
    • -
    • adding during cohort covariate settings
    • -
    • added a case covariate settings inputs to aggregate covariates
    • -
    • added casePreTargetDuration and casePostTreatmentDuration integer inputs to aggregate covariates
    • -
    -
    - -
    • Added new outcomeWashoutDays parameter to createAggregateCovariateSettings to remove outcome occurrences that are continuations of a prior outcome occurrence
    • -
    • Changed the way cohort definition ids are created in createAggregateCovariateSettings to use hash of target id, outcome id and type. This lets users combine different studies into a single result database.
    • -
    • Added database migration capability and created new migrations for the recent updates.
    • -
    -
    - -

    Updated dependency to FeatureExtraction (>= 3.5.0) to support the minCharacterizationMean parameter.

    -
    -
    - -

    Changed export to csv approach to use batch export from SQLite (#41)

    -
    -
    - -

    Added extra error logging

    -
    -
    - -

    Optimized aggregate features to remove the 'T and not O' results (as these can be calculated from the T and the 'T and O' results) - requires the latest shiny app. Also optimized database extraction to csv.

    -
    -
    - -

    Fixing bug where first outcome was still all outcomes. Updating shiny app to work with old and new ShinyAppBuilder.

    -
    -
    - -

    Fixing bug where cohort_counts were not being saved in the database

    -
    -
    - -
    • added support for target cohorts with multiple cohort entries in the aggregate covariate analysis by restricting to the first cohort entry and ensuring the subject has a user-specified minPriorObservation days of observation in the database at first entry; analyses are performed on first outcomes and on any outcome recorded during TAR.
    • -
    • added shiny app
    • -
    -
    - -

    Initial version

    -
    -
    - - - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/pkgdown.css b/docs/pkgdown.css deleted file mode 100644 index 80ea5b8..0000000 --- a/docs/pkgdown.css +++ /dev/null @@ -1,384 +0,0 @@ -/* Sticky footer */ - -/** - * Basic idea: https://philipwalton.github.io/solved-by-flexbox/demos/sticky-footer/ - * Details: https://github.com/philipwalton/solved-by-flexbox/blob/master/assets/css/components/site.css - * - * .Site -> body > .container - * .Site-content -> body > .container .row - * .footer -> footer - * - * Key idea seems to be to ensure that .container and __all its parents__ - * have height set to 100% - * - */ - -html, body { - height: 100%; -} - -body { - position: relative; -} - -body > .container { - display: flex; - height: 100%; - flex-direction: column; -} - -body > .container .row { - flex: 1 0 auto; -} - -footer { - margin-top: 45px; - padding: 35px 0 36px; - border-top: 1px solid #e5e5e5; - color: #666; - display: flex; - flex-shrink: 0; -} -footer p { - margin-bottom: 0; -} -footer div { - flex: 1; -} -footer .pkgdown { - text-align: right; -} -footer p { - margin-bottom: 0; -} - -img.icon { - float: right; -} - -/* Ensure in-page images don't run outside their container */ -.contents img { - max-width: 100%; - height: auto; -} - -/* Fix bug in bootstrap (only seen in firefox) */ -summary { - display: list-item; -} - -/* Typographic tweaking ---------------------------------*/ - -.contents .page-header { - margin-top: calc(-60px + 1em); -} - -dd { - margin-left: 3em; -} - -/* Section anchors ---------------------------------*/ - -a.anchor { - display: none; - margin-left: 5px; - width: 20px; - height: 20px; - - background-image: url(./link.svg); - background-repeat: no-repeat; - background-size: 20px 20px; - background-position: center center; -} - -h1:hover .anchor, -h2:hover .anchor, -h3:hover .anchor, -h4:hover .anchor, -h5:hover .anchor, -h6:hover .anchor { - display: inline-block; -} - -/* Fixes for fixed navbar --------------------------*/ - -.contents h1, .contents h2, .contents h3, .contents h4 { - padding-top: 60px; - margin-top: -40px; -} - -/* Navbar submenu --------------------------*/ - -.dropdown-submenu { - position: relative; -} - -.dropdown-submenu>.dropdown-menu { - top: 0; - left: 100%; - margin-top: -6px; - margin-left: -1px; - border-radius: 0 6px 6px 6px; -} - -.dropdown-submenu:hover>.dropdown-menu { - display: block; -} - -.dropdown-submenu>a:after { - display: block; - content: " "; - float: right; - width: 0; - height: 0; - border-color: transparent; - border-style: solid; - border-width: 5px 0 5px 5px; - border-left-color: #cccccc; - margin-top: 5px; - margin-right: -10px; -} - -.dropdown-submenu:hover>a:after { - border-left-color: #ffffff; -} - -.dropdown-submenu.pull-left { - float: none; -} - -.dropdown-submenu.pull-left>.dropdown-menu { - left: -100%; - margin-left: 10px; - border-radius: 6px 0 6px 6px; -} - -/* Sidebar --------------------------*/ - -#pkgdown-sidebar { - margin-top: 30px; - position: -webkit-sticky; - position: sticky; - top: 70px; -} - -#pkgdown-sidebar h2 { - font-size: 1.5em; - margin-top: 1em; -} - -#pkgdown-sidebar h2:first-child { - margin-top: 0; -} - -#pkgdown-sidebar .list-unstyled li { - margin-bottom: 0.5em; -} - -/* bootstrap-toc tweaks ------------------------------------------------------*/ - -/* All levels of nav */ - -nav[data-toggle='toc'] .nav > li > a { - padding: 4px 20px 4px 6px; - font-size: 1.5rem; - font-weight: 400; - color: inherit; -} - -nav[data-toggle='toc'] .nav > li > a:hover, -nav[data-toggle='toc'] .nav > 
li > a:focus { - padding-left: 5px; - color: inherit; - border-left: 1px solid #878787; -} - -nav[data-toggle='toc'] .nav > .active > a, -nav[data-toggle='toc'] .nav > .active:hover > a, -nav[data-toggle='toc'] .nav > .active:focus > a { - padding-left: 5px; - font-size: 1.5rem; - font-weight: 400; - color: inherit; - border-left: 2px solid #878787; -} - -/* Nav: second level (shown on .active) */ - -nav[data-toggle='toc'] .nav .nav { - display: none; /* Hide by default, but at >768px, show it */ - padding-bottom: 10px; -} - -nav[data-toggle='toc'] .nav .nav > li > a { - padding-left: 16px; - font-size: 1.35rem; -} - -nav[data-toggle='toc'] .nav .nav > li > a:hover, -nav[data-toggle='toc'] .nav .nav > li > a:focus { - padding-left: 15px; -} - -nav[data-toggle='toc'] .nav .nav > .active > a, -nav[data-toggle='toc'] .nav .nav > .active:hover > a, -nav[data-toggle='toc'] .nav .nav > .active:focus > a { - padding-left: 15px; - font-weight: 500; - font-size: 1.35rem; -} - -/* orcid ------------------------------------------------------------------- */ - -.orcid { - font-size: 16px; - color: #A6CE39; - /* margins are required by official ORCID trademark and display guidelines */ - margin-left:4px; - margin-right:4px; - vertical-align: middle; -} - -/* Reference index & topics ----------------------------------------------- */ - -.ref-index th {font-weight: normal;} - -.ref-index td {vertical-align: top; min-width: 100px} -.ref-index .icon {width: 40px;} -.ref-index .alias {width: 40%;} -.ref-index-icons .alias {width: calc(40% - 40px);} -.ref-index .title {width: 60%;} - -.ref-arguments th {text-align: right; padding-right: 10px;} -.ref-arguments th, .ref-arguments td {vertical-align: top; min-width: 100px} -.ref-arguments .name {width: 20%;} -.ref-arguments .desc {width: 80%;} - -/* Nice scrolling for wide elements --------------------------------------- */ - -table { - display: block; - overflow: auto; -} - -/* Syntax highlighting ---------------------------------------------------- */ - -pre, code, pre code { - background-color: #f8f8f8; - color: #333; -} -pre, pre code { - white-space: pre-wrap; - word-break: break-all; - overflow-wrap: break-word; -} - -pre { - border: 1px solid #eee; -} - -pre .img, pre .r-plt { - margin: 5px 0; -} - -pre .img img, pre .r-plt img { - background-color: #fff; -} - -code a, pre a { - color: #375f84; -} - -a.sourceLine:hover { - text-decoration: none; -} - -.fl {color: #1514b5;} -.fu {color: #000000;} /* function */ -.ch,.st {color: #036a07;} /* string */ -.kw {color: #264D66;} /* keyword */ -.co {color: #888888;} /* comment */ - -.error {font-weight: bolder;} -.warning {font-weight: bolder;} - -/* Clipboard --------------------------*/ - -.hasCopyButton { - position: relative; -} - -.btn-copy-ex { - position: absolute; - right: 0; - top: 0; - visibility: hidden; -} - -.hasCopyButton:hover button.btn-copy-ex { - visibility: visible; -} - -/* headroom.js ------------------------ */ - -.headroom { - will-change: transform; - transition: transform 200ms linear; -} -.headroom--pinned { - transform: translateY(0%); -} -.headroom--unpinned { - transform: translateY(-100%); -} - -/* mark.js ----------------------------*/ - -mark { - background-color: rgba(255, 255, 51, 0.5); - border-bottom: 2px solid rgba(255, 153, 51, 0.3); - padding: 1px; -} - -/* vertical spacing after htmlwidgets */ -.html-widget { - margin-bottom: 10px; -} - -/* fontawesome ------------------------ */ - -.fab { - font-family: "Font Awesome 5 Brands" !important; -} - -/* don't display links in 
code chunks when printing */ -/* source: https://stackoverflow.com/a/10781533 */ -@media print { - code a:link:after, code a:visited:after { - content: ""; - } -} - -/* Section anchors --------------------------------- - Added in pandoc 2.11: https://github.com/jgm/pandoc-templates/commit/9904bf71 -*/ - -div.csl-bib-body { } -div.csl-entry { - clear: both; -} -.hanging-indent div.csl-entry { - margin-left:2em; - text-indent:-2em; -} -div.csl-left-margin { - min-width:2em; - float:left; -} -div.csl-right-inline { - margin-left:2em; - padding-left:1em; -} -div.csl-indent { - margin-left: 2em; -} diff --git a/docs/pkgdown.js b/docs/pkgdown.js deleted file mode 100644 index 6f0eee4..0000000 --- a/docs/pkgdown.js +++ /dev/null @@ -1,108 +0,0 @@ -/* http://gregfranko.com/blog/jquery-best-practices/ */ -(function($) { - $(function() { - - $('.navbar-fixed-top').headroom(); - - $('body').css('padding-top', $('.navbar').height() + 10); - $(window).resize(function(){ - $('body').css('padding-top', $('.navbar').height() + 10); - }); - - $('[data-toggle="tooltip"]').tooltip(); - - var cur_path = paths(location.pathname); - var links = $("#navbar ul li a"); - var max_length = -1; - var pos = -1; - for (var i = 0; i < links.length; i++) { - if (links[i].getAttribute("href") === "#") - continue; - // Ignore external links - if (links[i].host !== location.host) - continue; - - var nav_path = paths(links[i].pathname); - - var length = prefix_length(nav_path, cur_path); - if (length > max_length) { - max_length = length; - pos = i; - } - } - - // Add class to parent
  • , and enclosing
  • if in dropdown - if (pos >= 0) { - var menu_anchor = $(links[pos]); - menu_anchor.parent().addClass("active"); - menu_anchor.closest("li.dropdown").addClass("active"); - } - }); - - function paths(pathname) { - var pieces = pathname.split("/"); - pieces.shift(); // always starts with / - - var end = pieces[pieces.length - 1]; - if (end === "index.html" || end === "") - pieces.pop(); - return(pieces); - } - - // Returns -1 if not found - function prefix_length(needle, haystack) { - if (needle.length > haystack.length) - return(-1); - - // Special case for length-0 haystack, since for loop won't run - if (haystack.length === 0) { - return(needle.length === 0 ? 0 : -1); - } - - for (var i = 0; i < haystack.length; i++) { - if (needle[i] != haystack[i]) - return(i); - } - - return(haystack.length); - } - - /* Clipboard --------------------------*/ - - function changeTooltipMessage(element, msg) { - var tooltipOriginalTitle=element.getAttribute('data-original-title'); - element.setAttribute('data-original-title', msg); - $(element).tooltip('show'); - element.setAttribute('data-original-title', tooltipOriginalTitle); - } - - if(ClipboardJS.isSupported()) { - $(document).ready(function() { - var copyButton = ""; - - $("div.sourceCode").addClass("hasCopyButton"); - - // Insert copy buttons: - $(copyButton).prependTo(".hasCopyButton"); - - // Initialize tooltips: - $('.btn-copy-ex').tooltip({container: 'body'}); - - // Initialize clipboard: - var clipboardBtnCopies = new ClipboardJS('[data-clipboard-copy]', { - text: function(trigger) { - return trigger.parentNode.textContent.replace(/\n#>[^\n]*/g, ""); - } - }); - - clipboardBtnCopies.on('success', function(e) { - changeTooltipMessage(e.trigger, 'Copied!'); - e.clearSelection(); - }); - - clipboardBtnCopies.on('error', function() { - changeTooltipMessage(e.trigger,'Press Ctrl+C or Command+C to copy'); - }); - }); - } -})(window.jQuery || window.$) diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml deleted file mode 100644 index 12371ae..0000000 --- a/docs/pkgdown.yml +++ /dev/null @@ -1,9 +0,0 @@ -pandoc: 3.1.11 -pkgdown: 2.0.7 -pkgdown_sha: ~ -articles: - InstallationGuide: InstallationGuide.html - Specification: Specification.html - UsingPackage: UsingPackage.html -last_built: 2024-10-07T20:19Z - diff --git a/docs/pull_request_template.html b/docs/pull_request_template.html deleted file mode 100644 index 01b3f73..0000000 --- a/docs/pull_request_template.html +++ /dev/null @@ -1,111 +0,0 @@ - -NA • Characterization - - -
    -
    - - - -
    -
    - - - -

    Before you do a pull request, you should always file an issue and make sure the package maintainer agrees that it’s a problem, and is happy with your basic proposal for fixing it. We don’t want you to spend a bunch of time on something that we don’t think is a good idea.

    -

    Additional requirements for pull requests:

    -
    • Adhere to the Developer Guidelines as well as the OHDSI Code Style.

    • -
    • If possible, add unit tests for new functionality you add.

    • -
    • Restrict your pull request to solving the issue at hand. Do not try to ‘improve’ parts of the code that are not related to the issue. If you feel other parts of the code need better organization, create a separate issue for that.

    • -
    • Make sure you pass R check without errors and warnings before submitting.

    • -
    • Always target the develop branch, and make sure you are up-to-date with the develop branch.

    • -
    - - - -
    - - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/Characterization-package.html b/docs/reference/Characterization-package.html deleted file mode 100644 index 6178437..0000000 --- a/docs/reference/Characterization-package.html +++ /dev/null @@ -1,121 +0,0 @@ - -Characterization: Characterizations of Cohorts — Characterization-package • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Various characterizations of target and outcome cohorts.

    -
    - - - -
    -

    Author

    -

    Maintainer: Jenna Reps reps@ohdsi.org

    -

    Authors:

    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/Rplot001.png b/docs/reference/Rplot001.png deleted file mode 100644 index 17a3580..0000000 Binary files a/docs/reference/Rplot001.png and /dev/null differ diff --git a/docs/reference/cleanIncremental.html b/docs/reference/cleanIncremental.html deleted file mode 100644 index 426be97..0000000 --- a/docs/reference/cleanIncremental.html +++ /dev/null @@ -1,133 +0,0 @@ - -Removes csv files from folders that have not been marked as completed -and removes the record of the execution file — cleanIncremental • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Removes csv files from folders that have not been marked as completed -and removes the record of the execution file

    -
    - -
    -
    cleanIncremental(executionFolder)
    -
    - -
    -

    Arguments

    -
    executionFolder
    -

    The folder that has the execution files

    - -
    -
    -

    Value

    - - -

    A list with the settings

    -
    -
    -

    See also

    -

    Other Incremental: -cleanNonIncremental()

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/cleanNonIncremental.html b/docs/reference/cleanNonIncremental.html deleted file mode 100644 index 4ad340b..0000000 --- a/docs/reference/cleanNonIncremental.html +++ /dev/null @@ -1,133 +0,0 @@ - -Removes csv files from the execution folder as there should be no csv files -when running in non-incremental model — cleanNonIncremental • Characterization - - -
    -
    - - - -
    -
    - - -
    -

Removes csv files from the execution folder as there should be no csv files -when running in non-incremental mode

    -
    - -
    -
    cleanNonIncremental(executionFolder)
    -
    - -
    -

    Arguments

    -
    executionFolder
    -

    The folder that has the execution files

    - -
    -
    -

    Value

    - - -

    A list with the settings

    -
    -
    -

    See also

    -

    Other Incremental: -cleanIncremental()

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/computeAggregateCovariateAnalyses.html b/docs/reference/computeAggregateCovariateAnalyses.html deleted file mode 100644 index 7b277b2..0000000 --- a/docs/reference/computeAggregateCovariateAnalyses.html +++ /dev/null @@ -1,179 +0,0 @@ - -Compute aggregate covariate study — computeAggregateCovariateAnalyses • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Compute aggregate covariate study

    -
    - -
    -
    computeAggregateCovariateAnalyses(
    -  connectionDetails = NULL,
    -  cdmDatabaseSchema,
    -  cdmVersion = 5,
    -  targetDatabaseSchema,
    -  targetTable,
    -  outcomeDatabaseSchema = targetDatabaseSchema,
    -  outcomeTable = targetTable,
    -  tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
    -  aggregateCovariateSettings,
    -  databaseId = "database 1",
    -  runId = 1
    -)
    -
    - -
    -

    Arguments

    -
    connectionDetails
    -

    An object of type `connectionDetails` as created using the -[DatabaseConnector::createConnectionDetails()] function.

    - - -
    cdmDatabaseSchema
    -

    The schema with the OMOP CDM data

    - - -
    cdmVersion
    -

    The version of the OMOP CDM

    - - -
    targetDatabaseSchema
    -

    Schema name where your target cohort table resides. Note that for SQL Server, -this should include both the database and schema name, for example -'scratch.dbo'.

    - - -
    targetTable
    -

    Name of the target cohort table.

    - - -
    outcomeDatabaseSchema
    -

    Schema name where your outcome cohort table resides. Note that for SQL Server, -this should include both the database and schema name, for example -'scratch.dbo'.

    - - -
    outcomeTable
    -

    Name of the outcome cohort table.

    - - -
    tempEmulationSchema
    -

    Some database platforms like Oracle and Impala do not truly support temp tables. -To emulate temp tables, provide a schema with write privileges where temp tables -can be created

    - - -
    aggregateCovariateSettings
    -

    The settings for the AggregateCovariate study

    - - -
    databaseId
    -

    Unique identifier for the database (string)

    - - -
    runId
    -

Unique identifier for the time-at-risk (TAR) and covariate setting

    - -
    -
    -

    Value

    - - -

    The descriptive results for each target cohort in the settings.
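An illustrative usage sketch only (not taken from the package documentation); the connection details, schema names, cohort table and cohort ids below are placeholders:

# placeholder connection details; replace with your own database credentials
connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "postgresql",
  server = "myserver/ohdsi",
  user = "user",
  password = "secret"
)

# minimal aggregate covariate settings using placeholder cohort ids
aggregateSettings <- createAggregateCovariateSettings(
  targetIds = c(1),
  outcomeIds = c(2)
)

result <- computeAggregateCovariateAnalyses(
  connectionDetails = connectionDetails,
  cdmDatabaseSchema = "cdm",            # schema containing the OMOP CDM data
  targetDatabaseSchema = "scratch.dbo", # schema containing the cohort table
  targetTable = "cohort",
  aggregateCovariateSettings = aggregateSettings,
  databaseId = "database 1",
  runId = 1
)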

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/computeDechallengeRechallengeAnalyses.html b/docs/reference/computeDechallengeRechallengeAnalyses.html deleted file mode 100644 index ba21638..0000000 --- a/docs/reference/computeDechallengeRechallengeAnalyses.html +++ /dev/null @@ -1,188 +0,0 @@ - -Compute dechallenge rechallenge study — computeDechallengeRechallengeAnalyses • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Compute dechallenge rechallenge study

    -
    - -
    -
    computeDechallengeRechallengeAnalyses(
    -  connectionDetails = NULL,
    -  targetDatabaseSchema,
    -  targetTable,
    -  outcomeDatabaseSchema = targetDatabaseSchema,
    -  outcomeTable = targetTable,
    -  tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
    -  settings,
    -  databaseId = "database 1",
    -  outputFolder = file.path(getwd(), "results"),
    -  minCellCount = 0,
    -  ...
    -)
    -
    - -
    -

    Arguments

    -
    connectionDetails
    -

    An object of type `connectionDetails` as created using the -[DatabaseConnector::createConnectionDetails()] function.

    - - -
    targetDatabaseSchema
    -

    Schema name where your target cohort table resides. Note that for SQL Server, -this should include both the database and schema name, for example -'scratch.dbo'.

    - - -
    targetTable
    -

    Name of the target cohort table.

    - - -
    outcomeDatabaseSchema
    -

    Schema name where your outcome cohort table resides. Note that for SQL Server, -this should include both the database and schema name, for example -'scratch.dbo'.

    - - -
    outcomeTable
    -

    Name of the outcome cohort table.

    - - -
    tempEmulationSchema
    -

    Some database platforms like Oracle and Impala do not truly support temp tables. -To emulate temp tables, provide a schema with write privileges where temp tables -can be created

    - - -
    settings
    -

    The settings for the timeToEvent study

    - - -
    databaseId
    -

    An identifier for the database (string)

    - - -
    outputFolder
    -

    A directory to save the results as csv files

    - - -
    minCellCount
    -

The minimum cell value to display; values less than this will be replaced by -1

    - - -
    ...
    -

    extra inputs

    - -
    -
    -

    Value

    - - -

    An Andromeda::andromeda() object containing the dechallenge rechallenge results
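A hedged usage sketch (the schemas, cohort ids and output folder are placeholders; connectionDetails is assumed to have been created with DatabaseConnector::createConnectionDetails()):

dcSettings <- createDechallengeRechallengeSettings(
  targetIds = c(1),   # placeholder target cohort id
  outcomeIds = c(2)   # placeholder outcome cohort id
)

dc <- computeDechallengeRechallengeAnalyses(
  connectionDetails = connectionDetails,
  targetDatabaseSchema = "scratch.dbo",           # schema containing the cohort table
  targetTable = "cohort",
  settings = dcSettings,
  databaseId = "database 1",
  outputFolder = file.path(tempdir(), "results"), # where the csv files are written
  minCellCount = 5
)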

    -
    -
    -

    See also

    - -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/computeRechallengeFailCaseSeriesAnalyses.html b/docs/reference/computeRechallengeFailCaseSeriesAnalyses.html deleted file mode 100644 index f1973de..0000000 --- a/docs/reference/computeRechallengeFailCaseSeriesAnalyses.html +++ /dev/null @@ -1,193 +0,0 @@ - -Compute fine the subjects that fail the dechallenge rechallenge study — computeRechallengeFailCaseSeriesAnalyses • Characterization - - -
    -
    - - - -
    -
    - - -
    -

Find the subjects that fail the dechallenge rechallenge study

    -
    - -
    -
    computeRechallengeFailCaseSeriesAnalyses(
    -  connectionDetails = NULL,
    -  targetDatabaseSchema,
    -  targetTable,
    -  outcomeDatabaseSchema = targetDatabaseSchema,
    -  outcomeTable = targetTable,
    -  tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
    -  settings,
    -  databaseId = "database 1",
    -  showSubjectId = F,
    -  outputFolder = file.path(getwd(), "results"),
    -  minCellCount = 0,
    -  ...
    -)
    -
    - -
    -

    Arguments

    -
    connectionDetails
    -

    An object of type `connectionDetails` as created using the -[DatabaseConnector::createConnectionDetails()] function.

    - - -
    targetDatabaseSchema
    -

    Schema name where your target cohort table resides. Note that for SQL Server, -this should include both the database and schema name, for example -'scratch.dbo'.

    - - -
    targetTable
    -

    Name of the target cohort table.

    - - -
    outcomeDatabaseSchema
    -

    Schema name where your outcome cohort table resides. Note that for SQL Server, -this should include both the database and schema name, for example -'scratch.dbo'.

    - - -
    outcomeTable
    -

    Name of the outcome cohort table.

    - - -
    tempEmulationSchema
    -

    Some database platforms like Oracle and Impala do not truly support temp tables. -To emulate temp tables, provide a schema with write privileges where temp tables -can be created

    - - -
    settings
    -

    The settings for the timeToEvent study

    - - -
    databaseId
    -

    An identifier for the database (string)

    - - -
    showSubjectId
    -

If FALSE then subject_ids are hidden (recommended if sharing results)

    - - -
    outputFolder
    -

    A directory to save the results as csv files

    - - -
    minCellCount
    -

The minimum cell value to display; values less than this will be replaced by -1

    - - -
    ...
    -

    extra inputs

    - -
    -
    -

    Value

    - - -

    An Andromeda::andromeda() object with the case series details of the failed rechallenge

    -
    -
    -

    See also

    - -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/computeTimeToEventAnalyses.html b/docs/reference/computeTimeToEventAnalyses.html deleted file mode 100644 index e64b387..0000000 --- a/docs/reference/computeTimeToEventAnalyses.html +++ /dev/null @@ -1,192 +0,0 @@ - -Compute time to event study — computeTimeToEventAnalyses • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Compute time to event study

    -
    - -
    -
    computeTimeToEventAnalyses(
    -  connectionDetails = NULL,
    -  targetDatabaseSchema,
    -  targetTable,
    -  outcomeDatabaseSchema = targetDatabaseSchema,
    -  outcomeTable = targetTable,
    -  tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
    -  cdmDatabaseSchema,
    -  settings,
    -  databaseId = "database 1",
    -  outputFolder = file.path(getwd(), "results"),
    -  minCellCount = 0,
    -  ...
    -)
    -
    - -
    -

    Arguments

    -
    connectionDetails
    -

    An object of type `connectionDetails` as created using the -[DatabaseConnector::createConnectionDetails()] function.

    - - -
    targetDatabaseSchema
    -

    Schema name where your target cohort table resides. Note that for SQL Server, -this should include both the database and schema name, for example -'scratch.dbo'.

    - - -
    targetTable
    -

    Name of the target cohort table.

    - - -
    outcomeDatabaseSchema
    -

    Schema name where your outcome cohort table resides. Note that for SQL Server, -this should include both the database and schema name, for example -'scratch.dbo'.

    - - -
    outcomeTable
    -

    Name of the outcome cohort table.

    - - -
    tempEmulationSchema
    -

    Some database platforms like Oracle and Impala do not truly support temp tables. -To emulate temp tables, provide a schema with write privileges where temp tables -can be created

    - - -
    cdmDatabaseSchema
    -

    The database schema containing the OMOP CDM data

    - - -
    settings
    -

    The settings for the timeToEvent study

    - - -
    databaseId
    -

    An identifier for the database (string)

    - - -
    outputFolder
    -

    A directory to save the results as csv files

    - - -
    minCellCount
    -

The minimum cell value to display; values less than this will be replaced by -1

    - - -
    ...
    -

    extra inputs

    - -
    -
    -

    Value

    - - -

    An Andromeda::andromeda() object containing the time to event results.
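A hedged usage sketch (the schemas, cohort ids and output folder are placeholders; connectionDetails is assumed to exist as in the other examples):

tteSettings <- createTimeToEventSettings(
  targetIds = c(1),   # placeholder target cohort id
  outcomeIds = c(2)   # placeholder outcome cohort id
)

tte <- computeTimeToEventAnalyses(
  connectionDetails = connectionDetails,
  targetDatabaseSchema = "scratch.dbo",  # schema containing the cohort table
  targetTable = "cohort",
  cdmDatabaseSchema = "cdm",             # schema containing the OMOP CDM data
  settings = tteSettings,
  databaseId = "database 1",
  outputFolder = file.path(tempdir(), "results")
)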

    -
    -
    -

    See also

    -

    Other TimeToEvent: -createTimeToEventSettings()

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/createAggregateCovariateSettings.html b/docs/reference/createAggregateCovariateSettings.html deleted file mode 100644 index 82c57db..0000000 --- a/docs/reference/createAggregateCovariateSettings.html +++ /dev/null @@ -1,203 +0,0 @@ - -Create aggregate covariate study settings — createAggregateCovariateSettings • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Create aggregate covariate study settings

    -
    - -
    -
    createAggregateCovariateSettings(
    -  targetIds,
    -  outcomeIds,
    -  minPriorObservation = 0,
    -  outcomeWashoutDays = 0,
    -  riskWindowStart = 1,
    -  startAnchor = "cohort start",
    -  riskWindowEnd = 365,
    -  endAnchor = "cohort start",
    -  covariateSettings = FeatureExtraction::createCovariateSettings(useDemographicsGender =
    -    T, useDemographicsAge = T, useDemographicsAgeGroup = T, useDemographicsRace = T,
    -    useDemographicsEthnicity = T, useDemographicsIndexYear = T, useDemographicsIndexMonth
    -    = T, useDemographicsTimeInCohort = T, useDemographicsPriorObservationTime = T,
    -    useDemographicsPostObservationTime = T, useConditionGroupEraLongTerm = T,
    -    useDrugGroupEraOverlapping = T, useDrugGroupEraLongTerm = T,
    -    useProcedureOccurrenceLongTerm = T, useMeasurementLongTerm = T, 
    -    
    -    useObservationLongTerm = T, useDeviceExposureLongTerm = T,
    -    useVisitConceptCountLongTerm = T, useConditionGroupEraShortTerm = T,
    -    useDrugGroupEraShortTerm = T, useProcedureOccurrenceShortTerm = T,
    -    useMeasurementShortTerm = T, useObservationShortTerm = T, useDeviceExposureShortTerm
    -    = T, useVisitConceptCountShortTerm = T, endDays = 0, longTermStartDays = -365,
    -    shortTermStartDays = -30),
    -  caseCovariateSettings = createDuringCovariateSettings(useConditionGroupEraDuring = T,
    -    useDrugGroupEraDuring = T, useProcedureOccurrenceDuring = T, useDeviceExposureDuring
    -    = T, useMeasurementDuring = T, useObservationDuring = T, useVisitConceptCountDuring =
    -    T),
    -  casePreTargetDuration = 365,
    -  casePostOutcomeDuration = 365,
    -  extractNonCaseCovariates = T
    -)
    -
    - -
    -

    Arguments

    -
    targetIds
    -

    A list of cohortIds for the target cohorts

    - - -
    outcomeIds
    -

    A list of cohortIds for the outcome cohorts

    - - -
    minPriorObservation
    -

    The minimum time (in days) in the database a patient in the target cohorts must be observed prior to index

    - - -
    outcomeWashoutDays
    -

Patients with the outcome within outcomeWashoutDays days prior to index are excluded from the risk factor analysis

    - - -
    riskWindowStart
    -

    The start of the risk window (in days) relative to the `startAnchor`.

    - - -
    startAnchor
    -

    The anchor point for the start of the risk window. Can be `"cohort start"` -or `"cohort end"`.

    - - -
    riskWindowEnd
    -

    The end of the risk window (in days) relative to the `endAnchor`.

    - - -
    endAnchor
    -

    The anchor point for the end of the risk window. Can be `"cohort start"` -or `"cohort end"`.

    - - -
    covariateSettings
    -

    An object created using FeatureExtraction::createCovariateSettings

    - - -
    caseCovariateSettings
    -

    An object created using createDuringCovariateSettings

    - - -
    casePreTargetDuration
    -

    The number of days prior to case index we use for FeatureExtraction

    - - -
    casePostOutcomeDuration
    -

The number of days after the case's outcome date that we use for FeatureExtraction

    - - -
    extractNonCaseCovariates
    -

    Whether to extract aggregate covariates and counts for patients in the targets and outcomes in addition to the cases

    - -
    -
    -

    Value

    - - -

    A list with the settings
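A minimal sketch (the cohort ids are placeholders; the default covariate settings shown in the usage above are kept):

aggregateSettings <- createAggregateCovariateSettings(
  targetIds = c(1, 2),        # placeholder target cohort ids
  outcomeIds = c(3),          # placeholder outcome cohort id
  minPriorObservation = 365,  # require a year of prior observation
  outcomeWashoutDays = 365,
  riskWindowStart = 1,
  startAnchor = "cohort start",
  riskWindowEnd = 365,
  endAnchor = "cohort start"
)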

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/createCharacterizationSettings.html b/docs/reference/createCharacterizationSettings.html deleted file mode 100644 index c1e5490..0000000 --- a/docs/reference/createCharacterizationSettings.html +++ /dev/null @@ -1,146 +0,0 @@ - -Create the settings for a large scale characterization study — createCharacterizationSettings • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    This function creates a list of settings for different characterization studies

    -
    - -
    -
    createCharacterizationSettings(
    -  timeToEventSettings = NULL,
    -  dechallengeRechallengeSettings = NULL,
    -  aggregateCovariateSettings = NULL
    -)
    -
    - -
    -

    Arguments

    -
    timeToEventSettings
    -

    A list of timeToEvent settings

    - - -
    dechallengeRechallengeSettings
    -

    A list of dechallengeRechallenge settings

    - - -
    aggregateCovariateSettings
    -

    A list of aggregateCovariate settings

    - -
    -
    -

    Value

    - - -

Returns a list containing the characterization settings

    -
    -
    -

    Details

    -

    Specify one or more timeToEvent, dechallengeRechallenge and aggregateCovariate settings
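A small sketch combining the three types of settings (the cohort ids are placeholders; each argument takes a list, as described above):

tteSettings <- createTimeToEventSettings(targetIds = c(1), outcomeIds = c(2))
dcSettings <- createDechallengeRechallengeSettings(targetIds = c(1), outcomeIds = c(2))
aggSettings <- createAggregateCovariateSettings(targetIds = c(1), outcomeIds = c(2))

characterizationSettings <- createCharacterizationSettings(
  timeToEventSettings = list(tteSettings),
  dechallengeRechallengeSettings = list(dcSettings),
  aggregateCovariateSettings = list(aggSettings)
)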

    -
    - - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/createCharacterizationTables.html b/docs/reference/createCharacterizationTables.html deleted file mode 100644 index 035b633..0000000 --- a/docs/reference/createCharacterizationTables.html +++ /dev/null @@ -1,167 +0,0 @@ - -Create the results tables to store characterization results into a database — createCharacterizationTables • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    This function executes a large set of SQL statements to create tables that can store results

    -
    - -
    -
    createCharacterizationTables(
    -  connectionDetails,
    -  resultSchema,
    -  targetDialect = "postgresql",
    -  deleteExistingTables = T,
    -  createTables = T,
    -  tablePrefix = "c_",
    -  tempEmulationSchema = getOption("sqlRenderTempEmulationSchema")
    -)
    -
    - -
    -

    Arguments

    -
    connectionDetails
    -

The connectionDetails to a database created by using the -function createConnectionDetails in the -DatabaseConnector package.

    - - -
    resultSchema
    -

The name of the database schema where the result tables will be created.

    - - -
    targetDialect
    -

    The database management system being used

    - - -
    deleteExistingTables
    -

    If true any existing tables matching the Characterization result tables names will be deleted

    - - -
    createTables
    -

    If true the Characterization result tables will be created

    - - -
    tablePrefix
    -

A string prefix added to the Characterization result table names

    - - -
    tempEmulationSchema
    -

    The temp schema used when the database management system is oracle

    - -
    -
    -

    Value

    - - -

    Returns NULL but creates the required tables into the specified database schema.

    -
    -
    -

    Details

    -

    This function can be used to create (or delete) Characterization result tables
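An illustrative sketch (the result schema and connection details are placeholders for a database you can write to):

createCharacterizationTables(
  connectionDetails = resultConnectionDetails,  # assumed connection to the result database
  resultSchema = "characterization_results",    # placeholder result schema
  targetDialect = "postgresql",
  deleteExistingTables = TRUE,
  createTables = TRUE,
  tablePrefix = "c_"
)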

    -
    -
    -

    See also

    - -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/createDechallengeRechallengeSettings.html b/docs/reference/createDechallengeRechallengeSettings.html deleted file mode 100644 index b5f4b6a..0000000 --- a/docs/reference/createDechallengeRechallengeSettings.html +++ /dev/null @@ -1,146 +0,0 @@ - -Create dechallenge rechallenge study settings — createDechallengeRechallengeSettings • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Create dechallenge rechallenge study settings

    -
    - -
    -
    createDechallengeRechallengeSettings(
    -  targetIds,
    -  outcomeIds,
    -  dechallengeStopInterval = 30,
    -  dechallengeEvaluationWindow = 30
    -)
    -
    - -
    -

    Arguments

    -
    targetIds
    -

    A list of cohortIds for the target cohorts

    - - -
    outcomeIds
    -

    A list of cohortIds for the outcome cohorts

    - - -
    dechallengeStopInterval
    -

An integer specifying how much time to add to the cohort_end when determining whether the event starts during the cohort and ends after

    - - -
    dechallengeEvaluationWindow
    -

An integer specifying the period of time after the cohort_end during which no outcome may occur for the dechallenge to count as a success

    - -
    -
    -

    Value

    - - -

    A list with the settings
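For example (the cohort ids are placeholders):

dcSettings <- createDechallengeRechallengeSettings(
  targetIds = c(1, 2),              # placeholder target cohort ids
  outcomeIds = c(10),               # placeholder outcome cohort id
  dechallengeStopInterval = 30,
  dechallengeEvaluationWindow = 31
)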

    -
    -
    -

    See also

    - -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/createDuringCovariateSettings.html b/docs/reference/createDuringCovariateSettings.html deleted file mode 100644 index 0c19901..0000000 --- a/docs/reference/createDuringCovariateSettings.html +++ /dev/null @@ -1,254 +0,0 @@ - -Create during covariate settings — createDuringCovariateSettings • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Create during covariate settings

    -
    - -
    -
    createDuringCovariateSettings(
    -  useConditionOccurrenceDuring = F,
    -  useConditionOccurrencePrimaryInpatientDuring = F,
    -  useConditionEraDuring = F,
    -  useConditionGroupEraDuring = F,
    -  useDrugExposureDuring = F,
    -  useDrugEraDuring = F,
    -  useDrugGroupEraDuring = F,
    -  useProcedureOccurrenceDuring = F,
    -  useDeviceExposureDuring = F,
    -  useMeasurementDuring = F,
    -  useObservationDuring = F,
    -  useVisitCountDuring = F,
    -  useVisitConceptCountDuring = F,
    -  includedCovariateConceptIds = c(),
    -  addDescendantsToInclude = F,
    -  excludedCovariateConceptIds = c(),
    -  addDescendantsToExclude = F,
    -  includedCovariateIds = c()
    -)
    -
    - -
    -

    Arguments

    -
    useConditionOccurrenceDuring
    -

    One covariate per condition in the -condition_occurrence table starting between -cohort start and cohort end. (analysis ID 109)

    - - -
    useConditionOccurrencePrimaryInpatientDuring
    -

    One covariate per condition observed as -a primary diagnosis in an inpatient -setting in the condition_occurrence table starting between -cohort start and cohort end. (analysis ID 110)

    - - -
    useConditionEraDuring
    -

    One covariate per condition in the condition_era table -starting between cohort start and cohort end. -(analysis ID 217)

    - - -
    useConditionGroupEraDuring
    -

    One covariate per condition era rolled -up to groups in the condition_era table -starting between cohort start and cohort end. -(analysis ID 218)

    - - -
    useDrugExposureDuring
    -

    One covariate per drug in the drug_exposure table between cohort start and end. -(analysisId 305)

    - - -
    useDrugEraDuring
    -

    One covariate per drug in the drug_era table between cohort start and end. -(analysis ID 417)

    - - -
    useDrugGroupEraDuring
    -

    One covariate per drug rolled up to ATC groups in the drug_era table between cohort start and end. -(analysis ID 418)

    - - -
    useProcedureOccurrenceDuring
    -

    One covariate per procedure in the procedure_occurrence table between cohort start and end. -(analysis ID 505)

    - - -
    useDeviceExposureDuring
    -

    One covariate per device in the device exposure table starting between cohort start and end. -(analysis ID 605)

    - - -
    useMeasurementDuring
    -

    One covariate per measurement in the measurement table between cohort start and end. -(analysis ID 713)

    - - -
    useObservationDuring
    -

    One covariate per observation in the observation table between cohort start and end. -(analysis ID 805)

    - - -
    useVisitCountDuring
    -

    The number of visits observed between cohort start and end. -(analysis ID 926)

    - - -
    useVisitConceptCountDuring
    -

    The number of visits observed between cohort start and end, stratified by visit concept ID. -(analysis ID 927)

    - - -
    includedCovariateConceptIds
    -

    A list of concept IDs that should be -used to construct covariates.

    - - -
    addDescendantsToInclude
    -

    Should descendant concept IDs be added -to the list of concepts to include?

    - - -
    excludedCovariateConceptIds
    -

    A list of concept IDs that should NOT be -used to construct covariates.

    - - -
    addDescendantsToExclude
    -

    Should descendant concept IDs be added -to the list of concepts to exclude?

    - - -
    includedCovariateIds
    -

    A list of covariate IDs that should be -restricted to.

    - -
    -
    -

    Value

    - - -

    An object of type covariateSettings, to be used in other functions.

    -
    -
    -

    Details

    -

Creates an object specifying how during covariates should be constructed from data in the CDM.

    -
    -
    -

    See also

    -

    Other CovariateSetting: -getDbDuringCovariateData()

    -
    - -
    -

    Examples

    -
    settings <- createDuringCovariateSettings(
    -  useConditionOccurrenceDuring = TRUE,
    -  useConditionOccurrencePrimaryInpatientDuring = FALSE,
    -  useConditionEraDuring = FALSE,
    -  useConditionGroupEraDuring = FALSE
    -)
    -
    -
    -
    -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/createSqliteDatabase.html b/docs/reference/createSqliteDatabase.html deleted file mode 100644 index 49df29d..0000000 --- a/docs/reference/createSqliteDatabase.html +++ /dev/null @@ -1,133 +0,0 @@ - -Create an sqlite database connection — createSqliteDatabase • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    This function creates a connection to an sqlite database

    -
    - -
    -
    createSqliteDatabase(sqliteLocation = tempdir())
    -
    - -
    -

    Arguments

    -
    sqliteLocation
    -

    The location of the sqlite database

    - -
    -
    -

    Value

    - - -

    Returns the connection to the sqlite database

    -
    -
    -

    Details

    -

    This function creates a sqlite database and connection
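For example (using a temporary folder as a placeholder location):

conn <- createSqliteDatabase(
  sqliteLocation = tempdir()  # folder where the sqlite file is created
)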

    -
    -
    -

    See also

    - -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/createTimeToEventSettings.html b/docs/reference/createTimeToEventSettings.html deleted file mode 100644 index 6b8ff93..0000000 --- a/docs/reference/createTimeToEventSettings.html +++ /dev/null @@ -1,132 +0,0 @@ - -Create time to event study settings — createTimeToEventSettings • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Create time to event study settings

    -
    - -
    -
    createTimeToEventSettings(targetIds, outcomeIds)
    -
    - -
    -

    Arguments

    -
    targetIds
    -

    A list of cohortIds for the target cohorts

    - - -
    outcomeIds
    -

    A list of cohortIds for the outcome cohorts

    - -
    -
    -

    Value

    - - -

A list with the time to event settings
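For example (the cohort ids are placeholders):

tteSettings <- createTimeToEventSettings(
  targetIds = c(1, 2),   # placeholder target cohort ids
  outcomeIds = c(10)     # placeholder outcome cohort id
)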

    -
    -
    -

    See also

    -

    Other TimeToEvent: -computeTimeToEventAnalyses()

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/exportAggregateCovariateToCsv.html b/docs/reference/exportAggregateCovariateToCsv.html deleted file mode 100644 index 6b92f3b..0000000 --- a/docs/reference/exportAggregateCovariateToCsv.html +++ /dev/null @@ -1,128 +0,0 @@ - -export the AggregateCovariate results as csv — exportAggregateCovariateToCsv • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    export the AggregateCovariate results as csv

    -
    - -
    -
    exportAggregateCovariateToCsv(result, saveDirectory, minCellCount = 0)
    -
    - -
    -

    Arguments

    -
    result
    -

    The output of running computeAggregateCovariateAnalyses()

    - - -
    saveDirectory
    -

A directory location to save the results into

    - - -
    minCellCount
    -

    The minimum value that will be displayed in count columns

    - -
    -
    -

    Value

    - - -

    A string specifying the directory the csv results are saved to

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/exportDatabaseToCsv.html b/docs/reference/exportDatabaseToCsv.html deleted file mode 100644 index ce5dfb9..0000000 --- a/docs/reference/exportDatabaseToCsv.html +++ /dev/null @@ -1,158 +0,0 @@ - -Exports all tables in the result database to csv files — exportDatabaseToCsv • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    This function extracts the database tables into csv files

    -
    - -
    -
    exportDatabaseToCsv(
    -  connectionDetails,
    -  resultSchema,
    -  targetDialect = NULL,
    -  tablePrefix = "c_",
    -  filePrefix = NULL,
    -  tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
    -  saveDirectory
    -)
    -
    - -
    -

    Arguments

    -
    connectionDetails
    -

    The connection details to input into the -function connect in the -DatabaseConnector package.

    - - -
    resultSchema
    -

The name of the database schema where the result tables will be created.

    - - -
    targetDialect
    -

    DEPRECATED: derived from connectionDetails.

    - - -
    tablePrefix
    -

    The table prefix to apply to the characterization result tables

    - - -
    filePrefix
    -

    The prefix to apply to the files

    - - -
    tempEmulationSchema
    -

    The temp schema used when the database management system is oracle

    - - -
    saveDirectory
    -

    The directory to save the csv results

    - -
    -
    -

    Value

    - - -

    csv file per table into the saveDirectory

    -
    -
    -

    Details

    -

    This function extracts the database tables into csv files
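A hedged usage sketch (the connection details, result schema and save directory are placeholders):

exportDatabaseToCsv(
  connectionDetails = resultConnectionDetails,  # assumed connection to the result database
  resultSchema = "characterization_results",    # placeholder schema holding the result tables
  tablePrefix = "c_",
  saveDirectory = file.path(tempdir(), "csvExport")
)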

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/exportDechallengeRechallengeToCsv.html b/docs/reference/exportDechallengeRechallengeToCsv.html deleted file mode 100644 index 0ab3091..0000000 --- a/docs/reference/exportDechallengeRechallengeToCsv.html +++ /dev/null @@ -1,137 +0,0 @@ - -export the DechallengeRechallenge results as csv — exportDechallengeRechallengeToCsv • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    export the DechallengeRechallenge results as csv

    -
    - -
    -
    exportDechallengeRechallengeToCsv(result, saveDirectory, minCellCount = 0)
    -
    - -
    -

    Arguments

    -
    result
    -

    The output of running computeDechallengeRechallengeAnalyses()

    - - -
    saveDirectory
    -

A directory location to save the results into

    - - -
    minCellCount
    -

    The minimum value that will be displayed in count columns

    - -
    -
    -

    Value

    - - -

    A string specifying the directory the csv results are saved to

    -
    -
    -

    See also

    - -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/exportRechallengeFailCaseSeriesToCsv.html b/docs/reference/exportRechallengeFailCaseSeriesToCsv.html deleted file mode 100644 index c854c92..0000000 --- a/docs/reference/exportRechallengeFailCaseSeriesToCsv.html +++ /dev/null @@ -1,133 +0,0 @@ - -export the RechallengeFailCaseSeries results as csv — exportRechallengeFailCaseSeriesToCsv • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    export the RechallengeFailCaseSeries results as csv

    -
    - -
    -
    exportRechallengeFailCaseSeriesToCsv(result, saveDirectory)
    -
    - -
    -

    Arguments

    -
    result
    -

    The output of running computeRechallengeFailCaseSeriesAnalyses()

    - - -
    saveDirectory
    -

A directory location to save the results into

    - -
    -
    -

    Value

    - - -

    A string specifying the directory the csv results are saved to

    -
    -
    -

    See also

    - -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/exportTimeToEventToCsv.html b/docs/reference/exportTimeToEventToCsv.html deleted file mode 100644 index 1d58b1d..0000000 --- a/docs/reference/exportTimeToEventToCsv.html +++ /dev/null @@ -1,137 +0,0 @@ - -export the TimeToEvent results as csv — exportTimeToEventToCsv • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    export the TimeToEvent results as csv

    -
    - -
    -
    exportTimeToEventToCsv(result, saveDirectory, minCellCount = 0)
    -
    - -
    -

    Arguments

    -
    result
    -

    The output of running computeTimeToEventAnalyses()

    - - -
    saveDirectory
    -

A directory location to save the results into

    - - -
    minCellCount
    -

    The minimum value that will be displayed in count columns

    - -
    -
    -

    Value

    - - -

    A string specifying the directory the csv results are saved to

    -
    - - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/getDbDuringCovariateData.html b/docs/reference/getDbDuringCovariateData.html deleted file mode 100644 index 1ab8750..0000000 --- a/docs/reference/getDbDuringCovariateData.html +++ /dev/null @@ -1,184 +0,0 @@ - -Extracts covariates that occur during a cohort — getDbDuringCovariateData • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Extracts covariates that occur during a cohort

    -
    - -
    -
    getDbDuringCovariateData(
    -  connection,
    -  oracleTempSchema = NULL,
    -  cdmDatabaseSchema,
    -  cdmVersion = "5",
    -  cohortTable = "#cohort_person",
    -  rowIdField = "subject_id",
    -  aggregated = T,
    -  cohortIds = c(-1),
    -  covariateSettings,
    -  minCharacterizationMean = 0,
    -  ...
    -)
    -
    - -
    -

    Arguments

    -
    connection
    -

    The database connection

    - - -
    oracleTempSchema
    -

    The temp schema if using oracle

    - - -
    cdmDatabaseSchema
    -

    The schema of the OMOP CDM data

    - - -
    cdmVersion
    -

    version of the OMOP CDM data

    - - -
    cohortTable
    -

    the table name that contains the target population cohort

    - - -
    rowIdField
    -

    string representing the unique identifier in the target population cohort

    - - -
    aggregated
    -

    whether the covariate should be aggregated

    - - -
    cohortIds
    -

The cohort ids for the target cohorts

    - - -
    covariateSettings
    -

    settings for the covariate cohorts and time periods

    - - -
    minCharacterizationMean
    -

the minimum covariate mean value required for a covariate to be extracted

    - - -
    ...
    -

    additional arguments from FeatureExtraction

    - -
    -
    -

    Value

    - - -

The during covariates based on the user settings

    -
    -
    -

    Details

    -

The user specifies which during covariates they want and this function extracts them using FeatureExtraction

    -
    -
    -

    See also

    -

    Other CovariateSetting: -createDuringCovariateSettings()

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/index.html b/docs/reference/index.html deleted file mode 100644 index 0079068..0000000 --- a/docs/reference/index.html +++ /dev/null @@ -1,223 +0,0 @@ - -Function reference • Characterization - - -
    -
    - - - -
    -
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    -

    Aggregate Covariate Analysis

    -

This analysis calculates the aggregate characteristics for a Target cohort (T), an Outcome cohort (O) and combinations of T with O during time at risk and T without O during time at risk.

    -
    -

    createAggregateCovariateSettings()

    -

    Create aggregate covariate study settings

    -

    Dechallenge Rechallenge Analysis

    -

For a given Target cohort (T) and Outcome cohort (O), find any occurrences of a dechallenge (when the T cohort stops close to when O started) and a rechallenge (when T restarts and O starts again). This is useful for investigating causality between drugs and events.

    -
    -

    computeDechallengeRechallengeAnalyses()

    -

    Compute dechallenge rechallenge study

    -

    computeRechallengeFailCaseSeriesAnalyses()

    -

Find the subjects that fail the dechallenge rechallenge study

    -

    createDechallengeRechallengeSettings()

    -

    Create dechallenge rechallenge study settings

    -

    Time to Event Analysis

    -

    This analysis calculates the timing between the Target cohort (T) and an Outcome cohort (O).

    -
    -

    computeTimeToEventAnalyses()

    -

    Compute time to event study

    -

    createTimeToEventSettings()

    -

    Create time to event study settings

    -

    Run Large Scale Characterization Study

    -

Run multiple aggregate covariate, time to event and dechallenge/rechallenge analyses.

    -
    -

    createCharacterizationSettings()

    -

    Create the settings for a large scale characterization study

    -

    loadCharacterizationSettings()

    -

    Load the characterization settings previously saved as a json file

    -

    runCharacterizationAnalyses()

    -

    execute a large-scale characterization study

    -

    saveCharacterizationSettings()

    -

    Save the characterization settings as a json

    -

    Save Load

    -

    Functions to save the analysis settings and the results (as sqlite or csv files).

    -
    -

    exportDechallengeRechallengeToCsv()

    -

    export the DechallengeRechallenge results as csv

    -

    exportRechallengeFailCaseSeriesToCsv()

    -

    export the RechallengeFailCaseSeries results as csv

    -

    exportTimeToEventToCsv()

    -

    export the TimeToEvent results as csv

    -

    Insert into Database

    -

    Functions to insert the results into a database.

    -
    -

    createCharacterizationTables()

    -

    Create the results tables to store characterization results into a database

    -

    createSqliteDatabase()

    -

    Create an sqlite database connection

    -

    insertResultsToDatabase()

    -

    Upload the results into a result database

    -

    Shiny App

    -

Functions to interactively explore the results from runCharacterizationAnalyses().

    -
    -

    viewCharacterization()

    -

    viewCharacterization - Interactively view the characterization results

    -

    Custom covariates

    -

    Code to create covariates during cohort start and end

    -
    -

    createDuringCovariateSettings()

    -

    Create during covariate settings

    -

    getDbDuringCovariateData()

    -

    Extracts covariates that occur during a cohort

    -

    Incremental

    -

Code to run the incremental mode

    -
    -

    cleanIncremental()

    -

    Removes csv files from folders that have not been marked as completed -and removes the record of the execution file

    -

    cleanNonIncremental()

    -

Removes csv files from the execution folder as there should be no csv files -when running in non-incremental mode

    - - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/insertResultsToDatabase.html b/docs/reference/insertResultsToDatabase.html deleted file mode 100644 index 044a90b..0000000 --- a/docs/reference/insertResultsToDatabase.html +++ /dev/null @@ -1,155 +0,0 @@ - -Upload the results into a result database — insertResultsToDatabase • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    This function uploads results in csv format into a result database

    -
    - -
    -
    insertResultsToDatabase(
    -  connectionDetails,
    -  schema,
    -  resultsFolder,
    -  tablePrefix = "",
    -  csvTablePrefix = "c_"
    -)
    -
    - -
    -

    Arguments

    -
    connectionDetails
    -

    The connection details to the result database

    - - -
    schema
    -

    The schema for the result database

    - - -
    resultsFolder
    -

    The folder containing the csv results

    - - -
    tablePrefix
    -

    A prefix to append to the result tables for the characterization results

    - - -
    csvTablePrefix
    -

    The prefix added to the csv results - default is 'c_'

    - -
    -
    -

    Value

    - - -

Returns NULL but uploads the csv results into the specified result database schema

    -
    -
    -

    Details

    -

    Calls ResultModelManager uploadResults function to upload the csv files
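A hedged usage sketch (the connection details, schema and results folder are placeholders):

insertResultsToDatabase(
  connectionDetails = resultConnectionDetails,      # assumed connection to the result database
  schema = "characterization_results",              # placeholder result schema
  resultsFolder = file.path(tempdir(), "results"),  # folder containing the csv result files
  csvTablePrefix = "c_"
)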

    -
    -
    -

    See also

    - -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/loadAggregateCovariateAnalyses.html b/docs/reference/loadAggregateCovariateAnalyses.html deleted file mode 100644 index af93eed..0000000 --- a/docs/reference/loadAggregateCovariateAnalyses.html +++ /dev/null @@ -1,120 +0,0 @@ - -Load the AggregateCovariate results — loadAggregateCovariateAnalyses • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Load the AggregateCovariate results

    -
    - -
    -
    loadAggregateCovariateAnalyses(fileName)
    -
    - -
    -

    Arguments

    -
    fileName
    -

The file containing the previously saved results.

    - -
    -
    -

    Value

    - - -

    A list of data.frames with the AggregateCovariate results

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/loadCharacterizationSettings.html b/docs/reference/loadCharacterizationSettings.html deleted file mode 100644 index 216ef2c..0000000 --- a/docs/reference/loadCharacterizationSettings.html +++ /dev/null @@ -1,134 +0,0 @@ - -Load the characterization settings previously saved as a json file — loadCharacterizationSettings • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    This function converts the json file back into an R object

    -
    - -
    -
    loadCharacterizationSettings(fileName)
    -
    - -
    -

    Arguments

    -
    fileName
    -

The location of the json settings file

    - -
    -
    -

    Value

    - - -

    Returns the json settings as an R object

    -
    -
    -

    Details

    -

Input the location of the 'characterizationSettings.json' file to load the settings into R
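For example (the path is a placeholder; it should point at a previously saved 'characterizationSettings.json'):

settings <- loadCharacterizationSettings(
  fileName = file.path("myStudy", "characterizationSettings.json")
)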

    -
    - - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/loadDechallengeRechallengeAnalyses.html b/docs/reference/loadDechallengeRechallengeAnalyses.html deleted file mode 100644 index 0408387..0000000 --- a/docs/reference/loadDechallengeRechallengeAnalyses.html +++ /dev/null @@ -1,120 +0,0 @@ - -Load the DechallengeRechallenge results — loadDechallengeRechallengeAnalyses • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Load the DechallengeRechallenge results

    -
    - -
    -
    loadDechallengeRechallengeAnalyses(fileName)
    -
    - -
    -

    Arguments

    -
    fileName
    -

The file containing the previously saved results.

    - -
    -
    -

    Value

    - - -

    A data.frame with the DechallengeRechallenge results

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/loadRechallengeFailCaseSeriesAnalyses.html b/docs/reference/loadRechallengeFailCaseSeriesAnalyses.html deleted file mode 100644 index 66ef094..0000000 --- a/docs/reference/loadRechallengeFailCaseSeriesAnalyses.html +++ /dev/null @@ -1,120 +0,0 @@ - -Load the RechallengeFailCaseSeries results — loadRechallengeFailCaseSeriesAnalyses • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Load the RechallengeFailCaseSeries results

    -
    - -
    -
    loadRechallengeFailCaseSeriesAnalyses(fileName)
    -
    - -
    -

    Arguments

    -
    fileName
    -

The file containing the previously saved results.

    - -
    -
    -

    Value

    - - -

    A data.frame with the RechallengeFailCaseSeries results

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
    - - - - - - - - diff --git a/docs/reference/loadTimeToEventAnalyses.html b/docs/reference/loadTimeToEventAnalyses.html deleted file mode 100644 index 5b8f490..0000000 --- a/docs/reference/loadTimeToEventAnalyses.html +++ /dev/null @@ -1,120 +0,0 @@ - -Load the TimeToEvent results — loadTimeToEventAnalyses • Characterization - - -
    -
    - - - -
    -
    - - -
    -

    Load the TimeToEvent results

    -
    - -
    -
    loadTimeToEventAnalyses(fileName)
    -
    - -
    -

    Arguments

    -
    fileName
    -

The file containing the previously saved results.

    - -
    -
    -

    Value

    - - -

    A data.frame with the TimeToEvent results

    -
    - -
    - -
    - - -
    - -
    -

    Site built with pkgdown 2.0.7.

    -
    - -
diff --git a/docs/reference/runCharacterizationAnalyses.html b/docs/reference/runCharacterizationAnalyses.html
deleted file mode 100644
index 594e98b..0000000
--- a/docs/reference/runCharacterizationAnalyses.html
+++ /dev/null
@@ -1,225 +0,0 @@
[deleted pkgdown page: execute a large-scale characterization study — runCharacterizationAnalyses • Characterization]

    Specify the database connection containing the CDM data, the cohort database schemas/tables,
    the characterization settings and the directory to save the results to.

    Usage:
      runCharacterizationAnalyses(
        connectionDetails,
        targetDatabaseSchema,
        targetTable,
        outcomeDatabaseSchema,
        outcomeTable,
        tempEmulationSchema = getOption("sqlRenderTempEmulationSchema"),
        cdmDatabaseSchema,
        characterizationSettings,
        outputDirectory,
        executionPath = file.path(outputDirectory, "execution"),
        csvFilePrefix = "c_",
        databaseId = "1",
        showSubjectId = F,
        minCellCount = 0,
        incremental = T,
        threads = 1,
        minCharacterizationMean = 0.01
      )

    Arguments:
        connectionDetails: The connection details to the database containing the OMOP CDM data.
        targetDatabaseSchema: Schema name where your target cohort table resides. Note that for SQL Server, this should include both the database and schema name, for example 'scratch.dbo'.
        targetTable: Name of the target cohort table.
        outcomeDatabaseSchema: Schema name where your outcome cohort table resides. Note that for SQL Server, this should include both the database and schema name, for example 'scratch.dbo'.
        outcomeTable: Name of the outcome cohort table.
        tempEmulationSchema: Some database platforms like Oracle and Impala do not truly support temp tables. To emulate temp tables, provide a schema with write privileges where temp tables can be created.
        cdmDatabaseSchema: The schema with the OMOP CDM data.
        characterizationSettings: The study settings created using createCharacterizationSettings.
        outputDirectory: The location to save the final csv files to.
        executionPath: The location where intermediate results are saved to.
        csvFilePrefix: A string prefixed to the csv file names in the outputDirectory.
        databaseId: The unique identifier for the cdm database.
        showSubjectId: Whether to include the subjectId of failed rechallenge case series or hide it.
        minCellCount: The minimum count value that is calculated.
        incremental: If TRUE then skip previously executed analyses that completed.
        threads: The number of threads to use when running aggregate covariates.
        minCharacterizationMean: The minimum mean threshold to extract when running aggregate covariates.

    Value: Multiple csv files in the outputDirectory.

    Details: The results of the characterization will be saved into an sqlite database inside the specified saveDirectory.

    Site built with pkgdown 2.0.7.
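A sketch of how the function documented on the removed page might be called against the Eunomia test data used in this repository's vignettes. The Eunomia schema and table names ("main", "cohort"), the cohort ids, and the argument names of the settings constructors are assumptions here, not taken from this diff:

```r
library(Characterization)

# Eunomia provides a temporary SQLite database with simulated data (see the vignette changes below)
connectionDetails <- Eunomia::getEunomiaConnectionDetails()
Eunomia::createCohorts(connectionDetails = connectionDetails)

# Assumed constructor arguments; check the package reference for the exact names
characterizationSettings <- createCharacterizationSettings(
  timeToEventSettings = createTimeToEventSettings(
    targetIds = c(1, 2), # hypothetical target cohort ids
    outcomeIds = 3       # hypothetical outcome cohort id
  )
)

runCharacterizationAnalyses(
  connectionDetails = connectionDetails,
  targetDatabaseSchema = "main",
  targetTable = "cohort",
  outcomeDatabaseSchema = "main",
  outcomeTable = "cohort",
  cdmDatabaseSchema = "main",
  characterizationSettings = characterizationSettings,
  outputDirectory = file.path(tempdir(), "characterization"),
  databaseId = "Eunomia",
  threads = 1,
  minCharacterizationMean = 0.01
)
```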
diff --git a/docs/reference/saveAggregateCovariateAnalyses.html b/docs/reference/saveAggregateCovariateAnalyses.html
deleted file mode 100644
index 81de3af..0000000
--- a/docs/reference/saveAggregateCovariateAnalyses.html
+++ /dev/null
@@ -1,124 +0,0 @@
[deleted pkgdown page: Save the AggregateCovariate results — saveAggregateCovariateAnalyses • Characterization]

    Save the AggregateCovariate results

    Usage: saveAggregateCovariateAnalyses(result, fileName)

    Arguments:
        result: The output of running computeAggregateCovariateAnalyses().
        fileName: The file to save the results into.

    Value: A string specifying the directory the results are saved to.

    Site built with pkgdown 2.0.7.
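The same save pattern applies to the aggregate covariate results. A minimal sketch, assuming `acResult` already holds the output of `computeAggregateCovariateAnalyses()` (whose arguments are not shown in this diff) and using a hypothetical file path:

```r
library(Characterization)

# acResult is assumed to be the output of computeAggregateCovariateAnalyses()
savedLocation <- saveAggregateCovariateAnalyses(
  result = acResult,
  fileName = file.path(tempdir(), "aggregateCovariates") # hypothetical path
)
```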
diff --git a/docs/reference/saveCharacterizationSettings.html b/docs/reference/saveCharacterizationSettings.html
deleted file mode 100644
index 54d5b79..0000000
--- a/docs/reference/saveCharacterizationSettings.html
+++ /dev/null
@@ -1,138 +0,0 @@
[deleted pkgdown page: Save the characterization settings as a json — saveCharacterizationSettings • Characterization]

    This function converts the settings into a json object and saves it.

    Usage: saveCharacterizationSettings(settings, fileName)

    Arguments:
        settings: An object of class characterizationSettings created using createCharacterizationSettings.
        fileName: The location to save the json settings.

    Value: Returns the location of the directory containing the json settings.

    Details: Input the characterization settings and output a json file to a file named 'characterizationSettings.json' inside the saveDirectory.

    Site built with pkgdown 2.0.7.
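A sketch of the json round trip this removed page describes, paired with the package's `loadCharacterizationSettings` function (listed in the deleted sitemap further down this diff). The settings constructor arguments and the `fileName` argument of the load function are assumptions, not taken from this diff:

```r
library(Characterization)

# Assumed constructor arguments; check the package reference for the exact names
settings <- createCharacterizationSettings(
  timeToEventSettings = createTimeToEventSettings(
    targetIds = c(1, 2), # hypothetical cohort ids
    outcomeIds = 3
  )
)

settingsFile <- file.path(tempdir(), "characterizationSettings.json")

# Write the settings as json, then read them back later
saveCharacterizationSettings(settings = settings, fileName = settingsFile)
settingsReloaded <- loadCharacterizationSettings(fileName = settingsFile)
```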
diff --git a/docs/reference/saveDechallengeRechallengeAnalyses.html b/docs/reference/saveDechallengeRechallengeAnalyses.html
deleted file mode 100644
index 07b899e..0000000
--- a/docs/reference/saveDechallengeRechallengeAnalyses.html
+++ /dev/null
@@ -1,124 +0,0 @@
[deleted pkgdown page: Save the DechallengeRechallenge results — saveDechallengeRechallengeAnalyses • Characterization]

    Save the DechallengeRechallenge results

    Usage: saveDechallengeRechallengeAnalyses(result, fileName)

    Arguments:
        result: The output of running computeDechallengeRechallengeAnalyses().
        fileName: The file to save the results into.

    Value: A string specifying the directory the results are saved to.

    Site built with pkgdown 2.0.7.
diff --git a/docs/reference/saveRechallengeFailCaseSeriesAnalyses.html b/docs/reference/saveRechallengeFailCaseSeriesAnalyses.html
deleted file mode 100644
index c984b25..0000000
--- a/docs/reference/saveRechallengeFailCaseSeriesAnalyses.html
+++ /dev/null
@@ -1,124 +0,0 @@
[deleted pkgdown page: Save the RechallengeFailCaseSeries results — saveRechallengeFailCaseSeriesAnalyses • Characterization]

    Save the RechallengeFailCaseSeries results

    Usage: saveRechallengeFailCaseSeriesAnalyses(result, fileName)

    Arguments:
        result: The output of running computeRechallengeFailCaseSeriesAnalyses().
        fileName: The file to save the results into.

    Value: A string specifying the directory the results are saved to.

    Site built with pkgdown 2.0.7.
diff --git a/docs/reference/saveTimeToEventAnalyses.html b/docs/reference/saveTimeToEventAnalyses.html
deleted file mode 100644
index 33c43dd..0000000
--- a/docs/reference/saveTimeToEventAnalyses.html
+++ /dev/null
@@ -1,124 +0,0 @@
[deleted pkgdown page: Save the TimeToEvent results — saveTimeToEventAnalyses • Characterization]

    Save the TimeToEvent results

    Usage: saveTimeToEventAnalyses(result, fileName)

    Arguments:
        result: The output of running computeTimeToEventAnalyses().
        fileName: The file to save the results into.

    Value: A string specifying the directory the results are saved to.

    Site built with pkgdown 2.0.7.
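The save*/load* pairs documented in these removed pages are symmetric. A short sketch of the round trip for the time-to-event results, assuming `tteResult` already holds the output of `computeTimeToEventAnalyses()` and using a hypothetical path:

```r
library(Characterization)

resultFile <- file.path(tempdir(), "timeToEvent.csv") # hypothetical path

# tteResult is assumed to be the output of computeTimeToEventAnalyses()
saveTimeToEventAnalyses(result = tteResult, fileName = resultFile)

# Later, or on another machine, read the same results back as a data.frame
tteReloaded <- loadTimeToEventAnalyses(fileName = resultFile)
```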
diff --git a/docs/reference/viewCharacterization.html b/docs/reference/viewCharacterization.html
deleted file mode 100644
index 702c37a..0000000
--- a/docs/reference/viewCharacterization.html
+++ /dev/null
@@ -1,131 +0,0 @@
[deleted pkgdown page: viewCharacterization - Interactively view the characterization results — viewCharacterization • Characterization]

    This is a shiny app for viewing interactive plots and tables.

    Usage: viewCharacterization(resultFolder, cohortDefinitionSet = NULL)

    Arguments:
        resultFolder: The location of the csv results.
        cohortDefinitionSet: The cohortDefinitionSet extracted using webAPI.

    Value: Opens a shiny app for interactively viewing the results.

    Details: Input is the output of ...

    Site built with pkgdown 2.0.7.
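A sketch of launching the viewer documented on the removed page above against the csv folder written by `runCharacterizationAnalyses()`; the folder path is hypothetical, and `cohortDefinitionSet` is left at its documented NULL default:

```r
library(Characterization)

# Hypothetical folder: wherever runCharacterizationAnalyses() wrote its csv files
resultFolder <- file.path(tempdir(), "characterization")

# Opens the shiny app for interactive plots and tables
viewCharacterization(
  resultFolder = resultFolder,
  cohortDefinitionSet = NULL
)
```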
diff --git a/docs/sitemap.xml b/docs/sitemap.xml
deleted file mode 100644
index 3138ade..0000000
--- a/docs/sitemap.xml
+++ /dev/null
@@ -1,135 +0,0 @@
[deleted sitemap.xml: <url><loc> entries for /404.html, /articles/InstallationGuide.html, /articles/Specification.html, /articles/UsingCharacterizationPackage.html, /articles/UsingPackage.html, /articles/index.html, /authors.html, /index.html, /news/index.html, /pull_request_template.html, and the /reference/ pages for Characterization-package, cleanIncremental, cleanNonIncremental, computeAggregateCovariateAnalyses, computeDechallengeRechallengeAnalyses, computeRechallengeFailCaseSeriesAnalyses, computeTimeToEventAnalyses, createAggregateCovariateSettings, createCharacterizationSettings, createCharacterizationTables, createDechallengeRechallengeSettings, createDuringCovariateSettings, createSqliteDatabase, createTimeToEventSettings, exportAggregateCovariateToCsv, exportDatabaseToCsv, exportDechallengeRechallengeToCsv, exportRechallengeFailCaseSeriesToCsv, exportTimeToEventToCsv, getDbDuringCovariateData, index, insertResultsToDatabase, loadAggregateCovariateAnalyses, loadCharacterizationSettings, loadDechallengeRechallengeAnalyses, loadRechallengeFailCaseSeriesAnalyses, loadTimeToEventAnalyses, runCharacterizationAnalyses, saveAggregateCovariateAnalyses, saveCharacterizationSettings, saveDechallengeRechallengeAnalyses, saveRechallengeFailCaseSeriesAnalyses, saveTimeToEventAnalyses, viewCharacterization]
diff --git a/extras/Characterization.pdf b/extras/Characterization.pdf
deleted file mode 100644
index 9051939..0000000
Binary files a/extras/Characterization.pdf and /dev/null differ
diff --git a/extras/PackageMaintenance.R b/extras/PackageMaintenance.R
index 2118769..ac92bae 100644
--- a/extras/PackageMaintenance.R
+++ b/extras/PackageMaintenance.R
@@ -20,24 +20,3 @@ OhdsiRTools::checkUsagePackage("Characterization")
 OhdsiRTools::updateCopyrightYearFolder()
 OhdsiRTools::findNonAsciiStringsInFolder()
 devtools::spell_check()
-
-# Create manual and vignettes:
-unlink("extras/Characterization.pdf")
-system("R CMD Rd2pdf ./ --output=extras/Characterization.pdf")
-
-pkgdown::build_site()
-OhdsiRTools::fixHadesLogo()
-
-
-rmarkdown::render("vignettes/UsingCharacterizationPackage.Rmd",
-                  output_file = "../inst/doc/UsingCharacterizationPackage.pdf",
-                  rmarkdown::pdf_document(latex_engine = "pdflatex",
-                                          toc = TRUE,
-                                          number_sections = TRUE))
-
-rmarkdown::render("vignettes/InstallationGuide.Rmd",
-                  output_file = "../inst/doc/InstallationGuide.pdf",
-                  rmarkdown::pdf_document(latex_engine = "pdflatex",
-                                          toc = TRUE,
-                                          number_sections = TRUE))
-
diff --git a/vignettes/InstallationGuide.Rmd b/vignettes/InstallationGuide.Rmd
index dd1bd41..a2ad781 100644
--- a/vignettes/InstallationGuide.Rmd
+++ b/vignettes/InstallationGuide.Rmd
@@ -12,21 +12,14 @@
 header-includes:
     - \renewcommand{\headrulewidth}{0.4pt}
     - \renewcommand{\footrulewidth}{0.4pt}
 output:
-  pdf_document:
-    includes:
-      in_header: preamble.tex
-    number_sections: yes
-    toc: yes
-  word_document:
-    toc: yes
   html_document:
     number_sections: yes
     toc: yes
+vignette: >
+  %\VignetteIndexEntry{Installation_guide}
+  %\VignetteEngine{knitr::knitr}
+  %\VignetteEncoding{UTF-8}
 ---
-
 # Introduction

 This vignette describes how you need to install the Observational Health Data Sciences and Informatics (OHDSI) [`Characterization`](http://github.com/OHDSI/Characterization) package under Windows, Mac, and Linux.
diff --git a/vignettes/Specification.Rmd b/vignettes/Specification.Rmd
index 747186e..c192764 100644
--- a/vignettes/Specification.Rmd
+++ b/vignettes/Specification.Rmd
@@ -15,21 +15,12 @@ output:
   html_document:
     number_sections: yes
     toc: yes
-  word_document:
-    toc: yes
-  pdf_document:
-    includes:
-      in_header: preamble.tex
-    number_sections: yes
-    toc: yes
+vignette: >
+  %\VignetteIndexEntry{Specification}
+  %\VignetteEngine{knitr::knitr}
+  %\VignetteEncoding{UTF-8}
 ---
-```{=html}
-
-```

 # Time-to-event

 ## Inputs
diff --git a/vignettes/UsingPackage.Rmd b/vignettes/UsingPackage.Rmd
index 809f6bc..7823006 100644
--- a/vignettes/UsingPackage.Rmd
+++ b/vignettes/UsingPackage.Rmd
@@ -12,21 +12,14 @@ header-includes:
     - \renewcommand{\headrulewidth}{0.4pt}
     - \renewcommand{\footrulewidth}{0.4pt}
 output:
-  pdf_document:
-    includes:
-      in_header: preamble.tex
-    number_sections: yes
-    toc: yes
-  word_document:
-    toc: yes
   html_document:
     number_sections: yes
     toc: yes
+vignette: >
+  %\VignetteIndexEntry{Using_Package}
+  %\VignetteEngine{knitr::knitr}
+  %\VignetteEncoding{UTF-8}
 ---
-
 # Introduction

 This vignette describes how you can use the Characterization package for various descriptive studies using OMOP CDM data. The Characterization package currently contains three different types of analyses:
@@ -40,8 +33,7 @@ This vignette describes how you can use the Characterization package for various
 In this vignette we will show working examples using the `Eunomia` R package that contains simulated data. Run the following code to install the `Eunomia` R package:

 ```{r tidy=TRUE,eval=FALSE}
-install.packages("remotes")
-remotes::install_github("ohdsi/Eunomia")
+install.packages(c("Eunomia", "remotes"))
 ```

 Eunomia can be used to create a temporary SQLITE database with the simulated data. The function `getEunomiaConnectionDetails` creates a SQLITE connection to a temporary location. The function `createCohorts` then populates the temporary SQLITE database with the simulated data ready to be used.
@@ -53,8 +45,7 @@ Eunomia::createCohorts(connectionDetails = connectionDetails)
 We also need to have the Characterization package installed and loaded

 ```{r tidy=TRUE,eval=FALSE}
-remotes::install_github("ohdsi/FeatureExtraction")
-remotes::install_github("ohdsi/Characterization", ref = "new_approach")
+remotes::install_github("ohdsi/Characterization")
 ```

 ```{r tidy=TRUE,eval=TRUE}
diff --git a/vignettes/preamble.tex b/vignettes/preamble.tex
deleted file mode 100644
index 2040267..0000000
--- a/vignettes/preamble.tex
+++ /dev/null
@@ -1,8 +0,0 @@
-\usepackage{float}
-\let\origfigure\figure
-\let\endorigfigure\endfigure
-\renewenvironment{figure}[1][2] {
-    \expandafter\origfigure\expandafter[H]
-} {
-    \endorigfigure
-}
\ No newline at end of file