Remove angle brackets ref gesistsa/usemh#12
chainsawriot committed Dec 12, 2024
1 parent ff0561d commit 0bf6990
Showing 2 changed files with 3 additions and 3 deletions.
methodshub.md (4 changes: 2 additions & 2 deletions)
@@ -13,12 +13,12 @@ package provides functions for validating topic models using word
intrusion, topic intrusion (Chang et al. 2009,
<https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>)
and word set intrusion (Ying et al. 2021)
-[\<doi:10.1017/pan.2021.33\>](https://doi.org/10.1017/pan.2021.33)
+[doi:10.1017/pan.2021.33](https://doi.org/10.1017/pan.2021.33)
tests. This package also provides functions for generating gold-standard
data which are useful for validating dictionary-based methods. The
default settings of all generated tests match those suggested in Chang
et al. (2009) and Song et al. (2020)
-[\<doi:10.1080/10584609.2020.1723752\>](https://doi.org/10.1080/10584609.2020.1723752).
+[doi:10.1080/10584609.2020.1723752](https://doi.org/10.1080/10584609.2020.1723752).

## Keywords

methodshub.qmd (2 changes: 1 addition & 1 deletion)
@@ -10,7 +10,7 @@ format:

<!-- - Provide a brief and clear description of the method, its purpose, and what it aims to achieve. Add a link to a related paper from social science domain and show how your method can be applied to solve that research question. -->

-Intended to create standard human-in-the-loop validity tests for typical automated content analysis such as topic modeling and dictionary-based methods. This package offers a standard workflow with functions to prepare, administer and evaluate a human-in-the-loop validity test. This package provides functions for validating topic models using word intrusion, topic intrusion (Chang et al. 2009, <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>) and word set intrusion (Ying et al. 2021) [<doi:10.1017/pan.2021.33>](https://doi.org/10.1017/pan.2021.33) tests. This package also provides functions for generating gold-standard data which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020) [<doi:10.1080/10584609.2020.1723752>](https://doi.org/10.1080/10584609.2020.1723752).
+Intended to create standard human-in-the-loop validity tests for typical automated content analysis such as topic modeling and dictionary-based methods. This package offers a standard workflow with functions to prepare, administer and evaluate a human-in-the-loop validity test. This package provides functions for validating topic models using word intrusion, topic intrusion (Chang et al. 2009, <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>) and word set intrusion (Ying et al. 2021) [doi:10.1017/pan.2021.33](https://doi.org/10.1017/pan.2021.33) tests. This package also provides functions for generating gold-standard data which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020) [doi:10.1080/10584609.2020.1723752](https://doi.org/10.1080/10584609.2020.1723752).

## Keywords

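The description being edited above outlines a prepare/administer/evaluate workflow for human-in-the-loop validation of topic models. As a rough illustration only — assuming the package described is the oolong R package, whose CRAN description this text matches, and using a hypothetical fitted topic model object `fitted_lda` — a word intrusion test might be run like this:

```r
## Minimal sketch, not part of the commit above. Assumes the package described
## is oolong; `fitted_lda` is a hypothetical fitted topic model (e.g. from
## topicmodels, stm, or keyATM).
library(oolong)

## Prepare: generate a word intrusion test from the fitted model
wi_test <- wi(fitted_lda)

## Administer: a human coder answers the test items interactively
wi_test$do_word_intrusion_test()

## Evaluate: lock the test, then print it to see the result
wi_test$lock()
wi_test
```

In the word intrusion test of Chang et al. (2009), the coder tries to spot the intruder word in each word set, and the result is summarized as model precision: the share of intruders correctly identified.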
