diff --git a/CHANGELOG.md b/CHANGELOG.md index 2d5983c70..8a1d99ebc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -35,6 +35,7 @@ - Settings: Settings - Part-of-speeach Tagging - Tagsets - Mapping Settings - Allow editing of tagset mapping of Stanza's Armenian (Eastern), Armenian (Western), Basque, Buryat (Russia), Danish, French, Greek (Modern), Hebrew (Modern), Hungarian, Ligurian, Manx, Marathi, Nigerian Pidgin, Pomak, Portuguese, Russian, Sanskrit, Sindhi, Sorbian (Upper), and Telugu part-of-speech taggers - Utils: Update custom stop word lists - Work Area: Dependency Parser - Sentence - Highlight heads and dependents +- Work Area: Update Profiler - Readability - OSMAN ### 📌 Bugfixes - Utils: Fix downloading of Stanza models diff --git a/doc/doc.md b/doc/doc.md index 2ea2f257d..b28c2db8a 100644 --- a/doc/doc.md +++ b/doc/doc.md @@ -37,8 +37,8 @@ - [4.2 Supported File Types](#doc-4-2) - [4.3 Supported File Encodings](#doc-4-3) - [4.4 Supported Measures](#doc-4-4) - - [4.4.1 Measures of Readability](#doc-4-4-1) - - [4.4.2 Measures of Lexical Diversity](#doc-4-4-2) + - [4.4.1 Readability Formulas](#doc-4-4-1) + - [4.4.2 Indicators of Lexical Diversity](#doc-4-4-2) - [4.4.3 Measures of Dispersion and Adjusted Frequency](#doc-4-4-3) - [4.4.4 Tests of Statistical Significance, Measures of Bayes Factor, and Measures of Effect Size](#doc-4-4-4) - [5 References](#doc-5) @@ -116,7 +116,7 @@ In *Profiler*, you can check and compare general linguistic features of differen All statistics are grouped into 5 tables for better readability: Readability, Counts, Lexical Diversity, Lengths, Length Breakdown. - **3.1.1 Readability**
- Readability statistics of each file calculated according to the different readability tests used. See section [4.4.1 Measures of Readability](#doc-4-4-1) for more details.
+ Readability statistics of each file calculated according to the different readability formulas used. See section [4.4.1 Readability Formulas](#doc-4-4-1) for more details.
- **3.1.2 Counts**&#13;
- **3.1.2.1 Count of Paragraphs**
@@ -164,7 +164,7 @@ All statistics are grouped into 5 tables for better readability: Readability, Co The percentage of the number of characters in each file out of the total number of characters in all files. - **3.1.3 Lexical Diversity**
- Statistics of lexical diversity which reflect the the extend to which the vocabulary used in each file varies. See section [4.4.2 Measures of Lexical Diversity](#doc-4-4-2) for more details.
+ Statistics of lexical diversity which reflect the extent to which the vocabulary used in each file varies. See section [4.4.2 Indicators of Lexical Diversity](#doc-4-4-2) for more details.
- **3.1.4 Lengths**&#13;
- **3.1.4.1 Paragraph Length in Sentences / Sentence Segments / Tokens (Mean)**
@@ -322,7 +322,7 @@ All statistics are grouped into 5 tables for better readability: Readability, Co ### [3.2 Concordancer](#doc) In *Concordancer*, you can search for tokens in different files and generate concordance lines. You can adjust settings for data generation via **Generation Settings**. -After the concordance lines are generated and displayed in the table, you can sort the results by clicking **Sort Results** or search in results by clicking **Search in Results**, both buttons residing at the right corner of the *Results Area*. Highlight colors for sorting can be modified via **Menu Bar → Preferences → Settings → Tables → Concordancer → Sorting**. +After the concordance lines are generated and displayed in the table, you can sort the results by clicking **Sort Results** or search in *Data Table* for parts that might be of interest to you by clicking **Search in results**. Highlight colors for sorting can be modified via **Menu Bar → Preferences → Settings → Tables → Concordancer → Sorting**. You can generate concordance plots for all search terms. You can modify the settings for the generated figure via **Figure Settings**. @@ -373,7 +373,7 @@ You can generate concordance plots for all search terms. You can modify the sett In *Parallel Concordancer*, you can search for tokens in parallel corpora and generate parallel concordance lines. You may leave **Search Settings → Search Term** blank so as to search for instances of additions and deletions. -After the parallel concordance lines are generated and displayed in the table, you can search in results by clicking **Search in Results** which resides at the right corner of the *Results Area*. +You can search in *Data Table* for parts that might be of interest to you by clicking **Search in results**. - **3.3.1 Parallel Unit No.**
The position of the alignment unit (paragraph) where the the search term is found. @@ -393,7 +393,7 @@ After the parallel concordance lines are generated and displayed in the table, y In *Dependency Parser*, you can search for all dependency relations associated with different tokens and calculate their dependency lengths (distances). -You can search in the results for the part that might be of interest to you by clicking **Search in Results** which resides at the right corner of the *Results Area*. +You can search in *Data Table* for parts that might be of interest to you by clicking **Search in results**. You can select lines in the *Results Area* and then click *Generate Figure* to show dependency graphs for all selected sentences. You can modify the settings for the generated figure via **Figure Settings** and decide how the figures should be displayed. @@ -430,7 +430,7 @@ You can select lines in the *Results Area* and then click *Generate Figure* to s In *Wordlist Generator*, you can generate wordlists for different files and calculate the raw frequency, relative frequency, dispersion and adjusted frequency for each token. You can disable the calculation of dispersion and/or adjusted frequency by setting **Generation Settings → Measures of Dispersion / Measure of Adjusted Frequency** to **None**. -You can further filter the results as you see fit by clicking **Filter Results** or search in the results for the part that might be of interest to you by clicking **Search in Results**, both buttons residing at the right corner of the *Results Area*. +You can filter the results by clicking **Filter results** or search in *Data Table* for parts that might be of interest to you by clicking **Search in results**. You can generate line charts or word clouds for wordlists using any statistics. You can modify the settings for the generated figure via **Figure Settings**. @@ -469,9 +469,9 @@ You can generate line charts or word clouds for wordlists using any statistics. 
In *N-gram Generator*, you can search for n-grams (consecutive tokens) or skip-grams (non-consecutive tokens) in different files, count and compute the raw frequency and relative frequency of each n-gram/skip-gram, and calculate the dispersion and adjusted frequency for each n-gram/skip-gram using different measures. You can adjust the settings for the generated results via **Generation Settings**. You can disable the calculation of dispersion and/or adjusted frequency by setting **Generation Settings → Measures of Dispersion / Measure of Adjusted Frequency** to **None**. To allow skip-grams in the results, check **Generation Settings → Allow skipped tokens** and modify the settings. You can also set constraints on the position of search terms in all n-grams via **Search Settings → Search Term Position**. -You can generate line charts or word clouds for n-grams using any statistics. You can modify the settings for the generated figure via **Figure Settings**. +You can filter the results by clicking **Filter results** or search in *Data Table* for parts that might be of interest to you by clicking **Search in results**. -You can further filter the results as you see fit by clicking **Filter Results** or search in the results for the part that might be of interest to you by clicking **Search in Results**, both buttons residing at the right corner of the *Results Area*. +You can generate line charts or word clouds for n-grams using any statistics. You can modify the settings for the generated figure via **Figure Settings**. - **3.6.1 Rank**
The rank of the n-gram sorted by its frequency in the first file in descending order (by default). You can sort the results again by clicking the column headers. You can use continuous numbering after tied ranks (eg. 1/1/1/2/2/3 instead of 1/1/1/4/4/6) by checking **Menu Bar → Preferences → Settings → Tables → Rank Settings → Continue numbering after ties**. @@ -501,9 +501,9 @@ You can further filter the results as you see fit by clicking **Filter Results** In *Collocation Extractor*, you can search for patterns of collocation (tokens that co-occur more often than would be expected by chance) within a given collocational window (from 5 words to the left to 5 words to the right by default), conduct different tests of statistical significance on each pair of collocates and calculate the Bayes factor and effect size for each pair using different measures. You can adjust the settings for the generated results via **Generation Settings**. You can disable the calculation of statistical significance and/or Bayes factor and/or effect size by setting **Generation Settings → Test of Statistical Significance / Measures of Bayes Factor / Measure of Effect Size** to **None**. -You can generate line charts, word clouds, and network graphs for patterns of collocation using any statistics. You can modify the settings for the generated figure via **Figure Settings**. +You can filter the results by clicking **Filter results** or search in *Data Table* for parts that might be of interest to you by clicking **Search in results**. -You can further filter the results as you see fit by clicking **Filter Results** or search in the results for the part that might be of interest to you by clicking **Search in Results**, both buttons residing at the right corner of the *Results Area*. +You can generate line charts, word clouds, and network graphs for patterns of collocation using any statistics. You can modify the settings for the generated figure via **Figure Settings**. - **3.7.1 Rank**
The rank of the collocating token sorted by the p-value of the significance test conducted on the node and the collocating token in the first file in ascending order (by default). You can sort the results again by clicking the column headers. You can use continuous numbering after tied ranks (eg. 1/1/1/2/2/3 instead of 1/1/1/4/4/6) by checking **Menu Bar → Preferences → Settings → Tables → Rank Settings → Continue numbering after ties**. @@ -549,9 +549,9 @@ In *Colligation Extractor*, you can search for patterns of colligation (parts of *Wordless* will automatically apply its built-in part-of-speech tagger on every file that are not part-of-speech-tagged already according to the language of each file. If part-of-speech tagging is not supported for the given languages, the user should provide a file that has already been part-of-speech-tagged and make sure that the correct **Text Type** has been set on each file. -You can generate line charts or word clouds for patterns of colligation using any statistics. You can modify the settings for the generated figure via **Figure Settings**. +You can filter the results by clicking **Filter results** or search in *Data Table* for parts that might be of interest to you by clicking **Search in results**. -You can further filter the results as you see fit by clicking **Filter Results** or search in the results for the part that might be of interest to you by clicking **Search in Results**, both buttons residing at the right corner of the *Results Area*. +You can generate line charts or word clouds for patterns of colligation using any statistics. You can modify the settings for the generated figure via **Figure Settings**. - **3.8.1 Rank**
The rank of the collocating part of speech sorted by the p-value of the significance test conducted on the node and the collocating part of speech in the first file in ascending order (by default). You can sort the results again by clicking the column headers. You can use continuous numbering after tied ranks (eg. 1/1/1/2/2/3 instead of 1/1/1/4/4/6) by checking **Menu Bar → Preferences → Settings → Tables → Rank Settings → Continue numbering after ties**. @@ -595,21 +595,21 @@ You can further filter the results as you see fit by clicking **Filter Results** In *Keyword Extractor*, you can search for candidates of potential keywords (tokens that have far more or far less frequency in the observed file than in the reference file) in different files given a reference corpus, conduct different tests of statistical significance on each keyword and calculate the Bayes factor and effect size for each keyword using different measures. You can adjust the settings for the generated data via **Generation Settings**. You can disable the calculation of statistical significance and/or Bayes factor and/or effect size by setting **Generation Settings → Test of Statistical Significance / Measures of Bayes Factor / Measure of Effect Size** to **None**. -You can generate line charts or word clouds for keywords using any statistics. You can modify the settings for the generated figure via **Figure Settings**. +You can filter the results by clicking **Filter results** or search in *Data Table* for parts that might be of interest to you by clicking **Search in results**. -You can further filter the results as you see fit by clicking **Filter Results** or search in the results for the part that might be of interest to you by clicking **Search in Results**, both buttons residing at the right corner of the *Results Area*. +You can generate line charts or word clouds for keywords using any statistics. You can modify the settings for the generated figure via **Figure Settings**. 
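The significance testing described above compares a token's frequency in each observed file against its frequency in the reference file. A minimal sketch of one common keyness statistic, the log-likelihood ratio in its two-cell corpus-linguistics form (the function name and formulation here are illustrative, not Wordless's actual implementation):

```python
import math

def log_likelihood_keyness(freq_obs, freq_ref, tokens_obs, tokens_ref):
    # Expected frequencies under the null hypothesis that the token
    # occurs at the same rate in the observed and reference files
    total = tokens_obs + tokens_ref
    expected_obs = tokens_obs * (freq_obs + freq_ref) / total
    expected_ref = tokens_ref * (freq_obs + freq_ref) / total

    ll = 0.0

    # Sum observed * ln(observed / expected) over the two cells,
    # skipping zero-frequency cells (x * ln x -> 0 as x -> 0)
    for observed, expected in (
        (freq_obs, expected_obs),
        (freq_ref, expected_ref)
    ):
        if observed:
            ll += observed * math.log(observed / expected)

    return 2 * ll
```

A larger value indicates a stronger deviation from the reference file; the statistic is asymptotically χ²-distributed with one degree of freedom, so values above 3.84 are conventionally taken as significant at p < 0.05.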
- **3.9.1 Rank**
The rank of the keyword sorted by the p-value of the significance test conducted on the keyword in the first file in ascending order (by default). You can sort the results again by clicking the column headers. You can use continuous numbering after tied ranks (eg. 1/1/1/2/2/3 instead of 1/1/1/4/4/6) by checking **Menu Bar → Preferences → Settings → Tables → Rank Settings → Continue numbering after ties**. - **3.9.2 Keyword**
- The candidates of potential keywords. You can specify what should be counted as a "token" via **Token Settings**. + The potential keyword. You can specify what should be counted as a "token" via **Token Settings**. - **3.9.3 Frequency (in Reference File)**
- The number of co-occurrences of the keywords in the reference file. + The number of occurrences of the keyword in the reference file. - **3.9.4 Frequency (in Observed Files)**
- The number of co-occurrences of the keywords in each observed file. + The number of occurrences of the keyword in each observed file. - **3.9.5 Test Statistic**
The test statistic of the significance test conducted on the keyword in each file. You can change the test of statistical significance used via **Generation Settings → Test of Statistical Significance**. See section [4.4.4 Tests of Statistical Significance, Measures of Bayes Factor, & Measures of Effect Size](#doc-4-4-4) for more details. @@ -900,7 +900,7 @@ Vietnamese |CP1258 |✔ ### [4.4 Supported Measures](#doc) -#### [4.4.1 Measures of Readability](#doc) +#### [4.4.1 Readability Formulas](#doc) The readability of a text depends on several variables including the average sentence length, average word length in characters, average word length in syllables, number of monosyllabic words, number of polysyllabic words, number of difficult words, etc. It should be noted that some readability measures are **language-specific**, or applicable only to texts in languages for which *Wordless* have **built-in syllable tokenization support** (check [4.4.1](#doc-4-1) for reference), while others can be applied to texts in all languages. @@ -909,12 +909,9 @@ The following variables would be used in formulas:
**NumSentences**: Number of sentences
**NumWords**: Number of words
**NumWords1Syl**: Number of monosyllabic words
-**NumWords2+Syls**: Number of words with 2 or more syllables
-**NumWords3+Syls**: Number of words with 3 or more syllables
-**NumWords5+Syls**: Number of words with 5 or more syllables
-**NumWords6+Ltrs**: Number of words with 6 or more letters
-**NumWords7+Ltrs**: Number of words with 7 or more letters
-**NumWords3-Ltrs**: Number of words with 3 or less letters
+**NumWordsn+Syls**: Number of words with n or more syllables
+**NumWordsn+Ltrs**: Number of words with n or more letters
+**NumWordsn-Ltrs**: Number of words with n or fewer letters&#13;
**NumConjs**: Number of conjunctions
**NumPreps**: Number of prepositions
**NumProns**: Number of pronouns
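The consolidated **NumWordsn+Syls**, **NumWordsn+Ltrs**, and **NumWordsn-Ltrs** variables introduced above are plain threshold counts. A minimal sketch, assuming syllable counts come from a language-specific syllable tokenizer (represented here as a word-to-syllable-count mapping) and approximating letters by character length:

```python
def num_words_min_syls(words, syls_per_word, n):
    # NumWordsn+Syls: number of words with n or more syllables
    return sum(1 for word in words if syls_per_word[word] >= n)

def num_words_min_ltrs(words, n):
    # NumWordsn+Ltrs: number of words with n or more letters
    # (character length is used as a stand-in for letter count)
    return sum(1 for word in words if len(word) >= n)

def num_words_max_ltrs(words, n):
    # NumWordsn-Ltrs: number of words with n or fewer letters
    return sum(1 for word in words if len(word) <= n)
```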
@@ -1118,8 +1115,8 @@ Wheeler & Smith's Readability Formula: \text{Wheeler-Smith} = \frac{\text{NumWords}}{\text{NumUnits}} \times \frac{\text{NumWords2+Syls}}{\text{NumWords}} \times 10 --> -Measure of Readability|Formula|Supported Languages -----------------------|-------|:-----------------: +Readability Formula|Formula|Supported Languages +-------------------|-------|:-----------------: Al-Heeti's Readability Prediction Formula¹
([Al-Heeti, 1984, pp. 102, 104, 106](#ref-al-heeti-1984))|![Formula](/doc/measures/readability/rd.svg)|**Arabic** Automated Arabic Readability Index
([Al-Tamimi et al., 2013](#ref-al-tamimi-et-al-2013))|![Formula](/doc/measures/readability/aari.svg)|**Arabic** Automated Readability Index¹
([Smith & Senter, 1967, p. 8](#ref-smith-senter-1967)
Navy: [Kincaid et al., 1975, p. 14](#ref-kincaid-et-al-1975))|![Formula](/doc/measures/readability/ari.svg)|All languages @@ -1150,7 +1147,7 @@ Measure of Readability|Formula|Supported Languages McAlpine EFLAW Readability Score
([Nirmaldasan, 2009](#ref-nirmaldasan-2009))|![Formula](/doc/measures/readability/eflaw.svg)|**English** neue Wiener Literaturformeln¹
([Bamberger & Vanecek, 1984, p. 82](#ref-bamberger-vanecek-1984))|![Formula](/doc/measures/readability/nwl.svg)|**German**² neue Wiener Sachtextformel¹
([Bamberger & Vanecek, 1984, pp. 83–84](#ref-bamberger-vanecek-1984))|![Formula](/doc/measures/readability/nws.svg)|**German**² -OSMAN
([El-Haj & Rayson, 2016](#ref-elhaj-rayson-2016))|![Formula](/doc/measures/readability/osman.svg)
where **NumFaseehWords** is the number of words with 5 or more syllable which contains ء/ئ/ؤ/ذ/ظ or ends with وا/ون.

* The number of syllables in each word is estimated by adding up the number of short syllables and twice the number of long and stress syllables in each word.|**Arabic** +OSMAN
([El-Haj & Rayson, 2016](#ref-elhaj-rayson-2016))|![Formula](/doc/measures/readability/osman.svg)
where **NumFaseehWords** is the number of words which have 5 or more syllables and contain ء/ئ/ؤ/ذ/ظ or end with وا/ون.

* The number of syllables in each word is estimated by adding up the number of short syllables and twice the number of long and stress syllables in each word.|**Arabic** Rix
([Anderson, 1983](#ref-anderson-1983))|![Formula](/doc/measures/readability/rix.svg)|All languages SMOG Grade
([McLaughlin, 1969](#ref-mclaughlin-1969)
German: [Bamberger & Vanecek, 1984, p.78](#ref-bamberger-vanecek-1984))|![Formula](/doc/measures/readability/smog_grade.svg)

* A sample would be constructed using **the first 10 sentences, the last 10 sentences, and the 10 sentences at the middle of the text**, so the text should be **at least 30 sentences long**.|All languages² Spache Grade Level¹
([Spache, 1953](#ref-spache-1953)
Revised: [Spache, 1974](#ref-spache-1974))|![Formula](/doc/measures/readability/spache_grade_level.svg)

* **Three samples each of 100 words** would be taken randomly from the text and the results would be averaged out, so the text should be **at least 100 words long**.|All languages @@ -1165,7 +1162,7 @@ Measure of Readability|Formula|Supported Languages > 1. Requires **built-in part-of-speech tagging support** -#### [4.4.2 Measures of Lexical Diversity](#doc) +#### [4.4.2 Indicators of Lexical Diversity](#doc) Lexical diversity is the measurement of the extent to which the vocabulary used in the text varies. The following variables would be used in formulas:
@@ -1246,18 +1243,18 @@ Yule's Index of Diversity: \text{Index of Diversity} = \frac{\text{NumTokens}^2}{\sum_{f = 1}^{\text{f}_\text{max}}(\text{NumTypes}_f \times f^2) - \text{NumTokens}} --> -Measure of Lexical Diversity|Formula ----------------------------|------- +Indicator of Lexical Diversity|Formula +------------------------------|------- Brunét's Index
([Brunét, 1978](#ref-brunet-1978))|![Formula](/doc/measures/lexical_diversity/brunets_index.svg) Corrected TTR
([Carroll, 1964](#ref-carroll-1964))|![Formula](/doc/measures/lexical_diversity/cttr.svg) Fisher's Index of Diversity
([Fisher et al., 1943](#ref-fisher-et-al-1943))|![Formula](/doc/measures/lexical_diversity/fishers_index_of_diversity.svg)
where W₋₁ is the -1 branch of the [Lambert W function](https://en.wikipedia.org/wiki/Lambert_W_function) Herdan's Vₘ
([Herdan, 1955](#ref-herdan-1955))|![Formula](/doc/measures/lexical_diversity/herdans_vm.svg) -HD-D
([McCarthy & Jarvis, 2010](#ref-mccarthy-jarvis-2010))|For detailed calculation procedures, see reference.
The sample size could be modified via **Menu Bar → Preferences → Settings → Measures → Type-token Ratio → HD-D → Sample size**. +HD-D
([McCarthy & Jarvis, 2010](#ref-mccarthy-jarvis-2010))|For detailed calculation procedures, see reference.
The sample size can be modified via **Menu Bar → Preferences → Settings → Measures → Lexical Diversity → HD-D → Sample size**.
Honoré's statistic&#13;
([Honoré, 1979](#ref-honore-1979))|![Formula](/doc/measures/lexical_diversity/honores_stat.svg) LogTTR¹
(Herdan: [Herdan, 1960, p. 28](#ref-herdan-1960)
Somers: [Somers, 1966](#ref-somers-1966)
Rubet: [Dugast, 1979](#ref-dugast-1979)
Maas: [Maas, 1972](#ref-maas-1972)
Dugast: [Dugast, 1978](#ref-dugast-1978); [Dugast, 1979](#ref-dugast-1979))|![Formula](/doc/measures/lexical_diversity/logttr.svg) -Mean Segmental TTR
([Johnson, 1944](#ref-johnson-1944))|![Formula](/doc/measures/lexical_diversity/msttr.svg)
where **n** is the number of equal-sized segment, the length of which could be modified via **Menu Bar → Preferences → Settings → Measures → Type-token Ratio → Mean Segmental TTR → Number of tokens in each segment**, **NumTypesSegᵢ** is the number of token types in the **i**-th segment, and **NumTokensSegᵢ** is the number of tokens in the **i**-th segment. -Measure of Textual Lexical Diversity
([McCarthy, 2005, pp. 95–96, 99–100](#ref-mccarthy-2005); [McCarthy & Jarvis, 2010](#ref-mccarthy-jarvis-2010))|For detailed calculation procedures, see references.
The factor size could be modified via **Menu Bar → Preferences → Settings → Measures → Type-token Ratio → Measure of Textual Lexical Diversity → Factor size**. -Moving-average TTR
([Covington & McFall, 2010](#ref-covington-mcfall-2010))|![Formula](/doc/measures/lexical_diversity/mattr.svg)
where **w** is the window size which could be modified via **Menu Bar → Preferences → Settings → Measures → Type-token Ratio → Moving-average TTR → Window size**, **NumTypesWindowₚ** is the number of token types within the moving window starting at position **p**, and **NumTokensWindowₚ** is the number of tokens within the moving window starting at position **p**. +Mean Segmental TTR
([Johnson, 1944](#ref-johnson-1944))|![Formula](/doc/measures/lexical_diversity/msttr.svg)
where **n** is the number of equal-sized segments, whose length can be modified via **Menu Bar → Preferences → Settings → Measures → Lexical Diversity → Mean Segmental TTR → Number of tokens in each segment**, **NumTypesSegᵢ** is the number of token types in the **i**-th segment, and **NumTokensSegᵢ** is the number of tokens in the **i**-th segment.
+Measure of Textual Lexical Diversity&#13;
([McCarthy, 2005, pp. 95–96, 99–100](#ref-mccarthy-2005); [McCarthy & Jarvis, 2010](#ref-mccarthy-jarvis-2010))|For detailed calculation procedures, see references.
The factor size can be modified via **Menu Bar → Preferences → Settings → Measures → Lexical Diversity → Measure of Textual Lexical Diversity → Factor size**.
+Moving-average TTR&#13;
([Covington & McFall, 2010](#ref-covington-mcfall-2010))|![Formula](/doc/measures/lexical_diversity/mattr.svg)
where **w** is the window size, which can be modified via **Menu Bar → Preferences → Settings → Measures → Lexical Diversity → Moving-average TTR → Window size**, **NumTypesWindowₚ** is the number of token types within the moving window starting at position **p**, and **NumTokensWindowₚ** is the number of tokens within the moving window starting at position **p**.
Popescu-Mačutek-Altmann's B₁/B₂/B₃/B₄/B₅&#13;
([Popescu et al., 2008](#ref-popescu-et-al-2008))|![Formula](/doc/measures/lexical_diversity/popescu_macutek_altmanns_b1_b2_b3_b4_b5.svg) Popescu's R₁
([Popescu, 2009, pp. 18, 30, 33](#ref-popescu-2009))|For detailed calculation procedures, see reference. Popescu's R₂
([Popescu, 2009, pp. 35–36, 38](#ref-popescu-2009))|For detailed calculation procedures, see reference. diff --git a/pylintrc b/pylintrc index 439653de1..daaffe2a4 100644 --- a/pylintrc +++ b/pylintrc @@ -32,9 +32,6 @@ disable= # C0301, C0302 line-too-long, too-many-lines, - # C0413, C0415 - wrong-import-position, - import-outside-toplevel, # R0401 cyclic-import, diff --git a/tests/tests_measures/test_measures_readability.py b/tests/tests_measures/test_measures_readability.py index 4eaa8c6e6..557bc8340 100644 --- a/tests/tests_measures/test_measures_readability.py +++ b/tests/tests_measures/test_measures_readability.py @@ -58,6 +58,7 @@ def __init__(self, tokens_multilevel, lang = 'eng_us'): test_text_ara_0 = Wl_Test_Text(TOKENS_MULTILEVEL_0, lang = 'ara') test_text_ara_12 = Wl_Test_Text(TOKENS_MULTILEVEL_12, lang = 'ara') +test_text_ara_faseeh = Wl_Test_Text([[[['\u064B\u064B\u0621']]]], lang = 'ara') test_text_deu_0 = Wl_Test_Text(TOKENS_MULTILEVEL_0, lang = 'deu_de') test_text_deu_12 = Wl_Test_Text(TOKENS_MULTILEVEL_12, lang = 'deu_de') @@ -685,18 +686,28 @@ def test_nws(): assert nws_deu_12_3 == 0.2963 * ms + 0.1905 * sl - 1.1144 assert nws_eng_12 == 'no_support' +def test__get_num_syls_ara(): + assert wl_measures_readability._get_num_syls_ara('') == 0 + assert wl_measures_readability._get_num_syls_ara('\u064E\u0627') == 2 + assert wl_measures_readability._get_num_syls_ara('\u064Ea') == 1 + assert wl_measures_readability._get_num_syls_ara('\u064E') == 1 + assert wl_measures_readability._get_num_syls_ara('\u064B') == 2 + def test_osman(): osman_ara_0 = wl_measures_readability.osman(main, test_text_ara_0) osman_ara_12 = wl_measures_readability.osman(main, test_text_ara_12) + osman_ara_faseeh = wl_measures_readability.osman(main, test_text_ara_faseeh) osman_eng_12 = wl_measures_readability.osman(main, test_text_eng_12) print('OSMAN:') print(f'\tara/0: {osman_ara_0}') print(f'\tara/12: {osman_ara_12}') + print(f'\tara/faseeh: {osman_ara_faseeh}') print(f'\teng/12: 
{osman_eng_12}') assert osman_ara_0 == 'text_too_short' - assert osman_ara_12 == 200.791 - 1.015 * (12 / 3) - 24.181 * ((3 + 23 + 3 + 0) / 12) + assert osman_ara_12 == 200.791 - 1.015 * (12 / 3) - 24.181 * ((3 + 26 + 3 + 0) / 12) + assert osman_ara_faseeh == 200.791 - 1.015 * (1 / 1) - 24.181 * ((0 + 5 + 1 + 1) / 1) assert osman_eng_12 == 'no_support' def test_rix(): @@ -857,6 +868,7 @@ def test_wheeler_smiths_readability_formula(): test_eflaw() test_nwl() test_nws() + test__get_num_syls_ara() test_osman() test_rix() test_smog_grade() diff --git a/tests/tests_utils/test_misc.py b/tests/tests_utils/test_misc.py index b334e7faf..5e963554e 100644 --- a/tests/tests_utils/test_misc.py +++ b/tests/tests_utils/test_misc.py @@ -46,6 +46,24 @@ def test_change_file_owner_to_user(): os.remove('test') +class Widget: + def parent(self): + return 'parent' + +def test_find_wl_main(): + class Widget: + def parent(self): + return main + + widget = Widget() + widget.main = 'test' + + assert wl_misc.find_wl_main(widget) == 'test' + + del widget.main + + assert wl_misc.find_wl_main(widget) == main + def test_get_wl_ver(): assert re.search(r'^[0-9]+\.[0-9]+\.[0-9]$', str(wl_misc.get_wl_ver())) @@ -109,6 +127,7 @@ def test_normalize_nums(): test_get_linux_distro() test_change_file_owner_to_user() + test_find_wl_main() test_get_wl_ver() test_wl_get_proxies() test_wl_download() diff --git a/tests/tests_utils/test_threading.py b/tests/tests_utils/test_threading.py new file mode 100644 index 000000000..e1cc0dd79 --- /dev/null +++ b/tests/tests_utils/test_threading.py @@ -0,0 +1,55 @@ +# ---------------------------------------------------------------------- +# Wordless: Tests - Utilities - Threading +# Copyright (C) 2018-2024 Ye Lei (叶磊) +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. 
+# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . +# ---------------------------------------------------------------------- + +from tests import wl_test_init +from wordless.wl_dialogs import wl_dialogs_misc +from wordless.wl_utils import wl_threading + +main = wl_test_init.Wl_Test_Main() + +def test_wl_worker(): + dialog_progress = wl_dialogs_misc.Wl_Dialog_Progress(main, 'test') + wl_threading.Wl_Worker(main, dialog_progress, lambda: None) + +def test_wl_worker_no_progress(): + wl_threading.Wl_Worker_No_Progress(main, lambda: None) + +def test_wl_worker_no_callback(): + dialog_progress = wl_dialogs_misc.Wl_Dialog_Progress(main, 'test') + wl_threading.Wl_Worker_No_Callback(main, dialog_progress) + +def test_wl_thread(): + dialog_progress = wl_dialogs_misc.Wl_Dialog_Progress(main, 'test') + worker = wl_threading.Wl_Worker(main, dialog_progress, lambda: None) + worker.run = lambda: None + + wl_threading.Wl_Thread(worker) + +def test_wl_thread_no_progress(): + worker = wl_threading.Wl_Worker_No_Progress(main, lambda: None) + worker.run = lambda: None + + wl_threading.Wl_Thread_No_Progress(worker).start_worker() + +if __name__ == '__main__': + test_wl_worker() + test_wl_worker_no_progress() + test_wl_worker_no_callback() + + test_wl_thread() + test_wl_thread_no_progress() diff --git a/tests/tests_widgets/__init__.py b/tests/tests_widgets/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/tests/tests_widgets/test_boxes.py b/tests/tests_widgets/test_boxes.py new file mode 100644 index 000000000..a65e56801 --- /dev/null +++ b/tests/tests_widgets/test_boxes.py @@ -0,0 +1,164 @@ +# 
---------------------------------------------------------------------- +# Wordless: Tests - Widgets - Boxes +# Copyright (C) 2018-2024 Ye Lei (叶磊) +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . +# ---------------------------------------------------------------------- + +from tests import wl_test_init +from wordless.wl_widgets import wl_boxes + +main = wl_test_init.Wl_Test_Main() + +def test_wl_combo_box(): + wl_boxes.Wl_Combo_Box(main) + +def test_wl_combo_box_adjustable(): + wl_boxes.Wl_Combo_Box_Adjustable(main) + +def test_wl_combo_box_enums(): + combo_box_enums = wl_boxes.Wl_Combo_Box_Enums(main, {'test1': 1, 'test2': 2}) + assert combo_box_enums.get_val() == 1 + + combo_box_enums.set_val(2) + assert combo_box_enums.get_val() == 2 + +def test_wl_combo_box_yes_no(): + combo_box_yes_no = wl_boxes.Wl_Combo_Box_Yes_No(main) + assert combo_box_yes_no.get_yes_no() + + combo_box_yes_no.set_yes_no(False) + assert not combo_box_yes_no.get_yes_no() + +def test_wl_combo_box_lang(): + wl_boxes.Wl_Combo_Box_Lang(main) + +def test_wl_combo_box_encoding(): + wl_boxes.Wl_Combo_Box_Encoding(main) + +def test_wl_combo_box_measure(): + mapping_measures = list(list(main.settings_global['mapping_measures'].values())[0].items()) + + combo_box_measure = wl_boxes.Wl_Combo_Box_Measure(main, list(main.settings_global['mapping_measures'])[0]) + assert combo_box_measure.get_measure() == mapping_measures[0][1] + + 
combo_box_measure.set_measure(mapping_measures[1][1]) + assert combo_box_measure.get_measure() == mapping_measures[1][1] + +def test_wl_combo_box_file_to_filter(): + table = wl_test_init.Wl_Test_Table(main) + table.settings['file_area']['files_open'] = [{'selected': True, 'name': 'test'}] + + combo_box_file_to_filter = wl_boxes.Wl_Combo_Box_File_To_Filter(main, table) + combo_box_file_to_filter.table_item_changed() + +def test_wl_combo_box_file(): + combo_box_file = wl_boxes.Wl_Combo_Box_File(main) + combo_box_file.wl_files_changed() + combo_box_file.get_file() + +def test_wl_combo_box_font_family(): + wl_boxes.Wl_Combo_Box_Font_Family(main) + +def test_wl_spin_box(): + wl_boxes.Wl_Spin_Box(main) + +def test_wl_spin_box_window(): + spin_box_window = wl_boxes.Wl_Spin_Box_Window(main) + spin_box_window.setValue(-100) + spin_box_window.stepBy(1) + spin_box_window.setValue(-100) + spin_box_window.stepBy(1) + +def test_wl_spin_box_font_size(): + wl_boxes.Wl_Spin_Box_Font_Size(main) + +def test_wl_spin_box_font_weight(): + wl_boxes.Wl_Spin_Box_Font_Weight(main) + +def test_wl_double_spin_box(): + wl_boxes.Wl_Double_Spin_Box(main) + +def test_wl_double_spin_box_alpha(): + wl_boxes.Wl_Double_Spin_Box_Alpha(main) + +def test_wl_spin_box_no_limit(): + _, checkbox_no_limit = wl_boxes.wl_spin_box_no_limit(main, double = True) + + checkbox_no_limit.setChecked(True) + checkbox_no_limit.setChecked(False) + + wl_boxes.wl_spin_box_no_limit(main, double = False) + +def test_wl_spin_boxes_min_max(): + spin_box_min, spin_box_max = wl_boxes.wl_spin_boxes_min_max(main, double = True) + + spin_box_min.setValue(100) + spin_box_max.setValue(1) + + wl_boxes.wl_spin_boxes_min_max(main, double = False) + +def test_wl_spin_boxes_min_max_no_limit(): + _, checkbox_min_no_limit, _, checkbox_max_no_limit = wl_boxes.wl_spin_boxes_min_max_no_limit(main, double = True) + + checkbox_min_no_limit.setChecked(True) + checkbox_min_no_limit.setChecked(False) + checkbox_max_no_limit.setChecked(True) + 
checkbox_max_no_limit.setChecked(False) + + wl_boxes.wl_spin_boxes_min_max_no_limit(main, double = False) + +def test_wl_spin_boxes_min_max_sync(): + checkbox_sync, _, spin_box_min, _, spin_box_max = wl_boxes.wl_spin_boxes_min_max_sync(main, double = True) + + checkbox_sync.setChecked(True) + spin_box_min.setValue(100) + spin_box_max.setValue(100) + + wl_boxes.wl_spin_boxes_min_max_sync(main, double = False) + +def test_wl_spin_boxes_min_max_sync_window(): + checkbox_sync, _, spin_box_left, _, spin_box_right = wl_boxes.wl_spin_boxes_min_max_sync_window(main) + + checkbox_sync.setChecked(True) + spin_box_left.setValue(100) + spin_box_right.setValue(100) + + wl_boxes.wl_spin_boxes_min_max_sync_window(main) + +if __name__ == '__main__': + test_wl_combo_box() + test_wl_combo_box_adjustable() + test_wl_combo_box_enums() + test_wl_combo_box_yes_no() + test_wl_combo_box_lang() + test_wl_combo_box_encoding() + test_wl_combo_box_measure() + test_wl_combo_box_file_to_filter() + test_wl_combo_box_file() + test_wl_combo_box_font_family() + + test_wl_spin_box() + test_wl_spin_box_window() + test_wl_spin_box_font_size() + test_wl_spin_box_font_weight() + + test_wl_double_spin_box() + test_wl_double_spin_box_alpha() + + test_wl_spin_box_no_limit() + test_wl_spin_boxes_min_max() + test_wl_spin_boxes_min_max_no_limit() + test_wl_spin_boxes_min_max_sync() + test_wl_spin_boxes_min_max_sync_window() diff --git a/tests/tests_widgets/test_buttons.py b/tests/tests_widgets/test_buttons.py new file mode 100644 index 000000000..af7ed6c06 --- /dev/null +++ b/tests/tests_widgets/test_buttons.py @@ -0,0 +1,47 @@ +# ---------------------------------------------------------------------- +# Wordless: Tests - Widgets - Buttons +# Copyright (C) 2018-2024 Ye Lei (叶磊) +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) 
any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see <https://www.gnu.org/licenses/>. +# ---------------------------------------------------------------------- + +from PyQt5.QtWidgets import QLineEdit + +from tests import wl_test_init +from wordless.wl_widgets import wl_buttons + +main = wl_test_init.Wl_Test_Main() + +def test_wl_button(): + wl_buttons.Wl_Button('test', main) + +def test_wl_button_browse(): + wl_buttons.Wl_Button_Browse(main, 'test', QLineEdit(), 'test', ['test']) + +def test_wl_button_color(): + button = wl_buttons.Wl_Button_Color(main) + button.get_color() + button.set_color('test') + + wl_buttons.wl_button_color(main, allow_transparent = True) + wl_buttons.wl_button_color(main, allow_transparent = False) + +def test_wl_button_restore_defaults(): + wl_buttons.Wl_Button_Restore_Defaults(main, 'test') + +if __name__ == '__main__': + test_wl_button() + test_wl_button_browse() + test_wl_button_color() + test_wl_button_restore_defaults() diff --git a/tests/tests_widgets/test_editors.py b/tests/tests_widgets/test_editors.py new file mode 100644 index 000000000..a3b2a6c1a --- /dev/null +++ b/tests/tests_widgets/test_editors.py @@ -0,0 +1,28 @@ +# ---------------------------------------------------------------------- +# Wordless: Tests - Widgets - Editors +# Copyright (C) 2018-2024 Ye Lei (叶磊) +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. 
+# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see <https://www.gnu.org/licenses/>. +# ---------------------------------------------------------------------- + +from tests import wl_test_init +from wordless.wl_widgets import wl_editors + +main = wl_test_init.Wl_Test_Main() + +def test_wl_text_browser(): + wl_editors.Wl_Text_Browser(main) + +if __name__ == '__main__': + test_wl_text_browser() diff --git a/tests/tests_widgets/test_item_delegates.py b/tests/tests_widgets/test_item_delegates.py new file mode 100644 index 000000000..57d15164d --- /dev/null +++ b/tests/tests_widgets/test_item_delegates.py @@ -0,0 +1,63 @@ +# ---------------------------------------------------------------------- +# Wordless: Tests - Widgets - Item delegates +# Copyright (C) 2018-2024 Ye Lei (叶磊) +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see <https://www.gnu.org/licenses/>. 
+# ---------------------------------------------------------------------- + +from PyQt5.QtWidgets import QComboBox + +from tests import wl_test_init +from wordless.wl_widgets import wl_item_delegates + +main = wl_test_init.Wl_Test_Main() + +def test_wl_item_delegate_uneditable(): + wl_item_delegates.Wl_Item_Delegate_Uneditable() + +def test_wl_item_delegate(): + item_delegate = wl_item_delegates.Wl_Item_Delegate(main, QComboBox) + item_delegate.createEditor(main, 'test', 'test') + item_delegate.set_enabled(True) + +def test_wl_item_delegate_combo_box(): + class Index: + def __init__(self, row, col): + self._row = row + self._col = col + + def row(self): + return self._row + + def column(self): + return self._col + + index_editable = Index(0, 0) + index_uneditable = Index(0, 1) + + item_delegate = wl_item_delegates.Wl_Item_Delegate_Combo_Box(main, row = 0, col = 0) + item_delegate.createEditor(main, 'test', index_editable) + assert item_delegate.createEditor(main, 'test', index_uneditable) is None + assert item_delegate.is_editable(index_editable) + assert not item_delegate.is_editable(index_uneditable) + +def test_wl_item_delegate_combo_box_custom(): + item_delegate = wl_item_delegates.Wl_Item_Delegate_Combo_Box_Custom(main, QComboBox) + item_delegate.createEditor(main, 'test', 'test') + +if __name__ == '__main__': + test_wl_item_delegate_uneditable() + test_wl_item_delegate() + test_wl_item_delegate_combo_box() + test_wl_item_delegate_combo_box_custom() diff --git a/tests/tests_widgets/test_labels.py b/tests/tests_widgets/test_labels.py new file mode 100644 index 000000000..eff9e087a --- /dev/null +++ b/tests/tests_widgets/test_labels.py @@ -0,0 +1,53 @@ +# ---------------------------------------------------------------------- +# Wordless: Tests - Widgets - Labels +# Copyright (C) 2018-2024 Ye Lei (叶磊) +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free 
Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see <https://www.gnu.org/licenses/>. +# ---------------------------------------------------------------------- + +from tests import wl_test_init +from wordless.wl_widgets import wl_labels + +main = wl_test_init.Wl_Test_Main() + +def test_wl_label(): + wl_labels.Wl_Label('test', main) + +def test_wl_label_important(): + wl_labels.Wl_Label_Important('test', main) + +def test_wl_label_hint(): + wl_labels.Wl_Label_Hint('test', main) + +def test_wl_label_html(): + wl_labels.Wl_Label_Html('test', main) + +def test_wl_label_html_centered(): + wl_labels.Wl_Label_Html_Centered('test', main) + +def test_wl_label_dialog(): + label = wl_labels.Wl_Label_Dialog('test', main) + label.set_text('test') + +def test_wl_label_dialog_no_wrap(): + wl_labels.Wl_Label_Dialog_No_Wrap('test', main) + +if __name__ == '__main__': + test_wl_label() + test_wl_label_important() + test_wl_label_hint() + test_wl_label_html() + test_wl_label_html_centered() + test_wl_label_dialog() + test_wl_label_dialog_no_wrap() diff --git a/tests/tests_widgets/test_layouts.py b/tests/tests_widgets/test_layouts.py new file mode 100644 index 000000000..a74ae1a77 --- /dev/null +++ b/tests/tests_widgets/test_layouts.py @@ -0,0 +1,59 @@ +# ---------------------------------------------------------------------- +# Wordless: Tests - Widgets - Layouts +# Copyright (C) 2018-2024 Ye Lei (叶磊) +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at 
your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see <https://www.gnu.org/licenses/>. +# ---------------------------------------------------------------------- + +from PyQt5.QtCore import Qt +from PyQt5.QtWidgets import QLabel + +from tests import wl_test_init +from wordless.wl_widgets import wl_layouts + +main = wl_test_init.Wl_Test_Main() + +def test_wl_layout(): + wl_layouts.Wl_Layout() + +def test_wl_wrapper(): + wrapper = wl_layouts.Wl_Wrapper(main) + wrapper.load_settings() + +def test_wl_wrapper_file_area(): + wl_layouts.Wl_Wrapper_File_Area(main) + +def test_wl_splitter(): + wl_layouts.Wl_Splitter(Qt.Vertical, main) + +def test_wl_scroll_area(): + wl_layouts.Wl_Scroll_Area(main) + +def test_wl_stacked_widget_resizable(): + stacked_widget = wl_layouts.Wl_Stacked_Widget_Resizable(main) + stacked_widget.addWidget(QLabel()) + stacked_widget.current_changed(0) + +def test_wl_separator(): + wl_layouts.Wl_Separator(main, orientation = 'hor') + wl_layouts.Wl_Separator(main, orientation = 'vert') + +if __name__ == '__main__': + test_wl_layout() + test_wl_wrapper() + test_wl_wrapper_file_area() + test_wl_splitter() + test_wl_scroll_area() + test_wl_stacked_widget_resizable() + test_wl_separator() diff --git a/tests/tests_widgets/test_widgets.py b/tests/tests_widgets/test_widgets.py new file mode 100644 index 000000000..64f7d1030 --- /dev/null +++ b/tests/tests_widgets/test_widgets.py @@ -0,0 +1,253 @@ +# ---------------------------------------------------------------------- +# Wordless: Tests - Widgets - Widgets +# Copyright (C) 2018-2024 Ye Lei (叶磊) +# +# This program is free software: you can redistribute it and/or modify +# it under the terms of the GNU General 
Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see <https://www.gnu.org/licenses/>. +# ---------------------------------------------------------------------- + +from PyQt5.QtWidgets import QTableView + +from tests import wl_test_init +from wordless.wl_widgets import wl_widgets + +main = wl_test_init.Wl_Test_Main() + +def test_wl_dialog_context_settings(): + dialog_context_settings = wl_widgets.Wl_Dialog_Context_Settings(main, tab = 'concordancer') + dialog_context_settings.multi_search_mode_changed() + + dialog_context_settings.incl_group_box.setChecked(True) + dialog_context_settings.excl_group_box.setChecked(True) + dialog_context_settings.token_settings_changed() + + dialog_context_settings.incl_group_box.setChecked(False) + dialog_context_settings.excl_group_box.setChecked(False) + dialog_context_settings.token_settings_changed() + + dialog_context_settings.load_settings(defaults = True) + + dialog_context_settings.settings_custom['incl']['context_window_left'] = -1 + dialog_context_settings.settings_custom['incl']['context_window_right'] = -1 + dialog_context_settings.settings_custom['excl']['context_window_left'] = -1 + dialog_context_settings.settings_custom['excl']['context_window_right'] = -1 + dialog_context_settings.load_settings(defaults = False) + + dialog_context_settings.settings_custom['incl']['context_window_left'] = 1 + dialog_context_settings.settings_custom['incl']['context_window_right'] = 1 + dialog_context_settings.settings_custom['excl']['context_window_left'] = 1 + 
dialog_context_settings.settings_custom['excl']['context_window_right'] = 1 + dialog_context_settings.load_settings(defaults = False) + + dialog_context_settings.incl_spin_box_context_window_left.setPrefix('L') + dialog_context_settings.incl_spin_box_context_window_right.setPrefix('L') + dialog_context_settings.excl_spin_box_context_window_left.setPrefix('L') + dialog_context_settings.excl_spin_box_context_window_right.setPrefix('L') + dialog_context_settings.save_settings() + + dialog_context_settings.incl_spin_box_context_window_left.setPrefix('R') + dialog_context_settings.incl_spin_box_context_window_right.setPrefix('R') + dialog_context_settings.excl_spin_box_context_window_left.setPrefix('R') + dialog_context_settings.excl_spin_box_context_window_right.setPrefix('R') + dialog_context_settings.save_settings() + +def test_wl_widgets_token_settings(): + ( + checkbox_words, _, _, _, _, _, + _, _, _, + checkbox_assign_pos_tags, checkbox_ignore_tags, checkbox_use_tags + ) = wl_widgets.wl_widgets_token_settings(main) + + checkbox_words.setChecked(True) + checkbox_words.setChecked(False) + + checkbox_assign_pos_tags.setChecked(True) + checkbox_assign_pos_tags.setChecked(False) + + checkbox_ignore_tags.setChecked(True) + checkbox_ignore_tags.setChecked(False) + + checkbox_use_tags.setChecked(True) + checkbox_use_tags.setChecked(False) + +def test_wl_widgets_token_settings_concordancer(): + _, checkbox_assign_pos_tags, checkbox_ignore_tags, checkbox_use_tags = wl_widgets.wl_widgets_token_settings_concordancer(main) + + checkbox_assign_pos_tags.setChecked(True) + checkbox_assign_pos_tags.setChecked(False) + + checkbox_ignore_tags.setChecked(True) + checkbox_ignore_tags.setChecked(False) + + checkbox_use_tags.setChecked(True) + checkbox_use_tags.setChecked(False) + +def test_wl_widgets_search_settings(): + ( + _, checkbox_multi_search_mode, + _, line_edit_search_term, _, _, + _, _, _, _, checkbox_match_without_tags, checkbox_match_tags + ) = 
wl_widgets.wl_widgets_search_settings(main, tab = 'concordancer') + + line_edit_search_term.setText('test') + checkbox_multi_search_mode.setChecked(True) + checkbox_multi_search_mode.setChecked(False) + + token_settings = main.settings_custom['concordancer']['token_settings'] + + token_settings['use_tags'] = True + checkbox_match_tags.token_settings_changed() + + token_settings['ignore_tags'] = False + token_settings['use_tags'] = False + checkbox_match_without_tags.setChecked(False) + checkbox_match_tags.setChecked(False) + checkbox_match_tags.token_settings_changed() + + checkbox_match_without_tags.setChecked(True) + + checkbox_match_tags.setChecked(True) + checkbox_match_tags.setChecked(False) + +def test_wl_widgets_search_settings_tokens(): + wl_widgets.wl_widgets_search_settings_tokens(main, tab = 'dependency_parser') + +def test_wl_widgets_context_settings(): + wl_widgets.wl_widgets_context_settings(main, tab = 'concordancer') + +def test_wl_widgets_measures_wordlist_generator(): + wl_widgets.wl_widgets_measures_wordlist_generator(main) + +def test_wl_widgets_measures_collocation_extractor(): + wl_widgets.wl_widgets_measures_collocation_extractor(main, tab = 'collocation_extractor') + +def test_wl_widgets_table_settings(): + table = QTableView() + table.table_settings = {'show_pct_data': True, 'show_cum_data': True, 'show_breakdown_file': True} + table.is_empty = lambda: False + table.toggle_pct_data = lambda: None + table.toggle_cum_data = lambda: None + table.toggle_breakdown_file = lambda: None + + wl_widgets.wl_widgets_table_settings(main, tables = [table]) + +def test_wl_widgets_table_settings_span_position(): + table = QTableView() + table.table_settings = { + 'show_pct_data': True, + 'show_cum_data': True, + 'show_breakdown_span_position': True, + 'show_breakdown_file': True + } + table.is_empty = lambda: False + table.toggle_pct_data_span_position = lambda: None + table.toggle_cum_data = lambda: None + table.toggle_breakdown_span_position = lambda: 
None + table.toggle_breakdown_file_span_position = lambda: None + + wl_widgets.wl_widgets_table_settings_span_position(main, tables = [table]) + +def test_wl_combo_box_file_fig_settings(): + combo_box_file_fig_settings = wl_widgets.Wl_Combo_Box_File_Fig_Settings(main) + combo_box_file_fig_settings.wl_files_changed() + + combo_box_file_fig_settings.clear() + combo_box_file_fig_settings.addItem('test') + combo_box_file_fig_settings.wl_files_changed() + +def test_wl_widgets_fig_settings(): + ( + _, combo_box_graph_type, + _, _, _, combo_box_use_data, _, _ + ) = wl_widgets.wl_widgets_fig_settings(main, tab = 'wordlist_generator') + + combo_box_graph_type.setCurrentText('Line chart') + combo_box_graph_type.setCurrentText(combo_box_graph_type.itemText(1)) + + combo_box_graph_type.setCurrentText('Line chart') + combo_box_use_data.setCurrentText('Frequency') + combo_box_use_data.setCurrentText(combo_box_use_data.itemText(1)) + + main.settings_custom['wordlist_generator']['fig_settings']['use_data'] = combo_box_use_data.itemText(0) + combo_box_use_data.measures_changed() + main.settings_custom['wordlist_generator']['fig_settings']['use_data'] = 'test' + combo_box_use_data.measures_changed() + + _, _, _, _, _, combo_box_use_data, _, _ = wl_widgets.wl_widgets_fig_settings(main, tab = 'collocation_extractor') + + main.settings_custom['collocation_extractor']['fig_settings']['use_data'] = combo_box_use_data.itemText(0) + combo_box_use_data.measures_changed() + main.settings_custom['collocation_extractor']['fig_settings']['use_data'] = 'test' + combo_box_use_data.measures_changed() + + _, _, _, _, _, combo_box_use_data, _, _ = wl_widgets.wl_widgets_fig_settings(main, tab = 'keyword_extractor') + + main.settings_custom['keyword_extractor']['fig_settings']['use_data'] = combo_box_use_data.itemText(0) + combo_box_use_data.measures_changed() + main.settings_custom['keyword_extractor']['fig_settings']['use_data'] = 'test' + combo_box_use_data.measures_changed() + +def 
test_wl_widgets_fig_settings_dependency_parsing(): + checkbox_show_pos_tags, _, _, _, _, _, _ = wl_widgets.wl_widgets_fig_settings_dependency_parsing(main) + + checkbox_show_pos_tags.setChecked(True) + checkbox_show_pos_tags.setChecked(False) + +def test_wl_widgets_filter(): + wl_widgets.wl_widgets_filter(main, 1, 100) + +def test_wl_widgets_filter_measures(): + wl_widgets.wl_widgets_filter_measures(main) + + main.wl_settings.wl_settings_changed.emit() + +def test_wl_widgets_filter_p_val(): + wl_widgets.wl_widgets_filter_p_val(main) + + main.wl_settings.wl_settings_changed.emit() + +def test_wl_widgets_num_sub_sections(): + wl_widgets.wl_widgets_num_sub_sections(main) + +def test_wl_widgets_use_data_freq(): + wl_widgets.wl_widgets_use_data_freq(main) + +def test_wl_widgets_direction(): + wl_widgets.wl_widgets_direction(main) + +if __name__ == '__main__': + test_wl_dialog_context_settings() + + test_wl_widgets_token_settings() + test_wl_widgets_token_settings_concordancer() + + test_wl_widgets_search_settings() + test_wl_widgets_context_settings() + + test_wl_widgets_measures_wordlist_generator() + test_wl_widgets_measures_collocation_extractor() + + test_wl_widgets_table_settings() + test_wl_widgets_table_settings_span_position() + + test_wl_combo_box_file_fig_settings() + test_wl_widgets_fig_settings() + test_wl_widgets_fig_settings_dependency_parsing() + + test_wl_widgets_filter() + test_wl_widgets_filter_measures() + test_wl_widgets_filter_p_val() + + test_wl_widgets_num_sub_sections() + test_wl_widgets_use_data_freq() + test_wl_widgets_direction() diff --git a/tests/wl_test_init.py b/tests/wl_test_init.py index 8fafb87f7..02d7ee075 100644 --- a/tests/wl_test_init.py +++ b/tests/wl_test_init.py @@ -24,12 +24,13 @@ import sys from PyQt5.QtCore import QObject -from PyQt5.QtWidgets import QApplication, QMainWindow, QStatusBar +from PyQt5.QtGui import QStandardItemModel +from PyQt5.QtWidgets import QApplication, QMainWindow, QStatusBar, QTableView from tests import 
wl_test_file_area from wordless import wl_file_area from wordless.wl_checks import wl_checks_misc -from wordless.wl_settings import wl_settings_default, wl_settings_global +from wordless.wl_settings import wl_settings, wl_settings_default, wl_settings_global from wordless.wl_utils import wl_misc SEARCH_TERMS = ['take'] @@ -83,6 +84,8 @@ def __init__(self, switch_lang_utils = 'default'): self.wl_file_area.file_type = 'observed' self.wl_file_area.settings_suffix = '' + self.wl_file_area.table_files = Wl_Test_Table(self) + self.wl_file_area.get_files = lambda: wl_file_area.Wrapper_File_Area.get_files(self.wl_file_area) self.wl_file_area.get_file_names = lambda: wl_file_area.Wrapper_File_Area.get_file_names(self.wl_file_area) self.wl_file_area.get_selected_files = lambda: wl_file_area.Wrapper_File_Area.get_selected_files(self.wl_file_area) @@ -100,6 +103,9 @@ def __init__(self, switch_lang_utils = 'default'): self.wl_file_area_ref.get_selected_files = lambda: wl_file_area.Wrapper_File_Area.get_selected_files(self.wl_file_area_ref) self.wl_file_area_ref.get_selected_file_names = lambda: wl_file_area.Wrapper_File_Area.get_selected_file_names(self.wl_file_area_ref) + # Settings + self.wl_settings = wl_settings.Wl_Settings(self) + def height(self): return 1080 @@ -215,6 +221,13 @@ class Wl_Exception_Tests_Lang_Util_Skipped(Exception): def __init__(self, lang_util): super().__init__(f'Tests for language utility "{lang_util}" is skipped!') +class Wl_Test_Table(QTableView): + def __init__(self, parent): + super().__init__(parent) + + self.setModel(QStandardItemModel()) + self.settings = wl_settings_default.init_settings_default(self) + # Select files randomly def select_test_files(main, no_files, ref = False): no_files = set(no_files) diff --git a/wordless/wl_colligation_extractor.py b/wordless/wl_colligation_extractor.py index 6e95dd1c1..d5fe35fca 100644 --- a/wordless/wl_colligation_extractor.py +++ b/wordless/wl_colligation_extractor.py @@ -26,7 +26,7 @@ import traceback 
import numpy -from PyQt5.QtCore import QCoreApplication, Qt +from PyQt5.QtCore import pyqtSignal, QCoreApplication, Qt from PyQt5.QtWidgets import QLabel, QGroupBox from wordless.wl_checks import wl_checks_work_area @@ -878,7 +878,7 @@ def update_gui_fig(self, err_msg, colligations_freqs_file, colligations_stats_fi wl_checks_work_area.check_err_fig(self.main, err_msg) class Wl_Worker_Colligation_Extractor(wl_threading.Wl_Worker): - worker_done = wl_threading.wl_pyqt_signal(str, dict, dict) + worker_done = pyqtSignal(str, dict, dict) def __init__(self, main, dialog_progress, update_gui): super().__init__(main, dialog_progress, update_gui) diff --git a/wordless/wl_collocation_extractor.py b/wordless/wl_collocation_extractor.py index 643b46d42..7ee14f1a4 100644 --- a/wordless/wl_collocation_extractor.py +++ b/wordless/wl_collocation_extractor.py @@ -26,7 +26,7 @@ import traceback import numpy -from PyQt5.QtCore import QCoreApplication, Qt +from PyQt5.QtCore import pyqtSignal, QCoreApplication, Qt from PyQt5.QtWidgets import QLabel, QGroupBox from wordless.wl_checks import wl_checks_work_area @@ -875,7 +875,7 @@ def update_gui_fig(self, err_msg, collocations_freqs_files, collocations_stats_f wl_checks_work_area.check_err_fig(self.main, err_msg) class Wl_Worker_Collocation_Extractor(wl_threading.Wl_Worker): - worker_done = wl_threading.wl_pyqt_signal(str, dict, dict) + worker_done = pyqtSignal(str, dict, dict) def __init__(self, main, dialog_progress, update_gui): super().__init__(main, dialog_progress, update_gui) diff --git a/wordless/wl_dialogs/wl_dialogs.py b/wordless/wl_dialogs/wl_dialogs.py index ecba9fe18..41ca92074 100644 --- a/wordless/wl_dialogs/wl_dialogs.py +++ b/wordless/wl_dialogs/wl_dialogs.py @@ -101,7 +101,7 @@ def __init__(self, main, width = 0, height = 0): class Wl_Dialog_Info(Wl_Dialog): def __init__(self, main, title, width = 0, height = 0, resizable = False, no_buttons = False): # Avoid circular imports - from wordless.wl_widgets import wl_layouts 
+ from wordless.wl_widgets import wl_layouts # pylint: disable=import-outside-toplevel super().__init__(main, title, width, height, resizable) diff --git a/wordless/wl_main.py b/wordless/wl_main.py index f0a369eaa..25b8c5462 100644 --- a/wordless/wl_main.py +++ b/wordless/wl_main.py @@ -37,6 +37,8 @@ if sys.stderr is None: sys.stderr = open(os.devnull, 'w') # pylint: disable=unspecified-encoding, consider-using-with +# pylint: disable=wrong-import-position + import botok import matplotlib import nltk diff --git a/wordless/wl_measures/wl_measures_readability.py b/wordless/wl_measures/wl_measures_readability.py index 0e3d3060f..7a7be94a8 100644 --- a/wordless/wl_measures/wl_measures_readability.py +++ b/wordless/wl_measures/wl_measures_readability.py @@ -1068,38 +1068,38 @@ def nws(main, text): return nws # Estimate number of syllables in Arabic texts by counting short, long, and stress syllables -# Reference: https://github.com/textstat/textstat/blob/9bf37414407bcaaa45c498478ee383c8738e5d0c/textstat/textstat.py#L569 -def _get_num_syls_ara(text): - short_count = 0 - long_count = 0 - - # tashkeel: fatha | damma | kasra - tashkeel = [r'\u064E', r'\u064F', r'\u0650'] - char_list = list(re.sub(r"[^\w\s\']", '', text)) - - for t in tashkeel: - for i, c in enumerate(char_list): - if c != t: - continue - - # Only if a character is a tashkeel, has a successor and is followed by an alef, waw or yaaA - if ( - i + 1 < len(char_list) - and char_list[i + 1] in ['\u0627', '\u0648', '\u064a'] - ): - long_count += 1 +# References: +# https://github.com/drelhaj/OsmanReadability/blob/master/src/org/project/osman/process/Syllables.java +# https://github.com/textstat/textstat/blob/9bf37414407bcaaa45c498478ee383c8738e5d0c/textstat/textstat.py#L569 +def _get_num_syls_ara(word): + count_short = 0 + count_long = 0 + + # Tashkeel: fatha, damma, kasra + tashkeel = ['\u064E', '\u064F', '\u0650'] + + for i, char in enumerate(word): + if char not in tashkeel: + continue + + # Only if a character 
is a tashkeel, has a successor, and is followed by an alef, waw, or yeh
+                if i + 1 < len(word):
+                    if word[i + 1] in ['\u0627', '\u0648', '\u064A']:
+                        count_long += 1
                     else:
-                        short_count += 1
+                        count_short += 1
+                else:
+                    count_short += 1

-    # stress syllables: tanween fatih | tanween damm | tanween kasr | shadda
-    stress_pattern = re.compile(r'[\u064B\u064C\u064D\u0651]')
-    stress_count = len(stress_pattern.findall(text))
+    # Stress syllables: tanween fatha, tanween damma, tanween kasra, shadda
+    count_stress = len(re.findall(r'[\u064B\u064C\u064D\u0651]', word))

-    if short_count == 0:
-        text = re.sub(r'[\u0627\u0649\?\.\!\,\s*]', '', text)
-        short_count = len(text) - 2
+    if count_short == 0:
+        word = re.sub(r'[\u0627\u0649\?\.\!\,\s]', '', word)
+        count_short = max(0, len(word) - 2)

-    return short_count + 2 * (long_count + stress_count)
+    # Reference: https://github.com/drelhaj/OsmanReadability/blob/405b927ef3fde200fa08efe12ec2f39b8716e4be/src/org/project/osman/process/OsmanReadability.java#L259
+    return count_short + 2 * (count_long + count_stress)

 # OSMAN
 # Reference: El-Haj, M., & Rayson, P. (2016). OSMAN: A novel Arabic readability metric. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016) (pp. 250–255). European Language Resources Association. http://www.lrec-conf.org/proceedings/lrec2016/index.html
@@ -1120,9 +1120,13 @@ def osman(main, text):
     for word, num_syls in zip(text.words_flat, nums_syls_tokens):
         if (
             num_syls > 4
+            # Faseeh letters
+            # Reference: https://github.com/drelhaj/OsmanReadability/blob/405b927ef3fde200fa08efe12ec2f39b8716e4be/src/org/project/osman/process/OsmanReadability.java#L264
             and (
-                any((letter in word for letter in ['ء', 'ئ', 'ؤ', 'ذ', 'ظ']))
-                or any((word.endswith(letters) for letters in ['وا', 'ون']))
+                # Hamza (ء), yeh with hamza above (ئ), waw with hamza above (ؤ), zah (ظ), thal (ذ)
+                any((char in word for char in ['\u0621', '\u0626', '\u0624', '\u0638', '\u0630']))
+                # Waw noon (ون), waw alef (وا)
+                or word.endswith(('\u0648\u0646', '\u0648\u0627'))
             )
         ):
             h += 1
diff --git a/wordless/wl_ngram_generator.py b/wordless/wl_ngram_generator.py
index e68a3cc02..5648c2315 100644
--- a/wordless/wl_ngram_generator.py
+++ b/wordless/wl_ngram_generator.py
@@ -23,7 +23,7 @@
 import traceback

 import numpy
-from PyQt5.QtCore import QCoreApplication, Qt
+from PyQt5.QtCore import pyqtSignal, QCoreApplication, Qt
 from PyQt5.QtWidgets import QCheckBox, QLabel, QGroupBox

 from wordless.wl_checks import wl_checks_work_area
@@ -755,7 +755,7 @@ def update_gui_fig(self, err_msg, ngrams_freq_files, ngrams_stats_files):
         wl_checks_work_area.check_err_fig(self.main, err_msg)

 class Wl_Worker_Ngram_Generator(wl_threading.Wl_Worker):
-    worker_done = wl_threading.wl_pyqt_signal(str, dict, dict)
+    worker_done = pyqtSignal(str, dict, dict)

     def __init__(self, main, dialog_progress, update_gui):
         super().__init__(main, dialog_progress, update_gui)
diff --git a/wordless/wl_nlp/wl_nlp_utils.py b/wordless/wl_nlp/wl_nlp_utils.py
index bff7ccad0..f6819923c 100644
--- a/wordless/wl_nlp/wl_nlp_utils.py
+++ b/wordless/wl_nlp/wl_nlp_utils.py
@@ -35,6 +35,7 @@
 import pip
 import pymorphy3
 import pyphen
+from PyQt5.QtCore import pyqtSignal
 import sacremoses
 import spacy
 import spacy_pkuseg
@@ -215,7 +216,7 @@ def update_gui_stanza(main, err_msg):
     return models_ok

 class Wl_Worker_Download_Model_Spacy(wl_threading.Wl_Worker):
-    worker_done = wl_threading.wl_pyqt_signal(str)
+    worker_done = pyqtSignal(str)

     def __init__(self, main, dialog_progress, update_gui, model_name):
         super().__init__(main, dialog_progress, update_gui, model_name = model_name)
@@ -266,7 +267,7 @@ def run(self):
         self.worker_done.emit(self.err_msg)

 class Wl_Worker_Download_Model_Stanza(wl_threading.Wl_Worker):
-    worker_done = wl_threading.wl_pyqt_signal(str)
+    worker_done = pyqtSignal(str)

     def __init__(self, main, dialog_progress, update_gui, lang):
         super().__init__(main, dialog_progress, update_gui, lang = lang)
diff --git a/wordless/wl_settings/wl_settings.py b/wordless/wl_settings/wl_settings.py
index f8b762f47..74b5de04b 100644
--- a/wordless/wl_settings/wl_settings.py
+++ b/wordless/wl_settings/wl_settings.py
@@ -44,7 +44,7 @@ def __init__(self, main):
         )

         # Avoid circular imports
-        from wordless.wl_settings import (
+        from wordless.wl_settings import ( # pylint: disable=import-outside-toplevel
             wl_settings_general,
             wl_settings_files,
             wl_settings_sentence_tokenization,
diff --git a/wordless/wl_utils/wl_threading.py b/wordless/wl_utils/wl_threading.py
index 9a3143fd7..3cb5698b1 100644
--- a/wordless/wl_utils/wl_threading.py
+++ b/wordless/wl_utils/wl_threading.py
@@ -20,22 +20,6 @@

 from PyQt5.QtCore import pyqtSignal, QObject, Qt, QThread

-from wordless.wl_utils import wl_misc
-
-# Dict can only have strings as keys when emitting signals in PyQt 5.8.2 used on OS X 10.9 for backward compatibility
-# This bug has been fixed in PyQt 5.9
-# See: https://stackoverflow.com/a/43977161
-def wl_pyqt_signal(*signal_args):
-    _, is_macos, _ = wl_misc.check_os()
-
-    if is_macos:
-        signal_args = [
-            object if arg is dict else arg
-            for arg in signal_args
-        ]
-
-    return pyqtSignal(*signal_args)
-
 # Workers
 class Wl_Worker(QObject):
     progress_updated = pyqtSignal(str)
diff --git a/wordless/wl_widgets/wl_boxes.py
b/wordless/wl_widgets/wl_boxes.py
index 1e80f9817..6f3400c90 100644
--- a/wordless/wl_widgets/wl_boxes.py
+++ b/wordless/wl_widgets/wl_boxes.py
@@ -347,6 +347,12 @@ def max_changed():
         spin_box_max
     ) = wl_spin_boxes_min_max(parent, val_min, val_max, double)

+    spin_box_min.valueChanged.disconnect()
+    spin_box_max.valueChanged.disconnect()
+
+    spin_box_min.valueChanged.connect(min_changed)
+    spin_box_max.valueChanged.connect(max_changed)
+
     checkbox_sync.stateChanged.connect(sync_changed)

     min_changed()
diff --git a/wordless/wl_widgets/wl_buttons.py b/wordless/wl_widgets/wl_buttons.py
index 327d0c103..6d3b49f14 100644
--- a/wordless/wl_widgets/wl_buttons.py
+++ b/wordless/wl_widgets/wl_buttons.py
@@ -18,17 +18,13 @@

 import os

-from PyQt5.QtCore import QCoreApplication, Qt
+from PyQt5.QtCore import QCoreApplication
 from PyQt5.QtGui import QBrush, QColor, QPainter
-from PyQt5.QtWidgets import (
-    QCheckBox, QColorDialog, QFileDialog, QLabel, QPushButton,
-    QSizePolicy
-)
+from PyQt5.QtWidgets import QCheckBox, QColorDialog, QFileDialog, QPushButton

 from wordless.wl_checks import wl_checks_misc
 from wordless.wl_dialogs import wl_msg_boxes
 from wordless.wl_utils import wl_misc, wl_paths
-from wordless.wl_widgets import wl_layouts

 _tr = QCoreApplication.translate

@@ -38,36 +34,6 @@ def __init__(self, text, parent = None):

         self.main = wl_misc.find_wl_main(parent)

-# Reference: https://stackoverflow.com/a/62893567
-class Wl_Button_Html(Wl_Button):
-    def __init__(self, text, parent = None):
-        super().__init__('', parent)
-
-        self._label = QLabel(text, self)
-
-        self._label.setTextFormat(Qt.RichText)
-        self._label.setAttribute(Qt.WA_TransparentForMouseEvents)
-        self._label.setSizePolicy(
-            QSizePolicy.Expanding,
-            QSizePolicy.Expanding,
-        )
-
-        self.setLayout(wl_layouts.Wl_Layout())
-        self.layout().addWidget(self._label)
-
-        self.layout().setContentsMargins(0, 0, 0, 0)
-
-    def setText(self, text):
-        self._label.setText(text)
-
-        self.updateGeometry()
-
-    def sizeHint(self):
-        size = super().sizeHint()
-        size.setWidth(self._label.sizeHint().width())
-
-        return size
-
 class Wl_Button_Browse(Wl_Button):
     def __init__(self, parent, line_edit, caption, filters, initial_filter = -1):
         super().__init__(_tr('wl_buttons', 'Browse...'), parent)
diff --git a/wordless/wl_widgets/wl_layouts.py b/wordless/wl_widgets/wl_layouts.py
index 692d4b0fa..7c8fec785 100644
--- a/wordless/wl_widgets/wl_layouts.py
+++ b/wordless/wl_widgets/wl_layouts.py
@@ -145,7 +145,7 @@ def __init__(self, parent, orientation = 'hor'):

         if orientation == 'hor':
             self.setFrameShape(QFrame.HLine)
-        else:
+        elif orientation == 'vert':
             self.setFrameShape(QFrame.VLine)

         self.setStyleSheet('color: #D0D0D0;')
diff --git a/wordless/wl_widgets/wl_tables.py b/wordless/wl_widgets/wl_tables.py
index 0c19452d5..72810b91b 100644
--- a/wordless/wl_widgets/wl_tables.py
+++ b/wordless/wl_widgets/wl_tables.py
@@ -1802,7 +1802,7 @@ def clr_table(self, num_headers = 1, confirm = False):
         return confirmed

 # Avoid circular imports
-from wordless.wl_results import wl_results_filter, wl_results_search, wl_results_sort
+from wordless.wl_results import wl_results_filter, wl_results_search, wl_results_sort # pylint: disable=wrong-import-position

 class Wl_Table_Data_Search(Wl_Table_Data):
     def __init__(
diff --git a/wordless/wl_widgets/wl_widgets.py b/wordless/wl_widgets/wl_widgets.py
index a8974000d..72e2b3680 100644
--- a/wordless/wl_widgets/wl_widgets.py
+++ b/wordless/wl_widgets/wl_widgets.py
@@ -732,7 +732,7 @@ def show_breakdown_file_changed():
     )

 # Figure Settings
-class Wl_Combo_Box_File_Figure_Settings(wl_boxes.Wl_Combo_Box_File):
+class Wl_Combo_Box_File_Fig_Settings(wl_boxes.Wl_Combo_Box_File):
     def wl_files_changed(self):
         if self.count() == 1:
             file_old = ''
@@ -770,9 +770,9 @@ def use_data_changed():
             checkbox_use_cumulative.setEnabled(False)

     def measures_changed_wordlist_generator():
-        settings_global = parent.main.settings_global
-        settings_default = parent.main.settings_default[tab]
-        settings_custom = parent.main.settings_custom[tab]
+        settings_global = main.settings_global
+        settings_default = main.settings_default[tab]
+        settings_custom = main.settings_custom[tab]

         use_data_old = settings_custom['fig_settings']['use_data']
@@ -797,9 +797,9 @@ def measures_changed_wordlist_generator():
             combo_box_use_data.setCurrentText(settings_default['fig_settings']['use_data'])

     def measures_changed_collocation_extractor():
-        settings_global = parent.main.settings_global
-        settings_default = parent.main.settings_default[tab]
-        settings_custom = parent.main.settings_custom[tab]
+        settings_global = main.settings_global
+        settings_default = main.settings_default[tab]
+        settings_custom = main.settings_custom[tab]

         use_data_old = settings_custom['fig_settings']['use_data']
@@ -839,9 +839,9 @@ def measures_changed_collocation_extractor():
             combo_box_use_data.setCurrentText(settings_default['fig_settings']['use_data'])

     def measures_changed_keyword_extractor():
-        settings_global = parent.main.settings_global
-        settings_default = parent.main.settings_default[tab]
-        settings_custom = parent.main.settings_custom[tab]
+        settings_global = main.settings_global
+        settings_default = main.settings_default[tab]
+        settings_custom = main.settings_custom[tab]

         use_data_old = settings_custom['fig_settings']['use_data']
@@ -874,10 +874,12 @@ def measures_changed_keyword_extractor():
         else:
             combo_box_use_data.setCurrentText(settings_default['fig_settings']['use_data'])

+    main = wl_misc.find_wl_main(parent)
+
     label_graph_type = QLabel(_tr('wl_widgets', 'Graph type:'), parent)
     combo_box_graph_type = wl_boxes.Wl_Combo_Box(parent)
     label_sort_by_file = QLabel(_tr('wl_widgets', 'Sort by file:'), parent)
-    combo_box_sort_by_file = Wl_Combo_Box_File_Figure_Settings(parent)
+    combo_box_sort_by_file = Wl_Combo_Box_File_Fig_Settings(parent)
     label_use_data = QLabel(_tr('wl_widgets', 'Use data:'), parent)
     combo_box_use_data = wl_boxes.Wl_Combo_Box(parent)
     checkbox_use_pct = QCheckBox(_tr('wl_widgets', 'Use percentage data'), parent)
@@ -893,12 +895,13 @@ def measures_changed_keyword_extractor():
         combo_box_graph_type.addItem(_tr('wl_widgets', 'Network graph'))

     if tab in ['wordlist_generator', 'ngram_generator', 'collocation_extractor', 'colligation_extractor', 'keyword_extractor']:
-        if tab in ['wordlist_generator', 'ngram_generator']:
-            combo_box_use_data.measures_changed = measures_changed_wordlist_generator
-        elif tab in ['collocation_extractor', 'colligation_extractor']:
-            combo_box_use_data.measures_changed = measures_changed_collocation_extractor
-        elif tab == 'keyword_extractor':
-            combo_box_use_data.measures_changed = measures_changed_keyword_extractor
+        match tab:
+            case 'wordlist_generator' | 'ngram_generator':
+                combo_box_use_data.measures_changed = measures_changed_wordlist_generator
+            case 'collocation_extractor' | 'colligation_extractor':
+                combo_box_use_data.measures_changed = measures_changed_collocation_extractor
+            case 'keyword_extractor':
+                combo_box_use_data.measures_changed = measures_changed_keyword_extractor

         combo_box_use_data.measures_changed()
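The OSMAN syllable-counting hunk at the top of this changeset is easiest to check in isolation. Below is a minimal, hypothetical pure-Python sketch of the same counting rule; the hunk begins mid-comment, so the enclosing check for a short-vowel diacritic (fatha, damma, kasra) is an assumption here, and the real function in Wordless operates on its own tokenized text while mirroring drelhaj/OsmanReadability:

```python
import re

# Hypothetical stand-alone sketch of the OSMAN-style syllable count; names
# mirror the diff (count_short, count_long, count_stress).
HARAKAT = ('\u064E', '\u064F', '\u0650')         # fatha, damma, kasra (assumed trigger)
LONG_FOLLOWERS = ('\u0627', '\u0648', '\u064A')  # alef, waw, yeh

def count_syls_ara(word):
    count_short = 0
    count_long = 0

    for i, char in enumerate(word):
        if char in HARAKAT:
            # A short vowel directly followed by alef/waw/yeh counts as long
            if i + 1 < len(word) and word[i + 1] in LONG_FOLLOWERS:
                count_long += 1
            else:
                count_short += 1

    # Stress syllables: tanween fatha, tanween damma, tanween kasra, shadda
    count_stress = len(re.findall(r'[\u064B\u064C\u064D\u0651]', word))

    if count_short == 0:
        # Unvowelized word: fall back to the consonant skeleton, as in the diff
        word = re.sub(r'[\u0627\u0649\?\.\!\,\s]', '', word)
        count_short = max(0, len(word) - 2)

    return count_short + 2 * (count_long + count_stress)
```

Long and stress syllables are weighted double, which is why the fully vowelized كِتَاب (one short + one long) and كَتَبَ (three shorts) both come out as 3 under this sketch.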
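The final hunk in wl_widgets.py replaces an if/elif chain with a `match` statement, which requires Python 3.10 or later. A minimal sketch of the same dispatch pattern, with hypothetical handler names standing in for the `measures_changed_*` callbacks:

```python
# Hypothetical handlers standing in for the measures_changed_* callbacks
def handle_wordlist():
    return 'wordlist'

def handle_collocation():
    return 'collocation'

def handle_keyword():
    return 'keyword'

def pick_measures_changed(tab):
    # Same shape as the diff: one case per group of tabs, using |-patterns
    # (or-patterns) instead of `tab in [...]` membership tests
    match tab:
        case 'wordlist_generator' | 'ngram_generator':
            return handle_wordlist
        case 'collocation_extractor' | 'colligation_extractor':
            return handle_collocation
        case 'keyword_extractor':
            return handle_keyword
        case _:
            return None
```

Unlike the original chain, the wildcard `case _` makes the fall-through explicit; in the diff the surrounding `if tab in [...]` guard already rules that branch out.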