Releases: jawah/charset_normalizer

Version 2.0.5

14 Sep 19:39
2404237

Changes:

  • Internal: 🎨 The project now complies with flake8, mypy, isort, and black to ensure better overall quality #81
  • Internal: 🎨 The MANIFEST.in was not exhaustive #78
  • Improvement: ✨ The backward compatibility with v1.x was improved; the old staticmethods are restored #82
  • Remove: 🔥 The project no longer raises a warning on tiny content given for detection; it is simply logged as a warning instead #92
  • Improvement: ✨ The Unicode detection is slightly improved, see #93
  • Bugfix: 🐛 In some rare cases, the chunk extractor could cut in the middle of a multi-byte character and mislead the mess detection #95
  • Bugfix: 🐛 Some rare 'space' characters could trip up the UnprintablePlugin/mess detection #96
  • Improvement: 🎨 Add syntactic sugar: __bool__ on the CharsetMatches results list-container, see #91 and the sketch below

This release pushes the detection coverage further, to 97%!
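A minimal sketch of the new __bool__ sugar on CharsetMatches (the payload below is arbitrary and only serves as an illustration):

    from charset_normalizer import from_bytes

    # CharsetMatches is a list-like container; since #91 it can be used
    # directly in a truth test instead of checking len() or best() against None.
    results = from_bytes("Bonjour tout le monde !".encode("utf_8"))

    if results:  # relies on the __bool__ added in this release
        print(results.best().encoding)
    else:
        print("no suitable encoding found")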

Version 2.0.4

30 Jul 21:31
558d1e2

Changes:

  • Improvement: ❇️ Adjust the MD to lower its sensitivity, thus improving the global detection reliability (#69 #76)
  • Improvement: ❇️ Allow falling back on a specified encoding, if any (#71)
  • Bugfix: 🐛 The CLI no longer raises an unexpected exception when no encoding has been found (#70)
  • Bugfix: 🐛 Fix accessing the 'alphabets' property when the payload contains surrogate characters (#68), see the sketch after this list
  • Bugfix: 🐛 ✏️ The logger could be misleading (explain=True) about detected languages and the impact of one MBCS match (in #72)
  • Bugfix: 🐛 Sub-match factoring could be wrong in rare edge cases (in #72)
  • Bugfix: 🐛 Multiple files given to the CLI were ignored (after the first path) when publishing results to STDOUT (in #72)
  • Internal: 🎨 Fix line endings from CRLF to LF for certain files (#67)
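For the explain and alphabets items above, a minimal sketch of how they are typically exercised (the payload is arbitrary; from_bytes, explain, best() and the alphabets property are the existing public API):

    from charset_normalizer import from_bytes

    payload = "Γειά σου Κόσμε".encode("utf_8")

    # explain=True enables the verbose logger output that #72 made less misleading.
    results = from_bytes(payload, explain=True)

    best_guess = results.best()
    if best_guess is not None:
        # 'alphabets' is the property hardened against surrogate characters in #68.
        print(best_guess.encoding, best_guess.alphabets)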

Version 2.0.3

16 Jul 15:37
3d76085

Changes:

  • Improvement: ✨ Part of the detection mechanism has been improved to be less sensitive, resulting in more accurate detection results, especially for ASCII. #63 Fix #62
  • Improvement: ✨ According to the community's wishes, the detection will fall back on ASCII or UTF-8 as a last resort. #64 Complete #62 (see the sketch below)
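A rough sketch of what the last-resort fallback means in practice (the payload is arbitrary; the expectation in the comment follows #64):

    from charset_normalizer import from_bytes

    # A plain ASCII payload: with the last-resort fallback, detection should
    # no longer come back empty; an ascii or utf_8 match is expected instead.
    results = from_bytes(b"hello world")
    best_guess = results.best()

    print(best_guess.encoding if best_guess else "nothing detected")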

Be assured that this project is ready to listen to any concerns you may have. I know the vast majority did not expect requests to switch from Chardet to Charset-Normalizer. I am committed to making this change worth it; only together can we achieve great things. Do not hesitate to leave feedback or a bug report, I will answer them all!

Version 2.0.2

14 Jul 22:48
08ea262

Changes:

  • Bugfix: 🐛 Empty/too-small JSON payload mis-detection fixed (#59). Thanks @tseaver for the report
  • Improvement: 🎇 Don't inject unicodedata2 into sys.modules (#57) @akx

Version 2.0.1

13 Jul 16:01
929f13c

Minor bug-fix release.

Changes:

  • Bugfix: 🐛 Make it work where no filesystem is available, dropping the bundled frequencies.json asset #54 #55, original report by @sethmlarson
  • Improvement: ✨ You may now use aliases in the cp_isolation and cp_exclusion arguments #47 (see the sketch after this list)
  • Bugfix: 🐛 Using explain=False now permanently disables the verbose output in the current runtime #47
  • Bugfix: 🐛 One log entry (language target preemptive) was not shown in the logs when using explain=True #47
  • Bugfix: 🐛 Fix an undesired exception (ValueError) on getitem of a CharsetMatches instance #52
  • Improvement: 🔧 The default argument values of the public function normalize were not aligned with from_bytes #53
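To illustrate the alias support in cp_isolation/cp_exclusion, a minimal sketch (the alias spellings 'latin1' and 'u8' are ordinary Python codec aliases, and the payload is arbitrary):

    from charset_normalizer import from_bytes

    payload = "Héllo wörld".encode("cp1252")

    # Since #47, aliases such as 'latin1' or 'u8' are accepted in addition to
    # the canonical names 'latin_1' and 'utf_8'.
    results = from_bytes(
        payload,
        cp_isolation=["latin1", "u8"],  # only test these codecs
        cp_exclusion=None,
    )

    best_guess = results.best()
    print(best_guess.encoding if best_guess else None)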

Version 2.0.0

02 Jul 19:18
36e1a39

This package is approaching two years of existence; now is a good time for a refresh.

Changes: See PR #45

  • Performance: ⚡ 4 to 5 times faster than the previous 1.4.0 release.
  • Performance: ⚡ At least 2x faster than Chardet.
  • Performance: ⚡ Emphasis has been placed on UTF-8 detection; it should now be nearly instantaneous.
  • Improvement: 🔙 The backward compatibility with Chardet has been greatly improved. The legacy detect function returns an identical charset name whenever possible (see the sketch after this list).
  • Improvement: ❇️ The detection mechanism has been slightly improved; Turkish content is now detected correctly (most of the time)
  • Code: 🎨 The program has been rewritten to improve readability and maintainability. (+Using static typing)
  • Tests: ✔️ New workflows are now in place to verify the following aspects: performance, backward compatibility with Chardet, and detection coverage, in addition to the current tests. (+CodeQL)
  • Dependency: ➖ This package no longer requires anything when used with Python 3.5 (dropped cached_property)
  • Docs: ✏️ Performance claims, the contributing guide, and the issue template have been updated.
  • Improvement: ❇️ Add a --version argument to the CLI
  • Bugfix: 🐛 The CLI output used the relative path of the file(s); it should be absolute.
  • Deprecation: 🔴 The methods coherence_non_latin, w_counter, and chaos_secondary_pass of the class CharsetMatch are now deprecated and scheduled for removal in v3.0
  • Improvement: ❇️ If no language was detected in the content, try to infer it using the encoding name/alphabets used.
  • Removal: 🔥 Removed support for these languages: Catalan, Esperanto, Kazakh, Basque, Volapük, Azeri, Galician, Nynorsk, Macedonian, and Serbo-Croatian.
  • Improvement: ❇️ utf_7 detection has been reinstated.
  • Removal: 🔥 The exception hook on UnicodeDecodeError has been removed.
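For the Chardet backward-compatibility point, the legacy helper can be used as a drop-in (a minimal sketch; the payload is arbitrary and the confidence printed is whatever the library computes, not a guaranteed number):

    from charset_normalizer import detect

    # Same call shape as chardet.detect(): returns a dict with
    # 'encoding', 'language' and 'confidence' keys.
    result = detect("Déjà vu".encode("utf_8"))
    print(result["encoding"], result["confidence"])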

After much consideration, Python 3.5 support is not dropped in v2.

Version 1.4.1

28 May 05:01

Changes:

  • Improvement: 🎨 Logger configuration/usage no longer conflicts with others #44

Version 1.4.0

21 May 21:12
84c4dae

Changes:

Thanks to @potiuk for his tests/ideas, which allowed us to improve the quality of this project.

  • Dependency: ➖ Using standard logging instead of the loguru package.
  • Dependency: ➖ Dropping the nose test framework in favor of the maintained pytest.
  • Dependency: ➖ Chose not to use the dragonmapper package to help with gibberish Chinese/CJK text.
  • Dependency: 🔧 ➖ Require cached_property only for Python 3.5 due to a constraint; dropped for every other interpreter version.
  • Bugfix: 🐛 The BOM marker in a CharsetNormalizerMatch instance could be False in rare cases even if obviously present, due to the sub-match factoring process.
  • Improvement: 🎇 Return ASCII if the given sequences fit, given reasonable confidence.
  • Performance: ⚡ Huge improvement on the largest payloads.
  • Change: 🔥 Stop supporting UTF-7 that does not contain a SIG. (Contributions are welcome to improve this point)
  • Feature: 🎇 The CLI now produces JSON-consumable output (see the sketch after this list).
  • Dependency: Dropping PrettyTable, replaced with pure JSON output.
  • Bugfix: 🐛 The BOM was not being searched for properly when trying the utf_32/utf_16 parent codecs.
  • Other: ⚡ Improving the final package size by compressing frequencies.json.
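A sketch of consuming the CLI's JSON output from a script, assuming the console entry point is named normalizer as in current releases; the file path is made up and the exact JSON fields depend on the installed version:

    import json
    import subprocess

    # Since this release, the CLI report is emitted as JSON rather than a PrettyTable.
    completed = subprocess.run(
        ["normalizer", "some_file.txt"],  # hypothetical input file
        capture_output=True,
        text=True,
        check=True,
    )

    report = json.loads(completed.stdout)
    print(report)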

This project no longer requires anything, except when used with Python 3.5, which is still supported even though it has passed EOL.
Version 2.x will require Python 3.6+.

Version 1.3.9

13 May 20:38

Changes:

  • Bugfix: 🐛 In some very rare cases, you may end up getting encode/decode errors due to a bad bytes payload #40

Version 1.3.8

12 May 21:32

Changes:

  • Bugfix: 🐛 An empty payload given for detection may cause an exception when trying to access the alphabets property. #39