⚡️ Speed up method GPTSw3Tokenizer._tokenize by 23%
#371
+6
−2
📄 23% (0.23x) speedup for `GPTSw3Tokenizer._tokenize` in `src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py`

⏱️ Runtime: 6.21 milliseconds → 5.07 milliseconds (best of 11 runs)

📝 Explanation and details
The optimization replaces an inefficient character-by-character whitespace normalization with Python's built-in `str.translate()` method, delivering a 22% speedup.

Key optimization: In `preprocess_text()`, the original code used a list comprehension with set membership testing for every character. The optimized version precomputes a translation table during initialization and uses `str.translate()`.

Why this is faster: `str.translate()` is implemented in C and operates at native speed with O(N) complexity.

Performance impact: Line profiler shows whitespace normalization time dropped from 4.92 ms to 216 μs, a 95% reduction in that specific operation. The optimization is particularly effective for:
This is a classic example of replacing Python loops with optimized C implementations, especially valuable in tokenization workflows that process large volumes of text.
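As a rough sketch of the technique described above (the set name, the table name, and the function names here are illustrative, not the exact `transformers` source):

```python
# Hypothetical set of whitespace characters to normalize to a plain space.
WHITESPACES = {" ", "\t", "\n", "\r", "\x0b", "\x0c"}

def preprocess_text_original(text: str) -> str:
    # Original approach: per-character set membership test inside a
    # Python-level list comprehension, then a join.
    return "".join([" " if c in WHITESPACES else c for c in text])

# Optimized approach: build the translation table once (e.g. in the
# tokenizer's __init__), then call the C-implemented str.translate
# on every input string.
WHITESPACE_TABLE = str.maketrans({ws: " " for ws in WHITESPACES})

def preprocess_text_optimized(text: str) -> str:
    return text.translate(WHITESPACE_TABLE)
```

Both functions produce identical output; the difference is that the optimized version does the per-character work in C rather than in the Python interpreter loop.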
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-GPTSw3Tokenizer._tokenize-mi9ysxtd` and push.