
Edge tokenization issues: Unicode parsing #116

Open
pcuenca opened this issue Aug 19, 2024 · 0 comments
pcuenca commented Aug 19, 2024

Regarding the [missing tokens in the parsed vocabulary](https://github.com/huggingface/swift-transformers/pull/113#issuecomment-2267520368), this is my documentation after tracking down one of the issues.

First, we parse the JSON file (`tokenizer.json`) using `JSONSerialization.jsonObject`. This reads the data as Foundation objects, parsing tokens from the vocab dictionary as `NSString` instances. That is actually desirable: Swift `String`s cannot be used as keys in the vocab dictionary, because `String` equality only considers the Unicode canonical representation. Parsing the JSON and casting to `[String: Int]` would therefore silently drop multiple entries.
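To illustrate the key-collapsing problem, here is a small standalone sketch (not code from this repo; the values are made up) using a composed vs. decomposed "é", which are canonically equivalent but have different code units:

```swift
import Foundation

let composed = "\u{e9}"        // "é" as a single scalar (U+00E9)
let decomposed = "e\u{301}"    // "e" + combining acute accent (U+0301)

// Swift String equality is canonical: these compare equal...
assert(composed == decomposed)
// ...but NSString compares code units literally, so they differ.
assert(!(composed as NSString).isEqual(decomposed as NSString))

// A [String: Int] vocab silently collapses the two keys into one:
var stringVocab: [String: Int] = [:]
stringVocab[composed] = 0
stringVocab[decomposed] = 1
assert(stringVocab.count == 1)

// NSString keys keep both entries distinct:
var nsVocab: [NSString: Int] = [:]
nsVocab[composed as NSString] = 0
nsVocab[decomposed as NSString] = 1
assert(nsVocab.count == 2)
```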

However, I found that JSONSerialization fails to correctly parse some strings. Consider the following test case:

    func testArrayParsingWithBOMPrefix() {
        // The second one starts with a BOM prefix
        let items = ["a", "\u{feff}a"]

        // Neither Strings nor NSStrings are equal
        XCTAssertNotEqual(items[0], items[1])
        XCTAssertNotEqual(items[0] as NSString, items[1] as NSString)

        // JSONDecoder works
        let jsonData = try! JSONSerialization.data(withJSONObject: items, options: [])
        let decoder = JSONDecoder()
        let decoded = try! decoder.decode([String].self, from: jsonData)
        XCTAssertEqual(decoded, items)

        // JSONSerialization seems to ignore the BOM.
        // The decoded array contains two items, but they are the same NSString.
        let ns_decoded = try! JSONSerialization.jsonObject(with: jsonData, options: []) as! NSArray
        XCTAssertEqual(ns_decoded.count, items.count)                               // passes
        XCTAssertNotEqual(ns_decoded[0] as! NSString, ns_decoded[1] as! NSString)   // fails
        XCTAssertEqual(ns_decoded as! [String], items)                              // fails

        // Compare unicodeScalars
        func scalars(_ string: String) -> [UInt32] {
            string.unicodeScalars.map { $0.value }
        }
        for (decoded, expected) in zip(ns_decoded, items) {
            let decodedScalars = scalars(decoded as! String)
            let expectedScalars = scalars(expected)
            XCTAssertEqual(decodedScalars, expectedScalars)         // first passes, second fails
        }
    }

There are two strings in the test array; the second one starts with a BOM prefix. The prefix is ignored when parsing the two `NSString`s, as confirmed by inspecting the Unicode scalars in the debugger. Unfortunately, the Gemma vocab contains some duplicate entries with and without a BOM prefix, so reading them into a dictionary skips some entries.
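To make the failure mode concrete, here is a hypothetical sketch (not repo code; the token ids are invented) of how stripping a leading BOM during parsing collapses two distinct vocab entries into one key:

```swift
import Foundation

let bom = "\u{feff}"

// Two distinct tokens as they would appear in the JSON vocab (hypothetical ids).
let entries: [(String, Int)] = [("a", 10), (bom + "a", 11)]

var vocab: [NSString: Int] = [:]
for (token, id) in entries {
    // Simulate the observed JSONSerialization behavior: a leading BOM is dropped.
    let parsed = token.hasPrefix(bom) ? String(token.dropFirst()) : token
    vocab[parsed as NSString] = id
}

// Both entries landed on the same key; one token id was silently lost.
assert(vocab.count == 1)
```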

Interestingly, all the tests pass if the BOM character is in the middle of the string. Replacing the test items with these works fine:

        // If the non-breaking space is inside the String, all tests pass
//        let items = ["ab", "a\u{feff}b"]

I suspect a BOM-sniffing step is applied while parsing, and the stream is incorrectly assumed to start with a BOM even though the character sits in the middle of the actual JSON data.
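For reference, the UTF-8 encoding of U+FEFF is byte-for-byte identical to the UTF-8 BOM signature, which is exactly what would confuse a BOM-sniffing stage:

```swift
// U+FEFF encodes in UTF-8 as EF BB BF — the UTF-8 BOM signature.
let bytes = Array("\u{feff}a".utf8)
assert(bytes == [0xEF, 0xBB, 0xBF, 0x61])
```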

Also interestingly, `JSONDecoder` works and can decode the two distinct `String` instances in the array. We are not using `JSONDecoder` in this project because:

  • The structure of the JSON files to be parsed is quite open and flexible; I don't think it would be straightforward to write a `Decodable` structure that represents it. Instead, we use dynamic member lookup to navigate the contents.
  • We can't use `String` instances for vocab keys, as mentioned above.

I'm not sure how to deal with this.
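One possible direction, sketched here as an untested assumption (`RawKey` and `Vocab` are hypothetical names, not part of this project): decode only the vocab with `JSONDecoder`, which preserves the BOM, and convert each key to `NSString` before inserting, so tokens that differ only in code units survive as distinct keys:

```swift
import Foundation

// Hypothetical CodingKey that passes JSON object keys through unchanged.
struct RawKey: CodingKey {
    let stringValue: String
    var intValue: Int? { nil }
    init?(stringValue: String) { self.stringValue = stringValue }
    init?(intValue: Int) { return nil }
}

// Hypothetical wrapper that keeps NSString keys to avoid canonical merging.
struct Vocab: Decodable {
    let tokenToId: [NSString: Int]

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: RawKey.self)
        var dict = [NSString: Int]()
        for key in container.allKeys {
            dict[key.stringValue as NSString] = try container.decode(Int.self, forKey: key)
        }
        tokenToId = dict
    }
}

// Usage sketch: let vocab = try JSONDecoder().decode(Vocab.self, from: vocabData)
```

This sidesteps the flexibility concern above by using `JSONDecoder` only for the vocab sub-object, while the rest of the file could still go through the dynamic lookup path.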

Originally posted by @pcuenca in #113 (comment)

@pcuenca pcuenca changed the title Edge Unicode parsing Edge tokenization issues: Unicode parsing Aug 19, 2024