\w not helpful for non-Roman scripts #44795
When I try to use r"\w+(?u)" to find words in a Unicode Devanagari text, bad things happen: words get chopped into small pieces. I think this is likely because vowel signs such as U+093E are not considered to match \w. I think that if you wish \w to be useful for Indic scripts, it should also match the combining-mark categories (Mc, Mn, Me).

I am using Python 2.4.4 on Windows XP SP2. I ran the following script to see the characters which I think ought to match \w but don't:

```python
import re
import unicodedata

text = ""  # the Devanagari sample text
parts = re.findall("\W(?u)", text)
```

The odd character here is U+0904. Its categorization seems to imply that you are using the Unicode 3.0 database, but perhaps later versions of Python are using the current 5.0 database.
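A minimal reproduction sketch of the report above, assuming Python 3 and scanning the whole Devanagari block instead of the original sample text; which marks it prints depends on the Unicode database your Python ships:

```python
import re
import unicodedata

# Scan the Devanagari block (U+0900..U+097F) for combining marks
# (categories Mc, Mn, Me) that re's \w does not match.
for cp in range(0x0900, 0x0980):
    ch = chr(cp)
    cat = unicodedata.category(ch)
    if cat in ("Mc", "Mn", "Me") and not re.match(r"\w", ch, re.UNICODE):
        print("U+%04X %s %s" % (cp, cat, unicodedata.name(ch, "?")))
```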
Python 2.4 is using Unicode 3.2. Python 2.5 ships with Unicode 4.1. We're likely to ship Unicode 5.x with Python 2.6 or a later release.

Regarding the char classes: I don't think Mc, Mn and Me should be considered parts of a word. Those are marks which usually separate words.
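For reference, the stdlib exposes the shipped Unicode database version directly, so the versions mentioned above are easy to check on any build:

```python
import unicodedata

# Prints the Unicode Character Database version this Python was built
# with, e.g. '3.2.0' on Python 2.4 or '4.1.0' on Python 2.5.
print(unicodedata.unidata_version)
```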
Vowel 'marks' are condensed vowel characters and are very much part of a word; they are allowed in identifiers: http://docs.python.org/dev/3.0/reference/lexical_analysis.html#identifiers-and-keywords

For instance, the word 'hindi' has 3 consonants 'h', 'n', 'd' and 2 vowels 'i', 'i'. From a c.l.py post asking why re does not see hindi as a word: हिन्दी

.isalpha() and possibly other unicode methods need fixing also:

```python
>>> 'हिन्दी'.isalpha()  # 2.x and 3.0
False
```
Unicode TR#18 defines \w as a shorthand that includes \p{alpha} together with all marks, digits, and connector punctuation. We should check whether we can follow that definition here.
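A rough sketch of what a TR#18-style word-character test could look like, built only from unicodedata general categories; this is an approximation, since the real \p{alpha} property also covers some marks and unicodedata does not expose it directly:

```python
import unicodedata

# General categories that approximate TR#18's word character:
# letters, all marks, decimal digits, and connector punctuation.
WORD_CATEGORIES = {
    "Ll", "Lm", "Lo", "Lt", "Lu",   # letters (stand-in for \p{alpha})
    "Mc", "Me", "Mn",               # all marks
    "Nd",                           # decimal digits
    "Pc",                           # connector punctuation, e.g. '_'
}

def is_word_char(ch):
    return unicodedata.category(ch) in WORD_CATEGORIES

# Every code point of the Devanagari word counts as a word character:
print(all(is_word_char(c) for c in '\u0939\u093f\u0928\u094d\u0926\u0940'))  # True
```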
In issue bpo-2636 I'm using the following: Alpha is Ll, Lo, Lt, Lu. These are what are specified at …
Am I correct in saying that this must stay open because it targets the re module, but, as shown in msg81221, it is fixed in the new regex module?
I had to check what re does in Python 3.3:

```python
>>> print(len(re.match(r'\w+', 'हिन्दी').group()))
1
```

Regex does this:

```python
>>> print(len(regex.match(r'\w+', 'हिन्दी').group()))
6
```
Matthew, I think that is considered a single word in Sanskrit or Thai, so Python 3.x is correct. In this case you've written the Sanskrit word for Hindi.
I'm not sure what you're saying. The re module in Python 3.3 matches only the first codepoint, treating the second codepoint as not part of a word, whereas the regex module matches all 6 codepoints, treating them all as part of a single word.
Maybe you could show us the byte-for-byte hex of the string you're testing, so we can check whether it really contains a code point intended as a word boundary or just one that begins a new character.
You could've obtained it from msg76556 or msg190100:

```python
>>> print(ascii('हिन्दी'))
'\u0939\u093f\u0928\u094d\u0926\u0940'
>>> import re, regex
>>> print(ascii(re.match(r"\w+", '\u0939\u093f\u0928\u094d\u0926\u0940').group()))
'\u0939'
>>> print(ascii(regex.match(r"\w+", '\u0939\u093f\u0928\u094d\u0926\u0940').group()))
'\u0939\u093f\u0928\u094d\u0926\u0940'
```
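To spell out what those six code points actually are, a quick stdlib lookup:

```python
import unicodedata

# Print code point, general category, and character name for each
# element of the word under discussion,
# e.g. U+093F Mc DEVANAGARI VOWEL SIGN I.
for ch in '\u0939\u093f\u0928\u094d\u0926\u0940':
    print('U+%04X' % ord(ch), unicodedata.category(ch), unicodedata.name(ch))
```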
Thanks Matthew, and sorry to put you through more work; I just wanted to verify exactly which Unicode code points (UTF-16, I take it) were being used, to check whether the Unicode standard expects them to be treated as separate words or as single letters within a word. Sanskrit is an alphabet, not an ideographic script, so each symbol is considered a letter. So I believe your reading is correct and yes, you are right, re is at fault. There are just accenting characters and letters in that sequence, so they should be interpreted as a single word of 6 letters, as you determined, and not just the first letter. Mind you, I misinterpreted msg190100: I thought you were using findall, in which case the answer should be 1; but as the length of the extracted match, yes, 6, I totally agree. Sorry for the misunderstanding. http://www.unicode.org/charts/PDF/U0900.pdf contains the code chart for Devanagari.
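On the findall point, a quick sketch of what it actually returns under the re behavior described in this issue (a Python where \w excludes marks):

```python
import re

# Each combining mark breaks the match, so findall yields fragments
# rather than the single six-character word.
print(re.findall(r'\w+', '\u0939\u093f\u0928\u094d\u0926\u0940'))
# e.g. ['ह', 'न', 'द']: three one-letter fragments, not one word
```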
UTF-16 has nothing to do with it; that's just an encoding (a pair of them, actually: UTF-16LE and UTF-16BE). And I don't know why you thought I was using findall in msg190100, when the examples were using match! :-)
Let's see Modules/_sre.c:

```c
#define SRE_UNI_IS_ALNUM(ch) Py_UNICODE_ISALNUM(ch)
#define SRE_UNI_IS_WORD(ch) (SRE_UNI_IS_ALNUM(ch) || (ch) == '_')
```

```python
>>> [ch.isalpha() for ch in '\u0939\u093f\u0928\u094d\u0926\u0940']
[True, False, True, False, True, False]
>>> import unicodedata
>>> [unicodedata.category(ch) for ch in '\u0939\u093f\u0928\u094d\u0926\u0940']
['Lo', 'Mc', 'Lo', 'Mn', 'Lo', 'Mc']
```

So the matching ends at U+093F because its category is "spacing combining" (Mc), which belongs to the Mark category, whereas the re module expects an alphanumeric character there. Unicode TR#18, by contrast, defines a word character as \p{alpha}, \p{gc=Mark}, \p{digit}, or \p{gc=Connector_Punctuation}. So if we want to respect this standard, the re module needs to be modified to accept other Unicode categories.
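In the meantime, a stopgap that stays within the stdlib is to enumerate the Mark categories into an explicit character class next to \w; a rough sketch, not a vetted fix:

```python
import re
import sys
import unicodedata

# Collect every Mark code point (Mc, Mn, Me) and allow it alongside \w,
# so combining vowel signs no longer terminate a word match.
marks = "".join(chr(cp) for cp in range(sys.maxunicode + 1)
                if unicodedata.category(chr(cp)).startswith("M"))
WORD = re.compile("[\\w" + re.escape(marks) + "]+")

print(len(WORD.match('\u0939\u093f\u0928\u094d\u0926\u0940').group()))  # 6
```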
Whatever I may have said before, I favor supporting the Unicode standard for \w, which is related to the standard for identifiers. This is one of two issues about \w being defined too narrowly; I am somewhat arbitrarily closing this one as a duplicate of bpo-12731 (fewer digits ;-). There are three issues about tokenize.tokenize failing on valid identifiers, defined as \w sequences whose first char is an identifier itself (and therefore a start char). In msg313814 of bpo-32987, Serhiy indicates which start and continue identifier characters are matched by \W for re and regex. I am leaving bpo-24194 open as the tokenizer name issue.
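That identifier connection is easy to see in a couple of lines (the \w result shown is the re behavior reported in this issue; regex gives 6):

```python
import re

name = '\u0939\u093f\u0928\u094d\u0926\u0940'  # 'हिन्दी'
print(name.isidentifier())                     # True: a valid Python 3 identifier
print(len(re.match(r'\w+', name).group()))     # 1 under re's narrow \w
```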