Why is regexp_tokenize not working as in the example?

While executing: 

import nltk

textex = 'That U.S.A. poster-print costs $12.40...'
pattern = r'''(?x)            # set flag to allow verbose regexps
      ([A-Z]\.)+              # abbreviations, e.g. U.S.A.
    | \w+(-\w+)*              # words with optional internal hyphens
    | \$?\d+(\.\d+)?%?        # currency and percentages, e.g. $12.40, 82%
    | \.\.\.                  # ellipsis
    | [][.,;"'?():-_`]        # these are separate tokens
'''
nltk.regexp_tokenize(textex, pattern)

 

The output is:

[('', '', ''), ('', '', ''), ('', '-print', ''), ('', '', ''), ('', '', '')]

 

Environment: Jupyter Notebook (Anaconda 3), Python 3.5, Ubuntu.
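
Update, a possible cause (my own diagnosis, so take it with a grain of salt): regexp_tokenize appears to hand the pattern to re.findall, and when a pattern contains capturing groups, findall returns the group contents rather than the whole match, which would explain the tuples of mostly empty strings above. Older NLTK releases seem to have converted groups to non-capturing automatically, which no longer happens here. Rewriting every (...) group as a non-capturing (?:...) group gives the expected tokens for me:

import nltk

textex = 'That U.S.A. poster-print costs $12.40...'
# Same pattern, but every (...) group is rewritten as a
# non-capturing (?:...) group so findall keeps the full match.
pattern = r'''(?x)            # set flag to allow verbose regexps
      (?:[A-Z]\.)+            # abbreviations, e.g. U.S.A.
    | \w+(?:-\w+)*            # words with optional internal hyphens
    | \$?\d+(?:\.\d+)?%?      # currency and percentages, e.g. $12.40, 82%
    | \.\.\.                  # ellipsis
    | [][.,;"'?():-_`]        # these are separate tokens
'''
print(nltk.regexp_tokenize(textex, pattern))
# ['That', 'U.S.A.', 'poster-print', 'costs', '$12.40', '...']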

Started by Laylaps at January 24, 2017 - 1:39 AM