Class WordDelimiterFilter
- All Implemented Interfaces:
Closeable, AutoCloseable
Words are split into subwords with the following rules:
- split on intra-word delimiters (by default, all non-alphanumeric characters):
  "Wi-Fi" → "Wi", "Fi"
- split on case transitions:
  "PowerShot" → "Power", "Shot"
- split on letter-number transitions:
  "SD500" → "SD", "500"
- leading and trailing intra-word delimiters on each subword are ignored:
  "//hello---there, 'dude'" → "hello", "there", "dude"
- trailing "'s" are removed for each subword:
  "O'Neil's" → "O", "Neil"
  Note: this step isn't performed in a separate filter because of possible subword combinations.

The combinations parameter affects how subwords are combined:
- combinations="0" causes no subword combinations:
  "PowerShot" → 0:"Power", 1:"Shot" (0 and 1 are the token positions)
- combinations="1" means that, in addition to the subwords, maximum runs of non-numeric subwords are catenated and produced at the same position of the last subword in the run:
  "PowerShot" → 0:"Power", 1:"Shot", 1:"PowerShot"
  "A's+B's&C's" → 0:"A", 1:"B", 2:"C", 2:"ABC"
  "Super-Duper-XL500-42-AutoCoder!" → 0:"Super", 1:"Duper", 2:"XL", 2:"SuperDuperXL", 3:"500", 4:"42", 5:"Auto", 6:"Coder", 6:"AutoCoder"
One use for WordDelimiterFilter is to help match words with different
subword delimiters. For example, if the source text contained "wi-fi", one may
want the queries "wifi", "WiFi", "wi-fi", and "wi+fi" to all match. One way of doing so
is to specify combinations="1" in the analyzer used for indexing, and
combinations="0" (the default) in the analyzer used for querying. Given that
the current StandardTokenizer immediately removes many intra-word
delimiters, it is recommended that this filter be used after a tokenizer that
does not do so (such as WhitespaceTokenizer).
-
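The default splitting rules above can be approximated with a short standalone sketch. This is not Lucene's implementation: it ignores the configuration flags, strips a trailing "'s" from the whole token rather than each subword, and the class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified, standalone sketch of the default splitting rules:
 * split on non-alphanumeric delimiters, on lower-to-upper case
 * transitions, and on letter/number transitions; strip trailing "'s".
 * NOT Lucene's implementation; names are invented for illustration.
 */
public class WordDelimiterSketch {

    public static List<String> split(String token) {
        // simplified possessive handling: strip one trailing "'s"
        if (token.endsWith("'s")) {
            token = token.substring(0, token.length() - 2);
        }
        List<String> parts = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        for (char c : token.toCharArray()) {
            if (!Character.isLetterOrDigit(c)) {
                flush(parts, cur);                 // intra-word delimiter
            } else if (cur.length() > 0
                    && transition(cur.charAt(cur.length() - 1), c)) {
                flush(parts, cur);                 // case or letter-number boundary
                cur.append(c);
            } else {
                cur.append(c);
            }
        }
        flush(parts, cur);
        return parts;
    }

    // lower->upper case change, or a letter/digit boundary in either direction
    private static boolean transition(char prev, char c) {
        return (Character.isLowerCase(prev) && Character.isUpperCase(c))
            || (Character.isLetter(prev) && Character.isDigit(c))
            || (Character.isDigit(prev) && Character.isLetter(c));
    }

    private static void flush(List<String> parts, StringBuilder cur) {
        if (cur.length() > 0) {
            parts.add(cur.toString());
            cur.setLength(0);
        }
    }

    public static void main(String[] args) {
        System.out.println(split("Wi-Fi"));      // [Wi, Fi]
        System.out.println(split("PowerShot"));  // [Power, Shot]
        System.out.println(split("SD500"));      // [SD, 500]
        System.out.println(split("O'Neil's"));   // [O, Neil]
    }
}
```

In real code the splitting is done by WordDelimiterFilter itself as part of a TokenStream chain; this sketch only mirrors the documented rules on a single token.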
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.AttributeFactory, AttributeSource.State -
Field Summary
Fields (all of modifier and type static final int):
- ALPHA
- ALPHANUM
- CATENATE_ALL: Causes all subword parts to be catenated: "wi-fi-4000" => "wifi4000"
- CATENATE_NUMBERS: Causes maximum runs of number parts to be catenated: "500-42" => "50042"
- CATENATE_WORDS: Causes maximum runs of word parts to be catenated: "wi-fi" => "wifi"
- DIGIT
- GENERATE_NUMBER_PARTS: Causes number subwords to be generated: "500-42" => "500" "42"
- GENERATE_WORD_PARTS: Causes parts of words to be generated: "PowerShot" => "Power" "Shot"
- LOWER
- PRESERVE_ORIGINAL: Causes original words to be preserved and added to the subword list (defaults to false)
- SPLIT_ON_CASE_CHANGE: If not set, causes case changes to be ignored (subwords will only be generated given SUBWORD_DELIM tokens)
- SPLIT_ON_NUMERICS: If not set, causes numeric changes to be ignored (subwords will only be generated given SUBWORD_DELIM tokens)
- STEM_ENGLISH_POSSESSIVE: Causes trailing "'s" to be removed for each subword
- SUBWORD_DELIM
- UPPER
-
Constructor Summary
Constructors:
- WordDelimiterFilter(TokenStream in, byte[] charTypeTable, int configurationFlags, CharArraySet protWords): Creates a new WordDelimiterFilter
- WordDelimiterFilter(TokenStream in, int configurationFlags, CharArraySet protWords): Creates a new WordDelimiterFilter using WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE as its charTypeTable
-
Method Summary
- boolean incrementToken(): Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
- void reset(): This method is called by a consumer before it begins consumption using TokenStream.incrementToken().

Methods inherited from class org.apache.lucene.analysis.TokenFilter:
close, end

Methods inherited from class org.apache.lucene.util.AttributeSource:
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
-
Field Details
-
LOWER
public static final int LOWER
-
UPPER
public static final int UPPER
-
DIGIT
public static final int DIGIT
-
SUBWORD_DELIM
public static final int SUBWORD_DELIM
-
ALPHA
public static final int ALPHA
-
ALPHANUM
public static final int ALPHANUM
-
GENERATE_WORD_PARTS
public static final int GENERATE_WORD_PARTS
Causes parts of words to be generated: "PowerShot" => "Power" "Shot"
-
GENERATE_NUMBER_PARTS
public static final int GENERATE_NUMBER_PARTS
Causes number subwords to be generated: "500-42" => "500" "42"
-
CATENATE_WORDS
public static final int CATENATE_WORDS
Causes maximum runs of word parts to be catenated: "wi-fi" => "wifi"
-
CATENATE_NUMBERS
public static final int CATENATE_NUMBERS
Causes maximum runs of number parts to be catenated: "500-42" => "50042"
-
CATENATE_ALL
public static final int CATENATE_ALL
Causes all subword parts to be catenated: "wi-fi-4000" => "wifi4000"
-
PRESERVE_ORIGINAL
public static final int PRESERVE_ORIGINAL
Causes original words to be preserved and added to the subword list (defaults to false): "500-42" => "500" "42" "500-42"
-
SPLIT_ON_CASE_CHANGE
public static final int SPLIT_ON_CASE_CHANGE
If not set, causes case changes to be ignored (subwords will only be generated given SUBWORD_DELIM tokens)
-
SPLIT_ON_NUMERICS
public static final int SPLIT_ON_NUMERICS
If not set, causes numeric changes to be ignored (subwords will only be generated given SUBWORD_DELIM tokens).
-
STEM_ENGLISH_POSSESSIVE
public static final int STEM_ENGLISH_POSSESSIVE
Causes trailing "'s" to be removed for each subword: "O'Neil's" => "O", "Neil"
-
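The constants above are bit flags, combined with bitwise OR to build the configurationFlags constructor argument. A minimal sketch of the pattern follows; the bit values here are hypothetical and the class is invented for illustration. Real code should use the WordDelimiterFilter constants themselves, whose actual values may differ.

```java
// Sketch of combining bit flags into a configurationFlags value.
// Bit values are hypothetical; in real code, OR together the
// WordDelimiterFilter constants, e.g.
//   WordDelimiterFilter.GENERATE_WORD_PARTS | WordDelimiterFilter.SPLIT_ON_CASE_CHANGE
public class FlagSketch {
    // hypothetical bit values for illustration only
    static final int GENERATE_WORD_PARTS = 1 << 0;
    static final int GENERATE_NUMBER_PARTS = 1 << 1;
    static final int SPLIT_ON_CASE_CHANGE = 1 << 2;
    static final int STEM_ENGLISH_POSSESSIVE = 1 << 3;

    // true if the given flag bit is set in flags
    static boolean has(int flags, int flag) {
        return (flags & flag) != 0;
    }

    public static void main(String[] args) {
        int flags = GENERATE_WORD_PARTS | SPLIT_ON_CASE_CHANGE | STEM_ENGLISH_POSSESSIVE;
        System.out.println(has(flags, SPLIT_ON_CASE_CHANGE));   // true
        System.out.println(has(flags, GENERATE_NUMBER_PARTS));  // false
    }
}
```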
-
Constructor Details
-
WordDelimiterFilter
public WordDelimiterFilter(TokenStream in, byte[] charTypeTable, int configurationFlags, CharArraySet protWords)
Creates a new WordDelimiterFilter
- Parameters:
in - TokenStream to be filtered
charTypeTable - table containing character types
configurationFlags - Flags configuring the filter
protWords - If not null, the set of tokens to protect from being delimited
-
WordDelimiterFilter
public WordDelimiterFilter(TokenStream in, int configurationFlags, CharArraySet protWords)
Creates a new WordDelimiterFilter using WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE as its charTypeTable
- Parameters:
in - TokenStream to be filtered
configurationFlags - Flags configuring the filter
protWords - If not null, the set of tokens to protect from being delimited
-
-
Method Details
-
incrementToken
public boolean incrementToken() throws IOException
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.
The producer must make no assumptions about the attributes after the method has been returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.
To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
- Specified by:
incrementToken in class TokenStream
- Returns:
- false for end of stream; true otherwise
- Throws:
IOException
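The consumer contract described above (reset the stream, then call incrementToken until it returns false) can be illustrated with a minimal mock. MockTokenStream and its term() method are stand-ins invented here, not Lucene's API; in real code the consumer reads the term from a CharTermAttribute obtained via addAttribute.

```java
import java.util.Iterator;
import java.util.List;

// Minimal mock (not Lucene's classes) illustrating the consumer contract:
// call reset() before consumption, then incrementToken() until it
// returns false, which signals the end of the stream.
class MockTokenStream {
    private final List<String> tokens;
    private Iterator<String> it;
    private String current;

    MockTokenStream(List<String> tokens) {
        this.tokens = tokens;
    }

    // returns the stream to a clean state so it can be consumed again
    void reset() {
        it = tokens.iterator();
        current = null;
    }

    // advances to the next token; false means end of stream
    boolean incrementToken() {
        if (it.hasNext()) {
            current = it.next();
            return true;
        }
        return false;
    }

    // stand-in for reading the term attribute of the current token
    String term() {
        return current;
    }
}

public class ConsumerSketch {
    public static String consume(MockTokenStream ts) {
        StringBuilder sb = new StringBuilder();
        ts.reset();                       // must precede consumption
        while (ts.incrementToken()) {     // false signals end of stream
            if (sb.length() > 0) sb.append(' ');
            sb.append(ts.term());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        MockTokenStream ts = new MockTokenStream(List.of("Power", "Shot"));
        System.out.println(consume(ts));  // Power Shot
    }
}
```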
-
reset
public void reset() throws IOException
This method is called by a consumer before it begins consumption using TokenStream.incrementToken(). Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.
If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).
NOTE: The default implementation chains the call to the input TokenStream, so be sure to call super.reset() when overriding this method.
- Overrides:
reset in class TokenFilter
- Throws:
IOException
-