Class WordDelimiterFilter

All Implemented Interfaces:
Closeable, AutoCloseable

public final class WordDelimiterFilter extends TokenFilter
Splits words into subwords and performs optional transformations on subword groups. Words are split into subwords with the following rules:
  • split on intra-word delimiters (by default, all non-alphanumeric characters): "Wi-Fi" → "Wi", "Fi"
  • split on case transitions: "PowerShot" → "Power", "Shot"
  • split on letter-number transitions: "SD500" → "SD", "500"
  • leading and trailing intra-word delimiters on each subword are ignored: "//hello---there, 'dude'" → "hello", "there", "dude"
  • trailing "'s" is removed from each subword: "O'Neil's" → "O", "Neil"
    • Note: this step isn't performed in a separate filter because of possible subword combinations.
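The splitting rules above can be sketched in plain Java. This is an illustrative stand-alone sketch of the rules, not the filter's actual implementation; possessive stripping, protected words, and the configurable character-type table are omitted:

```java
import java.util.ArrayList;
import java.util.List;

public class SubwordSplitter {
    /** Splits a token on delimiters, case transitions, and letter-number transitions. */
    public static List<String> split(String token) {
        List<String> parts = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        char prev = 0;
        for (char c : token.toCharArray()) {
            if (!Character.isLetterOrDigit(c)) {
                // intra-word delimiter: end the current subword
                flush(parts, cur);
            } else {
                boolean caseTransition = Character.isLowerCase(prev) && Character.isUpperCase(c);
                boolean numberTransition =
                        (Character.isLetter(prev) && Character.isDigit(c))
                     || (Character.isDigit(prev) && Character.isLetter(c));
                if (caseTransition || numberTransition) {
                    flush(parts, cur);
                }
                cur.append(c);
            }
            prev = c;
        }
        flush(parts, cur);
        return parts;
    }

    private static void flush(List<String> parts, StringBuilder cur) {
        if (cur.length() > 0) {
            parts.add(cur.toString());
            cur.setLength(0);
        }
    }
}
```

With this sketch, split("//hello---there, 'dude'") yields hello, there, dude, matching the leading/trailing-delimiter rule above.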
The combinations parameter affects how subwords are combined:
  • combinations="0" causes no subword combinations: "PowerShot" → 0:"Power", 1:"Shot" (0 and 1 are the token positions)
  • combinations="1" means that in addition to the subwords, maximum runs of non-numeric subwords are catenated and produced at the same position as the last subword in the run:
    • "PowerShot" → 0:"Power", 1:"Shot", 1:"PowerShot"
    • "A's+B's&C's" → 0:"A", 1:"B", 2:"C", 2:"ABC"
    • "Super-Duper-XL500-42-AutoCoder!" → 0:"Super", 1:"Duper", 2:"XL", 2:"SuperDuperXL", 3:"500", 4:"42", 5:"Auto", 6:"Coder", 6:"AutoCoder"
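The combinations="1" catenation step can likewise be sketched in plain Java: given the subwords, each maximal run of two or more non-numeric subwords is joined and emitted after the run. Token positions are omitted for brevity; this is illustrative only, not the filter's implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class RunCatenator {
    /** Emits each subword, plus the catenation of every maximal run of 2+ letter-only subwords. */
    public static List<String> catenateWordRuns(List<String> subwords) {
        List<String> out = new ArrayList<>();
        StringBuilder run = new StringBuilder();
        int runLength = 0;
        for (String sw : subwords) {
            boolean allLetters = sw.chars().allMatch(Character::isLetter);
            if (allLetters) {
                out.add(sw);
                run.append(sw);
                runLength++;
            } else {
                // a numeric subword ends the current run of word parts
                if (runLength > 1) {
                    out.add(run.toString());
                }
                run.setLength(0);
                runLength = 0;
                out.add(sw);
            }
        }
        if (runLength > 1) {
            out.add(run.toString());
        }
        return out;
    }
}
```

For the "Super-Duper-XL500-42-AutoCoder!" example, this yields the two catenations "SuperDuperXL" and "AutoCoder" shown above.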
One use for WordDelimiterFilter is to help match words with different subword delimiters. For example, if the source text contained "wi-fi" one may want "wifi" "WiFi" "wi-fi" "wi+fi" queries to all match. One way of doing so is to specify combinations="1" in the analyzer used for indexing, and combinations="0" (the default) in the analyzer used for querying. Given that the current StandardTokenizer immediately removes many intra-word delimiters, it is recommended that this filter be used after a tokenizer that does not do this (such as WhitespaceTokenizer).
  • Field Details

    • LOWER

      public static final int LOWER
    • UPPER

      public static final int UPPER
    • DIGIT

      public static final int DIGIT
    • SUBWORD_DELIM

      public static final int SUBWORD_DELIM
    • ALPHA

      public static final int ALPHA
    • ALPHANUM

      public static final int ALPHANUM
    • GENERATE_WORD_PARTS

      public static final int GENERATE_WORD_PARTS
      Causes parts of words to be generated:

      "PowerShot" => "Power" "Shot"

    • GENERATE_NUMBER_PARTS

      public static final int GENERATE_NUMBER_PARTS
      Causes number subwords to be generated:

      "500-42" => "500" "42"

    • CATENATE_WORDS

      public static final int CATENATE_WORDS
      Causes maximum runs of word parts to be catenated:

      "wi-fi" => "wifi"

    • CATENATE_NUMBERS

      public static final int CATENATE_NUMBERS
      Causes maximum runs of number parts to be catenated:

      "500-42" => "50042"

    • CATENATE_ALL

      public static final int CATENATE_ALL
      Causes all subword parts to be catenated:

      "wi-fi-4000" => "wifi4000"

    • PRESERVE_ORIGINAL

      public static final int PRESERVE_ORIGINAL
      Causes the original token to be preserved and added to the subword list (defaults to false):

      "500-42" => "500" "42" "500-42"

    • SPLIT_ON_CASE_CHANGE

      public static final int SPLIT_ON_CASE_CHANGE
      If not set, causes case changes to be ignored (subwords will only be generated given SUBWORD_DELIM tokens)
    • SPLIT_ON_NUMERICS

      public static final int SPLIT_ON_NUMERICS
      If not set, causes numeric changes to be ignored (subwords will only be generated given SUBWORD_DELIM tokens).
    • STEM_ENGLISH_POSSESSIVE

      public static final int STEM_ENGLISH_POSSESSIVE
      Causes trailing "'s" to be removed for each subword

      "O'Neil's" => "O", "Neil"

  • Constructor Details

    • WordDelimiterFilter

      public WordDelimiterFilter(TokenStream in, byte[] charTypeTable, int configurationFlags, CharArraySet protWords)
      Creates a new WordDelimiterFilter
      Parameters:
      in - TokenStream to be filtered
      charTypeTable - table containing character types
      configurationFlags - Flags configuring the filter
      protWords - if not null, the set of tokens to protect from being delimited
    • WordDelimiterFilter

      public WordDelimiterFilter(TokenStream in, int configurationFlags, CharArraySet protWords)
      Creates a new WordDelimiterFilter using WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE as its charTypeTable
      Parameters:
      in - TokenStream to be filtered
      configurationFlags - Flags configuring the filter
      protWords - if not null, the set of tokens to protect from being delimited
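The configurationFlags argument is a bitwise OR of the flag constants documented above. A minimal sketch of building and checking such a flag set — note the bit values below are placeholders for illustration; real code should use the constants defined on WordDelimiterFilter itself:

```java
public class FlagDemo {
    // Placeholder bit values for illustration only; the actual values
    // are defined by the WordDelimiterFilter constants.
    static final int GENERATE_WORD_PARTS   = 1;
    static final int GENERATE_NUMBER_PARTS = 2;
    static final int CATENATE_WORDS        = 4;
    static final int SPLIT_ON_CASE_CHANGE  = 8;

    /** Combines several options into one flags value with bitwise OR. */
    public static int indexingFlags() {
        return GENERATE_WORD_PARTS | GENERATE_NUMBER_PARTS
             | SPLIT_ON_CASE_CHANGE | CATENATE_WORDS;
    }

    /** Tests whether a single option is enabled in a flags value. */
    public static boolean has(int flags, int flag) {
        return (flags & flag) != 0;
    }
}
```

The resulting int is what the configurationFlags parameter of either constructor expects.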
  • Method Details

    • incrementToken

      public boolean incrementToken() throws IOException
      Description copied from class: TokenStream
      Consumers (e.g., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

      The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

      This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

      To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().

      Specified by:
      incrementToken in class TokenStream
      Returns:
      false for end of stream; true otherwise
      Throws:
      IOException
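The consumption contract above follows the standard TokenStream workflow. A hedged sketch of a consumer loop (assumes Lucene's TokenStream and CharTermAttribute classes are on the classpath; not runnable stand-alone):

```java
// Sketch of the standard consumer workflow for any TokenStream,
// including WordDelimiterFilter.
TokenStream ts = analyzer.tokenStream("field", "Wi-Fi PowerShot");
CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
try {
    ts.reset();                        // mandatory before the first incrementToken()
    while (ts.incrementToken()) {
        System.out.println(termAtt);   // one token (subword or catenation) per call
    }
    ts.end();                          // finalize offset/state attributes
} finally {
    ts.close();
}
```

Per the contract above, the attribute reference is retrieved once before the loop rather than inside it, avoiding repeated addAttribute/getAttribute calls per token.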
    • reset

      public void reset() throws IOException
      This method is called by a consumer before it begins consumption using TokenStream.incrementToken().

      Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

      If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).

      NOTE: The default implementation chains the call to the input TokenStream, so be sure to call super.reset() when overriding this method.

      Overrides:
      reset in class TokenFilter
      Throws:
      IOException