Class CharTokenizer
- java.lang.Object
  - org.apache.lucene.util.AttributeSource
    - org.apache.lucene.analysis.TokenStream
      - org.apache.lucene.analysis.Tokenizer
        - org.apache.lucene.analysis.util.CharTokenizer
- All Implemented Interfaces:
  Closeable, AutoCloseable
- Direct Known Subclasses:
  LetterTokenizer, WhitespaceTokenizer
public abstract class CharTokenizer extends Tokenizer
An abstract base class for simple, character-oriented tokenizers.

The base class also provides factories to create instances of CharTokenizer using Java 8 lambdas or method references. It is possible to create an instance which behaves exactly like LetterTokenizer:

Tokenizer tok = CharTokenizer.fromTokenCharPredicate(Character::isLetter);
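The core idea is simple: the tokenizer walks the input codepoint by codepoint, keeping maximal runs of codepoints for which the predicate is true and treating everything else as a boundary. The following is a minimal plain-Java sketch of that splitting behavior (it does not use Lucene; the class and method names are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

public class PredicateSplit {
    // Collect maximal runs of codepoints for which the predicate is true;
    // all other codepoints act as token boundaries and are discarded.
    static List<String> tokenize(String text, IntPredicate isTokenChar) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        text.codePoints().forEach(cp -> {
            if (isTokenChar.test(cp)) {
                current.appendCodePoint(cp);
            } else if (current.length() > 0) {
                tokens.add(current.toString());
                current.setLength(0);
            }
        });
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }

    public static void main(String[] args) {
        // Letters only, mirroring the LetterTokenizer example above.
        System.out.println(tokenize("foo 42 bar!", Character::isLetter));
        // → [foo, bar]
    }
}
```

Note that the predicate receives int codepoints, not chars, which is why the factories take an IntPredicate and why supplementary characters are handled correctly.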
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.State
-
-
Field Summary
Fields
- static int DEFAULT_MAX_WORD_LEN
-
Fields inherited from class org.apache.lucene.analysis.TokenStream
DEFAULT_TOKEN_ATTRIBUTE_FACTORY
-
-
Constructor Summary
Constructors
- CharTokenizer(): Creates a new CharTokenizer instance
- CharTokenizer(AttributeFactory factory): Creates a new CharTokenizer instance
- CharTokenizer(AttributeFactory factory, int maxTokenLen): Creates a new CharTokenizer instance
-
Method Summary
Methods
- void end(): This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API).
- static CharTokenizer fromSeparatorCharPredicate(IntPredicate separatorCharPredicate): Creates a new instance of CharTokenizer using a custom predicate, supplied as method reference or lambda expression.
- static CharTokenizer fromSeparatorCharPredicate(AttributeFactory factory, IntPredicate separatorCharPredicate): Creates a new instance of CharTokenizer with the supplied attribute factory using a custom predicate, supplied as method reference or lambda expression.
- static CharTokenizer fromTokenCharPredicate(IntPredicate tokenCharPredicate): Creates a new instance of CharTokenizer using a custom predicate, supplied as method reference or lambda expression.
- static CharTokenizer fromTokenCharPredicate(AttributeFactory factory, IntPredicate tokenCharPredicate): Creates a new instance of CharTokenizer with the supplied attribute factory using a custom predicate, supplied as method reference or lambda expression.
- boolean incrementToken(): Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
- protected abstract boolean isTokenChar(int c): Returns true iff a codepoint should be included in a token.
- void reset(): This method is called by a consumer before it begins consumption using TokenStream.incrementToken().
-
Methods inherited from class org.apache.lucene.analysis.Tokenizer
close, correctOffset, setReader
-
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, endAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, removeAllAttributes, restoreState, toString
-
-
-
-
Field Detail
-
DEFAULT_MAX_WORD_LEN
public static final int DEFAULT_MAX_WORD_LEN
- See Also:
- Constant Field Values
-
-
Constructor Detail
-
CharTokenizer
public CharTokenizer()
Creates a new CharTokenizer instance
-
CharTokenizer
public CharTokenizer(AttributeFactory factory)
Creates a new CharTokenizer instance
- Parameters:
  factory - the attribute factory to use for this Tokenizer
-
CharTokenizer
public CharTokenizer(AttributeFactory factory, int maxTokenLen)
Creates a new CharTokenizer instance
- Parameters:
  factory - the attribute factory to use for this Tokenizer
  maxTokenLen - maximum token length the tokenizer will emit. Must be greater than 0 and less than MAX_TOKEN_LENGTH_LIMIT (1024*1024)
- Throws:
  IllegalArgumentException - if maxTokenLen is invalid.
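To make the maxTokenLen parameter concrete, here is a small plain-Java sketch of one plausible length-limiting behavior, in which an overlong run of token characters is emitted as chunks of at most maxTokenLen chars rather than silently truncated (the class and method names are illustrative, not Lucene's):

```java
import java.util.ArrayList;
import java.util.List;

public class MaxLenSplit {
    // Illustrative sketch: emit a run of token characters as chunks of at
    // most maxTokenLen chars, validating the bound like the constructor does.
    static List<String> chunk(String run, int maxTokenLen) {
        if (maxTokenLen <= 0) {
            throw new IllegalArgumentException("maxTokenLen must be greater than 0");
        }
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < run.length(); ) {
            int end = Math.min(i + maxTokenLen, run.length());
            chunks.add(run.substring(i, end));
            i = end;
        }
        return chunks;
    }

    public static void main(String[] args) {
        System.out.println(chunk("abcdefg", 3)); // → [abc, def, g]
    }
}
```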
-
-
Method Detail
-
fromTokenCharPredicate
public static CharTokenizer fromTokenCharPredicate(IntPredicate tokenCharPredicate)
Creates a new instance of CharTokenizer using a custom predicate, supplied as method reference or lambda expression. The predicate should return true for all valid token characters.

This factory is intended to be used with lambdas or method references. E.g., an elegant way to create an instance which behaves exactly as LetterTokenizer is:

Tokenizer tok = CharTokenizer.fromTokenCharPredicate(Character::isLetter);
-
fromTokenCharPredicate
public static CharTokenizer fromTokenCharPredicate(AttributeFactory factory, IntPredicate tokenCharPredicate)
Creates a new instance of CharTokenizer with the supplied attribute factory using a custom predicate, supplied as method reference or lambda expression. The predicate should return true for all valid token characters.

This factory is intended to be used with lambdas or method references. E.g., an elegant way to create an instance which behaves exactly as LetterTokenizer is:

Tokenizer tok = CharTokenizer.fromTokenCharPredicate(factory, Character::isLetter);
-
fromSeparatorCharPredicate
public static CharTokenizer fromSeparatorCharPredicate(IntPredicate separatorCharPredicate)
Creates a new instance of CharTokenizer using a custom predicate, supplied as method reference or lambda expression. The predicate should return true for all valid token separator characters. This method is provided for convenience to easily use predicates that are negated (they match the separator characters, not the token characters).

This factory is intended to be used with lambdas or method references. E.g., an elegant way to create an instance which behaves exactly as WhitespaceTokenizer is:

Tokenizer tok = CharTokenizer.fromSeparatorCharPredicate(Character::isWhitespace);
-
fromSeparatorCharPredicate
public static CharTokenizer fromSeparatorCharPredicate(AttributeFactory factory, IntPredicate separatorCharPredicate)
Creates a new instance of CharTokenizer with the supplied attribute factory using a custom predicate, supplied as method reference or lambda expression. The predicate should return true for all valid token separator characters.

This factory is intended to be used with lambdas or method references. E.g., an elegant way to create an instance which behaves exactly as WhitespaceTokenizer is:

Tokenizer tok = CharTokenizer.fromSeparatorCharPredicate(factory, Character::isWhitespace);
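A separator predicate is simply the logical negation of a token-character predicate, which IntPredicate.negate() expresses directly. The following plain-Java sketch (not using Lucene; the helper is illustrative) shows that splitting on "not a separator" yields whitespace-style tokenization:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

public class SeparatorSketch {
    // Same splitting sketch as before: keep maximal runs of token chars.
    static List<String> tokenize(String text, IntPredicate isTokenChar) {
        List<String> tokens = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        text.codePoints().forEach(cp -> {
            if (isTokenChar.test(cp)) {
                cur.appendCodePoint(cp);
            } else if (cur.length() > 0) {
                tokens.add(cur.toString());
                cur.setLength(0);
            }
        });
        if (cur.length() > 0) tokens.add(cur.toString());
        return tokens;
    }

    public static void main(String[] args) {
        IntPredicate isSeparator = Character::isWhitespace;
        // A separator predicate is the negation of a token-char predicate,
        // so both spellings classify every codepoint identically.
        System.out.println(tokenize("to be\tor not", isSeparator.negate()));
        // → [to, be, or, not]
    }
}
```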
-
isTokenChar
protected abstract boolean isTokenChar(int c)
Returns true iff a codepoint should be included in a token. This tokenizer emits, as tokens, maximal adjacent sequences of codepoints which satisfy this predicate. Codepoints for which this is false are used to define token boundaries and are not included in tokens.
-
incrementToken
public final boolean incrementToken() throws IOException

Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
- Specified by:
  incrementToken in class TokenStream
- Returns:
  false for end of stream; true otherwise
- Throws:
  IOException
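The consumption contract described above (reset first, loop on incrementToken until it returns false, then call end and close) can be sketched with a minimal stand-in for the stream API. This is plain Java with hypothetical types, not Lucene's classes; in real code the term would be read from a CharTermAttribute obtained once at construction:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ConsumerLoop {
    // Minimal stand-in for the TokenStream contract described above.
    interface MiniTokenStream extends AutoCloseable {
        void reset() throws IOException;
        boolean incrementToken() throws IOException; // false = end of stream
        void end() throws IOException;
        String term(); // stand-in for reading a CharTermAttribute
        void close() throws IOException;
    }

    // Toy producer backed by a fixed token list.
    static class ListTokenStream implements MiniTokenStream {
        private final List<String> tokens;
        private Iterator<String> it;
        private String current;

        ListTokenStream(List<String> tokens) { this.tokens = tokens; }
        public void reset() { it = tokens.iterator(); current = null; }
        public boolean incrementToken() {
            if (it.hasNext()) { current = it.next(); return true; }
            return false;
        }
        public void end() { /* end-of-stream work, e.g. final offset */ }
        public String term() { return current; }
        public void close() { }
    }

    // Canonical consumer pattern: reset, loop on incrementToken, then end.
    static List<String> consume(MiniTokenStream ts) throws Exception {
        List<String> out = new ArrayList<>();
        try (ts) {
            ts.reset();
            while (ts.incrementToken()) {
                out.add(ts.term());
            }
            ts.end();
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(consume(new ListTokenStream(List.of("a", "b"))));
        // → [a, b]
    }
}
```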
-
end
public final void end() throws IOException

Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.

This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g. in case one or more whitespace characters followed the last token but a WhitespaceTokenizer was used.

Additionally, any skipped positions (such as those removed by a stop filter) can be applied to the position increment, as can any adjustment of other attributes where the end-of-stream value may be important.

If you override this method, always call super.end().
- Overrides:
  end in class TokenStream
- Throws:
  IOException - If an I/O error occurs
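The whitespace case mentioned above can be made concrete with a small plain-Java illustration (no Lucene involved): with trailing separators, the end offset of the last token is smaller than the final offset, which covers everything the tokenizer consumed:

```java
public class FinalOffset {
    public static void main(String[] args) {
        String input = "hello world   "; // three trailing spaces
        // End offset of the last token ("world"): index just past the token.
        int lastTokenEnd = input.lastIndexOf("world") + "world".length();
        // The final offset reported at end-of-stream covers the whole input,
        // including the trailing whitespace the tokenizer consumed.
        int finalOffset = input.length();
        System.out.println(lastTokenEnd + " vs " + finalOffset); // → 11 vs 14
    }
}
```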
-
reset
public void reset() throws IOException

Description copied from class: TokenStream
This method is called by a consumer before it begins consumption using TokenStream.incrementToken().

Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

If you override this method, always call super.reset(); otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).
- Overrides:
  reset in class Tokenizer
- Throws:
  IOException
-
-