All Classes
| Class | Description |
| AfterEffect |
This class acts as the base class for the implementations of the first
normalization of the informative content in the DFR framework.
|
| AfterEffect.NoAfterEffect |
Implementation used when there is no aftereffect.
|
| AfterEffectB |
Model of the information gain based on the ratio of two Bernoulli processes.
|
| AfterEffectL |
Model of the information gain based on Laplace's law of succession.
|
| AlreadyClosedException |
This exception is thrown when there is an attempt to
access something that has already been closed.
|
| Analyzer |
An Analyzer builds TokenStreams, which analyze text.
|
| Analyzer.GlobalReuseStrategy |
Deprecated.
|
| Analyzer.PerFieldReuseStrategy |
Deprecated.
|
| Analyzer.ReuseStrategy |
|
| Analyzer.TokenStreamComponents |
This class encapsulates the outer components of a token stream.
|
| AnalyzerWrapper |
Extension to Analyzer suitable for Analyzers which wrap
other Analyzers.
|
| AppendingDeltaPackedLongBuffer |
Utility class to buffer a list of signed longs in memory.
|
| AppendingPackedLongBuffer |
Utility class to buffer a list of signed longs in memory.
|
| ArrayUtil |
Methods for manipulating arrays.
|
| AtomicReader |
AtomicReader is an abstract class, providing an interface for accessing an
index.
|
| AtomicReaderContext |
|
| Attribute |
Base interface for attributes.
|
| AttributeImpl |
|
| AttributeReflector |
|
| AttributeSource |
An AttributeSource contains a list of different AttributeImpls,
and methods to add and get them.
|
| AttributeSource.AttributeFactory |
|
| AttributeSource.State |
This class holds the state of an AttributeSource.
|
| Automaton |
Finite-state automaton with regular expression operations.
|
| AutomatonProvider |
|
| AutomatonQuery |
A Query that will match terms against a finite-state machine.
|
| AveragePayloadFunction |
Calculate the final score as the average score of all payloads seen.
|
| BaseCompositeReader<R extends IndexReader> |
Base class for implementing CompositeReaders based on an array
of sub-readers.
|
| BaseDirectory |
Base implementation for a concrete Directory.
|
| BasicAutomata |
Construction of basic automata.
|
| BasicModel |
This class acts as the base class for the specific basic model
implementations in the DFR framework.
|
| BasicModelBE |
Limiting form of the Bose-Einstein model.
|
| BasicModelD |
Implements the approximation of the binomial model with the divergence
for DFR.
|
| BasicModelG |
Geometric as limiting form of the Bose-Einstein model.
|
| BasicModelIF |
An approximation of the I(ne) model.
|
| BasicModelIn |
The basic tf-idf model of randomness.
|
| BasicModelIne |
Tf-idf model of randomness, based on a mixture of Poisson and inverse
document frequency.
|
| BasicModelP |
Implements the Poisson approximation for the binomial model for DFR.
|
| BasicOperations |
Basic automata operations.
|
| BasicStats |
Stores all statistics commonly used by ranking methods.
|
| BinaryDocValues |
A per-document byte[]
|
| BinaryDocValuesField |
Field that stores a per-document BytesRef value.
|
| Bits |
Interface for Bitset-like structures.
|
| Bits.MatchAllBits |
Bits impl of the specified length with all bits set.
|
| Bits.MatchNoBits |
Bits impl of the specified length with no bits set.
|
| BitsFilteredDocIdSet |
This implementation supplies a filtered DocIdSet, that excludes all
docids which are not in a Bits instance.
|
| BitUtil |
A variety of high efficiency bit twiddling routines.
|
| BlockPackedReader |
|
| BlockPackedReaderIterator |
|
| BlockPackedWriter |
A writer for large sequences of longs.
|
| BlockTermState |
|
| BlockTreeTermsReader |
A block-based terms index and dictionary that assigns
terms to variable length blocks according to how they
share prefixes.
|
| BlockTreeTermsReader.Stats |
|
| BlockTreeTermsWriter |
Block-based terms index and dictionary writer.
|
| BM25Similarity |
BM25 Similarity.
|
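The score that BM25Similarity computes has the classic BM25 shape. The sketch below is an illustrative plain-Java rendering with all statistics passed in directly as hypothetical parameters; it is not Lucene's Similarity API.

```java
// Illustrative BM25 term score. In Lucene the statistics come from the
// index and the norm is encoded compactly; here everything is explicit.
class Bm25Sketch {
    // Classic BM25 idf component.
    static double idf(long docCount, long docFreq) {
        return Math.log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5));
    }
    // Score of one term in one document; k1 and b are the usual free parameters.
    static double score(double tf, double docLen, double avgDocLen,
                        long docCount, long docFreq, double k1, double b) {
        double norm = k1 * ((1 - b) + b * docLen / avgDocLen);
        return idf(docCount, docFreq) * tf * (k1 + 1) / (tf + norm);
    }
}
```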
| BooleanClause |
A clause in a BooleanQuery.
|
| BooleanClause.Occur |
Specifies how clauses are to occur in matching documents.
|
| BooleanQuery |
A Query that matches documents matching boolean combinations of other
queries, e.g.
|
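At the level of matching-document sets, the BooleanClause.Occur semantics can be sketched as below. This is a hypothetical helper over plain sets: real BooleanQuery matching and scoring iterate postings and weigh SHOULD clauses into the score, which this sketch does not attempt.

```java
import java.util.HashSet;
import java.util.Set;

// Set-level sketch of MUST / SHOULD / MUST_NOT combination.
class BooleanSketch {
    static Set<Integer> combine(Set<Integer> must, Set<Integer> should, Set<Integer> mustNot) {
        Set<Integer> result;
        if (!must.isEmpty()) {
            result = new HashSet<>(must);   // every MUST clause is required; SHOULD only boosts scores
        } else {
            result = new HashSet<>(should); // with no MUST clause, at least one SHOULD must match
        }
        result.removeAll(mustNot);          // MUST_NOT always excludes
        return result;
    }
    // small convenience for building sets of doc ids
    static Set<Integer> setOf(int... xs) {
        Set<Integer> s = new HashSet<>();
        for (int x : xs) s.add(x);
        return s;
    }
}
```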
| BooleanQuery.TooManyClauses |
|
| BoostAttribute |
|
| BoostAttributeImpl |
|
| BroadWord |
Methods and constants inspired by the article
"Broadword Implementation of Rank/Select Queries" by Sebastiano Vigna, January 30, 2012:
algorithm 1: BroadWord.bitCount(long), count of set bits in a long
algorithm 2: BroadWord.select(long, int), selection of a set bit in a long,
bytewise signed smaller < 8 operator: BroadWord.smallerUpTo7_8(long,long).
|
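Algorithm 1 (count of set bits in a long) has a well-known branch-free "broadword" form; the sketch below is the textbook SWAR version in that spirit, not necessarily Lucene's exact code.

```java
// Branch-free population count over a 64-bit word (SWAR technique):
// pairs, then nibbles, then bytes are summed in parallel, and a final
// multiply accumulates the byte counts into the top byte.
class BroadWordSketch {
    static long popcount(long x) {
        x = x - ((x >>> 1) & 0x5555555555555555L);                     // 2-bit sums
        x = (x & 0x3333333333333333L) + ((x >>> 2) & 0x3333333333333333L); // 4-bit sums
        x = (x + (x >>> 4)) & 0x0f0f0f0f0f0f0f0fL;                     // 8-bit sums
        return (x * 0x0101010101010101L) >>> 56;                       // total in top byte
    }
}
```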
| BufferedIndexInput |
Base implementation class for buffered IndexInput.
|
| BufferedIndexOutput |
|
| Builder<T> |
Builds a minimal FST (maps an IntsRef term to an arbitrary
output) from pre-sorted terms with outputs.
|
| Builder.Arc<T> |
Expert: holds a pending (seen but not yet serialized) arc.
|
| Builder.FreezeTail<T> |
Expert: this is invoked by Builder whenever a suffix
is serialized.
|
| Builder.UnCompiledNode<T> |
Expert: holds a pending (seen but not yet serialized) Node.
|
| ByteArrayDataInput |
DataInput backed by a byte array.
|
| ByteArrayDataOutput |
DataOutput backed by a byte array.
|
| ByteBlockPool |
Class that Posting and PostingVector use to write byte
streams into shared fixed-size byte[] arrays.
|
| ByteBlockPool.Allocator |
Abstract class for allocating and freeing byte
blocks.
|
| ByteBlockPool.DirectAllocator |
|
| ByteBlockPool.DirectTrackingAllocator |
|
| ByteDocValuesField |
Deprecated.
|
| ByteRunAutomaton |
Automaton representation for matching UTF-8 byte[].
|
| ByteSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of bytes.
|
| BytesRef |
Represents byte[], as a slice (offset + length) into an
existing byte[].
|
| BytesRefFSTEnum<T> |
Enumerates all input (BytesRef) + output pairs in an
FST.
|
| BytesRefFSTEnum.InputOutput<T> |
Holds a single input (BytesRef) + output pair.
|
| BytesRefHash |
|
| BytesRefHash.BytesStartArray |
Manages allocation of the per-term addresses.
|
| BytesRefHash.DirectBytesStartArray |
|
| BytesRefHash.MaxBytesLengthExceededException |
|
| BytesRefIterator |
A simple iterator interface for BytesRef iteration.
|
| CachingCollector |
Caches all docs, and optionally also scores, coming from
a search, and is then able to replay them to another
collector.
|
| CachingTokenFilter |
This class can be used if the token attributes of a TokenStream
are intended to be consumed more than once.
|
| CachingWrapperFilter |
Wraps another Filter's result and caches it.
|
| CharacterRunAutomaton |
Automaton representation for matching char[].
|
| CharFilter |
Subclasses of CharFilter can be chained to filter a Reader.
They can be used as a Reader with additional offset
correction.
|
| CharSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of characters.
|
| CharsRef |
Represents char[], as a slice (offset + length) into an existing char[].
|
| CharTermAttribute |
The term text of a Token.
|
| CharTermAttributeImpl |
|
| CheckIndex |
Basic tool and API to check the health of an index and
write a new segments file that removes reference to
problematic segments.
|
| CheckIndex.Status |
|
| CheckIndex.Status.DocValuesStatus |
Status from testing DocValues
|
| CheckIndex.Status.FieldNormStatus |
Status from testing field norms.
|
| CheckIndex.Status.SegmentInfoStatus |
Holds the status of each segment in the index.
|
| CheckIndex.Status.StoredFieldStatus |
Status from testing stored fields.
|
| CheckIndex.Status.TermIndexStatus |
Status from testing term index.
|
| CheckIndex.Status.TermVectorStatus |
Status from testing term vectors.
|
| ChecksumIndexInput |
Reads bytes through to a primary IndexInput, computing
checksum as it goes.
|
| ChecksumIndexOutput |
Writes bytes through to a primary IndexOutput, computing
checksum.
|
| CloseableThreadLocal<T> |
Java's builtin ThreadLocal has a serious flaw:
it can take an arbitrarily long amount of time to
dereference the things you had stored in it, even once the
ThreadLocal instance itself is no longer referenced.
|
| Codec |
Encodes/decodes an inverted index segment.
|
| CodecUtil |
Utility class for reading and writing versioned headers.
|
| CollectionStatistics |
Contains statistics for a collection (field)
|
| CollectionTerminatedException |
|
| CollectionUtil |
Methods for manipulating (sorting) collections.
|
| Collector |
Expert: Collectors are primarily meant to be used to
gather raw results from a search, and implement sorting
or custom result filtering, collation, etc.
|
| CommandLineUtil |
Class containing some useful methods used by command line tools
|
| CompiledAutomaton |
Immutable class holding compiled details for a given
Automaton.
|
| CompiledAutomaton.AUTOMATON_TYPE |
Automata are compiled into different internal forms for the
most efficient execution depending upon the language they accept.
|
| ComplexExplanation |
Expert: Describes the score computation for document and query, and
can distinguish a match independent of a positive value.
|
| CompositeReader |
Instances of this reader type can only
be used to get stored fields from the underlying AtomicReaders,
but it is not possible to directly retrieve postings.
|
| CompositeReaderContext |
|
| CompoundFileDirectory |
Class for accessing a compound stream.
|
| CompoundFileDirectory.FileEntry |
Offset/Length for a slice inside of a compound file
|
| CompressingStoredFieldsFormat |
|
| CompressingStoredFieldsIndexReader |
|
| CompressingStoredFieldsIndexWriter |
Efficient index format for block-based Codecs.
|
| CompressingStoredFieldsReader |
|
| CompressingStoredFieldsWriter |
|
| CompressingTermVectorsFormat |
A TermVectorsFormat that compresses chunks of documents together in
order to improve the compression ratio.
|
| CompressingTermVectorsReader |
|
| CompressingTermVectorsWriter |
|
| CompressionMode |
A compression mode.
|
| CompressionTools |
Simple utility class providing static methods to
compress and decompress binary data for stored fields.
|
| Compressor |
A data compressor.
|
| ConcurrentMergeScheduler |
|
| Constants |
Some useful constants.
|
| ConstantScoreQuery |
A query that wraps another query or a filter and simply returns a constant score equal to the
query boost for every document that matches the filter or query.
|
| ControlledRealTimeReopenThread<T> |
Utility class that runs a thread to manage periodic
reopens of a ReferenceManager, with methods to wait for specific
index changes to become visible.
|
| CorruptIndexException |
This exception is thrown when Lucene detects
an inconsistency in the index.
|
| Counter |
Simple counter class
|
| DataInput |
Abstract base class for performing read operations of Lucene's low-level
data types.
|
| DataOutput |
Abstract base class for performing write operations of Lucene's low-level
data types.
|
| DateTools |
Provides support for converting dates to strings and vice-versa.
|
| DateTools.Resolution |
Specifies the time granularity.
|
| Decompressor |
A decompressor.
|
| DefaultSimilarity |
Expert: Default scoring implementation which encodes norm values as a single byte before being stored.
|
| DerefBytesDocValuesField |
Deprecated.
|
| DFRSimilarity |
Implements the divergence from randomness (DFR) framework
introduced in Gianni Amati and Cornelis Joost Van Rijsbergen.
|
| Directory |
A Directory is a flat list of files.
|
| DirectoryReader |
|
| DisjunctionMaxQuery |
A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum
score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries.
|
| Distribution |
The probabilistic distribution used to model term occurrence
in information-based models.
|
| DistributionLL |
Log-logistic distribution.
|
| DistributionSPL |
The smoothed power-law (SPL) distribution for the information-based framework
that is described in the original paper.
|
| DocIdBitSet |
Simple DocIdSet and DocIdSetIterator backed by a BitSet
|
| DocIdSet |
A DocIdSet contains a set of doc ids.
|
| DocIdSetIterator |
This abstract class defines methods to iterate over a set of non-decreasing
doc ids.
|
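The nextDoc()/advance() contract over non-decreasing doc ids can be illustrated with a toy iterator over a sorted array. The class and constant names below are illustrative only, not Lucene's API.

```java
// Minimal iterator over a sorted int[] of doc ids, mimicking the
// nextDoc()/advance() contract of a doc-id-set iterator.
class SortedDocIdIterator {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE; // sentinel for exhaustion
    private final int[] docs;
    private int pos = -1;
    SortedDocIdIterator(int[] sortedDocs) { this.docs = sortedDocs; }
    // Returns the next doc id, or NO_MORE_DOCS when exhausted.
    int nextDoc() {
        return (++pos < docs.length) ? docs[pos] : NO_MORE_DOCS;
    }
    // Advances to the first doc id >= target (linear scan for simplicity).
    int advance(int target) {
        while (++pos < docs.length) {
            if (docs[pos] >= target) return docs[pos];
        }
        return NO_MORE_DOCS;
    }
}
```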
| DocsAndPositionsEnum |
Also iterates through positions.
|
| DocsEnum |
Iterates through the documents and term freqs.
|
| DocTermOrds |
This class enables fast access to multiple term ords for
a specified field across all docIDs.
|
| DocTermOrdsRangeFilter |
A range filter built on top of a cached multi-valued term field (in FieldCache).
|
| DocTermOrdsRewriteMethod |
Rewrites MultiTermQueries into a filter, using DocTermOrds for term enumeration.
|
| Document |
Documents are the unit of indexing and search.
|
| DocumentStoredFieldVisitor |
|
| DocValuesConsumer |
Abstract API that consumes numeric, binary and
sorted docvalues.
|
| DocValuesFormat |
Encodes/decodes per-document values.
|
| DocValuesProducer |
Abstract API that produces numeric, binary and
sorted docvalues.
|
| DocValuesProducer.SortedDocsWithField |
|
| DocValuesProducer.SortedSetDocsWithField |
|
| DoubleBarrelLRUCache<K extends DoubleBarrelLRUCache.CloneableKey,V> |
Simple concurrent LRU cache, using a "double barrel"
approach where two ConcurrentHashMaps record entries.
|
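The "double barrel" idea can be sketched with two maps: reads check both barrels and promote hits, writes go to the primary, and when the primary fills up the barrels swap and the oldest entries are discarded wholesale. This is a hypothetical simplification, not the actual DoubleBarrelLRUCache code.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a double-barrel LRU: eviction happens in bulk by swapping
// barrels, so no per-entry LRU bookkeeping is needed.
class DoubleBarrelSketch<K, V> {
    private final int maxSize;
    private ConcurrentHashMap<K, V> primary = new ConcurrentHashMap<>();
    private ConcurrentHashMap<K, V> secondary = new ConcurrentHashMap<>();
    DoubleBarrelSketch(int maxSize) { this.maxSize = maxSize; }
    V get(K key) {
        V v = primary.get(key);
        if (v == null) {
            v = secondary.get(key);
            if (v != null) primary.put(key, v); // promote recently used entries
        }
        return v;
    }
    void put(K key, V value) {
        if (primary.size() >= maxSize) { // swap barrels, dropping the old secondary
            secondary = primary;
            primary = new ConcurrentHashMap<>();
        }
        primary.put(key, value);
    }
}
```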
| DoubleBarrelLRUCache.CloneableKey |
Object providing clone(); the key class must subclass this.
|
| DoubleDocValuesField |
|
| DoubleField |
Field that indexes double values
for efficient range filtering and sorting.
|
| EliasFanoDecoder |
|
| EliasFanoDocIdSet |
A DocIdSet in Elias-Fano encoding.
|
| EliasFanoEncoder |
Encodes a non-decreasing sequence of non-negative whole numbers in the Elias-Fano encoding
that was introduced in the 1970s by Peter Elias and Robert Fano.
|
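The core of the Elias-Fano representation is splitting each of n values bounded by u into l = floor(log2(u/n)) low bits, stored verbatim, and the remaining high bits, stored in unary. The toy sketch below (hypothetical method names) computes only the split, not the packed bit layout.

```java
// Toy Elias-Fano split: choose the low-bit width and separate each
// value into its low and high parts; the parts recombine losslessly.
class EliasFanoSketch {
    // Number of low bits: floor(log2(upperBound / count)), at least 0.
    static int numLowBits(long upperBound, long count) {
        int l = 0;
        while ((count << (l + 1)) <= upperBound) l++;
        return l;
    }
    static long lowBits(long x, int l) { return x & ((1L << l) - 1); }
    static long highBits(long x, int l) { return x >>> l; }
}
```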
| Explanation |
Expert: Describes the score computation for document and query.
|
| Field |
Expert: directly create a field for a document.
|
| Field.Index |
Deprecated.
|
| Field.Store |
Specifies whether and how a field should be stored.
|
| Field.TermVector |
Deprecated.
|
| FieldCache |
Expert: Maintains caches of term values.
|
| FieldCache.ByteParser |
Deprecated. |
| FieldCache.Bytes |
Field values as 8-bit signed bytes
|
| FieldCache.CacheEntry |
EXPERT: A unique Identifier/Description for each item in the FieldCache.
|
| FieldCache.CreationPlaceholder |
Placeholder indicating creation of this cache is currently in-progress.
|
| FieldCache.DoubleParser |
Interface to parse doubles from document fields.
|
| FieldCache.Doubles |
Field values as 64-bit doubles
|
| FieldCache.FloatParser |
Interface to parse floats from document fields.
|
| FieldCache.Floats |
Field values as 32-bit floats
|
| FieldCache.IntParser |
Interface to parse ints from document fields.
|
| FieldCache.Ints |
Field values as 32-bit signed integers
|
| FieldCache.LongParser |
Interface to parse longs from document fields.
|
| FieldCache.Longs |
Field values as 64-bit signed long integers
|
| FieldCache.Parser |
Marker interface as super-interface to all parsers.
|
| FieldCache.ShortParser |
Deprecated. |
| FieldCache.Shorts |
Field values as 16-bit signed shorts
|
| FieldCacheDocIdSet |
Base class for DocIdSet to be used with FieldCache.
|
| FieldCacheRangeFilter<T> |
A range filter built on top of a cached single term field (in FieldCache).
|
| FieldCacheRewriteMethod |
Rewrites MultiTermQueries into a filter, using the FieldCache for term enumeration.
|
| FieldCacheSanityChecker |
Provides methods for sanity checking that entries in the FieldCache
are not wasteful or inconsistent.
|
| FieldCacheSanityChecker.Insanity |
Simple container for a collection of related CacheEntry objects that
in conjunction with each other represent some "insane" usage of the
FieldCache.
|
| FieldCacheSanityChecker.InsanityType |
An Enumeration of the different types of "insane" behavior that
may be detected in a FieldCache.
|
| FieldCacheTermsFilter |
A Filter that only accepts documents whose single
term value in the specified field is contained in the
provided set of allowed terms.
|
| FieldComparator<T> |
Expert: a FieldComparator compares hits so as to determine their
sort order when collecting the top results with TopFieldCollector.
|
| FieldComparator.ByteComparator |
Deprecated. |
| FieldComparator.DocComparator |
Sorts by ascending docID
|
| FieldComparator.DoubleComparator |
|
| FieldComparator.FloatComparator |
|
| FieldComparator.IntComparator |
|
| FieldComparator.LongComparator |
|
| FieldComparator.NumericComparator<T extends Number> |
Base FieldComparator class for numeric types
|
| FieldComparator.RelevanceComparator |
Sorts by descending relevance.
|
| FieldComparator.ShortComparator |
Deprecated. |
| FieldComparator.TermOrdValComparator |
Sorts by field's natural Term sort order, using
ordinals.
|
| FieldComparator.TermValComparator |
Sorts by field's natural Term sort order.
|
| FieldComparatorSource |
|
| FieldDoc |
Expert: A ScoreDoc which also contains information about
how to sort the referenced document.
|
| FieldInfo |
Access to the Field Info file that describes document fields and whether or
not they are indexed.
|
| FieldInfo.DocValuesType |
DocValues types.
|
| FieldInfo.IndexOptions |
Controls how much information is stored in the postings lists.
|
| FieldInfos |
Collection of FieldInfos (accessible by number or by name).
|
| FieldInfosFormat |
|
| FieldInfosReader |
|
| FieldInfosWriter |
|
| FieldInvertState |
This class tracks the number and position / offset parameters of terms
being added to the index.
|
| FieldMaskingSpanQuery |
Wrapper to allow SpanQuery objects to participate in composite
single-field SpanQueries by 'lying' about their search field.
|
| Fields |
Flex API for access to fields and terms
|
| FieldsConsumer |
Abstract API that consumes terms, doc, freq, prox, offset and
payloads postings.
|
| FieldsProducer |
Abstract API that produces terms, doc, freq, prox, offset and
payloads postings.
|
| FieldType |
Describes the properties of a field.
|
| FieldType.NumericType |
Data type of the numeric value
|
| FieldValueFilter |
A Filter that accepts all documents that have one or more values in a
given field.
|
| FieldValueHitQueue<T extends FieldValueHitQueue.Entry> |
Expert: A hit queue for sorting by hits by terms in more than one field.
|
| FieldValueHitQueue.Entry |
|
| FileSwitchDirectory |
Expert: A Directory instance that switches files between
two other Directory instances.
|
| Filter |
Abstract base class for restricting which documents may
be returned during searching.
|
| FilterAtomicReader |
A FilterAtomicReader contains another AtomicReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.
|
| FilterAtomicReader.FilterDocsAndPositionsEnum |
|
| FilterAtomicReader.FilterDocsEnum |
Base class for filtering DocsEnum implementations.
|
| FilterAtomicReader.FilterFields |
Base class for filtering Fields
implementations.
|
| FilterAtomicReader.FilterTerms |
Base class for filtering Terms implementations.
|
| FilterAtomicReader.FilterTermsEnum |
Base class for filtering TermsEnum implementations.
|
| FilterCodec |
A codec that forwards all its method calls to another codec.
|
| FilterDirectory |
Directory implementation that delegates calls to another directory.
|
| FilterDirectoryReader |
A FilterDirectoryReader wraps another DirectoryReader, allowing implementations
to transform or extend it.
|
| FilterDirectoryReader.StandardReaderWrapper |
A no-op SubReaderWrapper that simply returns the parent
DirectoryReader's original subreaders.
|
| FilterDirectoryReader.SubReaderWrapper |
Factory class passed to FilterDirectoryReader constructor that allows
subclasses to wrap the filtered DirectoryReader's subreaders.
|
| FilteredDocIdSet |
Abstract decorator class for a DocIdSet implementation
that provides on-demand filtering/validation
mechanism on a given DocIdSet.
|
| FilteredDocIdSetIterator |
Abstract decorator class of a DocIdSetIterator
implementation that provides on-demand filter/validation
mechanism on an underlying DocIdSetIterator.
|
| FilteredQuery |
A query that applies a filter to the results of another query.
|
| FilteredQuery.FilterStrategy |
Abstract class that defines how the filter (DocIdSet) is applied during document collection.
|
| FilteredQuery.RandomAccessFilterStrategy |
|
| FilteredTermsEnum |
Abstract class for enumerating a subset of all terms.
|
| FilteredTermsEnum.AcceptStatus |
Return value, if term should be accepted or the iteration should
END.
|
| FilterIterator<T> |
An Iterator implementation that filters elements with a boolean predicate.
|
| FixedBitSet |
|
| FixedBitSet.FixedBitSetIterator |
|
| FlagsAttribute |
This attribute can be used to pass different flags down the Tokenizer chain,
e.g.
|
| FlagsAttributeImpl |
|
| FloatDocValuesField |
|
| FloatField |
Field that indexes float values
for efficient range filtering and sorting.
|
| FlushInfo |
A FlushInfo provides information required for a FLUSH context.
|
| FSDirectory |
Base class for Directory implementations that store index
files in the file system.
|
| FSDirectory.FSIndexInput |
Base class for reading input from a RandomAccessFile
|
| FSDirectory.FSIndexOutput |
|
| FSLockFactory |
Base class for file system based locking implementation.
|
| FST<T> |
Represents a finite state machine (FST), using a
compact byte[] format.
|
| FST.Arc<T> |
Represents a single arc.
|
| FST.BytesReader |
Reads bytes stored in an FST.
|
| FST.INPUT_TYPE |
Specifies allowed range of each int input label for
this FST.
|
| FuzzyQuery |
Implements the fuzzy search query.
|
| FuzzyTermsEnum |
Subclass of TermsEnum for enumerating all terms that are similar
to the specified filter term.
|
| FuzzyTermsEnum.LevenshteinAutomataAttribute |
reuses compiled automata across different segments,
because they are independent of the index
|
| FuzzyTermsEnum.LevenshteinAutomataAttributeImpl |
Stores compiled automata as a list (indexed by edit distance)
|
| GrowableByteArrayDataOutput |
|
| GrowableWriter |
Implements PackedInts.Mutable, but grows the
bit count of the underlying packed ints on-demand.
|
| IBSimilarity |
Provides a framework for the family of information-based models, as described
in Stéphane Clinchant and Eric Gaussier.
|
| IndexableBinaryStringTools |
Deprecated.
|
| IndexableField |
Represents a single field for indexing.
|
| IndexableFieldType |
Describes the properties of a field.
|
| IndexCommit |
|
| IndexDeletionPolicy |
|
| IndexFileNames |
This class contains useful constants representing filenames and extensions
used by Lucene, as well as convenience methods for querying whether a file
name matches an extension (matchesExtension), and for generating file names
from a segment name, generation and extension (fileNameFromGeneration,
segmentFileName).
|
| IndexFormatTooNewException |
This exception is thrown when Lucene detects
an index that is newer than this Lucene version.
|
| IndexFormatTooOldException |
This exception is thrown when Lucene detects
an index that is too old for this Lucene version.
|
| IndexInput |
Abstract base class for input from a file in a Directory.
|
| IndexNotFoundException |
Signals that no index was found in the Directory.
|
| IndexOutput |
Abstract base class for output to a file in a Directory.
|
| IndexReader |
IndexReader is an abstract class, providing an interface for accessing an
index.
|
| IndexReader.ReaderClosedListener |
A custom listener that's invoked when the IndexReader
is closed.
|
| IndexReaderContext |
A struct-like class that represents a hierarchical relationship between
IndexReader instances.
|
| IndexSearcher |
Implements search over a single IndexReader.
|
| IndexSearcher.LeafSlice |
A class holding a subset of the IndexSearchers leaf contexts to be
executed within a single thread.
|
| IndexUpgrader |
This is an easy-to-use tool that upgrades all segments of an index from previous Lucene versions
to the current segment file format.
|
| IndexWriter |
An IndexWriter creates and maintains an index.
|
| IndexWriter.IndexReaderWarmer |
If DirectoryReader.open(IndexWriter,boolean) has
been called (ie, this writer is in near real-time
mode), then after a merge completes, this class can be
invoked to warm the reader on the newly merged
segment, before the merge commits.
|
| IndexWriterConfig |
Holds all the configuration that is used to create an IndexWriter.
|
| IndexWriterConfig.OpenMode |
|
| InfoStream |
|
| InPlaceMergeSorter |
Sorter implementation based on the merge-sort algorithm that merges
in place (no extra memory will be allocated).
|
| InputStreamDataInput |
|
| IntBlockPool |
|
| IntBlockPool.Allocator |
Abstract class for allocating and freeing int
blocks.
|
| IntBlockPool.DirectAllocator |
|
| IntBlockPool.SliceReader |
|
| IntBlockPool.SliceWriter |
|
| IntDocValuesField |
Deprecated.
|
| IntField |
Field that indexes int values
for efficient range filtering and sorting.
|
| IntroSorter |
Sorter implementation based on a variant of the quicksort algorithm
called introsort: when
the recursion level exceeds the log of the length of the array to sort, it
falls back to heapsort.
|
| IntSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of ints.
|
| IntsRef |
Represents int[], as a slice (offset + length) into an
existing int[].
|
| IntsRefFSTEnum<T> |
Enumerates all input (IntsRef) + output pairs in an
FST.
|
| IntsRefFSTEnum.InputOutput<T> |
Holds a single input (IntsRef) + output pair.
|
| IOContext |
IOContext holds additional details on the merge/search context.
|
| IOContext.Context |
Context is an enumeration that specifies the context in which the Directory
is being used.
|
| IOUtils |
This class emulates the new Java 7 "Try-With-Resources" statement.
|
| KeepOnlyLastCommitDeletionPolicy |
An IndexDeletionPolicy implementation that
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done.
|
| KeywordAttribute |
This attribute can be used to mark a token as a keyword.
|
| KeywordAttributeImpl |
|
| Lambda |
The lambda (λw) parameter in information-based
models.
|
| LambdaDF |
Computes lambda as (docFreq+1) / (numberOfDocuments+1).
|
| LambdaTTF |
Computes lambda as (totalTermFreq+1) / (numberOfDocuments+1).
|
| LevenshteinAutomata |
Class to construct DFAs that match a word within some edit distance.
|
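LevenshteinAutomata compiles a DFA accepting all words within a given edit distance of the query term. For reference, the distance itself is the classic dynamic program, sketched below; this sketch computes distances directly and does not build an automaton.

```java
// Classic Levenshtein edit distance via two rolling DP rows.
class EditDistanceSketch {
    static int distance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] cur = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j; // distance from empty prefix
        for (int i = 1; i <= a.length(); i++) {
            cur[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                // insertion, deletion, or substitution/match
                cur[j] = Math.min(Math.min(cur[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = cur; cur = tmp;
        }
        return prev[b.length()];
    }
}
```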
| LiveDocsFormat |
Format for live/deleted documents
|
| LiveFieldValues<S,T> |
Tracks live field values across NRT reader reopens.
|
| LiveIndexWriterConfig |
Holds all the configuration used by IndexWriter with few setters for
settings that can be changed on an IndexWriter instance "live".
|
| LMDirichletSimilarity |
Bayesian smoothing using Dirichlet priors.
|
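The Dirichlet-smoothed query likelihood for a single term has the shape below (Zhai and Lafferty's formulation). This is a plain-Java sketch with hypothetical inputs (mu and the term's collection probability supplied directly), not Lucene's Similarity API.

```java
// Dirichlet-smoothed language-model score for one query term:
// log(1 + tf / (mu * p(w|C))) plus a document-length normalization term.
class DirichletSketch {
    static double score(double tf, double docLen, double collectionProb, double mu) {
        return Math.log(1 + tf / (mu * collectionProb))
             + Math.log(mu / (docLen + mu));
    }
}
```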
| LMJelinekMercerSimilarity |
Language model based on the Jelinek-Mercer smoothing method.
|
| LMSimilarity |
Abstract superclass for language modeling Similarities.
|
| LMSimilarity.CollectionModel |
A strategy for computing the collection language model.
|
| LMSimilarity.DefaultCollectionModel |
Models p(w|C) as the number of occurrences of the term in the
collection, divided by the total number of tokens + 1.
|
| LMSimilarity.LMStats |
Stores the collection distribution of the current term.
|
| Lock |
An interprocess mutex lock.
|
| Lock.With |
Utility class for executing code with exclusive access.
|
| LockFactory |
Base class for Locking implementation.
|
| LockObtainFailedException |
This exception is thrown when the write.lock
could not be acquired.
|
| LockReleaseFailedException |
This exception is thrown when the write.lock
could not be released.
|
| LockStressTest |
Simple standalone tool that forever acquires & releases a
lock using a specific LockFactory.
|
| LockVerifyServer |
|
| LogByteSizeMergePolicy |
This is a LogMergePolicy that measures size of a
segment as the total byte size of the segment's files.
|
| LogDocMergePolicy |
This is a LogMergePolicy that measures size of a
segment as the number of documents (not taking deletions
into account).
|
| LogMergePolicy |
This class implements a MergePolicy that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.
|
| LongBitSet |
BitSet of fixed length (numBits), backed by an accessible long[]
(LongBitSet.getBits()), accessed with a long index.
|
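The idea behind a long-indexed fixed-length bitset is a plain long[] where bit i lives in word i/64 at position i%64. A minimal sketch (illustrative names, not LongBitSet's actual API):

```java
// Fixed-length bitset over a long[] addressed by a long index.
class LongBitSetSketch {
    private final long[] bits;
    LongBitSetSketch(long numBits) {
        bits = new long[(int) ((numBits + 63) >>> 6)]; // one word per 64 bits
    }
    void set(long index) {
        bits[(int) (index >>> 6)] |= 1L << (index & 63);
    }
    boolean get(long index) {
        return (bits[(int) (index >>> 6)] & (1L << (index & 63))) != 0;
    }
}
```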
| LongDocValuesField |
Deprecated.
|
| LongField |
Field that indexes long values
for efficient range filtering and sorting.
|
| LongsRef |
Represents long[], as a slice (offset + length) into an
existing long[].
|
| LongValues |
Abstraction over an array of longs.
|
| Lucene3xCodec |
Deprecated.
|
| Lucene3xSegmentInfoFormat |
Deprecated.
|
| Lucene3xSegmentInfoReader |
Deprecated.
|
| Lucene40Codec |
Deprecated.
|
| Lucene40DocValuesFormat |
Deprecated.
|
| Lucene40FieldInfosFormat |
Deprecated.
|
| Lucene40LiveDocsFormat |
Lucene 4.0 Live Documents Format.
|
| Lucene40NormsFormat |
Deprecated.
|
| Lucene40PostingsBaseFormat |
Deprecated.
|
| Lucene40PostingsFormat |
Deprecated.
|
| Lucene40PostingsReader |
Deprecated.
|
| Lucene40SegmentInfoFormat |
Deprecated.
|
| Lucene40SegmentInfoReader |
Deprecated.
|
| Lucene40SegmentInfoWriter |
Deprecated. |
| Lucene40SkipListReader |
Deprecated.
|
| Lucene40StoredFieldsFormat |
Lucene 4.0 Stored Fields Format.
|
| Lucene40StoredFieldsReader |
Class responsible for access to stored document fields.
|
| Lucene40StoredFieldsWriter |
Class responsible for writing stored document fields.
|
| Lucene40TermVectorsFormat |
Lucene 4.0 Term Vectors format.
|
| Lucene40TermVectorsReader |
Lucene 4.0 Term Vectors reader.
|
| Lucene40TermVectorsWriter |
Lucene 4.0 Term Vectors writer.
|
| Lucene41Codec |
Deprecated.
|
| Lucene41PostingsBaseFormat |
|
| Lucene41PostingsFormat |
Lucene 4.1 postings format, which encodes postings in packed integer blocks
for fast decode.
|
| Lucene41PostingsReader |
Concrete class that reads the docId (and possibly freq, position, offset and
payload) lists in the postings format.
|
| Lucene41PostingsWriter |
Concrete class that writes the docId (and possibly freq, position, offset and
payload) lists in the postings format.
|
| Lucene41StoredFieldsFormat |
Lucene 4.1 stored fields format.
|
| Lucene42Codec |
Deprecated.
|
| Lucene42DocValuesFormat |
Deprecated.
|
| Lucene42FieldInfosFormat |
Deprecated.
|
| Lucene42NormsFormat |
Lucene 4.2 score normalization format.
|
| Lucene42TermVectorsFormat |
|
| Lucene45Codec |
Deprecated.
|
| Lucene45DocValuesConsumer |
|
| Lucene45DocValuesFormat |
Lucene 4.5 DocValues format.
|
| Lucene45DocValuesProducer |
|
| Lucene45DocValuesProducer.BinaryEntry |
metadata entry for a binary docvalues field
|
| Lucene45DocValuesProducer.NumericEntry |
metadata entry for a numeric docvalues field
|
| Lucene45DocValuesProducer.SortedSetEntry |
metadata entry for a sorted-set docvalues field
|
| Lucene46Codec |
Implements the Lucene 4.6 index format, with configurable per-field postings
and docvalues formats.
|
| Lucene46FieldInfosFormat |
Lucene 4.6 Field Infos format.
|
| Lucene46SegmentInfoFormat |
Lucene 4.6 Segment info format.
|
| Lucene46SegmentInfoReader |
|
| Lucene46SegmentInfoWriter |
|
| LucenePackage |
Lucene's package information, including version.
|
| MapOfSets<K,V> |
Helper class for keeping Lists of Objects associated with keys.
|
| MappingMultiDocsAndPositionsEnum |
Exposes flex API, merged from flex API of sub-segments,
remapping docIDs (this is used for segment merging).
|
| MappingMultiDocsEnum |
Exposes flex API, merged from flex API of sub-segments,
remapping docIDs (this is used for segment merging).
|
| MatchAllDocsQuery |
A query that matches all documents.
|
| MathUtil |
Math static utility methods.
|
| MaxNonCompetitiveBoostAttribute |
|
| MaxNonCompetitiveBoostAttributeImpl |
|
| MaxPayloadFunction |
Returns the maximum payload score seen, else 1 if there are no payloads on the doc.
|
| MergedIterator<T extends Comparable<T>> |
Provides a merged sorted view from several sorted iterators.
|
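The merged-sorted-view idea behind MergedIterator can be sketched with a min-heap. The code below is an illustrative stand-in (not Lucene's implementation), merging plain sorted lists of integers:

```java
import java.util.*;

public class KWayMerge {
    /** Merges several already-sorted lists into one sorted list via a min-heap. */
    public static List<Integer> merge(List<List<Integer>> sources) {
        // Heap entry: {value, sourceIndex, offsetInSource}, ordered by value.
        PriorityQueue<int[]> heap =
            new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int i = 0; i < sources.size(); i++) {
            if (!sources.get(i).isEmpty()) {
                heap.add(new int[] {sources.get(i).get(0), i, 0});
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] e = heap.poll();
            out.add(e[0]); // smallest remaining head across all sources
            List<Integer> src = sources.get(e[1]);
            if (e[2] + 1 < src.size()) {
                // Advance the source this element came from.
                heap.add(new int[] {src.get(e[2] + 1), e[1], e[2] + 1});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(List.of(List.of(1, 4, 9), List.of(2, 3), List.of(5))));
        // [1, 2, 3, 4, 5, 9]
    }
}
```

Each of the k sources contributes at most one entry to the heap at a time, so merging n total elements costs O(n log k).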
| MergeInfo |
A MergeInfo provides information required for a MERGE context.
|
| MergePolicy |
Expert: a MergePolicy determines the sequence of
primitive merge operations.
|
| MergePolicy.DocMap |
A map of doc IDs.
|
| MergePolicy.MergeAbortedException |
|
| MergePolicy.MergeException |
Exception thrown if there are any problems while
executing a merge.
|
| MergePolicy.MergeSpecification |
A MergeSpecification instance provides the information
necessary to perform multiple merges.
|
| MergePolicy.MergeTrigger |
|
| MergePolicy.OneMerge |
OneMerge provides the information necessary to perform
an individual primitive merge operation, resulting in
a single new segment.
|
| MergeScheduler |
Expert: IndexWriter uses an instance
implementing this interface to execute the merges
selected by a MergePolicy.
|
| MergeState |
Holds common state used during segment merging.
|
| MergeState.CheckAbort |
Class for recording units of work when merging segments.
|
| MergeState.DocMap |
Remaps docids around deletes during merge
|
| MinimizationOperations |
Operations for minimizing automata.
|
| MinPayloadFunction |
Calculates the minimum payload seen
|
| MMapDirectory |
|
| MonotonicAppendingLongBuffer |
Utility class to buffer signed longs in memory, which is optimized for the
case where the sequence is monotonic, although it can encode any sequence of
arbitrary longs.
|
| MonotonicBlockPackedReader |
|
| MonotonicBlockPackedWriter |
A writer for large monotonically increasing sequences of positive longs.
|
| MultiCollector |
|
| MultiDocsAndPositionsEnum |
Exposes flex API, merged from flex API of sub-segments.
|
| MultiDocsAndPositionsEnum.EnumWithSlice |
|
| MultiDocsEnum |
|
| MultiDocsEnum.EnumWithSlice |
|
| MultiDocValues |
A wrapper for CompositeIndexReader providing access to DocValues.
|
| MultiDocValues.MultiSortedDocValues |
Implements SortedDocValues over n subs, using an OrdinalMap
|
| MultiDocValues.MultiSortedSetDocValues |
Implements SortedSetDocValues over n subs, using an OrdinalMap
|
| MultiDocValues.OrdinalMap |
maps per-segment ordinals to/from global ordinal space
|
| MultiFields |
Exposes flex API, merged from flex API of sub-segments.
|
| MultiLevelSkipListReader |
This abstract class reads skip lists with multiple levels.
|
| MultiLevelSkipListWriter |
This abstract class writes skip lists with multiple levels.
|
| MultiPhraseQuery |
|
| MultiReader |
|
| MultiSimilarity |
Implements the CombSUM method for combining evidence from multiple
similarity values described in: Joseph A.
|
| MultiTermQuery |
An abstract Query that matches documents
containing a subset of terms provided by a FilteredTermsEnum enumeration.
|
| MultiTermQuery.ConstantScoreAutoRewrite |
A rewrite method that tries to pick the best
constant-score rewrite method based on term and
document counts from the query.
|
| MultiTermQuery.RewriteMethod |
Abstract class that defines how the query is rewritten.
|
| MultiTermQuery.TopTermsBoostOnlyBooleanQueryRewrite |
A rewrite method that first translates each term into
BooleanClause.Occur.SHOULD clause in a BooleanQuery, but the scores
are only computed as the boost.
|
| MultiTermQuery.TopTermsScoringBooleanQueryRewrite |
A rewrite method that first translates each term into
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.
|
| MultiTermQueryWrapperFilter<Q extends MultiTermQuery> |
|
| MultiTerms |
Exposes flex API, merged from flex API of
sub-segments.
|
| MultiTermsEnum |
|
| MutableBits |
Extension of Bits for live documents.
|
| MutableValue |
Base class for all mutable values.
|
| MutableValueBool |
|
| MutableValueDate |
|
| MutableValueDouble |
|
| MutableValueFloat |
|
| MutableValueInt |
|
| MutableValueLong |
|
| MutableValueStr |
|
| NamedSPILoader<S extends NamedSPILoader.NamedSPI> |
Helper class for loading named SPIs from classpath (e.g.
|
| NamedSPILoader.NamedSPI |
|
| NamedThreadFactory |
A default ThreadFactory implementation that accepts the name prefix
of the created threads as a constructor argument.
|
| NativeFSLockFactory |
|
| NearSpansOrdered |
A Spans that is formed from the ordered subspans of a SpanNearQuery
where the subspans do not overlap and have a maximum slop between them.
|
| NearSpansUnordered |
|
| NGramPhraseQuery |
A PhraseQuery optimized for n-gram phrase queries.
|
| NIOFSDirectory |
An FSDirectory implementation that uses java.nio's FileChannel's
positional read, which allows multiple threads to read from the same file
without synchronizing.
|
| NIOFSDirectory.NIOFSIndexInput |
|
| NoDeletionPolicy |
|
| NoLockFactory |
|
| NoMergePolicy |
A MergePolicy which never returns merges to execute (hence its name).
|
| NoMergeScheduler |
|
| NoOutputs |
A null FST Outputs implementation; use this if
you just want to build an FSA.
|
| Normalization |
This class acts as the base class for the implementations of the term
frequency normalization methods in the DFR framework.
|
| Normalization.NoNormalization |
Implementation used when there is no normalization.
|
| NormalizationH1 |
Normalization model that assumes a uniform distribution of the term frequency.
|
| NormalizationH2 |
Normalization model in which the term frequency is inversely related to the
length.
|
| NormalizationH3 |
Dirichlet Priors normalization
|
| NormalizationZ |
Pareto-Zipf Normalization
|
| NormsFormat |
Encodes/decodes per-document score normalization values.
|
| NoSuchDirectoryException |
This exception is thrown when you try to list a
non-existent directory.
|
| NRTCachingDirectory |
Wraps a RAMDirectory
around any provided delegate directory, to
be used during NRT search.
|
| NumericDocValues |
A per-document numeric value.
|
| NumericDocValuesField |
Field that stores a per-document long value for scoring,
sorting or value retrieval.
|
| NumericRangeFilter<T extends Number> |
A Filter that only accepts numeric values within
a specified range.
|
| NumericRangeQuery<T extends Number> |
A Query that matches numeric values within a
specified range.
|
| NumericTokenStream |
|
| NumericTokenStream.NumericTermAttribute |
Expert: Use this attribute to get the details of the currently generated token.
|
| NumericTokenStream.NumericTermAttributeImpl |
|
| NumericUtils |
A helper class that generates prefix-encoded representations of numeric values and supplies converters to represent float/double values as sortable integers/longs.
|
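The sortable-integer conversion NumericUtils supplies rests on a bit trick: reinterpret the double's bits as a long, then flip the low 63 bits of negative values so that signed long order matches double order. A minimal sketch (the prefix-encoding/precision-step machinery is omitted):

```java
public class SortableDouble {
    /**
     * Maps a double to a long whose signed order matches the
     * double's natural order, so values can be compared as integers.
     */
    public static long toSortableLong(double val) {
        long bits = Double.doubleToLongBits(val);
        // Negative doubles have the sign bit set, but their remaining
        // bits sort in reverse; flipping those 63 bits fixes the order
        // while keeping all negatives below all positives.
        if (bits < 0) {
            bits ^= 0x7fffffffffffffffL;
        }
        return bits;
    }

    public static void main(String[] args) {
        System.out.println(toSortableLong(-2.0) < toSortableLong(-1.0)); // true
        System.out.println(toSortableLong(-1.0) < toSortableLong(0.0));  // true
        System.out.println(toSortableLong(0.0) < toSortableLong(1.5));   // true
    }
}
```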
| NumericUtils.IntRangeBuilder |
|
| NumericUtils.LongRangeBuilder |
|
| OffsetAttribute |
The start and end character offset of a Token.
|
| OffsetAttributeImpl |
|
| OpenBitSet |
An "open" BitSet implementation that allows direct access to the array of words
storing the bits.
|
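The word-level addressing OpenBitSet exposes follows the usual long[] layout: bit i lives in word i >> 6, at bit position i & 63. A minimal stand-alone sketch of that layout (not Lucene's class):

```java
public class WordBits {
    private final long[] words;

    public WordBits(int numBits) {
        // One 64-bit word per 64 bits, rounded up.
        words = new long[(numBits + 63) >> 6];
    }

    public void set(int i) {
        words[i >> 6] |= 1L << (i & 63);
    }

    public boolean get(int i) {
        return (words[i >> 6] & (1L << (i & 63))) != 0;
    }

    public static void main(String[] args) {
        WordBits bits = new WordBits(128);
        bits.set(0);
        bits.set(70); // word 1, bit 6
        System.out.println(bits.get(70)); // true
        System.out.println(bits.get(71)); // false
    }
}
```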
| OpenBitSetDISI |
|
| OpenBitSetIterator |
An iterator to iterate over set bits in an OpenBitSet.
|
| OrdTermState |
|
| Outputs<T> |
Represents the outputs for an FST, providing the basic
algebra required for building and traversing the FST.
|
| OutputStreamDataOutput |
|
| PackedDataInput |
A DataInput wrapper to read unaligned, variable-length packed
integers.
|
| PackedDataOutput |
A DataOutput wrapper to write unaligned, variable-length packed
integers.
|
| PackedInts |
Simplistic compression for arrays of unsigned long values.
|
| PackedInts.Decoder |
A decoder for packed integers.
|
| PackedInts.Encoder |
An encoder for packed integers.
|
| PackedInts.Format |
A format to write packed ints.
|
| PackedInts.FormatAndBits |
Simple class that holds a format and a number of bits per value.
|
| PackedInts.Header |
Header identifying the structure of a packed integer array.
|
| PackedInts.Mutable |
A packed integer array that can be modified.
|
| PackedInts.NullReader |
|
| PackedInts.Reader |
A read-only random access array of positive integers.
|
| PackedInts.ReaderIterator |
Run-once iterator interface, to decode previously saved PackedInts.
|
| PackedInts.Writer |
A write-once Writer.
|
| PackedLongDocValuesField |
Deprecated.
|
| PagedBytes |
Represents a logical byte[] as a series of pages.
|
| PagedBytes.Reader |
Provides methods to read BytesRefs from a frozen
PagedBytes.
|
| PagedGrowableWriter |
|
| PagedMutable |
|
| PairOutputs<A,B> |
An FST Outputs implementation, holding two other outputs.
|
| PairOutputs.Pair<A,B> |
Holds a single pair of two outputs.
|
| ParallelAtomicReader |
|
| ParallelCompositeReader |
|
| PayloadAttribute |
The payload of a Token.
|
| PayloadAttributeImpl |
|
| PayloadFunction |
An abstract class that defines a way for Payload*Query instances to transform
the cumulative effects of payload scores for a document.
|
| PayloadNearQuery |
This class is very similar to
SpanNearQuery except that it factors
in the value of the payloads located at each of the positions where the
TermSpans occurs.
|
| PayloadSpanUtil |
Experimental class to get set of payloads for most standard Lucene queries.
|
| PayloadTermQuery |
This class is very similar to
SpanTermQuery except that it factors
in the value of the payload located at each of the positions where the
Term occurs.
|
| PerFieldDocValuesFormat |
Enables per field docvalues support.
|
| PerFieldPostingsFormat |
Enables per field postings support.
|
| PerFieldSimilarityWrapper |
Provides the ability to use a different Similarity for different fields.
|
| PersistentSnapshotDeletionPolicy |
A SnapshotDeletionPolicy which adds a persistence layer so that
snapshots can be maintained across the life of an application.
|
| PForDeltaDocIdSet |
DocIdSet implementation based on pfor-delta encoding.
|
| PForDeltaDocIdSet.Builder |
|
| PhraseQuery |
A Query that matches documents containing a particular sequence of terms.
|
| PositionIncrementAttribute |
Determines the position of this token
relative to the previous Token in a TokenStream, used in phrase
searching.
|
| PositionIncrementAttributeImpl |
|
| PositionLengthAttribute |
Determines how many positions this
token spans.
|
| PositionLengthAttributeImpl |
|
| PositiveIntOutputs |
An FST Outputs implementation where each output
is a non-negative long value.
|
| PositiveScoresOnlyCollector |
A Collector implementation which wraps another
Collector and makes sure only documents with
scores > 0 are collected.
|
| PostingsBaseFormat |
|
| PostingsConsumer |
Abstract API that consumes postings for an individual term.
|
| PostingsFormat |
Encodes/decodes terms, postings, and proximity data.
|
| PostingsReaderBase |
The core terms dictionaries (BlockTermsReader,
BlockTreeTermsReader) interact with a single instance
of this class to manage creation of DocsEnum and
DocsAndPositionsEnum instances.
|
| PostingsWriterBase |
|
| PrefixFilter |
A Filter that restricts search results to values that have a matching prefix in a given
field.
|
| PrefixQuery |
A Query that matches documents containing terms with a specified prefix.
|
| PrefixTermsEnum |
Subclass of FilteredTermEnum for enumerating all terms that match the
specified prefix filter term.
|
| PrintStreamInfoStream |
InfoStream implementation over a PrintStream
such as System.out.
|
| PriorityQueue<T> |
A PriorityQueue maintains a partial ordering of its elements such that the
least element can always be found in constant time.
|
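A common use of such a heap is keeping only the N best entries seen so far, the pattern Lucene's PriorityQueue supports via insertWithOverflow. An illustrative sketch using java.util.PriorityQueue as a size-bounded min-heap:

```java
import java.util.*;

public class TopN {
    /** Returns the n largest values, using a min-heap of size n. */
    public static List<Integer> topN(int[] values, int n) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap
        for (int v : values) {
            if (heap.size() < n) {
                heap.add(v);
            } else if (v > heap.peek()) { // peek() is the least element, O(1)
                heap.poll();              // evict the current minimum
                heap.add(v);
            }
        }
        List<Integer> out = new ArrayList<>(heap);
        Collections.sort(out, Collections.reverseOrder());
        return out;
    }

    public static void main(String[] args) {
        System.out.println(topN(new int[] {5, 1, 9, 3, 7, 2}, 3)); // [9, 7, 5]
    }
}
```

Because the heap never exceeds size n, collecting the top n out of m values costs O(m log n) rather than O(m log m) for a full sort.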
| Query |
The abstract base class for queries.
|
| QueryBuilder |
Creates queries from the Analyzer chain.
|
| QueryWrapperFilter |
Constrains search results to only match those which also match a provided
query.
|
| RAMDirectory |
|
| RAMFile |
Represents a file in RAM as a list of byte[] buffers.
|
| RAMInputStream |
|
| RAMOutputStream |
|
| RamUsageEstimator |
Estimates the size (memory representation) of Java objects.
|
| RamUsageEstimator.JvmFeature |
JVM diagnostic features.
|
| RateLimitedDirectoryWrapper |
|
| RateLimiter |
Abstract base class to rate limit IO.
|
| RateLimiter.SimpleRateLimiter |
Simple class to rate limit IO.
|
| ReaderManager |
Utility class to safely share DirectoryReader instances across
multiple threads, while periodically reopening.
|
| ReaderSlice |
Subreader slice from a parent composite reader.
|
| ReaderUtil |
|
| RecyclingByteBlockAllocator |
|
| RecyclingIntBlockAllocator |
|
| RefCount<T> |
Manages reference counting for a given object.
|
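The incRef/decRef contract RefCount manages can be sketched as follows; this is an illustrative stand-in (not Lucene's class) showing that the resource is released exactly once, when the count drops to zero:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCounted<T> {
    private final T object;
    private final AtomicInteger count = new AtomicInteger(1); // creator holds one ref
    private volatile boolean released = false;

    public RefCounted(T object) { this.object = object; }

    public void incRef() { count.incrementAndGet(); }

    public void decRef() {
        if (count.decrementAndGet() == 0) {
            released = true; // a real implementation frees the resource here
        }
    }

    public T get() {
        if (released) throw new IllegalStateException("already released");
        return object;
    }

    public boolean isReleased() { return released; }

    public static void main(String[] args) {
        RefCounted<String> ref = new RefCounted<>("resource");
        ref.incRef();                          // second holder
        ref.decRef();                          // first holder done; still live
        System.out.println(ref.isReleased());  // false
        ref.decRef();                          // last holder done; released
        System.out.println(ref.isReleased());  // true
    }
}
```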
| ReferenceManager<G> |
Utility class to safely share instances of a certain type across multiple
threads, while periodically refreshing them.
|
| ReferenceManager.RefreshListener |
Use to receive notification when a refresh has
finished.
|
| RegExp |
Regular Expression extension to Automaton.
|
| RegexpQuery |
|
| RollingBuffer<T extends RollingBuffer.Resettable> |
Acts like forever growing T[], but internally uses a
circular buffer to reuse instances of T.
|
| RollingBuffer.Resettable |
Implement to reset an instance
|
| RunAutomaton |
Finite-state automaton with fast run operation.
|
| ScoreCachingWrappingScorer |
A Scorer which wraps another scorer and caches the score of the
current document.
|
| ScoreDoc |
|
| Scorer |
Expert: Common scoring functionality for different types of queries.
|
| Scorer.ChildScorer |
A child Scorer and its relationship to its parent.
|
| ScoringRewrite<Q extends Query> |
Base rewrite method that translates each term into a query, and keeps
the scores as computed by the query.
|
| SearcherFactory |
|
| SearcherLifetimeManager |
Keeps track of current plus old IndexSearchers, closing
the old ones once they have timed out.
|
| SearcherLifetimeManager.PruneByAge |
Simple pruner that drops any searcher that is older than the newest
searcher by more than the specified number of seconds.
|
| SearcherLifetimeManager.Pruner |
|
| SearcherManager |
Utility class to safely share IndexSearcher instances across multiple
threads, while periodically reopening.
|
| SegmentCommitInfo |
Embeds a [read-only] SegmentInfo and adds per-commit
fields.
|
| SegmentInfo |
Information about a segment such as its name, directory, and files related
to the segment.
|
| SegmentInfoFormat |
Expert: Controls the format of the
SegmentInfo (segment metadata file).
|
| SegmentInfoReader |
Specifies an API for classes that can read SegmentInfo information.
|
| SegmentInfos |
A collection of segmentInfo objects with methods for operating on
those segments in relation to the file system.
|
| SegmentInfos.FindSegmentsFile |
Utility class for executing code that needs to do
something with the current segments file.
|
| SegmentInfoWriter |
Specifies an API for classes that can write out SegmentInfo data.
|
| SegmentReader |
IndexReader implementation over a single segment.
|
| SegmentReader.CoreClosedListener |
Called when the shared core for this SegmentReader
is closed.
|
| SegmentReadState |
Holder class for common parameters used during read.
|
| SegmentWriteState |
Holder class for common parameters used during write.
|
| SentinelIntSet |
A native int hash-based set where one value is reserved to mean "EMPTY" internally.
|
| SerialMergeScheduler |
A MergeScheduler that simply does each merge
sequentially, using the current thread.
|
| SetOnce<T> |
A convenient class which offers a semi-immutable object wrapper
implementation which allows one to set the value of an object exactly once,
and retrieve it many times.
|
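The write-once semantics of SetOnce can be sketched with a compare-and-set flag. This is an illustrative stand-in; Lucene's class throws SetOnce.AlreadySetException on a second set:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class WriteOnce<T> {
    private final AtomicBoolean set = new AtomicBoolean(false);
    private volatile T value;

    /** Sets the value; throws if it was already set (even from another thread). */
    public void set(T v) {
        if (!set.compareAndSet(false, true)) {
            throw new IllegalStateException("value already set");
        }
        value = v;
    }

    /** May be read any number of times. */
    public T get() { return value; }

    public static void main(String[] args) {
        WriteOnce<Integer> w = new WriteOnce<>();
        w.set(42);
        System.out.println(w.get()); // 42
        try {
            w.set(7);
        } catch (IllegalStateException e) {
            System.out.println("second set rejected");
        }
    }
}
```

The compareAndSet makes the "exactly once" guarantee hold under concurrent callers, not just sequential ones.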
| SetOnce.AlreadySetException |
|
| ShortDocValuesField |
Deprecated.
|
| Similarity |
Similarity defines the components of Lucene scoring.
|
| Similarity.SimScorer |
|
| Similarity.SimWeight |
Stores the weight for a query across the indexed collection.
|
| SimilarityBase |
A subclass of Similarity that provides a simplified API for its
descendants.
|
| SimpleFSDirectory |
A straightforward implementation of FSDirectory
using java.io.RandomAccessFile.
|
| SimpleFSDirectory.SimpleFSIndexInput |
|
| SimpleFSLockFactory |
|
| SimpleMergedSegmentWarmer |
A very simple merged segment warmer that just ensures
data structures are initialized.
|
| SingleInstanceLockFactory |
Implements LockFactory for a single in-process instance,
meaning all locking will take place through this one instance.
|
| SingleTermsEnum |
Subclass of FilteredTermsEnum for enumerating a single term.
|
| SingletonSortedSetDocValues |
Exposes a multi-valued view over a single-valued instance.
|
| SloppyMath |
Math functions that trade off accuracy for speed.
|
| SlowCompositeReaderWrapper |
|
| SmallFloat |
Floating point numbers smaller than 32 bits.
|
| SnapshotDeletionPolicy |
|
| Sort |
Encapsulates sort criteria for returned hits.
|
| SortedBytesDocValuesField |
Deprecated.
|
| SortedDocValues |
A per-document byte[] with presorted values.
|
| SortedDocValuesField |
Field that stores
a per-document BytesRef value, indexed for
sorting.
|
| SortedSetDocValues |
A per-document set of presorted byte[] values.
|
| SortedSetDocValuesField |
Field that stores
a set of per-document BytesRef values, indexed for
faceting, grouping, joining.
|
| Sorter |
Base class for sorting algorithms implementations.
|
| SortField |
Stores information about how to sort documents by terms in an individual
field.
|
| SortField.Type |
Specifies the type of the terms to be sorted, or special types such as CUSTOM
|
| SpanFirstQuery |
Matches spans near the beginning of a field.
|
| SpanMultiTermQueryWrapper<Q extends MultiTermQuery> |
|
| SpanMultiTermQueryWrapper.SpanRewriteMethod |
Abstract class that defines how the query is rewritten.
|
| SpanMultiTermQueryWrapper.TopTermsSpanBooleanQueryRewrite |
A rewrite method that first translates each term into a SpanTermQuery in a
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.
|
| SpanNearPayloadCheckQuery |
Only return those matches that have a specific payload at
the given position.
|
| SpanNearQuery |
Matches spans which are near one another.
|
| SpanNotQuery |
Removes matches which overlap with another SpanQuery, or which fall
within x tokens before or y tokens after another SpanQuery.
|
| SpanOrQuery |
Matches the union of its clauses.
|
| SpanPayloadCheckQuery |
Only return those matches that have a specific payload at
the given position.
|
| SpanPositionCheckQuery |
Base class for filtering a SpanQuery based on the position of a match.
|
| SpanPositionCheckQuery.AcceptStatus |
|
| SpanPositionRangeQuery |
|
| SpanQuery |
Base class for span-based queries.
|
| Spans |
Expert: an enumeration of span matches.
|
| SpanScorer |
Public for extension only.
|
| SpanTermQuery |
Matches spans containing a term.
|
| SpanWeight |
Expert-only.
|
| SpecialOperations |
Special automata operations.
|
| SPIClassIterator<S> |
Helper class for loading SPI classes from classpath (META-INF files).
|
| State |
Automaton state.
|
| StatePair |
Pair of states.
|
| StoredField |
|
| StoredFieldsFormat |
Controls the format of stored fields
|
| StoredFieldsReader |
Codec API for reading stored fields.
|
| StoredFieldsWriter |
Codec API for writing stored fields:
|
| StoredFieldVisitor |
Expert: provides a low-level means of accessing the stored field
values in an index.
|
| StoredFieldVisitor.Status |
|
| StraightBytesDocValuesField |
Deprecated.
|
| StringField |
A field that is indexed but not tokenized: the entire
String value is indexed as a single token.
|
| StringHelper |
Methods for manipulating strings.
|
| Term |
A Term represents a word from text.
|
| TermContext |
|
| TermQuery |
A Query that matches documents containing a term.
|
| TermRangeFilter |
A Filter that restricts search results to a range of term
values in a given field.
|
| TermRangeQuery |
A Query that matches documents within a range of terms.
|
| TermRangeTermsEnum |
Subclass of FilteredTermEnum for enumerating all terms that match the
specified range parameters.
|
| Terms |
Access to the terms in a specific field.
|
| TermsConsumer |
Abstract API that consumes terms for an individual field.
|
| TermsEnum |
|
| TermsEnum.SeekStatus |
|
| TermSpans |
Expert:
Public for extension only
|
| TermState |
Encapsulates all required internal state to position the associated
TermsEnum without re-seeking.
|
| TermStatistics |
Contains statistics for a specific term
|
| TermStats |
Holder for per-term statistics.
|
| TermToBytesRefAttribute |
This attribute is requested by TermsHashPerField to index the contents.
|
| TermVectorsFormat |
Controls the format of term vectors
|
| TermVectorsReader |
Codec API for reading term vectors:
|
| TermVectorsWriter |
Codec API for writing term vectors:
|
| TextField |
A field that is indexed and tokenized, without term
vectors.
|
| TFIDFSimilarity |
Implementation of Similarity with the Vector Space Model.
|
| ThreadInterruptedException |
Thrown by Lucene on detecting that Thread.interrupt() had
been called.
|
| TieredMergePolicy |
Merges segments of approximately equal size, subject to
an allowed number of segments per tier.
|
| TieredMergePolicy.MergeScore |
Holds score and explanation for a single candidate
merge.
|
| TimeLimitingCollector |
The TimeLimitingCollector is used to timeout search requests that
take longer than the maximum allowed search time limit.
|
| TimeLimitingCollector.TimeExceededException |
Thrown when elapsed search time exceeds allowed search time.
|
| TimeLimitingCollector.TimerThread |
Thread used to timeout search requests.
|
| TimSorter |
|
| Token |
A Token is an occurrence of a term from the text of a field.
|
| Token.TokenAttributeFactory |
Expert: Creates a TokenAttributeFactory that returns Token instances for the basic
attributes and calls the given delegate factory for all other attributes.
|
| TokenFilter |
A TokenFilter is a TokenStream whose input is another TokenStream.
|
| Tokenizer |
A Tokenizer is a TokenStream whose input is a Reader.
|
| TokenStream |
A TokenStream enumerates the sequence of tokens, either from
Fields of a Document or from query text.
|
| TokenStreamToAutomaton |
Consumes a TokenStream and creates an Automaton
where the transition labels are UTF8 bytes (or Unicode
code points if unicodeArcs is true) from the TermToBytesRefAttribute.
|
| TopDocs |
|
| TopDocsCollector<T extends ScoreDoc> |
A base class for all collectors that return a TopDocs output.
|
| TopFieldCollector |
|
| TopFieldDocs |
|
| TopScoreDocCollector |
A Collector implementation that collects the top-scoring hits,
returning them as a TopDocs.
|
| TopTermsRewrite<Q extends Query> |
Base rewrite method for collecting only the top terms
via a priority queue.
|
| ToStringUtils |
|
| TotalHitCountCollector |
Just counts the total number of hits.
|
| TrackingDirectoryWrapper |
A delegating Directory that records which files were
written to and deleted.
|
| TrackingIndexWriter |
|
| Transition |
Automaton transition.
|
| TwoPhaseCommit |
An interface for implementations that support 2-phase commit.
|
| TwoPhaseCommitTool |
A utility for executing 2-phase commit on several objects.
|
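The protocol TwoPhaseCommitTool coordinates can be sketched as: prepare every participant first, and only if all prepares succeed, commit them all; any prepare failure rolls everything back. An illustrative stand-in with a hypothetical Participant interface (not Lucene's TwoPhaseCommit API):

```java
import java.util.*;

public class TwoPhaseDemo {
    public interface Participant {
        void prepareCommit() throws Exception;
        void commit();
        void rollback();
    }

    public static void execute(List<Participant> participants) {
        try {
            // Phase 1: any failure aborts before changes become visible.
            for (Participant p : participants) {
                p.prepareCommit();
            }
        } catch (Exception e) {
            for (Participant p : participants) {
                p.rollback();
            }
            return;
        }
        // Phase 2: every participant prepared successfully, so commit all.
        for (Participant p : participants) {
            p.commit();
        }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Participant ok = new Participant() {
            public void prepareCommit() { log.add("prepare"); }
            public void commit() { log.add("commit"); }
            public void rollback() { log.add("rollback"); }
        };
        execute(List.of(ok, ok));
        System.out.println(log); // [prepare, prepare, commit, commit]
    }
}
```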
| TwoPhaseCommitTool.CommitFailException |
|
| TwoPhaseCommitTool.PrepareCommitFailException |
|
| TypeAttribute |
A Token's lexical type.
|
| TypeAttributeImpl |
|
| UnicodeUtil |
Class to encode Java's UTF-16 char[] into UTF-8 byte[]
without always allocating a new byte[] as
String.getBytes("UTF-8") does.
|
| UpgradeIndexMergePolicy |
|
| UTF32ToUTF8 |
Converts UTF-32 automata to the equivalent UTF-8 representation.
|
| Util |
Static helper methods.
|
| Util.FSTPath<T> |
Represents a path in TopNSearcher.
|
| Util.MinResult<T> |
|
| Util.TopNSearcher<T> |
Utility class to find top N shortest paths from start
point(s).
|
| VerifyingLockFactory |
A LockFactory that wraps another LockFactory and verifies that each lock obtain/release
is "correct" (never results in two processes holding the
lock at the same time).
|
| Version |
Used by certain classes to match version compatibility
across releases of Lucene.
|
| VirtualMethod<C> |
A utility for keeping backwards compatibility on previously abstract methods
(or similar replacements).
|
| WAH8DocIdSet |
DocIdSet implementation based on word-aligned hybrid encoding on
words of 8 bits.
|
| WAH8DocIdSet.Builder |
|
| WeakIdentityMap<K,V> |
|
| Weight |
Expert: Calculate query weights and build query scorers.
|
| WildcardQuery |
Implements the wildcard search query.
|