`public abstract class ATNSimulator extends Object`
| Modifier and Type | Field and Description |
|---|---|
| `ATN` | `atn` |
| `static DFAState` | `ERROR` Must distinguish between a missing edge and an edge we know leads nowhere. |
| `protected PredictionContextCache` | `sharedContextCache` The context cache maps all `PredictionContext` objects that are `equals()` to a single cached copy. |
| Constructor and Description |
|---|
| `ATNSimulator(ATN atn, PredictionContextCache sharedContextCache)` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `clearDFA()` Clear the DFA cache used by the current instance. |
| `PredictionContext` | `getCachedContext(PredictionContext context)` |
| `PredictionContextCache` | `getSharedContextCache()` |
| `abstract void` | `reset()` |
`public static final DFAState ERROR`

Must distinguish between a missing edge and an edge we know leads nowhere.

`public final ATN atn`

`protected final PredictionContextCache sharedContextCache`
This cache makes a huge difference in memory and a small one in speed. For the Java grammar run on `java.*`, it dropped the final memory requirements from 25M to 16M. We don't store any of the full-context graphs in the DFA because they are limited to local context only, but apparently there's a lot of repetition there as well. We optimize the config contexts before storing the config set in the DFA states, literally rebuilding them with cached subgraphs only.

I also tried a cache for use during closure operations, one that was discarded after each `adaptivePredict()` call. It cost a little more time, I think, and doesn't reduce the overall footprint, so it's not worth the complexity.
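The canonicalization that the shared context cache performs can be sketched with a plain `HashMap`. This is an illustrative stand-in, not ANTLR's actual `PredictionContextCache` implementation (which caches `PredictionContext` nodes, not strings), but it shows the core idea: every object that is `equals()` to a previously seen one collapses to a single shared instance.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the canonicalizing-cache idea behind
// PredictionContextCache: map every value that is equals() to a
// single shared instance, so duplicated subgraphs collapse to one copy.
// (Hypothetical illustration; not the ANTLR class itself.)
public class CanonicalCache {
    private final Map<String, String> cache = new HashMap<>();

    // Return the cached copy if an equal value was seen before;
    // otherwise remember this instance and return it.
    public String getOrAdd(String value) {
        String existing = cache.get(value);
        if (existing != null) {
            return existing;
        }
        cache.put(value, value);
        return value;
    }

    public int size() {
        return cache.size();
    }

    public static void main(String[] args) {
        CanonicalCache cache = new CanonicalCache();
        String a = new String("ctx");
        String b = new String("ctx"); // equals() but a distinct instance
        String cachedA = cache.getOrAdd(a);
        String cachedB = cache.getOrAdd(b);
        // Both lookups yield the same canonical instance.
        System.out.println(cachedA == cachedB); // prints "true"
        System.out.println(cache.size());       // prints "1"
    }
}
```

Rebuilding config contexts "with cached subgraphs only" amounts to routing every node through a lookup like `getOrAdd` before storing it, so equal subgraphs are shared rather than duplicated.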
`public ATNSimulator(ATN atn, PredictionContextCache sharedContextCache)`

`public abstract void reset()`

`public void clearDFA()`

Clear the DFA cache used by the current instance.
Throws:

`UnsupportedOperationException` - if the current instance does not support clearing the DFA.

`public PredictionContextCache getSharedContextCache()`

`public PredictionContext getCachedContext(PredictionContext context)`