Uses of Class org.apache.lucene.analysis.Token
Packages that use Token

  org.apache.lucene.analysis           API and code to convert text into indexable tokens.
  org.apache.lucene.analysis.de        Support for indexing and searching of German text.
  org.apache.lucene.analysis.standard  A grammar-based tokenizer constructed with JavaCC.
Uses of Token in org.apache.lucene.analysis

Methods in org.apache.lucene.analysis that return Token:

  abstract Token  TokenStream.next()
      Returns the next token in the stream, or null at EOS.

  Token  PorterStemFilter.next()
      Returns the next input Token, after being stemmed.

  Token  CharTokenizer.next()
      Returns the next token in the stream, or null at EOS.

  Token  LowerCaseFilter.next()

  Token  StopFilter.next()
      Returns the next input Token whose termText() is not a stop word.
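The methods above all share one contract: a TokenStream (or a filter wrapped around one) hands out one Token per call to next() and signals end-of-stream by returning null. The sketch below illustrates that contract with simplified stand-in classes written for this example; ListTokenizer and the trimmed-down Token, TokenStream, and LowerCaseFilter here are illustrative stand-ins, not the actual Lucene implementations.

```java
import java.util.Arrays;
import java.util.Iterator;

// Simplified stand-in: a token carrying only its term text.
class Token {
    private final String termText;
    Token(String termText) { this.termText = termText; }
    String termText() { return termText; }
}

// Simplified stand-in for the abstract stream: next() returns the
// next Token, or null at EOS.
abstract class TokenStream {
    abstract Token next();
}

// Hypothetical tokenizer backed by a fixed word list, standing in
// for a real tokenizer such as CharTokenizer.
class ListTokenizer extends TokenStream {
    private final Iterator<String> words;
    ListTokenizer(String... words) {
        this.words = Arrays.asList(words).iterator();
    }
    Token next() {
        return words.hasNext() ? new Token(words.next()) : null;
    }
}

// A filter that lower-cases each token from its input stream,
// mimicking the role of LowerCaseFilter; it passes the null EOS
// marker through unchanged.
class LowerCaseFilter extends TokenStream {
    private final TokenStream input;
    LowerCaseFilter(TokenStream input) { this.input = input; }
    Token next() {
        Token t = input.next();
        return (t == null) ? null : new Token(t.termText().toLowerCase());
    }
}

public class Demo {
    public static void main(String[] args) {
        TokenStream ts = new LowerCaseFilter(new ListTokenizer("Hello", "Lucene"));
        // The canonical consumption loop: call next() until it returns null.
        for (Token t = ts.next(); t != null; t = ts.next()) {
            System.out.println(t.termText());
        }
    }
}
```

The same loop works unchanged for any of the streams listed on this page, since filters such as StopFilter or PorterStemFilter wrap another TokenStream and preserve the null-at-EOS convention.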
Uses of Token in org.apache.lucene.analysis.de

Methods in org.apache.lucene.analysis.de that return Token:

  Token  GermanStemFilter.next()
Uses of Token in org.apache.lucene.analysis.standard

Methods in org.apache.lucene.analysis.standard that return Token:

  Token  StandardTokenizer.next()
      Returns the next token in the stream, or null at EOS.

  Token  StandardFilter.next()
      Returns the next token in the stream, or null at EOS.