**Packages that use ITokenizer**

| Package | Description |
|---|---|
| com.hankcs.hanlp.classification.corpus | |
| com.hankcs.hanlp.classification.models | |
| com.hankcs.hanlp.classification.tokenizers | |
**Fields in com.hankcs.hanlp.classification.corpus declared as ITokenizer**

| Modifier and Type | Field and Description |
|---|---|
| protected ITokenizer | AbstractDataSet.tokenizer |
**Methods in com.hankcs.hanlp.classification.corpus that return ITokenizer**

| Modifier and Type | Method and Description |
|---|---|
| ITokenizer | IDataSet.getTokenizer() Gets the tokenizer. |
| ITokenizer | AbstractDataSet.getTokenizer() |
**Methods in com.hankcs.hanlp.classification.corpus with parameters of type ITokenizer**

| Modifier and Type | Method and Description |
|---|---|
| IDataSet | IDataSet.setTokenizer(ITokenizer tokenizer) Sets the tokenizer. |
| IDataSet | AbstractDataSet.setTokenizer(ITokenizer tokenizer) |
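Note that setTokenizer returns an IDataSet rather than void, which suggests a fluent, chainable configuration style. A minimal sketch of that pattern in plain Java (the Tokenizer and DataSet types below are simplified stand-ins for illustration, not HanLP's actual classes):

```java
// Simplified stand-ins illustrating the fluent setter pattern implied by
// IDataSet.setTokenizer(ITokenizer) returning IDataSet. Not HanLP's real types.
interface Tokenizer {
    String[] segment(String text);
}

class DataSet {
    private Tokenizer tokenizer;

    // Returning `this` lets callers chain configuration calls.
    DataSet setTokenizer(Tokenizer tokenizer) {
        this.tokenizer = tokenizer;
        return this;
    }

    Tokenizer getTokenizer() {
        return tokenizer;
    }
}

public class FluentSetterSketch {
    public static void main(String[] args) {
        // Configure the dataset with a whitespace tokenizer in one expression.
        Tokenizer whitespace = text -> text.split("\\s+");
        DataSet ds = new DataSet().setTokenizer(whitespace);
        System.out.println(ds.getTokenizer().segment("a b")[1]); // prints b
    }
}
```

Returning the receiver from a setter costs nothing and makes multi-step dataset setup read as a single pipeline.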
**Fields in com.hankcs.hanlp.classification.models declared as ITokenizer**

| Modifier and Type | Field and Description |
|---|---|
| ITokenizer | AbstractModel.tokenizer The tokenizer. |
**Classes in com.hankcs.hanlp.classification.tokenizers that implement ITokenizer**

| Modifier and Type | Class and Description |
|---|---|
| class | BigramTokenizer |
| class | BlankTokenizer A tokenizer that splits on \\s (e.g. whitespace characters). |
| class | HanLPTokenizer |
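The BlankTokenizer entry above describes splitting on the regex class \s. A minimal sketch of that behavior in plain Java (SimpleTokenizer is a simplified stand-in for the ITokenizer contract, not HanLP's actual interface):

```java
import java.util.Arrays;

// Simplified stand-in for the ITokenizer contract: text in, tokens out.
// HanLP's real interface may differ; this only illustrates the
// whitespace-splitting behavior described for BlankTokenizer.
interface SimpleTokenizer {
    String[] segment(String text);
}

public class BlankTokenizerSketch {
    public static void main(String[] args) {
        // Split on runs of \s (spaces, tabs, newlines), as BlankTokenizer does.
        SimpleTokenizer tokenizer = text -> text.trim().split("\\s+");
        String[] tokens = tokenizer.segment("hello  world\tfoo");
        System.out.println(Arrays.toString(tokens)); // prints [hello, world, foo]
    }
}
```

Trimming first avoids a leading empty token when the input starts with whitespace, and `\\s+` collapses consecutive separators into a single split point.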
Copyright © 2014–2021 码农场. All rights reserved.