This is a follow-up to my question about FEMMES.COM failing to tokenize correctly (How do I get french text FEMMES.COM to index as language variants of FEMMES). How can I guarantee that language analysis is applied to the tokens produced by the WordDelimiterTokenFilter?
Failing test case for the new scenario: #FEMMES2017 should tokenize as femmes, femme, 2017.
My approach of using a MappingCharFilter is incorrect and really just a band-aid. What is the right way to make this failing test case pass?
Current index configuration
"analyzers": [
{
"@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
"name": "text_language_search_custom_analyzer",
"tokenizer": "text_language_search_custom_analyzer_ms_tokenizer",
"tokenFilters": [
"lowercase",
"text_synonym_token_filter",
"asciifolding",
"language_word_delim_token_filter"
],
"charFilters": [
"html_strip",
"replace_punctuation_with_comma"
]
},
{
"@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
"name": "text_exact_search_Index_custom_analyzer",
"tokenizer": "text_exact_search_Index_custom_analyzer_tokenizer",
"tokenFilters": [
"lowercase",
"asciifolding"
],
"charFilters": []
}
],
"tokenizers": [
{
"@odata.type": "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer",
"name": "text_language_search_custom_analyzer_ms_tokenizer",
"maxTokenLength": 300,
"isSearchTokenizer": false,
"language": "french"
},
{
"@odata.type": "#Microsoft.Azure.Search.StandardTokenizerV2",
"name": "text_exact_search_Index_custom_analyzer_tokenizer",
"maxTokenLength": 300
}
],
"tokenFilters": [
{
"@odata.type": "#Microsoft.Azure.Search.SynonymTokenFilter",
"name": "text_synonym_token_filter",
"synonyms": [
"ca => ça",
"yeux => oeil",
"oeufs,oeuf,Œuf,Œufs,œuf,œufs",
"etre,ete"
],
"ignoreCase": true,
"expand": true
},
{
"@odata.type": "#Microsoft.Azure.Search.WordDelimiterTokenFilter",
"name": "language_word_delim_token_filter",
"generateWordParts": true,
"generateNumberParts": true,
"catenateWords": false,
"catenateNumbers": false,
"catenateAll": false,
"splitOnCaseChange": true,
"preserveOriginal": false,
"splitOnNumerics": true,
"stemEnglishPossessive": true,
"protectedWords": []
}
],
"charFilters": [
{
"@odata.type": "#Microsoft.Azure.Search.MappingCharFilter",
"name": "replace_punctuation_with_comma",
"mappings": [
"#=>,",
"$=>,",
"€=>,",
"£=>,",
"%=>,",
"&=>,",
"+=>,",
"/=>,",
"==>,",
"<=>,",
">=>,",
"@=>,",
"_=>,",
"µ=>,",
"§=>,",
"¤=>,",
"°=>,",
"!=>,",
"?=>,",
"\"=>,",
"'=>,",
"`=>,",
"~=>,",
"^=>,",
".=>,",
":=>,",
";=>,",
"(=>,",
")=>,",
"[=>,",
"]=>,",
"{=>,",
"}=>,",
"*=>,",
"-=>,"
]
}
]
Analyze API call
{
"analyzer": "text_language_search_custom_analyzer",
"text": "#femmes2017"
}
Analyze API response
{
"@odata.context": "https://one-adscope-search-eu-prod.search.windows.net/$metadata#Microsoft.Azure.Search.V2016_09_01.AnalyzeResult",
"tokens": [
{
"token": "femmes",
"startOffset": 1,
"endOffset": 7,
"position": 0
},
{
"token": "2017",
"startOffset": 7,
"endOffset": 11,
"position": 1
}
]
}
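Conceptually, what is missing is a French stemming step that runs after the word-delimiter split, so that the tokens it produces (femmes, 2017) still receive language analysis. The following is a toy Python simulation of that chain, not the Azure/Lucene implementation: the regex split and the single-rule stemmer are illustrative stand-ins, and keeping the original token alongside its stem mimics what a keyword-repeat style filter would do.

```python
import re

def word_delimiter(text):
    # Rough stand-in for WordDelimiterTokenFilter with splitOnNumerics:
    # keep runs of letters and runs of digits, drop everything else.
    return re.findall(r"[A-Za-z]+|[0-9]+", text)

def light_french_stem(token):
    # One illustrative rule (strip a trailing plural "s"); real light
    # French stemming is far richer than this.
    if len(token) > 3 and token.endswith("s"):
        return token[:-1]
    return token

def analyze(text):
    # Simulated chain: word delimiter -> lowercase -> stemmer that also
    # keeps the unstemmed token.
    tokens = []
    for part in word_delimiter(text):
        part = part.lower()
        tokens.append(part)
        stem = light_french_stem(part)
        if stem != part:
            tokens.append(stem)
    return tokens

print(analyze("#FEMMES2017"))  # ['femmes', 'femme', '2017']
```

With this ordering, the failing test case passes: the hash is dropped, the letter/digit boundary is split, and the stemmed variant is emitted alongside the surface form.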
This is where the site we are replacing has the advantage. Their Solr configuration allows this chain. –
You can always use a Lucene stemmer token filter after the WordDelimiter token filter, but keep in mind it will stem all tokens produced by the analyzer. – Yahnoosh
Do you mean the StemmerTokenFilter on this page? https://docs.microsoft.com/en-us/rest/api/searchservice/custom-analyzers-in-azure-search The description is "Language specific stemming filter." So that would only stem, with no lemmatization? I suppose there is no HunspellStemFilterFactory equivalent where I could just feed it the .dic and .aff files the old site has? –
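For reference, the stemmer suggested in the comments could be wired in as one more entry in the tokenFilters section, placed after language_word_delim_token_filter in the analyzer's filter list. This is a sketch, not a verified configuration: the filter name french_light_stemmer is mine, and the language value lightFrench (the light French stemmer option of Azure Search's StemmerTokenFilter) should be checked against the custom-analyzer documentation for the API version in use.

```json
{
  "@odata.type": "#Microsoft.Azure.Search.StemmerTokenFilter",
  "name": "french_light_stemmer",
  "language": "lightFrench"
}
```

The analyzer's tokenFilters array would then end with "language_word_delim_token_filter", "french_light_stemmer". As noted in the comments, every token the analyzer produces would then be stemmed, not only the parts emitted by the word delimiter.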