2017-05-02

This question is a follow-up to "How do I get french text FEMMES.COM to index as language variants of FEMMES", where the tokens generated for a fixed input still do not tokenize correctly. How can I ensure that language analysis is applied to the tokens produced by the WordDelimiterTokenFilter?

New failing test case: #FEMMES2017 should be tokenized as femmes, femme, 2017.

My approach using a MappingCharFilter is incorrect and really just a band-aid. What is the correct way to make this failing test case pass?

Current index configuration

"analyzers": [ 
    { 
     "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer", 
     "name": "text_language_search_custom_analyzer", 
     "tokenizer": "text_language_search_custom_analyzer_ms_tokenizer", 
     "tokenFilters": [ 
     "lowercase", 
     "text_synonym_token_filter", 
     "asciifolding", 
     "language_word_delim_token_filter" 
     ], 
     "charFilters": [ 
     "html_strip", 
     "replace_punctuation_with_comma" 
     ] 
    }, 
    { 
     "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer", 
     "name": "text_exact_search_Index_custom_analyzer", 
     "tokenizer": "text_exact_search_Index_custom_analyzer_tokenizer", 
     "tokenFilters": [ 
     "lowercase", 
     "asciifolding" 
     ], 
     "charFilters": [] 
    } 
    ], 
    "tokenizers": [ 
    { 
     "@odata.type": "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer", 
     "name": "text_language_search_custom_analyzer_ms_tokenizer", 
     "maxTokenLength": 300, 
     "isSearchTokenizer": false, 
     "language": "french" 
    }, 
    { 
     "@odata.type": "#Microsoft.Azure.Search.StandardTokenizerV2", 
     "name": "text_exact_search_Index_custom_analyzer_tokenizer", 
     "maxTokenLength": 300 
    } 
    ], 
    "tokenFilters": [ 
    { 
     "@odata.type": "#Microsoft.Azure.Search.SynonymTokenFilter", 
     "name": "text_synonym_token_filter", 
     "synonyms": [ 
     "ca => ça", 
     "yeux => oeil", 
     "oeufs,oeuf,Œuf,Œufs,œuf,œufs", 
     "etre,ete" 
     ], 
     "ignoreCase": true, 
     "expand": true 
    }, 
    { 
     "@odata.type": "#Microsoft.Azure.Search.WordDelimiterTokenFilter", 
     "name": "language_word_delim_token_filter", 
     "generateWordParts": true, 
     "generateNumberParts": true, 
     "catenateWords": false, 
     "catenateNumbers": false, 
     "catenateAll": false, 
     "splitOnCaseChange": true, 
     "preserveOriginal": false, 
     "splitOnNumerics": true, 
     "stemEnglishPossessive": true, 
     "protectedWords": [] 
    } 
    ], 
    "charFilters": [ 
    { 
     "@odata.type": "#Microsoft.Azure.Search.MappingCharFilter", 
     "name": "replace_punctuation_with_comma", 
     "mappings": [ 
     "#=>,", 
     "$=>,", 
     "€=>,", 
     "£=>,", 
     "%=>,", 
     "&=>,", 
     "+=>,", 
     "/=>,", 
     "==>,", 
     "<=>,", 
     ">=>,", 
     "@=>,", 
     "_=>,", 
     "µ=>,", 
     "§=>,", 
     "¤=>,", 
     "°=>,", 
     "!=>,", 
     "?=>,", 
     "\"=>,", 
     "'=>,", 
     "`=>,", 
     "~=>,", 
     "^=>,", 
     ".=>,", 
     ":=>,", 
     ";=>,", 
     "(=>,", 
     ")=>,", 
     "[=>,", 
     "]=>,", 
     "{=>,", 
     "}=>,", 
     "*=>,", 
     "-=>," 
     ] 
    } 
    ] 

Analyze API call

{ 
    "analyzer": "text_language_search_custom_analyzer", 
    "text": "#femmes2017" 
} 
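For reference, a request like the one above can be sent to the service's Analyze Text endpoint (api-version 2016-09-01). This is a minimal sketch using only Python's standard library; the service name, index name, and api-key are placeholders you would substitute with your own values.

```python
import json
import urllib.request

# Placeholder values -- substitute your own service, index, and admin key.
SERVICE = "https://<service-name>.search.windows.net"
INDEX = "<index-name>"
API_KEY = "<admin-api-key>"

def build_analyze_request(analyzer: str, text: str) -> urllib.request.Request:
    """Build the POST request for the Analyze Text API."""
    url = f"{SERVICE}/indexes/{INDEX}/analyze?api-version=2016-09-01"
    body = json.dumps({"analyzer": analyzer, "text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        method="POST",
    )

req = build_analyze_request("text_language_search_custom_analyzer", "#femmes2017")
# urllib.request.urlopen(req) would return the token list shown below.
```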

Analyze API response

{ 
    "@odata.context": "https://one-adscope-search-eu-prod.search.windows.net/$metadata#Microsoft.Azure.Search.V2016_09_01.AnalyzeResult", 
    "tokens": [ 
    { 
     "token": "femmes", 
     "startOffset": 1, 
     "endOffset": 7, 
     "position": 0 
    }, 
    { 
     "token": "2017", 
     "startOffset": 7, 
     "endOffset": 11, 
     "position": 1 
    } 
    ] 
} 
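The response illustrates the problem: the char filter replaces "#" with ",", the Microsoft tokenizer lemmatizes whole tokens, and only afterwards does the WordDelimiter filter split "femmes2017" on the letter/digit boundary, so the lemma "femme" is never produced for the split part. A rough pure-Python simulation of that pipeline (an illustration only, not the actual Lucene/Azure implementation):

```python
import re

def char_filter(text: str) -> str:
    # Stand-in for replace_punctuation_with_comma: map punctuation like '#' to ','
    return re.sub(r"[#$%&+/=<>@_!?.:;()\[\]{}*-]", ",", text)

def word_delimiter(token: str) -> list[str]:
    # Stand-in for splitOnNumerics: break on letter<->digit boundaries
    return [p for p in re.split(r"(?<=[a-z])(?=[0-9])|(?<=[0-9])(?=[a-z])", token) if p]

text = char_filter("#femmes2017".lower())             # -> ",femmes2017"
tokens = [t for t in re.split(r"[,\s]+", text) if t]  # crude tokenizer stand-in
parts = [p for t in tokens for p in word_delimiter(t)]
print(parts)  # ['femmes', '2017'] -- the lemma 'femme' never appears
```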

Answer


Input text is processed by the analyzer's components in order: char filters -> tokenizer -> token filters. In your case, the tokenizer performs lemmatization before the tokens are processed by the WordDelimiter token filter. Unfortunately, the Microsoft stemmers and lemmatizers are not available as standalone token filters that you could apply after the WordDelimiter token filter. You would need to add another token filter that normalizes the output of the WordDelimiter token filter according to your requirements. For this specific case only, you could move the SynonymTokenFilter to the end of the analyzer chain and add a mapping femmes => femme. It's obviously not a great workaround, as it is very specific to the data you are processing. Hopefully the information I provided will help you find a more general solution.
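The suggested workaround, moving the synonym expansion to the end of the token filter chain with an added data-specific rule, can be sketched by extending the simulation above. The `femmes => femmes,femme` rule here is the hypothetical mapping the answer proposes, mirroring `expand: true` behavior where the original token is kept:

```python
# Sketch of the workaround: run synonym expansion *after* word-delimiter
# splitting, with an extra data-specific rule "femmes => femmes,femme".
SYNONYMS = {
    "femmes": ["femmes", "femme"],  # hypothetical added rule (expand keeps original)
    "ca": ["ça"],                   # existing rule from text_synonym_token_filter
}

def synonym_filter(tokens: list[str]) -> list[str]:
    out = []
    for t in tokens:
        out.extend(SYNONYMS.get(t, [t]))
    return out

print(synonym_filter(["femmes", "2017"]))  # ['femmes', 'femme', '2017']
```

In the index definition this corresponds to listing `text_synonym_token_filter` after `language_word_delim_token_filter` in the analyzer's `tokenFilters` array.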


This is one advantage the site we are replacing has at this point. Their SOLR configuration allows this chain. –


You could always use a Lucene stemmer token filter after the WordDelimiter token filter, but keep in mind it will stem all tokens produced by the analyzer. – Yahnoosh


Do you mean the StemmerTokenFilter on this page? https://docs.microsoft.com/en-us/rest/api/searchservice/custom-analyzers-in-azure-search The description is "Language specific stemming filter", so that would only stem, with no lemmatization at all? I suppose there is no HunspellStemFilterFactory equivalent that I could just feed the .dic and .aff files the old site has? –
