discourse/lib/validators/search_tokenize_chinese_validator.rb
Alan Guo Xiang Tan 930f51e175 FEATURE: Split up text segmentation for Chinese and Japanese.
* Chinese segmentation will continue to rely on cppjieba
* Japanese segmentation will use our port of TinySegmenter
* Korean currently does not rely on segmentation which was dropped in c677877e4f
* SiteSetting.search_tokenize_chinese_japanese_korean has been split
into SiteSetting.search_tokenize_chinese and
SiteSetting.search_tokenize_japanese
2022-02-07 09:21:14 +08:00


# frozen_string_literal: true

# Validates changes to SiteSetting.search_tokenize_chinese. Chinese and
# Japanese tokenization are mutually exclusive, so this setting cannot be
# enabled while SiteSetting.search_tokenize_japanese is on.
class SearchTokenizeChineseValidator
  def initialize(opts = {})
  end

  # Rejects any change while Japanese tokenization is enabled.
  def valid_value?(value)
    !SiteSetting.search_tokenize_japanese
  end

  def error_message
    I18n.t("site_settings.errors.search_tokenize_japanese_enabled")
  end
end
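
The commit describes splitting the combined search_tokenize_chinese_japanese_korean
setting into separate Chinese and Japanese settings, with this validator keeping
the two mutually exclusive. The counterpart validator is not shown here; it
presumably mirrors this file with the check inverted. The sketch below is an
illustrative reconstruction under that assumption, not the committed source,
and the I18n error key is likewise assumed.

# frozen_string_literal: true

# Illustrative sketch (assumption, not the committed file): the inverse
# validator, which would reject changes to search_tokenize_japanese while
# SiteSetting.search_tokenize_chinese is enabled.
class SearchTokenizeJapaneseValidator
  def initialize(opts = {})
  end

  # Reject any change while Chinese tokenization is enabled.
  def valid_value?(value)
    !SiteSetting.search_tokenize_chinese
  end

  def error_message
    # Hypothetical error key, mirroring the one in the Chinese validator.
    I18n.t("site_settings.errors.search_tokenize_chinese_enabled")
  end
end

Together, the pair ensures that at most one of the two tokenization strategies
is active at a time: an admin must disable the currently enabled setting before
the other can be turned on.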