
FEATURE: much improved and simplified crawler detection

- phase one: does it match 'trident|webkit|gecko|chrome|safari|msie|opera'?
    if yes, it is possibly a real browser

- phase two: does it also match 'rss|bot|spider|crawler|facebook|archive|wayback|ping|monitor'?
    if so, it is probably a crawler after all

- anything that fails phase one is assumed to be a crawler outright
  (see the standalone sketch below)

Based off: https://gist.github.com/SamSaffron/6cfad7ea3e6df321ffb7a84f93720a53
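
For illustration, here is a minimal standalone Ruby sketch of the two-phase flow described above. The hard-coded BROWSER_TOKENS and BOT_TOKENS constants are stand-ins for the non_crawler_user_agents and crawler_user_agents site settings used by the real code:

# Minimal sketch of the two-phase check; the constants are illustrative
# stand-ins for the site settings, not Discourse defaults.
BROWSER_TOKENS = /trident|webkit|gecko|chrome|safari|msie|opera/i
BOT_TOKENS     = /rss|bot|spider|crawler|facebook|archive|wayback|ping|monitor/i

def crawler?(user_agent)
  if user_agent.match?(BROWSER_TOKENS)
    # phase two: browser-like tokens are present, but crawler tokens
    # still mark it as a bot (many bots impersonate browsers)
    user_agent.match?(BOT_TOKENS)
  else
    # no browser tokens at all: assume it is a crawler
    true
  end
end

crawler?("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36")
# => false
crawler?("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Safari/537.36")
# => true (webkit/gecko/safari pass phase one, 'bot' fails phase two)
crawler?("curl/7.54.0")
# => true (no browser tokens at all)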
Author: Sam
Date:   2018-01-16 15:41:13 +11:00
Parent: eaca2cb049
Commit: 7b562d2f46

3 changed files with 26 additions and 10 deletions


@@ -1,17 +1,23 @@
 module CrawlerDetection
 
-  # added 'ia_archiver' based on https://meta.discourse.org/t/unable-to-archive-discourse-pages-with-the-internet-archive/21232
-  # added 'Wayback Save Page' based on https://meta.discourse.org/t/unable-to-archive-discourse-with-the-internet-archive-save-page-now-button/22875
-  # added 'Swiftbot' based on https://meta.discourse.org/t/how-to-add-html-markup-or-meta-tags-for-external-search-engine/28220
   def self.to_matcher(string)
     escaped = string.split('|').map { |agent| Regexp.escape(agent) }.join('|')
-    Regexp.new(escaped)
+    Regexp.new(escaped, Regexp::IGNORECASE)
   end
 
   def self.crawler?(user_agent)
     # this is done to avoid regenerating regexes
+    @non_crawler_matchers ||= {}
     @matchers ||= {}
-    matcher = (@matchers[SiteSetting.crawler_user_agents] ||= to_matcher(SiteSetting.crawler_user_agents))
-    matcher.match?(user_agent)
+
+    possibly_real = (@non_crawler_matchers[SiteSetting.non_crawler_user_agents] ||= to_matcher(SiteSetting.non_crawler_user_agents))
+
+    if user_agent.match?(possibly_real)
+      known_bots = (@matchers[SiteSetting.crawler_user_agents] ||= to_matcher(SiteSetting.crawler_user_agents))
+      user_agent.match?(known_bots)
+    else
+      true
+    end
   end
 end
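
One design note worth calling out: the memoization above is keyed by the site setting's current value, so when an admin edits crawler_user_agents the next lookup misses the cache and compiles a fresh regex, with no explicit invalidation hook needed. A sketch of that pattern in isolation (SettingKeyedCache and matcher_for are hypothetical names, not part of Discourse):

# Sketch of value-keyed memoization: the setting's current value is the
# hash key, so a changed setting naturally compiles a new regex on the
# next call while stale entries simply go unused.
module SettingKeyedCache
  def self.matcher_for(setting_value)
    @matchers ||= {}
    @matchers[setting_value] ||= Regexp.new(
      setting_value.split('|').map { |t| Regexp.escape(t) }.join('|'),
      Regexp::IGNORECASE
    )
  end
end

SettingKeyedCache.matcher_for("bot|spider").match?("Googlebot") # => true
SettingKeyedCache.matcher_for("bot|spider")                     # same cached Regexp on repeat calls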