
Q: The SSL certificate I applied for is bound to the www domain, https://www.dtjxpj.com/, but at the moment only the non-www version can be opened. After enabling SSL on the virtual host, the www site won't load.
A: Hello, our current test shows the site is accessible. What error do you see when it fails to open? Please take a screenshot of the browser error, run ping www.dtjxpj.com, and attach both screenshots to this ticket so we can assist.
Thank you very much for your long-term support of our company!
Q: It seems to have been a caching issue. Also, how do I force HTTPS? The previous holder of this domain apparently used it for illegal purposes and it was reported so many times that opening it over HTTP always triggers a security-risk warning. That is why I added HTTPS, but visiting the domain still lands on HTTP first.
A: Hello, we can see that a 301 redirect from HTTP to HTTPS has already been added, and the site currently opens normally in our tests.
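(For reference, a forced-HTTPS rule of this kind in IIS URL Rewrite looks roughly like the minimal sketch below. It uses the standard {HTTPS} server variable; the customer's config quoted in the next question uses the host-specific {HTTP_FROM_HTTPS} variable instead, and www.dtjxpj.com is simply the domain from this ticket.)

<rule name="Force HTTPS" stopProcessing="true">
  <!-- Match every request -->
  <match url="^(.*)$" ignoreCase="false" />
  <conditions logicalGrouping="MatchAll">
    <!-- Only act on requests that did not arrive over HTTPS -->
    <add input="{HTTPS}" pattern="^OFF$" ignoreCase="true" />
  </conditions>
  <!-- Permanent (301) redirect to the HTTPS www site -->
  <action type="Redirect" url="https://www.dtjxpj.com/{R:1}" redirectType="Permanent" />
</rule>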
Q: Hello, how can I block all search engines other than the mainstream ones such as Baidu, Sogou, and Google? Is the rule below, from your help article https://www.west.cn/faq/list.asp?unid=820, the one that leaves Baidu, Google and the other major engines unblocked?

<rule name="Block spider">
  <match url="(^robots.txt$)" ignoreCase="false" negate="true" />
  <conditions>
    <add input="{HTTP_USER_AGENT}" pattern="SemrushBot|Webdup|AcoonBot|AhrefsBot|Ezooms|EdisterBot|EC2LinkFinder|jikespider|Purebot|MJ12bot|WangIDSpider|WBSearchBot|Wotbox|xbfMozilla|Yottaa|YandexBot|Jorgee|SWEBot|spbot|TurnitinBot-Agent|curl|perl|Python|Wget|Xenu|ZmEu" ignoreCase="true" />
  </conditions>
  <action type="AbortRequest" />
</rule>

I want to keep the Baidu, Sogou, and Google crawlers and block all other crawlers; how should I modify the code? Thank you! I also have referrer/domain filtering, IP blocking, and the 301-to-HTTPS redirect. Is the way I inserted the spider-blocking rule below correct?

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Block website access">
          <match url="^(.*)$" ignoreCase="false" />
          <conditions logicalGrouping="MatchAny">
            <add input="{HTTP_REFERER}" pattern="www.123.com" />
            <add input="{HTTP_REFERER}" pattern="www.111.org" />
            <add input="{HTTP_REFERER}" pattern="www.12.cn" />
          </conditions>
          <action type="AbortRequest" />
        </rule>
        <rule name="Block spider">
          <match url="(^robots.txt$)" ignoreCase="false" negate="true" />
          <conditions>
            <add input="{HTTP_USER_AGENT}" pattern="SemrushBot|Webdup|AcoonBot|AhrefsBot|Ezooms|EdisterBot|EC2LinkFinder|jikespider|Purebot|MJ12bot|WangIDSpider|WBSearchBot|Wotbox|xbfMozilla|Yottaa|YandexBot|Jorgee|SWEBot|spbot|TurnitinBot-Agent|curl|perl|Python|Wget|Xenu|ZmEu" ignoreCase="true" />
          </conditions>
          <action type="AbortRequest" />
        </rule>
        <rule name="Block website ip" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAny">
            <add input="%{HTTP_X_FORWARDED_FOR}&%{REMOTE_ADDR}&%{HTTP_X_Real_IP}" pattern="(127.0.0.1|127.0.0.1)" />
          </conditions>
          <action type="AbortRequest" />
        </rule>
        <rule name="301" stopProcessing="true">
          <match url="^(.*)$" ignoreCase="false" />
          <conditions logicalGrouping="MatchAll">
            <add input="{HTTP_FROM_HTTPS}" pattern="^on$" negate="true" />
          </conditions>
          <action type="Redirect" url="https://www.abc.com/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
A: Hello, that is correct: as long as you do not list the spiders you want to keep in the blocking pattern, they will not be blocked. The position where you inserted the rule is also correct.
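(If the goal were later tightened to block every crawler except the mainstream engines, one possible variant is sketched below. This is only a sketch, not the host's official rule: it matches generic crawler signatures in the User-Agent and then exempts the engines to keep; the keyword list is an assumption and can be extended as needed.)

<rule name="Block spider" stopProcessing="true">
  <!-- Let every crawler still fetch robots.txt -->
  <match url="(^robots.txt$)" ignoreCase="false" negate="true" />
  <conditions logicalGrouping="MatchAll">
    <!-- Any user agent that looks like a crawler... -->
    <add input="{HTTP_USER_AGENT}" pattern="spider|bot|crawl" ignoreCase="true" />
    <!-- ...except the mainstream engines to keep (Baidu, Sogou, Google) -->
    <add input="{HTTP_USER_AGENT}" pattern="Baiduspider|Sogou|Googlebot" ignoreCase="true" negate="true" />
  </conditions>
  <action type="AbortRequest" />
</rule>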
Thank you very much for your long-term support of our company!
Q: Hello, I will apply it and see how it goes. Thank you very much for your help!
A: Hello, you are welcome. Thank you very much for your long-term support, and please excuse any inconvenience this has caused. Thank you!


