For SEO purposes I want some files to be reachable at URLs like http://example.ca/robots.txt, but I'm running into a strange problem: the files are accessible through Firefox, yet Chrome and Googlebot cannot fetch them!
My routes:
# Map static resources from the /public folder to the /assets URL path
GET /assets/*file controllers.Assets.at(path="/public", file)
# Robots and Humans files
GET /$file<(robots|humans).txt> controllers.Assets.at(path="/public", file)
GET /$file<MJ12_576CD562EFAFA1742768BA479A39BFF9.txt> controllers.Assets.at(path="/public", file)
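As an aside, the regex in the `robots|humans` route leaves the dot unescaped, so `.` matches any character, not just a literal period. Play uses Java regex for these dynamic parts; the sketch below illustrates the same point with Python's `re` module (chosen only for a quick, runnable demonstration, assuming comparable regex semantics):

```python
import re

# The pattern from the route, with the dot left unescaped.
loose = re.compile(r"(robots|humans).txt")
# The same pattern with the dot escaped.
strict = re.compile(r"(robots|humans)\.txt")

print(bool(loose.fullmatch("robots.txt")))   # True: matches, as intended
print(bool(loose.fullmatch("robotsXtxt")))   # True: '.' matches any character
print(bool(strict.fullmatch("robotsXtxt"))) # False: only a literal '.' matches
```

This is unlikely to be the cause of the Chrome/Googlebot failure, but escaping the dot (`(robots|humans)\.txt`) makes the route match only the intended paths.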
Answer 0 (score: 21)
I'm not sure whether it will make a difference, but try:
GET /robots.txt controllers.Assets.at(path="/public", file="robots.txt")
GET /humans.txt controllers.Assets.at(path="/public", file="humans.txt")
GET /MJ12_576CD562EFAFA1742768BA479A39BFF9.txt controllers.Assets.at(path="/public", file="MJ12_576CD562EFAFA1742768BA479A39BFF9.txt")