New to programming here.
I can't scrape content from one subdomain of a site, even though other subdomains of the same site work fine. For example, I can scrape it.example.com, es.example.com, and pt.example.com, but when I try the same thing against fr.example.com or us.example.com I get:
2017-12-17 14:20:27 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6025
2017-12-17 14:21:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-17 14:22:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-17 14:22:38 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://fr.example.com/robots.txt> (failed 1 times): TCP connection timed out: 110: Connection timed out.
Here is the spider, some.py:
import scrapy
import itertools

class SomeSpider(scrapy.Spider):
    name = 'some'
    allowed_domains = ['https://fr.example.com']

    def start_requests(self):
        categories = ['thing1', 'thing2', 'thing3']
        base = "https://fr.example.com/things?t={category}&p={index}"
        for category, index in itertools.product(categories, range(1, 11)):
            yield scrapy.Request(base.format(category=category, index=index))

    def parse(self, response):
        response.selector.remove_namespaces()
        info1 = response.css("span.info1").extract()
        info2 = response.css("span.info2").extract()
        for item in zip(info1, info2):
            scraped_info = {
                'info1': item[0],
                'info2': item[1]
            }
            yield scraped_info
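For reference, the start_requests loop above just enumerates every (category, page) pair with itertools.product. A standalone sketch of the same URL generation, reusing the base URL and category names from the spider:

```python
import itertools

# Same inputs as the spider's start_requests.
categories = ['thing1', 'thing2', 'thing3']
base = "https://fr.example.com/things?t={category}&p={index}"

# itertools.product yields every (category, page) combination,
# so 3 categories x 10 pages = 30 start URLs.
urls = [base.format(category=category, index=index)
        for category, index in itertools.product(categories, range(1, 11))]

print(len(urls))  # 30
print(urls[0])    # https://fr.example.com/things?t=thing1&p=1
```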
What I have tried:
Running the spider from a different IP (same problem with the same domains)
Adding an IP pool (did not work)
A suggestion found somewhere on Stack Overflow: in settings.py, set
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'
ROBOTSTXT_OBEY = False
Any ideas are welcome!
Answer 0 (score: 0)
Try accessing the page with the requests package instead of scrapy, and see whether it works at all.
import requests

# The URL needs an explicit scheme, otherwise requests raises MissingSchema.
url = 'https://fr.example.com'
response = requests.get(url)
print(response.text)
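If the plain request also times out, it may be worth attaching the same browser User-Agent the question's settings.py used, so the requests test and the Scrapy spider are comparable. A minimal sketch (the header value is copied from the question; requests.Request(...).prepare() only builds the request without sending it over the network):

```python
import requests

# Browser User-Agent copied from the question's settings.py.
headers = {
    "User-Agent": (
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/55.0.2883.95 Safari/537.36"
    )
}
url = "https://fr.example.com"

# Build (but do not send) the request to confirm the header is attached;
# to actually fetch, call: requests.get(url, headers=headers, timeout=10)
prepared = requests.Request("GET", url, headers=headers).prepare()
print(prepared.headers["User-Agent"])
```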