I am trying to create a spider that starts on the Wikipedia page https://en.wikipedia.org/wiki/North_Korea_and_weapons_of_mass_destruction and then scrapes the text and image files of the linked pages. It seems to work, except that I only get the first response (it does not appear to follow the links to the subsequent pages). Any help would be appreciated.
Here is my code:
import scrapy
from scrapy.spiders import Request
from scrapy.linkextractors import LinkExtractor
import re

BASE_URL = 'http://en.wikipedia.org'
PROTOCOL = 'https:'

class MissleSpiderBio(scrapy.Spider):
    name = 'weapons_bio'
    allowed_domains = ['https://en.wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/...']  # URL above

    def parse(self, response):
        filename = response.url.split('/')[-1]
        h4s = response.xpath('//h4')
        text = response.css(
            "#mw-content-text > div > p:nth-child(2)::text").extract()
        if text:
            images = response.css(
                "#mw-content-text > div > table > tbody > "
                "tr:nth-child(2) > td > a > img::attr(src)").extract()
            yield {'body': text, 'image_urls': [PROTOCOL + images[0]]}
        else:
            yield {'empty': "not found"}
        for next_page in response.css(
                '#mw-content-text > div > ul > li > b > a::attr(href)').extract():
            print(BASE_URL + next_page)
            yield response.follow(BASE_URL + next_page, callback=self.parse)
Answer 0 (score: 1)
A couple of things you can try.

First, in

BASE_URL = 'http://en.wikipedia.org'

change the http to https, i.e. set it to

BASE_URL = 'https://en.wikipedia.org'

Second, comment out this line:

allowed_domains = ['https://en.wikipedia.org']

I think that is why it is not following the links.
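For reference, here is a minimal sketch of how the top of the spider might look after those two changes (the full start URL is taken from the question; everything else is unchanged). If you want to keep allowed_domains rather than remove it, note that Scrapy expects bare domain names there, not URLs with a scheme; a URL entry either never matches or is ignored with a warning, depending on the Scrapy version, which is why the followed requests get filtered as offsite while the start_urls request still goes through.

BASE_URL = 'https://en.wikipedia.org'  # https, matching the site
PROTOCOL = 'https:'

class MissleSpiderBio(scrapy.Spider):
    name = 'weapons_bio'
    # allowed_domains = ['https://en.wikipedia.org']  # URL with scheme: the
    # offsite filter drops the followed links; either leave it commented out
    # or use the bare domain:
    allowed_domains = ['en.wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/North_Korea_and_weapons_of_mass_destruction']

With either of those settings the response.follow requests made in parse should no longer be filtered, and the spider will go on to crawl the linked pages.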