How to scrape/extract all the content from a long list of URLs using scrapy?

Asked: 2016-10-31 21:23:47

Tags: python selenium web-scraping scrapy web-crawler

I want to visit, and then extract content from, a list of URLs. For example, consider this website: I want to extract the content of every posting. So, based on a posted answer, I tried the following:

# -*- coding: utf-8 -*-
import scrapy
from selenium import webdriver
import urllib

class Test(scrapy.Spider):
    name = "test"
    allowed_domains = ["https://sfbay.craigslist.org/search/jjj?employment_type=2"]
    start_urls = (
        'https://sfbay.craigslist.org/search/jjj?employment_type=2',
    )

    def parse(self, response):
        driver = webdriver.Firefox()
        driver.get(response.url)  # Selenium needs the URL string, not the Scrapy Response object
        links = driver.find_elements_by_xpath('''.//a[@class='hdrlnk']''')
        links = [x.get_attribute('href') for x in links]
        for x in links:
            print(x)

However, I don't understand how to scrape all the content from a long list of links in a single run, without specifying each target URL... Any idea how to do this? I also tried something similar to this video, and I'm still stuck...

UPDATE: Based on @quasarseeker's answer, I tried:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from test.items import TestItems


class TestSpider(CrawlSpider):

    name = "test"
    allowed_domains = ["https://sfbay.craigslist.org/search/jjj?employment_type=2"]
    start_urls = (
        'https://sfbay.craigslist.org/search/jjj?employment_type=2',
    )

    rules = (
        # Rule to parse through all pages
        Rule(LinkExtractor(allow=(), restrict_xpaths=("//a[@class='button next']",)),
             follow=True),
        # Rule to parse through all listings on a page
        Rule(LinkExtractor(allow=(), restrict_xpaths=("/p[@class='row']/a",)),
             callback="parse_obj", follow=True),
    )

    def parse_obj(self, response):
        item = TestItem()
        item['url'] = []
        for link in LinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            item['url'].append(link.url)
        print('\n\n\n\n**********************\n\n\n\n', item)
        return item

However, I'm not getting anything:

2016-11-03 08:46:24 [scrapy] INFO: Scrapy 1.2.0 started (bot: test)
2016-11-03 08:46:24 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'test.spiders', 'BOT_NAME': 'test', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['test.spiders']}
2016-11-03 08:46:24 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats', 'scrapy.extensions.corestats.CoreStats']
2016-11-03 08:46:24 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-11-03 08:46:24 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-11-03 08:46:24 [scrapy] INFO: Enabled item pipelines:
[]
2016-11-03 08:46:24 [scrapy] INFO: Spider opened
2016-11-03 08:46:24 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-11-03 08:46:24 [scrapy] DEBUG: Crawled (200) <GET https://sfbay.craigslist.org/robots.txt> (referer: None)
2016-11-03 08:46:25 [scrapy] DEBUG: Crawled (200) <GET https://sfbay.craigslist.org/search/jjj?employment_type=2> (referer: None)
2016-11-03 08:46:25 [scrapy] DEBUG: Filtered offsite request to 'sfbay.craigslist.org': <GET https://sfbay.craigslist.org/search/jjj?employment_type=2&s=100>
2016-11-03 08:46:25 [scrapy] INFO: Closing spider (finished)
2016-11-03 08:46:25 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 516,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 18481,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 11, 3, 14, 46, 25, 230629),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'offsite/domains': 1,
 'offsite/filtered': 1,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 11, 3, 14, 46, 24, 258110)}
2016-11-03 08:46:25 [scrapy] INFO: Spider closed (finished)

2 answers:

Answer 0 (score: 0):

I don't normally use Selenium (I use BeautifulSoup instead), so there may be a better solution.

You can get all the `a` tags with class `hdrlnk`, and then get the `href` from each of those tags. Once you have the list of all the links, you can visit each page and get its content.

from selenium import webdriver

driver = webdriver.Firefox()
driver.get('https://sfbay.craigslist.org/search/jjj?employment_type=2')

# get all `a` with `class=hdrlnk`
links = driver.find_elements_by_xpath('.//a[@class="hdrlnk"]')
#links = driver.find_elements_by_css_selector('a.hdrlnk')
#links = driver.find_elements_by_class_name('hdrlnk')

# get all `href` from all `a`
links = [x.get_attribute('href') for x in links]

# visit pages
for x in links:
    print(x)

    # follow link
    driver.get(x)

    # ... here get page content ...

    # ... EDIT ...

    # ... using `elements` (with `s`) ...
    #content = driver.find_elements_by_xpath('.//*[@id="postingbody"]')
    #content = driver.find_elements_by_css_selector('#postingbody')
    content = driver.find_elements_by_id('postingbody')
    #print([x.text for x in content])
    #print([x.text for x in content][0])
    print(''.join([x.text for x in content]))

    # ... using `element` (without `s`) ...
    #content = driver.find_element_by_xpath('.//*[@id="postingbody"]')
    #content = driver.find_element_by_css_selector('#postingbody')
    content = driver.find_element_by_id('postingbody')
    print(content.text)
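
If you would rather keep this inside the Scrapy spider from the question instead of running a standalone script, a rough, untested sketch of the same idea could look like the code below. It reuses the `hdrlnk` / `postingbody` selectors from above, and the field names `url` and `body` are purely illustrative:

# Sketch only: the same Selenium calls as above, wrapped in a Scrapy spider.
# Assumes every posting page has an element with id="postingbody".
import scrapy
from selenium import webdriver


class CraigslistSeleniumSpider(scrapy.Spider):
    name = "craigslist_selenium"
    start_urls = ['https://sfbay.craigslist.org/search/jjj?employment_type=2']

    def parse(self, response):
        driver = webdriver.Firefox()
        try:
            driver.get(response.url)
            # collect the href of every posting link on the listing page
            links = [a.get_attribute('href')
                     for a in driver.find_elements_by_xpath('.//a[@class="hdrlnk"]')]
            for url in links:
                driver.get(url)  # open each posting in the same browser
                body = driver.find_element_by_id('postingbody').text
                yield {'url': url, 'body': body}  # one item per posting
        finally:
            driver.quit()  # always close the browser, even on errors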

Answer 1 (score: -1):

This can be done easily with Scrapy alone. However, you need to modify your rules and point the LinkExtractor to the xpath of all the pages you want to scrape. For the web page given in the example, it would look like this:

 rules = (Rule(LinkExtractor(restrict_xpaths=("//p[@class='row']/a",)),
               callback='parse_obj', follow=True),)

This will enable the rule to parse every listing matched by the xpath -

 //p[@class='row']/a

and then call parse_obj() on each of them.

I also noticed that the listings page is paginated. If you want to parse every page of listings, you need to include a rule that first follows the pagination button, then follows the links on each page, and finally calls your function. Your code would end up looking something like this:

rules = (
    # Rule to parse through all pages
    Rule(LinkExtractor(allow=(), restrict_xpaths=("//a[@class='button next']",)),
         follow=True),
    # Rule to parse through all listings on a page
    Rule(LinkExtractor(allow=(), restrict_xpaths=("//p[@class='row']/a",)),
         callback="parse_obj", follow=True),
)