Finding links related to a specific keyword with BeautifulSoup

Asked: 2019-02-28 13:13:30

Tags: python web-scraping beautifulsoup web-crawler

I have to modify this code so that the crawler keeps only links that contain a specific keyword. In my case, I am scraping a newspaper site to find news related to the term "Brexit".

example of target link

I tried modifying the parse_links method so that it only keeps links (the 'a' tags) whose text contains "Brexit", but it doesn't seem to work.

Where should I put the condition?

import requests
from bs4 import BeautifulSoup
from queue import Queue, Empty
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urljoin, urlparse

class MultiThreadScraper:

    def __init__(self, base_url):

        self.base_url = base_url
        self.root_url = '{}://{}'.format(urlparse(self.base_url).scheme, urlparse(self.base_url).netloc)
        self.pool = ThreadPoolExecutor(max_workers=20)  # worker threads for fetching pages
        self.scraped_pages = set([])  # URLs already crawled
        self.to_crawl = Queue(10)  # frontier of URLs waiting to be crawled
        self.to_crawl.put(self.base_url)

    def parse_links(self, html):
        # Collect internal links from the page and queue the unseen ones
        soup = BeautifulSoup(html, 'html.parser')
        links = soup.find_all('a', href=True)
        for link in links:
            url = link['href']
            if url.startswith('/') or url.startswith(self.root_url):
                url = urljoin(self.root_url, url)
                if url not in self.scraped_pages:
                    self.to_crawl.put(url)

    def scrape_info(self, html):
        # Placeholder: page-content extraction would go here
        return

    def post_scrape_callback(self, res):
        result = res.result()
        if result and result.status_code == 200:
            self.parse_links(result.text)
            self.scrape_info(result.text)

    def scrape_page(self, url):
        try:
            res = requests.get(url, timeout=(3, 30))
            return res
        except requests.RequestException:
            return

    def run_scraper(self):
        while True:
            try:
                # Block for up to 60s waiting for a URL; a timeout ends the crawl
                target_url = self.to_crawl.get(timeout=60)
                if target_url not in self.scraped_pages:
                    print("Scraping URL: {}".format(target_url))
                    self.scraped_pages.add(target_url)
                    job = self.pool.submit(self.scrape_page, target_url)
                    job.add_done_callback(self.post_scrape_callback)
            except Empty:
                return
            except Exception as e:
                print(e)
                continue

if __name__ == '__main__':
    s = MultiThreadScraper("https://elpais.com/")
    s.run_scraper()

3 answers:

Answer 0 (score: 2)

You can get the element's text with the getText() method and check whether that string actually contains "Brexit":

if "Brexit" in link.getText().split():
     url = link["href"]

Answer 1 (score: 1)

You need to import the re module to match a specific text value. Try the code below.

import re

links = soup.find_all('a', text=re.compile("Brexit"))

This should return only the links whose text contains "Brexit".
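
As a rough sketch (assuming the filter replaces the find_all call in the question's parse_links; the re.IGNORECASE flag is an extra assumption, not part of the answer):

import re

def parse_links(self, html):
    soup = BeautifulSoup(html, 'html.parser')
    # text= matches the regex against a tag's own string, so links whose
    # headline sits inside nested tags (e.g. <a><span>...</span></a>) won't match
    links = soup.find_all('a', href=True, text=re.compile("Brexit", re.IGNORECASE))
    for link in links:
        url = link['href']
        if url.startswith('/') or url.startswith(self.root_url):
            url = urljoin(self.root_url, url)
            if url not in self.scraped_pages:
                self.to_crawl.put(url)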

Answer 2 (score: 1)

I added a check inside this function. See if this helps:

def parse_links(self, html):
    soup = BeautifulSoup(html, 'html.parser')
    links = soup.find_all('a', href=True)
    for link in links:
        if 'BREXIT' in link.text.upper():  # <-- new check: keep only links mentioning Brexit
            url = link['href']
            if url.startswith('/') or url.startswith(self.root_url):
                url = urljoin(self.root_url, url)
                if url not in self.scraped_pages:
                    self.to_crawl.put(url)
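
A note on this check: link.text concatenates the text of all descendants, so it still matches when the headline sits inside nested tags, and upper-casing both sides makes the comparison case-insensitive.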