How to avoid scraping duplicate content from pages

Date: 2020-10-30 19:35:18

Tags: python web-scraping beautifulsoup web-crawler

I'm having some difficulty saving the results I want to scrape. Please refer to this code (slightly modified for my specific case):

import bs4, requests
import pandas as pd
import re
import time

headline=[]
corpus=[]
dates=[]
tag=[]  

start=1
url="https://www.imolaoggi.it/category/cron/"

while True:
    r = requests.get(url)
    soup = bs4.BeautifulSoup(r.text, 'html')


    headlines=soup.find_all('h3')
    corpora=soup.find_all('p') 
    dates=soup.find_all('time', attrs={'class':'entry-date published updated'}) 
    tags=soup.find_all('span', attrs={'class':'cat-links'})
    for t in headlines:
        headline.append(t.text)
    
    for s in corpora:
        corpus.append(s.text)
        
    for d in date:
        dates.append(d.text)
    
    for c in tags:
        tag.append(c.text)
    if soup.find_all('a', attrs={'class':'page-numbers'}):
      url = f"https://www.imolaoggi.it/category/cron/page/{page}"
      page +=1
    else:
      break

Create the dataframe:

df = pd.DataFrame(list(zip(date, headline, tag, corpus)), 
               columns =['Date', 'Headlines', 'Tags', 'Corpus']) 

I would like to save all the pages from this link. The code works, but it seems to write the same two sentences into the corpus every time, i.e. on every page.

I think this is caused by the tag I selected:

corpora=soup.find_all('p') 

This causes the rows in my dataframe to be misaligned: the data are stored in plain lists, so once the corpus picks up extra paragraphs it drifts out of step with the other columns as scraping proceeds.
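A minimal sketch of why scoping each lookup to one article container keeps the rows aligned, rather than running `find_all('p')` over the whole page. Note the `<article>` markup below is invented for illustration only, not the site's actual structure:

```python
from bs4 import BeautifulSoup

# Hypothetical reduction of the page: each post lives in its own
# container, and a stray <p> exists outside any post.
html = """
<article><h3>Title A</h3><p>Body A</p></article>
<article><h3>Title B</h3><p>Body B</p></article>
<p>Site-wide footer paragraph that should not become a corpus row.</p>
"""

soup = BeautifulSoup(html, "html.parser")

rows = []
for art in soup.find_all("article"):            # scope queries to one post
    rows.append((art.h3.get_text(strip=True),   # headline of THIS post
                 art.p.get_text(strip=True)))   # paragraph of THIS post

print(rows)  # -> [('Title A', 'Body A'), ('Title B', 'Body B')]
```

Because every `(headline, paragraph)` pair is built from the same container, the stray `<p>` never enters a row, so no column can drift.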

I hope you can help me understand how to fix this.

2 Answers:

Answer 0 (score: 0)

import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor
import pandas as pd


def main(req, num):
    r = req.get("https://www.imolaoggi.it/category/cron/page/{}/".format(num))
    soup = BeautifulSoup(r.content, 'html.parser')
    goal = [(x.time.text, x.h3.a.text, x.select_one("span.cat-links").get_text(strip=True), x.p.get_text(strip=True))
            for x in soup.select("div.entry-content")]
    return goal


with ThreadPoolExecutor(max_workers=30) as executor:
    with requests.Session() as req:
        fs = [executor.submit(main, req, num) for num in range(1, 2937)]
        allin = []
        for f in fs:
            allin.extend(f.result())
        df = pd.DataFrame.from_records(
            allin, columns=["Date", "Title", "Tags", "Content"])
        print(df)
        df.to_csv("result.csv", index=False)
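A side note on the hard-coded `range(1, 2937)` above: the last page can instead be read from the pagination links, so the scraper does not break when the page count changes. This is a sketch under two unverified assumptions about the live site: the pagination anchors carry class `page-numbers` (as the question's own loop suggests), and page numbers may use an Italian-style thousands separator:

```python
from bs4 import BeautifulSoup

# Hypothetical reduction of WordPress-style pagination markup.
pagination = """
<a class="page-numbers" href="/category/cron/page/2/">2</a>
<a class="page-numbers" href="/category/cron/page/3/">3</a>
<a class="page-numbers" href="/category/cron/page/2936/">2.936</a>
"""

soup = BeautifulSoup(pagination, "html.parser")
numbers = []
for a in soup.select("a.page-numbers"):
    digits = a.get_text(strip=True).replace(".", "")  # "2.936" -> "2936"
    if digits.isdigit():                              # skip "Next" etc.
        numbers.append(int(digits))

last_page = max(numbers)
print(last_page)  # -> 2936
```

The resulting `last_page` could then feed `range(1, last_page + 1)` in place of the hard-coded constant.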

Answer 1 (score: 0)

You're close, but the selectors are off and some of the variables are badly named.

I would use CSS selectors like this:

# assuming `soup` has been created from the page as in the question
headline = []
corpus = []
date_list = []
tag_list = []

headlines = soup.select('h3.entry-title')
corpora = soup.select('div.entry-meta + p')
dates = soup.select('div.entry-meta span.posted-on')
tags = soup.select('span.cat-links')

for t in headlines:
    headline.append(t.text)

for s in corpora:
    corpus.append(s.text.strip())

for d in dates:
    date_list.append(d.text)

for c in tags:
    tag_list.append(c.text)

df = pd.DataFrame(list(zip(date_list, headline, tag_list, corpus)),
                  columns=['Date', 'Headlines', 'Tags', 'Corpus'])
df

Output:

    Date    Headlines   Tags    Corpus
0   30 Ottobre 2020     Roma: con spranga di ferro danneggia 50 auto i...   CRONACA, NEWS   Notte di vandalismi a Colli Albani dove un uom...
1   30 Ottobre 2020\n30 Ottobre 2020    Aggressione con machete: grave un 28enne, arre...   CRONACA, NEWS   Roma - Ha impugnato il suo machete e lo ha agi...
2   30 Ottobre 2020\n30 Ottobre 2020    Deep State e globalismo, Mons. Viganò scrive a...   CRONACA, NEWS   LETTERA APERTA\r\nAL PRESIDENTE DEGLI STATI UN...
3   30 Ottobre 2020     Meluzzi e Scandurra: “Sacrificare libertà per ...   CRONACA, NEWS   "Sacrificare la libertà per la sicurezza è un ...
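As a final note on why mismatched list lengths corrupt the rows instead of raising an error: `zip()` silently truncates to its shortest input, so a single stray paragraph shifts every later row and drops the surplus tail. A tiny illustration with made-up values:

```python
# Three parallel scrape lists; `corpora` has picked up one extra element.
dates = ["30 Ottobre 2020", "30 Ottobre 2020"]
headlines = ["Title A", "Title B"]
corpora = ["stray paragraph", "Body A", "Body B"]  # one element too many

rows = list(zip(dates, headlines, corpora))
print(rows)
# Row 0 pairs "Title A" with the stray paragraph, and "Body B" is
# silently lost - which is exactly the misalignment in the question.
```

This is why building each row from a single per-article container, as in the answers above, is safer than zipping four independently collected lists.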