I am trying to scrape the news pages of a German political party and store all the information in a DataFrame (Python beginner here). There is just one small problem when I want to store the full text, or even the date, in the DataFrame: it looks like only the last element of the text (the last &lt;p&gt;...&lt;/p&gt;) ends up in the row. I think the problem arises because something goes wrong with the iteration in the loops.
import pandas as pd
import requests
from bs4 import BeautifulSoup
from time import sleep
from random import randint
from time import time
import numpy as np
from urllib.request import urlopen
data = pd.DataFrame()
teaser = ()
title = []
content = ()
childrenUrls = []
mainPage = "https://www.fdp.de"
start_time = time()
counter = 0
#for i in list(map(lambda x: x+1, range(3))):
for i in range(3):
    counter = counter + 1
    sleep(randint(1, 3))
    elapsed_time = time() - start_time
    print('Request: {}; Frequency: {} requests/s'.format(counter, counter / elapsed_time))
    url = "https://www.fdp.de/seite/aktuelles?page=" + str(i)
    #print(url)
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')
    uls = soup.find_all('div', {'class': 'field-title'})
    for ul in uls:
        for li in ul.find_all('h2'):
            for link in li.find_all('a'):
                url = link.get('href')
                contents = link.text
                print(contents)
                childrenUrls = mainPage + url
                print(childrenUrls)
                childrenPages = urlopen(childrenUrls)
                soupCP = BeautifulSoup(childrenPages, 'html.parser')
                #content1 = soupCP.findAll('p').get_text()
                #print(content1)
                for content in soupCP.findAll('p'):
                    #for message in content.get('p'):
                    content = content.text.strip()
                    print(content)
                for teaser in soupCP.find_all('div', class_='field-teaser'):
                    teaser = teaser.text.strip()
                    print(teaser)
                for title in soupCP.find_all('title'):
                    title = title.text.strip()
                    print(title)
                df = pd.DataFrame(
                    {'teaser': teaser,
                     'title': title,
                     'content': content}, index=[counter])
                data = pd.concat([data, df])
#join(str(v) for v in value_list)
Answer 0 (score: 2):
You have to save the text from each loop in a list instead of in a plain string variable. As your code stands, every iteration redefines the variable's value, so the previously collected data is lost.
A good approach here is to use a list comprehension. Replace the last three for loops of your code with the following:
content = [x.text.strip() for x in soupCP.find_all('p')]
teaser = [x.text.strip() for x in soupCP.find_all('div', class_='field-teaser')]
title = [x.text.strip() for x in soupCP.find_all('title')]

df = pd.DataFrame(
    {'teaser': teaser,
     'title': title,
     'content': content}, index=[counter])
data = pd.concat([data, df])
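For a quick end-to-end check of the idea without hitting the network, here is a self-contained sketch on a small hand-written HTML snippet (the HTML, the ' '.join step, and index=[0] are illustrative assumptions, not part of the question). Joining each list into a single string keeps exactly one row per article page and avoids a length mismatch between the three lists when the DataFrame is built:

# Self-contained sketch of the fix on made-up HTML (not real fdp.de markup).
import pandas as pd
from bs4 import BeautifulSoup

html = """
<html><head><title>Example article</title></head>
<body>
  <div class="field-teaser">Short teaser text</div>
  <p>First paragraph.</p>
  <p>Second paragraph.</p>
  <p>Third paragraph.</p>
</body></html>
"""
soupCP = BeautifulSoup(html, 'html.parser')

# Collect every match in a list instead of overwriting a single variable.
content = [x.text.strip() for x in soupCP.find_all('p')]
teaser = [x.text.strip() for x in soupCP.find_all('div', class_='field-teaser')]
title = [x.text.strip() for x in soupCP.find_all('title')]

# Join each list into one string so the page becomes exactly one row.
df = pd.DataFrame(
    {'teaser': ' '.join(teaser),
     'title': ' '.join(title),
     'content': ' '.join(content)}, index=[0])
print(df['content'][0])   # all three paragraphs, not just the last one

If you keep the three lists unjoined, as in the answer's snippet, their lengths have to agree with each other and with the index you pass, so joining (or picking a single element) is usually the safer choice for a one-row-per-page layout.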
A quick explanation of the list comprehension: the line
content = [x.text.strip() for x in soupCP.find_all('p')]
is equivalent to:
content = []
for x in soupCP.find_all('p'):
    content.append(x.text.strip())
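To see the same pattern outside of BeautifulSoup, here is a toy example on plain strings (the data is made up, purely for illustration):

words = ['  alpha ', ' beta', 'gamma  ']
stripped = [w.strip() for w in words]
print(stripped)   # ['alpha', 'beta', 'gamma']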