Python - Scraping a paginated site and writing the results to a file

Date: 2016-12-14 13:06:59

Tags: python ajax web-scraping beautifulsoup

I'm a complete programming beginner, so please forgive me if I can't phrase my question well. I'm trying to write a script that goes through a series of news pages and records the article titles and their links. I've managed to get it working for the first page; the problem is getting the content of the subsequent pages. Searching Stack Overflow, I think I found a solution that lets the script visit more than one URL, but it seems to overwrite the content extracted from each page it visits, so I always end up with the same number of articles recorded in the file. Something that might help: I know the URLs follow this model: "/ultimas/?page=1", "/ultimas/?page=2", and so on, and the site seems to use AJAX to request new articles.
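
To make the URL pattern concrete, these are the addresses I am trying to loop over (just an illustrative sketch, not my actual scraping code — page_urls is only a throwaway name here):

base_url = "http://agenciabrasil.ebc.com.br"
# the listing pages follow the pattern /ultimas/?page=1, /ultimas/?page=2, ...
page_urls = [base_url + "/ultimas/?page=%d" % page for page in range(1, 4)]
print page_urls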

Here is my code:

import csv
import requests
from bs4 import BeautifulSoup as Soup
import urllib
r = base_url = "http://agenciabrasil.ebc.com.br/"
program_url = base_url + "/ultimas/?page="

for page in range(1, 4):
    url =  "%s%d" % (program_url, page)
    soup = Soup(urllib.urlopen(url))



letters = soup.find_all("div", class_="titulo-noticia")

letters[0]

lobbying = {}
for element in letters:
    lobbying[element.a.get_text()] = {}

letters[0].a["href"]
prefix = "http://agenciabrasil.ebc.com.br"

for element in letters:
    lobbying[element.a.get_text()]["link"] = prefix + element.a["href"]



for item in lobbying.keys():
    print item + ": " + "\n\t" + "link: " + lobbying[item]["link"] + "\n\t"

import os, csv
os.chdir("...")

with open("lobbying.csv", "w") as toWrite:
    writer = csv.writer(toWrite, delimiter=",")
    writer.writerow(["name", "link",])
    for a in lobbying.keys():
        writer.writerow([a.encode("utf-8"), lobbying[a]["link"]])

        import json

with open("lobbying.json", "w") as writeJSON:
    json.dump(lobbying, writeJSON)

print "Fim"

Any help on how to add the content of each page to the final file would be greatly appreciated. Thanks!

2 answers:

Answer 0 (score: 1):

How about this, if it serves the same purpose:

import csv, requests
from lxml import html

base_url = "http://agenciabrasil.ebc.com.br"
program_url = base_url + "/ultimas/?page={0}"
outfile = open('scraped_data.csv', 'w', newline='')
writer = csv.writer(outfile)
writer.writerow(["Caption", "Link"])
# Fetch pages 1 to 3 and write one row per article (caption + absolute link)
for url in [program_url.format(page) for page in range(1, 4)]:
    response = requests.get(url)
    tree = html.fromstring(response.text)
    for title in tree.xpath("//div[@class='noticia']"):
        caption = title.xpath('.//span[@class="field-content"]/a/text()')[0]
        policy = title.xpath('.//span[@class="field-content"]/a/@href')[0]
        writer.writerow([caption, base_url + policy])
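
One caveat with the snippet above: outfile is never explicitly closed; in practice CPython flushes it when the script exits, but it is easy to lose rows if the script is interrupted. A minimal variant (same URLs and XPath selectors as above) that lets a with block close the file for you:

with open('scraped_data.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["Caption", "Link"])
    for url in [program_url.format(page) for page in range(1, 4)]:
        tree = html.fromstring(requests.get(url).text)
        for title in tree.xpath("//div[@class='noticia']"):
            caption = title.xpath('.//span[@class="field-content"]/a/text()')[0]
            href = title.xpath('.//span[@class="field-content"]/a/@href')[0]
            writer.writerow([caption, base_url + href])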

Answer 1 (score: 0):

Because your file isn't indented correctly, it looks like the code inside your for loop (for page in range(1, 4):) is never actually run:

If you tidy up your code (and create lobbying once, before the loop, so each page adds to it instead of replacing it), it works:

import csv, requests, os, json, urllib
from bs4 import BeautifulSoup as Soup

base_url = "http://agenciabrasil.ebc.com.br/"
program_url = base_url + "/ultimas/?page="
prefix = "http://agenciabrasil.ebc.com.br"

# One dict shared by all pages, so each new page adds to the results
# instead of overwriting the previous ones
lobbying = {}

for page in range(1, 4):
    url = "%s%d" % (program_url, page)
    soup = Soup(urllib.urlopen(url))

    # Every article title on the listing page sits in a div.titulo-noticia
    letters = soup.find_all("div", class_="titulo-noticia")

    for element in letters:
        lobbying[element.a.get_text()] = {}

    for element in letters:
        lobbying[element.a.get_text()]["link"] = prefix + element.a["href"]

    for item in lobbying.keys():
        print item + ": " + "\n\t" + "link: " + lobbying[item]["link"] + "\n\t"

#os.chdir("...")

with open("lobbying.csv", "w") as toWrite:
    writer = csv.writer(toWrite, delimiter=",")
    writer.writerow(["name", "link"])
    for a in lobbying.keys():
        writer.writerow([a.encode("utf-8"), lobbying[a]["link"]])

with open("lobbying.json", "w") as writeJSON:
    json.dump(lobbying, writeJSON)

print "Fim"