Python (BeautifulSoup) - output to text file does not show all results

Date: 2017-02-17 15:31:52

Tags: python python-3.x web-scraping beautifulsoup

I have created a food hygiene scraper that displays results based on a postcode entered by the user. With the code I have posted below, everything runs perfectly and the results are output correctly to the console.

I would like to output the results to a text file.

My code is:

import requests
import time
import sys
from bs4 import BeautifulSoup


class RestaurantScraper(object):

    def __init__(self, pc):
        self.pc = pc                            # the input postcode
        self.max_page = self.find_max_page()    # the number of pages available
        self.restaurants = list()               # the final list of restaurants holding the scraped data at the end of the process

    def run(self):
        for url in self.generate_pages_to_scrape():
            restaurants_from_url = self.scrape_page(url)
            self.restaurants += restaurants_from_url    # add this page's restaurants to the global restaurants list

    def create_url(self):
        """
        Create a core url to scrape
        :return: A url without pagination (= page 1)
        """
        return "https://www.scoresonthedoors.org.uk/search.php?name=&address=&postcode=" + self.pc + \
               "&distance=1&search.x=8&search.y=6&gbt_id=0&award_score=&award_range=gt"

    def create_paginated_url(self, page_number):
        """
        Create a paginated url
        :param page_number: pagination (integer)
        :return: A paginated url
        """
        return self.create_url() + "&page={}".format(str(page_number))

    def find_max_page(self):
        """
        Function to find the number of pages for a specific search.
        :return: The number of pages (integer)
        """
        time.sleep(5)
        r = requests.get(self.create_url())
        soup = BeautifulSoup(r.content, "lxml")
        pagination_soup = soup.findAll("div", {"id": "paginator"})
        pagination = pagination_soup[0]
        page_text = pagination("p")[0].text
        return int(page_text.replace('Page 1 of ', ''))

    def generate_pages_to_scrape(self):
        """
        Generate all the paginated urls using the max_page attribute previously scraped.
        :return: List of urls
        """
        return [self.create_paginated_url(page_number) for page_number in range(1, self.max_page + 1)]

    def scrape_page(self, url):
        """
        This is coming from your original code snippet. This probably needs a bit of work, but you get the idea.
        :param url: Url to scrape and get data from.
        :return: List of restaurant names found on the page
        """
        time.sleep(5)
        r = requests.get(url)
        soup = BeautifulSoup(r.content, "lxml")
        g_data = soup.findAll("div", {"class": "search-result"})
        ratings = soup.select('div.rating-image img[alt]')
        restaurants = list()
        for item in g_data:
            name = item.find_all("a", {"class": "name"})[0].text    # capture the text itself; print() returns None
            print(name)
            restaurants.append(name)
        try:
            print(item.find_all("span", {"class": "address"})[0].text)
        except:
            pass
        try:
            for rating in ratings:
                bleh = rating['alt']
                print(bleh)    # was print(bleh)[0].text, which subscripts print()'s None return value
        except:
            pass
        return restaurants

if __name__ == '__main__':
    pc = input('Give your post code')
    sys.stdout = open("test.txt", "w")    # redirect every print() below to the file
    scraper = RestaurantScraper(pc)
    scraper.run()
    print("{} restaurants scraped".format(str(len(scraper.restaurants))))

The stdout line does output to the text file, but the problem is that only half of the results are printed to it.

Could it be that I have the stdout line in the wrong place, causing it to stop halfway through? Without the stdout line, all of the results display correctly in the console.

Many thanks to anyone who reads this and/or offers help in solving my problem.

2 answers:

Answer 0: (score: 1)

I loaded up your code and ran it. It ran just fine. I am not sure what your problem is, but it may simply be that you are trying to view the file while it is still being written. When I looked at the file while the program was running, it was blank. When I looked at it after the program had finished or been interrupted, it was populated.

This is because of output buffering; see this answer to a Stack Overflow question explaining std.out and stdbuf in detail.
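For instance, here is a minimal sketch of the question's main block (reusing the RestaurantScraper class from above) that restores stdout and closes the file at the end, so the buffered output is flushed before the program exits:

if __name__ == '__main__':
    pc = input('Give your post code')
    out = open("test.txt", "w")
    sys.stdout = out                   # every print() from here on goes to the file
    scraper = RestaurantScraper(pc)
    scraper.run()
    print("{} restaurants scraped".format(len(scraper.restaurants)))
    sys.stdout = sys.__stdout__        # restore the console
    out.close()                        # closing flushes the buffer to disk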

If you need printing to the file to happen immediately, consider using the following:

file_name = open('your_file.whatever', 'w')

# Then, wherever you use print, replace with:
file_name.write('stuff to write')

# Then, when you are done writing things:
file_name.close()
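Note that write, unlike print, does not append a trailing newline, so add '\n' yourself if you want one entry per line.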

A similar write-based alternative, if you do not want to keep the file open and do not want to overwrite previous writes, is to open the file in append mode:

file_name = open('your_file.whatever', 'a')
file_name.write('stuff you need to add')
file_name.close()

A useful reference: explanation of std.out vs print.
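Relatedly, print itself accepts a file argument, which routes individual calls to a file without reassigning sys.stdout at all. A small sketch, assuming a file object you have opened yourself:

out = open('your_file.whatever', 'w')
print('stuff to write', file=out)    # only this print goes to the file; console output elsewhere is unaffected
out.close()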

Answer 1: (score: 0)

Perhaps an exception is being raised in the try-except statements in scrape_page (i.e., a rating without an 'alt' key), which would exit the entire for loop. Try debugging by adding a print statement in the except blocks.
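A minimal sketch of that debugging change inside scrape_page (the message wording is just illustrative):

try:
    for rating in ratings:
        bleh = rating['alt']
        print(bleh)
except Exception as e:
    print('rating lookup failed: {}'.format(e))    # surface the real error instead of silently passing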