Recovering disk space consumed by web scraping in Python

Time: 2013-11-13 04:29:11

Tags: python python-2.7 selenium web-scraping web-crawler

I am using the following Python code to scrape a news website and collect news articles:

import mechanize
import re
import time
from selenium import webdriver
from bs4 import BeautifulSoup


url = "http://www.thehindu.com/archive/web/2013/07/01/"

link_dictionary = {}
driver = webdriver.Firefox()  # starts Firefox with a fresh temporary profile
driver.get(url)
time.sleep(10)  # crude wait for the JavaScript-rendered archive page to load
soup = BeautifulSoup(driver.page_source)

# Collect the Op-Ed links from the archive page and fetch each article.
for tag_li in soup.findAll('li', attrs={"data-section": "Op-Ed"}):
    for link in tag_li.findAll('a'):
        link_dictionary[link.string] = link.get('href')
        urlnew = link_dictionary[link.string]
        brnew = mechanize.Browser()  # a new Browser instance for every link
        htmltextnew = brnew.open(urlnew).read()
        articletext = ""
        soupnew = BeautifulSoup(htmltextnew)
        for tag in soupnew.findAll('p'):
            articletext += tag.text
        print "opinion " + re.sub('\s+', ' ', articletext, flags=re.M)
driver.close()  # closes the window only; driver.quit() would end the session

The above code works fine for a single day's archive. But after I had run it daily for a month or two, it had consumed around 3 GB of space on my C:\ drive (I am using Windows 7).

I have no idea how or why so much space is being consumed. Can someone explain this behavior and help me recover the lost disk space? I am new to Python programming.

2 answers:

Answer 0 (score: 3)

Do a disk cleanup. That alone should let you recover around 3-4 GB. To free up even more disk space, you may also need to delete some application data. A sketch of one way to automate that cleanup follows.
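A likely source of the consumed space (my assumption, not stated in the answer) is the temporary Firefox profile that webdriver.Firefox() copies into the system temp directory on every run; driver.close() only closes the window, so those profiles can pile up across daily runs. Below is a minimal cleanup sketch under that assumption; the profile directory name patterns are guesses and may differ between Selenium versions:

# Cleanup sketch. Assumption: the lost space is leftover Selenium temporary
# Firefox profiles in the temp directory; "webdriver" in the directory name
# (or in a child entry such as webdriver-py-profilecopy) is a guessed heuristic.
import glob
import os
import shutil
import tempfile

tmpdir = tempfile.gettempdir()
for path in glob.glob(os.path.join(tmpdir, '*')):
    if not os.path.isdir(path):
        continue
    looks_like_profile = ('webdriver' in os.path.basename(path) or
                          glob.glob(os.path.join(path, '*webdriver*')))
    if looks_like_profile:
        shutil.rmtree(path, ignore_errors=True)
        print "removed " + path

Going forward, calling driver.quit() instead of driver.close() ends the whole session and lets Selenium delete its temporary profile itself.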

Answer 1 (score: 2)

link_dictionary = {} will keep growing.

You never read from it, so it does not appear to be needed at all.

Try this:

import mechanize
import re
import time
from selenium import webdriver
from bs4 import BeautifulSoup


url = "http://www.thehindu.com/archive/web/2013/07/01/"

driver = webdriver.Firefox()
driver.get(url)
time.sleep(10)
soup = BeautifulSoup(driver.page_source)

for tag_li in soup.findAll('li', attrs={"data-section":"Op-Ed"}):
    for link in tag_li.findAll('a'):
        urlnew = link.get('href')  # use the href directly; no dictionary needed
        brnew =  mechanize.Browser()
        htmltextnew = brnew.open(urlnew).read()            
        articletext = ""
        soupnew = BeautifulSoup(htmltextnew)
        for tag in soupnew.findAll('p'):
            articletext += tag.text
        print "opinion " + re.sub('\s+', ' ', articletext, flags=re.M)
driver.close()
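
Two further tweaks, offered as suggestions rather than as part of the answer above: reuse a single mechanize.Browser instead of constructing one per link, and end the Selenium session with driver.quit() so its temporary profile gets cleaned up. A sketch of the loop with both changes, reusing soup and driver from the snippet above:

# Suggested refinement (not from the original answer): share one Browser and
# quit the driver so Selenium removes its temporary Firefox profile.
brnew = mechanize.Browser()

for tag_li in soup.findAll('li', attrs={"data-section": "Op-Ed"}):
    for link in tag_li.findAll('a'):
        htmltextnew = brnew.open(link.get('href')).read()
        soupnew = BeautifulSoup(htmltextnew)
        articletext = " ".join(tag.text for tag in soupnew.findAll('p'))
        print "opinion " + re.sub('\s+', ' ', articletext, flags=re.M)

driver.quit()  # quit() ends the whole session, not just the current window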