I'm trying to pull some links from craigslist with BeautifulSoup, but it pulls each link 100 times instead of once

Asked: 2014-08-10 20:42:15

Tags: python python-2.7 web-scraping beautifulsoup

So I'm trying to pull the links for the latest TV listings from craigslist. I can tell I'm getting the information I want, but for some reason it pulls that information 100 times before moving on to the next link. I'm not sure why it does this?

import urllib2
from bs4 import BeautifulSoup
import re
import time
import csv
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
# id url
url = ('http://omaha.craigslist.org/sya/')
# this opens the url
ourUrl = opener.open(url).read()
# now we are passing the url to beautiful soup
soup = BeautifulSoup(ourUrl)

for link in soup.findAll('a', attrs={'class': re.compile("hdrlnk")}):
    find = re.compile('/sys/(.*?)"')
    #time.sleep(1)
    timeset = time.strftime("%m-%d %H:%M") # current date and time
    for linka in soup.findAll('a', attrs={'href': re.compile("^/sys/")}):
        find = re.compile('/sys/(.*?)"')
        searchTv = re.search(find, str(link))
        Tv = searchTv.group(1)
        opener = urllib2.build_opener()
        opener.addheaders = [('User-agent', 'Mozilla/5.0')]
        url = ('http://omaha.craigslist.org/sys/' + Tv)
        ourUrl = opener.open(url).read()
        soup = BeautifulSoup(ourUrl)
        print "http://omaha.craigslist.org/sys/" + Tv
        try:
            outfile = open('C:/Python27/Folder/Folder/Folder/craigstvs.txt', 'a')
            outfile.write(timeset + "; " + link.text + "; " + "http://omaha.craigslist.org/sys/" + Tv + '\n')
            timeset = time.strftime("%m-%d %H:%M") # current date and time
        except:
            print "No go--->" + str(link.text)

Here is an example of what it outputs: 08-10 15:19; MAC mini intel core wifi dvdrw great cond; http://omaha.craigslist.org/sys/4612480593.html. That is exactly what I want it to do, except it pulls the information 100+ times before moving on to the next listing... I'm stuck and can't figure it out. Any help would be appreciated, thanks in advance!

Edit, following @alexce's answer:

import scrapy
import csv
from tutorial.items import DmozItem
import re
import urllib2
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
import html2text

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["omaha.craigslist.org"]
    start_urls = [
        "http://omaha.craigslist.org/sya/",

    ]

    def parse(self, response):
        for sel in response.xpath('//html'):
            #title = sel.xpath('a/text()').extract()
            link = sel.xpath('/html/body/article/section/div/div[2]/p/span/span[2]/a').extract()[0:4]
            #at this point it doesn't repeat itself, which is good!
            #desc = sel.xpath('text()').extract()
        print link

1 Answer:

Answer 0 (score: 1)

You don't need a nested loop here; a short sketch after the list below illustrates how the nesting multiplies the output. Other notes and improvements:

  • the opener.open() result can be passed directly to the BeautifulSoup constructor; there is no need to call read()
  • the url opener can be defined once and reused inside the loop to follow links
  • use find_all() instead of findAll()
  • use urljoin() to join the URL parts
  • use the csv module for writing delimited data
  • use the with context manager when working with files
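
For illustration, here is a minimal sketch of why the nested findAll() loops multiply the output. The HTML is made up (only three fake listing links); the point is that every outer hdrlnk link gets written once per link found by the inner loop, so a page with roughly 100 listings repeats each line roughly 100 times:

from bs4 import BeautifulSoup

# Three fake listing links standing in for a craigslist results page.
html = """
<a class="hdrlnk" href="/sys/1.html">tv one</a>
<a class="hdrlnk" href="/sys/2.html">tv two</a>
<a class="hdrlnk" href="/sys/3.html">tv three</a>
"""
soup = BeautifulSoup(html)

lines_written = 0
for link in soup.find_all('a', class_='hdrlnk'):      # 3 iterations
    for linka in soup.find_all('a', href=True):       # 3 iterations for each outer one
        lines_written += 1                             # the original script wrote a line here

print lines_written  # 9 -- with ~100 links per page this becomes ~10000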

The complete fixed version:

import csv
import re
import time
import urllib2
from urlparse import urljoin
from bs4 import BeautifulSoup

BASE_URL = 'http://omaha.craigslist.org/sys/'
URL = 'http://omaha.craigslist.org/sya/'
FILENAME = 'C:/Python27/Folder/Folder/Folder/craigstvs.txt'

opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
soup = BeautifulSoup(opener.open(URL))

with open(FILENAME, 'a') as f:
    writer = csv.writer(f, delimiter=';')
    for link in soup.find_all('a', class_=re.compile("hdrlnk")):
        timeset = time.strftime("%m-%d %H:%M")

        item_url = urljoin(BASE_URL, link['href'])
        item_soup = BeautifulSoup(opener.open(item_url))

        # do smth with the item_soup? or why did you need to follow this link?

        writer.writerow([timeset, link.text, item_url])

Here is what the code produces:

08-10 16:56;Dell Inspiron-15 Laptop;http://omaha.craigslist.org/sys/4612666460.html
08-10 16:56;computer????;http://omaha.craigslist.org/sys/4612637389.html
08-10 16:56;macbook 13 inch 160 gig wifi dvdrw ;http://omaha.craigslist.org/sys/4612480237.html
08-10 16:56;MAC mini intel core wifi dvdrw great cond ;http://omaha.craigslist.org/sys/4612480593.html
...

Just a side note: since you need to follow links, extract the data, and write it out to a csv file, this sounds like a very good fit for Scrapy. It has Rules and Link Extractors, and it can serialize crawled items to csv out of the box.
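
As a rough illustration of that suggestion, here is a minimal CrawlSpider sketch. It is untested against the live site; the import paths assume a reasonably recent Scrapy release, and the allow pattern, spider name, and field names are only illustrative:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class CraigslistTvSpider(CrawlSpider):
    name = 'craigslist_tv'
    allowed_domains = ['omaha.craigslist.org']
    start_urls = ['http://omaha.craigslist.org/sya/']

    # Follow every /sys/<id>.html listing link and pass each response to parse_item()
    rules = (
        Rule(LinkExtractor(allow=r'/sys/\d+\.html'), callback='parse_item'),
    )

    def parse_item(self, response):
        # Yield a plain dict; running the spider with "-o items.csv"
        # serializes the items straight to csv without extra code.
        yield {
            'url': response.url,
            'title': response.xpath('//title/text()').extract_first(),
        }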