Python scraper only sends the last URL in the range to the CSV writer

Asked: 2017-05-11 04:20:48

Tags: python csv beautifulsoup

So my scraper only sends the last URL in the range back to the CSV writer. I can't figure out where I've gone wrong. Hoping a fresh pair of eyes can help.

Code below:

import requests
from bs4 import BeautifulSoup
import csv

urls = ["https://www.realestate.com.au/property/1-125-mansfield-st-berwick-vic-3806",
"https://www.realestate.com.au/property/1-13-park-ave-mosman-nsw-2088",
"https://www.realestate.com.au/property/1-17-sarton-rd-clayton-vic-3168",
"https://www.realestate.com.au/property/1-2-bridge-st-northcote-vic-3070",
"https://www.realestate.com.au/property/1-2-marara-rd-caulfield-south-vic-3162",]
results = {}
for url in urls:
    resp = requests.get(url)
    if resp.status_code != 200:
        print('Failed in url {}'.format(url))
        continue
    soup = BeautifulSoup(resp.text, 'html')
    link = soup.find(name='a', attrs={'class': lambda x: x and 'property-value__btn-listing' in x}) # find just takes the first one, so no repeated links
    href = link.get('href')
    results[url] = href
print href

1 Answer:

Answer 0 (score: 2)

Indent print href so it sits inside the loop. As written, it runs only once after the loop finishes, so it prints just the last value of href instead of one value per URL. Hope this helps!
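As a minimal sketch (Python 3 syntax, so print is a function), here is the same script with the print moved inside the loop and the collected results written out with csv.writer, since the original code imports csv but never writes the file. The output filename links.csv and the column headers are assumptions, not from the original post:

    import csv

    import requests
    from bs4 import BeautifulSoup

    urls = [
        "https://www.realestate.com.au/property/1-125-mansfield-st-berwick-vic-3806",
        "https://www.realestate.com.au/property/1-13-park-ave-mosman-nsw-2088",
        "https://www.realestate.com.au/property/1-17-sarton-rd-clayton-vic-3168",
        "https://www.realestate.com.au/property/1-2-bridge-st-northcote-vic-3070",
        "https://www.realestate.com.au/property/1-2-marara-rd-caulfield-south-vic-3162",
    ]

    results = {}
    for url in urls:
        resp = requests.get(url)
        if resp.status_code != 200:
            print('Failed in url {}'.format(url))
            continue
        soup = BeautifulSoup(resp.text, 'html.parser')
        # find() returns only the first matching tag, so there are no repeated links
        link = soup.find(name='a', attrs={'class': lambda x: x and 'property-value__btn-listing' in x})
        href = link.get('href')
        results[url] = href
        print(href)  # indented: runs once per URL instead of once after the loop

    # write every (url, href) pair, not just the last one ("links.csv" is a hypothetical filename)
    with open('links.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['url', 'href'])
        for url, href in results.items():
            writer.writerow([url, href])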