I'm using Python 3.5 and trying to scrape a list of URLs (all from the same website) with the following code:
import urllib.request
from bs4 import BeautifulSoup

url_list = ['URL1', 'URL2', 'URL3']

def soup():
    for url in url_list:
        sauce = urllib.request.urlopen(url)
        for things in sauce:
            soup_maker = BeautifulSoup(things, 'html.parser')
            return soup_maker
# Scraping
def getPropNames():
    for propName in soup.findAll('div', class_="property-cta"):
        for h1 in propName.findAll('h1'):
            print(h1.text)

def getPrice():
    for price in soup.findAll('p', class_="room-price"):
        print(price.text)

def getRoom():
    for theRoom in soup.findAll('div', class_="featured-item-inner"):
        for h5 in theRoom.findAll('h5'):
            print(h5.text)

for soups in soup():
    getPropNames()
    getPrice()
    getRoom()
So far, printing the soup, getPropNames, getPrice, and getRoom individually all seem to work. But I can't seem to loop through each URL and print getPropNames, getPrice, and getRoom for every one of them.
I've only been learning Python for a few months, so I'd really appreciate some help!
Answer 0 (score: 0)
Think about what this code does:
def soup():
    for url in url_list:
        sauce = urllib.request.urlopen(url)
        for things in sauce:
            soup_maker = BeautifulSoup(things, 'html.parser')
            return soup_maker
Let me give an example:
def soup2():
    for url in url_list:
        print(url)
        for thing in ['a', 'b', 'c']:
            print(url, thing)
            maker = 2 * thing
            return maker
With url_list = ['one', 'two', 'three'], the output is:
one
one a
Do you see now what is going on? Basically, your soup function returns at the first return statement — it does not return an iterator or a list, only the very first BeautifulSoup object — and you were lucky (or unlucky) that this object happens to be iterable :)
So change the code:
def soup3():
    soups = []
    for url in url_list:
        print(url)
        for thing in ['a', 'b', 'c']:
            print(url, thing)
            maker = 2 * thing
            soups.append(maker)
    return soups
Then the output is:
one
one a
one b
one c
two
two a
two b
two c
three
three a
three b
three c
But I believe this still won't work either :) Take a close look at what sauce = urllib.request.urlopen(url) actually returns, and at what your code iterates over in for things in sauce — that is, at what things actually is.
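To see what that iteration produces, here is a minimal stdlib-only sketch: urlopen() returns a file-like HTTPResponse object, and iterating over such an object yields one line of raw bytes at a time. io.BytesIO stands in for the response below so no network access is needed:

```python
import io

# Stand-in for the file-like HTTPResponse that urlopen() returns;
# iterating such an object yields one *line of bytes* at a time.
fake_response = io.BytesIO(b"<html>\n<body><h1>Hi</h1></body>\n</html>\n")

lines = list(fake_response)
print(lines)
# [b'<html>\n', b'<body><h1>Hi</h1></body>\n', b'</html>\n']
```

So BeautifulSoup(things, 'html.parser') parses one line fragment at a time, not the whole page; parsing the full body once per URL, e.g. BeautifulSoup(sauce.read(), 'html.parser'), is the usual fix.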
Happy coding.
Answer 1 (score: 0)
Every get* function uses a global variable soup, which is never set properly anywhere. And even if it were, relying on a global would not be a good approach. Make soup a function parameter instead, e.g.:
def getRoom(soup):
    for theRoom in soup.findAll('div', class_="featured-item-inner"):
        for h5 in theRoom.findAll('h5'):
            print(h5.text)

for soup in soups():
    getPropNames(soup)
    getPrice(soup)
    getRoom(soup)
Secondly, you should yield from soup() instead of return, turning it into a generator. Otherwise you would need to return a list of BeautifulSoup objects:
def soups():
    for url in url_list:
        sauce = urllib.request.urlopen(url)
        for things in sauce:
            soup_maker = BeautifulSoup(things, 'html.parser')
            yield soup_maker
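The return-vs-yield difference can be sketched with a toy, self-contained example (made-up names, no network):

```python
def first_only(items):
    for item in items:
        return item * 2      # exits the function on the very first element

def all_of_them(items):
    for item in items:
        yield item * 2       # generator: suspends here, resumes on the next iteration

print(first_only(['a', 'b', 'c']))         # aa
print(list(all_of_them(['a', 'b', 'c'])))  # ['aa', 'bb', 'cc']
```

Looping with for soup in soups(): then consumes every yielded object, which is exactly what the original return-based version could not do.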
I would also suggest using XPath or CSS selectors to extract HTML elements: https://stackoverflow.com/a/11466033/2997179.
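For instance, a sketch using BeautifulSoup's select() with CSS selectors (the HTML snippet below is made up for illustration, reusing the class names from the question):

```python
from bs4 import BeautifulSoup

# Made-up fragment mimicking the page structure implied by the question.
html = """
<div class="property-cta"><h1>Nice Flat</h1></div>
<p class="room-price">£500</p>
<div class="featured-item-inner"><h5>Double room</h5></div>
"""
soup = BeautifulSoup(html, 'html.parser')

# One CSS selector replaces each pair of nested findAll() loops.
names  = [h1.text for h1 in soup.select('div.property-cta h1')]
prices = [p.text for p in soup.select('p.room-price')]
rooms  = [h5.text for h5 in soup.select('div.featured-item-inner h5')]

print(names, prices, rooms)  # ['Nice Flat'] ['£500'] ['Double room']
```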