I'm a beginner building a web scraper that will grab (and eventually export to CSV) the addresses, postcodes and phone numbers of all UK McDonald's locations. I'm using an aggregator site rather than the McDonald's website:
https://www.localstore.co.uk/stores/75639/mcdonalds-restaurant/
I borrowed and adapted some code:
from bs4 import BeautifulSoup
from urllib2 import urlopen
BASE_URL = "https://www.localstore.co.uk/stores/75639/mcdonalds-restaurant/"
def get_category_links(section_url):
    html = urlopen(section_url).read()
    soup = BeautifulSoup(html, "lxml")
    boccat = soup.find("tr")
    category_links = [BASE_URL + tr.a["href"] for tr in boccat.findAll("h2")]
    return category_links
def get_restaurant_details(category_url):
    html = urlopen(category_url).read()
    soup = BeautifulSoup(html, "lxml")
    streetAddress = soup.find("span", "streetAddress").string
    addressLocality = [h2.string for h2 in soup.findAll("span", "addressLocality")]
    addressRegion = [h2.string for h2 in soup.findAll("span", "addressRegion")]
    postalCode = [h2.string for h2 in soup.findAll("span", "postalCode")]
    phoneNumber = [h2.string for h2 in soup.findAll("td", "b")]
    return {"streetAddress": streetAddress,
            "addressLocality": addressLocality,
            "postalCode": postalCode,
            "addressRegion": addressRegion,
            "phoneNumber": phoneNumber}
I don't think I'm actually capturing the data. For example, when I run a line like:
print(postalCode)
or
print(addressLocality)
I get the following error:
NameError: name 'postalCode' is not defined
Any ideas what I'm doing wrong?
Answer (score 3):
As others have commented, you need to actually call your functions first.
Do something like this:
if __name__ == '__main__':
    res = "https://www.localstore.co.uk/store/329213/mcdonalds-restaurant/london/"
    print(get_restaurant_details(res)["postalCode"])
after your two functions. I just visited the site and grabbed a URL that should work with your program, though I haven't actually tested it. The main problem you're facing is that your script never actually does anything: the variables like postalCode only exist inside get_restaurant_details, so printing them at the top level raises a NameError. You need to call the function and use its return value.
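Since your end goal is a CSV, here is a minimal sketch of the full flow: crawl every link returned by get_category_links, pull the details for each, and write rows to a file. It reuses your two functions and BASE_URL unchanged and assumes they return usable data for the live pages; the output filename, column order, and the flatten helper are arbitrary illustrative choices, not tested against the site.

import csv

def flatten(value):
    # get_restaurant_details returns lists for most fields; join them
    # into a single string so each value fits in one CSV cell.
    if isinstance(value, list):
        return ", ".join(v for v in value if v)
    return value or ""

if __name__ == '__main__':
    fields = ["streetAddress", "addressLocality", "addressRegion",
              "postalCode", "phoneNumber"]
    with open("mcdonalds_uk.csv", "wb") as f:  # use "w", newline="" on Python 3
        writer = csv.writer(f)
        writer.writerow(fields)
        for link in get_category_links(BASE_URL):
            details = get_restaurant_details(link)
            writer.writerow([flatten(details[k]) for k in fields])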