So I built a web scraper with BeautifulSoup to grab every ad on a Craigslist page. Here is what I have so far:
import requests
from bs4 import BeautifulSoup, SoupStrainer
import bs4
page = "http://miami.craigslist.org/search/roo?query=brickell"
search_html = requests.get(page).text
roomSoup = BeautifulSoup(search_html, "html.parser")
ad_list = roomSoup.find_all("a", {"class":"hdrlnk"})
#print ad_list
ad_ls = [item["href"] for item in ad_list]
#print ad_ls
ad_urls = ["miami.craigslist.org" + ad for ad in ad_ls]
#print ad_urls
url_str = [str(unicode) for unicode in ad_urls]
# What's in url_str?
for url in url_str:
    print url
When I run this, I get:
miami.craigslist.org/mdc/roo/4870912192.html
miami.craigslist.org/mdc/roo/4858122981.html
miami.craigslist.org/mdc/roo/4870665175.html
miami.craigslist.org/mdc/roo/4857247075.html
miami.craigslist.org/mdc/roo/4870540048.html
...
This is exactly what I want: a list of the URL of every ad on the page.
My next step is to extract some content from each of those pages, which means building another BeautifulSoup object for each one. But this is where I get stuck:
for url in url_str:
    ad_html = requests.get(str(url)).text
Here, finally, is my question: what exactly is this error? The only part I can make any sense of is the last two lines:
Traceback (most recent call last):
  File "webscraping.py", line 24, in <module>
    ad_html = requests.get(str(url)).text
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/api.py", line 65, in get
    return request('get', url, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/api.py", line 49, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/sessions.py", line 447, in request
    prep = self.prepare_request(req)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/sessions.py", line 378, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/models.py", line 303, in prepare
    self.prepare_url(url, params)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/models.py", line 360, in prepare_url
    "Perhaps you meant http://{0}?".format(url))
requests.exceptions.MissingSchema: Invalid URL u'miami.craigslist.org/mdc/roo/4870912192.html': No schema supplied.
Perhaps you meant http://miami.craigslist.org/mdc/roo/4870912192.html?
It looks to me like the problem is that all of my links start with u', so requests.get() won't accept them. That's why you can see me trying to force every URL into a regular string with str(). But no matter what I do, I keep getting this error. Is there something else I'm missing? Am I completely misunderstanding my problem?
Many thanks in advance!
Answer 0 (score: 1)
It looks like you misread the error message:
u'miami.craigslist.org/mdc/roo/4870912192.html': No schema supplied.
Perhaps you meant http://miami.craigslist.org/mdc/roo/4870912192.html?
It means that http:// (the schema) is missing in front of the URL.
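As a quick illustration (not from the original post, and assuming Python 2 like the traceback above), requests raises this exception for any URL without a schema, and wrapping the URL in str() does not help, because str() only changes the string type, not its contents:

import requests

url = u"miami.craigslist.org/mdc/roo/4870912192.html"
try:
    # str() converts unicode to bytes, but the URL still has no "http://"
    requests.get(str(url))
except requests.exceptions.MissingSchema as e:
    print e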
So replacing
ad_urls = ["miami.craigslist.org" + ad for ad in ad_ls]
with
ad_urls = ["http://miami.craigslist.org" + ad for ad in ad_ls]
should do the job.
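Putting it all together, here is a minimal sketch of the corrected loop (untested against the live site; the title extraction at the end is just a placeholder for whatever you actually want to pull out of each ad):

import requests
from bs4 import BeautifulSoup

page = "http://miami.craigslist.org/search/roo?query=brickell"
search_html = requests.get(page).text
roomSoup = BeautifulSoup(search_html, "html.parser")

# The hrefs are site-relative ("/mdc/roo/..."), so prepend schema + host
ad_list = roomSoup.find_all("a", {"class": "hdrlnk"})
ad_urls = ["http://miami.craigslist.org" + item["href"] for item in ad_list]

for url in ad_urls:
    ad_html = requests.get(url).text
    ad_soup = BeautifulSoup(ad_html, "html.parser")
    print ad_soup.find("title").get_text()  # placeholder: print each ad's <title>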