I'm trying to use urlparse.urljoin in a Scrapy spider to compile a list of URLs to scrape. At the moment my spider returns nothing, but it also throws no errors, so I'm trying to check whether I'm building the URLs correctly.
My attempt was to test this in IDLE using str.join, like so:
>>> href = ['lphs.asp?id=598&city=london',
'lphs.asp?id=480&city=london',
'lphs.asp?id=1808&city=london',
'lphs.asp?id=1662&city=london',
'lphs.asp?id=502&city=london',]
>>> for x in href:
    base = "http:/www.url-base.com/destination/"
    final_url = str.join(base, x)
    print(final_url)
which returned this single line:
lhttp:/www.url-base.com/destination/phttp:/www.url-base.com/destination/hhttp:/www.url-base.com/destination/shttp:/www.url-base.com/destination/.http:/www.url-base.com/destination/ahttp:/www.url-base.com/destination/shttp:/www.url-base.com/destination/phttp:/www.url-base.com/destination/?http:/www.url-base.com/destination/ihttp:/www.url-base.com/destination/dhttp:/www.url-base.com/destination/=http:/www.url-base.com/destination/5http:/www.url-base.com/destination/9http:/www.url-base.com/destination/8http:/www.url-base.com/destination/&http:/www.url-base.com/destination/chttp:/www.url-base.com/destination/ihttp:/www.url-base.com/destination/thttp:/www.url-base.com/destination/yhttp:/www.url-base.com/destination/=http:/www.url-base.com/destination/lhttp:/www.url-base.com/destination/ohttp:/www.url-base.com/destination/nhttp:/www.url-base.com/destination/dhttp:/www.url-base.com/destination/ohttp:/www.url-base.com/destination/n
I think it's clear from my example that str.join doesn't behave the same way, and if so, that's why my spider isn't following the links! It would be good to have that confirmed, though.
If this isn't the right way to test, how can I test this process?
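For reference, the interleaving above can be reproduced with a minimal sketch of what str.join actually does (the base URL is the one from my test):

```python
# str.join(sep, iterable) is the unbound form of sep.join(iterable).
# A string is itself an iterable of its characters, so str.join(base, x)
# inserts `base` between every pair of characters of `x`.
base = "http:/www.url-base.com/destination/"
path = "lphs.asp?id=598&city=london"

joined = str.join(base, path)
assert joined == base.join(path)
assert joined.startswith("l" + base + "p" + base + "h")

# What was actually wanted is the other way round: the path appended to the base.
assert base + path == "http:/www.url-base.com/destination/lphs.asp?id=598&city=london"
```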
Update
Trying urlparse.urljoin as below:
>>> from urllib.parse import urlparse
>>> for x in href:
    base = "http:/www.url-base.com/destination/"
    final_url = urlparse.urljoin(base, x)
    print(final_url)
This throws AttributeError: 'function' object has no attribute 'urljoin'
Update - the spider function in question
def parse_links(self, response):
    room_links = response.xpath('//form/table/tr/td/table//a[div]/@href').extract()  # insert xpath which contains the href for the rooms
    for link in room_links:
        base_url = "http://www.example.com/followthrough"
        final_url = urlparse.urljoin(base_url, link)
        print(final_url)
        # This is not joining the final_url right
        yield Request(final_url, callback=parse_links)
Update
I just tested again in IDLE:
>>> from urllib.parse import urljoin
>>> from urllib import parse
>>> room_links = ['lphs.asp?id=562&city=london',
'lphs.asp?id=1706&city=london',
'lphs.asp?id=1826&city=london',
'lphs.asp?id=541&city=london',
'lphs.asp?id=1672&city=london',
'lphs.asp?id=509&city=london',
'lphs.asp?id=428&city=london',
'lphs.asp?id=614&city=london',
'lphs.asp?id=336&city=london',
'lphs.asp?id=412&city=london',
'lphs.asp?id=611&city=london',]
>>> for link in room_links:
    base_url = "http:/www.url-base.com/destination/"
    final_url = urlparse.urljoin(base_url, link)
    print(final_url)
which threw this:
Traceback (most recent call last):
File "<pyshell#34>", line 3, in <module>
final_url = urlparse.urljoin(base_url, link)
AttributeError: 'function' object has no attribute 'urljoin'
Answer 0 (score: 1)
You are seeing the output produced by this:
for x in href:
    base = "http:/www.url-base.com/destination/"
    final_url = str.join(base, href)  # <-- 'x' instead of 'href' probably intended here
    print(final_url)
urljoin from the urllib library behaves differently; see the documentation. It is not simple string concatenation.
Edit:
Based on your comment I think you are using Python 3. With that import statement you import the urlparse function, not a module, which is why you get that error. Import the urljoin function and use it directly:
from urllib.parse import urljoin
...
final_url = urljoin(base, x)
or import the parse module and use the function like this:
from urllib import parse
...
final_url = parse.urljoin(base, x)
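Both imports give you the same function; a small sketch showing they agree, plus one caveat worth knowing when the base has no trailing slash (the URLs here are illustrative):

```python
from urllib.parse import urljoin
from urllib import parse

base = "http://www.example.com/followthrough"
link = "lphs.asp?id=598&city=london"

# The two import styles bind the very same function object.
assert urljoin is parse.urljoin

# Caveat: without a trailing slash, the last path segment of the base
# ("followthrough") is replaced during relative resolution.
print(urljoin(base, link))        # http://www.example.com/lphs.asp?id=598&city=london

# Add the slash if the link should be appended below that segment instead.
print(urljoin(base + "/", link))  # http://www.example.com/followthrough/lphs.asp?id=598&city=london
```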