I'm trying to scrape links from this XML page by keyword, but urllib2 is giving me errors in Python 3 that I can't fix...
from bs4 import BeautifulSoup
import requests
import smtplib
import urllib2
from lxml import etree
url = 'https://store.fabspy.com/sitemap_products_1.xml?from=5619742598&to=9172987078'
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
       'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
       'Accept-Encoding': 'none',
       'Accept-Language': 'en-US,en;q=0.8',
       'Connection': 'keep-alive'}
proxies = {'https': '209.212.253.44'}
req = urllib2.Request(url, headers=hdr, proxies=proxies)
try:
    page = urllib2.urlopen(req)
except urllib2.HTTPError as e:
    print(e.fp.read())
content = page.read()
def parse(self, response):
    try:
        print(response.status)
        print('???????????????????????????????????')
        if response.status == 200:
            self.driver.implicitly_wait(5)
            self.driver.get(response.url)
            print(response.url)
            print('!!!!!!!!!!!!!!!!!!!!')
            # DO STUFF
    except httplib.BadStatusLine:
        pass
while True:
    soup = BeautifulSoup(a.context, 'lxml')
    links = soup.find_all('loc')
    for link in links:
        if 'notonesite' and 'winter' in link.text:
            print(link.text)
            jake = link.text
I'm just trying to send the urllib request through a proxy to see whether the link is on the sitemap...
Answer 0 (score: 7)
urllib2 is not available in Python 3. You should use urllib.request and urllib.error instead:
import urllib.request
import urllib.error
...
req = urllib.request.Request(url, headers=hdr)  # doesn't take a proxies argument though...
...
try:
    page = urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    ...
...and so on. But note that urllib.request.Request() does not take a proxies argument. See the documentation for proxy handling.
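For the proxy part, a minimal sketch of how urllib handles it: proxies are configured on an opener via ProxyHandler rather than on the Request object. The proxy address is copied from the question and is only illustrative; substitute your own.

```python
import urllib.request

# Proxy address from the question, shown only for illustration.
proxies = {'https': 'https://209.212.253.44:443'}

# urllib routes proxy configuration through an opener, not Request():
handler = urllib.request.ProxyHandler(proxies)
opener = urllib.request.build_opener(handler)

req = urllib.request.Request(
    'https://store.fabspy.com/sitemap_products_1.xml',
    headers={'User-Agent': 'Mozilla/5.0'})

# page = opener.open(req)      # performs the fetch through the proxy
# content = page.read()
```

Alternatively, urllib.request.install_opener(opener) makes the proxy-aware opener the default, so plain urllib.request.urlopen() calls use it too.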