I wrote a script to find spelling errors in SO questions' titles. I have been using it for about a month and it worked fine.
But now, when I try to run it, I get this:
Traceback (most recent call last):
  File "copyeditor.py", line 32, in <module>
    find_bad_qn(i)
  File "copyeditor.py", line 15, in find_bad_qn
    html = urlopen(url)
  File "/usr/lib/python3.4/urllib/request.py", line 161, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.4/urllib/request.py", line 469, in open
    response = meth(req, response)
  File "/usr/lib/python3.4/urllib/request.py", line 579, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.4/urllib/request.py", line 507, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.4/urllib/request.py", line 441, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.4/urllib/request.py", line 587, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
Here is my code:
import json
from urllib.request import urlopen

from bs4 import BeautifulSoup
from enchant import DictWithPWL
from enchant.checker import SpellChecker

# Personal word list keeps technical terms from being flagged as typos.
my_dict = DictWithPWL("en_US", pwl="terms.dict")
chkr = SpellChecker(lang=my_dict)
result = []

def find_bad_qn(a):
    # Fetch one page of the "active questions" listing and spell-check each title.
    url = "https://stackoverflow.com/questions?page=" + str(a) + "&sort=active"
    html = urlopen(url)
    bsObj = BeautifulSoup(html, "html5lib")
    que = bsObj.find_all("div", class_="question-summary")
    for div in que:
        link = div.a.get('href')
        name = div.a.text
        chkr.set_text(name.lower())
        list1 = []
        for err in chkr:
            list1.append(chkr.word)
        if len(list1) > 1:
            str1 = ' '.join(list1)
            result.append({'link': link, 'name': name, 'words': str1})

print("Please Wait.. it will take some time")
for i in range(298314, 298346):
    find_bad_qn(i)
for qn in result:
    qn['link'] = "https://stackoverflow.com" + qn['link']
for qn in result:
    print(qn['link'], " Error Words:", qn['words'])
    url = qn['link']
UPDATE
This is the URL that causes the problem, even though this URL does exist:
https://stackoverflow.com/questions?page=298314&sort=active
I tried changing the range to some lower values and it works fine now.
Why does this happen with the above URL?
Answer 0 (score: 3)
Apparently the default number of questions displayed per page is 50, so the range you defined in the loop goes past the last page that actually has questions on it. The range should be adjusted to stay within the total number of pages at 50 questions each.
This code will catch the 404 error, which is the reason you were getting the error, and ignore it in case you go out of that range:
from urllib.error import HTTPError
from urllib.request import urlopen

def find_bad_qn(a):
    url = "https://stackoverflow.com/questions?page=" + str(a) + "&sort=active"
    try:
        urlopen(url)
    except HTTPError:
        # Page number is past the last available page; skip it.
        pass

print("Please Wait.. it will take some time")
for i in range(298314, 298346):
    find_bad_qn(i)
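If the intent is instead to stop crawling as soon as the listing runs out of pages, a variant like the sketch below breaks out of the loop on the first 404 rather than silently skipping every remaining page. The fetch_page helper name is mine, not from the question:

from urllib.error import HTTPError
from urllib.request import urlopen

def fetch_page(a):
    # Return the open response for page `a`, or None if the page 404s.
    url = "https://stackoverflow.com/questions?page=" + str(a) + "&sort=active"
    try:
        return urlopen(url)
    except HTTPError as e:
        if e.code == 404:
            return None
        raise  # anything other than "page not found" is a real error

for i in range(298314, 298346):
    if fetch_page(i) is None:
        break  # pages past the last one all 404, so stop the crawl here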
Answer 1 (score: 2)
I had the exact same problem. The URL I wanted to fetch with urllib exists and is accessible with a regular browser, but urllib kept telling me 404.
For me the solution was not to use urllib:
import requests
requests.get(url)
This works for me.
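One caveat with this approach: unlike urlopen, requests.get() does not raise an exception on a 404; it just returns a response object. A minimal sketch of checking the status explicitly (the URL is just the one from the question):

import requests

url = "https://stackoverflow.com/questions?page=1&sort=active"
response = requests.get(url)
if response.ok:              # True for any 2xx status code
    html = response.text     # page body as a string, ready for BeautifulSoup
else:
    print("Got HTTP", response.status_code, "for", url)
# Alternatively, response.raise_for_status() raises on any 4xx/5xx status.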
Answer 2 (score: 1)
The default 'User-Agent' doesn't seem to have as much access as Mozilla.
Try importing Request and adding , headers={'User-Agent': 'Mozilla/5.0'} after the URL in the Request() call, i.e.:
from urllib.request import Request, urlopen
url = "https://stackoverflow.com/questions?page=" + str(a) + "&sort=active"
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
html = urlopen(req)
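For completeness, here is a sketch of how that fix would slot into the question's find_bad_qn; only the fetching lines change, and the rest of the parsing loop is elided:

from urllib.request import Request, urlopen
from bs4 import BeautifulSoup

def find_bad_qn(a):
    url = "https://stackoverflow.com/questions?page=" + str(a) + "&sort=active"
    # Identify as a browser instead of the default Python-urllib/3.x agent.
    req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    html = urlopen(req)
    bsObj = BeautifulSoup(html, "html5lib")
    # ... the rest of the original parsing loop is unchanged ...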
Answer 3 (score: 0)
That's because the URL doesn't exist; please re-check your URL. I had the same problem, and on re-checking I found that my URL was incorrect, so I changed it.