I'm writing a script to extract the internal links from a website. As it visits each internal link in the list, it appends any links it hasn't seen before to that same list.
Once every internal link has been appended, I want to break out of the loop.
from urllib.parse import urlparse, urlsplit

import requests
from bs4 import BeautifulSoup

addr = "http://andnow.com/"
base_addr = "{0.scheme}://{0.netloc}/".format(urlsplit(addr))
o = urlparse(addr)
domain = o.hostname
i_url = []

def internal_crawl(url):
    headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:32.0) Gecko/20100101 Firefox/32.0'}
    r = requests.get(url, headers=headers).content
    soup = BeautifulSoup(r, "html.parser")
    i_url.append(url)
    try:
        for link in [h.get('href') for h in soup.find_all('a')]:
            # note: the bare `"tel:" and` in the original is always truthy;
            # it needs to be `"tel:" not in link`
            if domain in link and "mailto:" not in link and "tel:" not in link and not link.startswith('#'):
                if link not in i_url:
                    i_url.append(link)
                    # print(link)
            elif "http" not in link and "tel:" not in link and "mailto:" not in link and not link.startswith('#'):
                internal = base_addr + link
                # the original tested `link not in i_url`, which never matches
                # the prefixed URL; check the joined URL instead
                if internal not in i_url:
                    i_url.append(internal)
        print(i_url)
    except Exception:
        print("exception")

internal_crawl(base_addr)
for l in i_url:
    internal_crawl(l)
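For reference, a common way to sidestep the stopping problem entirely is a breadth-first worklist with a visited set: each URL is crawled exactly once and the loop ends by itself when the frontier is empty. A minimal sketch with the page-fetching step injected as a function (the names and the fake link table here are illustrative, not from the question):

```python
from collections import deque

def crawl(start, get_links):
    """Visit every URL reachable from start exactly once.

    get_links(url) should return the internal links found on that page.
    """
    visited = set()
    frontier = deque([start])
    while frontier:  # the loop ends when nothing is left to visit
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        for link in get_links(url):
            if link not in visited:
                frontier.append(link)
    return visited

# Usage with a fake site: "/" links to "/a" and "/b", "/a" links back to "/".
fake_site = {"/": ["/a", "/b"], "/a": ["/"], "/b": []}
print(sorted(crawl("/", lambda u: fake_site.get(u, []))))  # ['/', '/a', '/b']
```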
I tried adding the following code, but it didn't work. I'm not sure whether that's because my list is changing while I iterate over it.
for x in i_url:
    if x == i_url[-1]:
        break
Is there a way to break the loop if the same item appears in the list twice in a row?
Answer 0 (score: 0)
I'm not sure exactly what you're trying to do, but if I understand correctly, one approach is:

prev = None
for x in i_url:
    if x == prev:
        break
    # do stuff
    prev = x
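Wrapped in a function and run on a small made-up list, the pattern above stops at the first consecutive repeat (the function name is just for this demo):

```python
def first_consecutive_repeat(items):
    """Return the first value that appears twice in a row, or None."""
    prev = None
    for x in items:
        if x == prev:
            return x
        prev = x
    return None

print(first_consecutive_repeat(["x", "y", "z", "z", "a"]))  # z
```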
Answer 1 (score: 0)
Is this what you're after?
y = None
i_url = ["x", "y", "z", "z", "a"]
for x in i_url:
    if x == y:
        print("found ", x)
        break
    else:
        y = x
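A stylistic alternative (not from the answer above) is to compare each element with its neighbour by zipping the list against itself shifted by one:

```python
i_url = ["x", "y", "z", "z", "a"]

# next() returns the first adjacent pair that matches, or None if there is none
dup = next((a for a, b in zip(i_url, i_url[1:]) if a == b), None)
print(dup)  # z
```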