I'm trying to collect specific links from this page by matching the right keywords. So far I have:
from bs4 import BeautifulSoup
import requests
import random

url = 'http://www.thenextdoor.fr/en/4_adidas-originals'
r = requests.get(url)
soup = BeautifulSoup(r.text, 'lxml')
raw = soup.findAll('a', {'class':'add_to_compare'})
links = raw['href']
keyword1 = 'adidas'
keyword2 = 'thenextdoor'
keyword3 = 'uncaged'
for link in links:
    text = link.text
    if keyword1 in text and keyword2 in text and keyword3 in text:
I'm trying to extract this link.
Answer 0 (score: 3)
You can use all() to check that every keyword is present, or any() to check that at least one of them is:
from bs4 import BeautifulSoup
import requests

res = requests.get("http://www.thenextdoor.fr/en/4_adidas-originals").content
soup = BeautifulSoup(res, 'lxml')

# Grab every "add to compare" anchor and collect its href.
atags = soup.find_all('a', {'class': 'add_to_compare'})
links = [atag['href'] for atag in atags]

# Keep only the links that contain all of the keywords.
keywords = ['adidas', 'thenextdoor', 'Uncaged']
for link in links:
    if all(keyword in link for keyword in keywords):
        print(link)
Output:
http://www.thenextdoor.fr/en/clothing/2042-adidas-originals-Ultraboost-Uncaged-2303002052017.html
http://www.thenextdoor.fr/en/clothing/2042-adidas-originals-Ultraboost-Uncaged-2303002052017.html
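If matching any single keyword is enough, any() drops in the same way; a minimal variant reusing the links and keywords lists from the snippet above:

# any() succeeds as soon as one keyword is found in the link.
for link in links:
    if any(keyword in link for keyword in keywords):
        print(link)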
Answer 1 (score: 1)
Alternatively, you can pass a function as the href attribute value to find_all() and do it all in one go:
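The answer's original code is not preserved here; a minimal sketch of that approach, assuming the same URL and keywords as above, could look like this:

from bs4 import BeautifulSoup
import requests

res = requests.get("http://www.thenextdoor.fr/en/4_adidas-originals").content
soup = BeautifulSoup(res, 'lxml')

keywords = ['adidas', 'thenextdoor', 'Uncaged']

# find_all() accepts a function as an attribute filter: a tag matches only
# when the function returns True for that attribute's value.
matches = soup.find_all(
    'a',
    class_='add_to_compare',
    href=lambda href: href and all(keyword in href for keyword in keywords)
)
for a in matches:
    print(a['href'])

This pushes the keyword check into the parser's own attribute filtering, so there is no need to build a separate list of links first.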