I have this code:
import requests
from bs4 import BeautifulSoup

onexurl = "https://1xbet.com/en/live/Football/"
reply = requests.get(onexurl)
soup = BeautifulSoup(reply.content, "html.parser")
links = soup.find_all("a", {"class": "c-events__name"})
print(links)

urls = []
for matchlink in links:
    urls.append("https://1xbet.com/en/" + matchlink.get("href"))
print(urls)
which pulls the links from the page.
One of the results looks like this:
https://1xbet.com/en/live/Football/24581-AFC-Champions-League/207140194--/
But the original page source is this:
<a href="live/Football/24581-AFC-Champions-League/207140194-Kashima-Antlers-Guangzhou-Evergrande/" class="c-events__name"><span title="Kashima Antlers — Guangzhou Evergrande " class="c-events__teams"><div class="c-events-scoreboard__team-wrap"><div class="c-events__team">Kashima Antlers</div> <!----> <!----></div> <div class="c-events-scoreboard__team-wrap"><div class="c-events__team"> Guangzhou Evergrande</div> <!----> <!----></div> <!----> <!----> <!----></span></a>
Why doesn't matchlink.get("href") return the full link?
Answer 0 (score: 2)
import requests
from bs4 import BeautifulSoup

onexurl = "https://1xbet.com/en/live/Football/"
reply = requests.get(onexurl)
soup = BeautifulSoup(reply.content, "html.parser")
links = soup.find_all("a", {"class": "c-events__name"})

urls = []
for matchlink in links:
    # Drop the trailing "--/" that the server leaves in place of the team slug
    url = "https://1xbet.com/en/" + matchlink["href"].replace('--/', '')
    # Rebuild the slug from the visible text of the link
    teams = matchlink.text
    remaining_url = (teams.strip()
                          .replace('\n', '-')
                          .replace('(', '-')
                          .replace(')', '-')
                          .replace(' ', '-')
                          .replace('--', '-'))
    final_url = url + '-' + remaining_url
    urls.append(final_url.lower())
print(urls)
which yields a list of URLs:
['https://1xbet.com/en/live/football/1999982-5h5-dragon-league-league-b/207278079-manchester-city-team-manchester-united-team', 'https://1xbet.com/en/live/football/1471313-indonesia-liga-1/207271440-badak-lampung-kalteng-putra', 'https://1xbet.com/en/live/football/1471313-indonesia-liga-1/207271451-psm-makassar-ps-tira', ]
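To see what the replace chain in that loop actually does, here is a minimal sketch of the same slug-cleaning logic applied to a sample link text. The embedded newline between the two team names is an assumption about what `.text` returns for the nested `<div>`s; the team names are taken from the question's HTML:

```python
# Sample of what matchlink.text might look like for the anchor in the
# question (assumed: a newline separates the two team <div>s).
teams = "Kashima Antlers\n Guangzhou Evergrande"

# Same cleaning chain as in the answer above.
slug = (teams.strip()
             .replace('\n', '-')
             .replace('(', '-')
             .replace(')', '-')
             .replace(' ', '-')
             .replace('--', '-')
             .lower())
print(slug)  # kashima-antlers-guangzhou-evergrande
```

The `replace('--', '-')` pass at the end collapses the double hyphen produced when a newline sits next to a space.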
Answer 1 (score: 0)
Something else is going on here. Let's check how the parser behaves:
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="live/Football/24581-AFC-Champions-League/207140194-Kashima-Antlers-Guangzhou-Evergrande/" class="c-events__name"><span title="Kashima Antlers — Guangzhou Evergrande " class="c-events__teams"><div class="c-events-scoreboard__team-wrap"><div class="c-events__team">Kashima Antlers</div> <!----> <!----></div> <div class="c-events-scoreboard__team-wrap"><div class="c-events__team"> Guangzhou Evergrande</div> <!----> <!----></div> <!----> <!----> <!----></span></a>,
<a href="live/Football/24581-AFC-Champions-League/207140194-Kashima-Antlers-Guangzhou-Evergrande/" class="c-events__name"><span title="Kashima Antlers — Guangzhou Evergrande " class="c-events__teams"><div class="c-events-scoreboard__team-wrap"><div class="c-events__team">Kashima Antlers</div> <!----> <!----></div> <div class="c-events-scoreboard__team-wrap"><div class="c-events__team"> Guangzhou Evergrande</div> <!----> <!----></div> <!----> <!----> <!----></span></a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>"""

soup = BeautifulSoup(html_doc, 'html.parser')
for link in soup.find_all('a', {"class": "c-events__name"}):
    print(link.get('href'))
This returns the full value of the href attribute you supplied:
live/Football/24581-AFC-Champions-League/207140194-Kashima-Antlers-Guangzhou-Evergrande/
live/Football/24581-AFC-Champions-League/207140194-Kashima-Antlers-Guangzhou-Evergrande/
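As an aside (not part of the original answer), since `get('href')` returns the relative path verbatim, the standard library's `urllib.parse.urljoin` is a sketch of a safer way to build the absolute URL than string concatenation; note the base here must be "https://1xbet.com/en/", not the listing page:

```python
from urllib.parse import urljoin

# The relative href exactly as BeautifulSoup returns it.
href = "live/Football/24581-AFC-Champions-League/207140194-Kashima-Antlers-Guangzhou-Evergrande/"

# urljoin appends a relative path to a base ending in "/".
full = urljoin("https://1xbet.com/en/", href)
print(full)
# https://1xbet.com/en/live/Football/24581-AFC-Champions-League/207140194-Kashima-Antlers-Guangzhou-Evergrande/
```

Unlike plain `+`, `urljoin` also handles hrefs that start with "/" or are already absolute.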
The next step is to check how the links are added to the list you created to hold them. We can simplify this expression:
my_list = []
for ml in links:
    my_list.append("http://url.com/" + ml.get("href"))
into a list comprehension:
my_list = ["http://url.com/" + ml.get("href") for ml in links]
and the href values should end up in a clean list. If they come out mangled, make sure your BeautifulSoup filtering is returning what you think it is.
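The loop-to-comprehension rewrite can be checked in isolation with plain strings, no scraping required; the hrefs below are made up for illustration:

```python
# Hypothetical relative hrefs, standing in for ml.get("href") results.
hrefs = ["live/Football/1/", "live/Football/2/"]

# Loop version, as in the answer above.
my_list = []
for h in hrefs:
    my_list.append("http://url.com/" + h)

# Comprehension version: produces the identical list.
my_list2 = ["http://url.com/" + h for h in hrefs]
assert my_list == my_list2
print(my_list2)  # ['http://url.com/live/Football/1/', 'http://url.com/live/Football/2/']
```

If the comprehension's output looks wrong with real data, the problem is in what `find_all` matched, not in the list-building step.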