I want to scrape https://www.ebay.co.uk/sch/i.html?_from=R40&_sacat=0&_nkw=xbox&_pgn=2&_skc=50&rt=nc and get the listing titles (Microsoft Xbox 360 E 250 GB Black Console, Microsoft Xbox One S 1TB Console White with 2 Wireless Controllers, etc.). Eventually I want to feed different eBay URLs into the Python script, but for this question I just want to focus on this one specific eBay URL.
Then I want to add the titles to a dataframe to be written to Excel. I think I can do that part myself.
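For the dataframe-to-Excel step mentioned above, a minimal sketch (assuming the titles have already been collected into a plain Python list, and that an Excel engine such as openpyxl is installed) could look like this:

```python
import pandas as pd

# Hypothetical titles as they might be scraped from the listing page
titles = [
    "Microsoft Xbox 360 E 250 GB Black Console",
    "Microsoft Xbox One S 1TB Console White with 2 Wireless Controllers",
]

# Build a one-column dataframe of titles
df = pd.DataFrame({"Title": titles})

# Write it out to an Excel file
df.to_excel("ebay_titles.xlsx", index=False)
```

The file name and column label here are placeholders; the point is just that a list of strings maps directly onto a one-column DataFrame.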
This didn't work -
for post in soup.findAll('a', id='ListViewInner'):
    print(post.get('href'))
Neither did this -
for post in soup.findAll('a', id='body'):
    print(post.get('href'))
Nor this -
for post in soup.findAll('a', id='body'):
    print(post.get('href'))
h1 = soup.find("a", {"class": "lvtitle"})
print(h1)
Nor this -
for post in soup.findAll('a', attrs={"class": "left-center"}):
    print(post.get('href'))
Nor this -
for post in soup.findAll('a', {'id': 'ListViewInner'}):
    print(post.get('href'))
This gave me links to the wrong parts of the page. I know an href is the hyperlink rather than the title, but I figured that if the code below worked, I could adapt it to pull titles instead -
for post in soup.findAll('a'):
    print(post.get('href'))
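On the href-versus-title point: with BeautifulSoup, .get('href') reads a tag attribute while .get_text() returns the visible text, so the same loop shape can be adapted to print titles instead of links. A small self-contained sketch on made-up HTML (not the real eBay markup):

```python
from bs4 import BeautifulSoup

# Made-up snippet mimicking a listing link; the real eBay markup differs
html = '<h3 class="lvtitle"><a href="/itm/123">Microsoft Xbox 360 E 250 GB Black Console</a></h3>'
soup = BeautifulSoup(html, "html.parser")

link = soup.find("a")
print(link.get("href"))  # the hyperlink: /itm/123
print(link.get_text())   # the title text of the same tag
```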
Here is all my code -
import pandas as pd
from pandas import ExcelWriter
from pandas import ExcelFile
import urllib.request
from bs4 import BeautifulSoup

# BaseURL, Syntax1 and Syntax2 should be standard across all
# eBay URLs, whereas Request and PageNumber can change
BaseURL = "https://www.ebay.co.uk/sch/i.html?_from=R40&_sacat=0&_nkw="
Syntax1 = "&_skc=50&rt=nc"
Request = "xbox"
Syntax2 = "&_pgn="
PageNumber = "2"

URL = BaseURL + Request + Syntax2 + PageNumber + Syntax1
print(URL)

HTML = urllib.request.urlopen(URL).read()
#print(HTML)
soup = BeautifulSoup(HTML, "html.parser")  # was soup=b(...), but b is undefined with the import above
#print(soup)

for post in soup.findAll('a'):
    print(post.get('href'))
Answer (score: 1)
Use CSS selectors, which are faster.
import requests
from bs4 import BeautifulSoup

url = 'https://www.ebay.co.uk/sch/i.html?_from=R40&_sacat=0&_nkw=xbox&_pgn=2&_skc=50&rt=nc'
res = requests.get(url)
soup = BeautifulSoup(res.text, 'html.parser')

for post in soup.select("#ListViewInner a"):
    print(post.get('href'))
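The difference between the failed attempts and select is worth spelling out: findAll('a', id='ListViewInner') looks for an <a> tag that itself carries that id, while the CSS selector "#ListViewInner a" matches any <a> nested anywhere inside the element with that id. A small illustration on made-up HTML:

```python
from bs4 import BeautifulSoup

# Made-up structure: links nested inside a container div, as on the eBay results page
html = """
<div id="ListViewInner">
  <ul>
    <li><a href="/itm/1">Item one</a></li>
    <li><a href="/itm/2">Item two</a></li>
  </ul>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

# Looks for an <a> tag whose own id is "ListViewInner" - none exists, so this finds nothing
print(len(soup.find_all("a", id="ListViewInner")))  # 0

# Matches every <a> descendant of the element with id="ListViewInner"
print([a.get("href") for a in soup.select("#ListViewInner a")])  # ['/itm/1', '/itm/2']
```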
Use the format() function instead of concatenating strings.
import pandas as pd
from pandas import ExcelWriter
from pandas import ExcelFile
import urllib.request
from bs4 import BeautifulSoup

BaseURL = "https://www.ebay.co.uk/sch/i.html?_from=R40&_sacat=0&_nkw={}&_pgn={}&_skc={}&rt={}"
skc = "50"
rt = "nc"
Request = "xbox"
PageNumber = "2"

URL = BaseURL.format(Request, PageNumber, skc, rt)
print(URL)

HTML = urllib.request.urlopen(URL).read()
soup = BeautifulSoup(HTML, "html.parser")

for post in soup.select('#ListViewInner a'):
    print(post.get('href'))
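As a quick sanity check that the template reproduces the original URL, the formatted string can be compared against the hand-concatenated version from the question:

```python
# Template with placeholders for the search term, page number, _skc and rt parameters
BaseURL = "https://www.ebay.co.uk/sch/i.html?_from=R40&_sacat=0&_nkw={}&_pgn={}&_skc={}&rt={}"
URL = BaseURL.format("xbox", "2", "50", "nc")

# The same URL built by concatenation, as in the question's original script
concatenated = ("https://www.ebay.co.uk/sch/i.html?_from=R40&_sacat=0&_nkw="
                + "xbox" + "&_pgn=" + "2" + "&_skc=50&rt=nc")

print(URL == concatenated)  # True
```

One template string with named parameter slots is also easier to extend when more query parameters need to vary later.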