I am writing a parser for the site [https://edp.by/shop/womens-fragrances/][1]. First, I collected all the links from the site so I can crawl it:
import requests
from bs4 import BeautifulSoup

def get_html(url):
    # note: requests.get(url, 'lxml') would pass 'lxml' as the params argument;
    # the parser name belongs to BeautifulSoup, not requests
    r = requests.get(url)
    return r.text

url = 'https://edp.by/'
html = get_html(url)
soup = BeautifulSoup(html, 'lxml')
x = soup.find_all("div", {"class": "row mainmenu"})
links = []
for i in x:
    z = i.find_all("ul", {"class": "nav navbar-nav"})[0].find_all("a", {"class": "dropdown-toggle"})
    print(233, z, len(z), type(z))
    for a in z:
        q = a["href"]
        links.append(url + str(q))
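A side note on the link building above: the scraped hrefs already start with a slash, so `url + str(q)` produces a double slash (`https://edp.by//shop/...`). A minimal sketch using `urllib.parse.urljoin` instead (the href values here are illustrative, taken from the page structure shown below):

```python
from urllib.parse import urljoin

base = 'https://edp.by/'
# hrefs as they come out of a["href"] -- they already begin with "/"
hrefs = ['/shop/womens-fragrances/', '/shop/mens-fragrances/']

# naive concatenation doubles the slash
naive = [base + h for h in hrefs]
# urljoin resolves each path against the base correctly
links = [urljoin(base, h) for h in hrefs]
print(links)
```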
Then I try to get each product from a link:
url = 'https://edp.by/shop/womens-fragrances/'
html = get_html(url)
soup = BeautifulSoup(html, 'lxml')
action = soup.find('form').get('action')
print(action)
The result is: `/search/`
But on the site itself, inspecting with Chrome DevTools, I can see the whole structure:
<form method="get" action="/shop/womens-fragrances/">
  <div class="rr-widget" data-rr-widget-category-id="594" data-rr-widget-id="516e7cba0d422d00402a14b4" data-rr-widget-width="100%"></div>
  <div class="shop_block">
    <div class="shop_table">
      <div class="col-md-4 col-sm-4 col-xs-12 product">
        <div class="block">
          <a href="/shop/womens-fragrances/43653/">
            <img src="/images/no-image.png" class="text-center" alt="" title="">
            <p class="fch"></p>
            <p class="tch">0,00 руб. </p>
          </a>
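For reference, once markup like this is actually present in the HTML that `requests` receives, the product link, image and price can be pulled out with bs4. A minimal sketch against a hard-coded copy of the snippet above (the `fch` text is left empty, exactly as the snippet shows it):

```python
from bs4 import BeautifulSoup

# hard-coded copy of the product block from the question
html = '''
<div class="col-md-4 col-sm-4 col-xs-12 product">
  <div class="block">
    <a href="/shop/womens-fragrances/43653/">
      <img src="/images/no-image.png" class="text-center" alt="" title="">
      <p class="fch"></p>
      <p class="tch">0,00 руб. </p>
    </a>
  </div>
</div>'''

soup = BeautifulSoup(html, 'html.parser')
for product in soup.find_all('div', class_='product'):
    a = product.find('a')
    link = a['href']                                   # product page link
    img = a.find('img')['src']                         # image source
    price = a.find('p', class_='tch').get_text(strip=True)  # price text
    print(link, img, price)
```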
I want to get the product links, images and text, but bs4 does not show them. What is the reason, and how can I get them? I also tried MechanicalSoup, with no result either:
import mechanicalsoup

browser = mechanicalsoup.StatefulBrowser()
browser.open(links[0])
form = browser.select_form('form')
action = form.form.attrs['action']
print(action)  # /search/
Answer 0 (score: 1)
`.find()` will only get the first occurrence of that tag. The page has six `<form>` elements. Use `.find_all()` instead, then iterate over the results, and you will see that the form you want is at the third index position in that list:
import requests
from bs4 import BeautifulSoup

def get_html(url):
    r = requests.get(url)
    return r.text

url = 'https://edp.by/'
html = get_html(url)
soup = BeautifulSoup(html, 'html.parser')
x = soup.find_all("div", {"class": "row mainmenu"})
links = []
for i in x:
    z = i.find_all("ul", {"class": "nav navbar-nav"})[0].find_all("a", {"class": "dropdown-toggle"})
    for a in z:
        q = a["href"]
        links.append(url + str(q))

url = 'https://edp.by/shop/womens-fragrances/'
html = get_html(url)
soup = BeautifulSoup(html, 'html.parser')

# .find() returns only the first <form>; .find_all() returns every one
actions = soup.find_all('form')
for action in actions:
    alpha = action.get('action')
    print(alpha)
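Once the right position is known, the category form can be taken straight out of the `.find_all()` list. A minimal offline sketch (the surrounding form actions are invented for illustration; only `/shop/womens-fragrances/` and `/search/` come from the question):

```python
from bs4 import BeautifulSoup

# stand-in page with several forms, as on the real site
html = '''
<form method="get" action="/search/"></form>
<form method="post" action="/subscribe/"></form>
<form method="get" action="/shop/womens-fragrances/"></form>
'''
soup = BeautifulSoup(html, 'html.parser')
forms = soup.find_all('form')

# by position -- fragile if the page layout ever changes
by_index = forms[2].get('action')
# by content -- first form whose action starts with /shop/, more robust
by_match = next(f.get('action') for f in forms
                if f.get('action', '').startswith('/shop/'))
print(by_index, by_match)
```

Matching on the action value rather than the list position keeps the scraper working even if the site adds or removes a form before the one you want.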