I am trying to scrape data from the website Flipkart.com using BeautifulSoup and Selenium, but I am facing an error. I can't understand why this error is coming up, and I have already tried many solutions available on the internet.
Is there any way to fix this, or should I try a different approach to scraping?
The website opens with the Selenium driver, but I cannot get any data from it, and I don't understand why this is happening.
Here is the code I am writing and executing:
from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd
#driver = webdriver.Chrome('/usr/local/bin/chromedriver')
driver = webdriver.Chrome(executable_path='chromedriver.exe')
products=[] #List to store name of the product
prices=[] #List to store price of the product
ratings=[] #List to store rating of the product
content=driver.get("https://www.flipkart.com/mobiles/pr?sid=tyy%2C4io&p%5B%5D=facets.brand%255B%255D%3DRealme&otracker=nmenu_sub_Electronics_0_Realme")
soup = BeautifulSoup(content, 'lxml')
print(soup)
for a in soup.findAll('div', attrs={'class':'bhgxx2 col-12-12'}):
    name=a.find('div', attrs={'class':'_3wU53n'})
    price=a.find('div', attrs={'class':'_1vC4OE _2rQ-NK'})
    rating=a.find('div', attrs={'class':'hGSR34'})
    products.append(name.text)
    prices.append(price.text)
    ratings.append(rating.text)
    print(rating.text)
df = pd.DataFrame({'Product Name':products,'Price':prices,'Rating':ratings})
print(df)
df.to_csv('products.csv', index=False, encoding='utf-8')
Here is the error I am getting on the command line:
Traceback (most recent call last):
File "C:\MachineLearning\WebScraping\web.py", line 10, in <module>
soup = BeautifulSoup(content, 'lxml')
File "C:\Users\karti\AppData\Local\Programs\Python\Python37-32\lib\site-packages\bs4\__init__.py", line 267, in __init__
elif len(markup) <= 256 and (
TypeError: object of type 'NoneType' has no len()
Answer (score 0):
After loading the page with driver.get(url), you have to use driver.page_source to get the page source. driver.get(url) does not return anything.
from selenium import webdriver
driver = webdriver.Chrome(executable_path='/path/to/chromedriver')
driver.get("https://www.flipkart.com/mobiles/pr?sid=tyy%2C4io&p%5B%5D=facets.brand%255B%255D%3DRealme&otracker=nmenu_sub_Electronics_0_Realme")
print(driver.page_source)
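As a side note (not part of the original answer): Flipkart renders parts of the page with JavaScript, so it can help to wait until the product elements are present before reading driver.page_source. Below is a minimal sketch using Selenium's WebDriverWait, assuming the same _3wU53n product-name class used in the question (Flipkart may change these class names at any time):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(executable_path='/path/to/chromedriver')
driver.get("https://www.flipkart.com/mobiles/pr?sid=tyy%2C4io&p%5B%5D=facets.brand%255B%255D%3DRealme&otracker=nmenu_sub_Electronics_0_Realme")

# Wait up to 10 seconds for at least one product-name div to appear.
# '_3wU53n' is the class from the question; adjust it if the site has changed.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, '_3wU53n'))
)

html = driver.page_source  # the rendered HTML, ready for BeautifulSoup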
Another problem with your code is that the class bhgxx2 col-12-12 is used multiple times on that page, and some of those divs do not contain a product. That will raise an AttributeError inside your for loop.
A working version of your code:
from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd
driver = webdriver.Chrome(executable_path='/path/to/chromedriver')
products = [] # List to store name of the product
prices = [] # List to store price of the product
ratings = [] # List to store rating of the product
driver.get("https://www.flipkart.com/mobiles/pr?sid=tyy%2C4io&p%5B%5D=facets.brand%255B%255D%3DRealme&otracker=nmenu_sub_Electronics_0_Realme")
soup = BeautifulSoup(driver.page_source, 'lxml')
for a in soup.findAll('div', attrs={'class':'bhgxx2 col-12-12'}):
    try:
        name = a.find('div', attrs={'class':'_3wU53n'})
        price = a.find('div', attrs={'class':'_1vC4OE _2rQ-NK'})
        rating = a.find('div', attrs={'class':'hGSR34'})
        products.append(name.text)
        prices.append(price.text)
        ratings.append(rating.text)
    except AttributeError:
        pass
df = pd.DataFrame({'Product Name': products, 'Price': prices, 'Rating': ratings})
print(df)
df.to_csv('products.csv', index=False, encoding='utf-8')
Output:
Price Product Name Rating
0 ₹5,999 Realme C2 (Diamond Black, 16 GB) 4.4
1 ₹5,999 Realme C2 (Diamond Blue, 16 GB) 4.4
2 ₹8,999 Realme 3 (Radiant Blue, 32 GB) 4.5
3 ₹8,999 Realme 3 (Dynamic Black, 32 GB) 4.5
4 ₹9,999 Realme 3 (Dynamic Black, 64 GB) 4.5
5 ₹10,999 Realme 3 (Diamond Red, 64 GB) 4.4
...
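A small optional refinement (my own suggestion, not part of the original answer): instead of silently swallowing AttributeError, you can skip incomplete entries with an explicit check, and call driver.quit() when you are done so the browser process is released. A sketch of the loop body under that assumption, using the same class names as above:

for a in soup.findAll('div', attrs={'class':'bhgxx2 col-12-12'}):
    name = a.find('div', attrs={'class':'_3wU53n'})
    price = a.find('div', attrs={'class':'_1vC4OE _2rQ-NK'})
    rating = a.find('div', attrs={'class':'hGSR34'})
    # Only keep rows where all three fields were found.
    if name and price and rating:
        products.append(name.text)
        prices.append(price.text)
        ratings.append(rating.text)

driver.quit()  # close the browser when finished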