Web scraping is slow, but I'm not sure why

Date: 2018-03-31 15:23:42

Tags: python-3.x selenium web-scraping beautifulsoup selenium-chromedriver

I have a lot of web scraping to do, so I switched to a headless browser hoping that would make things faster, but it didn't improve the speed.

I looked at this Stack Overflow post, but I don't understand the answer someone wrote there: Is Selenium slow, or is my code wrong?

Here is my slow code:

# followed this tutorial https://medium.com/@stevennatera/web-scraping-with-selenium-and-chrome-canary-on-macos-fc2eff723f9e
from selenium import webdriver
options = webdriver.ChromeOptions()
options.binary_location = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary'
options.add_argument('window-size=800x841')
options.add_argument('headless')
driver = webdriver.Chrome(chrome_options=options)
driver.get('https://poshmark.com/search?')
xpath='//input[@id="user-search-box"]'
searchBox=driver.find_element_by_xpath(xpath)

brand="anthropology"

style="headband"

searchBox.send_keys(' '.join([brand,style]))

from selenium.webdriver.common.keys import Keys
# equivalent of hitting the Enter key
searchBox.send_keys(Keys.ENTER)

url=driver.current_url
print(url)
import requests
response=requests.get(url)
print(response)


print(response.text)
# using Beautiful Soup to grab the listings:
html=response.content
from bs4 import BeautifulSoup
from urllib.parse import urljoin

soup=BeautifulSoup(html,'html.parser')

# 'a' as in links or anchor tags
anchor_tags=soup.find_all('a')


# finding the hyperlinks: href is the hyperlink attribute of each anchor tag
hyper_links=[link.get("href") for link in anchor_tags]
#print(hyper_links)

# (better to visualize it like this)
# for link in soup.find_all("a"):
#     print(link.get("href"))

# keep only the hrefs that contain "listing" (some hrefs are None, which is why we need the "listing and" guard)
# using a set because some of the links are repeated
clothing_listings={listing for listing in hyper_links if listing and "listing" in listing}
print(len(clothing_listings))
print(clothing_listings)

# for some reason a link called "unlike" shows up once for each item, so count those instead
clothing_listings={listing for listing in hyper_links if listing and "unlike" in listing}
print(len(clothing_listings)) # this is the correct number of clothing items returned by the search

driver.quit()

Why does the scraping take so long?

1 Answer:

Answer 0: (score: 2)

You are using requests to fetch the URL, so why not use it for the whole task? The part where you use selenium seems redundant: you only use it to open the link, and then you fetch the resulting URL with requests. All you have to do is pass the appropriate headers, which you can gather by looking at the Network tab of the developer tools in Chrome or Firefox.

rh = {
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9',
    'referer': 'https://poshmark.com/search?',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
}
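
Since you have a lot of pages to scrape, it is also worth reusing one connection instead of opening a new one per request. Below is a minimal sketch, assuming the rh headers above; the search_urls list is a hypothetical stand-in for the URLs you actually need:

import requests

session = requests.Session()   # keeps the underlying TCP connection alive across requests
session.headers.update(rh)     # every request made through the session sends these headers

# hypothetical list of search result pages to fetch in bulk
search_urls = [
    'https://poshmark.com/search?query=anthropology+headband&type=listings&department=Women',
]

for u in search_urls:
    r = session.get(u)
    print(u, r.status_code)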

Modify the URL to search for a particular term:

query = 'anthropology headband'
url = 'https://poshmark.com/search?query={}&type=listings&department=Women'.format(query)
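
As a side note, requests can also assemble and URL-encode the query string for you through its params argument, so manual string formatting isn't required; a small sketch of the same search, assuming the rh headers above:

params = {'query': 'anthropology headband', 'type': 'listings', 'department': 'Women'}
r = requests.get('https://poshmark.com/search', params=params, headers=rh)
print(r.url)   # requests builds the encoded query string onto the base URL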

Then, use BeautifulSoup. Also, you can narrow down the links you scrape using any attribute you want. In your case, it is the class attribute with the value covershot-con.

r = requests.get(url, headers = rh)
soup = BeautifulSoup(r.content, 'lxml')

links = soup.find_all('a', {'class': 'covershot-con'})

Here's the result:

for i in links:
    print(i['href'])

/listing/Anthro-Beaded-Headband-5a78fb899a9455e90aef438e
/listing/NWT-ANTHROPOLOGIE-Twisted-Vines-Crystal-Headband-5abbfb4a07003ad2dc58142f
/listing/Anthropologie-Nicole-Co-White-Floral-Headband-59dea5adeaf0302a5600bc41
/listing/NWT-ANTHROPOLOGIE-Namrata-Spring-Blossom-Headband-5ab5509d72769b52ba31829e
.
.
.
/listing/Anthropologie-By-Lilla-Spiky-Blue-Headband-59064f2ffbf6f90bfb01b854
/listing/Anthropologie-Beaded-Headband-5ab2cfe79d20f01a73ab0ddb
/listing/Anthropologie-Floral-Hawaiian-Headband-59d09eb941b4e0e1710871ec
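
If you want absolute URLs rather than the relative /listing/... paths shown above, urljoin (already imported in the question code) can build them; a minimal sketch assuming the links list above:

from urllib.parse import urljoin

base = 'https://poshmark.com'
listing_urls = [urljoin(base, i['href']) for i in links]
print(listing_urls[0])   # e.g. https://poshmark.com/listing/Anthro-Beaded-Headband-5a78fb899a9455e90aef438e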

Edit (Tips):

  1. Use selenium as a last resort (when all other approaches fail). As @Gilles Quenot says, selenium is not meant for quickly executing web requests.

  2. Learn to work with the requests library (using headers, passing data, etc.). Their documentation page is more than enough; it will suffice for most scraping tasks, and it is fast.

  3. Even for pages that require JS execution, you may be able to get away with requests if you can figure out how to execute the JS part using a library like js2py (a small sketch follows below).
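
For illustration only, here is a minimal js2py sketch with a hypothetical token function (not Poshmark's actual JavaScript), showing how a small piece of page JS can be evaluated from Python and its result fed back into a requests call:

import js2py
import requests

# hypothetical snippet of the kind a page might use to compute a request token
make_token = js2py.eval_js("function makeToken(seed) { return 'tok-' + (seed * 31 + 7); }")

token = make_token(42)   # call the JS function from Python
# placeholder URL, not a real endpoint
r = requests.get('https://example.com/api', params={'token': token})
print(token, r.status_code)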