How to scrape information from a directory site using Selenium

Asked: 2018-12-30 15:27:02

Tags: python selenium

Scraping contact information from a directory site

I am scraping contact information from a directory site, and I need to do it with Selenium. It takes three steps: 1. Get the company URLs from the website. 2. Get all company URLs from the next page / all pages. 3. Scrape all the contact information, such as company name, website, email, and so on. The code is below, but I am running into two problems.

# -*- coding: utf-8 -*-
from time import sleep
from scrapy import Spider
from selenium import webdriver
from scrapy.selector import Selector
from scrapy.http import Request
from selenium.common.exceptions import NoSuchElementException
import pandas as pd 
results = list()
driver = webdriver.Chrome('D:\chromedriver_win32\chromedriver.exe')
MAX_PAGE_NUM = 2
for i in range(1, MAX_PAGE_NUM):
    page_num = str(i)
    url = "http://www.arabianbusinesscommunity.com/category/Industrial-Automation-Process-Control/" + page_num
    driver.get(url)
    sleep(5)
    sel = Selector(text=driver.page_source)
    companies = sel.xpath('//*[@id="categorypagehtml"]/div[1]/div[7]/ul/li/b//@href').extract()
    for i in range(0, len(companies)):
        print(companies[i])
        results.append(companies[i])
        print('---')
        for result in results:
            url1 = "http://www.arabianbusinesscommunity.com" + result
            print(url1)
            driver.get(url1)
            sleep(5)

            sel = Selector(text=driver.page_source)

            name = sel.css('h2::text').extract_first()
            country = sel.xpath('//*[@id="companypagehtml"]/div[1]/div[2]/ul[1]/li[1]/span[4]/text()').extract_first()
            if country:
                country = country.strip()
            web = sel.xpath('//*[@id="companypagehtml"]/div[1]/div[2]/ul[1]/li[4]/a/@href').extract_first()
            email = sel.xpath('//a[contains(@href, "mailto:")]/@href').extract_first()
            records = []
            records.append((web, email, country, name))
            df = pd.DataFrame(records, columns=['web', 'email', 'country', 'name'])

I wrote the code as above, but there are two problems: 1. I only get the most recent company's information. 2. On every iteration of the loop, the script revisits all the URLs it has already visited.

Can anyone help fix these issues?
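For reference, both symptoms follow from the loop structure: the inner `for result in results` loop is nested inside the companies loop while `results` keeps growing (so earlier URLs get revisited), and `records = []` is re-created on every pass (so only the last row survives into the DataFrame). A minimal sketch of the intended three-step flow, with the Selenium page loads stubbed out as placeholder functions so only the control flow is shown:

```python
# Sketch of the intended control flow; fetch_listing / fetch_details are
# placeholders standing in for the Selenium page loads and XPath extraction.

def fetch_listing(page):
    # Placeholder: one listing page -> a few company URLs
    return [f"/company/{page}-{i}" for i in range(2)]

def fetch_details(url):
    # Placeholder: one company page -> one record
    return {"url": url, "name": "Company " + url.split("/")[-1]}

MAX_PAGE_NUM = 2

# Steps 1+2: collect every company URL first (each listing page visited once)
company_urls = []
for page in range(1, MAX_PAGE_NUM + 1):
    company_urls.extend(fetch_listing(page))

# Step 3: visit each company URL exactly once, appending to ONE records list
records = [fetch_details(url) for url in company_urls]

print(len(records))  # -> 4: one record per company, no URL visited twice
```

The key change is that URL collection and detail scraping are separate loops, and the accumulator lists are created once, outside any loop.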

1 Answer:

Answer 0 (score: 1)

The code below gets the details of all companies from all pages:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


driver = webdriver.Chrome()
baseUrl = "http://www.arabianbusinesscommunity.com/category/Industrial-Automation-Process-Control"
driver.get(baseUrl)

wait = WebDriverWait(driver, 5)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".search-result-list li")))

# Get last page number
lastPageHref = driver.find_element(By.CSS_SELECTOR, ".PagedList-skipToLast a").get_attribute("href")
hrefArray = lastPageHref.split("/")
lastPageNum = int(hrefArray[len(hrefArray) - 1])

# Get all URLs for the first page and save them in companyUrls list
js = 'return [...document.querySelectorAll(".search-result-list li b a")].map(e=>e.href)'
companyUrls = driver.execute_script(js)

# Iterate through all pages and get all companies URLs
for i in range(2, lastPageNum + 1):
    driver.get(baseUrl + "/" + str(i))
    companyUrls.extend(driver.execute_script(js))

# Open each company page, get all details, and store them
companies = []
for url in companyUrls:
    driver.get(url)
    company = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "#companypagehtml")))
    name = company.find_element(By.CSS_SELECTOR, "h2").text
    email = driver.execute_script('var e = document.querySelector(".email"); if (e!=null) { return e.textContent;} return "";')
    website = driver.execute_script('var e = document.querySelector(".website"); if (e!=null) { return e.textContent;} return "";')
    phone = driver.execute_script('var e = document.querySelector(".phone"); if (e!=null) { return e.textContent;} return "";')
    fax = driver.execute_script('var e = document.querySelector(".fax"); if (e!=null) { return e.textContent;} return "";')
    country = company.find_element(By.XPATH, ".//li[@class='location']/span[last()]").text.replace(",", "").strip()
    address = ''.join([e.text.strip() for e in company.find_elements(By.XPATH, ".//li[@class='location']/span[position() != last()]")])
    companies.append((name, email, website, phone, fax, country, address))
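Since the question's original code was building a pandas DataFrame, the collected tuples can be loaded into one at the end. This is a sketch with made-up sample rows in place of the scraped data; the column names are assumptions matching the fields collected above:

```python
import pandas as pd

# Sample records in the same (name, email, website, phone, fax, country, address)
# order the loop above collects them in; these rows are illustrative only.
companies = [
    ("Acme Automation", "info@acme.example", "http://acme.example",
     "+971-4-0000000", "+971-4-0000001", "UAE", "Dubai Industrial City"),
]

df = pd.DataFrame(companies,
                  columns=["name", "email", "website", "phone", "fax",
                           "country", "address"])
df.to_csv("companies.csv", index=False)  # persist the scraped results
```

Building the DataFrame once, after the scraping loop finishes, avoids the question's original bug of re-creating the records list on every iteration.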