Using Python (on a Mac) to scrape a list of companies from LinkedIn: defaults to retry, or a <999> error

Date: 2017-11-29 12:03:47

Tags: python html error-handling web-scraping linkedin

I'm new to this, and I'm trying to automate pulling the details from each company page on LinkedIn.

I'm modifying a piece of code I found, and it never gets past requests.get: my output immediately defaults to retrying. That happens when I pass the headers in as a parameter. When I leave them out, I actually get a <999> response instead.

Any ideas on how to make progress here? Either how to get past the 999 error, or, since the program immediately defaults to retrying when the headers are added, how I can find out what the underlying error actually is.

from lxml import html
import csv, os, json 
import requests
from time import sleep
import certifi
import urllib3
urllib3.disable_warnings()



def linkedin_companies_parser(url):
    for i in range(5):
        try:

            print("looking at the headers")
            headers = {
                "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
                "accept-encoding": "gzip, deflate, sdch, br",
                "accept-language": "en-US,en;q=0.8,ms;q=0.6",
                "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"}

            print("Fetching :", url)
            response = requests.get(url, headers=headers, verify=False)
            print(response)
            # response.content is bytes on Python 3; use response.text (str) so
            # the .replace() calls below don't raise a TypeError
            formatted_response = response.text.replace('<!--', '').replace('-->', '')
            print(formatted_response)
            doc = html.fromstring(formatted_response)
            print("we have come here")

            datafrom_xpath = doc.xpath('//code[@id="stream-promo-top-bar-embed-id-content"]//text()')
            content_about = doc.xpath('//code[@id="stream-about-section-embed-id-content"]')
            if not content_about:
                content_about = doc.xpath('//code[@id="stream-footer-embed-id-content"]')
            if content_about:
                pass
                # json_text = content_about[0].html_content().replace('<code id="stream-footer-embed-id-content"><!--','').replace('<code id="stream-about-section-embed-id-content"><!--','').replace('--></code>','')
            if datafrom_xpath:
                try:
                    json_formatted_data = json.loads(datafrom_xpath[0])

                    # dict.get() returns None for a missing key, which is what the
                    # long "x[k] if k in x.keys() else None" chains were doing
                    company_name = json_formatted_data.get('companyName')
                    size = json_formatted_data.get('size')
                    industry = json_formatted_data.get('industry')
                    description = json_formatted_data.get('description')
                    follower_count = json_formatted_data.get('followerCount')
                    year_founded = json_formatted_data.get('yearFounded')
                    website = json_formatted_data.get('website')
                    type = json_formatted_data.get('companyType')
                    specialities = json_formatted_data.get('specialties')

                    headquarters = json_formatted_data.get('headquarters') or {}
                    city = headquarters.get('city')
                    country = headquarters.get('country')
                    state = headquarters.get('state')
                    street1 = headquarters.get('street1')
                    street2 = headquarters.get('street2')
                    zip = headquarters.get('zip')
                    # join only the address parts that exist; the original
                    # street1 + ', ' + street2 raises a TypeError when either is None
                    street = ', '.join(part for part in (street1, street2) if part) or None

                    data = {
                                'company_name': company_name,
                                'size': size,
                                'industry': industry,
                                'description': description,
                                'follower_count': follower_count,
                                'founded': year_founded,
                                'website': website,
                                'type': type,
                                'specialities': specialities,
                                'city': city,
                                'country': country,
                                'state': state,
                                'street': street,
                                'zip': zip,
                                'url': url
                            }
                    return data
                except Exception:
                    print("cant parse page", url)

            # Retry in case of captcha or login page redirection
            if len(response.content) < 2000 or "trk=login_reg_redirect" in url:
                if response.status_code == 404:
                    print("linkedin page not found")
                else:
                    raise ValueError('redirecting to login page or captcha found')
        except Exception:
            print("retrying :", url)

def readurls():
    companyurls = ['https://www.linkedin.com/company/tata-consultancy-services']
    extracted_data = []
    for url in companyurls:
        extracted_data.append(linkedin_companies_parser(url))
    # write once after the loop; "with" also closes the file, which the
    # original bare open() never did
    with open('data.json', 'w') as f:
        json.dump(extracted_data, f, indent=4)

if __name__ == "__main__":
    readurls()
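
To be concrete about the "defaults to retry" part: the "retrying" message comes from the catch-all except wrapped around the whole try block, which swallows whatever actually went wrong. I assume something like this toy sketch (standard library only, not the real scraper) is the way to surface the hidden error:

import traceback

def flaky():
    # stand-in for the body of the big try block above
    raise ValueError("simulated failure")

for i in range(5):
    try:
        flaky()
    except Exception:
        print("retrying :")
        traceback.print_exc()  # prints the swallowed exception and stack trace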

1 Answer:

Answer 0 (score: 1)

A status code of 999 from LinkedIn usually means access was denied because of bot activity or some other security reason.
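
If you do stay with requests, a sketch like the following at least detects the 999 explicitly and backs off instead of retrying blindly. The backoff intervals here are arbitrary, and a persistent 999 usually means the block will not lift on its own:

import time
import requests

def fetch_with_backoff(url, headers=None, max_tries=5):
    # LinkedIn answers requests it refuses with the non-standard code 999
    for attempt in range(max_tries):
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code != 999:
            return response
        wait = 2 ** attempt  # 1, 2, 4, 8, 16 seconds
        print("got 999 (denied), waiting %d seconds" % wait)
        time.sleep(wait)
    raise RuntimeError("access still denied (999) after %d attempts" % max_tries)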

Your best bet is to use Chrome or Firefox in headless mode to emulate a real user and scrape the page. It spares you from setting cookies or passing headers manually, which saves a lot of time.

You can use Selenium with Python to automate the browser navigation and scraping.
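
A minimal sketch of that approach, assuming chromedriver is installed and on your PATH (the URL is the one from the question, and the lxml parsing can stay exactly as it is):

from lxml import html
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")  # no visible browser window
driver = webdriver.Chrome(chrome_options=options)
try:
    driver.get("https://www.linkedin.com/company/tata-consultancy-services")
    page_source = driver.page_source  # fully rendered HTML; cookies are handled by the browser
finally:
    driver.quit()

doc = html.fromstring(page_source)  # feed this into the same xpath code as before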

PS: Make sure you aren't running your program from AWS or another popular hosting provider, since those IP ranges are blocked by LinkedIn for unauthenticated sessions.