LinkedIn profile name scraping

Asked: 2020-04-13 16:27:40

Tags: python web-scraping beautifulsoup

I have been trying to scrape just the profile names from a list of LinkedIn URLs I have. I am using bs4 with Python, but no matter what I do, bs4 returns an empty list. What is going on?

import requests
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd
import re
r1 = requests.get("https://www.linkedin.com/in/agazdecki/")
coverpage = r1.content
soup1 = BeautifulSoup(coverpage, 'html5lib')
name_container = soup1.find_all("li", class_="inline t-24 t-black t-normal break-words")
print(name_container)

3 Answers:

Answer 0 (score: 2):

If you try loading the page with JavaScript disabled, you will see that the element you are looking for does not exist. In other words, the LinkedIn page is built entirely by JavaScript (like a single-page application). BeautifulSoup is actually working as expected: it parses the page it was given, and that page contains only JavaScript bootstrap code, not the rendered profile page you were expecting.

>>> coverpage = r1.content
>>> coverpage
b'<html><head>\n<script type="text/javascript">\nwindow.onload =
function() {\n  // Parse the tracking code from cookies.\n  var trk =
"bf";\n  var trkInfo = "bf";\n  var cookies = document.cookie.split(";
");\n  for (var i = 0; i < cookies.length; ++i) {\n    if
((cookies[i].indexOf("trkCode=") == 0) && (cookies[i].length > 8)) {\n
 trk = cookies[i].substring(8);\n    }\n    else if
((cookies[i].indexOf("trkInfo=") == 0) && (cookies[i].length > 8)) {\n
 trkInfo = cookies[i].substring(8);\n    }\n  }\n\n  if
(window.location.protocol == "http:") {\n    // If "sl" cookie is set,
redirect to https.\n    for (var i = 0; i < cookies.length; ++i) {\n
 if ((cookies[i].indexOf("sl=") == 0) && (cookies[i].length > 3)) {\n
 window.location.href = "https:" +
window.location.href.substring(window.location.protocol.length);\n
 return;\n      }\n    }\n  }\n\n  // Get the new domain. For international
domains such as\n  // fr.linkedin.com, we convert it to www.linkedin.com\n
 var domain = "www.linkedin.com";\n  if (domain != location.host) {\n
 var subdomainIndex = location.host.indexOf(".linkedin");\n    if
(subdomainIndex != -1) {\n      domain = "www" +
location.host.substring(subdomainIndex);\n    }\n  }\n\n
 window.location.href = "https://" + domain + "/authwall?trk=" + trk +
"&trkInfo=" + trkInfo +\n      "&originalReferer=" +
document.referrer.substr(0, 200) +\n      "&sessionRedirect=" +
encodeURIComponent(window.location.href);\n}\n</script>\n</head></html>'

You could try using something like Selenium instead.
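
Before reaching for Selenium, you can confirm that this is what happened. A minimal sketch (assuming, based on the dump above, that the anonymous response is always the small script that redirects to /authwall):

import requests

r1 = requests.get("https://www.linkedin.com/in/agazdecki/")

# The anonymous response is a tiny JavaScript stub that redirects to /authwall,
# so the profile markup is simply not present in r1.content.
if "authwall" in r1.text or "window.onload" in r1.text:
    print("Got the JavaScript/auth-wall stub, not the rendered profile page.")
else:
    print("Got some other page; inspect r1.text to see what came back.")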

Answer 1 (score: 1):

  1. First mistake: you are fetching the page with plain requests, but you have to be logged in first and reuse that logged-in session.

  2. Second mistake: you are using a CSS selector to grab an element that is generated dynamically by JavaScript and rendered by the browser. If you view the page source you will not find that li tag, its class, or the profile name anywhere, except inside code tags as JSON objects.

Assuming you are using a logged-in session, you can read the name straight out of that embedded JSON:

import requests, re, json
from bs4 import BeautifulSoup

# Reuse one Session so the login cookies travel with every request.
session = requests.Session()
r1 = session.get("https://www.linkedin.com/in/agazdecki/",
                 headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"})
soup = BeautifulSoup(r1.content, 'html.parser')

# The profile data lives in a <code> tag as JSON, not in the rendered markup.
info_tag = soup.find('code', text=re.compile('"data":{"firstName":'))
data = json.loads(info_tag.text)

first_name = data['data']['firstName']
last_name = data['data']['lastName']
occupation = data['data']['occupation']

print('First Name :', first_name)
print('Last Name :', last_name)
print('occupation :', occupation)
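
Note that if you are not actually logged in (or LinkedIn serves the auth wall instead of the profile), soup.find returns None and the json.loads line fails with an AttributeError. A small guard, continuing from the code above:

if info_tag is None:
    raise SystemExit("Profile JSON not found - you are probably not logged in, "
                     "or LinkedIn returned the auth wall instead of the profile.")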

Answer 2 (score: 0):

I suggest scraping the data with Selenium instead. Download the Chrome WebDriver first, then log in through the browser that the driver controls:
from selenium import webdriver

driver = webdriver.Chrome("Path to your Chrome Webdriver")

#login using webdriver
driver.get('https://www.linkedin.com/login?trk=guest_homepage-basic_nav-header-signin')
username = driver.find_element_by_id('username')
username.send_keys('your email_id here')
password = driver.find_element_by_id('password')
password.send_keys('your password here')
sign_in_button = driver.find_element_by_xpath('//*[@type="submit"]')
sign_in_button.click()


driver.get('https://www.linkedin.com/in/agazdecki/') #change profile_url here.

name = driver.find_element_by_xpath('//li[@class = "inline t-24 t-black t-normal break-words"]').text
print(name)
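
Because the profile page is rendered by JavaScript, the name element may not exist yet at the moment the script reaches the last line. A hedged addition using an explicit wait (the class-based XPath is the same assumption about LinkedIn's markup as above):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the name element to be rendered before reading it.
name_element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located(
        (By.XPATH, '//li[@class = "inline t-24 t-black t-normal break-words"]')
    )
)
print(name_element.text)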