Problem retrieving href from a Google Scholar search

Date: 2019-07-03 16:47:17

Tags: python xpath href google-scholar

I'm having trouble scraping the links and article names from Google Scholar search results. I'm not sure whether the problem is in my code or in the XPath expressions I'm using to retrieve the data, or possibly both.

I've spent the past few hours trying to debug this and consulting other Stack Overflow questions, but without success.

import scrapy
from scrapyproj.items import ScrapyProjItem

class scholarScrape(scrapy.Spider):

    name = "scholarScraper"
    allowed_domains = "scholar.google.com"
    start_urls=["https://scholar.google.com/scholar?hl=en&oe=ASCII&as_sdt=0%2C44&q=rare+disease+discovery&btnG="]

    def parse(self,response):
        item = ScrapyProjItem()
        item['hyperlink'] = item.xpath("//h3[class=gs_rt]/a/@href").extract()
        item['name'] = item.xpath("//div[@class='gs_rt']/h3").extract()
        yield item

The error message I'm getting says "AttributeError: xpath", so I assume the problem lies in the paths I'm using to try to retrieve the data, but could I be mistaken about that too?

2 Answers:

Answer 0 (score: 0)

Adding my comment as an answer, since it solved the problem:

The problem is with the scrapyproj.items.ScrapyProjItem objects: they have no xpath attribute. Is that an official scrapy class? I think you meant to call xpath on the response instead:

item['hyperlink'] = response.xpath("//h3[class=gs_rt]/a/@href").extract()
item['name'] = response.xpath("//div[@class='gs_rt']/h3").extract()

Also, the first path expression is missing the @ before the class attribute and needs a pair of quotes around the attribute value "gs_rt":

item['hyperlink'] = response.xpath("//h3[@class='gs_rt']/a/@href").extract()

Aside from that, the XPath expressions look fine.
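To see why the @ and the quotes matter, here is a small sketch using lxml (which the second answer already relies on) against a made-up snippet; the markup and URL are illustrative, not real Scholar output:

```python
from lxml import html

# Hypothetical snippet mimicking a Scholar result title.
snippet = (
    '<div><h3 class="gs_rt">'
    '<a href="https://example.com/paper">Rare disease discovery</a>'
    '</h3></div>'
)
tree = html.fromstring(snippet)

# Without @ and quotes, [class=gs_rt] compares child *elements* named
# "class" and "gs_rt"; neither exists, so nothing matches.
print(tree.xpath("//h3[class=gs_rt]/a/@href"))      # []

# With @class and quotes, the predicate tests the class attribute.
print(tree.xpath("//h3[@class='gs_rt']/a/@href"))   # ['https://example.com/paper']
```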

Answer 1 (score: 0)

An alternative solution using bs4:

from bs4 import BeautifulSoup
import requests, lxml, os

headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
# Without a proxy, Google will probably start blocking requests after several are sent.
proxies = {
  'http': os.getenv('HTTP_PROXY')
}

html = requests.get('https://scholar.google.com/citations?hl=en&user=m8dFEawAAAAJ', headers=headers, proxies=proxies).text
soup = BeautifulSoup(html, 'lxml')

# Container where all the articles are located
for article_info in soup.select('#gsc_a_b .gsc_a_t'):
  # Title CSS selector
  title = article_info.select_one('.gsc_a_at').text
  # Same selector, except this time we grab the "data-href" attribute.
  # Note: it is a relative link, so it has to be joined with the base URL after extracting.
  title_link = article_info.select_one('.gsc_a_at')['data-href']
  print(f'Title: {title}\nTitle link: https://scholar.google.com{title_link}\n')

# Part of the output:
'''
Title: Automating Gödel's Ontological Proof of God's Existence with Higher-order Automated Theorem Provers.
Title link: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=m8dFEawAAAAJ&citation_for_view=m8dFEawAAAAJ:-f6ydRqryjwC
'''
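Since data-href is a relative link, a more robust way to build the absolute URL than string concatenation is the standard library's urllib.parse.urljoin (a quick sketch; the relative path below is illustrative):

```python
from urllib.parse import urljoin

base = "https://scholar.google.com"
# Hypothetical relative link of the kind found in the "data-href" attribute.
relative = "/citations?view_op=view_citation&hl=en&user=m8dFEawAAAAJ"

# urljoin handles leading slashes and base paths correctly.
absolute = urljoin(base, relative)
print(absolute)
# https://scholar.google.com/citations?view_op=view_citation&hl=en&user=m8dFEawAAAAJ
```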

Alternatively, you can do the same thing with the Google Scholar Author Articles API from SerpApi. The main difference is that you don't have to hunt for good proxies or try to solve CAPTCHAs, which you would still face even with selenium. It's a paid API with a free trial of 5,000 searches.

Code to integrate:

from serpapi import GoogleSearch
import os

params = {
  "api_key": os.getenv("API_KEY"),
  "engine": "google_scholar_author",
  "author_id": "9PepYk8AAAAJ",
}

search = GoogleSearch(params)
results = search.get_dict()

for article in results['articles']:
  article_title = article['title']
  article_link = article['link']
  print(f'Title: {article_title}\nLink: {article_link}\n')

# Part of the output:
'''
Title: p-GaN gate HEMTs with tungsten gate metal for high threshold voltage and low gate current
Link: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=9PepYk8AAAAJ&citation_for_view=9PepYk8AAAAJ:bUkhZ_yRbTwC
'''

Disclaimer: I work for SerpApi.