How do I scrape arxiv-sanity?

Date: 2019-08-01 15:20:56

Tags: python web-crawler

I want to scrape the "link", "title" and "abstract" of each paper.

How can I do that?

Here is what I tried:

import requests
import json

url = 'http://www.arxiv-sanity.com/top?timefilter=year&vfilter=all'
res = requests.get(url)
text = res.text
# print(text)

d = json.loads(text)
print(d['title'], d['link'], d['abstract'])

But it raises JSONDecodeError: Expecting value: line 1 column 1 (char 0)

2 answers:

Answer 0 (score: 1)

That URL returns an HTML page, not a JSON response, so it cannot be JSON-decoded.
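The error in the question can be reproduced without any network call: feeding an HTML body to `json.loads` fails on the very first character, because `<` is not a valid start of a JSON value. The body below is a stand-in for the real page, not the actual response:

```python
import json

# Any HTML document begins with '<', which is not valid JSON,
# so decoding fails immediately at position 0.
html_body = "<!DOCTYPE html><html><body>...</body></html>"

try:
    json.loads(html_body)
except json.JSONDecodeError as e:
    print(e)  # Expecting value: line 1 column 1 (char 0)
```

This is exactly the error message from the question: the parser never got past the first character.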

Answer 1 (score: 0)

Use BeautifulSoup:

import requests
import json
from bs4 import BeautifulSoup as bs

url = 'http://www.arxiv-sanity.com/top?timefilter=year&vfilter=all'
res = requests.get(url)
soup = bs(res.text, "html.parser")

# The paper data is embedded in an inline <script> tag as `var papers = [...]`.
extract = soup.select('script')[6]

# Take everything after `var papers = `, then break the JS array into
# individually decodable JSON objects separated by the marker 'xxx'.
target = extract.decode().split('var papers = ')[1]
target2 = target.replace("}, {", "}xxx{").replace('[{', '{').replace('}];', '}')
final = target2.split('xxx')

# The last chunk still carries the trailing `var pid ...` statement; drop it.
final[-1] = final[-1].split('var pid')[0]

for chunk in final:
    d = json.loads(chunk)
    print(d['title'], d['link'], d['abstract'])

Sample output:

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding 
http://arxiv.org/abs/1810.04805v2 
We introduce a new language representation model called BERT, which stands
for Bidirectional Encoder Representations from Transformers. Unlike recent
language representation models, BERT is designed to pre-train deep
bidirectional representations from unlabeled text by jointly conditioning on
both left and right context in all layers...
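The string-splitting above works but is brittle: it breaks if the site changes its whitespace or if an abstract happens to contain one of the replaced substrings. A sturdier sketch, assuming the page still embeds the data as `var papers = [...];`, extracts the whole array with a regex and decodes it in a single `json.loads` call (the `sample` string below is hypothetical, standing in for the real script text):

```python
import json
import re

def parse_papers(script_text):
    """Extract the JSON array assigned to `var papers` and decode it whole,
    instead of splitting the text object by object."""
    m = re.search(r'var papers = (\[.*?\]);', script_text, re.DOTALL)
    if m is None:
        return []
    return json.loads(m.group(1))

# Hypothetical sample mimicking the inline script's contents:
sample = ('var papers = [{"title": "A", "link": "http://x", '
          '"abstract": "..."}];\nvar pid = null;')

for d in parse_papers(sample):
    print(d['title'], d['link'], d['abstract'])
```

Decoding the array in one call means the JSON parser, not hand-written string replacement, handles braces and commas inside the abstracts.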