Handling XHR requests in Python HTML scraping

Asked: 2017-08-18 15:36:28

Tags: python html ajax web-scraping

I need to scrape the entire HTML from journal_url; the example for this question is http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1467-6281/issues. I have followed the requests examples shown in several questions on this site, but neither the .text nor the .json() method returns the correct HTML. My goal is to get the full HTML, including the ordered list of issues under each year and the volume dropdowns.

import requests
import pandas as pd
import http.cookiejar

# df is assumed to already hold one row per journal with the columns used below
for i in range(len(df)):
    journal_name = df.loc[i, "Journal Full Title"]
    journal_url = df.loc[i, "URL"] + "/issues"
    access_start = df.loc[i, "Content Start Date"]
    access_end = df.loc[i, "Content End Date"]
    #cj = http.cookiejar.CookieJar()
    #opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))

    # send the XHR header in the hope of getting the AJAX-rendered content
    headers = {"X-Requested-With": "XMLHttpRequest",
               "User-Agent": "Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36"}

    r = requests.get(journal_url, headers=headers)

    response = r.text
    print(response)
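
For context, the loop above assumes df has already been built elsewhere, with one row per journal and the columns it indexes. A minimal sketch of how such a frame might be loaded (the file name journals.csv is purely hypothetical):

import pandas as pd

# Hypothetical input file; the question does not show where df comes from.
# It is assumed to contain the columns referenced in the loop above.
df = pd.read_csv("journals.csv")
# expected columns: "Journal Full Title", "URL",
#                   "Content Start Date", "Content End Date"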

1 Answer:

Answer 0 (score: 1):

If your ultimate goal is to parse the content mentioned above from that page, here is one way to do it:

import requests
from bs4 import BeautifulSoup

base_link = "http://onlinelibrary.wiley.com"
main_link = "http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1467-6281/issues"

def abacus_scraper(main_link):
    # fetch the issues page and walk the per-year links
    soup = BeautifulSoup(requests.get(main_link).text, "html.parser")
    for titles in soup.select("a.issuesInYear"):
        title = titles.select("span")[0].text
        title_link = titles.get("href")
        main_content(title, title_link)

def main_content(item, link):
    # fetch the page for a single year and collect the issue labels
    broth = BeautifulSoup(requests.get(base_link + link).text, "html.parser")
    elems = [issue.text for issue in broth.select("div.issue a")]
    print(item, elems)

abacus_scraper(main_link)
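
Note that this answer fetches ordinary HTML pages and does not send any XHR headers: it reads the year links (a.issuesInYear) from the issues page and then requests each year's page to list its issues. To reuse the scraper inside the original loop over df, a minimal sketch (assuming the same column names as in the question, and that every URL points at a Wiley Online Library journal) could look like this:

# Sketch only: feed each journal's issues page from df into the answer's scraper.
# Error handling and politeness delays are omitted.
for i in range(len(df)):
    journal_url = df.loc[i, "URL"] + "/issues"
    abacus_scraper(journal_url)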