Why can't I scrape the page?

Asked: 2018-10-17 19:08:18

Tags: python web-crawler

I'm trying to scrape a table from a website and convert it to CSV. Even though my code runs, nothing shows up. Can you tell me what's going wrong?

URL: http://www.multiclick.co.kr/sub/gamepatch/gamerank.html

Don't worry about the language. On the calendar, set the date to any day one or two days before today, then click the magnifying glass; a table will then appear.

# Load the required modules
import urllib.request  # note: "import urllib" alone does not expose urllib.request
from bs4 import BeautifulSoup
import pandas as pd

# Open up the page with a browser-like User-Agent header
url = "http://www.multiclick.co.kr/sub/gamepatch/gamerank.html"
web_page = urllib.request.Request(
        url,
        data=None,
        headers={'User-Agent': ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) "
                                "AppleWebKit/537.36 (KHTML, like Gecko) "
                                "Chrome/35.0.1916.47 Safari/537.36")})
web_page = urllib.request.urlopen(web_page)

# Parse the page
soup = BeautifulSoup(web_page, "html.parser")
print(soup)

# Get the table
    # Get the columns
    # Get the rows
    # Stack them altogether

# Save it as a csv form

1 Answer:

Answer 0 (score: 0)

As @mx0 said, instead of fetching the main page, fetch the AJAX call directly, for example:

import csv
import requests

link = "http://ws.api.thelog.co.kr/service/info/rank/2018-10-18"

req = requests.get(link)
content = req.json()
with open('ranks.csv', 'w', newline='') as csvfile:
    csv_writer = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
    # write column titles
    csv_writer.writerow(['gameRank', 'gameName', 'gameTypeName', 'gameShares', 'publisher', 'gameRankUpDown'])
    # write values
    for row in content["list"]:
        csv_writer.writerow(list(row.values()))
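
Since the question already imports pandas, the same JSON can also be loaded into a DataFrame and written out with to_csv. This is only a sketch built on the same assumptions as the code above: the endpoint URL is the one from the answer, and each record in content["list"] carries the gameRank, gameName, gameTypeName, gameShares, publisher and gameRankUpDown keys.

import requests
import pandas as pd

# Same endpoint as above; the date is part of the URL path
link = "http://ws.api.thelog.co.kr/service/info/rank/2018-10-18"
content = requests.get(link).json()

# Build a DataFrame straight from the list of per-game records;
# column names come from the JSON keys themselves
df = pd.DataFrame(content["list"])

# Keep the columns used in the answer above, in the same order
# (assumption: these keys are present in every record)
cols = ['gameRank', 'gameName', 'gameTypeName', 'gameShares', 'publisher', 'gameRankUpDown']
df = df[cols]

# utf-8-sig keeps Korean game names readable when the CSV is opened in Excel
df.to_csv('ranks.csv', index=False, encoding='utf-8-sig')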