Scraping Coursera courses using their API doesn't return more than 100 courses

Time: 2017-05-30 15:48:31

Tags: python-3.x curl web-scraping beautifulsoup web-crawler

This is the curl command I used ->

 curl "https://api.coursera.org/api/courses.v1?start=1&limit=11?includes=instructorIds,partnerIds,specializations,s12nlds,v1Details,v2Details&fields=instructorIds,partnerIds,specializations,s12nlds,description" 

I have played with the query parameters start and limit, but it keeps returning the same 100 courses out of the 2150. This is the link to the course catalog API ->

https://docs.google.com/document/d/15gwppUMLp0s1OhbzFZvFSeTbvFkRfSFIkiIKrEP6cUA/edit
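
This is roughly the paging loop I have in mind. It is only a sketch: it assumes the API treats `start` as a zero-based offset and honors `limit` as a page size (the field list is shortened here):

def fetch_all(page_size=100):
    """Collect every course by paging until an empty batch comes back."""
    import requests

    BASE = "https://api.coursera.org/api/courses.v1"
    elements = []
    start = 0
    while start < 10000:  # safety cap in case the API ignores `start`
        params = {
            "start": start,
            "limit": page_size,
            "fields": "instructorIds,partnerIds,specializations,description",
        }
        # Assumes start/limit behave as offset/page-size; if `start` were
        # ignored, this loop would keep fetching the same first page.
        batch = requests.get(BASE, params=params).json().get("elements", [])
        if not batch:
            break
        elements.extend(batch)
        start += page_size
    return elements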

Python code:

 import requests
 import json
 from bs4 import BeautifulSoup
 import csv


if __name__ == "__main__":
    # These headers are defined but never sent; pass headers=headers to
    # requests.get() if the request is made from this script.
    headers = {
        "x-user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
                        "(KHTML, like Gecko) Chrome/53.0.2785.92 Safari/537.36 "
                        "FKUA/website/41/website/Desktop"
    }

    with open('result.json', 'r') as d:
        data = json.load(d)
    print(data)

    with open("coursera.csv", 'a') as f:

        # Write the header row once; comment this line out on later runs.
        f.write('instructorIds' + ',' + 'courseType' + ',' + 'name' + ',' + 'partnerIds' + ',' +
                'slug' + ',' + 'specializations' + ',' + 'course_id' + ',' + 'description' + "\n")

        def clean(value):
            """Flatten lists to space-separated strings and strip the commas
            and newlines that would otherwise break the comma-separated row."""
            if isinstance(value, list):
                value = ' '.join(str(v) for v in value)
            value = str(value).replace(',', '').replace('\n', ' ').strip()
            return value or ' '

        for element in data['elements']:
            instructorIds = clean(element.get('instructorIds', []))
            courseType = str(element['courseType'])
            name = clean(element['name'])
            partnerIds = clean(element.get('partnerIds', []))
            slug = str(element['slug'])
            specializations = clean(element.get('specializations', []))
            course_id = str(element['id'])
            description = clean(element.get('description', ''))
            print(instructorIds, courseType, name, partnerIds, slug,
                  specializations, course_id, description)

            # Write the attributes as one row of the csv file.
            f.write(instructorIds + ',' + courseType + ',' + name + ',' + partnerIds + ',' +
                    slug + ',' + specializations + ',' + course_id + ',' + description + "\n")
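
Since `csv` is already imported, I know the manual cleanup could be replaced by `csv.writer`, which quotes embedded commas and newlines itself; a minimal sketch, assuming `data` is loaded as above:

import csv

with open("coursera.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for element in data["elements"]:
        # csv.writer quotes fields as needed, so no replace() cleanup required
        writer.writerow([
            element.get("instructorIds", ""),
            element["courseType"],
            element["name"],
            element.get("partnerIds", ""),
            element["slug"],
            element.get("specializations", ""),
            element["id"],
            element.get("description", ""),
        ])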

Please suggest how I can get all the courses.

2 answers:

Answer 0: (score: 2)

If you set "limit" to 2150, you can get all the results with a single request. Example:

url = "https://api.coursera.org/api/courses.v1?start=0&limit=2150&includes=instructorIds,partnerIds,specializations,s12nlds,v1Details,v2Details&fields=instructorIds,partnerIds,specializations,s12nlds,description"
data = requests.get(url).json()
print(len(data['elements']))
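
A slightly more defensive version of the same call (the 30-second timeout is an arbitrary choice):

resp = requests.get(url, timeout=30)  # don't hang forever on a stalled connection
resp.raise_for_status()               # surface HTTP errors instead of parsing junk
data = resp.json()
print(len(data['elements']))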

Answer 1: (score: 0)

Can't Scrapy deal with sitemap files? On the coursera website there is a sitemap index, and in particular one sub sitemap file that lists the pages of all the courses.
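
Something like this minimal SitemapSpider should work; note that the sitemap URL and the `/learn/` pattern below are assumptions, so check the site's robots.txt for the actual index:

from scrapy.spiders import SitemapSpider

class CourseraCourseSpider(SitemapSpider):
    name = "coursera_courses"
    # Assumed sitemap location -- verify against robots.txt on coursera.org
    sitemap_urls = ["https://www.coursera.org/sitemap.xml"]
    # Only follow URLs that look like course pages (assumed pattern)
    sitemap_rules = [("/learn/", "parse_course")]

    def parse_course(self, response):
        yield {
            "url": response.url,
            "title": response.css("title::text").get(),
        }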

If not, crawling it with StormCrawler should be easy.