How to obtain a query string parameter to construct URLs for web crawling?

Time: 2019-06-05 19:58:36

Tags: python xmlhttprequest web-crawler

I am working on a web crawling project that needs to collect the user comments on each of 200,000 authors' videos on a video site. Recently, the site updated its URLs and added a new parameter (_signature) to its API URL. Any suggestions on how to obtain this new parameter?

The example Web API URLs and the pages they refer to are as follows: https://www.ixigua.com/api/comment_module/video_comment?_signature=vhm.AAgEAy3U9zpQ3OMV74fpuAAOL1&item_id=6698972531753222663&group_id=6698972531753222663&offset=10 refers to https://www.ixigua.com/i6698972531753222663/

https://www.ixigua.com/api/comment_module/video_comment?_signature=Xs6IHAAgEABXgvIJnklsal7OiAAAAI3&item_id=6699046583612211720&group_id=6699046583612211720&offset=10 refers to https://www.ixigua.com/i6699046583612211720/
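For reference, the query string of the new API URL can be unpacked with Python's standard library. The snippet below is only a sanity check on the URL structure, using the first example above:

from urllib.parse import parse_qs, urlparse

url = ('https://www.ixigua.com/api/comment_module/video_comment'
       '?_signature=vhm.AAgEAy3U9zpQ3OMV74fpuAAOL1'
       '&item_id=6698972531753222663'
       '&group_id=6698972531753222663&offset=10')

# parse_qs maps each query parameter to a list of its values.
params = parse_qs(urlparse(url).query)
print(params['_signature'][0])                  # vhm.AAgEAy3U9zpQ3OMV74fpuAAOL1
print(params['item_id'] == params['group_id'])  # True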

For the initial 200,000 authors, what I have is a list of their item_id / group_id values (I store them on Amazon S3; you can find the link in the code below). Also, item_id is identical to group_id. So, going forward, all I need is the _signature.

The site assigns each author a unique _signature. In the example API URLs, the first author's _signature is vhm.AAgEAy3U9zpQ3OMV74fpuAAOL1 and the second is Xs6IHAAgEABXgvIJnklsal7OiAAAAI3.
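Once a signature is in hand, rebuilding the new API URL is mechanical. The helper below is hypothetical (build_comment_url is my own name, not the site's) and assumes only the parameters visible in the example URLs:

from urllib.parse import urlencode

def build_comment_url(item_id, signature, offset=0):
    # item_id and group_id are identical, so one id fills both slots.
    params = {
        '_signature': signature,
        'item_id': item_id,
        'group_id': item_id,
        'offset': offset,
    }
    return ('https://www.ixigua.com/api/comment_module/video_comment?'
            + urlencode(params))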

This is where I am stuck. I inspected the site and found _signature under XHR as part of the query string parameters, but I don't know how to obtain it programmatically. Before this update the API URL did not include _signature, and my original code ran smoothly, as shown below:

import json

import pandas as pd
import scrapy


class Id1Spider(scrapy.Spider):
    name = 'id1'
    allowed_domains = ['www.ixigua.com']

    # The 200,000 item_id / group_id values are stored on Amazon S3.
    df = pd.read_csv('https://s3.amazonaws.com/xiguaid/group_id.csv')
    df = df.iloc[32640:189680, 1]

    list_id = df.unique().tolist()
    i = 0
    start_urls = ["https://www.ixigua.com/api/comment/list/?group_id="
                  + str(list_id[i]) + "&item_id=" + str(list_id[i])
                  + "&offset=0&count=20"]
    user_agent = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/70.0.3538.102 Safari/537.36')
    offset = 0
    count = 20

    def parse(self, response):
        data = json.loads(response.body)
        comments = data['data']['comments']
        total = data['data']['total']

        for ele in comments:
            try:
                comments_id = ele['user']['user_id']
                comments_text = ele['text']
                reply_count = ele['reply_count']
                digg_count = ele['digg_count']
                create_time = ele['create_time']
            except KeyError:
                # Skip malformed comment entries rather than yielding
                # values left over from the previous iteration.
                continue

            yield {
                'comments_id': comments_id,
                'comments_text': comments_text,
                'reply_count': reply_count,
                'digg_count': digg_count,
                'create_time': create_time,
                'item_id': self.list_id[self.i],
            }

        if data['data']['has_more']:
            # Advance to the next page, but never past the reported total.
            self.offset = min(self.offset + 20, total)
            next_page_url = ("https://www.ixigua.com/api/comment/list/?group_id="
                             + str(self.list_id[self.i])
                             + "&item_id=" + str(self.list_id[self.i])
                             + "&offset=" + str(self.offset)
                             + "&count=" + str(self.count))
            yield scrapy.Request(url=next_page_url, callback=self.parse)
        else:
            # This author is done: reset paging and move on to the next id,
            # stopping once the id list is exhausted.
            self.offset = 0
            self.i += 1
            if self.i < len(self.list_id):
                next_page_url = ("https://www.ixigua.com/api/comment/list/?group_id="
                                 + str(self.list_id[self.i])
                                 + "&item_id=" + str(self.list_id[self.i])
                                 + "&offset=" + str(self.offset)
                                 + "&count=" + str(self.count))
                yield scrapy.Request(url=next_page_url, callback=self.parse)
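One direction I am considering, sketched below: since _signature appears to be generated by the page's JavaScript, loading the author page in a real browser and reading the Performance API's resource entries should expose the signed comment XHR that the page itself fires. This is a minimal sketch, assuming Selenium with ChromeDriver is installed and that the comment request goes out shortly after page load:

import time
from urllib.parse import parse_qs, urlparse

from selenium import webdriver

def fetch_signature(item_id):
    """Load https://www.ixigua.com/i<item_id>/ and pull _signature
    from the comment XHR the page fires."""
    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    driver = webdriver.Chrome(options=options)
    try:
        driver.get('https://www.ixigua.com/i' + str(item_id) + '/')
        time.sleep(5)  # crude wait for the page's XHRs to fire
        # window.performance.getEntriesByType('resource') lists every URL
        # the page requested, including XHR calls.
        urls = driver.execute_script(
            "return window.performance.getEntriesByType('resource')"
            ".map(function(e) { return e.name; });")
        for url in urls:
            if 'comment' not in url:
                continue
            query = parse_qs(urlparse(url).query)
            if '_signature' in query:
                return query['_signature'][0]
        return None
    finally:
        driver.quit()

If that works, the returned value could feed straight into the URL builder above. The obvious cost is that launching a headless browser per author is far slower than plain HTTP requests, so I would also welcome any way to compute _signature directly.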

Thanks in advance for any suggestions! Since this is my first post, please do point it out if I have left out any information needed to solve this problem.

0 Answers:

There are no answers yet.