Export Python-Scopus API results to CSV

Date: 2016-09-26 11:23:13

Tags: python csv python-3.5 scopus

I am new to Python, so I am not sure whether this can be done, but I hope it can!

I have accessed the Scopus API and managed to run a search query, which gives the following results in a pandas DataFrame:

                                                            search-results
entry                    [{'@_fa': 'true', 'affiliation': [{'@_fa': 'tr...
link                     [{'@_fa': 'true', '@ref': 'self', '@type': 'ap...
opensearch:Query         {'@role': 'request', '@searchTerms': 'AFFIL(un...
opensearch:itemsPerPage                                                200
opensearch:startIndex                                                    0
opensearch:totalResults                                             106652

If possible, I would like to export the 106652 results to a CSV file so that I can analyse them. Is that possible?

1 Answer:

Answer 0 (score: 0):

First, you need to get all the results (see the comments on the question). The data you need (the search results) is in the 'entry' list. You can extract that list and append it to a supporting list, iterating until you have all the results. Here I loop, and on each round I subtract the number of downloaded items (count) from the total number of results.

        import json

        import requests

        # url, query, view and MY_API_KEY are assumed to be defined elsewhere
        found_items_num = 1
        start_item = 0
        items_per_query = 25
        max_items = 2000
        JSON = []

        print('GET data from Search API...')

        while found_items_num > 0:

            resp = requests.get(url,
                                headers={'Accept': 'application/json', 'X-ELS-APIKey': MY_API_KEY},
                                params={'query': query, 'view': view, 'count': items_per_query,
                                        'start': start_item})

            print('Current query url:\n\t{}\n'.format(resp.url))

            if resp.status_code != 200:
                # the API returned an error
                raise Exception('ScopusSearchApi status {0}, JSON dump:\n{1}\n'.format(resp.status_code, resp.json()))

            # found_items_num is 1 at initialization; on the first call set it to the actual total
            if found_items_num == 1:
                found_items_num = int(resp.json().get('search-results').get('opensearch:totalResults'))
                print('GET returned {} articles.'.format(found_items_num))

            if found_items_num > 0:
                # write the fetched JSON data to a file
                out_file = str(start_item) + '.json'

                with open(out_file, 'w') as f:
                    json.dump(resp.json(), f, indent=4)

                # check whether the number of results exceeds the given limit
                if found_items_num > max_items:
                    print('WARNING: too many results, truncating to {}'.format(max_items))
                    found_items_num = max_items

                # check whether the response actually returned entries
                if 'entry' in resp.json().get('search-results', {}):
                    # combine the entries into a single list
                    JSON += resp.json()['search-results']['entry']

            # set the counters for the next cycle
            found_items_num -= items_per_query
            start_item += items_per_query
            print('Still {} results to be downloaded'.format(found_items_num if found_items_num > 0 else 0))

        # end while - finished downloading JSON data

Then, outside of this loop, you can save the complete file like this...

        out_file = 'articles.json'
        with open(out_file, 'w') as f:
            json.dump(JSON, f, indent=4)

Or you can convert the JSON data to CSV with a csv writer (untested; you can search for 'json to csv python' and find many guides).
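
For that last step, here is a minimal, untested sketch using pandas, assuming the combined entries were saved to articles.json as above; the column names used below (dc:title, dc:creator, prism:publicationName, prism:coverDate, citedby-count) are typical Scopus fields given as examples and may differ depending on the view you requested:

        import json

        import pandas as pd

        # load the combined entry list written to articles.json above
        with open('articles.json') as f:
            entries = json.load(f)

        # each entry is a nested dict; json_normalize flattens nested keys into columns
        # (pandas >= 1.0; older versions expose it as pandas.io.json.json_normalize)
        df = pd.json_normalize(entries)

        # keep only a few example columns, if they are present in your data
        wanted = ['dc:title', 'dc:creator', 'prism:publicationName',
                  'prism:coverDate', 'citedby-count']
        columns = [c for c in wanted if c in df.columns]
        if columns:
            df = df[columns]

        df.to_csv('articles.csv', index=False, encoding='utf-8')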