Paginating a Google News scraper

Asked: 2017-05-04 03:58:47

Tags: python python-3.x web-scraping web-crawler google-crawlers

Continuing my earlier work of scraping all the news results for a query and returning the titles and URLs, I am improving the scraper to get the results from every page of Google News. The current code seems to return only the first page of Google News search results. I would appreciate knowing how to get the results from all pages. Thanks a lot!

My code is as follows:

import requests
from bs4 import BeautifulSoup
import time
import datetime
from random import randint 
import numpy as np
import pandas as pd


query2Google = input("What do you want from Google News?\n")

def QGN(query2Google):
    s = '"' + query2Google + '"'  # exact-match keywords for the query
    s = s.replace(" ", "+")
    date = str(datetime.datetime.now().date())  # timestamp
    filename = query2Google + "_" + date + "_" + 'SearchNews.csv'  # csv filename
    # URL for a query of news results within one year, sorted by date
    url = "http://www.google.com.sg/search?q=" + s + "&tbm=nws&tbs=qdr:y"

    time.sleep(randint(0, 2))  # wait a little before requesting

    htmlpage = requests.get(url)
    print("Status code: " + str(htmlpage.status_code))
    soup = BeautifulSoup(htmlpage.text, 'lxml')

    df = []
    for result_table in soup.findAll("div", {"class": "g"}):
        a_click = result_table.find("a")
        title = a_click.get_text()  # Title
        link = a_click.get("href").replace("/url?q=", "", 1)  # URL
        brief = result_table.find("div", {"class": "st"}).get_text()  # Brief
        df = np.append(df, [title, link, brief])

    df = np.reshape(df, (-1, 3))
    df1 = pd.DataFrame(df, columns=['Title', 'URL', 'Brief'])
    print("Search Crawl Done!")

    df1.to_csv(filename, index=False, encoding='utf-8')  # to_csv opens the file itself
    return

QGN(query2Google)

2 Answers:

Answer 0 (score: 0):

There used to be an AJAX API, but it is no longer available. If you want to fetch several pages, you can still modify your script with a for loop; if you want to fetch all of the pages, use a while loop instead (see the sketch after the example below).
Example:

url = "http://www.google.com.sg/search?q="+s+"&tbm=nws&tbs=qdr:y&start="  
pages = 10    # the number of pages you want to crawl # 

for next in range(0, pages*10, 10) : 
    page = url + str(next)
    time.sleep(randint(1, 5))    # you may need longer than that #
    htmlpage = requests.get(page)    # you should add User-Agent and Referer #
    print("Status code: " + str(htmlpage.status_code))
    if htmlpage.status_code != 200 : 
        break    # something went wrong #  
    soup = BeautifulSoup(htmlpage.text, 'lxml')

    ... process response here ...

    next_page = soup.find('td', { 'class':'b', 'style':'text-align:left' }) 
    if next_page is None or next_page.a is None : 
        break    # there are no more pages #
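
The while-loop variant mentioned above could look roughly like this. It is only a sketch that reuses the url, requests, BeautifulSoup, time, and randint names from the snippet above:

# While-loop variant: keep requesting pages until Google stops offering a "next" link.
start = 0
while True:
    page = url + str(start)                     # url ends with "&start=" as above
    time.sleep(randint(1, 5))
    htmlpage = requests.get(page)               # again, consider User-Agent and Referer headers
    if htmlpage.status_code != 200:
        break                                   # something went wrong
    soup = BeautifulSoup(htmlpage.text, 'lxml')

    # ... process the response here ...

    next_page = soup.find('td', {'class': 'b', 'style': 'text-align:left'})
    if next_page is None or next_page.a is None:
        break                                   # there are no more pages
    start += 10                                 # advance to the next page of results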

Keep in mind that Google does not like bots, and you may get banned.
You can add a "User-Agent" and a "Referer" to the request headers to mimic a web browser, and use time.sleep(random.uniform(2, 6)) to mimic a human... or use selenium.
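
For instance, a minimal sketch of sending such headers with requests (the User-Agent and Referer values and the query URL below are only placeholder assumptions, not values prescribed by this answer):

import random
import time

import requests

headers = {
    # example values only; use a User-Agent string that matches a real browser
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Referer": "http://www.google.com.sg/",
}

url = "http://www.google.com.sg/search?q=%22test%22&tbm=nws&tbs=qdr:y"
time.sleep(random.uniform(2, 6))               # pause a random, human-like interval
htmlpage = requests.get(url, headers=headers)  # send the headers along with the request
print(htmlpage.status_code)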

Answer 1 (score: 0):

You can also append &num=25 to the end of your query, and the page returned will contain that many results. In this example you would get back 25 Google search results on a single page.
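
For example (the query below is only an illustration, not part of the original answer):

import requests

# Appending &num=25 asks Google for 25 results on a single page.
url = "http://www.google.com.sg/search?q=%22test%22&tbm=nws&tbs=qdr:y&num=25"
htmlpage = requests.get(url)
print(htmlpage.status_code)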