Parsing Google Image search results

Date: 2013-10-16 11:16:17

Tags: python selenium python-requests

I'm having trouble parsing Google Image search results. I tried Selenium WebDriver: it gives me 100 results, but it is slow. So I decided to fetch the page with the requests module instead, and it returned only 20 results. How can I get the same 100 results? Is there a way to paginate, or something similar?
Here is the Selenium code:

import re

from selenium.webdriver.common.by import By

# the real image URL is embedded in each anchor href as an imgurl= query parameter
_url = r'imgurl=([^&]+)&'

# driver is an already-initialised webdriver instance; lines holds the search queries
for search_url in lines:
    driver.get(normalize_search_url(search_url))

    # each result thumbnail sits in a div with class rg_di
    images = driver.find_elements(By.XPATH, u"//div[@class='rg_di']")
    print("{0} results for {1}".format(len(images), ' '.join(driver.title.split(' ')[:-3])))
    with open('urls/{0}.txt'.format(search_url.strip().replace('\t', '_')), 'a') as f:
        for image in images:
            link = image.find_element(By.TAG_NAME, u"a")
            for item in re.findall(_url, link.get_attribute("href")):
                f.write(item)
                f.write('\n')
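
As a side note, the imgurl values extracted this way are percent-encoded, so they usually need decoding before being used as real image URLs, e.g.:

from urllib.parse import unquote

raw = 'https%3A%2F%2Fexample.com%2Fcar.jpg'   # hypothetical extracted value
print(unquote(raw))                            # prints https://example.com/car.jpg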

And here is the requests code:

import re

import requests
from bs4 import BeautifulSoup

_url = r'imgurl=([^&]+)&'

for search_url in lines[:10]:
    print(normalize_search_url(search_url))
    links = 0
    response = requests.get(normalize_search_url(search_url))
    soup = BeautifulSoup(response.text, 'html.parser')
    path = 'cars2/{0}.txt'.format(search_url.strip().replace(' ', '_'))
    with open(path, 'a') as f:
        for anchor in soup.find_all('a', href=True):   # skip anchors without an href
            href = anchor['href']
            if 'imgurl' in href:
                links += 1
            for item in re.findall(_url, href):
                f.write(item)
                f.write('\n')
                print(item)
        print("{0} links extracted for {1}".format(links, ' '.join(soup.title.string.split(' ')[:-3])))

1 Answer:

Answer 0 (score: 1)

I've never tried Selenium, but have you tried Google's search API? It might work for you: https://developers.google.com/products/#google-search

Also, their limit for the API is 100 requests per day, so I don't think you'll get more than 100 results anyway.
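
If you go that route, a rough sketch with the Custom Search JSON API would look something like this (YOUR_API_KEY and YOUR_CX are placeholders for your own API key and custom search engine id; the API returns at most 10 items per request, so 100 results for one query uses 10 of the free daily requests):

import requests

def image_search(query, api_key='YOUR_API_KEY', cx='YOUR_CX', total=100):
    urls = []
    for start in range(1, total, 10):          # start is 1-based, 10 items per page
        params = {
            'key': api_key,
            'cx': cx,
            'q': query,
            'searchType': 'image',
            'num': 10,
            'start': start,
        }
        resp = requests.get('https://www.googleapis.com/customsearch/v1',
                            params=params)
        resp.raise_for_status()
        # each item's link field holds the full-size image URL
        urls.extend(item['link'] for item in resp.json().get('items', []))
    return urls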
