Download all images from a website [not thumbnails] in Python

Date: 2013-12-21 15:40:50

Tags: python image search

I have a bunch of URLs stored in a list. Iterating over them, I have to download all of the images into a folder at their original quality. Right now I can only download thumbnails of everything on the site, but as part of the project I need the images at original quality. I get the images from Google Image Search; the code I used is shown below. The problem is that after a few iterations it keeps retrieving the same links and so downloads the same images again.

import os
import sys
import time
from urllib import FancyURLopener
import urllib2
import simplejson


# Define the search term
searchTerm = "sachin"

# Replace spaces ' ' in the search term with '%20' to form a valid query string
searchTerm = searchTerm.replace(' ', '%20')


# FancyURLopener subclass with a browser User-Agent string
class MyOpener(FancyURLopener):
    version = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11'

myopener = MyOpener()

# Counter used to name the downloaded files
count = 0

for i in range(0, 100):
    # 'start' changes on each iteration to request a new page of results
    url = ('https://ajax.googleapis.com/ajax/services/search/images?'
           'v=1.0&q=' + searchTerm + '&start=' + str(i * 10) + '&userip=MyIP')
    print url
    request = urllib2.Request(url, None, {'Referer': 'testing'})
    response = urllib2.urlopen(request)

    # Parse the JSON response
    results = simplejson.load(response)
    if results["responseStatus"] == 200:
        data = results['responseData']
        dataInfo = data['results']

        # Iterate over each result and take the unescaped url
        for myUrl in dataInfo:
            count = count + 1
            my_url = myUrl['unescapedUrl']
            print my_url
            f = open("C:\\Sarath\\links.txt", "a")
            f.write(my_url + "\n")
            f.close()
            myopener.retrieve(my_url, str(count) + '.jpg')
            time.sleep(1)
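
If the same links keep coming back, one simple guard is to remember which URLs have already been downloaded and skip repeats. Below is a minimal sketch of that idea, assuming the same response format as the code above; the function name download_unique, the results_pages parameter, and the seen set are illustrative additions, not part of the original code. Note that in this API's results unescapedUrl points at the full-size image, while tbUrl is the thumbnail.

import time

def download_unique(results_pages, opener, pause=1):
    # results_pages: iterable of 'dataInfo' lists as returned by the API above
    # (hypothetical helper; a sketch, not the original script's structure)
    seen = set()   # links already downloaded
    count = 0
    for dataInfo in results_pages:
        for result in dataInfo:
            link = result['unescapedUrl']   # full-size image; result['tbUrl'] is the thumbnail
            if link in seen:
                continue                    # skip a repeated link instead of saving it again
            seen.add(link)
            count = count + 1
            opener.retrieve(link, str(count) + '.jpg')
            time.sleep(pause)
    return count

With the loop restructured this way, a repeated page of results costs nothing: every link is fetched at most once regardless of how often the API returns it.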

0 Answers:

No answers yet.