Fastest way to get the expanded version of 5 million shortened URLs

Date: 2017-03-01 02:13:05

Tags: python url multiprocessing urllib2

I'm working on a project where I need to expand 5 million shortened URLs. The URLs may have been shortened by any URL shortener. What is the fastest way to do this?

Current code:

import csv
import pandas as pd
from urllib2 import urlopen
import urllib2
import threading
import time



def urlResolution(url,tweetId,w):

    try:

        print "Entered Function"
        print "Original Url:",url

        hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
               'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
               'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
               'Accept-Encoding': 'none',
               'Accept-Language': 'en-US,en;q=0.8',
               'Connection': 'keep-alive'}

        #header has been added since some sites give an error otherwise
        req = urllib2.Request(url, headers=hdr)
        temp = urlopen(req)
        newUrl = temp.geturl()
        print "Resolved Url:", newUrl
        if newUrl != 'None':
            print "in if condition"
            w.writerow([tweetId, newUrl])

    except Exception,e:
        print "Throwing exception"
        print str(e)
        return None


def urlResolver(urlFile):
    df = pd.read_csv(urlFile, delimiter="\t")

    df2 = df[["Tweet ID", "Url"]].copy()
    start = time.time()

    df3 = df2[df2.Url != "None"]

    # keep the file handle so it can be closed (csv.writer has no close())
    outFile = open("OUTPUT_FILE.tsv", "w")
    w = csv.writer(outFile, delimiter='\t')
    w.writerow(["Tweet ID", "Url"])

    maxC = 0
    while maxC < df3.shape[0]:
        #creates threads in batches of 40, since a large number of threads
        #gives a <too many open files> error
        #the same csv writer is shared by all worker threads
        end = min(maxC + 40, df3.shape[0])
        threads = [threading.Thread(target=urlResolution,
                                    args=(df3.iloc[n]['Url'], df3.iloc[n]['Tweet ID'], w))
                   for n in range(maxC, end)]

        for thread in threads:
            thread.start()
        for thread in threads:
            thread.join()

        maxC = end

    print "Elapsed Time: %s" % (time.time() - start)

    outFile.close()




if __name__ == '__main__':
    urlResolver("INPUT_FILE.tsv")

I have written a multithreaded program in Python using urllib2 (for URL expansion), but it seems really slow.

Any tips on how to speed it up further?
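
One direction that often helps (not part of the question, just a sketch under assumed conditions): keep a fixed pool of worker threads alive for the whole run instead of creating and joining 40 fresh threads per batch, and issue HEAD requests so only the redirect chain is followed and response bodies are never downloaded. Below is a minimal sketch using multiprocessing.dummy.Pool (a thread-based pool from the standard library) with urllib2; the pool size, timeout, and file names are placeholders, and some shorteners reject HEAD, so a GET fallback may be needed.

# A hedged sketch, not the asker's code: persistent thread pool + HEAD requests.
# Assumes Python 2 and the same INPUT_FILE.tsv layout as above.
from multiprocessing.dummy import Pool  # thread-based pool with the Pool API
import urllib2
import csv
import pandas as pd


class HeadRequest(urllib2.Request):
    # HEAD fetches only headers/redirects, not the page body
    def get_method(self):
        return "HEAD"


def expand(args):
    tweet_id, url = args
    try:
        resp = urllib2.urlopen(HeadRequest(url), timeout=10)
        return tweet_id, resp.geturl()
    except Exception:
        return tweet_id, None


if __name__ == '__main__':
    df = pd.read_csv("INPUT_FILE.tsv", delimiter="\t")
    df = df[df.Url != "None"]

    pool = Pool(100)  # pool size is a guess; tune it against the open-file limit
    results = pool.imap_unordered(expand, zip(df["Tweet ID"], df["Url"]))

    with open("OUTPUT_FILE.tsv", "w") as out:
        w = csv.writer(out, delimiter='\t')
        w.writerow(["Tweet ID", "Url"])
        for tweet_id, resolved in results:
            if resolved is not None:
                w.writerow([tweet_id, resolved])

    pool.close()
    pool.join()

In this layout only the main thread touches the csv writer, which also sidesteps sharing the writer across threads as the current code does.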

0 answers:

No answers yet