Python: Jupyter Notebook won't run my code and keeps reconnecting

Date: 2019-05-28 17:48:41

Tags: python url multiprocessing jupyter-notebook urllib

Why won't this code run properly in Jupyter Notebook? The kernel just keeps reconnecting and never produces any output. I am trying to build up a database by grabbing data from a web server as quickly as possible. I use a pool of workers to speed up the downloads and iterate over multiple URLs, where each URL corresponds to a different day.

import pandas as pd
import datetime
import urllib.error  # needed to catch urllib.error.HTTPError below
import requests
from pprint import pprint
import time
from io import StringIO
from multiprocessing import Process, Pool

symbols = ['AAP']

start = time.time()
dflist = []


def load(date):
    if date is None:
        return
    url = "http://regsho.finra.org/FNYXshvol{}.txt".format(date)
    try:
        df = pd.read_csv(url, delimiter='|')
        if any(df['Symbol'].isin(symbols)):
            stocks = df[df['Symbol'].isin(symbols)]
            print(stocks.to_string(index=False, header=False))
            # Save stocks to mysql
        else:
            print(f'No stock found for {date}' )
    except urllib.error.HTTPError:
        pass  # no file published for this date (e.g. weekend or holiday)


numdays = 365
start_date = datetime.datetime(2019, 1, 15 )  #year - month - day
datelist = [
        (start_date - datetime.timedelta(days=x)).strftime('%Y%m%d') for x in range(0, numdays)
        ]

pool = Pool(processes=16)
pool.map(load, datelist)

pool.close()
pool.join()

print(time.time() - start)

I would like to know how I can fix this and make it work.
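
I suspect the constant reconnecting has to do with creating a multiprocessing.Pool at the top level of a notebook cell (there is no if __name__ == '__main__': guard, and load is defined inside the notebook itself). Since the job is network-bound, a thread pool might avoid that entirely. Below is a minimal sketch of the same loop using multiprocessing.pool.ThreadPool; this is only my guess at a workaround, not something I have confirmed fixes the kernel issue:

import datetime
import time
import urllib.error

import pandas as pd
from multiprocessing.pool import ThreadPool  # worker threads instead of processes

symbols = ['AAP']


def load(date):
    # Download one day's FINRA short-volume file and print rows for our symbols.
    url = "http://regsho.finra.org/FNYXshvol{}.txt".format(date)
    try:
        df = pd.read_csv(url, delimiter='|')
        stocks = df[df['Symbol'].isin(symbols)]
        if not stocks.empty:
            print(stocks.to_string(index=False, header=False))
        else:
            print(f'No stock found for {date}')
    except urllib.error.HTTPError:
        pass  # no file published for this date


start = time.time()
start_date = datetime.datetime(2019, 1, 15)  # year - month - day
datelist = [(start_date - datetime.timedelta(days=x)).strftime('%Y%m%d')
            for x in range(365)]

pool = ThreadPool(processes=16)  # threads share the notebook's interpreter
pool.map(load, datelist)
pool.close()
pool.join()

print(time.time() - start)

My thinking is that threads sidestep the spawn/pickling problems a process pool can run into inside a notebook, and since each task mostly waits on the HTTP download, 16 threads should give a similar speedup, but I may be missing something.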

0 Answers:

There are no answers yet.