Downloading a CSV file in Python

Date: 2015-03-07 22:04:45

Tags: python csv python-3.x web-scraping

I'm trying to download stock data using the code below:

from urllib import request

#Download all daily stock data
for firm in ["SONC"]:
  for year in ["2009", "2010", "2011", "2012", "2013", "2014", "2015"]:
    for month in ["01","02","03","04","05","06","07","08","09","10","11","12"]:
      # Retrieve the webpage as a string
      response = request.urlopen("https://www.quandl.com/api/v1/datasets/WIKI/"+firm+".csv?trim_start="+year+"-"+month+"-01&trim_end="+year+"-"+month+"-31&collapse=daily")
      csv = response.read()

      # Save the string to a file
      csvstr = str(csv).strip("b'")

      lines = csvstr.split("\\n")
      f = open(""+firm+"_"+year+""+month+".csv", "w")
      for line in lines:
        f.write(line + "\n")
      f.close()

But I'm running into a problem: the code only works for a single iteration (one firm, one year, one month) and fails when looping over multiple.

Here is the error message I receive:

Traceback (most recent call last):
  File "C:/Users/kdaftari/Desktop/ECON431_Program.py", line 8, in <module>
    response = request.urlopen("https://www.quandl.com/api/v1/datasets/WIKI/"+firm+".csv?trim_start="+year+"-"+month+"-01&trim_end="+year+"-"+month+"-31&collapse=daily")
  File "C:\Python34\lib\urllib\request.py", line 161, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python34\lib\urllib\request.py", line 469, in open
    response = meth(req, response)
  File "C:\Python34\lib\urllib\request.py", line 579, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python34\lib\urllib\request.py", line 507, in error
    return self._call_chain(*args)
  File "C:\Python34\lib\urllib\request.py", line 441, in _call_chain
    result = func(*args)
  File "C:\Python34\lib\urllib\request.py", line 587, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 422: Unprocessable Entity

2 Answers:

Answer 0 (score: 3):

The dates you are sending are invalid; the server is telling you that there is no February 31st:

$ curl -D - -s "http://www.quandl.com/api/v1/datasets/WIKI/SONC.csv?trim_start=2009-02-01&trim_end=2009-02-31&collapse=daily"
HTTP/1.1 422 Unprocessable Entity
Cache-Control: no-cache
Content-Disposition: filename=WIKI-SONC.csv
Content-Type: text/csv
Date: Sat, 07 Mar 2015 22:28:59 GMT
Server: nginx
Status: 422 Unprocessable Entity
X-RateLimit-Limit: 50
X-RateLimit-Remaining: 38
X-Request-Id: b5d774b5-e916-40ef-92c4-443ceccf2ba6
X-Runtime: 0.025214
Content-Length: 117
Connection: keep-alive

error
trim_end:You provided 2009-02-31 for trim_end. This is not a recognized date format. Please provide yyyy-mm-dd

Note the error message in the response body.
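As an aside (not part of the original answer, and `describe_http_error` is a hypothetical helper): the `HTTPError` that `urllib` raises is itself a file-like response object, so the server's explanation can be read directly from Python instead of via curl. A minimal sketch, demonstrated on a synthetic error object so it runs without a network connection:

```python
import io
from urllib import error

def describe_http_error(ex):
    """Return the status code and decoded body of a urllib HTTPError."""
    return ex.code, ex.read().decode()

# Synthetic HTTPError standing in for the live 422 response (no network needed):
ex = error.HTTPError(
    "https://www.quandl.com/api/v1/datasets/WIKI/SONC.csv", 422,
    "Unprocessable Entity", None,
    io.BytesIO(b"trim_end:You provided 2009-02-31 for trim_end."))

code, body = describe_http_error(ex)
print(code, body)  # 422 trim_end:You provided 2009-02-31 for trim_end.
```

In a real `try`/`except urllib.error.HTTPError as ex` block, the same `ex.code` and `ex.read()` calls expose what the server complained about.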

You can easily generate correct dates using datetime.date() objects:

from datetime import date, timedelta

for firm in ["SONC"]:
    for year in range(2009, 2016):
        for month in range(1, 13):
            startdate = date(year, month, 1)
            enddate = date(year + (month // 12), month % 12 + 1, 1) - timedelta(days=1)
            url = 'http://www.quandl.com/api/v1/datasets/WIKI/{}.csv?trim_start={:%Y-%m-%d}&trim_end={:%Y-%m-%d}&collapse=daily'.format(
                firm, startdate, enddate)
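The `enddate` computation above (first day of the following month, minus one day) handles 30/31-day months and leap years correctly. A small sketch to illustrate (`month_end` is a hypothetical helper name, not from the answer):

```python
from datetime import date, timedelta

def month_end(year, month):
    """Last day of the given month: first day of the next month minus one day."""
    return date(year + month // 12, month % 12 + 1, 1) - timedelta(days=1)

print(month_end(2009, 2))   # 2009-02-28
print(month_end(2012, 2))   # 2012-02-29 (leap year)
print(month_end(2015, 12))  # 2015-12-31
```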

Answer 1 (score: 2):

You are trying to download this URL with urllib.request, and the web server responds with the error 422 Unprocessable Entity.

Also, if you look at the server's response, you will see that it describes the error as:

error
trim_end:You provided 2009-02-31 for trim_end. This is not a recognized date format. Please provide yyyy-mm-dd

As Martijn Pieters pointed out, 2009-02-31 is not a valid date.

Here is the code fixed for you:

import calendar
import time
from urllib import request, error as urllib_error

#Download all daily stock data
for firm in ["SONC"]:
    for year in range(2009, 2016): # from 2009 to 2015 inclusive
        for month in range(1, 13):   # from 1 to 12 inclusive
            # Get number of days in month
            days_in_month = calendar.monthrange(year, month)[1]

            # Retrieve the webpage as a string
            url = "https://www.quandl.com/api/v1/datasets/WIKI/{firm}.csv" \
                "?trim_start={year}-{month:02d}-01&trim_end={year}-{month:02d}-{days_in_month}" \
                "&collapse=daily".format(
                    firm=firm, year=year, month=month, days_in_month=days_in_month)

            # For easier debugging
            print(url)

            sleep_time = 1
            while True:
                try:
                    response = request.urlopen(url)
                    csv = response.read()
                    break  # success, stop retrying
                except urllib_error.HTTPError as ex:
                    if ex.code != 429:  # anything but Too Many Requests is fatal
                        raise
                    print("Server replied with 'Too many requests', sleeping for a while...")
                    time.sleep(sleep_time)

                    # Increase sleep time so that retries don't overload the server
                    sleep_time = min(2 * sleep_time, 60)

            # Save the string to a file
            file_name = "{firm}_{year}_{month}.csv".format(
                firm=firm, year=year, month=month)
            with open(file_name, "wb") as f:
                f.write(csv)
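The retry loop doubles the wait after each 429 response, capped at 60 seconds. The resulting delay schedule can be sketched in isolation (`backoff_schedule` is a hypothetical helper, not part of the answer's code):

```python
def backoff_schedule(retries, initial=1, cap=60):
    """Delays produced by doubling the sleep time, capped at `cap` seconds."""
    delays, sleep_time = [], initial
    for _ in range(retries):
        delays.append(sleep_time)
        sleep_time = min(2 * sleep_time, cap)
    return delays

print(backoff_schedule(8))  # [1, 2, 4, 8, 16, 32, 60, 60]
```

Capping the delay keeps a long outage from producing absurdly long sleeps while still backing off quickly at first.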