Is there a better, simpler way to download multiple files?

Asked: 2017-04-12 02:20:29

Tags: python urllib

I went to the New York MTA website to download some turnstile data and put together a script that downloads only the 2017 files.

Here is the script:

import urllib
import re

# grab the turnstile index page and pull out the links to the 2017 files
html = urllib.urlopen('http://web.mta.info/developers/turnstile.html').read()
links = re.findall('href="(data/\S*17[01]\S*[a-z])"', html)

for link in links:
    txting = urllib.urlopen('http://web.mta.info/developers/'+link).read()
    lin = link[20:40]   # slice the file name out of the relative path
    fhand = open(lin,'w')
    fhand.write(txting)
    fhand.close()

Is there a simpler way to write this script?

2 answers:

Answer 0 (score: 2)

As @dizzyf suggested, you can use BeautifulSoup to get the href values from the page.

from bs4 import BeautifulSoup
soup = BeautifulSoup(html)
links = [link.get('href') for link in soup.find_all('a')
         if link.get('href') and 'turnstile_17' in link.get('href')]
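
If you want to keep the downloading in Python as well, the loop from the question works with these links unchanged; here is a minimal sketch (Python 2 urllib, as in the original script; naming each local file after the last path segment with os.path.basename is an assumption, not something from the original post):

import os
import urllib

url_base = 'http://web.mta.info/developers/'
for link in links:
    # the hrefs are relative (they start with 'data/'), so prepend the site base
    txting = urllib.urlopen(url_base + link).read()
    # name the local file after the last path segment (assumed naming)
    with open(os.path.basename(link), 'w') as fhand:
        fhand.write(txting)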

If you don't have to fetch the files with Python (and you're on a system that has the wget command), you can write the links to a file instead:

with open('url_list.txt','w') as url_file:
    for url in links:
        url_file.write(url + '\n')

Then download them with wget:

$ wget -i url_list.txt

wget -i downloads every URL listed in the file into the current directory, keeping the original file names.
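
One caveat: the href values pulled from the page are relative paths (they start with data/, which is why the scripts above prepend http://web.mta.info/developers/), and wget needs absolute URLs in the list file. A minimal sketch that prepends the base before writing:

url_base = 'http://web.mta.info/developers/'
with open('url_list.txt', 'w') as url_file:
    for url in links:
        # write absolute URLs so wget can fetch them directly
        url_file.write(url_base + url + '\n')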

Answer 1 (score: 0)

The following code does what you need.

import requests
import bs4
import time
import random
import re

pattern = '2017'
url_base = 'http://web.mta.info/developers/'
url_home = url_base + 'turnstile.html'
response = requests.get(url_home)
data = dict()

soup = bs4.BeautifulSoup(response.text)
# keep only the links whose visible text mentions 2017
links = [link.get('href')
         for link in soup.find_all('a', text=re.compile(pattern))]
for link in links:
    url = url_base + link
    print "Pulling data from:", url
    response = requests.get(url)
    # I don't know what you want to do with the data, so here I just store it in
    # a dict, but you could write each file to disk as you did in your example.
    data[link] = response.text
    not_a_robot = random.randint(2, 15)
    print "Waiting %d seconds before next query." % not_a_robot
    time.sleep(not_a_robot) # some APIs will throttle you if you hit them too quickly
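
If you'd rather save each file to disk, as in your original script, instead of keeping everything in the dict, the loop body can write the response out; a minimal sketch (deriving the local file name from the link's last path segment is an assumption):

import os

# inside the loop above, after response = requests.get(url):
filename = os.path.basename(link)   # last segment of the relative link
with open(filename, 'wb') as fhand:
    fhand.write(response.content)   # raw bytes, avoids text-encoding issues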