I'm new to Python (2.7) and looking for advice on the following:
I have a CSV file where HTTP links are stored in one column, comma separated:
http://example.com/file.pdf,
http://example.com/file.xls,
http://example.com/file.xlsx,
http://example.com/file.doc,
The main goal is to loop over all of these links and download each file under its original name and extension.
My searching and the help I found so far gave me this script:
import urllib2
import pandas as pd

links = pd.read_csv('links.csv', sep=',', header=0)
url = links  # I know this part is wrong, but I don't know how to do it right
user_agent = 'Mozilla 5.0 (Windows 7; Win64; x64)'
file_name = "tessst"  # a placeholder file name, but how do I get the original names?
u = urllib2.Request(url, headers={'User-Agent': user_agent})
req = urllib2.urlopen(u)
f = open(file_name, 'wb')
f.write(req.read())
f.close()
Any help would be appreciated.
P.S. Not too sure about pandas; maybe plain csv is better?
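For concreteness, the file name being asked about is just the last segment of each URL's path. A minimal sketch of extracting it, assuming Python 2.7 (the URL is only an example from the question):

import os
import urlparse

url = 'http://example.com/file.pdf'
# urlsplit().path gives '/file.pdf'; basename strips the leading directories.
file_name = os.path.basename(urlparse.urlsplit(url).path)
print file_name  # -> file.pdf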
Answer 0 (score: 1)
Assuming your CSV file is just a single column containing the links, this will work:
import csv, sys
import urllib2
import os

filename = 'test.csv'

with open(filename, 'rb') as f:
    reader = csv.reader(f)
    try:
        for row in reader:
            if 'http' in row[0]:
                # Reverse the URL, cut at the first '/', then reverse back:
                # this yields everything after the last '/' (the file name).
                rev = row[0][::-1]
                i = rev.index('/')
                tmp = rev[0:i]
                rq = urllib2.Request(row[0])
                res = urllib2.urlopen(rq)
                if not os.path.exists("./" + tmp[::-1]):
                    pdf = open("./" + tmp[::-1], 'wb')
                    pdf.write(res.read())
                    pdf.close()
                else:
                    print "file: ", tmp[::-1], "already exists"
    except csv.Error as e:
        sys.exit('file %s, line %d: %s' % (filename, reader.line_num, e))
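If you would rather keep pandas, as in the original attempt, the same loop can be driven from the DataFrame instead of csv.reader. This is only a sketch, assuming links.csv holds a single column of URLs with no header row; the User-Agent string is simply the one from the question:

import os
import urllib2
import urlparse

import pandas as pd

links = pd.read_csv('links.csv', header=None)  # one column of URLs, no header row
user_agent = 'Mozilla 5.0 (Windows 7; Win64; x64)'

for url in links[0].dropna():
    url = url.strip().rstrip(',')  # tolerate stray whitespace / trailing commas
    if not url.startswith('http'):
        continue
    # Original file name = last segment of the URL path, as in the earlier sketch.
    file_name = os.path.basename(urlparse.urlsplit(url).path)
    req = urllib2.Request(url, headers={'User-Agent': user_agent})
    res = urllib2.urlopen(req)
    with open(file_name, 'wb') as out:
        out.write(res.read())

os.path.basename on the parsed path does the same job as the string-reversal trick above. For a one-column file like this, csv and pandas are both fine; pandas only really pays off if you need more of the table.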