# pick up the file which needs to be processed
current_file = file_names[0]
print "Processing current file: " + current_file
key = bucket.get_key(current_file)
print "Processing key: " + str(key)
key.get_contents_to_filename(working_dir + "test_stats_temp.dat")
print "Current directory: ",outputdir
print "File to process:",current_file
Processing test output: ds=2013-08-27
Processing current file: output/test_count_day/ds=2013-08-27/task_201308270934_0003_r_000000
Processing key: <Key: hadoop.test.com,output/test_count_day/ds=2013-08-27/task_201308270934_0003_r_000000>
Traceback (most recent call last):
File "queue_consumer.py", line 493, in <module>
test_process.load_test_cnv_stats_daily(datestring,working_dir,mysqlconn,s3_conn,test_output_bucket,test_output)
File "/home/sbr/aaa/test_process.py", line 46, in load_test_cnv_stats_daily
key.get_contents_to_filename(working_dir + "test_stats_temp.dat")
File "/usr/lib/python2.7/dist-packages/boto/s3/key.py", line 1275, in get_contents_to_filename
fp = open(filename, 'wb')
IOError: [Errno 2] No such file or directory: '/home/sbr/aaa/test_stats_temp.dat'
I got this error when pulling data from the S3 output into the DB. I'm confused here. How do I handle this problem?
Answer (score 2):
The error:
IOError: [Errno 2] No such file or directory: '/home/sbr/aaa/test_stats_temp.dat'
indicates that the path set in working_dir does not exist. Creating the directory will fix it.
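A minimal sketch of that fix: make sure the directory exists before boto's get_contents_to_filename tries to open a file inside it. The working_dir value below is a temp-directory stand-in, since the real value comes from the script's configuration:

```python
import os
import tempfile

# Stand-in for the script's configured working_dir (e.g. "/home/sbr/aaa/").
working_dir = os.path.join(tempfile.gettempdir(), "sbr_demo", "aaa") + os.sep

# Create the directory (and any missing parents) if it does not exist yet.
if not os.path.isdir(working_dir):
    os.makedirs(working_dir)

# With the directory in place, open(..., 'wb') no longer raises IOError,
# so key.get_contents_to_filename(working_dir + "test_stats_temp.dat")
# would succeed. Simulate the write boto performs:
path = working_dir + "test_stats_temp.dat"
with open(path, "wb") as fp:
    fp.write(b"")

print(os.path.exists(path))
```

Running the check before every download is cheap and makes the script independent of whether the directory was pre-created by hand.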