I am working on a data pipeline. In one of its steps, a CSV from S3 is consumed by a Redshift DataNode. My Redshift table has 78 columns, checked with:
SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'my_table';
The 'stl_load_errors' table for the failing RedshiftCopyActivity shows a "Delimiter not found" (1214) error at line number 1, on column namespace (the second column, a varchar(255)), at position 0. The consumed CSV line looks like this:
0,my.namespace.string,2119652,458031,S,60,2015-05-02,2015-05-02 14:51:02,2015-05-02 14:51:14.0,1,Counter,1,Counter 01,91,Chaymae,0,,,,227817,1,Dine In,5788,2015-05-02 14:51:02,2015-05-02 14:51:27,17.45,0.00,0.00,17.45,,91,Chaymae,0,0.00,12,M,A,-1,13,F,0,0,2,2.50,F,1094055,Coleslaw Md Upt,8,Sonstige,900,Sides,901,Sides,0.00,0.00,0,,,0.0000,0,0,,,0.00,0.0000,0.0000,0,,,0.00,0.0000,,1,Woche Counter,127,Coleslaw Md Upt,2,2.50
After a simple replacement ("," to "\n") I get 78 lines, so it looks like the data should match... I am stuck on this. Maybe someone knows how to get more information about the error, or sees a solution?
EDIT
The query:
select d.query, substring(d.filename,14,20),
d.line_number as line,
substring(d.value,1,16) as value,
substring(le.err_reason,1,48) as err_reason
from stl_loaderror_detail d, stl_load_errors le
where d.query = le.query
and d.query = pg_last_copy_id();
returned 0 rows.
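One likely explanation (my assumption, not stated in the original post): pg_last_copy_id() returns the query id of the last COPY executed in the current session, so running this from a fresh session matches nothing, and stl_loaderror_detail is not populated for every error class anyway. A minimal sketch that inspects the most recent load errors directly, without the session-bound id:

-- list the most recent COPY failures across all sessions
select le.query, le.starttime,
       trim(le.filename) as filename,
       le.line_number,
       le.colname,
       le.err_code,
       trim(le.err_reason) as err_reason
from stl_load_errors le
order by le.starttime desc
limit 10;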
Answer 0 (score: 5)
I figured it out; maybe it will be useful for someone else:
In fact there were two problems.

First, the first column in my table was of type INT IDENTITY(1,1), while the CSV had a 0 value in that position. After removing the first column from the CSV, everything was copied without problems, even without an explicit column mapping.

Second, a DELIMITER ',' commandOption was added to the S3ToRedshiftCopyActivity to force the comma as the separator. Without it, Redshift recognized the dot in the namespace value (my.namespace.string) as the delimiter.
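For reference, the equivalent plain COPY command would look roughly like this; a sketch only, with a simplified hypothetical column list, bucket path, and IAM role (none of these are from the original post):

-- list the columns explicitly so the INT IDENTITY(1,1) id column is skipped,
-- and force the comma delimiter so the dots in my.namespace.string
-- are not mistaken for field separators
copy my_table (namespace, quantity, order_date)
from 's3://my-bucket/exports/sales.csv'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
delimiter ',';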
Answer 1 (score: 0)

You need to add FORMAT AS JSON 's3://yourbucketname/aJsonPathFile.txt'. AWS has not mentioned this. Note that this only works if your data is in a JSON format like:
{"attr1": "val1", "attr2": "val2"}
{"attr1": "val1", "attr2": "val2"}
{"attr1": "val1", "attr2": "val2"}
{"attr1": "val1", "attr2": "val2"}
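For completeness, a hedged sketch of what that COPY would look like (the bucket, IAM role, and jsonpaths file name are illustrative; when the JSON keys match the column names, 'auto' can be used instead of a jsonpaths file):

-- load newline-separated JSON objects using an explicit jsonpaths mapping
copy my_table
from 's3://yourbucketname/data/'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
format as json 's3://yourbucketname/aJsonPathFile.txt';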