Pivotal HDB complains "data line too long. likely due to invalid csv data"

Asked: 2016-01-11 19:58:50

Tags: postgresql greenplum hawq

We have a small, business-critical Hadoop-HAWQ cluster. We have created an external table pointing at files in Hadoop.

Environment:

Product version: (HAWQ 1.3.0.2 build 14421) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2

What we tried:

When we try to read from the external table with the following command, we get this error:

test=# select count(*) from EXT_TAB;
ERROR:  data line too long. likely due to invalid csv data  (seg0 slice1 SEG0.HOSTNAME.COM:40000 pid=447247)
DETAIL:  External table trcd_stg0, line 12059 of pxf://hostname/tmp/def_rcd/?profile=HdfsTextSimple: "2012-08-06 00:00:00.0^2012-08-06 00:00:00.0^6552^2016-01-09 03:15:43.427^0005567^COMPLAINTS ..."

Additional information:

The DDL of the external table is:

CREATE READABLE EXTERNAL TABLE sysprocompanyb.trcd_stg0
(
    "DispDt" DATE,
    "InvoiceDt" DATE,
    "ID" INTEGER,
    time timestamp without time zone,
    "Customer" CHAR(7),
    "CustomerName" CHARACTER VARYING(30),
    "MasterAccount" CHAR(7),
    "MasterAccName" CHAR(30),
    "SalesOrder" CHAR(6),
    "SalesOrderLine" NUMERIC(4, 0),
    "OrderStatus" CHAR(200),
    "MStockCode" CHAR(30),
    "MStockDes" CHARACTER VARYING(500),
    "MWarehouse" CHAR(200),
    "MOrderQty" NUMERIC(10, 3),
    "MShipQty" NUMERIC(10, 3),
    "MBackOrderQty" NUMERIC(10, 3),
    "MUnitCost" NUMERIC(15, 5),
    "MPrice" NUMERIC(15, 5),
    "MProductClass" CHAR(200),
    "Salesperson" CHAR(200),
    "CustomerPoNumber" CHAR(30),
    "OrderDate" DATE,
    "ReqShipDate" DATE,
    "DispatchesMade" CHAR(1),
    "NumDispatches" NUMERIC(4, 0),
    "OrderValue" NUMERIC(26, 8),
    "BOValue" NUMERIC(26, 8),
    "OrdQtyInEaches" NUMERIC(21, 9),
    "BOQtyInEaches" NUMERIC(21, 9),
    "DispQty" NUMERIC(38, 3),
    "DispQtyInEaches" NUMERIC(38, 9),
    "CustomerClass" CHAR(200),
    "MLineShipDate" DATE
)
LOCATION (
    'pxf://HOSTNAME-HA/tmp/def_rcd/?profile=HdfsTextSimple'
)
FORMAT 'CSV' (delimiter '^' null '' escape '"' quote '"')
ENCODING 'UTF8';

Any help would be greatly appreciated.

2 Answers:

Answer 0 (score: 2)

Based on the source code: https://github.com/apache/incubator-hawq/blob/e48a07b0d8a5c8d41d2d4aaaa70254867b11ee11/src/backend/commands/copy.c

the error is raised when cstate->line_buf.len >= gp_max_csv_line_length evaluates to true. According to: http://hawq.docs.pivotal.io/docs-hawq/guc_config-gp_max_csv_line_length.html

the default maximum CSV line length is 1048576 bytes. Have you checked the line lengths in your csv file and tried increasing the value of this setting?
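To check whether any line actually exceeds the 1048576-byte default, you can scan the file for its longest line. A minimal sketch, assuming you have a local copy of the data (for files on HDFS you would pipe `hdfs dfs -cat /tmp/def_rcd/*` into the same awk program instead; the sample file path and contents below are made up for illustration):

```shell
#!/bin/sh
# Create a tiny sample file standing in for the real CSV (hypothetical data).
printf '%s\n' 'a^b^c' 'longer^line^here^x' > /tmp/sample.csv

# Print the length in characters of the longest line in the file.
# Compare this number against gp_max_csv_line_length (default 1048576).
awk '{ if (length($0) > max) max = length($0) } END { print max }' /tmp/sample.csv
```

If the printed maximum is above the GUC's value, either the data genuinely contains oversized rows or, as the next answer suggests, rows are being merged during parsing.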

Answer 1 (score: 2)

Check whether the number of delimited fields in line 12059 matches the number of columns in the table. If some rows get merged together during parsing, the maximum line length can be exceeded. This usually happens because of bad data:

echo $LINE | awk -F'^' '{ total = total + NF }; END { print total }'
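To apply that check across the whole file rather than one line at a time, you can flag every row whose field count deviates from the 34 columns declared in the DDL above. A sketch under the assumption that '^' never appears inside quoted field values (the toy 3-column file below is invented for illustration; run the awk line against your real data with NF != 34):

```shell
#!/bin/sh
# Toy file: line 1 has 3 fields, line 2 has only 2 (a "bad" row).
printf '%s\n' 'a^b^c' 'a^b' > /tmp/rows.csv

# Report line number and field count for every row that does not have
# exactly 3 '^'-delimited fields; for the real table, use NF != 34.
awk -F'^' 'NF != 3 { print NR ": " NF " fields" }' /tmp/rows.csv
```

Rows reported here are the ones most likely to cause adjacent lines to be glued together and trip the gp_max_csv_line_length check.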