SQL Server: The bulk load failed. The column is too long in the data file for row 1, column 1

Asked: 2011-12-15 01:16:01

Tags: sql-server bcp

Can someone please help me? I've been looking at this for hours now and am getting nowhere.

I created a table in SQL Server 2008 R2 Express using the following script:

CREATE TABLE Features
(
ID int not null identity(1,1),
StopID varchar(10), 
Code int,
Name varchar(100),
Summary varchar(200),
Lat real,
Lon real,
street varchar(100),
city varchar(50),
region varchar(50),
postcode varchar(10),
country varchar(20),
zone_id varchar(20),
the_geom geography,

 CONSTRAINT [PK_Features] PRIMARY KEY CLUSTERED 
(
    [ID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

I then used the bcp utility to create the following format file against my database table:

10.0
12
1       SQLCHAR             2       100     ","    2     StopID               Latin1_General_CI_AS
2       SQLINT              1       4       ","    3     Code                 ""
3       SQLCHAR             2       100     ","    4     Name                 Latin1_General_CI_AS
4       SQLCHAR             2       200     ","    5     Summary              Latin1_General_CI_AS
5       SQLFLT4             1       4       ","    6     Lat                  ""
6       SQLFLT4             1       4       ","    7     Lon                  ""
7       SQLCHAR             2       100     ","    8     street               Latin1_General_CI_AS
8       SQLCHAR             2       50      ","    9     city                 Latin1_General_CI_AS
9       SQLCHAR             2       50      ","    10    region               Latin1_General_CI_AS
10      SQLCHAR             2       10      ","    11    postcode             Latin1_General_CI_AS
11      SQLCHAR             2       20      ","    12    country              Latin1_General_CI_AS
12      SQLCHAR             2       20      "\r\n"    13    zone_id              Latin1_General_CI_AS

This file has been modified to drop the ID and the_geom fields, since those are not present in my data file.

I then tried to bulk insert a one-row csv containing the following:

a,8,S,,45.439869,-75.695839,,,,,,
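The question doesn't show the actual load statement, but a BULK INSERT using the format file above would look roughly like this (the file paths are placeholders, not from the question):

```sql
-- Hypothetical paths; the original statement was not shown in the question.
BULK INSERT Features
FROM 'C:\data\features.csv'
WITH (FORMATFILE = 'C:\data\features.fmt');
```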

All I get is:

Msg 4866, Level 16, State 7, Line 35
The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 35
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 35
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".

Any pointers would be appreciated, as I can't figure this out.

4 Answers:

Answer 0 (score: 8)

The problem was caused by the default prefix length settings in my format file. The data file I was importing was not created with bcp, so I had to set the prefix length to 0 for all fields, like this:

10.0
12
1       SQLCHAR             0       100     ","    2     StopID               Latin1_General_CI_AS
2       SQLINT              0       4       ","    3     Code                 ""
3       SQLCHAR             0       100     ","    4     Name                 Latin1_General_CI_AS
4       SQLCHAR             0       200     ","    5     Summary              Latin1_General_CI_AS
5       SQLFLT4             0       4       ","    6     Lat                  ""
6       SQLFLT4             0       4       ","    7     Lon                  ""
7       SQLCHAR             0       100     ","    8     street               Latin1_General_CI_AS
8       SQLCHAR             0       50      ","    9     city                 Latin1_General_CI_AS
9       SQLCHAR             0       50      ","    10    region               Latin1_General_CI_AS
10      SQLCHAR             0       10      ","    11    postcode             Latin1_General_CI_AS
11      SQLCHAR             0       20      ","    12    country              Latin1_General_CI_AS
12      SQLCHAR             0       20      "\r\n"    13    zone_id              Latin1_General_CI_AS

With this change, the import succeeded.

Answer 1 (score: 6)

Try this:

ROWTERMINATOR = '0x0a'
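For context, ROWTERMINATOR is an option in the WITH clause of BULK INSERT; '0x0a' is the hex code for a bare line feed, which matters when the data file has Unix-style (LF) rather than Windows-style (CRLF) line endings. A sketch with placeholder paths:

```sql
BULK INSERT Features
FROM 'C:\data\features.csv'     -- placeholder path
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0a'      -- LF only; use '\r\n' for CRLF-terminated files
);
```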

Answer 2 (score: 0)

For what it's worth, I ran into the same problem because of a mismatch between the expected format of a date field and the format actually used in my CSV. I changed the date format in the CSV and it worked fine.

Answer 3 (score: 0)

I ran into this problem today, but only for specific ROWS where a text COLUMN's value exceeded 8000 characters. Even though my FMT file used SQLCHAR 0 0 to mean max, somewhere in the pipeline the maximum was 8000.

I'm using Azure SQL, trying to read a CSV in an Azure Blob container.
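On Azure SQL, reading a CSV from Blob storage typically goes through an external data source. A minimal sketch, assuming a database scoped credential and an external data source named MyAzureBlobStorage have already been created (both names and the file path are hypothetical):

```sql
-- Assumes EXTERNAL DATA SOURCE MyAzureBlobStorage already points at the container.
BULK INSERT Features
FROM 'data/features.csv'            -- path relative to the container root
WITH (
    DATA_SOURCE = 'MyAzureBlobStorage',
    FORMAT = 'CSV',                 -- CSV parsing mode (Azure SQL / SQL Server 2017+)
    FIRSTROW = 2                    -- skip a header row, if the file has one
);
```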