I'm using the code below to export a DataTable to a CSV file.
public static void DataTableToCsv(System.Data.DataTable dt, string csvFile)
{
    // requires: using System; using System.Data; using System.IO;
    //           using System.Linq; using System.Text;
    try
    {
        StringBuilder sb = new StringBuilder();

        // Header row: column names, quoted, with embedded quotes doubled.
        var columnNames = dt.Columns.Cast<DataColumn>()
            .Select(column => "\"" + column.ColumnName.Replace("\"", "\"\"") + "\"")
            .ToArray();
        sb.AppendLine(string.Join("\t", columnNames));

        foreach (DataRow row in dt.Rows) // Out of Memory Exception Here
        {
            var fields = row.ItemArray
                .Select(field => "\"" + field.ToString().Replace("\"", "\"\"") + "\"")
                .ToArray();
            sb.AppendLine(string.Join("\t", fields));
        }

        TextWriter sUrl = new StreamWriter(csvFile, true, Encoding.Unicode);
        sUrl.WriteLine(sb.ToString());
        sUrl.Close();
    }
    catch (Exception ex)
    {
        throw;
    }
}
Basically, I'm merging hundreds of sheets from a folder into the DataTable; the sheets have different sets of common columns, and so on. So the total row count gets very large: 500k, or even 1 million rows. With the code above I get an OutOfMemoryException. Any suggestions on how to solve this?
Answer 0 (score: 0)
Write each CSV row out individually instead of accumulating them all in one giant StringBuilder buffer. Everywhere you call sb.AppendLine(), call sUrl.WriteLine() instead. The StringBuilder forces the entire file's contents to live in memory as one huge string before anything is written; at a million rows that buffer alone can run to hundreds of megabytes of contiguous memory, which is what throws the OutOfMemoryException. Streaming each line straight to the file keeps memory use roughly constant regardless of row count.
For example:
public static void DataTableToCsv(System.Data.DataTable dt, string csvFile)
{
    // requires: using System; using System.Data; using System.Diagnostics;
    //           using System.IO; using System.Linq; using System.Text;
    try
    {
        using (TextWriter sUrl = new StreamWriter(csvFile, true, Encoding.Unicode))
        {
            // Header row: column names, quoted, with embedded quotes doubled.
            var columnNames = dt.Columns.Cast<DataColumn>()
                .Select(column => "\"" + column.ColumnName.Replace("\"", "\"\"") + "\"")
                .ToArray();
            sUrl.WriteLine(string.Join("\t", columnNames));

            foreach (DataRow row in dt.Rows)
            {
                // Each row goes straight to the file, so memory use stays
                // flat no matter how many rows the table holds.
                var fields = row.ItemArray
                    .Select(field => "\"" + field.ToString().Replace("\"", "\"\"") + "\"")
                    .ToArray();
                sUrl.WriteLine(string.Join("\t", fields));
            }
        }
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.ToString());
        throw;
    }
}
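A minimal sketch of a call site, just to show the shape of the call; the column names, row values, and output path here are made up for illustration:

var dt = new System.Data.DataTable();
dt.Columns.Add("Name");
dt.Columns.Add("Comment");
dt.Rows.Add("row1", "a \"quoted\" value");  // embedded quotes get doubled in the output
DataTableToCsv(dt, @"C:\temp\merged.csv");  // hypothetical output path

Two things worth noting about the original code that carry over unchanged here: the second StreamWriter argument (true) opens the file in append mode, so repeated runs keep adding to the same file (pass false if each export should start fresh), and since values are joined with "\t" rather than commas, the output is really tab-separated despite the .csv extension.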