I created a PowerShell function that bulk copies data from a .csv file (whose first row contains the column headers) and inserts it into a SQL Server database table.
Have a look at my code:
function BulkCsvImport($sqlserver, $database, $table, $csvfile, $csvdelimiter, $firstrowcolumnnames) {
    Write-Host "Bulk Import Started."
    $elapsed = [System.Diagnostics.Stopwatch]::StartNew()
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data")
    [void][Reflection.Assembly]::LoadWithPartialName("System.Data.SqlClient")

    # 50k worked fastest and kept memory usage to a minimum
    $batchsize = 50000

    # Build the SqlBulkCopy connection, and set the timeout to infinite
    $connectionstring = "Data Source=$sqlserver;Integrated Security=true;Initial Catalog=$database;"

    # Wipe the bulk insert table first
    Invoke-Sqlcmd -Query "TRUNCATE TABLE $table" -ServerInstance $sqlserver -Database $database

    $bulkcopy = New-Object Data.SqlClient.SqlBulkCopy($connectionstring, [System.Data.SqlClient.SqlBulkCopyOptions]::TableLock)
    $bulkcopy.DestinationTableName = $table
    $bulkcopy.BulkCopyTimeout = 0
    $bulkcopy.BatchSize = $batchsize

    # Create the DataTable and autogenerate the columns
    $datatable = New-Object System.Data.DataTable

    # Open the text file from disk
    $reader = New-Object System.IO.StreamReader($csvfile)
    $columns = (Get-Content $csvfile -First 1).Split($csvdelimiter)
    if ($firstrowcolumnnames -eq $true) { $null = $reader.ReadLine() }

    # NOTE: the columns are added without names, so SqlBulkCopy maps them to the
    # destination table by ordinal position, not by name
    foreach ($column in $columns) {
        $null = $datatable.Columns.Add()
    }

    # Read in the data, line by line
    $i = 0
    while (($line = $reader.ReadLine()) -ne $null) {
        $null = $datatable.Rows.Add($line.Split($csvdelimiter))
        $i++
        if (($i % $batchsize) -eq 0) {
            $bulkcopy.WriteToServer($datatable)
            Write-Host "$i rows have been inserted in $($elapsed.Elapsed.ToString())."
            $datatable.Clear()
        }
    }

    # Add in all the remaining rows since the last clear
    if ($datatable.Rows.Count -gt 0) {
        $bulkcopy.WriteToServer($datatable)
        $datatable.Clear()
    }

    # Clean up
    $reader.Close()
    $reader.Dispose()
    $bulkcopy.Close()
    $bulkcopy.Dispose()
    $datatable.Dispose()
    Write-Host "Bulk Import Completed. $i rows have been inserted into the database."
    # Write-Host "Total Elapsed Time: $($elapsed.Elapsed.ToString())"

    # Sometimes the garbage collector takes too long to clear the huge DataTable
    $i = 0
    [System.GC]::Collect()
}
That said, I would like to modify the above so that the column names in the .csv file are matched to the column names in the SQL Server table. They should be identical. At the moment, the data is being imported into the wrong database columns.
Could I get some help modifying the function above?
Answer 0 (score: 0)
I would use an existing open-source solution instead: Import-DbaCsv, from the dbatools module.
Efficiently imports very large (and small) CSV files into SQL Server.
Import-DbaCsv takes advantage of .NET's blazing-fast SqlBulkCopy class to import CSV files into SQL Server.
Parameter:
-ColumnMap
By default, the bulk copy attempts to automap columns. When that does not work as expected, this parameter will help.
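The mapping described below follows the -ColumnMap example in the dbatools documentation. The file path, SQL instance, database, and table names here are illustrative placeholders, not values from the question:

```powershell
# Map CSV columns to differently named SQL columns with a hashtable:
# key = CSV header, value = destination column in the SQL Server table
$columns = @{
    Text   = 'FirstName'
    Number = 'PhoneNumber'
}

# Import the file, applying the explicit column map
Import-DbaCsv -Path C:\temp\supersmall.csv -SqlInstance sql2016 -Database tempdb `
    -Table table001 -ColumnMap $columns
```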
The CSV column "Text" is inserted into the SQL column "FirstName", and the CSV column "Number" into the SQL column "PhoneNumber". All other columns are ignored and are therefore left null or at their default values.
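Alternatively, if you would rather keep your original function, SqlBulkCopy can be told to map by name instead of by ordinal. This is only a sketch, not code from the answer: it names each DataTable column after the CSV header and adds an explicit mapping for it; both `DataTable.Columns.Add(name)` and `SqlBulkCopy.ColumnMappings.Add(sourceColumn, destinationColumn)` are standard .NET APIs:

```powershell
# Inside BulkCsvImport, replace the unnamed-column loop with this,
# after $bulkcopy and $datatable have been created:
foreach ($column in $columns) {
    $name = $column.Trim('"')
    # name the DataTable column after the CSV header...
    $null = $datatable.Columns.Add($name)
    # ...and map source -> destination by name, so CSV column order
    # no longer has to match the order of the SQL table's columns
    $null = $bulkcopy.ColumnMappings.Add($name, $name)
}
```

With explicit mappings in place, any CSV header that has no matching destination column will raise an error at WriteToServer time rather than silently landing in the wrong column.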