Do batched SQL statements increase chances of deadlock errors?

Asked: 2018-04-20 01:18:16

Tags: c# database tsql batch-processing deadlock

I have a C# project which connects to a TSQL database. The project runs multiple sequential update statements on a single table, eg.:

 private void updateRows() {
      // Note: the WITH (ROWLOCK) hint goes after the table name
      string update1 = "UPDATE table WITH (ROWLOCK) SET ... WHERE ...;";
      string update2 = "UPDATE table WITH (ROWLOCK) SET ... WHERE ...;";
      string update3 = "UPDATE table WITH (ROWLOCK) SET ... WHERE ...;";

      // execute updates one after the other
 }

Ideally, these statements will be batched to avoid making multiple round-trips to/from the database:

string update = @"
    UPDATE table WITH (ROWLOCK) SET ... WHERE ...;
    UPDATE table WITH (ROWLOCK) SET ... WHERE ...;
    UPDATE table WITH (ROWLOCK) SET ... WHERE ...;
";

My question is: if the statements are batched, does this increase the chance of deadlock errors occurring due to table scans?

Since there is less time between each statement, I imagine batching could increase the chance of deadlocks: one update could take a row or page lock that has not yet been released by the time the next statement executes. If the statements were not batched, there would be more time for row or page locks to be released between updates, and therefore less chance of a deadlock.
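For reference, a single batch of this shape can be sent in one round-trip with semicolons separating the statements; `GO` is a batch separator understood by client tools such as SSMS and sqlcmd, not valid T-SQL, so it cannot appear inside a command sent from C#. A minimal sketch (table and column names are hypothetical):

```sql
-- Three updates sent as ONE batch (single round-trip), wrapped in an
-- explicit transaction so the locks are held and released together.
BEGIN TRANSACTION;
UPDATE dbo.MyTable WITH (ROWLOCK) SET Col1 = 1 WHERE Id = 10;
UPDATE dbo.MyTable WITH (ROWLOCK) SET Col1 = 2 WHERE Id = 20;
UPDATE dbo.MyTable WITH (ROWLOCK) SET Col1 = 3 WHERE Id = 30;
COMMIT TRANSACTION;
```

Note that ROWLOCK is only a hint: if the WHERE clauses are not supported by an index, each update can still scan and lock far more rows than intended.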

1 Answer:

Answer 0 (score: 1)

I don't think you'll like my answer, but here are my 2 cents; let me try to explain.

  1. First of all, your row locks may not work: you may end up with a table lock due to lock escalation, which prevents SQL Server from applying row-level locking within your transaction.
  2. SQL likes set-based operations; it performs very well when updating a large dataset in one statement.
  3. I had a similar problem where I needed to update a large number of user transactions, but I had no spare IO in the system. I ended up using an 'ETL-like' update:

    In C#, I use bulk insert to get all the data into the database in one go. Here is my method.

    protected void BulkImport(DataTable table, string tableName)
    {
        if (!CanConnect)
            return;

        var options = SqlBulkCopyOptions.FireTriggers | SqlBulkCopyOptions.CheckConstraints |
                      SqlBulkCopyOptions.UseInternalTransaction;
        using (var bulkCopy = new SqlBulkCopy(_con.ConnectionString, options))
        {
            bulkCopy.DestinationTableName = tableName;
            bulkCopy.BulkCopyTimeout = 30;
            try
            {
                // DataTable is not thread-safe: hold the lock while writing
                // the rows to the server and clearing the table.
                lock (table)
                {
                    bulkCopy.WriteToServer(table);
                    table.Rows.Clear();
                    table.AcceptChanges();
                }
            }
            catch (Exception ex)
            {
                var msg = $"Error: failed writing to {tableName}, the error: {ex.Message}";
                Logger?.Enqueue(msg);
                try
                {
                    // Re-wrap the exception with the contextual message,
                    // preserving the original exception type and inner exception.
                    var wrapped = Activator.CreateInstance(ex.GetType(), msg, ex);
                    Logger?.Enqueue(wrapped as Exception);
                }
                catch
                {
                    Logger?.Enqueue(ex);
                }
            }
            finally
            {
                bulkCopy.Close();
            }
        }
    }
    

    Note that DataTable is not thread-safe, so you need to lock the DataTable while interacting with it (inserting rows, clearing the table).

    I then dump the data into a staging (temp) table and use a MERGE statement to move the data into the database where I need it.

    I'm doing 100k+ records per second across 50 or so tables, and so far I haven't had any performance or deadlock issues.
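    The MERGE step can be sketched roughly as follows; the staging and target table names and columns here are hypothetical, not from the original answer:

```sql
-- One set-based MERGE applies the bulk-copied staging rows to the target:
-- matched rows are updated, new rows are inserted.
MERGE dbo.TargetTable AS t
USING dbo.StagingTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Amount = s.Amount, t.UpdatedAt = s.UpdatedAt
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Amount, UpdatedAt)
    VALUES (s.Id, s.Amount, s.UpdatedAt);
```

    Because this touches the target table in a single statement, the engine takes locks in one consistent pass instead of many small transactions interleaving with each other.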