I am running into a huge performance bottleneck when using Azure Table Storage. My desire is to use tables as a kind of cache, so a long process may produce anywhere from hundreds to several thousand rows of data. The data can then be quickly queried by partition and row keys.
The querying works pretty fast (extremely fast when using only partition and row keys; a bit slower, but still acceptable, when also searching through properties for a particular match).
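For reference, the kind of lookup I mean is a point query addressed by partition and row key, versus a filtered scan over properties. A minimal sketch using the 2.x client (the key values and the "Name" property here are made up for illustration):
// fast: point query addressed directly by PartitionKey + RowKey
TableOperation retrieve = TableOperation.Retrieve<DynamicTableEntity>( "filename.txt", "$item0000000042" );
var entity = (DynamicTableEntity)table.Execute( retrieve ).Result;

// slower but acceptable: scan a partition, filtering on a property value
TableQuery<DynamicTableEntity> query = new TableQuery<DynamicTableEntity>().Where(
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition( "PartitionKey", QueryComparisons.Equal, "filename.txt" ),
        TableOperators.And,
        TableQuery.GenerateFilterCondition( "Name", QueryComparisons.Equal, "42" ) ) );
var matches = table.ExecuteQuery( query ).ToList();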
However, inserting and deleting rows is painfully slow.
Clarification
I want to clarify that even inserting a single batch of 100 items takes several seconds. This is not just a problem of total throughput across thousands of rows; it affects me when I insert only 100.
Here is an example of my code to batch insert into my table (the service caps a batch at 100 entities, all sharing one partition key, hence the Take( 100 ) below):
static async Task BatchInsert( CloudTable table, List<ITableEntity> entities )
{
    int rowOffset = 0;

    while ( rowOffset < entities.Count )
    {
        Stopwatch sw = Stopwatch.StartNew();

        var batch = new TableBatchOperation();

        // next batch
        var rows = entities.Skip( rowOffset ).Take( 100 ).ToList();

        foreach ( var row in rows )
            batch.Insert( row );

        // submit
        await table.ExecuteBatchAsync( batch );

        rowOffset += rows.Count;

        Trace.TraceInformation( "Elapsed time to batch insert " + rows.Count + " rows: " + sw.Elapsed.ToString( "g" ) );
    }
}
I am using batch operations, and here is one sample of the debug output:
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Starting asynchronous request to http://127.0.0.1:10002/devstoreaccount1.
Microsoft.WindowsAzure.Storage Verbose: 4 : b08a07da-fceb-4bec-af34-3beaa340239b: StringToSign = POST..multipart/mixed; boundary=batch_6d86d34c-5e0e-4c0c-8135-f9788ae41748.Tue, 30 Jul 2013 18:48:38 GMT./devstoreaccount1/devstoreaccount1/$batch.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Preparing to write request data.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Writing request data.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Waiting for response.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Response received. Status code = 202, Request ID = , Content-MD5 = , ETag = .
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Response headers were processed successfully, proceeding with the rest of the operation.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Processing response body.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Operation completed successfully.
iisexpress.exe Information: 0 : Elapsed time to batch insert 100 rows: 0:00:00.9351871
As you can see, this example takes almost 1 second to insert 100 rows. The average on my development machine (3.4 GHz quad core) seems to be about 0.8 seconds.
This seems ridiculous to me.
Here is an example of a batch delete operation:
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Starting asynchronous request to http://127.0.0.1:10002/devstoreaccount1.
Microsoft.WindowsAzure.Storage Verbose: 4 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: StringToSign = POST..multipart/mixed; boundary=batch_7e3d229f-f8ac-4aa0-8ce9-ed00cb0ba321.Tue, 30 Jul 2013 18:47:41 GMT./devstoreaccount1/devstoreaccount1/$batch.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Preparing to write request data.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Writing request data.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Waiting for response.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Response received. Status code = 202, Request ID = , Content-MD5 = , ETag = .
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Response headers were processed successfully, proceeding with the rest of the operation.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Processing response body.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Operation completed successfully.
iisexpress.exe Information: 0 : Elapsed time to batch delete 100 rows: 0:00:00.6524402
Consistently over 0.5 seconds.
I have also deployed this to Azure (small instance) and have recorded times of 20 minutes to insert 28,000 rows (roughly 23 rows per second).
I am currently using the 2.1 RC version of the Storage Client Library: MSDN Blog
I must be doing something very wrong. Any thoughts?
Update
I have tried parallelism, with the net effect of an overall speed improvement (and 8 maxed-out logical processors), but still barely 150 rows inserted per second on my development machine.
No better overall that I can tell, and possibly even worse when deployed to Azure (small instance).
I have increased the thread pool and raised the max HTTP connections for my WebRole by following this advice.
I still feel that I am missing something fundamental that is limiting my inserts/deletes to 150 ROPS.
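For reference, that tuning amounts to something like the following at role startup (a sketch only; the exact numbers here are illustrative, not the values from the linked advice):
// illustrative values; the right numbers depend on the workload
ThreadPool.SetMinThreads( 100, 100 );             // avoid slow thread pool ramp-up
ServicePointManager.DefaultConnectionLimit = 100; // the default is only 2 concurrent HTTP connections per endpoint
ServicePointManager.UseNagleAlgorithm = false;    // Nagle adds latency on small payloads
ServicePointManager.Expect100Continue = false;    // skip the Expect: 100-continue round trip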
Update 2
After analyzing some diagnostics logs from my small instance deployed to Azure (using the new logging built into the 2.1 RC Storage Client), I have a little more information.
The first storage client log for a batch insert is at 635109046781264034 ticks:
caf06fca-1857-4875-9923-98979d850df3: Starting synchronous request to https://?.table.core.windows.net/.; TraceSource 'Microsoft.WindowsAzure.Storage' event
Then almost 3 seconds later (a gap of 28,840,280 ticks, about 2.88 seconds at 10 million ticks per second), I see this log at 635109046810104314 ticks:
caf06fca-1857-4875-9923-98979d850df3: Preparing to write request data.; TraceSource 'Microsoft.WindowsAzure.Storage' event
Then a few more logs, taking a combined 0.15 seconds, ending with this one at 635109046811645418 ticks, which wraps up the insert:
caf06fca-1857-4875-9923-98979d850df3: Operation completed successfully.; TraceSource 'Microsoft.WindowsAzure.Storage' event
I am not sure what to make of this, but it is very consistent across the batch insert logs I examined.
Update 3
Here is the code used to batch insert in parallel. In this code, just for testing, I am ensuring that each batch of 100 is inserted into a unique partition.
static async Task BatchInsert( CloudTable table, List<ITableEntity> entities )
{
    int rowOffset = 0;

    var tasks = new List<Task>();

    while ( rowOffset < entities.Count )
    {
        // next batch
        var rows = entities.Skip( rowOffset ).Take( 100 ).ToList();

        rowOffset += rows.Count;

        string partition = "$" + rowOffset.ToString();

        var task = Task.Factory.StartNew( () =>
        {
            Stopwatch sw = Stopwatch.StartNew();

            var batch = new TableBatchOperation();

            foreach ( var row in rows )
            {
                row.PartitionKey = row.PartitionKey + partition;
                batch.InsertOrReplace( row );
            }

            // submit
            table.ExecuteBatch( batch );

            // note: "F2" is not a valid TimeSpan format string (it throws), so format the total seconds instead
            Trace.TraceInformation( "Elapsed time to batch insert " + rows.Count + " rows: " + sw.Elapsed.TotalSeconds.ToString( "F2" ) );
        } );

        tasks.Add( task );
    }

    await Task.WhenAll( tasks );
}
As noted above, this does help bring down the total time to insert thousands of rows, but each batch of 100 still takes several seconds.
Update 4
So, I created a brand new Azure Cloud Service project using VS2012.2, with the Web Role as a single-page template (the new one with the TODO sample in it).
This is straight out of the box, no new NuGet packages or anything. It uses the Storage Client library v2 by default, together with the EDM and associated libraries v5.2.
I simply modified the HomeController code to the following (using some random data to simulate the columns I want to store in the real app):
public ActionResult Index( string returnUrl )
{
    ViewBag.ReturnUrl = returnUrl;

    Task.Factory.StartNew( () =>
    {
        TableTest();
    } );

    return View();
}

static Random random = new Random();
static double RandomDouble( double maxValue )
{
    // the Random class is not thread safe!
    lock ( random ) return random.NextDouble() * maxValue;
}

void TableTest()
{
    // Retrieve storage account from connection-string
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting( "CloudStorageConnectionString" ) );

    // create the table client
    CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

    // retrieve the table
    CloudTable table = tableClient.GetTableReference( "test" );

    // create it if it doesn't already exist
    if ( table.CreateIfNotExists() )
    {
        // the container is new and was just created
        Trace.TraceInformation( "Created table named " + "test" );
    }

    Stopwatch sw = Stopwatch.StartNew();

    // create a bunch of objects
    int count = 28000;
    List<DynamicTableEntity> entities = new List<DynamicTableEntity>( count );

    for ( int i = 0; i < count; i++ )
    {
        var row = new DynamicTableEntity()
        {
            PartitionKey = "filename.txt",
            RowKey = string.Format( "$item{0:D10}", i ),
        };

        row.Properties.Add( "Name", EntityProperty.GeneratePropertyForString( i.ToString() ) );
        row.Properties.Add( "Data", EntityProperty.GeneratePropertyForString( string.Format( "data{0}", i ) ) );
        row.Properties.Add( "Value1", EntityProperty.GeneratePropertyForDouble( RandomDouble( 10000 ) ) );
        row.Properties.Add( "Value2", EntityProperty.GeneratePropertyForDouble( RandomDouble( 10000 ) ) );
        row.Properties.Add( "Value3", EntityProperty.GeneratePropertyForDouble( RandomDouble( 1000 ) ) );
        row.Properties.Add( "Value4", EntityProperty.GeneratePropertyForDouble( RandomDouble( 90 ) ) );
        row.Properties.Add( "Value5", EntityProperty.GeneratePropertyForDouble( RandomDouble( 180 ) ) );
        row.Properties.Add( "Value6", EntityProperty.GeneratePropertyForDouble( RandomDouble( 1000 ) ) );

        entities.Add( row );
    }

    Trace.TraceInformation( "Elapsed time to create record rows: " + sw.Elapsed.ToString() );

    sw = Stopwatch.StartNew();

    Trace.TraceInformation( "Inserting rows" );

    // batch our inserts (100 max)
    BatchInsert( table, entities ).Wait();

    Trace.TraceInformation( "Successfully inserted " + entities.Count + " rows into table " + table.Name );
    Trace.TraceInformation( "Elapsed time: " + sw.Elapsed.ToString() );

    Trace.TraceInformation( "Done" );
}
static async Task BatchInsert( CloudTable table, List<DynamicTableEntity> entities )
{
    int rowOffset = 0;

    var tasks = new List<Task>();

    while ( rowOffset < entities.Count )
    {
        // next batch
        var rows = entities.Skip( rowOffset ).Take( 100 ).ToList();

        rowOffset += rows.Count;

        string partition = "$" + rowOffset.ToString();

        var task = Task.Factory.StartNew( () =>
        {
            var batch = new TableBatchOperation();

            foreach ( var row in rows )
            {
                row.PartitionKey = row.PartitionKey + partition;
                batch.InsertOrReplace( row );
            }

            // submit
            table.ExecuteBatch( batch );

            Trace.TraceInformation( "Inserted batch for partition " + partition );
        } );

        tasks.Add( task );
    }

    await Task.WhenAll( tasks );
}
And this is the output I get:
iisexpress.exe Information: 0 : Elapsed time to create record rows: 00:00:00.0719448
iisexpress.exe Information: 0 : Inserting rows
iisexpress.exe Information: 0 : Inserted batch for partition $100
...
iisexpress.exe Information: 0 : Successfully inserted 28000 rows into table test
iisexpress.exe Information: 0 : Elapsed time: 00:01:07.1398928
That is a bit faster than my other app, at over 460 ROPS. Still unacceptable. And again in this test, my CPU (8 logical processors) is nearly maxed out, while disk access sits nearly idle.
I am at a loss as to what is wrong.
Update 5
Round after round of fiddling and tweaking has yielded some improvement, but I cannot get much faster than 500-700 (ish) ROPS doing batch InsertOrReplace operations (in batches of 100).
This test was done in the Azure cloud, using a small instance (or two). Based on the comments below, I am resigned to the fact that purely local testing will be slow at best.
Here are a couple of examples. Each example has its very own PartitionKey:
Successfully inserted 904 rows into table org1; TraceSource 'w3wp.exe' event
Elapsed time: 00:00:01.3401031; TraceSource 'w3wp.exe' event
Successfully inserted 4130 rows into table org1; TraceSource 'w3wp.exe' event
Elapsed time: 00:00:07.3522871; TraceSource 'w3wp.exe' event
Successfully inserted 28020 rows into table org1; TraceSource 'w3wp.exe' event
Elapsed time: 00:00:51.9319217; TraceSource 'w3wp.exe' event
Maybe it is my MSDN Azure account that has some performance cap? I do not know.
At this point I think I am done with this. Maybe it is fast enough for my purposes, or maybe I will follow a different path.
Conclusion
All of the answers below are good!
For my specific problem, I have been able to see speeds up to 2k ROPS on a small Azure instance, more typically around 1k. Since I need to keep costs down (and therefore instance sizes down), this defines what I will be able to use tables for.
Thanks everyone for all the help.
Answer 0 (score: 13)
Basic concept - use parallelism to speed this up.
Step 1 - give your thread pool enough threads to pull this off - ThreadPool.SetMinThreads(1024, 256);
Step 2 - use partitions. I use GUIDs as IDs, and I use the last two characters to split into 256 unique partitions (actually I group those into N subsets, 48 partitions in my case).
Step 3 - insert using tasks; I use an object pool for the table refs (the release side of the pool is sketched after the GetTableRef code below).
public List<T> InsertOrUpdate(List<T> items)
{
    var subLists = SplitIntoPartitionedSublists(items);

    var tasks = new List<Task>();

    foreach (var subList in subLists)
    {
        List<T> list = subList;
        var task = Task.Factory.StartNew(() =>
        {
            var batchOp = new TableBatchOperation();
            var tableRef = GetTableRef();

            foreach (var item in list)
            {
                batchOp.Add(TableOperation.InsertOrReplace(item));
            }

            tableRef.ExecuteBatch(batchOp);
            ReleaseTableRef(tableRef);
        });

        tasks.Add(task);
    }

    Task.WaitAll(tasks.ToArray());

    return items;
}
private IEnumerable<List<T>> SplitIntoPartitionedSublists(IEnumerable<T> items)
{
    var itemsByPartion = new Dictionary<string, List<T>>();

    //split items into partitions
    foreach (var item in items)
    {
        var partition = GetPartition(item);
        if (itemsByPartion.ContainsKey(partition) == false)
        {
            itemsByPartion[partition] = new List<T>();
        }
        item.PartitionKey = partition;
        item.ETag = "*";
        itemsByPartion[partition].Add(item);
    }

    //split into subsets
    var subLists = new List<List<T>>();
    foreach (var partition in itemsByPartion.Keys)
    {
        var partitionItems = itemsByPartion[partition];
        for (int i = 0; i < partitionItems.Count; i += MaxBatch)
        {
            subLists.Add(partitionItems.Skip(i).Take(MaxBatch).ToList());
        }
    }

    return subLists;
}
private void BuildPartitionIndentifiers(int partitonCount)
{
    var chars = new char[] { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' }.ToList();
    var keys = new List<string>();

    for (int i = 0; i < chars.Count; i++)
    {
        var keyA = chars[i];
        for (int j = 0; j < chars.Count; j++)
        {
            var keyB = chars[j];
            keys.Add(string.Concat(keyA, keyB));
        }
    }

    var keySetMaxSize = Math.Max(1, (int)Math.Floor((double)keys.Count / ((double)partitonCount)));
    var keySets = new List<List<string>>();

    if (partitonCount > keys.Count)
    {
        partitonCount = keys.Count;
    }

    //Build the key sets
    var index = 0;
    while (index < keys.Count)
    {
        var keysSet = keys.Skip(index).Take(keySetMaxSize).ToList();
        keySets.Add(keysSet);
        index += keySetMaxSize;
    }

    //build the lookups and datatable for each key set
    _partitions = new List<string>();
    for (int i = 0; i < keySets.Count; i++)
    {
        var partitionName = String.Concat("subSet_", i);
        foreach (var key in keySets[i])
        {
            _partitionByKey[key] = partitionName;
        }
        _partitions.Add(partitionName);
    }
}
private string GetPartition(T item)
{
    var partKey = item.Id.ToString().Substring(34, 2);
    return _partitionByKey[partKey];
}

private string GetPartition(Guid id)
{
    var partKey = id.ToString().Substring(34, 2);
    return _partitionByKey[partKey];
}

private CloudTable GetTableRef()
{
    CloudTable tableRef = null;

    //try to pop a table ref out of the stack
    var foundTableRefInStack = _tableRefs.TryPop(out tableRef);

    if (foundTableRefInStack == false)
    {
        //no table ref available must create a new one
        var client = _account.CreateCloudTableClient();
        client.RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(1), 4);
        tableRef = client.GetTableReference(_sTableName);
    }

    //ensure table is created
    if (_bTableCreated != true)
    {
        tableRef.CreateIfNotExists();
        _bTableCreated = true;
    }

    return tableRef;
}
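The ReleaseTableRef counterpart is not shown in the answer; given the TryPop call above, _tableRefs is presumably a ConcurrentStack<CloudTable>, so a plausible sketch is:
// assumed pool field and release method - not part of the original answer
private readonly ConcurrentStack<CloudTable> _tableRefs = new ConcurrentStack<CloudTable>();

private void ReleaseTableRef(CloudTable tableRef)
{
    //push the ref back onto the stack so the next batch task can reuse it
    _tableRefs.Push(tableRef);
}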
Results - 19-22kops storage account maximum
Hit me up if you are interested in the full source.
Need more? Use multiple storage accounts!
This is from months of trial and error, testing, and beating my head against a desk. I really hope it helps.
Answer 1 (score: 10)
Ok, 3rd answer a charm?
A few things on the storage emulator - from a friend who did some serious digging into it.
"Everything is hitting a single table in a single database (more partitions do not affect anything). Each table insert operation is at least 3 SQL operations. Every batch is inside a transaction. Depending on the transaction isolation level, those batches will have limited ability to execute in parallel.
Serial batches should be faster than individual inserts due to SQL Server behavior (individual inserts are essentially little transactions that each flush to disk, while a real transaction flushes to disk as a group)."
I.e. using multiple partitions does not affect performance on the emulator, while it does against real Azure storage.
Also enable logging and check your logs a little - c:\users\username\appdata\local\developmentstorage
A batch size of 100 seems to offer the best real-world performance; turn off Nagle, turn off Expect 100, and beef up the connection limit.
Also make sure you are not accidentally inserting duplicates - that will cause an error and slow everything way, way down.
And test against real storage. There is a pretty decent library out there that handles most of this for you - http://www.nuget.org/packages/WindowsAzure.StorageExtensions/ - just make sure you actually call ToList on the adds, as it will not really execute until enumerated. Also, that library uses DynamicTableEntity, so there is a small performance hit for the serialization, but it does allow you to use pure POCO objects with no TableEntity stuff.
~ JT
Answer 2 (score: 6)
After going through lots of pain and experimentation, I was finally able to get optimal throughput for a single table partition (2,000+ batch write operations per second) and much better throughput for a whole storage account (3,500+ batch write operations per second) with Azure Table Storage. I tried all the different approaches, but setting the .NET connection limit programmatically (I tried the configuration sample, but it did not work for me) solved the problem, based on a white paper provided by Microsoft, as shown below:
ServicePoint tableServicePoint = ServicePointManager
    .FindServicePoint(_StorageAccount.TableEndpoint);

//This is a notorious issue that has affected many developers. By default, the value
//for the number of .NET HTTP connections is 2.
//This implies that only 2 concurrent connections can be maintained. This manifests itself
//as "underlying connection was closed..." when the number of concurrent requests is
//greater than 2.

tableServicePoint.ConnectionLimit = 1000;
Anyone getting 20K+ batch write operations per storage account, please do share your experience.
Answer 3 (score: 5)
For more fun, here is a new answer - an isolated, independent test that pulls some amazing numbers for write performance in production, and does a lot better at avoiding IO blocking and connection management. I am very interested to see how this works for you, as we are getting ridiculous write speeds (> 7kps).
web.config:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="48"/>
  </connectionManagement>
</system.net>
For the test I used parameters based on volume; so, for example, 25,000 items, 24 partitions, and a batch size of 100 seems to always be the best, along with a ref count of 20. This uses TPL Dataflow (http://www.nuget.org/packages/Microsoft.Tpl.Dataflow/) for the BufferBlock, which provides a nice awaitable, thread-safe pull of table references.
public class DyanmicBulkInsertTestPooledRefsAndAsynch : WebTest, IDynamicWebTest
{
    private int _itemCount;
    private int _partitionCount;
    private int _batchSize;
    private List<TestTableEntity> _items;
    private GuidIdPartitionSplitter<TestTableEntity> _partitionSplitter;
    private string _tableName;
    private CloudStorageAccount _account;
    private CloudTableClient _tableClient;
    private Dictionary<string, List<TestTableEntity>> _itemsByParition;
    private int _maxRefCount;
    private BufferBlock<CloudTable> _tableRefs;

    public DyanmicBulkInsertTestPooledRefsAndAsynch()
    {
        Properties = new List<ItemProp>();
        Properties.Add(new ItemProp("ItemCount", typeof(int)));
        Properties.Add(new ItemProp("PartitionCount", typeof(int)));
        Properties.Add(new ItemProp("BatchSize", typeof(int)));
        Properties.Add(new ItemProp("MaxRefs", typeof(int)));
    }

    public List<ItemProp> Properties { get; set; }

    public void SetProps(Dictionary<string, object> propValuesByPropName)
    {
        _itemCount = (int)propValuesByPropName["ItemCount"];
        _partitionCount = (int)propValuesByPropName["PartitionCount"];
        _batchSize = (int)propValuesByPropName["BatchSize"];
        _maxRefCount = (int)propValuesByPropName["MaxRefs"];
    }

    protected override void SetupTest()
    {
        base.SetupTest();

        ThreadPool.SetMinThreads(1024, 256);
        ServicePointManager.DefaultConnectionLimit = 256;
        ServicePointManager.UseNagleAlgorithm = false;
        ServicePointManager.Expect100Continue = false;

        _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
        _tableClient = _account.CreateCloudTableClient();
        _tableName = "testtable" + new Random().Next(100000);

        //create the refs
        _tableRefs = new BufferBlock<CloudTable>();
        for (int i = 0; i < _maxRefCount; i++)
        {
            _tableRefs.Post(_tableClient.GetTableReference(_tableName));
        }

        var tableRefTask = GetTableRef();
        tableRefTask.Wait();
        var tableRef = tableRefTask.Result;

        tableRef.CreateIfNotExists();
        ReleaseRef(tableRef);

        _items = TestUtils.GenerateTableItems(_itemCount);
        _partitionSplitter = new GuidIdPartitionSplitter<TestTableEntity>();
        _partitionSplitter.BuildPartitions(_partitionCount);

        _items.ForEach(o =>
        {
            o.ETag = "*";
            o.Timestamp = DateTime.Now;
            o.PartitionKey = _partitionSplitter.GetPartition(o);
        });

        _itemsByParition = _partitionSplitter.SplitIntoPartitionedSublists(_items);
    }
    private async Task<CloudTable> GetTableRef()
    {
        return await _tableRefs.ReceiveAsync();
    }

    private void ReleaseRef(CloudTable tableRef)
    {
        _tableRefs.Post(tableRef);
    }

    protected override void ExecuteTest()
    {
        Task.WaitAll(_itemsByParition.Keys.Select(parition => Task.Factory.StartNew(() => InsertParitionItems(_itemsByParition[parition]))).ToArray());
    }

    private void InsertParitionItems(List<TestTableEntity> items)
    {
        var tasks = new List<Task>();

        for (int i = 0; i < items.Count; i += _batchSize)
        {
            //capture the loop variable for the closure
            int i1 = i;
            var task = Task.Factory.StartNew(async () =>
            {
                var batchItems = items.Skip(i1).Take(_batchSize).ToList();

                if (batchItems.Select(o => o.PartitionKey).Distinct().Count() > 1)
                {
                    throw new Exception("Multiple partitions batch");
                }

                var batchOp = new TableBatchOperation();
                batchItems.ForEach(batchOp.InsertOrReplace);

                //fixed: the original "GetTableRef.Result()" does not compile; await the pooled ref instead
                var tableRef = await GetTableRef();
                tableRef.ExecuteBatch(batchOp);
                ReleaseRef(tableRef);
            }).Unwrap(); //Unwrap so WaitAll waits for the async body, not just its start

            tasks.Add(task);
        }

        Task.WaitAll(tasks.ToArray());
    }
    protected override void CleanupTest()
    {
        var tableRefTask = GetTableRef();
        tableRefTask.Wait();
        var tableRef = tableRefTask.Result;

        tableRef.DeleteIfExists();
        ReleaseRef(tableRef);
    }
}
We are currently working on a version that can handle multiple storage accounts, hoping to get some crazy speeds. Also, we run these on 8-core virtual machines for large datasets, but with the new non-blocking IO it should run well on a limited VM too. Good luck!
public class SimpleGuidIdPartitionSplitter<T> where T : IUniqueId
{
    private ConcurrentDictionary<string, string> _partitionByKey = new ConcurrentDictionary<string, string>();
    private List<string> _partitions;
    private bool _bPartitionsBuilt;

    public SimpleGuidIdPartitionSplitter()
    {
    }

    public void BuildPartitions(int iPartCount)
    {
        BuildPartitionIndentifiers(iPartCount);
    }

    public string GetPartition(T item)
    {
        if (_bPartitionsBuilt == false)
        {
            throw new Exception("Partitions Not Built");
        }

        var partKey = item.Id.ToString().Substring(34, 2);
        return _partitionByKey[partKey];
    }

    public string GetPartition(Guid id)
    {
        if (_bPartitionsBuilt == false)
        {
            throw new Exception("Partitions Not Built");
        }

        var partKey = id.ToString().Substring(34, 2);
        return _partitionByKey[partKey];
    }

    #region Helpers
    private void BuildPartitionIndentifiers(int partitonCount)
    {
        var chars = new char[] { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' }.ToList();
        var keys = new List<string>();

        for (int i = 0; i < chars.Count; i++)
        {
            var keyA = chars[i];
            for (int j = 0; j < chars.Count; j++)
            {
                var keyB = chars[j];
                keys.Add(string.Concat(keyA, keyB));
            }
        }

        var keySetMaxSize = Math.Max(1, (int)Math.Floor((double)keys.Count / ((double)partitonCount)));
        var keySets = new List<List<string>>();

        if (partitonCount > keys.Count)
        {
            partitonCount = keys.Count;
        }

        //Build the key sets
        var index = 0;
        while (index < keys.Count)
        {
            var keysSet = keys.Skip(index).Take(keySetMaxSize).ToList();
            keySets.Add(keysSet);
            index += keySetMaxSize;
        }

        //build the lookups and datatable for each key set
        _partitions = new List<string>();
        for (int i = 0; i < keySets.Count; i++)
        {
            var partitionName = String.Concat("subSet_", i);
            foreach (var key in keySets[i])
            {
                _partitionByKey[key] = partitionName;
            }
            _partitions.Add(partitionName);
        }

        _bPartitionsBuilt = true;
    }
    #endregion
}
internal static List<TestTableEntity> GenerateTableItems(int count)
{
    var items = new List<TestTableEntity>();
    var random = new Random();

    for (int i = 0; i < count; i++)
    {
        var itemId = Guid.NewGuid();

        items.Add(new TestTableEntity()
        {
            Id = itemId,
            TestGuid = Guid.NewGuid(),
            RowKey = itemId.ToString(),
            TestBool = true,
            TestDateTime = DateTime.Now,
            TestDouble = random.Next() * 1000000,
            TestInt = random.Next(10000),
            TestString = Guid.NewGuid().ToString(),
        });
    }

    var dupRowKeys = items.GroupBy(o => o.RowKey).Where(o => o.Count() > 1).Select(o => o.Key).ToList();
    if (dupRowKeys.Count > 0)
    {
        throw new Exception("Dupicate Row Keys");
    }

    return items;
}
One more thing - your timings, and the way the framework is behaving, point to this: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/08/08/net-clients-encountering-port-exhaustion-after-installing-kb2750149-or-kb2805227.aspx