We have an enterprise application that uses Microsoft SQL Server as its database back end. We have had a fair few cases where customers have grown the application into a huge database, and some of the queries being run cause locking and performance problems for themselves and for other users.

We have tried to apply as many indexes as possible and to tune every query to the limit, but we have one application that must fit many different customer types, so it is hard to create a one-size-fits-all solution. We do not have the resources to do customer-specific indexing/performance work for every customer.

We know that the main queries causing trouble are the ones generated to drive reports and KPIs.

My question is whether there is a way to spread the application's load so that day-to-day use is not hindered by report/KPI generation. Perhaps we could mirror/replicate the database on an ongoing basis, so that day-to-day operations go to SQL entity A while data-intensive queries go to SQL entity B. The data-intensive queries would then have no impact on day-to-day transactions, and we could queue the queries hitting SQL entity B.

In this scenario SQL entities A and B would need to stay in sync at all times, but SQL entity B would always be read-only.

Can anyone suggest any avenues we could explore to achieve this? Or should I be considering another approach to get us a performance win?

Thanks
Answer 0 (score: 3)
It seems you could use any of the replication options and be fine. At one of my previous jobs we used Log Shipping (http://technet.microsoft.com/en-us/library/ms187103.aspx) for exactly this purpose.

You can also look at the replication types (http://technet.microsoft.com/en-us/library/bb677158.aspx) and see which one suits you best, since you can do more than just reporting against the secondary database.

If I remember my initial experience correctly, log shipping is very easy to set up, so you could start there.
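If you want to see the moving parts before committing to the full msdb log-shipping jobs, the core cycle is just a log backup and a restore. Below is a minimal C# sketch, assuming a database called AppDb, a share both servers can reach, and a secondary already initialised from a full backup restored WITH STANDBY - all names here are illustrative, and a real deployment would use the sp_add_log_shipping_* procedures and SQL Agent jobs instead.

using System.Data.SqlClient;

public static class LogShipSketch
{
    // One shipping cycle: back up the log on the primary, restore it on the
    // secondary. Database name, share path and connection strings are
    // illustrative assumptions, not values from the question.
    public static void ShipOnce(string primaryCnct, string secondaryCnct, string share)
    {
        using (var src = new SqlConnection(primaryCnct))
        {
            src.Open();
            using (var backup = new SqlCommand(
                "BACKUP LOG [AppDb] TO DISK = @f WITH INIT;", src))
            {
                backup.Parameters.AddWithValue("@f", share + @"\AppDb.trn");
                backup.ExecuteNonQuery();
            }
        }
        using (var dst = new SqlConnection(secondaryCnct))
        {
            dst.Open();
            // STANDBY leaves the secondary readable between restores - this
            // is what lets reports run against "SQL entity B".
            using (var restore = new SqlCommand(
                "RESTORE LOG [AppDb] FROM DISK = @f WITH STANDBY = @undo;", dst))
            {
                restore.Parameters.AddWithValue("@f", share + @"\AppDb.trn");
                restore.Parameters.AddWithValue("@undo", share + @"\AppDb_undo.dat");
                restore.ExecuteNonQuery();
            }
        }
    }
}

Note that each restore needs exclusive access to the secondary, so report connections are briefly dropped on every cycle - that is the inherent trade-off of log shipping compared with transactional replication.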
Answer 1 (score: 2)
Ah... performance tuning SQL Server et al. My favourite thing!

"Can anyone suggest any avenues we could explore to achieve this?"

Based on the information you have provided, I would partition the data vertically: maintain one database for the actual OLTP (CRUD transactions) (server A) and one for the KPIs (server B).

For replication I would use transactional replication - done correctly, latency will be sub-second, and I cannot think of a practical scenario where that is not good enough. In reality most reporting is done against the end of the previous day, and "real-time" usually means within the last 5 minutes.

To manage the replication process I would start with a simple console application, expecting to extend it in due course as requirements grow. The console application should use the following namespaces (on second thoughts, a later version may be needed for SQL 2012):
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Replication;
With a console application you can manage publications, subscriptions and any tracer tokens from a single interface. It will be a PAIN to configure (all those permissions, passwords and paths), but once it is up and running you will be able to optimise the transactional database for transactions and the reporting server for reports.

I would use a replication topology with one subscription per large table and a single subscription for everything else (lookup tables, views, SPs). I would replicate primary keys but not constraints, table references or triggers (relying on the integrity of the source database). Nor should you replicate indexes - they can be configured/optimised by hand on the reporting server.

You can also pick only the articles that are relevant to the KPIs (no need to replicate text, varchar(max) columns, etc.).

Some helper functions are posted below to help you get going.
"Or should I be considering another approach to get us a performance win?"

In my humble experience there is always something that can be done to improve performance. It comes down to time -> cost -> benefit. Sometimes a small compromise on functionality will buy you a lot of performance.

The devil is in the detail, but with that caveat...

Further random thoughts

You have identified an infrastructure problem - mixing OLTP and BI/reporting. I am not clear on your experience or on how bad your performance problems are; replication is definitely the right approach, but if you are in "firefighting" mode here are some things you could try.
KPIs are expensive because of their order by clauses. Make sure your clustered indexes are correct (REM: they do not have to be on the primary key), and once you have the data, try doing the sorting on the client.

More information about your configuration would also be useful - when you say huge, how big? How big/what type of disks, what RAM spec, and so on? The reason I ask is... you could spend the next 40 working days ($500+/day?) tuning - and that would buy you quite a lot of hardware! More RAM, more disks, faster disks - SSDs for tempdb or index partitions. In other words: you may be asking too much of the hardware (and your boss may be asking too much of you).
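To illustrate the "sort on the client" point, a rough C# sketch (table and column names invented): fetch the KPI rows without an ORDER BY and let the application sort them, so server A never builds the sort into the query plan.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

public static class ClientSortSketch
{
    // Fetch KPI rows with no ORDER BY, then sort in the application so the
    // server never pays for the sort. The schema here is invented.
    public static List<Tuple<string, decimal>> TopCustomers(string cnct)
    {
        var rows = new List<Tuple<string, decimal>>();
        using (var cn = new SqlConnection(cnct))
        using (var cmd = new SqlCommand(
            "SELECT CustomerName, SUM(Amount) FROM dbo.Orders GROUP BY CustomerName", cn))
        {
            cn.Open();
            using (var rdr = cmd.ExecuteReader())
                while (rdr.Read())
                    rows.Add(Tuple.Create(rdr.GetString(0), rdr.GetDecimal(1)));
        }
        // The expensive sort happens here, on the client, not on server A.
        return rows.OrderByDescending(r => r.Item2).ToList();
    }
}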
Next, you describe an enterprise application - does that come with an Enterprise SQL Server licence? If so you are in luck: you can create schema-bound partitioned views and delegate queries to the "correct" server. There are issues with this model - namely joins - but it does give you an effective alternative option.
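For illustration, the shape of such a view, with invented linked-server and table names - each member table carries a CHECK constraint on the partitioning column so the optimiser can prune the branch it does not need. Distributed partitioned views have a long list of requirements, so treat this as the shape rather than a recipe.

public static class PartitionedViewSketch
{
    // Two member tables on linked servers SQLA and SQLB (placeholders), each
    // constrained by a CHECK on the partitioning column, unioned behind a
    // single view name.
    public const string Sql = @"
CREATE VIEW dbo.AllOrders
AS
SELECT OrderId, OrderYear, CustomerId, Amount
  FROM SQLA.AppDb.dbo.Orders_Current   -- CHECK (OrderYear >= 2014)
UNION ALL
SELECT OrderId, OrderYear, CustomerId, Amount
  FROM SQLB.AppDb.dbo.Orders_Archive;  -- CHECK (OrderYear < 2014)";
}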
Replication code
I knew I had it somewhere. Below are some helper functions for RMO that you may find useful when getting started with replication. At some point in the past this was live code, but probably longer ago than I care to admit - please treat it as pseudo.

(Happy to help get it up and running with you, if you like.)
public static class RMOHelper
{
public static void PreparePublicationDb(MyServer Src, MyServer Dist)
{
ReplicationDatabase publicationDb = new ReplicationDatabase(Src.Database, Src.ServerConnection);
if (publicationDb.LoadProperties())
{
if (!publicationDb.EnabledTransPublishing)
{
publicationDb.EnabledTransPublishing = true;
}
// If the Log Reader Agent does not exist, create it.
if (!publicationDb.LogReaderAgentExists)
{
// Specify the Windows account under which the agent job runs.
// This account will be used for the local connection to the
// Distributor and all agent connections that use Windows Authentication.
publicationDb.LogReaderAgentProcessSecurity.Login = Dist.WinUId;
publicationDb.LogReaderAgentProcessSecurity.Password = Dist.WinPwd;
// Explicitly set authentication mode for the Publisher connection
// to the default value of Windows Authentication.
publicationDb.LogReaderAgentPublisherSecurity.WindowsAuthentication = true;
// Create the Log Reader Agent job.
publicationDb.CreateLogReaderAgent();
DeleteJobAgentSchedule(publicationDb.LogReaderAgentName);
}
}
else
{
throw new ApplicationException(String.Format(
"The {0} database does not exist at {1}.",
publicationDb,
Src.ServerName));
}
}
public static TransPublication PrepareTransPublication(MyServer Src, MyServer Dist, string publicationName)
{
// Set the required properties for the transactional publication.
TransPublication publication = new TransPublication();
publication.ConnectionContext = Src.ServerConnection;
publication.Name = publicationName;
publication.DatabaseName = Src.Database;
// Specify a transactional publication (the default).
publication.Type = PublicationType.Transactional;
publication.ConflictRetention = 4;
publication.RetentionPeriod = 72;
// Activate the publication so that we can add subscriptions.
publication.Status = State.Active;
// Enable push and pull subscriptions and independent Distribition Agents.
publication.Attributes = PublicationAttributes.AllowPull|PublicationAttributes.AllowPush|PublicationAttributes.IndependentAgent;
//publication.Attributes &= PublicationAttributes.AllowSyncToAlternate;
// Specify the Windows account under which the Snapshot Agent job runs.
// This account will be used for the local connection to the
// Distributor and all agent connections that use Windows Authentication.
publication.SnapshotGenerationAgentProcessSecurity.Login = Dist.WinUId;
publication.SnapshotGenerationAgentProcessSecurity.Password = Dist.WinPwd;
// Explicitly set the security mode for the Publisher connection
// Windows Authentication (the default).
publication.SnapshotGenerationAgentPublisherSecurity.WindowsAuthentication = true;
publication.AltSnapshotFolder = @"\\192.168.35.4\repldata\";
if (!publication.IsExistingObject)
{
// Create the transactional publication.
publication.Create();
// Create a Snapshot Agent job for the publication.
publication.CreateSnapshotAgent();
// DeleteJobAgentSchedule(ByVal jobID As Guid) As Boolean
}
else
{
//throw new ApplicationException(String.Format(
// "The {0} publication already exists.", publicationName));
}
return publication;
}
public static TransArticle PrepareTransArticle(TransPublication TransPub, Happy.MI.Replication.Article Article)
{
TransArticle TransArticle = new TransArticle();
TransArticle.ConnectionContext = TransPub.ConnectionContext;
TransArticle.Name = Article.Name;
TransArticle.DatabaseName = TransPub.DatabaseName;
TransArticle.SourceObjectName = Article.Name;
TransArticle.SourceObjectOwner = "dbo";
TransArticle.PublicationName = TransPub.Name;
//article.Type = ArticleOptions.LogBased;
//article.FilterClause = "DiscontinuedDate IS NULL";
// Ensure that we create the schema owner at the Subscriber.
if (TransArticle.IsExistingObject)
{
//do somethinbg??
}
else
{
TransArticle.SchemaOption |= CreationScriptOptions.Schema;
TransArticle.SchemaOption |= CreationScriptOptions.AttemptToDropNonArticleDependencies;
if (!Article.ObjectType.HasValue)
{
throw new Exception(string.Format("unknown schema object type for trans article {0}", Article.Name));
}
if (Article.ObjectType.Value== DataAccessADO.ObjectType.USER_TABLE)
{
TransArticle.SchemaOption |= CreationScriptOptions.ClusteredIndexes;
TransArticle.SchemaOption |= CreationScriptOptions.DriChecks;
TransArticle.SchemaOption |= CreationScriptOptions.DriDefaults;
TransArticle.SchemaOption |= CreationScriptOptions.DriPrimaryKey;
TransArticle.SchemaOption |= CreationScriptOptions.DriUniqueKeys;
//TransArticle.SchemaOption |= CreationScriptOptions.ExtendedProperties;
//TransArticle.SchemaOption |= CreationScriptOptions.NonClusteredIndexes;
TransArticle.Type = ArticleOptions.LogBased;
TransArticle.AddReplicatedColumns(Article.IncludedColumns.ToArray());
}
else if (Article.ObjectType.Value == DataAccessADO.ObjectType.VIEW)
{
TransArticle.Type= ArticleOptions.ViewSchemaOnly;
}
else if (Article.ObjectType.Value == DataAccessADO.ObjectType.SQL_SCALAR_FUNCTION)
{
TransArticle.Type = ArticleOptions.FunctionSchemaOnly;
}
else if (Article.ObjectType.Value == DataAccessADO.ObjectType.SQL_STORED_PROCEDURE)
{
TransArticle.Type = ArticleOptions.ProcSchemaOnly;
}
else
{
throw new Exception(string.Format("unsupported schema object type {0}", Article.ObjectType.Value));
}
// Create the article.
TransArticle.Create();
}
return TransArticle;
}
public static TransSubscription PrepareSubscription(TransPublication TransPub, MyServer Src, MyServer Dest, MyServer Dist)
{
// Define the push subscription.
//TransPullSubscription subscription = new TransPullSubscription();
//subscription.ConnectionContext = Dest.ServerConnection;
//subscription.PublisherName = Src.ServerName;
//subscription.PublicationName = TransPub.Name;
//subscription.PublicationDBName = Src.Database;
//subscription.DatabaseName = Dest.Database;
TransSubscription subscription = new TransSubscription();
subscription.ConnectionContext = TransPub.ConnectionContext;
subscription.PublicationName = TransPub.Name;
subscription.DatabaseName = TransPub.DatabaseName;
subscription.SubscriptionDBName = Dest.Database;
subscription.SubscriberName = Dest.ServerName;
subscription.LoadProperties();
//subscription.Remove();
// Specify the Windows login credentials for the Distribution Agent job.
subscription.SynchronizationAgentProcessSecurity.Login = Dist.WinUId;
subscription.SynchronizationAgentProcessSecurity.Password = Dist.WinPwd;
if(!subscription.IsExistingObject){
// Create the push subscription.
// Subscriptions to transactional publications are synchronized
// continuously by default; make that explicit here.
subscription.AgentSchedule.FrequencyType = ScheduleFrequencyType.Continuously;
subscription.Create();
PrepareSnapshot(TransPub, Src, Dist);
}
return subscription;
}
public static void PrepareSnapshot(TransPublication TPub, MyServer Src, MyServer Dist)
{
SnapshotGenerationAgent agent = new SnapshotGenerationAgent();
agent.Distributor = Dist.ServerName;
agent.DistributorSecurityMode = SecurityMode.Standard;
agent.DistributorLogin = Dist.SQLUId;
agent.DistributorPassword = Dist.SQLPwd; // SQL auth (SecurityMode.Standard), so pair the SQL login with its SQL password
agent.Publisher = TPub.SqlServerName;
agent.PublisherSecurityMode = SecurityMode.Standard;
agent.PublisherLogin = Src.SQLUId;
agent.PublisherPassword = Src.SQLPwd;
agent.Publication = TPub.Name;
agent.PublisherDatabase = TPub.DatabaseName;
agent.ReplicationType = ReplicationType.Transactional;
// Start the agent synchronously.
agent.GenerateSnapshot();
}
public static void ApplySubscription(Happy.MI.Replication.Subscription _subscription)
{
Happy.MI.Replication.Publication p = _subscription.Publication;
RMOHelper.PreparePublicationDb(_subscription.Publication.Src, _subscription.Publication.Dist);
TransPublication TransPub = RMOHelper.PrepareTransPublication(p.Src, p.Dist, p.PublicationName);
foreach (Happy.MI.Replication.Article a in p.Articles)
{
a.LoadProperties();
TransArticle ta = RMOHelper.PrepareTransArticle(TransPub, a);
ta.ConnectionContext.Disconnect();
}
TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, _subscription.Dest, p.Dist);
if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
{
// Start the Distribution Agent asynchronously.
TransSub.SynchronizeWithJob();
}
TransSub.ConnectionContext.Disconnect();
//foreach (Happy.MI.Replication.Subscription s in p.Subscriptions)
//{
// TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, s.Dest, p.Dist);
// if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
// {
// // Start the Distribution Agent asynchronously.
// TransSub.SynchronizeWithJob();
// }
// TransSub.ConnectionContext.Disconnect();
//}
//TransPub.ConnectionContext.Disconnect();
}
public static void Create(Happy.MI.Replication.Publication p)
{
RMOHelper.PreparePublicationDb(p.Src, p.Dist);
TransPublication TransPub = RMOHelper.PrepareTransPublication(p.Src, p.Dist, p.PublicationName);
foreach (Happy.MI.Replication.Article a in p.Articles)
{
a.LoadProperties();
RMOHelper.PrepareTransArticle(TransPub, a);
}
foreach (Happy.MI.Replication.Subscription s in p.Subscriptions)
{
TransSubscription TransSub = RMOHelper.PrepareSubscription(TransPub, p.Src, s.Dest, p.Dist);
if (TransSub.LoadProperties() && TransSub.AgentJobId == null)
{
// Start the Distribution Agent asynchronously.
TransSub.SynchronizeWithJob();
}
}
}
private static void DeleteJobAgentSchedule(string s)
{
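// NOTE: stub - the VB.NET original below was never ported to C#, so at
// present the agent job schedule is left untouched.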
// Private Function DeleteSchedule(ByVal scheduleID As Integer) As Boolean
// Dim result As Boolean
// If (scheduleID > 0) Then
// Dim msdbConnectionString As String = Me.PublicationConnectionString.Replace(String.Format("Initial Catalog={0};", Me.PublicationDbName), "Initial Catalog=msdb;")
// Dim db As New SQLDataAccessHelper.DBObject(msdbConnectionString)
// '-- Delete Job Schedule
// Dim parameters As New List(Of System.Data.SqlClient.SqlParameter)
// parameters.Add(New System.Data.SqlClient.SqlParameter("@schedule_id", SqlDbType.Int))
// parameters.Add(New System.Data.SqlClient.SqlParameter("@force_delete", SqlDbType.Bit))
// parameters(0).Value = scheduleID
// parameters(1).Value = True
// Dim rowsAffected As Integer
// result = (db.RunNonQueryProcedure("sp_delete_schedule", parameters, rowsAffected) = 0)
// db.Connection.Close()
// db.Connection.Dispose()
// Else
// Throw New ArgumentException("DeleteSchedule(): ScheduleID must be greater than 0")
// End If
// Return result
//End Function
}
public static int PublicationEstimatedTimeBehind(Happy.MI.Replication.Subscription s)
{
PublicationMonitor mon = new PublicationMonitor();
mon.DistributionDBName = s.Publication.Dist.Database;
mon.PublisherName = s.Publication.Src.ServerName;
mon.PublicationDBName = s.Publication.Src.Database;
mon.Name = s.Publication.PublicationName;
mon.ConnectionContext = s.Publication.Src.ServerConnection;
DataSet ds1 = mon.EnumSubscriptions2(SubscriptionResultOption.AllSubscriptions);
// Debug dump of the subscription list - leave disabled in normal runs.
//ds1.WriteXml(@"c:\desktop\ds1.xml");
if (mon.LoadProperties())
{
PendingCommandInfo pci = mon.TransPendingCommandInfo(s.Dest.ServerName, s.Dest.Database, SubscriptionOption.Push);
return pci.EstimatedTimeBehind;
}
else
{
throw new Exception(string.Format("Unable to load properties for subscription [{0}][{1}]",s.Dest.ServerName, s.Publication.PublicationName));
}
}
public static int TraceTokenPost(Happy.MI.Replication.Subscription s)
{
TransPublication TransPub = new TransPublication();
TransPub.ConnectionContext = s.Publication.Src.ServerConnection;
TransPub.Name = s.Publication.PublicationName;
TransPub.DatabaseName = s.Publication.Src.Database;
if (TransPub.LoadProperties())
{
return TransPub.PostTracerToken();
}
return 0;
}
public static bool TraceTokenReceive(Happy.MI.Replication.Subscription s, int TokenId){
PublicationMonitor mon = new PublicationMonitor();
mon.DistributionDBName = s.Publication.Dist.Database;
mon.PublisherName = s.Publication.Src.ServerName;
mon.PublicationDBName = s.Publication.Src.Database;
mon.Name = s.Publication.PublicationName;
mon.ConnectionContext = s.Publication.Src.ServerConnection;
if (mon.LoadProperties())
{
DataSet ds= mon.EnumTracerTokenHistory(TokenId);
int latency;
string str = ds.Tables[0].Rows[0]["overall_latency"].ToString();
bool res = int.TryParse(str, out latency);
return res;
}
else
{
throw new Exception(string.Format("Unable to load properties for subscription [{0}][{1}]", s.Dest.ServerName, s.Publication.PublicationName));
}
}
public static void Cmd(string cnct)
{
string script = System.IO.File.ReadAllText(@"C:\tfs\CA\Dev\MI\Happy.MI\PostReplicationScripts\GC1.txt");
SqlConnection connection = new SqlConnection(cnct+";Connection Timeout=5");
Server server = new Server(new ServerConnection(connection));
//server.ConnectionContext.InfoMessage += new System.Data.SqlClient.SqlInfoMessageEventHandler(ConnectionContext_InfoMessage);
server.ConnectionContext.ExecuteNonQuery(script);
server.ConnectionContext.Disconnect();
}
}
Answer 2 (score: 1)
You could look at partitioned tables, partitioning the data in such a way that reporting/BI operations do not affect your day-to-day OLTP performance. It can also save you precious time when you need to purge old data. A minimal sketch follows.
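The moving parts look roughly like this - the names, the monthly boundaries and the single filegroup are all assumptions for illustration; in production each partition would usually live on its own filegroup.

public static class PartitionedTableSketch
{
    // Monthly range partitioning; runnable via SqlCommand.ExecuteNonQuery()
    // against SQL Server 2005 or later.
    public const string Sql = @"
CREATE PARTITION FUNCTION pfByMonth (datetime)
    AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');

CREATE PARTITION SCHEME psByMonth
    AS PARTITION pfByMonth ALL TO ([PRIMARY]);

CREATE TABLE dbo.OrderFact
(
    OrderId   bigint   NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON psByMonth (OrderDate);

-- Purging a month of old data then becomes a metadata-only operation:
-- ALTER TABLE dbo.OrderFact SWITCH PARTITION 1 TO dbo.OrderFact_Staging;";
}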
Answer 3 (score: 1)
Have a look at ScaleArc. It is a SQL connection manager that achieves load balancing by partitioning reads and writes across multiple instances. It does mean you have to commit to doing replication, though.
Answer 4 (score: 0)
I would say break the problem into smaller pieces before settling on any one approach to solve it.

Database mirroring, replication and the other high-availability/DR features of SQL Server are there for when you genuinely need them. None of these features achieves 100% real-time synchronisation either. As other experienced DBAs have already mentioned, you have to plan for "scheduled downtime" and/or "minutes of latency", and set your customers' expectations accordingly if you choose one of those options.

These features may shift the problem around, but they do not actually solve it unless we first look at the root cause. The suggestions below may look like generic statements, but so is the question: it is too broad to cover fully, and many aspects need to be uncovered before anyone can answer it properly.

So what I want to do now is ask small questions about the problem at hand.

You mentioned: "some of the queries being run cause locking and performance problems for themselves and other users".
Are these queries blocking other reads and/or writes? If both, let's handle them separately.
Ideally no read should be blocked by another read/write. What isolation level do you have in the DB?
If READ COMMITTED, or any level stricter than that, think about SNAPSHOT isolation - a minimal sketch follows.
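For reference, enabling the snapshot options is a one-off database-level change (database name assumed). READ_COMMITTED_SNAPSHOT changes the default behaviour so readers stop blocking writers without any application changes, at the cost of tempdb version-store overhead.

public static class SnapshotIsolationSketch
{
    // ALLOW_SNAPSHOT_ISOLATION permits SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    // READ_COMMITTED_SNAPSHOT makes plain READ COMMITTED use row versions so
    // readers stop blocking writers. Test the tempdb impact before production.
    public const string Sql = @"
ALTER DATABASE AppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE AppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;";
}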
Do the queries have a lot of table and/or index hints in them?
Try not to optimise queries with hints as a first option; treat hints as a last resort.
If the issue is write/write blocking, there are a couple of points to consider.
Are the write queries written properly, so they acquire appropriate locks (bearing in mind the table hints, if any)?
Have you looked at the server configuration - max memory, threads, DOP, auto-update statistics async?
Can the large inserts/updates be queued from the app tier?
Can the large inserts/updates be chunked into smaller batches? (See the sketch after this list.)
Can the large inserts/updates be executed as asynchronous operations?
Can you take advantage of partitioning in the database?
All of the "can we" questions above need more input, depending on your first answers. If the database or code was not originally designed to accommodate such features, this is a good time to start thinking about it.
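On the chunking question, a minimal C# sketch (table, column and batch size are invented): repeat a TOP-limited write until no rows are affected, so each transaction stays short and locks are released between batches.

using System;
using System.Data.SqlClient;

public static class ChunkedPurgeSketch
{
    public static void PurgeOldRows(string cnct, DateTime cutoff)
    {
        using (var cn = new SqlConnection(cnct))
        {
            cn.Open();
            int affected;
            do
            {
                using (var cmd = new SqlCommand(
                    "DELETE TOP (5000) FROM dbo.AuditLog WHERE LoggedAt < @cutoff;", cn))
                {
                    cmd.Parameters.AddWithValue("@cutoff", cutoff);
                    affected = cmd.ExecuteNonQuery();   // short transaction per batch
                }
            } while (affected > 0);
        }
    }
}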
In terms of performance, is it writes that are getting slower, or reads?
What is causing the slowness - CPU, memory, disk IO, DB properties, too many object recompilations?
Have you extracted the execution plans for the main queries you identified?
If the queries are very long and complex, look instead at how the logic can be simplified/optimised.
Are the tables over-indexed?
Write operations can suffer severely if tables are over-optimised for reads by adding lots of indexes.
Look at index fragmentation and statistics - a fragmentation query is sketched below. You must have a DB maintenance plan in place. How soon do the indexes become fragmented again?
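A quick fragmentation check against the standard DMV - the thresholds below are common rules of thumb, not hard limits.

public static class FragmentationCheckSketch
{
    // Run in the target database; 'LIMITED' keeps the scan cheap.
    public const string Sql = @"
SELECT OBJECT_NAME(ps.object_id)          AS table_name,
       i.name                             AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 30   -- rebuild candidates
  AND ps.page_count > 1000                   -- ignore trivially small indexes
ORDER BY ps.avg_fragmentation_in_percent DESC;";
}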
Is there a lot of aggregation and calculation in the queries that run frequently?
If a query with lots of aggregations/UDFs/views runs frequently, we can also look at whether semi-aggregated data can be stored separately - see the indexed-view sketch below.
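One way to keep semi-aggregated data that SQL Server maintains automatically is an indexed view; a sketch with invented names follows. Note the SCHEMABINDING and COUNT_BIG(*) requirements, that indexed views also demand specific SET options at creation time, and that GO is a batch separator (handled by SSMS or SMO script execution).

public static class IndexedViewSketch
{
    // The unique clustered index materialises the aggregate so frequent
    // reports can read it directly instead of re-aggregating dbo.Sales.
    public const string Sql = @"
CREATE VIEW dbo.vDailySales
WITH SCHEMABINDING
AS
SELECT ProductId,
       CONVERT(date, SoldAt) AS SaleDate,
       SUM(Amount)           AS TotalAmount,
       COUNT_BIG(*)          AS RowCnt      -- required in aggregated indexed views
FROM dbo.Sales
GROUP BY ProductId, CONVERT(date, SoldAt);
GO
CREATE UNIQUE CLUSTERED INDEX IX_vDailySales
    ON dbo.vDailySales (ProductId, SaleDate);";
}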
Does the reporting query retrieve a lot of data?
Queries serving results to a report can end up returning thousands of rows. Think about what the user actually does with that much data in the UI.
Is it really necessary? If not, can we limit the result set to a certain number of rows based on user settings?
If it is, can we implement paging for these queries (which can also be controlled by a setting)? A paging sketch follows.
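A paging sketch using OFFSET/FETCH, which needs SQL Server 2012 or later (on 2008, ROW_NUMBER() over a sorted set does the same job) - table and column names are invented.

using System.Collections.Generic;
using System.Data.SqlClient;

public static class PagingSketch
{
    // Return one page of report rows instead of the whole result set.
    public static List<string> GetPage(string cnct, int pageIndex, int pageSize)
    {
        var names = new List<string>();
        using (var cn = new SqlConnection(cnct))
        using (var cmd = new SqlCommand(@"
            SELECT CustomerName
            FROM dbo.ReportRows
            ORDER BY CustomerName            -- paging needs a stable ORDER BY
            OFFSET @skip ROWS FETCH NEXT @take ROWS ONLY;", cn))
        {
            cmd.Parameters.AddWithValue("@skip", pageIndex * pageSize);
            cmd.Parameters.AddWithValue("@take", pageSize);
            cn.Open();
            using (var rdr = cmd.ExecuteReader())
                while (rdr.Read())
                    names.Add(rdr.GetString(0));
        }
        return names;
    }
}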
If this much data is feeding another subsystem (such as SSRS), then any reporting tool will in any case have some latency, depending on how much data we dump in front of the user.
"We have tried to apply as many indexes as possible and tune all queries to the limit, but we have one application that must fit many different customer types, so it is hard to create one solution that fits all. We do not have the resources to apply customer-specific indexing/performance work for each customer."
Can we identify the customisations and think about how to implement them separately?
This is a huge piece of work by itself, but I can say from my own experience that it took us almost a year to transform our DB design so that it could accommodate over 300 clients without worrying about how one client's customisation would affect another client's custom functionality and/or our core product features. If you can manage to get the right plan laid out first, you can surely get the resources to accomplish it.
"We know that the main queries causing trouble are the ones generated to drive reports and KPIs."
How many tables do these queries cover?
If it is less than 30% of the DB, then we should think about just those tables and queries rather than the whole database.
If there are any points above you have not yet visited, do so - you will find very simple things that can save you a lot.
It is better to look at the root of the problem than to cover it up or work around it temporarily with alternatives.
Many of the DBAs and developers in this community will be happy to assist you, whether for an end-to-end resolution or for help as needed.
Answer 5 (score: 0)
You could set up snapshot replication, using one database for production and the other for reporting. Move the reporting indexes to the reporting database, and keep only the indexes the application needs in the database the application uses.