Does dbcc shrinkfile() need a balanced extent count in every file?

Date: 2019-03-04 18:51:19

Tags: sql-server tsql

I hope this is the right place to ask. I'm working with a multi-TB db on a server with 40+ cores and 500 GB+ of memory; the db files sit on a SAN (I have no control over the LUN layout), and the current bottleneck is file contention. We're on 2014 right now, but the eventual "release" will probably be on the latest Enterprise edition.

The last question will be how to automatically stop dbcc shrinkfile(1, emptyfile) once the files are even. Context follows; the code below is a bit long-winded.

I split things out by schema size (we don't use dbo) to capture the biggest chunks of data usage, find candidates that are roughly 200G on disk, and put each of them into a filegroup with 4 files. The final stage is splitting [primary] into 4. Below is the proc I use to move a schema into a filegroup; it won't work the way I need for primary.
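First, for illustration only, the ~200G candidates come from roughly this kind of per-schema sizing query (a sketch, not the exact one used; it just converts reserved 8 KB pages to GB):

select
     s.name                                          as schema_name
    ,sum(ps.reserved_page_count) * 8.0 / 1024 / 1024 as reserved_gb   /* 8 KB pages -> GB */
from sys.dm_db_partition_stats ps
inner join sys.tables  t on t.object_id = ps.object_id and t.is_ms_shipped = 0
inner join sys.schemas s on s.schema_id = t.schema_id
group by s.name
order by reserved_gb desc;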

alter proc dbo.admin_MoveSchema ( 
         @schema varchar(128) 
        ,@exec bit = 0 
        ,@storage varchar(2048) = null 
        ,@filegroup varchar(128) = null 
        ,@files int = 4 
) 
as 
/*  *************************************************************************** 
 *  example:   
    exec dbo.admin_MoveSchema @schema = 'Schema1', @exec = 1 
 *  *************************************************************************** 
 */ 

begin try 
    declare 
         @msg varchar(255) 
        ,@separator varchar(255) = replicate('=',128)
    set nocount on; 
    if nullif(@storage,'') is null 
        begin 
            set @storage = (select top 1 reverse(right(reverse(physical_name),abs(charindex('\',reverse(physical_name)) - len(physical_name)))) + '\' from sys.database_files where physical_name like '%.mdf') 
        end; 
    set @filegroup = rtrim(ltrim(replace(replace(@filegroup,']',''),'[','')));
    if nullif(@filegroup,'') is null 
        begin 
            set @filegroup = 'FG_' + @schema  /* will obviously fail if the schema name is already at max size of 128 */ 
        end; 
    declare 
         @s nvarchar(max) 
        ,@i int = 1 
    set @msg = 'Creating Filegroup ' + @filegroup; 
    print @separator; 
    print '||' + replicate(' ', 39) + @msg; 
    raiserror(@separator,0,0) with nowait; 
    begin try 
        set @s = ' 
if not exists (select 1 from sys.filegroups where name = ''' + @filegroup + ''') and ''' + @filegroup + ''' <> ''primary'' 
    begin 
        alter database ' + quotename(db_name()) + ' 
        add filegroup ' + quotename(@filegroup) + '; 
        print ''' + quotename(@filegroup) + ' added!'' 
    end 
        else 
    begin 
        print ''' + quotename(@filegroup) + ' exists.'' 
    end;' 
        if @exec = 1 
            begin 
                exec(@s);
            end 
                else 
            begin 
                print(@s); 
            end; 
        set @msg = 'Creating Files for ' + @filegroup; 
        print @separator; 
        print '||' + replicate(' ', 39) + @msg; 
        raiserror(@separator,0,0) with nowait; 
        while @i <= @files 
            begin 
                set @s = '
if not exists (select * from sys.sysfiles where name = ''' + @filegroup + '_' + right('00' + rtrim(@i),3) + ''') 
    begin 
        alter database [' + db_name() + '] 
        add file 
        ( 
            name = ' + quotename(@filegroup + '_' + right('00' + rtrim(@i),3)) + ', 
            filename = ''' + @storage + @filegroup + '_' + right('00' + rtrim(@i),3) + '.ndf'', 
            size = 8mb, 
            maxsize = unlimited, 
            filegrowth = 10% 
        ) 
        to filegroup ' + quotename(@filegroup) + '; 
        print ''added file: ' + quotename(@filegroup + '_' + right('00' + rtrim(@i),3)) + ''';
    end
        else
    begin
        print ''' + quotename(@filegroup + '_' + right('00' + rtrim(@i),3)) + ' exists'';
    end; '; 
                if @exec = 1 
                    begin 
                        exec(@s); 
                    end 
                        else 
                    begin 
                        print(@s); 
                    end; 
                set @i = @i + 1; 
            end; 
    end try 
    begin catch 
        print error_message(); 
    end catch; 

    declare 
         @schema_name varchar(128) 
        ,@table_name varchar(128) 
        ,@column_name varchar(max) -- collection of columns with asc/desc
        ,@index_name varchar(128) 
    set @msg = 'Moving tables in the schema ' + quotename(@schema) + ' to filegroup ' + quotename(@filegroup); 
    print @separator; 
    print '||' + replicate(' ',18) + @msg; 
    raiserror(@separator,0,0) with nowait; 
    declare idxGen cursor local fast_forward
    for 
    select 
         s.name schema_name
        ,t.name table_name
        ,i.name index_name 
        ,stuff((select convert(varchar(max),',') + quotename(c.name) + (case when ic.is_descending_key = 1 then ' desc' else ' asc' end)
            from sys.columns c
            inner join sys.index_columns ic on c.column_id = ic.column_id and c.object_id = ic.object_id and ic.index_id = i.index_id
            where c.column_id = ic.column_id and t.object_id = ic.object_id 
            order by ic.key_ordinal for xml path('')),1,1,'') column_name 
    from sys.indexes (nolock) i 
    inner join sys.tables (nolock) t on t.object_id = i.object_id and i.type_desc = 'CLUSTERED' 
    inner join sys.schemas (nolock) s on s.schema_id = t.schema_id and t.is_ms_shipped = 0 
    where s.name = @schema 
    open idxGen; 
    fetch next from idxGen into @schema_name,@table_name,@index_name,@column_name; 
            /* 
                first match wins, covers unique clustered, clustered and "with" defaults on most tables 
                for unique and not. I'm not sure if the last 2 ever trigger; will check later. 
             */ 
    while @@fetch_status = 0 
        begin 
                select @s = ' 
begin try 
    begin try 
        create unique clustered index ' + quotename(@index_name) + ' on ' + quotename(@schema_name) + '.' + quotename(@table_name) + ' (' + @column_name + ') 
        with (drop_existing=on) 
        on ' + quotename(@filegroup) + '; 
    end try 
    begin catch 
        create clustered index ' + quotename(@index_name) + ' on ' + quotename(@schema_name) + '.' + quotename(@table_name) + ' (' + @column_name + ') 
        with (drop_existing=on) 
        on ' + quotename(@filegroup) + '; 
    end catch; 
    raiserror(''[Moved/On]: ' + quotename(@schema_name) + '.' + quotename(@table_name) + ' to ' + quotename(@filegroup) + ''', 0,0);
end try 
begin catch 
    print error_message(); 
    raiserror(''[### MOVE FAILED ###]: ' + quotename(@schema_name) + '.' + quotename(@table_name) + ''', 0,0);
end catch; '; 
            if @exec = 1 
                begin 
                    exec(@s); 
                end 
                    else 
                begin 
                    print(@s); 
                end; 
            fetch next from idxGen into @schema_name,@table_name,@index_name,@column_name; 
        end; 
    close idxGen; 
    deallocate idxGen; 
    print @separator; 
    print '||' + replicate(' ',15) + 'Moved Tables in schema:' + quotename(@schema) + ' to filegroup:' + quotename(@filegroup); 
    print '||' + replicate(' ',15) + 'If any [### MOVE FAILED ###] were printed above, manual move of that table may be needed.' 
    print @separator; 
    print ' '; 

end try 
begin catch 
    print 'Error! find out why!'; 
    throw
end catch 
GO 
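With the @exec flag, the generated statements can be printed and reviewed before running it for real, e.g.:

exec dbo.admin_MoveSchema @schema = 'Schema1', @exec = 0;             /* dry run: prints the DDL */
exec dbo.admin_MoveSchema @schema = 'Schema1', @exec = 1, @files = 4; /* actually runs it */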

My question now is about automating and/or replicating this so that [primary] can easily be given its own 4-way split.

The process of adding the files and running dbcc shrinkfile(1, emptyfile) is fine, but I want it to stop automatically once the page counts of all the files are equal, or close, to each other.
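For reference, a minimal sketch of that manual pass (database name, file names and paths are placeholders):

alter database [MyDb]
add file
(
    name = [PRIMARY_002],
    filename = 'X:\Data\PRIMARY_002.ndf',   /* placeholder path */
    size = 8mb,
    maxsize = unlimited,
    filegrowth = 10%
)
to filegroup [primary];
/* ...repeat for PRIMARY_003 and PRIMARY_004... */

/* move extents out of file_id 1 (the mdf) into the other files in the filegroup;
   it keeps running until nothing more can be moved, unless it is stopped */
dbcc shrinkfile (1, emptyfile);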

I was thinking of creating and starting a SQL Agent job that loops, checks the pages of each file, and stops the shrink with kill (on the spid running dbcc shrinkfile) once the files are >= the mdf's pages, but I'd really prefer to keep it all in one script with no external dependencies.

Right now I have to open a new query and run the one below until I'm close to a match, then stop the dbcc command.

if object_id('tempdb..#fileStats','U') is not null
    begin
        drop table #fileStats;
    end;

create table #fileStats (
     FileId int
    ,FileGroup int
    ,TotalExtents bigint
    ,UserExtents bigint
    ,Name nvarchar(128)
    ,FileName nvarchar(2048)
);
insert into #fileStats (FileId, FileGroup, TotalExtents, UserExtents, Name, FileName)
exec('dbcc showfilestats');

select 
     UserExtents - (select sum(userextents) / count(FileId) from #fileStats where FileGroup = 1) extleft
    ,UserExtents
    ,Name
    ,FileName
from #fileStats 
where FileGroup = 1 
;

I could put this in a loop with waitfor delay and then kill/quit when it is near a match, but I'm trying to get away from it being a manual process. It has to be runnable by non-skilled staff or by an automated process.
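A rough, untested sketch of that loop, reusing the #fileStats snapshot above (the 100-extent "close enough" threshold and the '%shrinkfile%' text filter are assumptions, and killing the session needs the right permission):

declare @diff bigint;

while 1 = 1
    begin
        /* refresh the snapshot */
        truncate table #fileStats;
        insert into #fileStats (FileId, FileGroup, TotalExtents, UserExtents, Name, FileName)
        exec('dbcc showfilestats');

        /* how far the mdf (file 1) is still above the average of the other files in [primary] */
        select @diff = max(case when FileId = 1 then UserExtents end)
                     - avg(case when FileId <> 1 then UserExtents end)
        from #fileStats
        where FileGroup = 1;

        if @diff <= 100  /* "close enough" threshold in extents; tune as needed */
            begin
                /* stop the shrink: kill the session whose batch text mentions shrinkfile */
                declare @kill nvarchar(100);
                select top (1) @kill = 'kill ' + cast(r.session_id as varchar(10)) + ';'
                from sys.dm_exec_requests r
                cross apply sys.dm_exec_sql_text(r.sql_handle) st
                where st.text like '%shrinkfile%' and r.session_id <> @@spid;
                if @kill is not null
                    begin
                        exec(@kill);
                    end;
                break;
            end;

        waitfor delay '00:01:00';
    end;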

Any ideas?

0 Answers:

No answers yet.