We have a single production database [ProdDatabase] on an Azure SQL Server instance that was created back in 2013. At the time of creation the compatibility level of [ProdDatabase] and [master] was hence set to 100 (SQL Server 2008).

Over the past few weeks we've successfully updated the compatibility level of [ProdDatabase] to 110, 120, 130 and now 140 (the latest). At each step we created a copy of [ProdDatabase] and tested the update on it before doing it online on [ProdDatabase]. It all worked very well and now [ProdDatabase] is up-to-date at compatibility level 140.
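For reference, each rehearsal step looked roughly like the following T-SQL (a minimal sketch; the copy name ProdDatabase_Copy and the target level 110 are just illustrative):

-- Run while connected to [master] on the logical server.
-- Create a transactionally consistent copy of the production database (Azure SQL database copy).
CREATE DATABASE ProdDatabase_Copy AS COPY OF ProdDatabase;

-- Once the copy has finished seeding, bump its compatibility level and run the tests against it.
ALTER DATABASE ProdDatabase_Copy SET COMPATIBILITY_LEVEL = 110;

-- Verify the change took effect.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'ProdDatabase_Copy';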
The [master] database, however, is still at compatibility level 100. For consistency we would like to update this [master] database to compatibility level 140 as well (just by calling alter database [master] set compatibility_level=<110,120,130,140>).
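For what it's worth, the current levels of both databases can be checked with a query along these lines (run while connected to [master]):

SELECT name, compatibility_level
FROM sys.databases
WHERE name IN ('master', 'ProdDatabase');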
To minimize risk we were thinking of updating [master] step-by-step as well (i.e. first 110, then 120, then 130, then 140). Considering that it's a system database, however, it doesn't seem as though we can "rehearse" this on a copy, and it looks like we'll just have to do it directly on the live [master] database. Considering that [master] contains login and database metadata, we're slightly concerned that we might somehow break our live production app if the update doesn't work for some reason. We have, however, tested it on a local installation and it seems to work fine. Our production web app never queries [master] for anything; however, it uses an SQL login that is of course listed in the [master] database.
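To illustrate what the app actually depends on in [master]: the server-level SQL logins live there and can be listed with something like the following sketch (run against [master]; which login the app uses is of course specific to our setup):

SELECT name, type_desc, is_disabled, create_date
FROM sys.sql_logins
ORDER BY name;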
My questions: Might somebody be able to describe the "best practice" approach for updating the compatibility level of the [master] database on an Azure SQL instance? Are there any pitfalls regarding [master] in particular that we should be aware of? Would the alter database [master] set compatibility_level=110; call be easily reversible if things did blow up?
Answer (score: 3):

You cannot change the compatibility level of the Azure master database. Furthermore, the compatibility level affects behaviors only for the specified database, not for the entire server, so I don't think the compatibility level of master affects other databases. Also, in Azure SQL the master database only holds login- and metadata-related information.
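A rough way to see that the setting really is per-database is to list every database's level from [master] (on an Azure logical server, sys.databases queried in [master] covers all databases on that server); each database keeps its own independent value:

SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;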