The SQL Server Transaction Log is composed of Virtual Log Files (VLFs), which vary in size depending on how you have grown your Transaction Log over time. Each transaction that occurs on your SQL Server instance is written to the Transaction Log before being hardened to your data files (write-ahead logging). SQL Server starts writing to one VLF and moves on to the next as it progresses through the Transaction Log. This is how SQL Server maintains the “D” in ACID; that is, “Durability”. There is much more to be said about the Transaction Log, but I will not go into the details in this post.
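You can inspect the VLFs of a single database yourself with DBCC LOGINFO; each row returned represents one VLF. (The database name below is just an illustration; substitute one of your own.)

```sql
-- Each row returned is one VLF in the named database's Transaction Log.
-- Status = 2 means the VLF is active and cannot be reused yet;
-- Status = 0 means the VLF is available for reuse.
DBCC LOGINFO([AdventureWorks]);
```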
If a transaction runs for a very long time, if a log backup has not been taken in a while, or if a VLF cannot be cleared for any other reason (a SQL mirror might be down, for example), the number of available VLFs will start to diminish. If the Transaction Log ever runs out of available VLFs, SQL Server must either grow the Transaction Log or halt activity in your database. By its nature, SQL Server cannot keep a database available to users without space to record transactions; again, this is how it maintains ACIDity. Neither eventuality is optimal: you do not want your database to become unavailable, nor do you want your Transaction Log to grow if you can avoid it. When a Transaction Log grows, SQL Server must zero out the entirety of the added disk space (even if you have Instant File Initialization configured properly, as it does not apply to log files).
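When VLFs are not clearing, the reason is usually visible in standard SQL Server metadata. As a quick first check (this query is separate from the monitoring script below):

```sql
-- log_reuse_wait_desc reports why each database's log cannot currently
-- be truncated: e.g. LOG_BACKUP, ACTIVE_TRANSACTION, DATABASE_MIRRORING,
-- or NOTHING when the log is free to clear.
SELECT [name], [log_reuse_wait_desc]
FROM [sys].[databases]
ORDER BY [name];
```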
To avoid these sorts of scenarios, I monitor the density of my active VLFs so that I can proactively handle any situation that arises before it becomes a crisis. Below is the script I use to check each Transaction Log on the instance.
This script runs DBCC LOGINFO against each database on the instance; records how many VLFs there are in total, how many are active, and how many are available; calculates the density as a percentage; and displays the results:
DECLARE @databaseList TABLE
(
    [database]       VARCHAR(256),
    [executionOrder] INT IDENTITY(1,1) NOT NULL
)

DECLARE @vlfDensity TABLE
(
    [server]    VARCHAR(256),
    [database]  VARCHAR(256),
    [density]   DECIMAL(7,2),
    [unusedVLF] INT,
    [usedVLF]   INT,
    [totalVLF]  INT
)

-- Holds the output of DBCC LOGINFO.
-- Note: on SQL Server 2012 and later, DBCC LOGINFO returns an additional
-- leading [RecoveryUnitId] INT column; add it as the first column here
-- on those versions.
DECLARE @logInfoResult TABLE
(
    [FileId]      INT NULL,
    [FileSize]    BIGINT NULL,
    [StartOffset] BIGINT NULL,
    [FSeqNo]      INT NULL,
    [Status]      INT NULL,
    [Parity]      TINYINT NULL,
    [CreateLSN]   NUMERIC(25, 0) NULL
)

DECLARE @currentDatabaseID INT,
        @maxDatabaseID     INT,
        @dbName            VARCHAR(256),
        @density           DECIMAL(7,2),
        @unusedVLF         INT,
        @usedVLF           INT,
        @totalVLF          INT

INSERT INTO @databaseList ( [database] )
SELECT [name]
FROM [sys].[sysdatabases]

SELECT @currentDatabaseID = MIN([executionOrder]),
       @maxDatabaseID     = MAX([executionOrder])
FROM @databaseList

WHILE @currentDatabaseID <= @maxDatabaseID
BEGIN
    SELECT @dbName = [database]
    FROM @databaseList
    WHERE [executionOrder] = @currentDatabaseID

    -- Clear the results of the previous iteration.
    DELETE FROM @logInfoResult

    INSERT INTO @logInfoResult
    EXEC('DBCC LOGINFO([' + @dbName + '])')

    -- Status 0 = VLF is available for reuse; Status 2 = VLF is active.
    SELECT @unusedVLF = COUNT(*) FROM @logInfoResult WHERE [Status] = 0
    SELECT @usedVLF   = COUNT(*) FROM @logInfoResult WHERE [Status] = 2
    SELECT @totalVLF  = COUNT(*) FROM @logInfoResult

    SELECT @density = CONVERT(DECIMAL(7,2), @usedVLF)
                    / CONVERT(DECIMAL(7,2), @totalVLF) * 100

    INSERT INTO @vlfDensity
    (
        [server], [database], [density],
        [unusedVLF], [usedVLF], [totalVLF]
    )
    VALUES
    (
        @@SERVERNAME, @dbName, @density,
        @unusedVLF, @usedVLF, @totalVLF
    )

    SET @currentDatabaseID = @currentDatabaseID + 1
END

SELECT [server], [database], [density],
       [unusedVLF], [usedVLF], [totalVLF]
FROM @vlfDensity
ORDER BY [density] DESC
I sometimes run this script across all of my instances using Red-Gate MultiScript to see how my VLF densities are doing across my entire enterprise. This is particularly useful when our DR site is down for some reason: it helps me keep track of which databases, if any, are in danger of being impacted by VLFs that cannot clear.