The following recommendations are designed to improve performance and prevent common performance problems.
SQL Server Recommendations:
- Change Syniti databases from the FULL to the SIMPLE recovery model where possible. SIMPLE improves the performance of bulk operations, keeps transaction log files from retaining history, and simplifies the backup and restore strategy. Under SIMPLE, a database can only be restored to the point of the most recent backup. This is usually acceptable for Syniti databases, but the Syniti team should confirm it, and Active Governance (dspConduct, cApps) databases may need to remain in FULL or BULK_LOGGED recovery. An example statement follows.
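For reference, the recovery model can be changed with a statement like the following sketch; the database name is a placeholder, so substitute the actual Syniti database names:

-- Sketch only: switch a database to the SIMPLE recovery model.
-- [SynitiExampleDB] is a placeholder name.
ALTER DATABASE [SynitiExampleDB] SET RECOVERY SIMPLE;
-- Verify the recovery model currently in effect for all databases:
SELECT name, recovery_model_desc FROM sys.databases;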
- Ensure that Autoshrink is not set on any database. Autoshrink should never be used on a production database, especially in combination with autogrowth: the two settings cause frequent (and unnecessary) growing and shrinking of the data files, which slows performance and causes fragmentation (which slows performance further). Additional info: http://thomaslarock.com/2009/03/when-to-use-auto-shrink/. Microsoft offers a slightly more measured opinion, but it boils down to the same thing: http://support.microsoft.com/kb/315512. The following statement sets autoshrink off in all databases. Errors will be generated when it tries to set autoshrink off on master and tempdb, but these are harmless:
EXEC sp_MSforeachdb 'ALTER DATABASE [?] SET AUTO_SHRINK OFF;';
NOTE: use sp_MSforeachdb with caution – it is undocumented and has been known to skip databases.
- Ensure that data is maintained – database indexes must be defragmented at least weekly, and statistics recalculated weekly (or after large data loads), as sketched below.
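As a minimal sketch (not a substitute for a full maintenance solution), index defragmentation and statistics maintenance can be performed with statements like these; dbo.ExampleTable is a placeholder table name:

-- Sketch only: defragment all indexes on one table, then refresh statistics database-wide.
ALTER INDEX ALL ON dbo.ExampleTable REBUILD;  -- use REORGANIZE instead for lighter-weight online defragmentation
EXEC sp_updatestats;  -- updates statistics that SQL Server detects as out of date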
- File Size Management Recommendations
While these recommendations apply to all databases, it makes sense to focus on the larger and more active databases first, eventually addressing all of them if possible. Relying on the AutoGrowth feature of SQL Server to manage file sizes is not ideal, for the following reasons:
- If autogrowth is allowed without limits, the entire drive can fill up – this is especially likely when multiple databases coexist on the same drive and all have autogrowth configured with no limit.
- For larger files, percentage-based autogrowth increments can be so large in actual MB that the growth operation takes a long time or even times out.
- Small growth increments can cause problems by making autogrowth occur too frequently.
- While a growth operation is in progress, it can cause blocking, resource contention, and performance problems, disrupting operations in the database.
- Autogrowth can lead to severe disk fragmentation.
The answer is to avoid relying on autogrowth to manage file sizes; Microsoft never intended autogrowth to be used on busy production databases. An effort should be made to estimate the eventual maximum sizes of the larger databases so that those values can be set as the initial data file sizes, thereby eliminating the need for autogrowth. The amount of data in these files (i.e. their fullness) must be monitored on an ongoing basis. It is acceptable to leave autogrowth enabled just in case, but it should not be the principal file size management mechanism. An example follows.
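As an illustration, once a target size has been estimated, the data and log files can be pre-sized and given a conservative fixed autogrowth increment as a safety net. The database name, logical file names, and sizes below are placeholders:

-- Sketch only: pre-size files and set a fixed 250MB autogrowth increment as a fallback.
ALTER DATABASE [SynitiExampleDB]
MODIFY FILE (NAME = 'SynitiExampleDB_Data', SIZE = 50GB, FILEGROWTH = 250MB);
ALTER DATABASE [SynitiExampleDB]
MODIFY FILE (NAME = 'SynitiExampleDB_Log', SIZE = 12GB, FILEGROWTH = 250MB);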
- After changing file sizes as described above, it is important to perform disk defragmentation on the database and log file drives.
- Ensure that a maximum memory setting is configured for each instance when multiple instances share the same hardware (a configuration that is itself not recommended). While the max memory limit may slightly constrain the performance of individual instances, overall it is beneficial. Without limits, if all instances used their configured memory, the total memory consumed by SQL Server would exceed the physical memory in the machine, likely causing performance problems (especially for the last instance to start up) and starving Windows itself of the RAM it needs to function. If the total server memory is x, the sum of the maximum memory settings across the instances should be x – 3GB. An example follows.
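The maximum memory setting can be applied per instance with sp_configure; the 16384 MB (16GB) value below is purely illustrative – derive the real values from the x – 3GB calculation above:

-- Sketch only: cap this instance's memory usage (value is in MB; 16384 MB is illustrative).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;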
Summary of Recommendations:
- Never use autoshrink.
- Don’t rely on autogrow for production databases.
- Estimate the required data file size and manually configure the database initial size to that number. This can be calculated based on past experience combined with knowledge of the amount of incoming data from external data sources.
- Log files (when using the SIMPLE recovery model) grow based on activity levels, and should eventually reach a peak beyond which they do not grow, regardless of the amount of data in the database. There is no easy way to estimate the size of log files. Consider setting the initial size of the log file at up to 25% of the initial data file size, but leave autogrowth on (max 250MB increment), just in case. Review the log file sizes periodically and readjust the file size as needed.
- Assuming available disk space is sufficient, build in a fudge factor, but don’t be tempted to make the initial size ridiculously high.
- Add together all the file sizes for all the databases on the disk to make sure you aren’t overcommitted. Never let a disk get more than about 70% full.
- If AutoGrowth is in use, or has been in use, request Server Admins to schedule a disk defragmentation of the data and log drives.
- Use monitoring tools to track file sizes automatically on a continuing basis (a simple ad-hoc query is sketched after this list).
- Rely on autogrow only for small databases that grow minimally or test databases with low activity levels.
- Ensure that autogrow increments are appropriate – do not use percentages, use a fixed number of megabytes. Generally increments should not be greater than 250MB.
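For an ad-hoc check of file sizes across all databases, the system catalog can be queried directly. This sketch is a starting point, not a replacement for proper monitoring tooling:

-- Sketch only: current size of every database file in MB (size is stored in 8KB pages).
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       type_desc,
       size * 8 / 1024 AS size_mb
FROM sys.master_files
ORDER BY size_mb DESC;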
Additional reading:
http://www.sqlskills.com/BLOGS/PAUL/post/Importance-of-data-file-size-management.aspx