Improve Performance as Part of a SQL Server Install

Posted June 1 by a House of Brick Senior Consultant. Updated Jan. 27.

While there are many blog posts about installing SQL Server, there are far fewer that discuss the non-install items that should be completed to make your SQL Server system run better and provide greater throughput. This post covers several of the most significant performance items that should be addressed as part of any SQL Server install. These apply whether you are installing on a physical or a virtual system. While the examples shown here are for Windows Server 2008 R2/2012 and SQL Server 2008 R2 and 2012, the steps are largely the same for all releases of Windows and SQL Server. These are not intended to be the complete set of items to be addressed before, during, and immediately after installation, but they are the ones with the largest performance impact.

Power Setting

The Power Options Control Panel applet allows the administrator to choose whether or not the Windows operating system runs in a power-restricted mode.
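For reference, the active plan can also be inspected or changed from an elevated command prompt rather than the Control Panel. A minimal sketch using the built-in powercfg utility (SCHEME_MIN is the documented alias for the built-in High Performance plan):

```
:: Display the currently active power scheme
powercfg /getactivescheme

:: Activate the built-in High Performance plan
powercfg /setactive SCHEME_MIN
```

This requires an elevated prompt on Windows Server 2008 or later; Group Policy may override the setting in some environments, so verify the result with /getactivescheme afterward.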
We always recommend High Performance, or a custom plan that never shuts off the computer; we do not want SQL Servers running in any kind of restricted mode. The default value is Balanced. (If you are running Windows 2003, the equivalent choice is the Always On power scheme.)

Disk Alignment and Partition Offsets

There are several good whitepapers and blog posts on this issue, including this blog post from Jimmy May at Microsoft. Jimmy's blog post does a great job of explaining the issues with alignment and offsets in detail. The bottom line is this: disk partitions created under Windows 2003 are misaligned by default (the old 31.5 KB offset), and a desirable offset is usually a multiple of 64 KB (the SAN stripe size). The stripe size is determined by the SAN or disk being used. The default offset for new disks created under Windows 2008 is 1024 KB, which is a multiple of 64 KB. SAN tools may change this as well. As demonstrated in the referenced article, there can be large I/O penalties for misaligned disks. To determine the partition alignment, use the Windows DISKPART command-line utility. For all drives housing SQL Server data, log, or backup files, the offset should be 1024 KB; not having this set properly can result in significant performance degradation.

While we are discussing disk, the recommended file allocation unit (cluster) size for SQL Server data and log drives is 64 KB (65,536 bytes), since this is one full extent (eight 8 KB pages) in SQL Server. Note that this is not true for operating system drives, file share drives, etc., which function well with the default 4 KB (4,096-byte) cluster size. Using a cluster size other than 64 KB will have a performance impact, though it will be less than that of an incorrect offset. The current cluster size can be discovered by looking at Bytes Per Cluster in the output of the fsutil utility (fsutil fsinfo ntfsinfo <drive>:).

Finally, let's look at disk allocation. For our installations we always request that the following drives be allocated:

- Windows and non-SQL Server software (usually the C: drive)
- SQL Server installation drive, for the non-database components
- SQL Server data
- SQL Server logs
- TempDB data
- SQL Server backups

This provides the isolation of I/O profiles and activity needed to ensure good performance.

Instant File Initialization

Instant File Initialization allows files to be created without having to spend time zeroing out the entire size of the file. For example, a 20 GB data file can be added to the server in seconds as an empty file; without Instant File Initialization, the 20 GB file must first be written with zeros before it can be used. The same applies as a database file grows over time. Instant File Initialization is enabled by granting the SQL Server service account the Perform Volume Maintenance Tasks user right (Local Security Policy > User Rights Assignment) in Windows. If SQL Server runs using a local administrator account, it has implicitly been granted this right. If the service account is changed to a domain account, the new account needs to be added to this right.

File Allocations and the Model Database

It is important that the allocations for the data and log files of any database be set to a size that will handle the existing data and leave room for growth. Setting these correctly will eliminate fragmentation and reduce the number of Virtual Log Files (VLFs). Transaction log files are broken internally, under the hood, into pieces called Virtual Log Files (VLFs).
Every time the log goes through auto-growth, additional VLFs are added to the file. As SQL Server MVP Kimberly Tripp describes (in item 8 of her post, as well as in this post), there are performance issues, sometimes serious ones, with having too many, or even too few, VLFs in your transaction log file. A general guideline is to have fewer than 50 VLFs in the log file of any database.

This brings us to the Model database. As you know, the Model database serves only one purpose: to set default database parameters such as size, recovery model, and file location. Setting the Model database to a sensible size and autogrowth (and, perhaps, growth limit) will help reduce fragmentation, excessive VLFs, and database files being placed on the wrong drive. It will also ensure that the recovery model is set to the proper default. We always set the Model database to some reasonable configuration post-install.

TempDB

Unless directed by Microsoft Support, the greatest number of TempDB data files that is generally useful is one per core, and even this may be excessive. Having too many TempDB data files can actually cause performance issues, as described by SQL Server MVP Paul Randal in his article. As Paul notes, the current best practice is to have a number of data files equal to 1/4 or 1/2 the number of cores. It is almost always recommended to have at least two TempDB data files, even with a small number of cores. TempDB writes to its data files in a round-robin manner, provided the files have approximately the same free space. As a result, House of Brick recommends that all TempDB data files in an instance be the same size and have the same autogrowth settings. These TempDB data files can exist on the same drive but, as noted earlier, TempDB data files should be on their own disk.

Min Server Memory / Max Server Memory

Max Server Memory and Min Server Memory should be configured so that SQL Server does not completely consume all of the server's memory. Not setting Max Server Memory can cause some interesting performance issues. Max Server Memory should be set, usually 4-8 GB below total system memory, and then monitored. Some recommendations say to leave only a few hundred MB of memory free on the server, but this does not account for the other processes that run on the server, such as anti-virus scans, updates, and monthly processes. After setting Max Server Memory, the perfmon counter Memory: Available Bytes should be watched to verify that enough free memory remains available.

Plan Cache

A common waste of memory in SQL Server is in the plan cache. In database systems, query statements are compiled and the generated execution plans are stored in the plan cache for potential subsequent re-use. The problem arises when there are many ad hoc, single-use statements whose plans are stored and never re-used, which in some cases wastes significant system memory. As the plan cache grows, it begins to consume memory previously used for data buffers, which is why plan cache size can impact performance. SQL Server MVP Kimberly Tripp has written quite a bit about this topic on her blog, and she makes several recommendations with which House of Brick agrees and which we recommend to clients. The first is to enable the Optimize for Ad hoc Workloads instance option on SQL Server 2008 and later. This option causes only a stub of the execution plan for an ad hoc query to be written to the plan cache on first execution, thereby saving space. If the ad hoc query is run again, the full execution plan is then written to the plan cache. The second recommendation is to check the amount of the plan cache that is storing single-use plans and, if the amount is greater than
500 MB, to periodically clear the ad hoc cache via the DBCC FREESYSTEMCACHE('SQL Plans') command. This specific recommendation is outlined at sqlskills.com. To find the amount of plan cache wasted in this fashion, run the following usage query (included in that article) against your instance:

SELECT objtype AS [CacheType],
    COUNT_BIG(*) AS [Total Plans],
    SUM(CAST(size_in_bytes AS DECIMAL(18, 2))) / 1024 / 1024 AS [Total MBs],
    AVG(usecounts) AS [Avg Use Count],
    SUM(CAST((CASE WHEN usecounts = 1 THEN size_in_bytes ELSE 0 END) AS DECIMAL(18, 2))) / 1024 / 1024 AS [Total MBs - USE Count 1],
    SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END) AS [Total Plans - USE Count 1]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC;
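Taken together, the two recommendations can be sketched in T-SQL as follows. This assumes SQL Server 2008 or later; 'optimize for ad hoc workloads' is an advanced option, so 'show advanced options' must be enabled first, and clearing the cache should be scheduled thoughtfully, since any plans removed must be recompiled on next use:

```sql
-- Enable 'optimize for ad hoc workloads' (SQL Server 2008+):
-- only a plan stub is cached the first time an ad hoc statement runs.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- If single-use plans are consuming significant memory (e.g., > 500 MB),
-- clear only the ad hoc SQL plan cache, leaving other caches intact.
DBCC FREESYSTEMCACHE('SQL Plans');
```

Run the usage query above before and after to confirm how much single-use plan memory was reclaimed.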