I'm provisioning a SQL Server 2019 build and have been allocated a virtual server running Windows Server 2016. It has 4 virtual processors and I have 410 GB of drive space to work with.
My build has to run two versions of the same 35 GB database.
My intention is to split the 410 GB drive on the new box up as follows:

- L:\ to hold log files. Based on a rule of 25% of the size of the database: 9 GB per DB = 18 GB.
- T:\ to hold tempdb files. 4 data files (one per logical processor) at 1.5 GB each, with a log file of 3 GB and a further 2 GB per DB for auto growth = 11 GB.
- D:\ drive of 190 GB for data.
- E:\ drive of 190 GB for all backups.
Does that sound like a sensible split?
In *Working with tempdb in SQL Server 2005*, Microsoft writes:
Following are some additional configuration and performance recommendations for tempdb:
Create the tempdb database on a fast I/O subsystem. Use disk striping if there are many directly attached disks.
Separate the disk space dedicated to tempdb from user databases.
Create many files to maximize disk bandwidth and to reduce contention in allocation structures. As a general guideline, create one data file per CPU. Each file should be set to the same size. This enables the proportional fill algorithm to distribute the allocation load uniformly with minimal contention.
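Applied to the 4-vCPU server from the question, that guidance translates into four equally sized tempdb data files. The following is a sketch with plain `ALTER DATABASE` statements; the logical file names `tempdev2`–`tempdev4`, the `T:\` paths, and the sizes are illustrative assumptions matching the proposed layout, not prescribed values:

```sql
-- Illustrative sketch for a 4-vCPU server: four equally sized tempdb
-- data files. Equal SIZE and FILEGROWTH keep the proportional fill
-- algorithm distributing allocations evenly across the files.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1536MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1536MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 1536MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 1536MB, FILEGROWTH = 512MB);
```

Added data files are usable immediately; only moving existing tempdb files requires an instance restart.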
Now although this was initially written for SQL Server 2005, it is still referenced in the article:
...where Microsoft writes:
To improve the concurrency of tempdb, try the following methods:
Increase the number of data files in tempdb to maximize disk bandwidth and reduce contention in allocation structures. As a general rule, if the number of logical processors is less than or equal to eight (8), use the same number of data files as logical processors. If the number of logical processors is greater than eight (8), use eight data files. If contention continues, increase the number of data files by multiples of four (4) up to the number of logical processors until the contention is reduced to acceptable levels. Alternatively, make changes to the workload or code.
Consider implementing the best practice recommendations in the following TechNet topic:
If the previous steps do not significantly reduce the allocation contention and the contention is on SGAM pages, implement trace flag -T1118. Under this trace flag, SQL Server allocates full extents to each database object, thereby eliminating the contention on SGAM pages.
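A quick way to compare the current layout against that rule is to query the standard DMVs (a sketch; no assumptions beyond a default tempdb):

```sql
-- Compare the logical processor count with the number of tempdb data files.
SELECT
    (SELECT cpu_count FROM sys.dm_os_sys_info)  AS logical_processors,
    (SELECT COUNT(*)
     FROM tempdb.sys.database_files
     WHERE type_desc = 'ROWS')                  AS tempdb_data_files;
```

Note that from SQL Server 2016 onward, uniform extent allocation (the -T1118 behaviour) is the default for tempdb, so the trace flag itself is only relevant on older versions.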
So the first article references the old article, but with a twist, because further down Microsoft writes:
Having the different data files on separate disks would provide additional performance benefit. However, this is not required. The files can co-exist on the same disk volume.
Generally, placing (user) database files on separate drives can be a good idea, but it isn't mandatory. The benefit depends on your storage subsystem and the throughput (I/O) it provides.
There is a Q&A on Disk Configuration on Server Fault:
Where Paul Randal's answer starts off with:
This is a big 'it depends' question.
I can't answer the how to create the individual RAID arrays question for you, as I'm not a storage expert, but I can help you with the rest.
The first thing you need to consider is what is the workload on the various databases - OLTP (read/write) or DSS/DW (read-mostly). For read/write workloads, you should be looking at RAID 1 or RAID 10 (RAID 1+0), as these provide redundancy and great read/write performance. For read-mostly workloads you can use RAID 5. The reason RAID 5 should not be used for read/write workloads is that you pay a performance penalty on writes. ...
Seeing as Paul was in the SQL Server Development Team, I tend to give his answers some advance credit, especially since he knows a lot about the internal workings of SQL Server. He has revised some of the information as time has passed, but his blog posts are good starting points.
Another interesting answer in the same Q&A references the Microsoft article *Optimizing tempdb Performance*, which states:
Make each data file the same size; this allows for optimal proportional-fill performance.
Put the tempdb database on a fast I/O subsystem. Use disk striping if there are many directly attached disks.
Put the tempdb database on disks that differ from those that are used by user databases.
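Whether the data files actually are the same size can be verified with a quick query (a sketch; `sys.master_files` reports sizes in 8 KB pages, hence the conversion to MB):

```sql
-- List tempdb data files with their sizes and growth settings;
-- for balanced proportional fill, all rows should match.
SELECT name,
       size * 8 / 1024 AS size_mb,
       growth
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')
  AND type_desc = 'ROWS';
```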
There is another interesting piece of information tucked away in the Microsoft Docs for the newer versions of SQL Server, to be found in:
Recommendations when working with files and filegroups:
- Most databases will work well with a single data file and a single transaction log file.
- If you use multiple data files, create a second filegroup for the additional file and make that filegroup the default filegroup. In this way, the primary file will contain only system tables and objects.
- To maximize performance, create files or filegroups on as many different available disks as possible. Put objects that compete heavily for space in different filegroups.
- Use filegroups to enable placement of objects on specific physical disks.
- Put different tables used in the same join queries in different filegroups. This step will improve performance, because of parallel disk I/O searching for joined data.
- Put heavily accessed tables and the nonclustered indexes that belong to those tables on different filegroups. Using different filegroups will improve performance, because of parallel I/O if the files are located on different physical disks.
- Don't put the transaction log file(s) on the same physical disk that has the other files and filegroups.
As you can see, it is not only about placing the correct files on the appropriate storage, but also about storing the relevant data in separate filegroups (which is a topic for high-end configurations).
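The filegroup recommendations above can be sketched with plain DDL. The database name `MyDatabase`, the filegroup name `UserData`, and the `D:\` path below are illustrative assumptions, not part of the Microsoft guidance:

```sql
-- Hypothetical example: keep user data out of PRIMARY by adding a
-- second filegroup on the D:\ drive and making it the default.
ALTER DATABASE MyDatabase ADD FILEGROUP UserData;

ALTER DATABASE MyDatabase
ADD FILE (NAME = MyDatabase_Data1,
          FILENAME = 'D:\Data\MyDatabase_Data1.ndf',
          SIZE = 1024MB, FILEGROWTH = 512MB)
TO FILEGROUP UserData;

ALTER DATABASE MyDatabase
MODIFY FILEGROUP UserData DEFAULT;
```

With `UserData` as the default filegroup, newly created tables and indexes land there, while the primary file keeps only system tables and objects.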
Even Azure distinguishes between disk types on its disk setup page:
In the example screenshots provided, the disks can be stored in different locations and on different storage types (SSD, Premium SSD, ...)
It has 4 virtual processors and I have 410 GB of drive space to work with.
Depending on your storage setup, you might be able to get some fast SSD storage for your SQL Server instance (VM). Put the transaction log files and maybe even the tempdb files on those disks. Create separate drives.
Put the MDF and NDF files of the user databases on the slower disks / storage.
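For tempdb, relocating files onto such a dedicated drive is a metadata change followed by an instance restart. The `T:\` paths below are illustrative assumptions matching the proposed layout:

```sql
-- Point tempdb at the fast T:\ drive; the change takes effect at the
-- next instance restart, after which the old files can be deleted.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\templog.ldf');
```

User database files can be moved the same way, except that the database has to be taken offline and the files physically copied before bringing it back online.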
Ensure you have enough RAM to support data buffering, as this will reduce the I/O required on the MDF and NDF files (disk storage).
In the end, it depends on a lot of factors. Go and read some of Paul Randal's and Brent Ozar's articles regarding RAM, I/O, tempdb and disk partitioning. (Another aspect: are you formatting your data disks with a 64 KB cluster size?)
It really does depend on what you have and where you want to end up.