#0191-SQL Server-Maximum Capacity Specifications-How do you calibrate your systems?


It’s close to the weekend, so I have a very short post for today.


Very often, teams and SQL Server enthusiasts come up to me and ask some of the following questions:



  • How many databases can I host on a SQL Server instance?
  • How large can these databases be?
  • Is there a limit to the number of records in a table?

I often point them to Books Online/MSDN, where they can find the official specification that answers all of these questions. Today, I thought I would put an end to this almost endless cycle of the same question-and-answer sessions and provide a direct link to the Books Online/MSDN maximum capacity specification for SQL Server. So, here goes:


Maximum Capacity Specifications for SQL Server
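
If you want to see where your own instance currently stands against those questions, a quick sketch along these lines (assuming you can query the standard catalog views) pulls the database count, per-database size and per-table row counts:

```sql
-- How many databases does this instance host right now?
SELECT COUNT(*) AS database_count
FROM sys.databases;

-- How large is each database (data and log files), in MB?
-- sys.master_files reports file sizes in 8 KB pages.
SELECT DB_NAME(database_id)                 AS database_name,
       SUM(CAST(size AS BIGINT)) * 8 / 1024 AS size_mb
FROM sys.master_files
GROUP BY database_id
ORDER BY size_mb DESC;

-- Approximate row counts per table in the current database
-- (index_id 0 = heap, 1 = clustered index).
SELECT OBJECT_NAME(object_id) AS table_name,
       SUM(rows)              AS row_count
FROM sys.partitions
WHERE index_id IN (0, 1)
GROUP BY object_id
ORDER BY row_count DESC;
```

Comparing these numbers against the limits in the official document linked above usually settles the question very quickly.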



How do you calibrate your systems?


Whenever someone asks a question related to minimum/maximum capacity specifications, I always wonder how they would calibrate their systems and, more importantly, certify their product against these numbers. So, here’s what I would like to learn from you, my kind readers:



How do you derive minimum/maximum capacity specifications for your systems?


Ideally, a production system would have the same limits as the underlying platform. In practice, that rarely happens, because of architectural choices and technical design decisions.
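
As a purely hypothetical sketch of what calibrating against such a self-imposed specification might look like: suppose a team caps its data files at 500 GB even though the platform allows far more, and wants an early warning when a file crosses 80% of that cap (both figures are assumptions for illustration, not recommendations):

```sql
-- Hypothetical, self-imposed capacity spec: no data file larger
-- than 500 GB, even though the platform limit is far higher.
DECLARE @cap_mb INT = 500 * 1024;

-- Warn when any data file reaches 80% of the self-imposed cap.
SELECT DB_NAME(database_id)            AS database_name,
       name                            AS logical_file_name,
       CAST(size AS BIGINT) * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE type_desc = 'ROWS'                               -- data files only
  AND CAST(size AS BIGINT) * 8 / 1024 >= @cap_mb * 0.8
ORDER BY size_mb DESC;
```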


Do leave a note as you go. It’s a very interesting question with no fixed answer; every system and every team has its own methods, which is why I am sure it would make for a great discussion.


Until we meet next time,


Be courteous. Drive responsibly.

2 thoughts on “#0191-SQL Server-Maximum Capacity Specifications-How do you calibrate your systems?”

  1. marc_jellinek@hotmail.com

    There are no simple answers for a number of reasons:

    • DBAs do not often participate in hardware or vendor selection; they work with what they have.
    • Corporations will often have a corporate standard for server hardware, which may or may not be influenced by DBA input.
    • Capacity calculations often depend on information that is not available: how many users will use the system, how many IOPS are required, how fast data volumes will grow, how much network bandwidth the application will consume and how fast that will grow, etc.
    • Cost is a major component of determining hardware standards. Since hardware costs are not predictable (see last year’s hard drive shortages and the attendant unanticipated price increases), companies will often buy as much hardware as they can get… for price X. Anything more than price X requires justification.

    Now add virtualization into the mix. You’ve got the underlying hardware plus the hypervisor/virtualization layer to consider. Let’s say you have a 16-core server running 4 VMs. Do you provision 1, 2, 4, 8 or all 16 cores to a virtual machine?

    At the end of the day, it comes down to testing in production. Yes, this is a bad idea. But testing in a lab environment often doesn’t yield accurate results. If you see processes that are processor bound, add cores. If you see processes that are memory bound, add RAM. If you see processes that are network bound, add network bandwidth (if you can). If you see processes that are disk bound, look at your storage configuration (this last one is often hardest, especially in a production environment).

  2. Nakul Vachhrajani

    Thank you for sharing your views, Marc! I have been bitten once by the storage configuration, so I completely understand what you are saying about that part being the hardest.


