
30 MS SQL Performance Tips


Giving out free advice is getting to be an old tradition here at Monitis. We’ve been running blog posts offering advice and guidance on everything from Apache LAMP security and monitoring MS SQL Server data with VBScript to monitoring Windows log files via PowerShell.

Today, we’ll tackle how to get the best performance out of Microsoft SQL Server.

Microsoft SQL Server is Microsoft’s relational database management system. It is one of the leading databases on the market, it is highly scalable, and it comes in several editions, from the free Express Edition through the Standard Edition to the Enterprise Edition. (For more information about the available editions and their features, see http://www.microsoft.com/sqlserver/en/us/product-info/compare.aspx.)

Database servers are complex to manage and configure, but in this article we will go through a series of tips that you may find helpful for optimizing your SQL Server environment.

Here are 30 tips you can use to get the most out of MS SQL Server:

  1. Don’t share the SQL server hardware with other services
    If other workloads run on the same server as SQL Server, memory and other hardware resources are shared among those workloads. In that situation it becomes much harder to identify the cause of poor performance when it arises, and you may waste a lot of time just figuring out whether you need to optimize SQL Server or the other workload.
  2. Use Multiple Disk Controllers
    SQL Server can take advantage of spreading data across multiple disk drives. However, a storage controller has throughput limits, so when using multiple disks it is better to also use multiple controllers in order to avoid I/O bottlenecks.
  3. Use the Appropriate RAID Configuration
    When it comes to choosing a RAID (Redundant Array of Independent Disks) level, you may consider cost, performance, and availability requirements: RAID 5 is cheaper than RAID 0+1, and RAID 5 performs better for read operations than write operations. RAID 0+1 is more expensive and performs better for write-intensive operations.
    If possible you should choose hardware-level RAID rather than software RAID. Software RAID is usually cheaper but uses CPU cycles, while hardware RAID controllers have onboard logic that offloads this work from the CPU.
  4. Provide a separate disk for heavily used tables and indexes
    If you have heavily accessed tables or indexes, you will boost performance by allocating those objects to their own filegroup on a separate physical disk (see the first sketch after this list).
  5. Know your workload and monitor performance metrics
    This is the basis of all optimization work: you must first know how you use resources in order to optimize their usage. In general, SQL Server benefits from having plenty of memory, but depending on the workload you may see very different usage patterns for processor and disks. Constantly monitor your system metrics over time and focus your efforts on the resources with the highest usage.
  6. Separate OLAP and OLTP Workloads
    OLAP (Online Analytical Processing) and OLTP (Online Transaction Processing) workloads on the same server have to be designed to not interfere with each other. OLAP and reporting workloads tend to be characterized by less frequent but long-running queries. OLTP workloads, on the other hand, tend to be characterized by lots of small transactions that return something to the user in less than a second. Long-running queries for analysis, reports, or ad-hoc queries may block inserts and other transactions in the OLTP workload until the OLAP query completes.  If you need to support both workloads, consider creating a reporting server that supports the OLAP and reporting workloads. If you perform lots of analysis, consider using SQL Server Analysis Services to perform those functions.
  7. Use fixed size databases
    If you allocate disk space for a database when you create it, you can be reasonably confident that the allocated space will be contiguous, which gives the best possible performance. If instead you rely on the Autogrow option, disk space is allocated only when needed and will probably end up heavily fragmented, and a fragmented database performs worse than a contiguous one. So, especially in production, it is better to allocate space when you first create the database (see the CREATE DATABASE sketch after this list).
  8. Put tempdb on a separate disk
    The tempdb database is a temporary storage area used when performing operations such as GROUP BY or ORDER BY. Keeping tempdb on a separate disk ensures that such operations do not have a negative impact on the performance of other database operations (see the tempdb sketch after this list).
  9. Separate data and logs on different physical disks
    Data files and logs have different usage patterns: data files are read and written in an almost random fashion, while logs are written mostly sequentially. Separating them onto different physical disks allows each kind of operation to run with the best possible performance.
  10. Use table partitioning
    Partitioning allows you to keep portions of the same table on different physical disks. By using partitioning to separate current data from historical data, you keep everything in one table while placing just the current data on your faster disks, which improves query performance (see the partitioning sketch after this list).
  11. Create indexes
    Indexes allow the database engine to search data inside tables in the most optimized way, so it is very important that all necessary indexes exist for the queries the engine will serve. Consider creating indexes on columns frequently used in WHERE, ORDER BY, and GROUP BY clauses; these columns are the best candidates for indexes (see the index sketch after this list).
  12. Create clustered indexes
    Prefer a clustered index for queries that return a range of values and for queries with GROUP BY or ORDER BY clauses that return sorted results. Since a table can have only one clustered index, choose its columns very carefully: analyze your queries, identify the most frequently used ones, and include in the clustered index only the columns that provide the greatest performance benefit.
  13. Create non-clustered indexes
    Create non-clustered indexes to speed up queries that return few rows and where the index has good selectivity. A table can have as many as 249 non-clustered indexes (999 from SQL Server 2008 onward), but consider each one carefully, because every index takes up disk space and adds overhead to data modifications.
  14. Rebuild indexes periodically
    As you insert, update, and delete records in your tables, your indexes become fragmented and performance may degrade over time. Consider rebuilding indexes periodically to keep performance at its best; for tables with a clustered index, rebuilding that index also defragments the table itself, which is an additional benefit (see the index maintenance sketch after this list).
  15. Use covering indexes
    A covering index is an index that includes all the columns referenced in a query. Covering indexes can improve performance because all the data the query needs is contained within the index itself, so only the index pages, not the data pages, are used to retrieve it. This can save a huge amount of I/O (see the covering index sketch after this list).
  16. Drop indexes that are not used
    Limit the number of indexes if your application updates data very frequently. Because each index takes up disk space and slows the adding, deleting, and updating of rows, create new indexes only after analyzing data usage, the types and frequencies of queries performed, and how your queries will use them. In many cases the query speed gained from a new index outweighs the extra space it uses and the slower row modifications it causes.
    Use the Index Tuning Wizard (the Database Engine Tuning Advisor in newer versions) to identify indexes that are not used by your queries, or query the index usage statistics directly (see the unused index sketch after this list).
  17. Retrieve only the data you need
    Sometimes you may be tempted to write SELECT * FROM … in your queries; this retrieves every column in the table when you only need a few. To reduce the amount of data transferred, specify just the columns you need.
  18. Use Locking and Isolation Level Hints to Minimize Locking
    Where it is acceptable, use the “WITH (NOLOCK)” table hint (see the NOLOCK sketch after this list). You’ll avoid long waits when concurrent instances of your application access the same rows, but keep in mind that NOLOCK permits dirty reads of uncommitted data, so don’t use it where accuracy matters.
  19. Use parameters in queries
    The SQL Server query optimizer keeps recently used query plans in memory. When you do not use parameters, the literal values embedded in each statement make otherwise identical queries look different from one another, so their plans are not reused. With parameters, fewer query plans are kept in memory and they are far more likely to be reused.
  20. Choose the smallest data type that works for each column
    Smaller data types mean more rows fit on each data page, so fewer I/O operations are needed to read the same data. Choosing types carefully also avoids explicit and implicit conversions, which cost time to perform and can force table or index scans because the optimizer cannot use an index to evaluate the query.
  21. Use varchar instead of text
    Columns that use the text data type carry extra overhead because their contents are stored separately on text/image pages rather than on data pages. Use varchar instead of text for columns that contain fewer than 8,000 characters; in newer versions, text is deprecated in favor of varchar(max).
  22. Use unicode only when necessary
    Unicode data types like nchar and nvarchar take twice as much storage space as non-Unicode types like char and varchar, so use them only when you actually need to store Unicode data.
  23. Limit the use of cursors
    Cursors can cause noticeable performance degradation compared to set-based SELECT statements. If you need to perform what looks like row-by-row work, try to express it with correlated subqueries or derived tables instead.
  24. Devote the appropriate resources to schema design
    Take the time and devote the resources that are needed to gather the business requirements — to design the right data model and to test the data model. Make sure that your design is appropriate for your business and that the design accurately reflects the relationships between all objects. Changing a data model after your system is already in production is expensive, time consuming, and inevitably affects a lot of code.
  25. Avoid long actions in triggers
    Trigger code is often overlooked when developers evaluate systems for performance and scalability problems. Because triggers are always part of INSERT, UPDATE, or DELETE calling transactions, a long-running action in a trigger can cause locks to be held longer than intended, resulting in the blocking of other queries. Keep your trigger code as small and as efficient as possible. If you need to perform a long-running or resource-intensive task, consider using message queuing to accomplish the task asynchronously.
  26. Avoid expensive operators such as “NOT LIKE”
    Some operators in joins or predicates tend to produce resource-intensive operations. A LIKE predicate whose value begins with a wildcard (“%a value%”) almost always causes a table scan, which is very expensive. A LIKE with only a trailing wildcard can use an index, because an index is a B+ tree that is traversed by matching the string value from left to right. Negative operations, such as <> or NOT LIKE, are also very difficult to resolve efficiently; try to rewrite them another way if you can. If you are only checking for existence, use the “IF EXISTS” or “IF NOT EXISTS” construct instead: it can use an index, and even when it has to scan, it can stop at the first occurrence (see the EXISTS sketch after this list).
  27. Evaluate the query execution plan
    In SQL Query Analyzer, enable the Display Execution Plan option (in newer versions, use the Include Actual Execution Plan option in SQL Server Management Studio) and run your query against a meaningful data load to see the plan created by the optimizer; the sketch after this list shows how to gather I/O and timing figures from a script.
    Evaluate this plan and then identify any good indexes that the optimizer could use. Also, identify the part of your query that takes the longest time to run and that might be better optimized. Understanding the actual plan that runs is the first step toward optimizing a query. As with indexing, it takes time and knowledge of your system to be able to identify the best plan.
  28. Use Sp_executesql for dynamic code
    If you must use dynamic code in your application, wrap it in the sp_executesql system stored procedure. This lets you write parameterized queries in T-SQL while saving the execution plan for the code (see the sp_executesql sketch after this list). If the dynamic code has little chance of being called again, there is little value in saving the plan, because it will eventually be removed from the cache when it expires; evaluate whether a plan is worth saving. Note that wrapping code in sp_executesql without using parameters does not provide these compile-time savings.
  29. Keep Statistics Up to Date
    Statistics are used by the SQL Server query optimizer to select the best indexes to use when extracting data from your tables. If statistics are out of date, the optimizer may choose a poor plan or ignore an index that would otherwise be useful (see the statistics sketch after this list).
  30. Keep database administrator tasks in mind
    Do not forget to take database administrator tasks into account when you think about performance. For example, consider the impact that database backups, statistic updates, DBCC checks, and index rebuilds have on your systems. Include these operations in your testing and performance analysis.
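
To make several of the tips above more concrete, here are a few short T-SQL sketches. They are illustrative only: every database, table, column, index, filegroup, and file path name in them is a hypothetical placeholder, and sizes and paths should be adapted to your own environment.

For tip 4, a minimal sketch of placing a heavily used table in its own filegroup on a separate physical disk (it assumes an existing database named SalesDB and a separate disk mounted as F:):

    -- Add a filegroup and a data file that lives on the separate disk
    ALTER DATABASE SalesDB ADD FILEGROUP HotTables;

    ALTER DATABASE SalesDB
    ADD FILE
    (
        NAME = SalesDB_hot,
        FILENAME = 'F:\SQLData\SalesDB_hot.ndf',
        SIZE = 5GB
    )
    TO FILEGROUP HotTables;

    -- Create the heavily accessed table directly on the new filegroup
    CREATE TABLE dbo.OrderDetails
    (
        OrderDetailID int IDENTITY PRIMARY KEY,
        OrderID       int NOT NULL,
        ProductID     int NOT NULL,
        Quantity      int NOT NULL
    ) ON HotTables;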
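
For tips 7 and 9, a CREATE DATABASE sketch that allocates space up front and keeps the data and log files on separate physical drives (drive letters and sizes are assumptions):

    CREATE DATABASE SalesDB
    ON PRIMARY
    (
        NAME = SalesDB_data,
        FILENAME = 'D:\SQLData\SalesDB_data.mdf',   -- data file on its own drive
        SIZE = 10GB,                                -- pre-allocate the expected size
        FILEGROWTH = 1GB                            -- growth still possible, but rarely needed
    )
    LOG ON
    (
        NAME = SalesDB_log,
        FILENAME = 'E:\SQLLogs\SalesDB_log.ldf',    -- log file on a different drive
        SIZE = 2GB,
        FILEGROWTH = 512MB
    );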
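
For tip 8, a sketch of moving the tempdb files to a dedicated disk. The logical names tempdev and templog are the SQL Server defaults, the target path is an assumption, and the move takes effect only after the instance is restarted:

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');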
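
For tip 10, a partitioning sketch that splits an orders table by year so historical data can sit on slower disks and current data on faster ones (the filegroups ArchiveFG and CurrentFG are assumed to already exist on different disks):

    CREATE PARTITION FUNCTION pfOrderYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01');

    CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear
    TO (ArchiveFG, ArchiveFG, CurrentFG);   -- two historical partitions, one current

    CREATE TABLE dbo.Orders
    (
        OrderID   int      NOT NULL,
        OrderDate datetime NOT NULL,
        Amount    money    NOT NULL
    ) ON psOrderYear (OrderDate);           -- rows are placed by order date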
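
For tips 11 through 13, an index sketch: one clustered index supporting range and ORDER BY queries, plus a selective non-clustered index for point lookups (table and column names are hypothetical):

    -- Only one clustered index per table: here on the column used for range queries
    CREATE CLUSTERED INDEX IX_Orders_OrderDate
        ON dbo.Orders (OrderDate);

    -- A selective non-clustered index for frequent point lookups
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID);

    -- Queries these indexes are meant to support
    SELECT OrderID, Amount
    FROM dbo.Orders
    WHERE OrderDate BETWEEN '2024-01-01' AND '2024-03-31';  -- range on the clustered index

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = 42;                                   -- seek on the non-clustered index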
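
For tip 14, an index maintenance sketch: check fragmentation with sys.dm_db_index_physical_stats, then rebuild or reorganize. The 10% and 30% thresholds are common rules of thumb rather than hard rules, and the index names are hypothetical:

    -- List fragmented indexes in the current database
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10;

    -- Heavily fragmented (roughly > 30%): rebuild; lightly fragmented: reorganize
    ALTER INDEX IX_Orders_OrderDate  ON dbo.Orders REBUILD;
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;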
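
For tip 15, a covering index sketch using INCLUDE so a frequent query can be answered entirely from index pages (names are hypothetical):

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderDate, Amount);

    -- Fully covered: every column referenced below lives in the index
    SELECT OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerID = 42;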
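
For tip 16, an unused index sketch that finds non-clustered indexes which are maintained but never read, using sys.dm_db_index_usage_stats. These counters reset when the instance restarts, so judge them over a representative period:

    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name                   AS index_name,
           us.user_updates          AS writes_maintained
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS us
      ON us.object_id   = i.object_id
     AND us.index_id    = i.index_id
     AND us.database_id = DB_ID()
    WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
      AND i.type_desc = 'NONCLUSTERED'
      AND ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) = 0;  -- never read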
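
For tip 18, a NOLOCK sketch on a hypothetical orders table; remember that this hint permits dirty reads:

    SELECT OrderID, Amount
    FROM dbo.Orders WITH (NOLOCK)   -- reads without taking shared locks; uncommitted data may be returned
    WHERE CustomerID = 42;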
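
For tip 26, an EXISTS sketch contrasting an existence check that can stop at the first match with a LIKE that has only a trailing wildcard and can therefore use an index (names are hypothetical):

    -- Existence check: the seek or scan stops at the first match
    IF EXISTS (SELECT 1 FROM dbo.Orders WHERE CustomerID = 42)
        PRINT 'Customer has at least one order';

    -- Trailing wildcard only: an index on LastName can be used for a seek
    SELECT CustomerID, LastName
    FROM dbo.Customers
    WHERE LastName LIKE 'Smi%';   -- '%Smi%' would force a full scan instead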
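
For tip 27, a sketch of measuring a query's I/O and elapsed time from a script; in Management Studio you would typically use the graphical actual execution plan instead (the query itself is hypothetical):

    SET STATISTICS IO ON;    -- reports logical and physical reads per table
    SET STATISTICS TIME ON;  -- reports parse/compile and execution times

    SELECT OrderID, Amount
    FROM dbo.Orders
    WHERE CustomerID = 42;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;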
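
For tips 19 and 28, an sp_executesql sketch: dynamic SQL with a real parameter, so the plan can be cached and reused across different customer IDs (table and column names are hypothetical):

    DECLARE @sql nvarchar(500);
    SET @sql = N'SELECT OrderID, Amount FROM dbo.Orders WHERE CustomerID = @CustomerID';

    EXEC sys.sp_executesql
         @sql,
         N'@CustomerID int',   -- parameter definition
         @CustomerID = 42;     -- parameter value; only this changes between calls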
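
For tip 29, a statistics sketch: refresh statistics for a single table or for every table in the current database (the table name is hypothetical):

    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;   -- one table, sampled with a full scan

    EXEC sys.sp_updatestats;                      -- all tables in the current database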

We hope that these MS SQL performance tips have come in handy. Read our post on MySQL Database Monitoring Best Practices for more information on MySQL monitoring!


