Saturday, June 25, 2011

System Center Advisor codename Atlanta

The Release Candidate (RC) of a new Microsoft product, System Center Advisor (codename Atlanta), is available here.  It was announced at the recent PASS Summit in March. 

I have not downloaded the software; however, judging from the web site, it appears to be a SQL Server 2008 replacement for the SQL Server Health and History tool (SQLH2), which supports SQL Server 2005.

Real-World Experience on SQL Azure

Megan Keller and Michael Otey have written a great article in SQL Server Magazine about real-world experience migrating to SQL Azure.  In a well-written interview with Stephen Yu, the vice president of development at Quosal, the company's experience was generally positive.  Both the good and the not-so-good were revealed with insightful interview questions.  The most surprising aspect is the company's experience with SQL Azure outages.  Worth the read.

Sunday, June 19, 2011

Striped Backups Help VLDB Backup Performance

An often overlooked performance boost to DBA operations is the use of striped backups on large databases.  Combined with backup compression, striped backups provide a significant reduction in time required to perform backups. 

Refer to the Technet article on backups, the SQLCAT article on VLDB backups and subsequent whitepaper for more information.
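As a quick sketch of what this looks like in practice, here is a striped, compressed backup (the database name and file paths are illustrative; backup compression requires Enterprise Edition in SQL Server 2008):

```sql
-- Stripe the backup across four files, ideally on separate physical drives,
-- and compress it.  SQL Server writes to all stripes in parallel.
BACKUP DATABASE BigDB
TO  DISK = 'E:\Backups\BigDB_1.bak',
    DISK = 'F:\Backups\BigDB_2.bak',
    DISK = 'G:\Backups\BigDB_3.bak',
    DISK = 'H:\Backups\BigDB_4.bak'
WITH COMPRESSION, STATS = 10;
```

Keep in mind that a restore must reference every stripe in the set; losing one of the files makes the whole backup unusable.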

Saturday, June 18, 2011

Instant File Initialization

Instant File Initialization in SQL Server 2005/2008 is an easy performance win for the SQL DBA.  This feature allows SQL Server to allocate space (data files only, not log files) much more quickly.  Besides making initial database creation faster, it has the advantage of growing database data files much faster as well.  This is an operational win with little downside.

Kimberly Tripp has an excellent blog post on using this feature.  Paul Randal has a great follow-up on his blog where he dispels some common myths about instant file initialization. 
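One quick way to verify whether the feature is actually active is to watch the error log while creating a throwaway database.  A sketch (the test database name is just for illustration):

```sql
-- Trace flag 3004 logs file-zeroing activity; 3605 sends it to the error log.
DBCC TRACEON (3004, 3605, -1);
GO
CREATE DATABASE IFITest;
GO
-- Check the error log: if you see "Zeroing" messages for the .mdf file,
-- instant file initialization is NOT enabled.  The log file is always
-- zeroed regardless, so a zeroing message for the .ldf is expected.
EXEC sp_readerrorlog;
GO
DROP DATABASE IFITest;
DBCC TRACEOFF (3004, 3605, -1);
```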

Tuesday, June 14, 2011

Database Tuning Without Impacting Production Servers

Database Engine Tuning Advisor has a feature which allows you to offload analysis to a test SQL Server.  Here’s how it works:
  • create a test SQL Server
  • create a workload from the production SQL Server
  • create an XML input file that references the workload and the name of the test server
  • launch dta on the production server using the input file

From the MSDN article:
The production server will use the information in the XML file to locate the test server and offload statistics and other data necessary for analysis. This technique allows the optimization to be based on the production server's hardware configuration, while reducing the processing required by the production server.   Also, storing a workload in a table negatively impacts performance of the production server, so you should always store the workload in a file when creating a SQL Server Profiler trace on a busy production server.
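The XML input file mentioned in the steps above might look something like this (server, database, and workload names are all hypothetical; the exact set of tuning options depends on your scenario):

```xml
<?xml version="1.0" encoding="utf-16"?>
<DTAXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/dta">
  <DTAInput>
    <Server>
      <Name>ProdServer</Name>
      <Database>
        <Name>SalesDB</Name>
      </Database>
    </Server>
    <Workload>
      <File>C:\workload\prod_trace.trc</File>
    </Workload>
    <TuningOptions>
      <!-- The test server that receives the offloaded analysis -->
      <TestServer>TestServer01</TestServer>
      <FeatureSet>IDX</FeatureSet>
      <Partitioning>NONE</Partitioning>
    </TuningOptions>
  </DTAInput>
</DTAXML>
```

You would then launch the tool from the command line with something like `dta -ix C:\workload\input.xml -s TuningSession1`.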

Monday, June 13, 2011

Extended Events Session on SQL Server

SQL Server Extended Events (Extended Events) is a general event-handling system for server systems. The Extended Events infrastructure supports the correlation of data from SQL Server, and under certain conditions, the correlation of data from the operating system and database applications. In the latter case, Extended Events output must be directed to Event Tracing for Windows (ETW) in order to correlate the event data with operating system or application event data.

You can use Extended Events for a wide range of monitoring and troubleshooting scenarios. The following scenarios illustrate a few of the situations where Extended Events can provide valuable data for resolving issues in such areas as:
  • Troubleshooting the cause of working set trimming.
  • Troubleshooting excessive CPU usage.
  • Troubleshooting deadlocks.
  • Correlating request activity with Windows ETW logs.
To use SQL Server Extended Events, see this MSDN article.
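As a small taste of the syntax, here is a minimal session for the deadlock scenario above, capturing deadlock reports to a ring buffer (the session name is illustrative):

```sql
-- Create and start an Extended Events session that captures deadlock reports.
CREATE EVENT SESSION DeadlockMonitor ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION DeadlockMonitor ON SERVER STATE = START;
GO
-- Inspect whatever the session has captured so far.
SELECT CAST(t.target_data AS xml) AS deadlock_data
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON s.address = t.event_session_address
WHERE s.name = 'DeadlockMonitor';
```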

Sunday, June 12, 2011

Stored Procedure Execution Article

If there are transactional replication publication articles that update a large number of rows using a stored procedure, you may want to consider publishing the stored procedure as a stored procedure execution article to perform those updates.  The typical example is to update every item's price by 10%.

CREATE PROCEDURE UpdateItemPriceTenPct AS
UPDATE ItemInfo SET Price = Price * 1.10

In this example, every updated row is transmitted to the target subscribers.  With a large number of rows, this can put a burden on performance. 

If this is a stored procedure execution article, then just the stored procedure execution gets transmitted to each subscriber, which applies the update to its rows locally.

Technet gives details here.
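As a sketch, the article would be added to the publication something like this (the publication name is hypothetical):

```sql
-- Publish the stored procedure as an execution article rather than
-- replicating the individual row changes it makes.
EXEC sp_addarticle
    @publication   = 'ItemPub',
    @article       = 'UpdateItemPriceTenPct',
    @source_object = 'UpdateItemPriceTenPct',
    @type          = 'proc exec';
```

Note that 'serializable proc exec' is the safer variant when the procedure may run inside concurrent transactions, since plain 'proc exec' can produce different results at the subscriber.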

Saturday, June 11, 2011

Performance Tuning is a Science, NOT an Art

The field of SQL Server performance tuning is vast.  Many books, training classes, blogs, articles, and careers are built on this subject.  All too often, I hear that performance tuning is an art and not a science.  This is wrong thinking.  Sometimes I see SQL administration professionals analyze a server and find no performance issue, and yet I find a glaring problem within minutes of signing onto the server.  There have also been many occasions where I analyzed the root cause of a performance issue only to find I was wrong about the root cause.  There has to be a clearly defined method of analyzing performance issues so that SQL administration professionals can confidently and consistently determine cause and effect.

I created a separate blog to look at this issue.

Friday, June 10, 2011

Recovering Transactional Consistency Across SQL Servers

In order to maintain transactional consistency across separate SQL Servers (or across databases on the same server), SQL Server uses the Microsoft Distributed Transaction Coordinator (MS DTC). 
 
In order to have a backup/recovery strategy that ensures transactional consistency in the event of database/system issues, transactions need to use marks.  When recovering transaction logs, use the WITH STOPATMARK option.

This strategy is required because SQL Server cannot restore transactional consistency across related databases with point-in-time recovery.
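A minimal sketch of marked transactions and recovery to the mark (transaction, database, and file names are illustrative):

```sql
-- Mark the transaction on each participating server/database.
-- The mark is recorded in every transaction log touched by the transaction.
BEGIN TRANSACTION NightlyUpdate WITH MARK 'Nightly price update';
    UPDATE ItemInfo SET Price = Price * 1.10;
COMMIT TRANSACTION NightlyUpdate;

-- Later, restore each related database to the same mark:
RESTORE DATABASE SalesDB FROM DISK = 'E:\Backups\SalesDB.bak'
    WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'E:\Backups\SalesDB_log.trn'
    WITH STOPATMARK = 'NightlyUpdate', RECOVERY;
```

Restoring every related database WITH STOPATMARK to the same mark name is what brings them all back to a single consistent point.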

Wednesday, June 8, 2011

Transparent Data Encryption and Impact on Performance

Transparent Data Encryption (TDE), at first blush, appears to be a great solution for real-time encryption of databases and database backups with little administrative effort.  Applications do not have to be modified to access data while using this feature. 

A good overview showing implementation of TDE is found here.

What are the performance implications?  Are there any performance issues related to other non-encrypted databases on the same SQL Instance as the encrypted database?  What is the effect on page/row compression and backup compression?

DatabaseJournal.com posts an excellent review of TDE and performance tests.  Summarizing their listed caveats with TDE:
  • TDE is a SQL Server 2008 Enterprise-only feature.
  • The performance hit is largely dictated by the application workload; in some of the testing, the overhead was measured to be less than 5%.
  • When you turn on TDE for one database in an instance, tempdb is encrypted for all databases on that instance.
  • Encrypted data compresses significantly less than equivalent unencrypted data; using TDE and backup compression together is not recommended.
  • While a re-encryption scan for a database encryption operation is in progress, maintenance operations to the database are disabled.
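For reference, turning TDE on takes only a few statements (certificate, key, and database names here are illustrative):

```sql
-- TDE setup: master key and certificate live in master;
-- the database encryption key lives in the user database.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<use a strong password>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE SalesDB SET ENCRYPTION ON;
-- Back up the certificate and its private key immediately; without them,
-- the database and all of its backups cannot be restored.
```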

Saturday, June 4, 2011

Allowing CLR Code Access to Windows OS Resources with Minimum Permissions

SqlContext.WindowsIdentity can be used to retrieve the token associated with the current security context. Managed code in EXTERNAL_ACCESS and UNSAFE assemblies can use this method to retrieve the context and use the .NET Framework WindowsIdentity.Impersonate method to impersonate that context.

In other words, use SqlContext.WindowsIdentity to allow CLR code running in SQL Server to access Windows OS resources without granting excess privileges to the SQL Server service account.

Common Criteria (CC) and SQL Login Auditing

SQL login auditing and other security features are implemented as part of Common Criteria (CC).  The Common Criteria for Information Technology Security Evaluation's Common Criteria Certification supersedes several older evaluation schemes, including the U.S. Trusted Computer Systems Evaluation Criteria (TCSEC), which specified the well-known Class C2 rating.  Microsoft discusses the Common Criteria Certification, and explains the implementation of that standard as a configuration item in SQL 2008 R2.
When the configuration item is set to on or 1,
  • residual information protection (RIP) is turned on
    • RIP requires a memory allocation to be overwritten with a known pattern of bits before memory is reallocated to a new resource.
  • login auditing is enabled
    • Information about the last successful login time, the last unsuccessful login time, and the number of attempts between the last successful and current login times is made available. These login statistics can be viewed by querying the sys.dm_exec_sessions dynamic management view.
  • table-level DENY takes precedence over a column-level GRANT
    • this is a change in the default behavior
To implement (only in SQL2005/2008 Developer/Enterprise):

sp_configure 'show advanced options', 1
go
reconfigure
go
sp_configure 'common criteria compliance enabled', 1
go
reconfigure
go
Restart the server.
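Once the option is active, the login statistics mentioned above can be pulled back out of sys.dm_exec_sessions, for example:

```sql
-- These columns are only populated when common criteria compliance
-- is enabled; otherwise they return NULL.
SELECT login_name,
       last_successful_logon,
       last_unsuccessful_logon,
       unsuccessful_logons
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```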

Friday, June 3, 2011

Worst Physical Reads - SQL 2005 / SQL 2008


We are seeing large average disk queue lengths (200+) on one of our non-clustered SQL Server 2005 servers.  We used the following query to identify the statement causing the high IO.

SELECT TOP 10
    qs.creation_time, qs.last_execution_time, qs.total_physical_reads,
    qs.total_logical_reads, qs.total_logical_writes, qs.execution_count,
    qs.total_worker_time, qs.total_elapsed_time,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
    SUBSTRING(sqlText.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(sqlText.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS SQL_Statement
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS sqlText
ORDER BY qs.total_physical_reads DESC;