Friday, November 4, 2011

Error Installing SQL 2005 on New Multi-Core Server

We encountered an unusual error when installing SQL 2005 x64 Ent Edn on a Windows 2003 R2 x64 Ent Edn server (non-clustered) running on new hardware with 48 cores and 32 GB RAM.  Other than installing software that was a couple of versions back on new hardware, there seemed to be nothing unusual or difficult about the install (the software's age should have given me the first clue to the problem).  The installation process seemed to go well - all the required information was entered and the installer seemed to be nearing the end of its journey by installing the SQL Engine and starting the engine. 

The error message displayed was:
The service did not respond to the start or control request in a timely fashion.
The installer then uninstalled all the components.

Looking through the summary.txt and other install log files SQL Server generates usually gives me the best information about an installation failure.  In this case, the useful information was in application event id 4097 from Dr Watson which stated:
The application, E:\Program Files\MSSQL.1\MSSQL\Binn\sqlservr.exe, generated an application error The error occurred on 11/03/2011 @ 15:25:56.962 The exception generated was c0000005 at address 0000000001B323F0 (sqlservr!DmpGetClientExport)

The problem is that SQL Server 2005 does not install / start correctly on some of the newer hardware configurations.  This problem is discussed in MS Support Blogs.  Per KB954835, you may not be able to install Microsoft SQL Server 2005 on a computer that has a multi-core processor, and one of the following conditions is true:
  • The ratio between logical processors & physical sockets is not a power of 2.
  • The number of physical cores is not a power of 2.
  • The total number of CPUs is an odd number. For example, there are seven or nine CPUs.
The solution is simple enough:  reconfigure the server to use only 1 core when installing, then reconfigure the server to use all desired cores.  This is done by running the msconfig command, setting the number of processors (/NUMPROC) in the Advanced Options area of the BOOT.INI tab, and rebooting.  A sketch of the resulting boot entry is shown below. 
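For reference, here is roughly what the BOOT.INI entry looks like with the switch applied (the ARC path and description vary by machine; /NUMPROC is the documented Windows 2003 boot switch that msconfig sets for you):

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise x64 Edition" /fastdetect /NUMPROC=1

After SQL Server installs successfully, remove the switch via msconfig and reboot to get all cores back.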

This problem does not occur when installing SQL 2008 or SQL 2008 R2.

Saturday, October 15, 2011

SQL Server Code Name "Denali" is Officially SQL Server 2012

It's official! SQL Server Code Name "Denali" is officially SQL Server 2012. Release Candidate 0 (RC0) will be available at the end of the year, with the RTM version available in the first half of next year. Microsoft MVP Dan English gives us the scoop on his blog.

Wednesday, October 12, 2011

Comparison of SQL Server 2008 R2 Features by Edition

Feature comparison of SQL 2008 R2 by edition.  I am comparing only the Datacenter, Enterprise, Standard and Express Editions.

Sunday, October 9, 2011

SQL Server 2008 SP3 Now Available

Find the SQL Server 2008 SP3 download here.  MSDN discussion on their blog site here
One of the fixes is for the installation setup package - about time!  I have blogged on many install issues related to SQL 2008 and SQL 2008 R2.  I'm hoping a similar service pack will follow shortly for SQL 2008 R2.

Wednesday, September 28, 2011

How to Restore SYSADMIN Access to SQL Server When SYSADMIN Access is Lost

One of our DBAs installed SQL Server 2008 R2 as a clustered instance on a Windows 2008 R2 cluster with Integrated Security.  The only account with access to SQL Server was his personal account, and he set his default database to a user database.  The user database became corrupt.  The result was that he could not log on to SQL Server.  Neither could anyone else. 

The solution is rather easy and documented in BOL and MSDN here.  Simply add the -m startup option.  This option starts SQL Server in single-user mode (only a single user can connect), the CHECKPOINT process is not started, and any local administrator connecting is assigned the sysadmin fixed server role. 

To add my logon id to the server as a login with the sysadmin fixed server role, I added -m to the startup options and re-started the server.  I connected to the SQL instance with SSMS, added my logon id as a login, and added myself to the sysadmin fixed server role. 

I fixed the DBA's logon id by changing his default database (currently set to nothing, as his default database was corrupt) to master.  I reminded the DBA to use Windows groups with role-based security, where the group's default database is master, as that is our policy.
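A minimal sketch of the whole sequence, assuming a default instance (the service, domain, and account names are illustrative):

net stop MSSQLSERVER
net start MSSQLSERVER /m

-- Connect as a local administrator via sqlcmd or SSMS, then:
CREATE LOGIN [DOMAIN\MyAccount] FROM WINDOWS;
EXEC sp_addsrvrolemember @loginame = N'DOMAIN\MyAccount', @rolename = N'sysadmin';

-- Repair a login whose default database is gone:
ALTER LOGIN [DOMAIN\BrokenDba] WITH DEFAULT_DATABASE = master;

Remove -m from the startup options and restart the service when finished.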

Monday, August 29, 2011

Clustered SQL 2008 R2 Installation Failure

Installing SQL 2008 R2 on a Windows 2008 R2 cluster can be a challenge!   

An example of the problems experienced during installation is when one of the DBAs from my team had a failed SQL Server clustered installation.  Specifically, SQL Server installed on the initial node, but failed to add itself to the second node of a two-node cluster.  Attempts to fix the installation were not initially successful, as the process seemed to end after the Microsoft SQL Server Setup program ran, with no further action.

To fix the problem, I performed the following steps:
  • Uninstall Microsoft SQL Server Setup
  • Delete the C:\Program Files\Microsoft SQL Server directory
  • Delete HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server hive
The problem seems to be with the SQL Server Setup program that executes prior to any install or fix of SQL Server 2008.  I'm not sure, but there seems to be residual information from previous install attempts that keeps additional install attempts from being successful.  There are a number of articles and blog posts on this subject - enough of them that this program seems to be problematic for many people.

We also had another issue.  The cluster check failed on the node where we were adding the SQL Server node to the SQL clustered instance.  The cluster check succeeded on the other nodes, so I'm working on the assumption that the cluster is actually fine.  How to get past this cluster check?  Use the undocumented setup option SkipRules:
Setup /SkipRules=Cluster_VerifyForErrors /Action=AddNode
This setup option is referred to in KB 953748.

Friday, August 26, 2011

SQL 2008 R2 Install Error: 2337

We seem to be getting error 2337 on a regular basis when installing SQL Server 2008 R2 x64.  Very frustrating.  It seems that when one issue occurs (bad password, invalid entry, etc.), the likelihood of getting a 2337 increases.  The best solution for the 2337 is a Microsoft KB article found here.

Wednesday, August 17, 2011

Benefits of Using soft-NUMA

SQL Server is Non-Uniform Memory Access (NUMA) aware.  SQL Server uses NUMA out of the box without additional configuration.

SQL Server also allows you to group CPUs into nodes.  This is referred to as soft-NUMA. You usually configure soft-NUMA when you have many CPUs and do not have hardware NUMA, but you can also use soft-NUMA to subdivide hardware NUMA nodes into smaller groups.  Only the SQL Scheduler and SQL Network Interface use soft-NUMA.

Why use soft-NUMA?

The benefits of soft-NUMA include reducing I/O and lazy writer bottlenecks on computers with many CPUs and no hardware NUMA.  Configuring four soft-NUMA nodes provides four I/O threads and four lazy writer threads, which could increase performance.
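Soft-NUMA is configured in the registry.  A sketch for splitting eight CPUs into four two-CPU nodes, based on the documented SQL Server 2008 registry layout (the version key, 100 here, and the CPU masks are illustrative for this assumed CPU layout):

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node0]
"CPUMask"=dword:00000003
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node1]
"CPUMask"=dword:0000000C
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node2]
"CPUMask"=dword:00000030
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node3]
"CPUMask"=dword:000000C0

A restart of the SQL Server service is required for the node configuration to take effect.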

Details on MSDN here.

Wednesday, August 3, 2011

FILESTREAM Use with High Availability

FILESTREAM is not supported by all replication/HA features.  Technet details are here.  A brief summary:
  •   Database Snapshots:  NO, NOT supported
  •   Mirroring:  NO, NOT supported
  •   Replication:  YES, Is supported
  •   Log Shipping:  YES, Is supported
  •   Failover Clustering:  YES, Is supported
Note that when FILESTREAM is used with clustering, the FILESTREAM filegroup must be on a clustered shared resource.
 

Database TRUSTWORTHY Property

The TRUSTWORTHY database property indicates whether the SQL instance trusts the database to access objects external to the database.  When the TRUSTWORTHY property is set to ON, it allows database modules that use an impersonation context to have access to resources outside the database. 

For example, user-defined functions, stored procedures, or assemblies may access resources outside the database.   Also, impersonation of database level principals or users (known as extending database impersonation) by using the EXECUTE AS USER statement allows access outside the database.

When migrating a database that requires external access (and the TRUSTWORTHY property is ON), you need to reset the property after migration.  Since the SQL instance does not trust databases by default, all databases are set to TRUSTWORTHY OFF when created or attached. 

To set the TRUSTWORTHY property:
ALTER DATABASE <database name>
SET TRUSTWORTHY { ON | OFF }


Modules that are designed to access resources outside the database, and assemblies with either the EXTERNAL_ACCESS or UNSAFE permission setting, have additional requirements in order to run successfully.
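A quick way to see which databases currently have the property set, using nothing beyond the documented sys.databases catalog view:

SELECT name, is_trustworthy_on
FROM sys.databases;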

Sunday, July 31, 2011

Changing a Read-Only File Group to Read-Write with a Partial Backup Sequence

If you have a database using the simple recovery model with both read-write and read-only filegroups, you may want to use differential partial backups.  This backup strategy makes sense (and saves backup disk space and backup time) since you do not need to repeatedly back up data that is read-only.  This backup scenario is called a partial backup sequence.

For example, you may have:
FILEGROUP        TYPE
Primary          Read-write
FG2              Read-only
FG3              Read-only
FG4              Read-write


After doing an initial full backup, you may want to backup the file groups enabled for write by performing a weekly (depending on your business needs for data recovery) partial full backup like this:
BACKUP DATABASE <database name> READ_WRITE_FILEGROUPS TO <backup device>

and a daily (also depending on your business needs for data recovery) differential partial backup like this:
BACKUP DATABASE <database name> READ_WRITE_FILEGROUPS TO <backup device> WITH DIFFERENTIAL

What if you change a filegroup from read-only to read-write?  What do you need to do to your backup jobs to account for the change?

A partial backup sequence must begin with a base partial backup before differential partial backups can be performed. Therefore, to include a filegroup that has been changed from read-only to read-write in subsequent partial backups, you need to create a new partial base backup. You create a partial base backup by specifying READ_WRITE_FILEGROUPS in a FULL backup.  After that point, you have a base from which to recover, and you can include differentials as needed.  A sketch of the sequence follows.
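For example, after switching a filegroup to read-write, the sequence restarts with a new base (the database, filegroup, and device names are illustrative):

ALTER DATABASE Sales MODIFY FILEGROUP FG3 READ_WRITE;

-- New partial base backup; differential partial backups can follow from here
BACKUP DATABASE Sales READ_WRITE_FILEGROUPS
    TO DISK = N'X:\Backups\Sales_partial_base.bak';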

Tail-Log Backups

Tail-Log Backups are needed when you wish to perform a database recovery on a database using the full or bulk-logged recovery model and you are recovering to a point after the last full, differential, or transaction log backup.  If you are recovering to a point in time prior to the last full, differential, or transaction log backup, then a tail-log backup is NOT required.

A Tail-Log Backup is also the first step in performing a manual failover of a primary log shipping server to a secondary server. 

A Tail-Log backup is simply a log backup taken using the BACKUP LOG statement.  It is called a Tail-Log backup because it is backing up the remainder, or tail, of the transaction log. What differentiates a Tail-Log backup from any other log backup?  When taking a Tail-Log backup, you need to ensure that no further transactions occur after you take the backup.

If the database you are recovering is not damaged and is online and available for connections, you should use the BACKUP statement with the NORECOVERY option.  Using the NORECOVERY option puts the database into the restoring state, where no changes are allowed:
BACKUP LOG <database_name> TO <backup_device>
    WITH NORECOVERY 

If the database is damaged, you should use the CONTINUE_AFTER_ERROR option: 
BACKUP LOG <database_name> TO <backup_device>
    WITH CONTINUE_AFTER_ERROR
This option allows you to successfully produce a transaction log backup even if the database file is damaged.  Per the MSDN article, the log backup will succeed if:  "the log files are undamaged, the database is in a state that supports tail-log backups, and the database does not contain any bulk-logged changes".

Thursday, July 28, 2011

Preview of Windows Server "8" and Denali Release Date Hint

Interesting video of a presentation made to Microsoft business partners during the 2011 Microsoft Worldwide Partner Conference (July 10-14, 2011).

At 37:55 is a first-time sneak preview of Windows Server "8". 

At 41:38 is a presentation on SQL Server "Denali" with heavy emphasis on BI. 

Most interesting is that the presenter says SQL Server "Denali" will be available the "first half of next year".  Based on this info, my guess is that SQL Server "Denali" will be RTM'd March of 2012.

If you're interested in SQL Server "Denali", it is well worth the time.

Thursday, July 21, 2011

MS_DataCollectorInternalUser User with No Login

Found what I thought was an orphaned user, MS_DataCollectorInternalUser, in the MSDB databases on our SQL2008 and SQL2008R2 servers.  My assumption is that it has to do with the Data Collector.  What I did not realize is that this user account is not an orphaned user at all.  It is a user account intentionally created with no SQL Login.

As of SQL Server 2005, a user can be created without a login - see the MSDN reference.  The example given in this link for the use of this kind of user with no login is to grant access to the user and then grant impersonation of that user to an SQL Login.  From the same link above:

USE AdventureWorks2008R2 ;
CREATE USER CustomApp WITHOUT LOGIN ;
GRANT IMPERSONATE ON USER::CustomApp TO [adventure-works\tengiz0] ;
GO

To use the CustomApp access rights:

EXECUTE AS USER = 'CustomApp' ;
GO

I have not found a way to differentiate a user which is truly orphaned (once upon a time had a login) from a user created without a login. I've left messages on Microsoft forums; however, no one seems to know how to differentiate between orphaned users and users created with no login.

Thursday, July 14, 2011

Utility Control Point in SQL 2008 R2

Monitoring Instances of SQL Server in the SQL Server Utility is something new in SQL Server 2008 R2.  The Utility Control Point (UCP), as it is called, looks to be incomplete as implemented in this release of SQL Server.  The purpose of UCP is to monitor the resource health of multiple SQL Servers.  The MSDN reference is here.  I have not spent much time with UCP, but it does look like it will fit nicely with the resource data collection in the Centralized Management Server MDW and the new Microsoft System Center Advisor offering (SCA).

Friday, July 8, 2011

Startup Stored Procedure

We were audited by our internal IT auditors who asked some good questions.  One question I could not immediately answer was:  "Why do some of your SQL instances have the 'scan for startup procs' configuration option set"?  Since we have no stored procedures configured (using sp_procoption) to run at startup (that I'm aware of), I did not know how to answer the question. 

I executed the following to determine what stored procedures (if any) are executing at startup:
select name from sys.sysobjects
where xtype = 'P' and
      OBJECTPROPERTY(id, 'ExecIsStartup') = 1;

The result was:  sp_MSrepl_startup

Replication sets sp_MSrepl_startup as a stored procedure to execute at SQL instance startup. 
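For reference, this is how a procedure gets flagged to run at instance startup with sp_procoption (the procedure name here is illustrative):

EXEC sp_procoption @ProcName = N'usp_MyStartupProc'
                 , @OptionName = 'startup'
                 , @OptionValue = 'on';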

Full explanation of the OBJECTPROPERTY function is here.

Mystery solved.

Thursday, July 7, 2011

Windows 2008 R2 Cluster Restart Issue

We had two of our Windows 2008 R2 clusters, each hosting two SQL 2008 R2 Enterprise Edition instances, fail over one of the SQL resource groups in each of the two clusters at about the same time (within seconds).  The event log contained an Event ID 1135 on one cluster and an Event ID 4201 on the other cluster as the pertinent system events just prior to failover.  How do you analyze these, and is there a common issue?

To analyze Event ID 1135, see the Technet article here.  Essentially, I ran the "Validate this Cluster..." configuration function and found that a NIC intended for backups only was being used by the cluster as a network resource.  In our environment, we specifically configure the backup NICs to disallow node-to-node access (they connect only to backup devices), so this is most likely the cause.  I also found on one of the nodes that the backup NIC driver needed to be re-installed.  I'm guessing the NIC drivers were updated and the change to the cluster's network resources was a side effect.

To analyze Event ID 4201, the Technet article was of no use, since it states the message is a NIC start message while the message accompanying the event stated the NIC could not be started.  I'll have to report that one.  Further searching turned up similar messages wherein the NIC was causing problems.  Looking at all NICs on the servers in the cluster, I saw no problems.  The message contained a GUID which referenced the NIC.  Searching the registry, I found the problem NIC was the Microsoft Failover Cluster Virtual Adapter.

I am still investigating this one.

In the meantime, I've disabled the use of all backup NICs from the clusters and validated that all clustered resources do function and properly failover to/from each node.

<Update>The network crew came clean and stated they had restarted a network appliance (firewall), which caused the SQL instances on both clusters to restart.  Mystery solved.

Max Server Memory in SQL 2008

Yagnesh Shah (blog) does a nice job of giving an easy-to-follow formula for determining the max server memory advanced option setting for SQL 2008 in his post here.  I have always added a little more memory than what was required by the OS and other apps running on the server.  Yagnesh's formula allows additional memory space for worker threads - something I had not considered.
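Once you have a number, applying it is straightforward (the 56 GB value below is purely illustrative - use whatever the formula yields for your server):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 57344;  -- 56 GB, illustrative
RECONFIGURE;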

Tuesday, July 5, 2011

Windows 2008 Cluster Log Location

Looking for the Windows Cluster Log in Windows 2008 or Windows 2008 R2?  It's no longer a simple text file in the windows cluster directory as it was with Windows 2003.  There is a much more sophisticated system of tracing/logs in Event Tracing for Windows (ETW), which is now used in Windows 2008. 

If you want to extract a log file similar to the old cluster.log, perform the following:
1.   Go to a command prompt.
2.   Type: Cluster /Cluster:yourclustername log /gen /copy "C:\temp"
3.   Navigate to the C:\temp directory and there you will find the .log files for each node of your cluster.

The cluster log can now be opened in Notepad.

Full details of the new ETW feature of Windows used by the failover clustering feature are provided on MSDN and Technet.

Monday, July 4, 2011

SQL Server DBA Interview Questions and Answers

Pinal Dave (blog) does a great job of discussing SQL Server DBA interview questions and answers here.  I like his overall approach to why both experienced SQL Server DBAs and newbies need to review these questions.  His answers are accurate and complete.  Worth the read.

Sunday, July 3, 2011

Moving TEMPDB files

TEMPDB files cannot be moved when SQL Server is online.  The way to move the files is to execute an ALTER DATABASE command which modifies the TEMPDB FILENAME (and location) in the catalog.  The next time the SQL Server is started, the files are created in the new location.   See the MSDN reference here.

The following script creates the ALTER DATABASE commands, which I then submit to move the TEMPDB files:  

SELECT 'ALTER DATABASE tempdb MODIFY FILE (NAME = ' +
name +
', FILENAME = ''M:\MSSQLDATA\' +
name +
'.ndf'');'
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
GO 

The resultant query becomes the script to move the files:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'M:\MSSQLDATA\tempdev.ndf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'M:\MSSQLDATA\templog.ndf');
ALTER DATABASE tempdb MODIFY FILE (NAME = TempD2, FILENAME = 'M:\MSSQLDATA\TempD2.ndf');
ALTER DATABASE tempdb MODIFY FILE (NAME = TempD3, FILENAME = 'M:\MSSQLDATA\TempD3.ndf');
ALTER DATABASE tempdb MODIFY FILE (NAME = TempD4, FILENAME = 'M:\MSSQLDATA\TempD4.ndf');

Be sure to modify the FILENAME part of the statement to match your environment's drives/folders, and modify the resultant code to change the .ndf to the .mdf/.ldf file extension as appropriate for each file (the proper file extension is not required by SQL Server, but it helps DBAs quickly identify file types).

The old files are still there and should be deleted to conserve space as they are no longer used.
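After the instance restarts, a quick check that the files landed where expected (sys.master_files is the same documented catalog view used in the script above):

SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');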

Saturday, July 2, 2011

SQL 2008 R2 Replication Default Article Properties

SQL 2008 R2 Replication, as a default behaviour, does not replicate non-clustered indexes.
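If you do want non-clustered indexes created at the subscriber, the behaviour can be overridden when the article is defined.  Per the sp_addarticle documentation, schema option 0x40 generates corresponding nonclustered indexes; the publication and table names below are illustrative, and 0x53 simply combines 0x40 with the creation-script (0x01), change-propagation (0x02), and clustered index (0x10) bits:

-- Run in the publication database
EXEC sp_addarticle
      @publication   = N'MyPublication'
    , @article       = N'MyTable'
    , @source_object = N'MyTable'
    , @schema_option = 0x53;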

Saturday, June 25, 2011

System Center Advisor codename Atlanta

New product from Microsoft called System Center Advisor (codename Atlanta); the Release Candidate (RC) is available here.  It was announced at the recent March PASS Summit. 

I have not downloaded the software; however, just looking at the web site, it looks like it is an SQL 2008 replacement for the SQL Server Health and History tool (SQLH2), which supports SQL 2005.

Real-World Experience on SQL Azure

Megan Keller and Michael Otey have written a great article in SQL Server Magazine about real world experience with migration to SQL Azure.   In a well written interview with Stephen Yu, the vice president of development at Quosal,  the company's experience was generally positive.   Both the good and not-so-good were revealed with insightful interview questions. The most surprising aspect is the company’s experience with SQL Azure outages.  Worth the read.

Sunday, June 19, 2011

Striped Backups Help VLDB Backup Performance

An often overlooked performance boost to DBA operations is the use of striped backups on large databases.  Combined with backup compression, striped backups provide a significant reduction in time required to perform backups. 
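A minimal sketch - one backup striped across three files on separate drives, with compression (SQL 2008+ syntax; the database name and paths are illustrative):

BACKUP DATABASE MyVLDB
    TO DISK = N'G:\Backups\MyVLDB_1.bak',
       DISK = N'H:\Backups\MyVLDB_2.bak',
       DISK = N'I:\Backups\MyVLDB_3.bak'
WITH COMPRESSION, STATS = 5;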

Refer to the Technet article on backups, the SQLCAT article on VLDB backups and subsequent whitepaper for more information.

Saturday, June 18, 2011

Instant File Initialization

Instant File Initialization in SQL Server 2005/2008 is an easy performance win for the SQL DBA.  This feature allows SQL Server to allocate space (only data, not log space) much more quickly.  Besides making initial database creation faster, it has the advantage of growing database data files much faster as well.  This is an operational win with little downside.
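To enable it, grant the SQL Server service account the "Perform volume maintenance tasks" right (SE_MANAGE_VOLUME_NAME) in the Local Security Policy and restart the service.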

Kimberly Tripp has an excellent blog post on using this feature.  Paul Randal has a great follow-up on his blog, where he dispels some common myths about instant file initialization. 


 

Tuesday, June 14, 2011

Database Tuning Without Impacting Production Servers

The Database Engine Tuning Advisor has a feature which allows you to offload analysis to a test SQL Server.  Here’s how it works:
  • create a test SQL Server
  • create a workload from the production SQL Server
  • create an XML input file that references the workload and the name of the test server
  • launch dta on the production server using the input file

From the MSDN article:
The production server will use the information in the XML file to locate the test server and offload statistics and other data necessary for analysis. This technique allows the optimization to be based on the production server's hardware configuration, while reducing the processing required by the production server.   Also, storing a workload in a table negatively impacts performance of the production server, so you should always store the workload in a file when creating a SQL Server Profiler trace on a busy production server.
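A sketch of what the XML input file can look like (the element names follow the documented dta XML schema; the server, database, and workload path are illustrative):

<?xml version="1.0" encoding="utf-16" ?>
<DTAXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/dta">
  <DTAInput>
    <Server>
      <Name>ProdServer</Name>
      <Database>
        <Name>Sales</Name>
      </Database>
    </Server>
    <Workload>
      <File>D:\Workloads\prod_trace.trc</File>
    </Workload>
    <TuningOptions>
      <TestServer>TestServer01</TestServer>
      <FeatureSet>IDX</FeatureSet>
      <Partitioning>NONE</Partitioning>
    </TuningOptions>
  </DTAInput>
</DTAXML>

It would then be launched from the production server with something like:
dta -E -S ProdServer -ix D:\Workloads\input.xml -s TuningSession1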

Monday, June 13, 2011

Extended Events Session on SQL Server

SQL Server Extended Events (Extended Events) is a general event-handling system for server systems. The Extended Events infrastructure supports the correlation of data from SQL Server, and under certain conditions, the correlation of data from the operating system and database applications. In the latter case, Extended Events output must be directed to Event Tracing for Windows (ETW) in order to correlate the event data with operating system or application event data.

You can use Extended Events for a wide range of monitoring and troubleshooting scenarios. The following scenarios illustrate a few of the situations where Extended Events can provide valuable data for resolving issues in such areas as:
  • Troubleshooting the cause of working set trimming.
  • Troubleshooting excessive CPU usage.
  • Troubleshooting deadlocks.
  • Correlating request activity with Windows ETW logs
To use SQL Server Extended Events, see this MSDN article.
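As a minimal sketch, here is a SQL 2008 session that captures waits longer than five seconds to a ring buffer (the event and target come from the documented sqlos/package0 packages; the session name and threshold are illustrative):

CREATE EVENT SESSION LongWaits ON SERVER
ADD EVENT sqlos.wait_info (
    WHERE duration > 5000)   -- milliseconds
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION LongWaits ON SERVER STATE = START;
GO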

Sunday, June 12, 2011

Stored Procedure Execution Article

If there are transactional replication publication articles that update a large number of rows using a stored procedure, you may want to consider publishing the stored procedure as a stored procedure execution article to perform those updates.  The typical example is updating every item's price by 10%.

CREATE PROCEDURE UpdateItemPriceTenPct AS
UPDATE ItemInfo SET Price = Price * 1.10;

In this example, every row is updated and every update is transmitted to the target subscribers.  With a large number of rows, this can put a burden on performance. 

If this is a stored procedure execution article, then just the stored procedure execution gets transmitted to each subscriber, which then updates its rows locally.  A sketch of the article definition follows.
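Per the sp_addarticle documentation, the article type 'serializable proc exec' replicates the procedure's execution only when it occurs within a serializable transaction (the publication name here is illustrative):

EXEC sp_addarticle
      @publication   = N'MyPublication'
    , @article       = N'UpdateItemPriceTenPct'
    , @source_object = N'UpdateItemPriceTenPct'
    , @type          = N'serializable proc exec';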

Technet gives details here.

Saturday, June 11, 2011

Performance Tuning is a Science, NOT an Art

The field of SQL Server performance tuning is vast.  Many books, training classes, blogs, articles, and careers are based on this subject.  All too often, I hear that performance tuning is an art and not a science.  This is wrong thinking.  Sometimes I see SQL administration professionals analyze a server and find no performance issue, yet I find a glaring problem within minutes of signing onto the server.  There have also been many examples where I analyzed the root cause of a performance issue only to find I was wrong about the root cause.  There has to be a clearly defined method of analyzing performance issues such that SQL administration professionals can confidently and consistently determine cause and effect.

I created a separate blog to look at this issue.

Friday, June 10, 2011

Recovering Transactional Consistency Across SQL Servers

In order to maintain transactional consistency across separate SQL Servers (or across databases on the same server), SQL Server uses the Microsoft Distributed Transaction Coordinator (MS DTC). 
 
In order to have a backup/recovery strategy that ensures transactional consistency in the event of database/system issues, transactions need to use marks.  When recovering transaction logs, use the WITH STOPATMARK option.

This strategy is required since SQL Server cannot restore transactional consistency across related databases with point-in-time recovery.
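A sketch of the idea (the database, table, and backup names are illustrative) - mark the cross-database transaction, then stop every log restore at the same mark:

-- Mark a transaction that spans related databases
BEGIN TRANSACTION NightlyClose WITH MARK N'Nightly close';
    UPDATE SalesDb.dbo.Ledger SET Closed = 1;
    UPDATE AuditDb.dbo.Ledger SET Closed = 1;
COMMIT TRANSACTION NightlyClose;

-- During recovery, stop each database's log restore at the same mark
RESTORE LOG SalesDb FROM DISK = N'X:\Backups\SalesDb_log.trn'
    WITH STOPATMARK = N'NightlyClose', RECOVERY;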

Wednesday, June 8, 2011

Transparent Data Encryption and Impact on Performance

Transparent Data Encryption (TDE), at first blush, appears to be a great solution for real-time encryption of databases and database backups with little administrative effort.  Applications do not have to be modified to access data while using this feature. 

A good overview showing implementation of TDE is found here.

What are the performance implications?  Are there any performance issues related to other non-encrypted databases on the same SQL Instance as the encrypted database?  What is the effect on page/row compression and backup compression?

DatabaseJournal.com posts an excellent review of TDE and performance tests.  Summarizing their listed caveats with TDE:
  • TDE is a SQL Server 2008 Enterprise-only feature.
  • Performance hit is largely dictated by the application workload; in some of the testing, the overhead was measured to be less than 5%.
  • When you turn on TDE for one database in an instance, tempdb is encrypted for all databases on that instance.
  • Encrypted data compresses significantly less than equivalent unencrypted data; using TDE and backup compression together is not recommended.
  • While a re-encryption scan for a database encryption operation is in progress, maintenance operations to the database are disabled.
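For reference, the documented sequence for turning TDE on (the key password, certificate, and database names are illustrative):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<strong password here>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = N'TDE certificate';
GO
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
GO
ALTER DATABASE MyDatabase SET ENCRYPTION ON;
-- Back up the certificate and its private key immediately; without them the
-- encrypted database cannot be restored on another instance.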

Saturday, June 4, 2011

Allowing CLR Code Access to Windows OS Resources with Minimum Permissions

SqlContext.WindowsIdentity can be used to retrieve the token associated with the current security context. Managed code in EXTERNAL_ACCESS and UNSAFE assemblies can use this method to retrieve the context and use the .NET Framework WindowsIdentity.Impersonate method to impersonate that context.

In other words, use SqlContext.WindowsIdentity to allow CLR code running in SQL Server to access Windows OS resources without granting excess privileges to SQL Server.

Common Criteria (CC) and SQL Login Auditing

SQL login auditing and other security features are implemented as part of Common Criteria (CC) compliance.  The Common Criteria for Information Technology Security Evaluation's Common Criteria Certification supersedes several older evaluation schemes, including the U.S. Trusted Computer Systems Evaluation Criteria (TCSEC), which specified the well-known Class C2 rating.  Microsoft discusses the Common Criteria Certification, and explains the implementation of that standard as a configuration item in SQL 2008 R2.
When the configuration item is set to 1:
  • residual information protection (RIP) is turned on
    • RIP requires a memory allocation to be overwritten with a known pattern of bits before memory is reallocated to a new resource.
  • login auditing is enabled
    • Information about the last successful login time, the last unsuccessful login time, and the number of attempts between the last successful and current login times is made available. These login statistics can be viewed by querying the sys.dm_exec_sessions dynamic management view.
  • table-level DENY takes precedence over a column-level GRANT
    • this is a change in the default behaviour
To implement (only in SQL2005/2008 Developer/Enterprise):

sp_configure 'show advanced options', 1
go
reconfigure
go
sp_configure 'common criteria compliance enabled', 1
go
reconfigure
go
Restart the server.

Friday, June 3, 2011

Worst Physical Reads - SQL 2005 / SQL 2008


We are seeing large average disk queue lengths (200+) on one of our non-clustered SQL2005 servers.  We used the following query to identify the statements causing the high IO:

SELECT TOP 10
    creation_time, last_execution_time, total_physical_reads,
    total_logical_reads, total_logical_writes, execution_count,
    total_worker_time, total_elapsed_time,
    total_elapsed_time / execution_count AS avg_elapsed_time,
    SUBSTRING(sqlText.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(sqlText.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS SQL_Statement
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS sqlText
ORDER BY total_physical_reads DESC;

Saturday, May 28, 2011

Setup.rll Contains an Error

Problems with SQL2008 to SQL 2008 R2 Upgrade.

After an upgrade to SQL 2008 R2 from SQL 2008 failed, I tried re-starting the upgrade.  The error message clearly states the setup.rll file is the problem.

I replaced the setup.rll file in my install folder (E:\SQL2008R2\resources\1033\setup.rll), which did not resolve the issue.  My assumption was that when running setup from the install directory, fresh copies would be sent to the target installation folder (c:\Program Files\Microsoft SQL Server\100\Setup Bootstrap…etc…). 

After replacing my install copy of software with binaries from another ISO file and experiencing the same failed results, I finally realized that I needed to copy the setup.rll file manually to the target installation folder.  This solved the problem.

Thursday, May 26, 2011

Let's Build our DBA Community

The Exceptional DBA award rocks! 
It's a cool way to recognize your peers.  It helps build the DBA community.

Wednesday, May 25, 2011

Copy-Only Backups

Copy-Only Backups are useful as special or one-off backups between scheduled full/differential/transaction log backups.  They are useful for special-purpose backups of client DBA databases where you don't want the backup to become part of the continuous LSN chain used for recovery purposes.

Syntax:   BACKUP DATABASE database_name TO <backup_device> … WITH COPY_ONLY

SQL 2008 R2 / Windows 2008 R2 Cluster install

Discovered there are two options for installation of SQL 2008 R2 in a Windows 2008 R2 Cluster.  We currently install with option 1.  Here's a description of both options:

Option 1:   Integrated Installation with Add Node

SQL Server integrated failover cluster installation consists of the following steps:
·    Create and configure a single-node SQL Server failover cluster instance.  When you configure the node successfully, you have a fully functional failover cluster instance.  At this point, it does not have high availability because there is only one node in the failover cluster.
·    On each node to be added to the SQL Server failover cluster, run Setup with Add Node functionality to add that node.

Option 2:   Advanced/Enterprise Installation

SQL Server Advanced/Enterprise failover cluster installation consists of the following steps:
·     On each node that will be a possible owner of the new SQL Server failover cluster that you are creating, follow the Prepare Failover Cluster setup steps listed in the Prepare section.  After you run Prepare Failover Cluster on one node, Setup creates the Configuration.ini file that lists all the settings that you specified.  On the additional nodes to be prepared, instead of following these steps, you can supply the auto-generated Configuration.ini file from the first node as an input to the Setup command line.  This step prepares the nodes to be clustered, but there is no operational instance of SQL Server at the end of this step.
·    After the nodes are prepared for clustering, run Setup on one of the prepared nodes (preferably the node that owns the shared disk) with the Complete Failover Cluster functionality.  This step configures and completes the failover cluster instance.  At the end of this step, you will have an operational SQL Server failover cluster instance, and all the nodes that were prepared previously for that instance will be possible owners of the newly created SQL Server failover cluster.

From Technet.

Sunday, May 22, 2011

SQL Server-Level Principal Login Certificate

On one of our SQL2005 instances, a certificate used for an SQL Server-level principal login, ##MS_SQLAgentSigningCertificate##, is currently not trusted as a CA root cert.  It needs to be installed/added back as a Trusted Root CA.


Technet discusses these certificates.  I have not found much on these certs except an incident where MSDB was moved to another instance and permissions needed to be fixed.

My assumption is the cert store on the server lost its reference to the CA cert and it just needed to be added back in.  I added the cert back in and that solved the problem.

Saturday, May 21, 2011

Cluster Problems when Upgrading to SQL2008

While upgrading from SQL2005 to SQL2008 (not R2) in a 2-node, 2-instance Windows 2003 R2 clustered environment, one of the DBAs ran into a problem where one of the SQL resource groups would not move from one node to the other.  Error message:

An error occurred attempting to move the '<groupname>' group.
A cluster node is not available for this operation.
Error ID: 5005 (0000138d).


Found this, which gave the answer.  One of the resources (the network name) did not have the node we were moving to in its preferred node list.  Once that was corrected, the resource group moved as it should.

Saturday, May 7, 2011

Transactional Replication and Primary Key

When creating a publication, I discovered one of the tables was not allowed as an article in the publication since it had no primary key:

Create table VendorData (
  colUniqueID1    int NOT NULL,
  colUniqueID2    int NOT NULL,
  colFiller1      varchar(1000),
  colInclude1     char(4),
  colInclude2     int,
  colFiller4      varchar(1200),
  colClusterCol   int NOT NULL);
go

Create unique index ix_VendorData_colUniqueID1_ID2
      on VendorData(colUniqueID1, colUniqueID2)
      include (colInclude1, colInclude2);
go

Create clustered index ix_VendorDataClusterCol on VendorData(colClusterCol);
Go

I was not sure how to create the primary key for the purpose of replication without changing the table in a way that would change how query results were returned.

The only difference (that I am aware of) between a primary key and a unique index is that a unique index allows NULLs and a primary key does not.  Since the unique index in this case is composed of two columns that each have the NOT NULL constraint, the unique index is functionally the same as a primary key and should produce the same query results.  However, the unique index in this scenario is also a covering index.  I do not know a way of creating a primary key that is also a covering index.

The solution I came up with was to keep the unique covering index and add the primary key constraint to the table using the columns in the unique index as components of the primary key – without the included columns of course.

Alter table VendorData WITH NOCHECK
Add constraint PK_VendorData_colUniqueID1_colUniqueID2 PRIMARY KEY (colUniqueID1, colUniqueID2)

This did work.  I was not sure if a new index would actually be created or if the SQL parser would realize the index already existed and point the primary key to the existing unique index (with the caveat that the columns retain their NOT NULL constraint).  The constraint was created in seconds on a table with >120 million rows.  My guess is that the index for the primary key was not re-generated, but used the existing unique index.

Comments welcome.

Monday, May 2, 2011

What's a hierarchyid?

While studying for the MCITP SQL 2008 Developer exam, I ran across a data type I'd never seen before (and I'm sure I'll discover more in my career):  the hierarchyid.  References are in Technet and an informative blog by Simon Ince.

This CLR-based data type is designed to provide a simple way to represent hierarchical data - a bill of materials or an employee org chart are examples. 

It is interesting that recursive Common Table Expressions were emphasized in the MCTS SQL Developer exam, and the hierarchyid data type replaces this construct with something much easier to use.
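A minimal sketch of the data type in use (the table and names are illustrative; GetRoot, GetDescendant, and ToString are documented hierarchyid methods):

CREATE TABLE dbo.Org (
    Node         hierarchyid PRIMARY KEY,
    EmployeeName nvarchar(100) NOT NULL);

DECLARE @root hierarchyid = hierarchyid::GetRoot();
INSERT dbo.Org VALUES (@root, N'CEO');
INSERT dbo.Org VALUES (@root.GetDescendant(NULL, NULL), N'VP of Sales');

-- '/' is the root, '/1/' its first child, and so on
SELECT Node.ToString() AS Path, EmployeeName FROM dbo.Org;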

Sunday, May 1, 2011

Studying for Exam 70-451

Recently earned my MCTS SQL 2008 Database Developer certification using the Microsoft self-paced training kit.  Now studying for Microsoft Certification exam 70-451, which will earn my MCITP SQL 2008 Database Developer certification.  My goal is to take the exam 5/25/11.

There's no Microsoft self-paced training kit, so I'm putting together everything I can find on this exam.  The Microsoft certification web site gives a good explanation of the skills tested.  I'm using Technet/MSDN/BOL as my guide on the subject.

Practice tests are not available from MeasureUp, but I found one from Kaplan.

Friday, April 15, 2011

How to find Page Splits by Table/Index

This script provides information on which table/index is performing page splits.  I'm surprised there's apparently no DMV for this information.  Sorry, I do not have a reference for where I originally got this.

select AllocUnitName, count(AllocUnitName) as [Splits]
from ::fn_dblog(null, null)
where Operation = N'LOP_DELETE_SPLIT'
  and parsename(AllocUnitName, 3) <> 'sys'
group by AllocUnitName;

Monday, April 11, 2011

How to View Index Fragmentation

The scripts below (provided by Cory Brown of Microsoft) were used to determine how badly the indexes were fragmented:

-- ==========================================================================
-- Description: The purpose of this script is to look at the fragmentation of
--              existing indexes in the database and determine if the index
--              should be REBUILT, REORGANIZED, or statistics updated.
-- Date:        September 22, 2010
-- Author:      Cory Brown
-- ==========================================================================

--Rebuild Index recommended if fragmentation is above 30%
SELECT    DB_NAME(database_id),*
FROM      sys.dm_db_index_physical_stats (NULL, NULL, NULL, NULL, NULL)
WHERE     avg_fragmentation_in_percent > 30
AND       page_count > 1000
ORDER BY  avg_fragmentation_in_percent DESC

--Reorganize Index recommended if fragmentation is between 15% and 30%
SELECT    DB_NAME(database_id),*
FROM      sys.dm_db_index_physical_stats (NULL, NULL, NULL, NULL, NULL)
WHERE     avg_fragmentation_in_percent < 30
AND       avg_fragmentation_in_percent > 15
AND       page_count > 1000
ORDER BY  avg_fragmentation_in_percent DESC

--UpdateStats recommended if fragmentation is below 15%
SELECT    DB_NAME(database_id),*
FROM      sys.dm_db_index_physical_stats (NULL, NULL, NULL, NULL, NULL)
WHERE     avg_fragmentation_in_percent < 15
AND       page_count > 1000
ORDER BY  avg_fragmentation_in_percent DESC
 

Sunday, April 10, 2011

Performance issues on SQL 2005: Deadlocks, Index Fragmentation

Encountered performance issues on an SQL 2005 x64 server hosting the databases for OCS Server 2007.  Using perfmon, I quickly saw that memory, CPU, network, and IO resources were not constrained.  I then looked at waits, deadlocks, and excessive block times.  Deadlocks were an issue, and long blocking times were experienced for most client connections on the OCS databases (rtc, rtcdyn).

Looking at index fragmentation, I saw problems (>80% fragmentation) for several of the larger tables.  I performed a defrag on those tables, which provided immediate relief.  Relief was short-lived, as the indexes quickly fragmented again - within 10 minutes.  I next looked at page splits to see if that was causing the fragmentation.  At an average of 5 page splits per second, it looked likely.  I wanted to see which indexes were experiencing page splits.  There was no obvious DMV or other method to see this information.