Saturday, October 3, 2009

The HOBT is Moving!

Some exciting news today – I’ve been invited to blog over at SQLBlog.com!  SQLBlog is the home of some of the brightest minds in the SQL Server world, so I’m still a bit shocked that MVP Adam Machanic thought highly enough of my content to ask me to blog on his site.  I’m going to try to keep a low profile over there so he doesn’t realize the mistake that he’s made ;-)

My continual aim has been to reach as many people as possible with the content that I publish on The HoBT, and the decision to move to SQLBlog was made with that in mind.  The sheer number of talented people who write over there has resulted in a pretty significant following, and I’m of the mind that the more people I can help or confuse, the better.  I thrive on feedback, corrections, and smart-assed comments, so I hope that you’ll update your RSS subscription and pick up the blog at its new home over on SQLBlog.com!

RSS Feed: http://sqlblog.com/blogs/aaron_alton/rss.aspx

Main Blog: http://sqlblog.com/blogs/aaron_alton/

Friday, October 2, 2009

So You Want To Learn Business Intelligence, Eh? Part 5: SSAS Deployment and Administration

The fifth installment in this blog series (this one rather belated – apologies if you’ve been following along) is about SSAS Deployment and Administration.

If you’re from a “core” SQL Server background, you’re in for quite the shock when you try to administer an Analysis Services instance.  Very (and I do mean very) little of what you know from traditional database administration is applicable in SSAS.  That said, a lot of what we DBAs spend our time troubleshooting on the relational engine side (locking and blocking, for instance) simply doesn’t exist in Analysis Services.

The authors do a good job of introducing the main tenets of Analysis Services administration.  In order, the reader is walked through the following:

  • Partition Design, which is a bit more involved than the partitioned indices and views of the normal relational world.
  • Aggregation Design, or the process of finding the balance between query response time and cube processing time/space.
  • Cube Deployment, and the various methods available to you to deploy your Analysis Services projects.
  • Processing cube objects.  If you’re into that sort of thing.
  • Locking down SSAS – roles and security assignment.  Much simpler than relational engine security, so there’s no excuse not to do it right ;)
  • High-Availability Strategies and DR.  And NO, they’re not the same thing.
  • Tuning and monitoring.

As is the case in the relational world, administration is a big job and needs to be taken seriously.  The self-paced training kit provides a good introduction to SSAS administration, but you’ll need to supplement it with other sources in order to get a decent understanding of the subject.  Books Online is (as always) very handy in this respect, as are the many BI blogs that you can find on the intertubes.

Tune in tomorrow for data mining.  I mean…. TUNE IN TOMORROW FOR DATA MINING!!!

Tuesday, September 29, 2009

SQL Server MVP Deep Dives: One-of-a-Kind Content That Supports a Great Cause

About a year ago, Paul Nielsen proposed that the SQL Server MVPs collaboratively write a book whose proceeds would benefit a globally recognized charity: War Child International.  Simply put, the MVPs who collaborated to write this book have donated their time, experience, and wisdom to a great cause.  You can see a complete list of the authors involved on the book’s site at http://www.sqlservermvpdeepdives.com

I can’t say enough about the people who were involved in this project.  Writing a book, or chapters of a book, is not an insignificant effort.  It is a very time-consuming task, and the fact that they did it for charity simply makes it that much more amazing.

I will be picking up a copy, and I hope that you’ll do the same.  The e-Book is expected to be available shortly, and the printed version is expected later this year.  If you purchase your copy through http://www.sqlservermvpdeepdives.com, War Child will receive an extra 10% of the purchase through the publisher’s affiliate account.

Saturday, September 19, 2009

Lousy HA is Not Necessarily Better Than No HA

In fact, it can be worse.  And no, I’m not just being a retentive purist.  Hear me out.

High-Availability is the ultimate goal of most corporate IT departments.  It’s annoyingly measured and spouted by counting the “number of nines” that your HA solution delivers.  And there is no shortage of technologies designed to help you obtain your uptime goals: Multipathed SANs, Mirrored Multipathed SANs, Clusters, Geographically-Dispersed Clusters, ESX Farms….the list goes on.  For the most part, the vendors of these technologies aren’t foolish enough to claim that their solutions alone are enough to deliver on the promise of uninterrupted uptime, but somewhere along the line all of the HA-hype has done a number on the collective consciousness of the IT world. 

“You’re running a cluster?  You’re lucky – you must sleep well at night!”

Hahahoho…

At one time, there was an entry barrier into the world of high-availability that stopped most companies from venturing into this space: cost.  Quite simply, it cost a bloody fortune to throw together, for instance, an MSCS cluster.  For starters, the entire cluster solution had to be certified for running an MSCS cluster.  Not the individual components, but the combination of components, right down to your SAN firmware version.  That reduced your choices to a handful of (usually very expensive) solutions.  But this wasn’t so bad – it just weeded out “real” availability requirements from “fake” ones.  Application owners could request five nines of availability, but when they saw the price tag, suddenly they could somehow afford more than five minutes of downtime per year.  That’s not to say that all HA requirements were BS – just that costs and budgets kept requirements in check.

Thanks to the incessant advancement of tecknowledgee, HA solutions are becoming more commonplace.  For instance, with Windows Server 2008 Failover Clustering, you no longer need a certified cluster solution.  Sure, each component needs to be suitable for a cluster, and the entire cluster needs to pass certain validation tests, but the requirements are significantly relaxed from the days of MSCS clusters.

This is great news, right?  It’s the modern day equivalent of a chicken in every pot!  A cluster in every server room!  What’s that?  You don’t have a server room?  Well, a cluster in every supply closet!

OK, I might be getting a bit carried away, but you get the picture.

So where’s the problem with all of this?  I mean…isn’t cheaper better?  And of course it is, I say.  But we can’t get ahead of ourselves. 

There is no shortcut to high-availability. 

Before we start promising to deliver uptime out the wazoo, we have to learn to think in terms of high availability.  From a 40,000 foot view, it’s hard to imagine that implementing a high-availability mechanism can actually cause damage, but it can.  Let’s take a look at some of the side-effects of half-assed HA implementations, shall we?

1. You can actually reduce the uptime of your applications
Say what?  How can implementing a HA technology reduce the uptime of your applications?  Quite easily, actually.  Let’s take a typical “dual everything” server.  Dual NICs, dual power supplies, dual RAID controllers, a direct-attached RAID array.  You can survive the failure of most of the “fragile” bits of your server, replace the offending hardware, and be back in business.  But there are a few single points of failure…the OS for one, the motherboard.  This isn’t ideal – we’ll never make five nines like this!  We need more HA!  We need (drumroll please) a cluster!

So you implement a “simple” active/passive 2 node cluster.  A couple of servers, or more commonly blades.   An iSCSI or fibre channel SAN.  You roll it into production, and everything is beautiful.  Set it and forget it, right?

Wrong.  Do you have anyone qualified to manage that SAN?  Do you have anyone who knows a cluster from their elbow?  Are you monitoring your new technological valhalla?  No?

OK…so let’s say you neglected to enable multipathing on the SAN.  Now you have far more single points of failure than you did with your standalone server.  Or let’s say that, in trying your darndest to create a new LUN, you end up clearing the SAN configuration.  Or a routine firmware flash fails, and your SAN won’t come online.  Or let’s say that you’re not monitoring cluster failover events, so you don’t even notice that node 1 went offline in the middle of the night.  Trust me…you’ll notice when node 2 does the same.
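
Catching that kind of silent failover doesn’t require anything fancy, by the way.  Here’s a minimal T-SQL sketch (assuming SQL Server 2005 or later on a failover cluster) that tells you which node the instance is running on right now – schedule something like it and alert when the answer changes:

-- Which physical node is this clustered instance running on right now?
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode,
       SERVERPROPERTY('IsClustered')                 AS IsClustered

-- All of the nodes in the cluster (SQL Server 2005 and later).
SELECT NodeName
FROM sys.dm_os_cluster_nodes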

You see, a wise person once said “complexity is the enemy of security”.  An equally wise person stole (er, borrowed) the phrase and applied it to availability – for complexity is indeed the enemy of availability.  Without the skills and infrastructure to back up a HA solution, the complexity that you’re introducing can actually decrease your availability!

2. You can cripple your DR strategy
Half-assed HA can hurt your disaster recovery strategy as well, in two important ways:
  i. “We have (clustering/SAN mirroring/multipathing)!  We don’t need a DR strategy!”. 
Wrongo.  Nothing replaces backups, and even the best HA solution will call on backups from time to time.  Don’t believe me?  Well, how does your cluster help you when your database is suspect?  Toldja so.  (A quick backup sanity check is sketched just after this list.)
  ii. “We’re backing up to a network share every five minutes!”.  Great!  Did you check to make sure that said “network share” isn’t using the same storage subsystem that your production servers are hosted on?
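
For the record, here’s the sort of quick sanity check I’m talking about – a rough sketch, assuming your backup history lives in msdb as usual (the file path below is a placeholder):

-- When was each database last backed up?  (Type 'D' = full database backup.)
-- Databases that have never been backed up float to the top.
SELECT d.name AS DatabaseName,
       MAX(b.backup_finish_date) AS LastFullBackup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
   AND b.type = 'D'
GROUP BY d.name
ORDER BY LastFullBackup

-- And is that backup file actually restorable?  (The path is a placeholder.)
RESTORE VERIFYONLY FROM DISK = 'X:\Backups\MyDatabase.bak'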

3. You can create false confidence
A business that has faith in its “highly available” infrastructure will learn to lean on it more and more.  And this is a good thing – it means that the technology we all work so hard to implement and maintain is paying dividends.  But to your users, promised HA is the same thing as real HA.  They trust that, when you promise them 99.999% uptime and <5 minutes of data loss, you know what you’re talking about.  So much so that they may choose not to develop backup plans should the unthinkable happen.  And when it does happen, the business can be seriously injured (or even destroyed), because they can’t meet their contractual obligations, can’t meet reporting deadlines, or can’t ship their product.  Think about it before you promise the sun, the moon, and the stars, because…

4. You can be out of work
You can be the CIO’s poster child, but if you promise something you can’t deliver, and the fallout seriously impacts the business, you had better have your CV up to date.  Have you really thought about what it takes to deliver any measure of high availability?  Let’s take a typical SQL Server database application.  What does it depend upon?  For starters, the obvious: the availability of the database.  Which depends upon the OS and physical hardware being up and functioning.  That’s it, right?  Not quite.  How about:
  - The physical network connecting your application to your clients
  - The application servers and/or terminal servers, and all of their dependencies
  - DNS, DHCP, and Active Directory (what good is an application if your Windows Authenticated users can’t log in?)
  - The security of your application infrastructure (an application that is down because of a hacker or a disgruntled employee is no more available than an application that is down because of an infrastructure failure)
  - The power that runs the whole shebang

And there may be more, depending upon your environment.  “But wait!” you say, “Those things aren’t my problem!  I’m just a DBA!”  True, you may be “just a DBA”, but have you documented your dependencies on items that are outside of your control?  Have you obtained SLAs from your network admins, your security admins, your SAN admins…to support the SLA that you delivered to your application owners?  Trust me – the corporate chopping block will be much more sympathetic to your plight if you have already documented your dependencies on external factors, before the proverbial excrement hits the fan.  Any amount of finger pointing after the fact comes across as just that – finger pointing.  And good managers don’t brook lousy excuses.

 

Now, let’s not misconstrue the message.  I’m not saying for a second that you shouldn’t try to implement highly available infrastructures in your environment.  Nor am I saying that you’re an idiot if you have rolled out a half-assed infrastructure.  The message I’m trying to impart is that you need to examine your “highly available” solution from every possible angle before a disaster makes you wish that you had.

See you next time.

Tuesday, September 8, 2009

Surveys, Get Yer Surveys Heah!

I’m sorry – I just didn’t know any other way to make surveys sound exciting.

Peter Saddow asked the SQL community to put a plug out for a few SQL Express surveys that Microsoft has developed.  The results of these surveys will directly influence the development of the SQL Express installation experience, so if you work with SQL Express, please take the time to fill them out.  Or, as Peter more eloquently said:

Microsoft is committed to making the SQL Server Express installation experience the best in industry. For us to achieve this goal, we need your candid feedback on your experience with installing SQL Server Express.  If you have used several different versions of SQL Server Express, focus on the latest version that you have used.  When SQL Server 2008 Customer Technical Preview 3 is released, feel free to complete this survey again since several installation improvements have been made.  This information will be used to improve our future releases including SQL Server 2008 R2 and SQL Server 11.  This survey will be available until the end of November.

Please visit the following blogs that contain instructions:

General survey around installing SQL Server Express

http://blogs.msdn.com/petersad/archive/2009/09/08/general-sql-server-express-survey.aspx

Survey that focuses on embedding SQL Server Express

http://blogs.msdn.com/petersad/archive/2009/09/08/embedding-sql-server-express.aspx

Seriously.  Fill them out.  And blog about it, or “retweed”, or whatever it is you kids do nowadays.

Tuesday, August 25, 2009

Treating The Results of a Stored Procedure as a Table Expression

I stumbled upon a StackOverflow thread this week, where the poster was asking how they could fetch the results of a stored procedure into a temporary table without knowing the resultset’s composition (column layout) in advance. If your first question is “why?”, good for you. If you’re trying to capture the resultset returned by a stored procedure without knowing its composition, how in the world can you possibly hope to use said result set?

Whatever the case, the OP clearly wanted to do so. They had gotten as far as concluding that the SELECT…INTO…FROM syntax (which creates the object specified after the INTO keyword) would do the heavy lifting for them, if only they could treat the stored procedure as a table expression (if you have no idea what I’m talking about, check out the examples below). The normal use of the SELECT INTO syntax looks something like this:

SELECT *  --Don't use *.  You didn't see this here.
INTO #MyTempTable --Drop the hash to create a permanent table.
FROM MyRealTable

You can’t substitute a sproc name for your table name, and expect it to work. What you can do is use OPENROWSET or OPENQUERY. Have a look here:

CREATE PROC MyProc
AS
BEGIN
    SELECT * FROM sys.databases --Or whatever.
END

GO


SELECT *
INTO #MyTempTable
FROM OPENROWSET('SQLNCLI',
'Server=(local)\SQL2008;Trusted_Connection=yes;',
'EXEC MyProc')

SELECT * FROM #MyTempTable

Neat, eh? Keep in mind that you need to have “Ad Hoc Distributed Queries” enabled:

sp_configure 'Show Advanced Options', 1
GO
RECONFIGURE
GO

sp_configure 'Ad Hoc Distributed Queries', 1
GO

RECONFIGURE
GO
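
For completeness, the OPENQUERY route I mentioned above looks something like this.  Treat it as a sketch rather than gospel – OPENQUERY needs a linked server instead of the “Ad Hoc Distributed Queries” setting, and the linked server name (LOOPBACK) and data source below are placeholders for illustration:

-- A "loopback" linked server pointing back at this instance (names are placeholders).
EXEC sp_addlinkedserver
    @server = 'LOOPBACK',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = '(local)\SQL2008'
GO

SELECT *
INTO #MyOtherTempTable
FROM OPENQUERY(LOOPBACK, 'EXEC MyProc')

SELECT * FROM #MyOtherTempTable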

Here’s the original thread – there are a number of very good contributions on it.

Thanks to SQL MVP Rob Farley for putting a name to what the OP was asking for – the title of this article was a direct “yoink” from his comment on my post. I would have called it “How to bring back the results of a stored procedure into a temporary table even though you don’t know what the results look like at design time”. I ran short of breath just writing that one.

See you next time.

Sunday, August 23, 2009

So You Want To Learn Business Intelligence, Eh? Part 4 – Extending SSAS Cubes

Time for more multi-dimensional fun. If you read the previous article in this series, we finally started digging into the cubey goodness that is SSAS. Today we’re going to look at Chapter 6 in our trusty guide (MCTS Self-Paced Training Kit (Exam 70-448): Business Intelligence Development and Maintenance). In this chapter, we start pimping our SSAS cubes: implementing hierarchies; creating KPIs, Actions, Translations, and Perspectives; and creating calculations and queries in (gasp!) MDX.

I introduced hierarchies in the last blog post, but for those who were asleep (or just didn’t catch that post), a hierarchy is….well…a hierarchy. An example is probably better than my weak, recursive definition.

A Time Hierarchy
Year
Quarter
Month
Week
Day
Hour
Minute
Second

Establishing a hierarchy serves two purposes: a usability benefit for the end user (they can browse data at any level in the hierarchy, without having to search for disparate time dimensions), and a performance benefit for SSAS. How’s that? Well, much of the reason why SSAS is able to present aggregated data to the end user so quickly is that it pre-aggregates said data. Usually, the most frequently queried aggregations are stored, and the rest are calculated upon request. By implementing a hierarchy, SSAS can use a lower-level aggregation to “roll up” to a higher-level aggregation. For instance, if we already have an aggregation at the Day level, a Month aggregation can be calculated very efficiently. As usual, the book did an excellent job of presenting the concept of hierarchies, and of explaining how to implement them.
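
If a relational analogy helps, here’s a tiny T-SQL sketch of the same roll-up idea (the table and column names are made up for illustration): once a day-level aggregate exists, the month-level number is just a cheap second pass over it, rather than another trip through the raw fact data.

-- Hypothetical pre-aggregated day-level table (names invented for illustration).
CREATE TABLE #DailySales
(
    SalesDate   date  NOT NULL,
    SalesAmount money NOT NULL
)

-- Rolling the existing Day aggregation up to Month - cheap compared to
-- re-aggregating the underlying fact table from scratch.
SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, SalesDate), 0) AS SalesMonth,
       SUM(SalesAmount) AS MonthlySales
FROM #DailySales
GROUP BY DATEADD(MONTH, DATEDIFF(MONTH, 0, SalesDate), 0)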

KPIs are one of the biggest buzzwords associated with modern Business Intelligence. For the initiate, a KPI is a quick indication of the degree of success with which a business is meeting its goals. What those goals are depends upon the organization and the KPI consumer, but some easy examples are Customer Profitability, Inventory Turnover, Expenses as they relate to Revenue, etc. In defining a KPI, you provide SSAS with instructions on how to calculate the current value of your metric, your target value (goal), your KPI’s status (value as it relates to the goal – are we well under the target? Approaching the target? Have we exceeded the target?), and the trend (is the value trending up or down). There are, of course, more properties available, but the four I just mentioned are the key ones. A KPI is what powers those lovely gauges and stoplights we see so frequently on BI dashboards.
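
If it helps to see that in familiar terms, here’s a tiny T-SQL sketch of the value/goal/status logic (the figures and thresholds are entirely made up) – an SSAS KPI expresses the same idea in MDX, with the status expression typically returning -1, 0, or 1:

-- Made-up figures, purely for illustration.
DECLARE @CurrentValue money,
        @Goal         money

SELECT @CurrentValue = 950000,
       @Goal         = 1000000

SELECT @CurrentValue AS KpiValue,
       @Goal         AS KpiGoal,
       CASE
           WHEN @CurrentValue >= @Goal       THEN  1  -- on or above target
           WHEN @CurrentValue >= @Goal * 0.9 THEN  0  -- approaching target
           ELSE                                   -1  -- well under target
       END AS KpiStatus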

Actions simply allow you to perform additional activities within your cubes, provided that your client software supports said activities. Good examples of cube activities include linking out to a website, drilling into detail data, and linking out to an SSRS report. Chapter 6 walks you through the process of creating a drill-through action in detail.

Translations are…just that. They translate the data – for instance, presenting data in languages other than the cube data’s primary language to international users. Of course, SSAS doesn’t actually perform the translation – your data warehouse needs to contain translated dimension attributes, but a translation allows the end user to select one of your defined languages and see all of their cube data in the translated language.

You can think of a perspective as a subset of a cube. The book’s authors are quick to note that perspectives are not security mechanisms – they simply exist to allow you to present a relevant subset of a (potentially huge) cube to a business user.

Finally, in lesson 3, we took a run at the dreaded MDX. I won’t even try to explain MDX here, save to say that it is indeed deceptively similar to TSQL. My one beef about the book to date is that the coverage of MDX was sparse and rushed. The authors jumped right into multidimensional syntax without really walking us through the basics of the language. As a result, the reader is left to wonder why a member is expressed as [Product].[Product].[Mountain-100] (the AdventureWorks Mountain Bike 100). Why [Product].[Product]? Why does the member name have two parts? Is this always the case? In their defense, you could probably work with cubes for years without needing to write very much MDX at all: the designers built into BIDS generally “write” all of the MDX for you. And though the exam doesn’t place much weight on MDX, I still believe that you need to understand it in order to really understand SSAS. Unfortunately, you’ll need to spend a bit of time on MSDN before you can really wrap your head around MDX.

And that’s it for this week. See you next time, when we’ll talk about SSAS deployment and administration. Stop yawning. After that, it’s data mining time, baby.