MS SQL Server 70-457—Transition Your MCTS on SQL Server 2008 to MCSA SQL 2012, Part 1

I just finished exam 70-457 (SQL Server 2012 Transition, Part 1). There are two sections in this exam: one covers SQL Server 2012 development (70-461) and the other covers SQL Server 2012 administration (70-462). The sections are timed separately; each has 25 questions and allows 90 minutes.

Compared with the SQL Server 2008 exams, I did not see many changes. Question types are multiple choice and matching, which presents blocks on the left and right sides that you drag and drop to form the correct pairs.

It is not hard, either. I missed 2 questions in the first section (scored 900) and 3 questions in the second (scored 866). Honestly, I have doubts about the quality of the questions; I firmly believe I should have missed at most one per section, so my scores were lower than I expected.



SQL Server 2012 Certification Paths

As you may have heard, there is a new roadmap for SQL Server professionals to prove their knowledge of the latest version, SQL Server 2012.

I have summarized the key information below.

If you are a new exam-taker, you can follow Table 1 below.

Level | Certificate Title | Exam Title | Training | Exam
Associate | Microsoft Certified Solutions Associate (MCSA) | Querying Microsoft SQL Server 2012 | 10774 | 70-461
Associate | MCSA | Administering Microsoft SQL Server 2012 Databases | 10775 | 70-462
Associate | MCSA | Implementing a Data Warehouse with Microsoft SQL Server 2012 | 10777 | 70-463
Expert (based on MCSA) | Microsoft Certified Solutions Expert (MCSE): Data Platform | Developing Microsoft SQL Server 2012 Databases | 10776 | 70-464
Expert (based on MCSA) | MCSE: Data Platform | Designing Database Solutions for SQL Server 2012 | Free eBook | 70-465
Expert (based on MCSA) | Microsoft Certified Solutions Expert (MCSE): Business Intelligence | Implementing Data Models and Reports with Microsoft SQL Server 2012 | 10778 | 70-466
Expert (based on MCSA) | MCSE: Business Intelligence | Designing Business Intelligence Solutions with Microsoft SQL Server 2012 | Free eBook | 70-467
Pinnacle (based on MCSE) | Microsoft Certified Solutions Master (MCSM) | MCSM: Data Platform Knowledge Exam | (not yet released) | (not yet released)
Pinnacle (based on MCSE) | MCSM | MCSM: Data Platform Lab Exam | (not yet released) | (not yet released)

(Table 1: New Exam-Taker Roadmap, MCSA → MCSE → MCSM)

Or, if you already hold SQL Server 2008 certifications, you have one more option: an upgrade path. For this path, refer to Table 2 below.

Level | Certificate Title | Exam Title | Exam
Associate | Microsoft Certified Solutions Associate (MCSA) | Transition Your MCTS on SQL Server 2008 to MCSA: SQL Server 2012, Part 1 | 70-457
Associate | MCSA | Transition Your MCTS on SQL Server 2008 to MCSA: SQL Server 2012, Part 2 | 70-458
Expert (based on MCSA) | Microsoft Certified Solutions Expert (MCSE): Data Platform | Transition Your MCITP: Database Administrator 2008 or MCITP: Database Developer 2008 to MCSE: Data Platform | 70-459
Expert (based on MCSA) | MCSE: Business Intelligence | Transition Your MCITP: Business Intelligence Developer 2008 to MCSE: Business Intelligence | 70-460

(Table 2: SQL Server 2012 Upgrade Path)

Resource Governor — Segregate Monster Queries

Overview

Honestly, Resource Governor is one of my favorite features in SQL Server. It was introduced in SQL Server 2008 and enhanced in SQL Server 2012, and by now it is quite stable and practical. The bitter reality, however, is that not many DBAs embrace this amazing feature. As a consultant, I have had many chances to talk with DBAs at quite a few corporations. It seems to me they either assume performance issues should be handled in the design phase (the developers' accountability) rather than in administration, or they have merely heard of the feature and have no idea how to implement it. Recalling a recent tragedy at one of my clients, I want to promote this feature once again here.

This technical article introduces the Resource Governor feature by answering three questions: what, why, and how.

What is Resource Governor

In short, Resource Governor is a feature designed to allocate certain resources, mainly CPU and memory (note that as of SQL Server 2012 it cannot limit I/O), to specific requests, either to prevent the "noisy neighbor" problem or to prioritize critical requests.

Figure 1 shows the architecture of Resource Governor. As it illustrates, once Resource Governor is enabled (it is disabled by default), every incoming request is routed by a classifier function (a user-defined function) into a specific workload group, which is bound to a resource pool with defined resource caps. With that hard cap in place, the requests in that group can consume no more than the allotted resources.

Figure 1: Resource Governor Architecture
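Once the governor is running, a quick way to verify that the classifier is routing sessions as intended is to join the sessions DMV to the workload-group and resource-pool DMVs. The following is only a sketch, using the standard Resource Governor DMVs that ship with SQL Server 2008 and later:

```sql
-- Which workload group and resource pool is each user session running in?
-- sys.dm_exec_sessions exposes the group_id assigned by the classifier.
SELECT s.session_id,
       s.login_name,
       g.name AS workload_group,
       p.name AS resource_pool
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS g
    ON s.group_id = g.group_id
JOIN sys.dm_resource_governor_resource_pools AS p
    ON g.pool_id = p.pool_id
WHERE s.is_user_process = 1;
```

If the classifier is working, sessions for a restricted login should show up under the restricted group rather than "default".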

Why We Need Resource Governor

There are a couple of situations where we need Resource Governor. First of all, we need it to prevent noisy neighbors. Servers are increasingly powerful: 8, 16, or even 32 cores. If only one application runs on such a server, resources are wasted. To avoid this waste, we usually share a powerful database server among several applications, hosting a few databases in one physical box, either in one instance or in multiple instances. Sharing is a good idea, but how do we prevent the applications from interfering with each other? In the performance-tuning world we have a golden rule, the 20/80 principle: 20% of the poor queries grab 80% of the resources. In worse cases, one poor query drags the whole server down and grabs all the resources, so every other application freezes and its end users become victims. That is why we need an efficient way to segregate the monster queries; we call this the "noisy neighbor" issue.

Besides, applications have different priorities. Some are critical and sensitive to waiting time; others are not. A backup process, for example, is typically insensitive to waiting time: in most cases it does not matter whether it finishes in 20 minutes or in 50. On the contrary, some SLAs (Service Level Agreements) guarantee that critical applications must not be negatively affected by the backend backup process. With all that said, you can see that Resource Governor is a fantastic feature. Before SQL Server 2008, we had no way to work this out.

How to Use Resource Governor

This section shows an example of how to use Resource Governor. Suppose we have two users, "VIP" and "backupAccount". The VIP user can use as many resources as he needs; for "backupAccount", however, we allocate at most 30% of CPU.

Step 1: Create two logins “VIP” and “backupAccount”

CREATE LOGIN [VIP] WITH PASSWORD = N'abcd', DEFAULT_DATABASE = [master], CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF;
GO
CREATE LOGIN [backupAccount] WITH PASSWORD = N'abcd', DEFAULT_DATABASE = [master], CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF;
GO

Step 2: Create a resource pool with the limitation of 30% CPU consumption at most

CREATE RESOURCE POOL pMAX_CPU_PERCENT_30
WITH (MAX_CPU_PERCENT = 30);
GO

Step 3: Create a workload group based on the pool "pMAX_CPU_PERCENT_30"

CREATE WORKLOAD GROUP gMAX_CPU_PERCENT_30
USING pMAX_CPU_PERCENT_30;
GO

Step 4: Create a classifier function to route the user "backupAccount" into the resource-restricted workload group. Any other user with no designated workload group falls, by default, into the system-created "default" group, which has no resource limitation.

CREATE FUNCTION dbo.rgclassifier_MAX_CPU() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @workload_group_name AS sysname;
    IF (SUSER_NAME() = 'backupAccount')
        SET @workload_group_name = 'gMAX_CPU_PERCENT_30';
    RETURN @workload_group_name;
END;

GO

Step 5: Enable Resource Governor

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgclassifier_MAX_CPU);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;

GO
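If you ever need to back this configuration out, a minimal sketch looks like this: detach the classifier, disable the governor, and then drop the objects created above.

```sql
-- Detach the classifier function and disable Resource Governor.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL);
ALTER RESOURCE GOVERNOR DISABLE;
GO
-- The function, workload group, and pool can then be dropped if desired.
DROP FUNCTION dbo.rgclassifier_MAX_CPU;
DROP WORKLOAD GROUP gMAX_CPU_PERCENT_30;
DROP RESOURCE POOL pMAX_CPU_PERCENT_30;
GO
```

Note that the classifier must be detached (or the governor disabled) before the function and group can be dropped, since the governor holds references to them.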

Step 6: Run the same query in two sessions simultaneously, with different logins, to compare their performance

SET NOCOUNT ON
DECLARE @i INT
DECLARE @s VARCHAR(100)

SET @i = 100000000

WHILE @i > 0

BEGIN

SELECT @s = @@version;

SET @i = @i - 1;

END

Performance results are shown in Table 1.

Login Duration
VIP 4m8s
backupAccount 5m23s

(Table 1: Performance Result)

This test result demonstrates that Resource Governor enforces the resource limitation on the login "backupAccount", so that user performs worse than the other when running the same workload.
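Beyond comparing wall-clock durations, the throttling can also be confirmed from the Resource Governor DMVs. The following sketch reads the cumulative counters (which reset when the instance restarts) to report CPU time consumed per workload group:

```sql
-- Cumulative CPU consumed by each workload group since statistics were reset.
SELECT g.name AS workload_group,
       g.total_request_count,
       g.total_cpu_usage_ms
FROM sys.dm_resource_governor_workload_groups AS g
ORDER BY g.total_cpu_usage_ms DESC;
```

After the test above, the restricted group "gMAX_CPU_PERCENT_30" should show noticeably less CPU time than "default" for the same amount of work.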

Step 7: (Optional) Use the Windows "perfmon" utility to monitor the resource pool's performance

Setting: SQL Server: Resource Pool Stats → CPU usage % → your resource pool

(Figure 2: Setup the monitoring of resource pool)

(Figure 3: Resource Pool monitoring)

Summary

As observed and analyzed above, Resource Governor today plays an important role in the SQL Server feature family. It is a breakthrough that helps DBAs work defensively and escape the monster-query nightmare.

Data Reading from Data Warehouse in Cube Processing

Overview

As most of us know, an SSAS cube must be processed before you can create any analytic reports on it. However, we may overlook what cube processing actually does and how it reads data from the data warehouse database (DWH).

This write-up gives you a clear picture of what happens behind this mysterious cube processing and exactly which queries are submitted to the DWH to fetch data.

Cube Processing

Processing is a series of steps in which Analysis Services loads data from a relational data source into a multidimensional model, aggregates measures, saves the data into an XML-based file structure, and builds indexes for internal use so that incoming queries get quick responses. For objects that use MOLAP storage, data is saved on disk in the database file folder. For objects that use ROLAP storage, processing occurs on demand, in response to an MDX query on the object, and refers to updating the cache before returning query results.

By examining the data folder of the cube database, you can find a list of XML files and folders, as below. Each XML file, together with its corresponding folder, stores the information of the data source (*.ds.xml), the data source view (*.dsv.xml), the cube, and the dimensions.


(Figure 1: SSAS database file structure)

Before saving the structure and data information into the cube database files, the most important step for SSAS is to read the data from the data warehouse database.

Reading Data from Data Warehouse Database

To clearly present the queries that cube processing submits to the DWH, I designed a cube and captured the queries to illustrate exactly how processing works.

Figure 2 shows the data modeling of this cube.


(Figure 2: Cube Modeling)

As can be seen, this cube has the dimensions DimCurrency, DimCustomer, DimStore, DimDate, DimProduct, and DimPromotion, and one measure group, "FactOnlineSales".

When the cube is processed, every dimension is processed, that is to say, its data is read from the dimension tables in the data warehouse database. Take the currency dimension as an example. It is designed as shown in Figure 3, with CurrencyKey as the dimension key and three other attributes: "Description", "Label", and "Name". The key and each attribute are read from the DWH by corresponding queries. Table 1 lists the queries with which cube processing reads data for the currency dimension.


(Figure 3: Dimension Currency)


(Figure 4: Dimension Processing)

--Dim Currency
--Key reading
SELECT DISTINCT
    [dbo_DimCurrency].[CurrencyKey] AS [dbo_DimCurrencyCurrencyKey0_0],
    [dbo_DimCurrency].[CurrencyLabel] AS [dbo_DimCurrencyCurrencyLabel0_1],
    [dbo_DimCurrency].[CurrencyName] AS [dbo_DimCurrencyCurrencyName0_2],
    [dbo_DimCurrency].[CurrencyDescription] AS [dbo_DimCurrencyCurrencyDescription0_3]
FROM [dbo].[DimCurrency] AS [dbo_DimCurrency]

 

--Attribute: [CurrencyDescription]
SELECT DISTINCT [dbo_DimCurrency].[CurrencyDescription] AS [dbo_DimCurrencyCurrencyDescription0_0]
FROM [dbo].[DimCurrency] AS [dbo_DimCurrency]

--Attribute: [CurrencyLabel]
SELECT DISTINCT [dbo_DimCurrency].[CurrencyLabel] AS [dbo_DimCurrencyCurrencyLabel0_0]
FROM [dbo].[DimCurrency] AS [dbo_DimCurrency]

--Attribute: [CurrencyName]
SELECT DISTINCT [dbo_DimCurrency].[CurrencyName] AS [dbo_DimCurrencyCurrencyName0_0]
FROM [dbo].[DimCurrency] AS [dbo_DimCurrency]

 

(Table 1: Queries reading currency dimension data)

 

After processing all dimensions, cube processing also needs to handle the measure group, in this case "FactOnlineSales". (By default, every fact table forms one measure group.) While processing the measure group, the SSAS engine sends a query to the DWH to read data intensively from the corresponding fact table; that query is shown in Table 2. With it, the cube obtains every measure broken down by every possible key combination.


(Figure 5: Measure Group Processing)

 

--Processing measure group
SELECT [dbo_FactOnlineSales].[dbo_FactOnlineSalesSalesQuantity0_0] AS [dbo_FactOnlineSalesSalesQuantity0_0],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesSalesAmount0_1] AS [dbo_FactOnlineSalesSalesAmount0_1],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesReturnQuantity0_2] AS [dbo_FactOnlineSalesReturnQuantity0_2],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesReturnAmount0_3] AS [dbo_FactOnlineSalesReturnAmount0_3],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesDiscountQuantity0_4] AS [dbo_FactOnlineSalesDiscountQuantity0_4],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesDiscountAmount0_5] AS [dbo_FactOnlineSalesDiscountAmount0_5],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesTotalCost0_6] AS [dbo_FactOnlineSalesTotalCost0_6],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesUnitCost0_7] AS [dbo_FactOnlineSalesUnitCost0_7],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesUnitPrice0_8] AS [dbo_FactOnlineSalesUnitPrice0_8],
       [dbo_FactOnlineSales].[dbo_FactOnlineSales0_9] AS [dbo_FactOnlineSales0_9],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesOnlineSalesKey0_10] AS [dbo_FactOnlineSalesOnlineSalesKey0_10],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesDateKey0_11] AS [dbo_FactOnlineSalesDateKey0_11],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesProductKey0_12] AS [dbo_FactOnlineSalesProductKey0_12],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesCustomerKey0_13] AS [dbo_FactOnlineSalesCustomerKey0_13],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesStoreKey0_14] AS [dbo_FactOnlineSalesStoreKey0_14],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesPromotionKey0_15] AS [dbo_FactOnlineSalesPromotionKey0_15],
       [dbo_FactOnlineSales].[dbo_FactOnlineSalesCurrencyKey0_16] AS [dbo_FactOnlineSalesCurrencyKey0_16]
FROM
(
    SELECT [SalesQuantity] AS [dbo_FactOnlineSalesSalesQuantity0_0],
           [SalesAmount] AS [dbo_FactOnlineSalesSalesAmount0_1],
           [ReturnQuantity] AS [dbo_FactOnlineSalesReturnQuantity0_2],
           [ReturnAmount] AS [dbo_FactOnlineSalesReturnAmount0_3],
           [DiscountQuantity] AS [dbo_FactOnlineSalesDiscountQuantity0_4],
           [DiscountAmount] AS [dbo_FactOnlineSalesDiscountAmount0_5],
           [TotalCost] AS [dbo_FactOnlineSalesTotalCost0_6],
           [UnitCost] AS [dbo_FactOnlineSalesUnitCost0_7],
           [UnitPrice] AS [dbo_FactOnlineSalesUnitPrice0_8],
           1 AS [dbo_FactOnlineSales0_9],
           [OnlineSalesKey] AS [dbo_FactOnlineSalesOnlineSalesKey0_10],
           [DateKey] AS [dbo_FactOnlineSalesDateKey0_11],
           [ProductKey] AS [dbo_FactOnlineSalesProductKey0_12],
           [CustomerKey] AS [dbo_FactOnlineSalesCustomerKey0_13],
           [StoreKey] AS [dbo_FactOnlineSalesStoreKey0_14],
           [PromotionKey] AS [dbo_FactOnlineSalesPromotionKey0_15],
           [CurrencyKey] AS [dbo_FactOnlineSalesCurrencyKey0_16]
    FROM
    (
        SELECT OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
               SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
               ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice
        FROM FactOnlineSales
    ) AS [FactOnlineSales]
) AS [dbo_FactOnlineSales]

 

(Table 2: Processing Measure Group)

Conclusion

Although the queries that read data from the DWH can differ slightly according to the cube storage mode (MOLAP, HOLAP, or ROLAP) and the processing options, this article still depicts the essence of cube processing. I hope it brings you more ideas on cube-processing internals. See you next time.