AZ-400

I found that a lot of questions seen in the exam are not covered elsewhere, so here is a place to post the new questions we see and have a discussion. Your comments are always welcome.

2, You use Azure Pipelines to manage build pipelines, GitHub to store source code, and Dependabot to manage the dependencies. You have an app named App1. Dependabot detects a dependency in App1 that requires an update. What should you do first to apply the update?

  • A, Create a branch
  • B, Approve the pull request
  • C, Perform a commit
  • D, Create a pull request

My first thought is D. But my second thought is: if Dependabot automatically creates a pull request, then the answer would be B.

Dependabot is a GitHub app that is automatically installed on every repository where automated security updates are enabled; it creates pull requests to keep your dependencies secure and up to date.

Dependabot used to be a standalone paid service until it was acquired by GitHub and integrated directly into GitHub, making it free of charge.

Your thoughts on this?

More new questions will be added here soon.


Heap size estimation

In terms of table size estimation, there are three cases: clustered index, non-clustered index, and heap. A heap is basically a table without a clustered index. This write-up only covers size estimation for a plain table (a heap).

You can use the following steps to estimate the amount of space that is required to store data in a heap:

  1. Specify the number of rows that will be present in the table:

    Num_Rows = number of rows in the table

  2. Specify the number of fixed-length and variable-length columns and calculate the space that is required for their storage:

    Calculate the space that each of these groups of columns occupies within the data row. The size of a column depends on the data type and length specification.

    Num_Cols = total number of columns (fixed-length and variable-length)

    Fixed_Data_Size = total byte size of all fixed-length columns

    Num_Variable_Cols = number of variable-length columns

    Max_Var_Size = maximum total byte size of all variable-length columns

  3. Part of the row, known as the null bitmap, is reserved to manage column nullability. Calculate its size:

    Null_Bitmap = 2 + ((Num_Cols + 7) / 8)

    Only the integer part of this expression should be used. Discard any remainder.

  4. Calculate the variable-length data size:

    If there are variable-length columns in the table, determine how much space is used to store the columns within the row:

    Variable_Data_Size = 2 + (Num_Variable_Cols x 2) + Max_Var_Size

The bytes added to Max_Var_Size are for tracking each variable-length column. This formula assumes that all variable-length columns are 100 percent full. If you anticipate that a smaller percentage of the variable-length column storage space will be used, you can adjust the Max_Var_Size value by that percentage to yield a more accurate estimate of the overall table size.

    If there are no variable-length columns, set Variable_Data_Size to 0.

  5. Calculate the total row size:

    Row_Size = Fixed_Data_Size + Variable_Data_Size + Null_Bitmap + 4

    The value 4 in the formula is the row header overhead of the data row.

  6. Calculate the number of rows per page (8096 free bytes per page):

    Rows_Per_Page = 8096 / (Row_Size + 2)

    Because rows do not span pages, the number of rows per page should be rounded down to the nearest whole row. The value 2 in the formula is for the row’s entry in the slot array of the page.

  7. Calculate the number of pages required to store all the rows:

    Num_Pages = Num_Rows / Rows_Per_Page

    The number of pages estimated should be rounded up to the nearest whole page.

  8. Calculate the amount of space that is required to store the data in the heap (8192 total bytes per page):

    Heap size (bytes) = 8192 x Num_Pages
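The eight steps above can be sketched as a small Python function. This is a rough sketch: the variable names mirror the formulas above, and the example inputs at the bottom are made up for illustration.

```python
import math

def estimate_heap_size(num_rows, num_cols, fixed_data_size,
                       num_variable_cols, max_var_size):
    """Estimate heap storage (bytes) using the steps above."""
    # Step 3: null bitmap; keep only the integer part of the division.
    null_bitmap = 2 + (num_cols + 7) // 8

    # Step 4: variable-length data size (0 if there are no variable columns).
    if num_variable_cols > 0:
        variable_data_size = 2 + num_variable_cols * 2 + max_var_size
    else:
        variable_data_size = 0

    # Step 5: total row size; the extra 4 bytes are the row header.
    row_size = fixed_data_size + variable_data_size + null_bitmap + 4

    # Step 6: rows per page (8096 free bytes per page, 2 bytes per
    # slot-array entry); rows do not span pages, so round down.
    rows_per_page = 8096 // (row_size + 2)

    # Step 7: pages needed, rounded up to a whole page.
    num_pages = math.ceil(num_rows / rows_per_page)

    # Step 8: each page is 8192 bytes in total.
    return 8192 * num_pages

# Made-up example: 1,000,000 rows, 5 columns, 30 bytes of fixed-length
# data, 2 variable-length columns holding at most 50 bytes in total.
print(estimate_heap_size(1_000_000, 5, 30, 2, 50))  # → 96378880
```

Walking through the example by hand: Null_Bitmap = 2 + 12/8 = 3, Variable_Data_Size = 2 + 4 + 50 = 56, Row_Size = 30 + 56 + 3 + 4 = 93, Rows_Per_Page = 8096 / 95 = 85, Num_Pages = 11,765, so the heap needs about 92 MB.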

This calculation does not consider the following:

  • Partitioning

    The space overhead from partitioning is minimal, but complex to calculate. It is not important to include.

  • Allocation pages

    There is at least one IAM page used to track the pages allocated to a heap, but the space overhead is minimal and there is no algorithm to deterministically calculate exactly how many IAM pages will be used.

  • Large object (LOB) values

The algorithm to determine exactly how much space will be used to store the LOB data types varchar(max), varbinary(max), nvarchar(max), text, ntext, xml, and image is complex. It is sufficient to just add the average size of the expected LOB values to the total heap size.

  • Compression

    You cannot pre-calculate the size of a compressed heap.

  • Sparse columns

What does a DBA do right after SQL Server installation?

What does an experienced DBA usually do after installing SQL Server?

Here is the list, to the best of my knowledge:

Step 1: Install the Service Pack, Hotfixes and Cumulative Updates
https://support.microsoft.com/en-us/kb/321185#bookmark-completeversion

SQL Server Versions and Build Numbers

Step 2: Configure SQL Services
SSCM (SQL Server Configuration Manager)
Are the services started?
Which account starts each service?
Restart the SQL Server and SQL Agent services

Step 3: Configure Default Directories
This should be done during installation

Step 4: Configure Default Database Properties
Step 5: Configure tempdb Database
Step 6: Instance Level config
Configuring Parameter Values
Configuring SQL Server Network
Port
Step 7: Configure Security
Step 8: Configure Error Logs
Step 9: DB mail
Step 10: SQL Agent
Job History Size
Operator

Document Map’s secret

The document map is one of my favorite features in SSRS, especially the nested document map.

In SSRS, a document map makes report navigation much easier. After you design the document map in a report, an “index” section appears in the left frame when you view the report, as shown in the example below.

DM1

If you only set up the document map as a bookmark, that is not what I want to talk about today, as it is really straightforward. My focus today is on the “nested” document map. First of all, what do I mean by “nested”? Examining the example above: Category (“Bike”) as the first level holds Subcategory (“Road Bikes”) inside its body; further, Subcategory as a group embraces the Model (“Road-150”); and under the umbrella of Model, there is a list of products. This is what I mean by “nested”. When it comes to implementation, how do we achieve this? I hope my draft picture below illustrates the idea. As the picture shows, the implementation in SSRS is to use a list/table that contains another list/table inside it, and to set up the document map on each group level in each container (list/table).

DM

I hope this hits the right point and you take something useful away.

Report Manager–“Run as Administrator”

The magic of “Run as Administrator” shines in many places, and Report Manager (RM, the Reporting Services web portal) is no exception. If you open RM in IE/Chrome normally, you can only see this:

RM1

On the other hand, if you run IE/Chrome or another browser in “admin” mode (right-click –> Run as administrator), it looks like the picture below. There are two outstanding features only available in “admin” mode: “Site Settings” and “User Folders”.

RM

You can set up Reporting Services instance-level properties in the “Site Settings” feature, and, when the “My Reports” feature has been enabled, the “User Folders” option lets an administrator explore all users’ pre-stored reports and linked reports.

Keep in mind that these magics only show up in “Run as Administrator” mode, not in normal mode, even though you truly are an administrator.

My friends, give it a shot yourself.

Who disabled my Reporting Service Instance Property in Management Studio?

Did you ever try to use SQL Server Management Studio (SSMS) to connect to your Reporting Services engine? I believe you connect to the SQL Server Database Engine or SSAS much more often; fewer people try this way of reaching SSRS.

This thread walks through this option of connecting to Reporting Services using SSMS.

To start, launch SSMS from your Start menu. In the SSMS connect dialog, choose “Reporting Services” as the server type, fill in the RS server name, choose Windows Authentication for this case, then hit the “Connect” button. Yes, it takes a little longer to connect the first time you do so. Now connected!

The first question you may have is what we can manage for SSRS in SSMS. In SSMS, we can manage all instance-level options, including jobs, security roles, shared schedules, and, the most important one, at least to me, the property setup for SSRS. Now, right-click the SSRS instance and, in the popup menu, check out the last item: Properties. Surprised? Yes, it surprised me a couple of times when I first started to use this tool. The Properties item is, by default, disabled, as shown in the picture below.

SSMS-SSRS1

How to turn it on? Easy and tricky! Shut down SSMS first. Go to the Start menu, find the SSMS shortcut, right-click it, and choose “Run as administrator” in the popup menu. Then connect to SSRS again. What do you see? It works this time, right?!

SSMS-SSRS2

Now you may ask: if the login I am using to connect to SSRS is a member of the Administrators group, do I still need to run as administrator? The answer is firmly YES! Further, let’s check what we can set up in the Properties window. A lot: portal name, enabling “My Reports”, execution options, history, logging, and security. For some options, this is the only way you can set SSRS instance-level properties.


Alright, I guess you got it. Have fun with SSRS property setup. See you next time.

12/26/2014

SQL Server Versions and Editions

Versions

SQL Server 6.5 or 7.0 is probably the first version we started to use as 12-year DB/BI engineers. Before that, SQL Server 4.2 was the first version Microsoft developed independently. Before 4.2, Microsoft, collaborating with Sybase, had started developing this database product, but I don’t know what they called that version. As you can see, Sybase was a big player in SQL Server development; today’s DB products from Sybase, like Sybase IQ, still look similar to SQL Server to some degree.

Following the earlier versions (4.2, 6.0, 6.5, and 7.0) came SQL Server 2000, which was named after the year. Then, as you probably know, SQL Server 2005, 2008, 2008 R2, 2012, and now the latest one, SQL Server 2014. If you examine a SQL Server instance in Management Studio carefully, it is not hard to see that the internal version number follows the pattern XX.YY.ZZZZ, even though the commercial name comes from the year.

What does XX.YY.ZZZZ tell us? XX is the major version number; for example, SQL Server 2000’s major number is 8, SQL Server 2005’s is 9, and so on. YY is the minor version number; taking SQL Server 2008 R2 as an example, its version looks like 10.50.ZZZZ, where 50 is the minor version number. The last piece, ZZZZ, is the build number. You probably know that in development tools this number increments every time you build the solution. With comprehensive automated testing, if build 1600 is the most stable one, then the release uses that build number; this is why “10.0.1600” is the first release version number of SQL Server 2008.
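As an illustration, the internal version string (what you get from SERVERPROPERTY('ProductVersion')) can be split into these parts. This is a hypothetical Python sketch; the name mapping only covers the releases discussed here.

```python
# Split an internal SQL Server version string into its XX.YY.ZZZZ parts.
# The major-version mapping below only covers the releases discussed here.
MAJOR_TO_NAME = {
    8: "SQL Server 2000",
    9: "SQL Server 2005",
    10: "SQL Server 2008",   # 2008 R2 is distinguished by minor == 50
    11: "SQL Server 2012",
    12: "SQL Server 2014",
}

def parse_product_version(version: str):
    # XX = major, YY = minor, ZZZZ = build; ignore any revision suffix.
    major, minor, build = (int(p) for p in version.split(".")[:3])
    name = MAJOR_TO_NAME.get(major, "unknown")
    if major == 10 and minor == 50:
        name = "SQL Server 2008 R2"
    return name, major, minor, build

print(parse_product_version("10.50.1600.1"))
# → ('SQL Server 2008 R2', 10, 50, 1600)
```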

A Quick summary for SQL Server public versions and internal versions

Version (codename)                 RTM (no SP)    SP1            SP2            SP3            SP4
SQL Server 2014 (Hekaton)          12.0.2000.8
SQL Server 2012 (Denali)           11.0.2100.60   11.0.3000.0    11.0.5058.0
SQL Server 2008 R2 (Kilimanjaro)   10.50.1600.1   10.50.2500.0   10.50.4000.0   10.50.6000.34
SQL Server 2008 (Katmai)           10.0.1600.22   10.0.2531.0    10.0.4000.0    10.0.5500.0    10.0.6000.29
SQL Server 2005 (Yukon)            9.0.1399.06    9.0.2047       9.0.3042       9.0.4035       9.0.5000
SQL Server 2000 (Shiloh)           8.0.194        8.0.384        8.0.532        8.0.760        8.0.2039
SQL Server 7.0 (Sphinx)            7.0.623        7.0.699        7.0.842        7.0.961        7.0.1063

Version-related abbreviations

When you download SQL Server installations, you may see some shortened words. Below is a list of abbreviations that confused me before.

CTP Community Technology Preview (beta release)
RC Release Candidate
RTM Released To Manufacturing; It is the original, released build version of the product, i.e. what you get on the DVD or when you download the ISO file from MSDN.
CU Cumulative Update; Cumulative updates contain the bug fixes and enhancements–up to that point in time–that have been added since the previous Service Pack release and will be contained in the next service pack release. Installation of the Cumulative Update is similar to the installation of a Service Pack. Cumulative Updates are not fully regression tested.
SP Service Pack; much larger collection of hotfixes that have been fully regression tested. In some cases delivers product enhancements.
GDR General Distribution Release; GDR fixes should not contain any of the CU updates.
QFE Quick Fix Engineering; QFE updates include CU fixes.

All SQL Server service packs are cumulative, meaning that each new service pack contains all the fixes included with previous service packs, plus any new fixes.

Editions

In addition to different versions, Microsoft also packages the same version into different editions to target various audiences. Express is a free edition, with a limit on database size (10 GB per database since SQL Server 2008 R2; earlier releases allowed less). Above Express there is Standard edition, which is mostly designed for small or mid-size companies. There is also Developer edition, which has the same feature set as Enterprise but is licensed for development use only. Traditionally, the most powerful edition is Enterprise, which includes all features. Around SQL Server 2008 R2 and 2012, more editions were introduced, such as Datacenter, Business Intelligence, and Enterprise Core.

That’s about it in terms of versions and editions. For more details on all components, check out the link below: http://support.microsoft.com/kb/321185/en-us

Enjoy.

Why does BIDS only work with the Oracle 32-bit client driver?

Both BIDS (Business Intelligence Development Studio) and Data Tools, the BI development tools, cannot work with the 64-bit Oracle driver at design time. Why? As you may notice, even if what you installed is 64-bit SQL Server, these two BI tools are still 32-bit applications. For this reason, when you develop an SSIS package in BIDS/DT, the package cannot load the 64-bit Oracle driver.

The solution is to install the 32-bit Oracle driver when you develop SSIS packages. With the 32-bit driver, you can debug and run packages in BIDS/DT. Does that mean that when moving the package to other environments like QA/PD you still need the 32-bit Oracle driver? No. When you use a SQL Agent job to run the finished SSIS packages, it is highly recommended to use the 64-bit Oracle driver, because the SQL Agent service is a 64-bit application and works well with the 64-bit driver. Can a SQL Agent job work with the 32-bit Oracle driver? Yes, but you need to specifically set your jobs to run in 32-bit mode. What is the downside of running in 32-bit mode? The process is restricted in how much memory it can use on the server.

So, the conclusion is: in the development phase, you have to install the 32-bit Oracle driver. When the package is pushed to QA/PD, it is recommended to have the 64-bit driver in place, as your SQL Agent job works well with it.

Hope it helps!