MSBI (SSIS/SSRS/SSAS) Online Training

Wednesday, February 19, 2014

*** MSBI Online Training Program DEMO Updates (Start from 19-FEB-2014) ***

Hi All,

I would like to start a new batch from 19-FEB-14 onwards; please join the demo session on 19-FEB-14 at 10:00 PM IST.
Please send me a request on my Skype ID: nv.ravindranathreddy
and be ready online at 10:00 PM IST.



Training Details:
About myself: I have trained 200+ candidates, and 90% of them successfully got jobs on the MSBI platform. I have 10+ years of MSBI experience at architect level; I am currently in Switzerland and conduct the sessions online.
I will teach from basics to advanced level with real-time examples.

Training Updates:
- Session timing: 10:00 PM - 11:00 PM IST
- Duration: 40 days
- Fee: 15,000 INR
- I will share all code from the regular sessions, step-by-step documents, interview questions, and end-to-end project implementation guidance.
- If you are interested, send me a request on my Skype ID: nv.ravindranathreddy
- Please install TeamViewer. If you do not have the SQL Server 2008 R2 software, let me know and I can share a download link online so you can install it.


Training highlights:
- More than 2,000 interview questions with explanations
- 2 GB of MSBI material free (including important documents, videos, etc.)
- Trained more than 200 aspirants on MSBI.
- We provide the best training on MSBI and also consult for MSBI jobs in all major cities.
- You can expect to land an MSBI job even before the training finishes.
- We provide real-time projects with module-by-module explanations (free of cost).

Thanks!
Best Regards,
Ravindra
PRINCE2®, ITIL®, MCITP

Friday, January 10, 2014

MSBI (SSIS, SSRS, SSAS) Online Training with 100% Placement Guarantee and Real-Time Projects

Hi All,
I am happy to announce that we have trained 200+ students (only 5 aspirants per batch), with 90% successfully placed in jobs globally.
I wish all my students a successful career; good luck!



As committed, from day one onwards we work very closely with aspirants so that they understand MSBI concepts in detail, using real-time projects as well as real-time scenarios.
Our aim is not only to help you get a job, but also to enable you to stand on your own in BI projects with successful end-to-end implementations.

Please visit the page below for more details about the MSBI training. Don't miss this opportunity.

!! Right place for Right career!!


Best Regards,

Ravindra

MSBI & SQL SERVER – 2012 Certification Details

Hi MSBI Guys,

I am providing online training for the certifications below and will take care of registration at the Prometric center. Certification will help you a lot in your career.
Having a certification is very useful, both for getting a good package and for getting your resume shortlisted early.
Please let me know if you are interested.


All the best!
Regards,

Ravindra





SSIS and SSRS Latest Interview Questions & Answers

Q1 Explain the architecture of SSIS.
SSIS architecture consists of four key parts:
a) Integration Services service: monitors running Integration Services packages and manages the storage of packages.
b) Integration Services object model: includes the managed API used by Integration Services tools, command-line utilities, and custom applications to access Integration Services functionality.
c) Integration Services runtime and run-time executables: it saves the layout of packages, runs packages, and provides support for logging, breakpoints, configuration, connections, and transactions. The Integration Services run-time executables are the package, containers, tasks, and event handlers that Integration Services includes, and custom tasks.
d) Data flow engine: provides the in-memory buffers that move data from source to destination.


Q2 How would you do Logging in SSIS?
SSIS provides built-in logging that can record the details of events such as OnError and OnWarning to various log providers, for example a text (flat) file, a SQL Server table, an XML file, the Windows event log, or SQL Server Profiler.

Q3 How would you do Error Handling?
An SSIS package can mainly have two types of errors:
a) Procedure error: handled in the Control Flow through precedence constraints and by redirecting the execution flow.
b) Data error: handled in the Data Flow Task by redirecting failed rows using the Error Output of a component.

Q4 How to pass property value at Run time? How do you implement Package Configuration?
A property value, such as the connection string of a Connection Manager, can be passed to the package using package configurations. Package Configurations provide different options: XML file, environment variable, SQL Server table, registry value, or parent package variable.

Q5 How would you deploy a SSIS Package on production?
A) Through a deployment manifest
1. Create a deployment utility by setting the project's CreateDeploymentUtility property to true.
2. It will be created in the bin folder of the solution as soon as the package is built.
3. Copy all the files in the utility folder and use the manifest file to deploy it on production.
B) Using the DTExec.exe utility (see the example below)
C) Import the package directly into MSDB from SSMS by connecting to Integration Services.
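For option B, a hedged example of running a deployed package with DTExec from the command line (the package name, server, and configuration file path below are placeholders, not from the original post):

REM Package deployed to MSDB on the local server
dtexec /SQL "MyPackage" /SERVER "localhost"

REM Package deployed to the file system, with an external configuration file
dtexec /FILE "C:\SSIS\MyPackage.dtsx" /CONFIGFILE "C:\SSIS\MyPackage.dtsConfig"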

Q6 Difference between DTS and SSIS?
Everything is different, except that both are products of Microsoft :-).

Q7 What are new features in SSIS 2008?
Explained in another post:
http://sqlserversolutions.blogspot.com/2009/01/new-improvementfeatures-in-ssis-2008.html

Q8 How would you pass a variable value to Child Package?
Too big to fit here, so I wrote another post:
http://sqlserversolutions.blogspot.com/2009/02/passing-variable-to-child-package-from.html


Q9 What is Execution Tree?
Execution trees demonstrate how a package uses buffers and threads. At run time, the data flow engine breaks down Data Flow task operations into execution trees. These execution trees specify how buffers and threads are allocated in the package. Each tree creates a new buffer and may execute on a different thread. When a new buffer is created, such as when a partially blocking or blocking transformation is added to the pipeline, additional memory is required to handle the data transformation, and each new tree may also give you an additional worker thread.

Q10 What are the points to keep in mind for performance improvement of the package?
http://technet.microsoft.com/en-us/library/cc966529.aspx

Q11 You may get a scenario-based question, e.g. how would you configure a data flow task so that it transfers data to different tables based on the city name in a source table column? (See the sketch below.)
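One way to approach that particular scenario (a sketch, with hypothetical column and table names): add a Conditional Split after the source and create one output per city, each routed to its own OLE DB Destination, plus the default output for everything else. The split expressions would look like:

    CityNY output:     City == "New York"
    CityLondon output: City == "London"
    (default output goes to a catch-all "OtherCities" table)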


Q13 Difference between Union All and Merge Join?
a) Merge Join transformation can accept only two inputs, whereas Union All can take more than two inputs.

b) Data has to be sorted before the Merge Join transformation, whereas Union All doesn't have any such condition.

Q14 You may get questions about what a particular transformation does. Lookup, Fuzzy Lookup, and Fuzzy Grouping are my favorites, so be ready for those.

Q15 How would you restart a package from the previous failure point? What are checkpoints and how can we implement them in SSIS?
When a package is configured to use checkpoints, information about package execution is written to a checkpoint file. When the failed package is rerun, the checkpoint file is used to restart the package from the point of failure. If the package runs successfully, the checkpoint file is deleted, and then re-created the next time that the package is run.
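A minimal configuration sketch for enabling checkpoints (the file path is a placeholder):

    Package properties:
        CheckpointFileName   = C:\SSIS\MyPackage.chk
        SaveCheckpoints      = True
        CheckpointUsage      = IfExists
    On each task/container that should define a restart point:
        FailPackageOnFailure = True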

Q16 Where are SSIS packages stored in SQL Server?
MSDB.sysdtspackages90 stores the actual content, and sysdtscategories, sysdtslog90, sysdtspackagefolders90, sysdtspackagelog, sysdtssteplog, and sysdtstasklog play supporting roles.
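A quick way to list the stored packages (sysdtspackages90 is the SQL Server 2005 table; on SQL Server 2008 and later the equivalent is msdb.dbo.sysssispackages):

SELECT name, description, createdate
FROM msdb.dbo.sysdtspackages90
ORDER BY createdate DESC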

Q17 How would you schedule SSIS packages?
Using SQL Server Agent; read about scheduling a job with SQL Server Agent. A T-SQL sketch is below.
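As a sketch, an Agent job that runs an SSIS package can also be created with the msdb stored procedures (the job name, package path, and server below are placeholders):

EXEC msdb.dbo.sp_add_job       @job_name  = N'Load DimUser';
EXEC msdb.dbo.sp_add_jobstep   @job_name  = N'Load DimUser',
                               @step_name = N'Run package',
                               @subsystem = N'SSIS',
                               @command   = N'/SQL "MyPackage" /SERVER "localhost"';
EXEC msdb.dbo.sp_add_jobserver @job_name  = N'Load DimUser';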

Q18 Difference between asynchronous and synchronous transformations? 
A synchronous transformation processes rows in the same buffer it receives them in (output rows correspond one-to-one with input rows). An asynchronous transformation has separate input and output buffers, and it is up to the component designer of an async component to provide a column structure for the output buffer and hook up the data from the input.

Q19 How to achieve parallelism in SSIS?
Parallelism is controlled by the MaxConcurrentExecutables property of the package. Its default is -1, which is interpreted as the number of processors + 2.

-More questions added-Sept 2011
Q20 How do you do incremental load?
The fastest way to do an incremental load is to use a timestamp (or last-modified date) column in the source table and store the last ETL timestamp. In the ETL process, pick only the rows with a timestamp greater than the stored timestamp, so that only new and updated records are processed. A sketch is below.
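A minimal T-SQL sketch of the idea, assuming a watermark table dbo.ETL_Watermark and a ModifiedDate column on the source (all object names are placeholders; in SSIS the watermark would typically be read into a package variable and used in the source query):

-- 1. Read the last stored watermark
DECLARE @LastLoad datetime;
SELECT @LastLoad = LastLoadDate
FROM dbo.ETL_Watermark
WHERE TableName = 'DimUserUpdate';

-- 2. Source query for the data flow: only new and changed rows
SELECT *
FROM dbo.DimUserUpdate
WHERE ModifiedDate > @LastLoad;

-- 3. After a successful load, move the watermark forward
UPDATE dbo.ETL_Watermark
SET LastLoadDate = GETDATE()
WHERE TableName = 'DimUserUpdate';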

Q21 How do you handle late-arriving dimensions / early-arriving facts?

Late-arriving dimensions are sometimes unavoidable because of delays or errors in the dimension ETL, or simply because of the ETL logic. To handle early-arriving facts, we can create a dummy dimension row containing just the natural/business key and keep the rest of the attributes as NULL or defaults. As soon as the actual dimension data arrives, the dummy row is updated with a Type 1 change. These rows are also known as inferred members. A T-SQL sketch follows.
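A hedged sketch of creating an inferred member, using the DimUser table from the SCD examples later in this post (assumes UserSK is an identity surrogate key and @MissingUserAK is the business key found in the incoming fact row):

-- Fact ETL encounters a UserAK that has no dimension row yet
INSERT INTO dbo.DimUser (UserAK, ScreenName, AccountCreateDate, TimeZone, [Language], GeographySK)
VALUES (@MissingUserAK, NULL, NULL, NULL, NULL, NULL);
-- When the real dimension data finally arrives, the dimension ETL
-- overwrites these NULLs with a normal Type 1 update.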

Q1. What is SQL Server Reporting Services (SSRS)?
SQL Server Reporting Services is a server-based reporting platform that you can use to create and manage tabular, matrix, graphical, and free-form reports that contain data from relational and multidimensional data sources. The reports that you create can be viewed and managed over a web-based connection.

Q2. What is the architecture of SSRS?
The Report Server is the core component: it hosts the report processing, data processing, rendering, and scheduling and delivery engines (and their extensions) and stores report content and metadata in the ReportServer database. Client tools such as Report Manager, Report Designer, and Report Builder communicate with it through its web service interfaces.

 -Admin 

Q3. What are the three stages of Enterprise Reporting Life Cycle ?
a. Authoring
b. Management
c. Access and Delivery

Q4. What are the components included in SSRS?
1. A complete set of tools that can be used to create, manage, and view reports.
2. A Report Server component that hosts and processes reports in a variety of formats. Output formats include HTML, PDF, TIFF, Excel, CSV, and more.
3. An API that allows developers to integrate or extend data and report processing in custom applications, or create custom tools to build and manage reports.

Q5. What is the benefit of using embedded code in a report?
1. Reusability of code: a function created in embedded code to implement some logic can then be used in multiple expressions.
2. Centralized code: helps in better manageability of the code.


Q6. Which programming language can be used to code embedded functions in SSRS?
Visual Basic .NET Code.

Q7. What are the important terms used in Reporting Services?

1. Report definition:  The blueprint for a report before the report is processed or rendered. A report definition contains information about the query and layout for the report.

2. Report snapshot: A report that contains data captured at a specific point in time. A report snapshot is actually a report definition that contains a dataset instead of query instructions.

3. Rendered report: A fully processed report that contains both data and layout information, in a format suitable for viewing (such as HTML).

4. Parameterized report: A published report that accepts input values through parameters.

5. Shared data source: A predefined, standalone item that contains data source connection information.

6. Shared schedule: A predefined, standalone item that contains schedule information.

7. Report-specific data source: Data source information that is defined within a report definition.

8. Report model: A semantic description of business data, used for ad hoc reports created in Report Builder.

9. Linked report: A report that derives its definition through a link to another report.

10. Report server administrator: This term is used in the documentation to describe a user with elevated privileges who can access all settings and content of a report server. If you are using the default roles, a report server administrator is typically a user who is assigned to both the Content Manager role and the System Administrator role. Local administrators can have elevated permission even if role assignments are not defined for them.

11. Folder hierarchy: A bounded namespace that uniquely identifies all reports, folders, report models, shared data source items, and resources that are stored in and managed by a report server.


12. Report Server: Describes the Report Server component, which provides data and report processing, and report delivery. The Report Server component includes several subcomponents that perform specific functions.

13. Report Manager: Describes the Web application tool used to access and manage the contents of a report server database.

14. Report Builder: Report authoring tool used to create ad hoc reports.

15. Report Designer: Report creation tool included with Reporting Services.

16. Model Designer: Report model creation tool used to build models for ad hoc reporting.

17. Report Server Command Prompt Utilities: Command line utilities that you can use to administer a report server.
a) RsConfig.exe, b) RsKeymgmt.exe, c) Rs.exe

Q8. What are the command-line utilities available in Reporting Services?
· Rsconfig Utility (Rsconfig.exe): encrypts and stores connection and account values in the RSReportServer.config file. Encrypted values include report server database connection information and account values used for unattended report processing
· RsKeymgmt Utility: Extracts, restores, creates, and deletes the symmetric key used to protect sensitive report server data against unauthorized access
· RS Utility (Rs.exe): mainly used to automate report server deployment and administration tasks. It processes scripts you provide in an input file.
  
Q. How to know Report Execution History?
The ExecutionLog table in the ReportServer database stores the execution logs of (by default) the last two months:
 SELECT * FROM ReportServer.dbo.ExecutionLog
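To see which report each row refers to, join to the Catalog table in the same database (a sketch assuming the default ReportServer schema):

SELECT c.Name, e.TimeStart, e.TimeEnd, e.Status, e.ByteCount
FROM ReportServer.dbo.ExecutionLog AS e
JOIN ReportServer.dbo.Catalog AS c ON e.ReportID = c.ItemID
ORDER BY e.TimeStart DESC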

-Development
Q. What is the difference between Tabular and Matrix reports?
OR What are the different styles of reports?

Tabular report: A tabular report is the most basic type of report. Each column corresponds to a column selected from the database.

Matrix report: A matrix (cross-product) report is a cross-tabulation of four groups of data:
    a. One group of data is displayed across the page.
    b. One group of data is displayed down the page.
    c. One group of data is the cross-product, which determines all possible locations where the across and down data relate and places a cell in those locations.
    d. One group of data is displayed as the "filler" of the cells.
Matrix reports can be thought of as pivot tables.

Q. How to create Drill-through reports?
Using the Navigation property of a cell: set the child report and its parameters there.

Q. How to create Drill-Down reports?
To cut the story short:
- By grouping data on the required fields
- Then toggling visibility based on the grouped field




SSIS - Uses with For Loop Container

How do you configure a simple For Loop Container? The For Loop Container is a repeating container that keeps executing its contents as long as a condition evaluates to true.

Three examples:
- Do something ten times
- Wait untill a certain time
- Wait for a couple of seconds / minutes


A) Do something ten times
This example executes whatever is inside the loop ten times.

1) Variable
Create an integer variable named Counter. This variable is used to count to 10.
 

Variable Counter (right click in Control Flow)

2) Add For Loop Container
Drag a For Loop Container to your Control Flow and give it a suitable name.

3) Edit For Loop Container
- Edit the For Loop Container and set the InitExpression to: "@[User::Counter] = 0". This is the initial value of the counter. This example starts at zero.
- Set the required EvalExpression to: "@[User::Counter] < 10". This is the evaluation expression used to test whether the loop should stop or continue. The example stops after 10 iterations (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).
- Set the optional AssignExpression to: "@[User::Counter] = @[User::Counter] + 1". This is the iteration expression that increments the loop counter.

Set the expressions

4) Testing
For testing purposes I added a Script Task with a MessageBox that shows the value of the counter.
 

Testing the For Loop

B) Wait until a certain time
This example uses the For Loop as a wait statement. Let's wait until 12:20.

1) For Loop Container
Add a For Loop Container to your Control Flow and give it a suitable name. I added an Annotation to clarify that it does nothing within the loop.

Waiting For Loop

2) Edit For Loop Container
Only enter this EvalExpression: GETDATE() <= DATEADD("Mi",20,DATEADD("Hh",12,(DT_DBDATE)GETDATE()))

EvalExpression

Clarification of the expression
It removes the time from the current datetime. So 01-02-2011 12:01:45.002 becomes 01-02-2011 0:00:00.000:
(DT_DBDATE)GETDATE()
After that it adds 20 minutes and 12 hours. So it becomes 01-02-2011 12:20:00.000:
DATEADD("Mi",20,DATEADD("Hh",12,(DT_DBDATE)GETDATE()))
That datetime is compared to the current datetime:
GETDATE() <= DATEADD("Mi",20,DATEADD("Hh",12,(DT_DBDATE)GETDATE()))

3) Testing
For testing purposes I added a Script Task after the loop that shows the current time in a messagebox.

Testing

C)  Wait for a couple of seconds / minutes
With a slightly different approach than example B, you can wait for a couple of seconds, minutes, etc.
Use this expression in the same way as in example B to wait 30 seconds:
DATEADD("ss", 30, @[System::ContainerStartTime]) > GETDATE()
or 5 minutes:
DATEADD("mi", 5, @[System::ContainerStartTime]) > GETDATE()

See the SSIS documentation on DATEADD for more information.

Note: delay techniques with a For Loop cause unnecessary CPU processing (busy waiting). A Script Task with a Thread.Sleep doesn't have that drawback.


Thursday, October 17, 2013

SSIS - Slowly Changing Dimension Type 1 Implementation with Merge Join vs Lookup vs SCD Transformation

Hi All,
I would like to compare the SCD Type 1 implementation design patterns and their advantages/disadvantages in terms of performance. Enjoy!

I. Slowly Changing Dimension Type 1 with Merge Join
Right from the start the design is a bit different. Instead of adding your source query to an OLEDB Source component and then connecting it to the lookup or the SCD component, we create 2 OLEDB Source components. 
The first one is the same as in the previous two sections; it connects to our source table (DimUserUpdate). However, we have to modify the query a little bit to get it to work with the Merge Join component. So let's take a look at that query.
SELECT
    [UserAK]
    ,[ScreenName]
    ,[AccountCreateDate]
    ,[TimeZone]
    ,[Language]
    ,[GeographySK]
    ,CHECKSUM( 
        [ScreenName]
        ,[AccountCreateDate]
        ,[TimeZone]
        ,[Language]
        ,[GeographySK] 
    ) as Hash_CD
FROM
    DimUserUpdate
Order By UserAK

The addition of the ORDER BY clause is essential when using the Merge Join design pattern.

The second OLEDB Source component will query against our destination table, essentially replacing the lookup  component from the Lookup Conditional Split design pattern.  As in the lookup query we only need to bring back the UserAK, UserSK and the Hash_CD.  But just like the source query above we need to add the ORDER BY clause because we are going to use the Merge Join.

SELECT [UserSK]
      ,[UserAK]
      ,[Hash_CD]
  FROM [Demo].[dbo].[DimUser]
Order By UserAK 
Simply using the ORDER BY clause is not enough for SSIS to know that the datasets you intend to feed into the Merge Join are sorted. You have to tell SSIS explicitly that the data is sorted, and on what column the sort is happening. I agree it's a little redundant, but that's how it is. You must do this for each OLEDB Source. To do this, right click on the OLEDB Source and select Show Advanced Editor from the menu.
Once inside the advanced editor select the Input and Output Properties Tab along the top of the window. Then under the Common Properties for the OLEDB Source Output change the IsSorted property to True.
That takes care of telling SSIS that the data is sorted, but now we need to tell SSIS what column it is sorted on. To do that, drill down from the OLEDB Source Output to the Output Columns, select the column UserAK (or columns, if your query is ordered by more than one column), and change its SortKeyPosition to 1.
Now do this for the second OLEDB Source and then we'll be ready for the Merge Join component. Please note that if you have multiple columns in your ORDER BY clause they must match in both queries, and you must set the SortKeyPosition in the same order for both components. Once the source components are configured, drag in the Merge Join component.
Now drag the first source component output connection to the Merge Join component. Once you’ve connected this a selection box will appear.  It will ask you to tell it what side of the join this output belongs on.  This is important, because we are basically going to be creating a Left Outer Join inside the component, so knowing which side of the join each portion belongs to is obviously essential.
When you add the second output from the other OLEDB Source you won’t be prompted again, it will just make it the input for the other side of the join. Now double click on the Merge Join component and open the editor. 
The first property to set is the Join Type. Your options are Left Outer Join, Inner Join, or Full Outer Join. We are going to be using the Left Outer Join option. Below the Join Type are the columns from our two queries, each on the side we specified when hooking up the component to the Merge Join. The keys are already linked; now we need to add the columns we want to return from both sides of the join. Make sure to include both Hash_CD columns, to make comparing them easier later on in the package.
The next step will be to add a Conditional Split to the package to determine which rows are new rows to be inserted and which rows need to be evaluated for updates. This requires only one output to be created, and we'll call it NewRecords. The expression you need to add in the editor is sketched below.
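The original screenshot of the expression is not reproduced here; based on the explanation that follows, an expression that isolates the new records would be (assuming the destination-side surrogate key column arrives in the data flow as UserSK):

ISNULL(UserSK)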
If this doesn't make immediate sense, let me explain. Since we did a left outer join, records with no match between the two queries on UserAK are new records, and as a result their UserSK evaluates to NULL. Now hook up the NewRecords output to your OLEDB Destination, set the connection to the destination table, confirm the mappings, and the new records will get inserted correctly.
Now on to the second Conditional Split. Here we need to evaluate whether the records that matched on UserAK have had a change in any of the columns we are tracking changes on. To do this we will use the checksum values that we created in our queries. We need to write 2 SSIS expressions, one for each output, to determine whether the records have changed. The two expressions are sketched below.
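The original screenshot is not reproduced here; assuming the destination-side checksum was aliased (for example as DST_Hash_CD) when selecting the Merge Join output columns, the two outputs would look like:

Update output:   Hash_CD != DST_Hash_CD
NoChange output: Hash_CD == DST_Hash_CD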
We then connect the Update output to the OLEDB Command component to update the rows that need to be updated.
Inside the OLEDB Command editor the first tab you are shown is the connection manager tab. Simply connect to the destination database and then select the component properties tab.
At the bottom of this tab, under Custom Properties is the SQLCommand property. Here you will write your update statement. To do this you will have to map all of the columns to be updated in the destination to the columns coming through in the data flow. The query will look like you are mapping them to parameter values (?). Notice that even the where clause is set using the parameter indicator.
UPDATE DimUser
 SET
     [ScreenName] = ?
    ,[AccountCreateDate] = ?
    ,[TimeZone] = ?
    ,[Language] = ?
    ,[GeographySK] = ?
 WHERE [UserSK] = ?
Once the query is complete go to the Column Mapping tab.
Now it is just a matter of correctly mapping the parameter values to the columns in your data flow (input columns). Make sure to pay attention to the column order in the update query so you map each parameter value to the appropriate input column. Remember that the last parameter value is for the WHERE clause; it is why we brought the UserSK value through from the destination-side query to begin with.
When you're done, the data flow should look something like this.
I hope this helps some of you looking to try different ways to update SCD1 Dimensions.

II. Slowly Changing Dimension Type 1 with Lookup and Conditional Split
In the post below (section III) I talk about using the SCD component that comes with SSIS to load a dimension. That method is OK for loading small tables (fewer than roughly 50,000 rows); the Lookup and Conditional Split design pattern performs much better. The main reason for the enhanced performance is the lookup component. The lookup component executes its query once and stores the results in cache, whereas the SCD component queries the destination table for each row that comes from the source. While there is no easy-to-configure wizard for setting up this design pattern, it isn't too difficult to do manually.
With the SCD component we didn’t have to do any additional work in our source query to make comparing the columns quick and easy.  With the Lookup and Conditional Split we need to add a checksum to our query.  Here is the query that includes the checksum:
SELECT
    [UserAK]
    ,[ScreenName]
    ,[AccountCreateDate]
    ,[TimeZone]
    ,[Language]
    ,[GeographySK]
    ,CHECKSUM( 
        [ScreenName]
        ,[AccountCreateDate]
        ,[TimeZone]
        ,[Language]
        ,[GeographySK] 
    ) as Hash_CD
FROM
    DimUserUpdate
Now that we have our source query configured correctly, added the Lookup component, and connected the two, let's configure the Lookup.
On the General Tab you can configure the cache mode, the connection type, and the way to handle rows with no matching entries. This third setting is very important when configuring the lookup for slowly changing dimensions. You want to set the option to Redirect rows to no match output. This allows you to insert rows that don't already exist in your destination table.
On the Connection Tab you set the connection to your database and can choose either to select a table or to use a SQL query. I suggest you always use a SQL query and bring back only the rows you need for the lookup. Doing this will save space in memory; if you have a very wide table with millions of rows, the lookup could run slowly or even fail. In this query I am only bringing back the UserAK (business key), UserSK (surrogate key), and the checksum value. With regards to the checksum, it is entirely up to you (or the data architect, if that isn't you as well) whether or not you store the checksum value. In this example I am not storing the value. Here is the query:
SELECT
    UserSK
    ,UserAK
    ,CHECKSUM( 
        [ScreenName]
        ,[AccountCreateDate]
        ,[TimeZone]
        ,[Language]
        ,[GeographySK] 
    ) as Hash_CD
FROM dbo.DimUser 
The Columns Tab is pretty easy to configure. The two boxes on the upper half of the editor window represent the source query (left) and the lookup query (right). To configure, just drag the business key (UserAK) from the source query onto the business key (UserAK) from the lookup query. Then put a check mark next to the columns from the lookup that you want to bring back into the data flow (UserSK and Hash_CD). Since the column Hash_CD exists in both the lookup and the source queries, make sure to set the Output Alias (LKP_Hash_CD) so it is easy to differentiate between the two. This is all you need to do to configure the lookup.
From the lookup we should have two outputs:
  1. Lookup Match Output
  2. Lookup No Match Output
The Lookup No Match Output will contain all of our new records.  We can map this output directly to our OLEDB Destination with no further work. 
The Lookup Match Output contains all the records that had matches in the destination table, so the next step will be to determine if the records coming are different from the records that already exists.  To do this we will use the Conditional Split transform.
To configure this transform we will use the two Hash_CD values to create two different outputs from the Conditional Split. First we will configure the NoChange output: name the output and then add the SSIS expression to compare the two values. In this instance we want to send all the matching Hash_CD values to this output. Here is the expression:
LKP_Hash_CD == Hash_CD
The next output will be the Change Output, and will contain all the records where the Hash_CD values didn’t match. Here is that expression:
LKP_Hash_CD != Hash_CD
That is it for the Conditional Split configuration. Now we need to set up the package to perform the updates.  There are a couple of methods to do this, but we are going to use the OLEDB Command to do the updates.

As you can see we use the Change Output to connect to our OLEDB Command component. 

Inside the OLEDB Command editor the first tab you are shown is the connection manager tab. Simply connect to the destination database and then select the component properties tab.
At the bottom of this tab, under Custom Properties is the SQLCommand property.  Here you will write your update statement.  To do this you will have to map all of the columns to be updated in the destination to the columns coming through in the data flow.  The query will look like you are mapping them to parameter values (?).  Notice that even the where clause is set using the parameter indicator.
UPDATE DimUser
 SET
     [ScreenName] = ?
    ,[AccountCreateDate] = ?
    ,[TimeZone] = ?
    ,[Language] = ?
    ,[GeographySK] = ?
 WHERE [UserSK] =?
Once the query is complete go to the Column Mapping tab.
Now it is just a matter of correctly mapping the parameter values to the columns in your data flow (input columns). Make sure to pay attention to your column order in the update query to map the parameter values to the appropriate input columns. Remember that the last parameter value is for the WHERE clause; it is why we brought the UserSK value from the lookup query to begin with.
Once all the mapping is done click OK and you are now ready to handle Type 1 changes in your Slowly Changing Dimension.

III. Slowly Changing Dimension Type 1 Changes using SSIS SCD Component
The Slowly Changing Dimension component included with SSIS is one of the methods you can use to manage slowly changing dimensions, and it's pretty easy to use. The problem with the SCD component is that it performs pretty badly, especially as the number of rows in your table grows. I would say that any dimension with over 50,000 records in it would be too big for this component. For small dimensions it will work just fine.
Once you have configured your source component, placed the SCD component in your design pane, and hooked the two components up, double click on the SCD component to open the editor.
The first step is to connect to your destination table. Then you need to select the column or columns in the destination table that match the key or keys from your source table. In this instance we are choosing the UserAK column, which is the primary key in our source table and the alternate key in the destination table. Once the key columns are selected, click Next.
On this screen you tell the wizard which of your non-key columns you are going to update. You do this by selecting one of the options from the dropdown menu under the Change Type column next to each dimension column. There are three options:
  1. Fixed Attributes, which means that the data in these columns won’t ever change, even if a change comes through from the source. 
  2. Changing Attributes which corresponds to a Type 1 change.
  3. Historical Attributes which corresponds to a Type 2 change.
Since we are only worried about Type 1 changes, we are going to select the Changing Attribute option. Once all the columns are configured as you would like them, click Next.
Here we will configure how to handle the fixed and changing attributes. The first option determines how to handle updates that come in for fixed attributes: either fail the transform or not. The next option allows you to update changing-attribute columns in historical records as well as in the current record. Once done here, click Next.
On the next screen you configure how to handle inferred members; for our purposes we will just leave this option disabled. Click Next and Finish. The wizard will now add both an OLEDB Destination component and an OLEDB Command component to handle the inserts and the updates, and configure them for you.
Please don’t let the relative ease of setting up the SCD component drive your decision to use it.  If you expect your dimension to get fairly large there are other design patterns that you can use that will produce much better results.