MSBI (SSIS/SSRS/SSAS) Online Training

Monday, January 12, 2015

Top Ten: New Features in SQL Server 2014

Hi All,
I am listing the top 10 new features of SQL Server 2014 here.

Microsoft introduced some significant enhancements in SQL Server 2014, especially with In-Memory OLTP. However, as you might expect after such a short release cycle, not every subsystem has been updated; there are no major changes to SQL Server Integration Services (SSIS), SQL Server Replication Services, or SQL Server Reporting Services (SSRS). Nonetheless, there are plenty of significant enhancements. Here are 10 new features in SQL Server 2014.

1. In-Memory OLTP Engine

SQL Server 2014 enables memory optimization of selected tables and stored procedures. The In-Memory OLTP engine is designed for high concurrency and uses a new optimistic concurrency control mechanism to eliminate locking delays. Microsoft states that customers can expect performance to be up to 20 times better than with SQL Server 2012 when using this new feature. For more information, check out “Rev Up Application Performance with the In-Memory OLTP Engine.”
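For illustration, here is a minimal sketch of a memory-optimized table; the SalesDB database, file path, and ShoppingCart table are hypothetical placeholders:

-- One-time setup: the database needs a memory-optimized filegroup and container
ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_imoltp CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB ADD FILE
    (NAME = 'SalesDB_imoltp_container', FILENAME = 'C:\Data\SalesDB_imoltp')
    TO FILEGROUP SalesDB_imoltp;
GO
USE SalesDB;
GO
-- Memory-optimized, fully durable table with a nonclustered hash index
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);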

2. AlwaysOn Enhancements

Microsoft has enhanced AlwaysOn integration by expanding the maximum number of secondary replicas from four to eight. Readable secondary replicas are now also available for read workloads, even when the primary replica is unavailable. In addition, SQL Server 2014 provides the new Add Azure Replica Wizard, which helps you create asynchronous secondary replicas in Windows Azure.
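As a hedged sketch, adding one of the extra replica slots uses the same T-SQL as before; here is what adding an asynchronous, readable secondary might look like (the availability group, server, and endpoint names are made up):

-- Run on the primary: add an asynchronous-commit, readable secondary replica
ALTER AVAILABILITY GROUP [SalesAG]
ADD REPLICA ON 'SQLNODE5'
WITH (
    ENDPOINT_URL      = 'TCP://SQLNODE5.contoso.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE     = MANUAL,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)  -- readable secondary
);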

3. Buffer Pool Extension

SQL Server 2014 provides a new solid state disk (SSD) integration capability that lets you use SSDs to expand the SQL Server 2014 Buffer Pool as nonvolatile RAM (NvRAM). With the new Buffer Pool Extensions feature, you can use SSD drives to expand the buffer pool in systems that have maxed out their memory. Buffer Pool Extensions can provide performance gains for read-heavy OLTP workloads.
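Enabling the feature is a single statement; a sketch with an example SSD path and size (adjust both to your hardware):

-- Point the buffer pool extension at a file on an SSD volume
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\SQLServerCache.BPE', SIZE = 64 GB);

-- Verify the current state of the extension
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;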

4. Updateable Columnstore Indexes

When Microsoft introduced the columnstore index in SQL Server 2012, it provided improved performance for data warehousing queries. For some queries, the columnstore indexes provided a tenfold performance improvement. However, to utilize the columnstore index, the underlying table had to be read-only. SQL Server 2014 eliminates this restriction with the new updateable columnstore index. The SQL Server 2014 clustered columnstore index must include all the columns in the table and can’t be combined with other indexes.
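A minimal sketch (dbo.FactSales is a hypothetical example table with no other indexes):

-- The clustered columnstore index covers all columns and becomes the table's storage
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;

-- Unlike the SQL Server 2012 nonclustered columnstore, the table stays writable
INSERT INTO dbo.FactSales (DateKey, ProductKey, SalesAmount)
VALUES (20140112, 101, 49.99);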

5. Storage I/O Control

The Resource Governor lets you limit the amount of CPU and memory that a given workload can consume. SQL Server 2014 extends the reach of the Resource Governor to manage storage I/O usage as well. The SQL Server 2014 Resource Governor can limit the physical I/Os issued for user threads in a given resource pool.
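A sketch of the new I/O limits (the pool and group names and the IOPS numbers are examples; in practice you would also bind sessions to the pool with a classifier function):

-- Cap physical I/O per volume for a reporting workload
CREATE RESOURCE POOL ReportingPool
WITH (MIN_IOPS_PER_VOLUME = 0, MAX_IOPS_PER_VOLUME = 100);

CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;

ALTER RESOURCE GOVERNOR RECONFIGURE;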

6. Power View for Multidimensional Models

Power View used to be limited to tabular data. However, with SQL Server 2014, Power View can now be used with multidimensional models (OLAP cubes) and can create a variety of data visualizations including tables, matrices, bubble charts, and geographical maps. Power View multidimensional models also support queries using Data Analysis Expressions (DAX).

7. Power BI for Office 365 Integration

Power BI for Office 365 is a cloud-based business intelligence (BI) solution that provides data navigation and visualization capabilities. Power BI for Office 365 includes Power Query (formerly code-named Data Explorer), Power Map (formerly code-named GeoFlow), Power Pivot, and Power View. You can learn more about Power BI at Microsoft’s Power BI for Office 365 site.

8. SQL Server Data Tools for Business Intelligence

The new SQL Server Data Tools for BI (SSDT-BI) is used to create SQL Server Analysis Services (SSAS) models, SSRS reports, and SSIS packages. The new SSDT-BI supports SSAS and SSRS for SQL Server 2014 and earlier, but SSIS projects are limited to SQL Server 2014. In the pre-release version of SQL Server 2014, SQL Server Setup doesn’t install SSDT-BI. Instead, you must download SSDT-BI separately from the Microsoft Download Center.

9. Backup Encryption

One welcome addition to SQL Server 2014 is the ability to encrypt database backups for at-rest data protection. SQL Server 2014 supports several encryption algorithms, including Advanced Encryption Standard (AES) 128, AES 192, AES 256, and Triple DES. You must use a certificate or an asymmetric key to perform encryption for SQL Server 2014 backups.
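A hedged sketch of the one-time setup plus an encrypted backup (the database name, path, and password are placeholders):

USE master;
-- One-time setup: a database master key and a certificate to protect the backups
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE BackupCert WITH SUBJECT = 'Backup encryption certificate';
GO
-- Compressed backup encrypted with AES-256
BACKUP DATABASE SalesDB
TO DISK = 'D:\Backups\SalesDB.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);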

10. SQL Server Managed Backup to Windows Azure

SQL Server 2014’s native backup supports Windows Azure integration. Although I’m not entirely convinced that I would want to depend on an Internet connection to restore my backups, on-premises SQL Server 2014 and Windows Azure virtual machine (VM) instances support backing up to Windows Azure storage. The Windows Azure backup integration is also fully built into SQL Server Management Studio (SSMS).
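A sketch of a native backup straight to Windows Azure blob storage (the storage account, container, and credential names are placeholders):

-- Credential that holds the storage account name and access key
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage access key>';
GO
-- Back up directly to a blob container
BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION;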

Microsoft released CU5 for SQL Server 2014

Hi, this is my first post in 2015. This cumulative update contains hotfixes for issues that were fixed after the release of SQL Server 2014. The build number of this cumulative update package is 12.0.2456.0.
List of fixes related to SQL Server Analysis Services / PowerPivot 2014:
Release | SQL Bug Number | KB Article Number | Description
CU5 | 3720664 | 2981424 | FIX: Exception when you select more than one hierarchy from parent/child dimension in SSAS 2012 or in SSAS 2014
CU5 | 3260606 | 2992069 | FIX: XMLA and Windows PowerShell return different code for the same SSAS 2012 or SSAS 2014 processing
CU5 | 3720635 | 2998870 | FIX: "The record ID is incorrect" error when you use DAX to execute query in a dimension in SSAS 2012 or SSAS 2014
CU5 | 3428568 | 3016518 | FIX: "You do not have permission." error when you run a report that connects to a .BISM file in PowerPivot gallery
CU4 | 2906084 | 2963137 | FIX: Incorrect result is returned when you query multiple dimension members in MDX and DAX in SSAS 2012 or SSAS 2014
CU4 | 2906101 | 2965922 | FIX: Issue when you configure scheduled data refresh for the PowerPivot workbook in SharePoint 2013
CU4 | 2906091 | 2975385 | FIX: User obtains incorrect data when two users in the same security role run a client report in SSAS 2012 or SSAS 2014
CU4 | 3260588 | 2975743 | FIX: Incorrect result and exception occur when you use MDX query to do session cube grouping in SSAS
CU4 | 2906151 | 2976172 | FIX: An access violation occurs when you perform Process Data or Process Full on a partition in a table in SSAS
CU4 | 2906149 | 2976861 | FIX: The time is longer and longer to perform process full on an SSAS 2012 or SSAS 2014 tabular model database
CU4 | 2906115 | 2976953 | FIX: The DAX query with filter returns incorrect result after you run a DAX query without any filters in SSAS 2012 or in SSAS 2014
CU4 | 3260578 | 2989512 | FIX: MDX query returns no result when the cache is not cleared after you run a previous query in OLAP or tabular model
CU4 | 3260603 | 2989590 | FIX: Except function does not return the calculated member that is created at the ALL member level in SSAS
CU4 | 3260618 | 2993483 | FIX: SSAS processing fails when you perform a Process Full operation on many partitions in a transaction
CU4 | 2976564 | 2999308 | Add detailed information to the "Call to Excel Services returned an error" error message in SQL Server 2014 PowerPivot
CU3 | 2578302 | 2961559 | FIX: Incorrect folder when deploying SQL Server 2012 or SQL Server 2014 PowerPivot solutions by using SharePoint 2013 Central Admin
CU3 | 2578304 | 2963372 | FIX: Sub-total of calculated measure is incorrect (non-filtered) when you execute MDX query in SSAS 2012 or SSAS 2014
CU3 | 2515653 | 2975434 | FIX: Error when you complete an SSAS 2014 image after specifying a non-default root directory
CU3 | 2895169 | 2989715 | FIX: Dimension attribute is lost in Excel PivotTable Field List when you set AttributeHierarchyDisplayFolder to "" in SSDT-BI
CU2 | 2525268 | 2932559 | FIX: Totals are wrong after you filter on a pivot table item and remove the filter in SSAS 2012 or SSAS 2014
CU2 | 2525270 | 2954480 | FIX: SSAS instance crashes when you perform writeback action by using the role with cell security defined
CU2 | 2525290 | 2961254 | FIX: The execution of an MDX query in SSAS 2012 or SSAS 2014 is much slower than it is in SSAS 2008 R2
CU2 | 2525299 | 2962650 | FIX: Access violation occurs when running a Process Recalc on a tabular table in SSAS 2012 or SSAS 2014
CU2 | 2593575 | 2967900 | FIX: Data models do not load after you run a PowerPivot scheduled data refresh job when you work with SQL Server 2014
CU2 | 2262840 | 2968974 | FIX: You cannot create a Power View report against a UDM cube in SSAS 2014
CU2 | 2262877 | 2974373 | FIX: Poor performance when you use DAX functions on a String or Boolean column in SSAS 2014
CU2 | 2361316 | 2962599 | Can now disable checking whether dimension and partition data originate from the same data source when you run a ROLAP query in SQL Server 2014
CU1 | 2034093 | 2860446 | FIX: Unexpected error in "XMCompressionAnalyzer::SelectBitPackCompression" function when you perform a Process command against a partitioned table in SSAS 2012 or SSAS 2014
CU1 | 2034099 | 2864111 | FIX: It takes longer than expected to cancel an MDX query that contains many GROUP BY clauses in SSAS 2012 or SSAS 2014
CU1 | 1993317 | 2864930 | FIX: You cannot write text that has more than 32,767 characters in Report Query Designer in SQL Server
CU1 | 2034115 | 2872946 | FIX: CPU spinning occurs when you use Power View to run a DAX query against a Multidimensional model in SSAS 2012 or SSAS 2014
CU1 | 2034128 | 2876882 | FIX: "Internal error: An unexpected error occurred" when you run a "Process" command against the TFS SSAS cube
CU1 | 2066169 | 2880071 | FIX: Error messages when you perform a ProcessRecalc operation after a ProcessData operation on an SSAS 2012 or SSAS 2014 tabular model
CU1 | 2034081 | 2880094 | FIX: Grand total or subtotal rows display incorrect results in a Power View report that runs against a multidimensional cube in SSAS 2012 or SSAS 2014
CU1 | 2034109 | 2884124 | FIX: "Type mismatch" error when you run an MDX query that uses the Member.Name property against an SSAS 2012 or SSAS 2014 database
CU1 | 2033406 | 2890415 | Notification messages of data update failure are not sent when you use the PowerPivot for SharePoint 2013 add-in for SQL Server 2012 SP1 or SQL Server 2014
CU1 | 1993318 | 2890818 | FIX: Incorrect value is returned when you run an UPDATE CUBE statement that uses the USE_WEIGHTED_INCREMENT allocation method in SSAS
CU1 | 1852105 | 2895694 | FIX: Query scope assignment works incorrectly after you run a ClearCache command in SSAS
CU1 | 1993316 | 2896903 | FIX: You cannot perform any administrator actions on databases when a data model corruption occurs in SSAS 2012 or SSAS 2014
CU1 | 2033273 | 2897263 | "An error occurred when loading VertiPaq data" message after you load a database in SSAS 2012 or SSAS 2014
CU1 | 2033667 | 2897265 | FIX: "Invalid column name" error when you run some MDX queries against ROLAP partitions in SSAS 2012 or SSAS 2014
CU1 | 2033721 | 2897282 | FIX: Wrong date format will be returned when you run DateAdd or Day function in SSAS 2012 or SSAS 2014 on a non-US locale computer
CU1 | 2033617 | 2905275 | FIX: Error messages when you run some MDX or DAX queries in SSAS 2012 or SSAS 2014
CU1 | 2033300 | 2905298 | FIX: It takes much longer for a user without administrator permissions to run an MDX query than an administrator user in SSAS 2012 or SSAS 2014
CU1 | 2034911 | 2912928 | FIX: SQL Server 2012 or SQL Server 2014 PowerPivot for SharePoint 2010 cannot create databases when the SharePoint server name is FQDN
CU1 | 1852113 | 2912949 | FIX: SSAS crashes when more than one thread tries to access an SSAS cube at the same time
CU1 | 1993315 | 2921630 | FIX: Incorrect result is returned when you run an MDX query that contains a DRILLTHROUGH statement in SSAS
CU1 | 2205355 | 2927524 | FIX: MDX query does not reuse cache after clear cache command on different Measure Group with Non-Admin login in SSAS 2012 or SSAS 2014
CU1 | 2101350 | 2927748 | FIX: PowerPivot workbook cannot do data refresh in SharePoint 2013 after you apply SQL Server 2012 CU7 or in SQL Server 2014
CU1 | 2205357 | 2927844 | FIX: Large DistinctCount become slower in tabular mode in SQL Server 2012 or SQL Server 2014
CU1 | 2076865 | 2922799 | FIX: Unexpected exception is thrown when you convert writeback partition in SSAS

Wednesday, October 8, 2014

BI Tool Comparison

I get this question often: which is the best Data Visualization product? Let’s compare some DV platforms, such as Spotfire, Qlikview, Tableau Software, and the Microsoft BI platform (PowerPivot, Excel 2010, SQL Server with its VertiPaq engine, SSAS, SSRS, and SSIS). Two key factors for tool selection are:
·         ease of comprehending the data, and
·         price-performance.
Because modern datasets are huge (and growing very fast!), they are usually best comprehended using Data Visualization (DV) with the ability to interact with data through visual drill-down capabilities and dashboards. There is no single best visualization product; each has its place. For example, Spotfire has the best web client and analytical functionality. Qliktech may be the best visualization product for interactive drill-down capabilities. The Microsoft BI platform provides a better price-performance ratio and is good as a backend for DV (especially with the release of SQL Server 2012) or for people who wish to build their own DV (which would be a mistake). Tableau has the best ability to interact with OLAP cubes. The summary table below compares the four platforms to help you evaluate DV products based on your needs.
Criteria | Spotfire | Qlikview | Tableau | MS BI Stack | Comment
Business Criteria | ======= | ======= | ======= | ======= | Speed, Scalability, Price
Implementation Speed | Good | High | Good | Average | Qlikview is fastest to implement
Scalability | Unlimited | Limited by RAM | Very Good | Good | Need the expert in scalable SaaS
Pricing | High | Above Average | High | Average | Microsoft is the price leader
Licensing/support cost | High | High | High | Average | Smart Client is the best way to save
Enterprise Readiness | Excellent | Good for SMB | Good for SMB | Excellent | Partners are the key to SMB market
Long-term viability | Good | 1 product | Average | Excellent | Microsoft are 35+ years in business
Mindshare | Analytics Market | Growing fast | Growing fast | 3rd attempt to win BI | Qlikview is a DV Leader, Successful IPO
Technical Criteria | ======= | ======= | ======= | ======= | Drilldown, Analytics, UI
Clients for End Users | ZFC, Spotfire Silver | RIA, ZFC, Mobile | Windows, ZFC | Excel, .NET | Free Qlikview Personal Edition is a big plus
Interactive Visualization | Very Good | Excellent | Very Good | As good as Excel | Most users value Visualization over Modeling
Data Integration | Good | Good | Excellent | Good | Need for Data Integration expert
Visual Drill-Down | Very Good | Excellent | Good | Average | Qlikview is fastest thanks to in-memory database
Dashboard Support | Very Good | Excellent | Good | Below Average | Spotfire and Qlikview are best for Dashboards
Integration with GIS | Excellent | Good | Good | Average | Spotfire has the best GIS integration
Modeling and Analytics | Excellent | Weak | Excellent OLAP | Good with SSAS | Spotfire is the best, Excel is the most popular
UI & set of Visual Controls | Very Good | Best | Very Good | Good | Need for UI expert to integrate DV components
Development Environment | Rich API, S+ | Scripting, Rich API | Average | Excellent | Tableau requires less consulting than competitors
64-bit In-Memory Columnar DB | Very Good | Excellent | In-memory Data Engine | Very Good | 64-bit RAM allows huge datasets in memory
Summary – Best for: | Visual Analytics | DV, Drilldown | Visual OLAP | Backend for DV | Good Visualization requires a customization!
We also suggest taking a look at other comparisons, such as Altheon’s comparison of Qlikview, Tableau, and Omniscope (please ping me if you find a good comparison of DV tools).
Notes:
·         TIBCO is the vendor of the balanced Spotfire Platform: Spotfire Professional, Server, Enterprise and Web Players, S-Plus, DecisionSite, Metrics, Data Automation and Miner technologies, combining an in-memory database with interactive Data Visualization, advanced analytics (S+), data mining and multi-filtering functionality. Spotfire 3.2.1 provides the best web client and supports the iPad.
·         Qliktech is the vendor of the Qlikview DV Platform: Qlikview Developer, Qlikview Server, Publisher and Access Point tools, with a leading in-memory columnar database combined with advanced Visual Drill-Down, Interactive Dashboards and a comprehensive set of client software, running even on smartphones.
·         Microsoft is the vendor of the most comprehensive set of BI and DV components, including SQL Server, Analysis, Reporting and Integration Services (SSAS, SSRS, SSIS), SharePoint, Excel 2010 with the PowerPivot add-in (Excel is the most popular BI tool, regardless) and the VertiPaq engine. I did not include PerformancePoint Server: it was discontinued in April 2009, and I cannot recommend PerformancePoint Services from SharePoint, because I CANNOT recommend SharePoint.
·         A DV expert can be a cost-effective customizer of all of the above, with the ability to customize DV with development components from leading technology vendors like Dundas, DevExpress etc.
·         I did not include Oracle because it does not have its own DV tool; however, Oracle is an OEM partner with Tableau and resells it as part of the OBIEE toolset.
A DV expert can help select the appropriate DV approach for your specific application. Additional services include the following:
·         professional gathering of business requirements, system analysis of workflow and dataflow, and functional specifications;
·         custom software development for Extract, Transform and Load (ETL) processing from large client databases into a superfast in-memory columnar DB and interactive OLAP cubes;
·         design of custom Data Visualization and dashboards deployed over the Internet through smart client and RIA technologies.
Custom DV applications enable the user to perform visual drill-down, fast searches for outliers in large datasets, easy root-cause and business-logic analysis, interactive data search, and visual and predictive analytics. Some factors and/or features are very important, and some I did not mention because they are covered on other pages and posts of this blog (e.g., see the “in-memory” page and the DV Tools pages). I perceive that the DV area has 4 super-leaders: Qlikview 10, Spotfire 3.3, Tableau 6.1 and PowerPivot, but for completeness (because the real function is to be a “Data Visualizer”), I wish to add Visokio’s Omniscope. I do not include vendors who are 1-2 generations behind: SAP, SAS, IBM, Oracle, Microstrategy, and I could add a dozen more mis-leaders to this list. Many vendors are working on some in-memory technology. Tableau 6.X now has a 64-bit in-memory data engine.
Additional factors to consider when comparing DV tools (the table above overlaps with the list below):
·         memory optimization [Qlikview is the leader in in-memory columnar database technology];
·         load time [I tested all the products above, and PowerPivot is the fastest];
·         memory swapping [Spotfire is the only one that can use a disk as virtual memory, while Qlikview is limited by RAM];
·         incremental updates [Qlikview is probably the best in this area];
·         thin clients [Spotfire has the best thin client, especially with the recent release of Spotfire 3.2 and Spotfire Silver];
·         thick clients [Qlikview has the best THICK client, Tableau has the free Desktop Reader, Visokio has the Java-based Omniscope Desktop Viewer];
·         access by 3rd-party tools [PowerPivot's integration with Excel 2010, SQL Server 2008 R2 Analysis Services and SharePoint 2010 is a big attraction];
·         interface with SSAS cubes [PowerPivot has it, Tableau has it, Omniscope will have it very soon, Qlikview and Spotfire do not have it];
·         GUI [a 3-way tie; it depends heavily on personal preferences, but Qlikview is easier to use than the others];
·         advanced analytics [Spotfire 3.2 is the leader here with its integration with S-PLUS and support for IronPython and other add-ons];

·         the productivity of developers involved with the tools mentioned above [in my experience, Qlikview is the much more productive tool in this regard].

Thursday, October 2, 2014

SSAS Cube Performance Improvement: Best Methods – Part 2

In part 1 we looked at a method to quantify the work that gets done by SQL Server Analysis Services and found that the OLE DB provider with a network packet size of 32767 brings the best throughput while processing a single partition, maxing out the contribution per single CPU.
In this second part we will focus on how to leverage 10 cores or more (even 64!) and benefit from every one of the CPUs available in your server while processing multiple partitions in parallel. I hope the tips and approach will help you test and determine the maximum processing capacity of the cubes on your SSAS server and process them as fast as possible!
Quick Wins
If you have more than 10 cores in your SSAS server, the first thing you’ll notice when you start processing multiple partitions in parallel is that the Windows performance counter ‘% Processor time’ for the msmdsrv process is steady at 1000%, which means 10 full CPUs are 100% busy processing. Also, the ‘Rows read/sec’ counter will top out and produce a steady flat line similar to the one below at 2 million Rows read/sec (== 200K rows read/sec per CPU):

1) # Connections

By default each cube will open up a maximum of 10 connections to a data source. This means that up to 10 partitions are processed at the same time. See the picture below: 10x status ‘In Progress’ for the AdventureWorks cube, which has been slightly expanded to span multiple years.

In our search for maximum processing performance we will increase this number to reflect the number of cores by modifying the Data Source properties: change the ‘Maximum number of connections’ from 10 to the number of cores in your server. On our test server we have 32 physical cores and 32 hyper-threaded = 64 logical cores available.

Just by changing the number of connections to 64, processing 64 partitions in parallel results in an average throughput of over 5 million Rows read/sec, utilizing 40 cores (yellow line).

This seems like a great number already, but that is effectively (5 million rows / 40 cores =) 125K rows per core, and we still see a flat line when looking at the effective throughput; this tells us that we are hitting the next bottleneck. Also, the CPU usage as visible in Windows Task Manager isn’t at its full capacity yet!

Time to fire up another Xperf or Kernrate session to dig a bit deeper and zoom in on the CPU ticks that are spent by the data provider:
Command syntax:
Kernrate -s 60 -w -v 0 -i 80000 -z sqlncli11 -z msmdsrv -z oleaut32 -z sqloledb -nv msmdsrv.exe -a -x -j c:\websymbols > SSAS_trace.txt

This shows an almost identical result to the profiling of a single partition in part 1.
By profiling around a bit and checking both the OLE DB and some SQL Native Client sessions, you will find, surprisingly, that most of the CPU ticks are spent on… data type conversions.

The other steps make sense and include lots of data validation; for example, while it fetches new rows it checks for invalid characters etc. before the data gets pushed into an AS buffer. But the number 1 CPU consumer, CDataSource::DataConvert, is an area that we can optimize!
(To download a local copy of the symbol files yourself, just install the Windows Debugger by searching the net for ‘windbg download’ and run the symchk.exe utility to download all symbols that belong to all resident processes into the folder c:\websymbols\:
C:\Program Files (x86)\Windows Kits\8.1\Debuggers\x64\symchk.exe /r /ip * /s SRV*c:\websymbols\*http://msdl.microsoft.com/download/symbols )
2) Eliminate Data type conversions
This is an important topic: if the data types between your data source and the cube don’t match, the transport driver needs a lot of time to do the conversions, and this affects the overall processing capacity. Basically, Analysis Services has to wait for the conversion to complete before it can process the new incoming data, and this should be avoided.
Let’s go over an AdventureWorksDW2012 Internet_sales partition as example:

By looking at the table or query that is the source for the partition, we can determine it uses a range from the FactInternetSales table. But what data types are defined under the hood?
To get all the data type information, just right-click the SSAS database name and script the entire DB into a new Query Editor window.
Search through the XML for the query source name that is used for the partition, for example: msprop:DbTableName="FactInternetSales"


These should match the SQL Server data types; check especially for unsignedByte, short, String lengths, and Doubles (slow) vs. floats (fast). (We do have to warn you about the difference between an exact data type like Decimal and an approximate one like Float or Double here.)
A link to a list of how to map the data types is available here.
How can we quickly check and align the data types? Going over them all manually one by one isn’t fun, as you probably just found out. Searching the net, I ran into a really nice and useful utility written by John Tunnicliffe called ‘CheckCubeDataTypes’ that does the job for us; it compares a cube’s data source view with the data types/sizes of the corresponding dimensional attributes. (Kudos, John!) But unfortunately, even after making sure the data types are aligned, running Kernrate again shows that DataConvert is still the number one consumer of CPU ticks on the SSAS side.
3) Optimize the data types at the source
To prove that this conversion is our next bottleneck, we can also create a view on the database source side and explicitly cast all fields to make sure they match the cube definition. (This is also an option for environments where you don’t own the cube source and databases.)
As a best practice, CAST all columns even if you think the data types are already right, and exclude from the view the columns that are not used for processing the measure group. (For example, to process the FactInternetSales measure group from the AdventureWorksDW2012 cube we don’t need [CarrierTrackingNumber], [SalesOrderNumber], [PromotionKey] and [CustomerPONumber].) Every bit that we don’t have to push over the wire and process from the database source is a pure win. Just create a view named ‘Speed’ like the one below to give it a try.
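A minimal sketch of such a ‘Speed’ view against AdventureWorksDW2012 (the column list is abbreviated here; align the CASTs with your own cube definition):

-- Explicitly cast every column to the exact type the cube expects,
-- and leave out the columns the measure group doesn't process
CREATE VIEW dbo.Speed
AS
SELECT
    CAST(ProductKey    AS INT)      AS ProductKey,
    CAST(OrderDateKey  AS INT)      AS OrderDateKey,
    CAST(CustomerKey   AS INT)      AS CustomerKey,
    CAST(OrderQuantity AS SMALLINT) AS OrderQuantity,
    CAST(UnitPrice     AS MONEY)    AS UnitPrice,
    CAST(SalesAmount   AS MONEY)    AS SalesAmount
FROM dbo.FactInternetSales;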

(Note: always be careful when changing data types!
For example, in the picture above, using the ‘Money’ data type is okay because it is used for FactInternetSales, but Money is not a replacement for all Decimals (it only keeps 4 digits behind the decimal point and doesn’t provide the same range), so be careful when casting data types and double-check that you don’t lose any data!)
Result: by using the data-type-optimized Speed view as the source, the total throughput increased from 5 to 6.6-6.8 million rows read/sec at 4600% CPU usage (== 147K rows/CPU). That’s 36% faster. We’re getting there!
The picture also shows that one of the physical CPU sockets (look at the 2nd row of 16 cores in NUMA node 1) is completely maxed out:

4) Create a ‘Static Speed’ View for testing
If you would like to take the database performance out of the equation, something I found useful is to create a static view in the database with all the values pre-populated; this way there will still be a few logical reads from the database but significantly less physical I/O.
Approach:
1) Copy the original query from the cube:

2) Request just the SELECT TOP (1):

3) Create a Static view:
Add these values to a view named ‘Static_Speed’ and cast them all:
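A sketch of what the static view could look like (the constant values below are illustrative, copied from a SELECT TOP (1) against FactInternetSales):

-- Every column is a pre-populated constant, so processing reads
-- almost nothing from the database pages
CREATE VIEW dbo.Static_Speed
AS
SELECT
    CAST(310      AS INT)      AS ProductKey,
    CAST(20050701 AS INT)      AS OrderDateKey,
    CAST(21768    AS INT)      AS CustomerKey,
    CAST(1        AS SMALLINT) AS OrderQuantity,
    CAST(3578.27  AS MONEY)    AS UnitPrice,
    CAST(3578.27  AS MONEY)    AS SalesAmount;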

4) Create an additional test partition that queries the new Static_Speed view

5) Copy this test partition multiple times
Create at least as many test partitions as the number of cores in your server, or more:
Script the test partition as created in step 4):

Create multiple new partitions from it by just changing the partition ID and Name; these will run the same query using just the static view. This way you can test the impact of your modifications to the view quickly and at scale!
6) Process the test partitions
Process all these newly created test partitions, which will only query the static view, and select at least as many of them as the number of CPUs available in your SSAS server.
Determine the maximum processing capacity of your cube server by monitoring ‘Rows Read/sec’!

Wrap Up
If you have a spare moment to check out the workload performance counters of your most demanding cube servers, you may find that there is room for improvement. If you see flat lines during cube processing, I hope your eyes will now start to blink; by increasing the number of connections or checking that you don’t spend your CPU cycles on data type conversions, you may get a similar improvement of over 3x, as shown in the example above. And if the Task Manager CPU utilization shows that just one of the NUMA nodes is completely maxed out, it might be time to start looking into some of the msmdsrv.ini file settings…