The Microsoft BI front-end tools provide a fantastic mechanism to create reports, dashboards and free-form analytics. So, how do we use these tools when the underlying data is provided by SAP, either R/3 or BW?
Well, I’m in a session at the Microsoft BI Conference this year with Scott Cameron, and he’s discussing some scenarios for getting data into PowerPivot. These techniques are quite generic, and allow data to be exposed and consumed by a number of front ends, namely…
- Excel 2007 / 2010
- Reporting Services 2008 / 2008 R2
- Data dumps
- Third-party providers:
  - Theobald Xtract PPV
  - ERP-Link iNet.BI
  - Simplement Data Liberator
Interestingly, only two of the techniques involve Microsoft-provided drivers; the others are either custom data exports and imports, or third-party providers. Let’s discuss them a little further.
Scenario I: SAP BW → Excel 2010
Excel 2007 / 2010 natively supports browsing and reporting on SAP BW InfoCubes. There’s no need for additional add-ins or third-party software; the only requirement is the installation and configuration of the SAP GUI front end.
This SAP GUI front end provides two things:
- The SAP BW OLE DB Provider
  - The limitation here is that it only supports 32-bit Excel.
- An SAP Login (with a Description and Client Number)
Once the driver has been installed and we’ve used SAP GUI to create an SAP Login, we can then use the Get External Data wizard in Excel to connect to SAP BW, in a very similar manner to connecting to Analysis Services cubes. Some differences though:
- When connecting to external data, select Other Data Sources
- Select the SAP BW OLE DB Provider
  - Again, if it’s not there, you’re probably running 64-bit Excel :(
- Data Source = Description of the SAP Login created in SAP GUI
- User Name and Password are the SAP credentials
- Under the All tab, add an Extended Property for the Client Number, SCF_CLIENT = <Client Number>
And you’re done!
Check http://bit.ly/cLrN7t for further details.
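For reference, the pieces above combine into an OLE DB connection string along these lines. This is an illustrative sketch only: the provider name and all of the values shown are assumptions, and the wizard builds the real string for you.

```text
Provider=MDrmSap;                (the SAP BW OLE DB provider; the ProgID may differ by version)
Data Source=MySapLogin;          (the Description of the SAP Login created in SAP GUI)
User ID=BWUSER;Password=secret;  (the SAP credentials)
SCF_CLIENT=800;                  (the Extended Property holding the client number)
```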
I like this approach personally because I am quite a fan of Excel to analyse data. It also allows re-use of the existing BW cubes, and no data needs to be physically moved between systems.
Scenario II: SAP BW → Reporting Services
SQL Server 2008 and SQL Server 2008 R2 allow reports to be written directly against SAP BW data sources.
These reports use the Microsoft .NET Data Provider 1.1 for SAP NetWeaver Business Intelligence. This driver is installed as part of the Client Tools install on the SQL Server install media.
To enable these queries on the BW server, you need to enable the query cube for external access. (This is done through the SAP BEx Query Designer.)
The connection string is the XML/A connection URL. This URL allows for the passing of parameters (as the dataset itself doesn’t support parameters). Once connected, the report design experience is again similar to reporting off of Analysis Services.
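As a sketch, the XML/A URL follows the standard SAP BW pattern below. The host name, port, and the example query parameter are placeholders for illustration, not values from this environment:

```text
http://<bw-server>:<http-port>/sap/bw/xml/soap/xmla
e.g. http://sapbw01:8000/sap/bw/xml/soap/xmla?sap-client=800
```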
Again, a neat method to create reports, as tools like Report Builder 3.0 do allow more people to create and consume these reports. And the data is not physically moved into a staging environment.
Check http://bit.ly/cWTfaE for further details.
Scenario III: SAP Open Hub → CSV File
SAP provides an Open Hub license, which allows data to be exported. These exports are natively written to CSV files, although it is also possible to export directly to SQL Server (which would be preferred!).
Clearly, this is the most manual method to get to the data. The biggest negatives here are:
- the security is lost in translation
- the Open Hub license is not a trivial purchase from my understanding
Scenario IV: Third Party Providers
Theobald Xtract PPV
- Very neat software. It basically creates a data feed that PowerPivot can consume to directly import data (without having to use the “traditional” Reporting Services middle tier).
- Check http://bit.ly/9oO59D for further details
ERP-Link iNet.BI
- Provides a host of tools that use a .Net Data Provider to extract data from SAP.
- An interesting one is the Data Source View Creator, as it creates an Analysis Services cube (which I’m a massive fan of).
- Check http://bit.ly/bWZNEx for further details
Simplement Data Liberator
- A replicator to move data from SAP to SQL Server in real time
- (Most SAP administrators are nervous to do this, as you need to enable replication in SAP)
- Neat solution to create a fully-fledged SQL Server data warehouse
- Check http://bit.ly/b5D8Cn for further details
So, I’m in a session at the MS BI Conference 2010 with Dave Wickert, and he’s doing a great job of walking us through things to look out for when installing the PowerPivot client.
This is an Add-In that runs in the context of Microsoft Excel 2010. Any version of Excel that supports add-ins will support the PowerPivot add-in (which is almost any version!)
The download is available from http://www.powerpivot.com/download.aspx
Make sure you download and install the version that corresponds to the version of Excel (not your operating system): 32-bit Excel = 32-bit add-in, regardless of a 64-bit OS. And yes, you want to use the 64-bit version if at all possible, as it allows you to create and manage larger datasets; any size that fits in RAM will work.
However, please note that only 4 GB files can be saved to disk, and only 2 GB files can be uploaded to SharePoint. Yes, only :)
Preconditions for the Add-in to work include:
VSTO installed and running
- The Complete install of Office manages this by default
- If installing only Excel, be sure to select “Shared Components” as well to include VSTO
The PowerPivot add-in is enabled (once installed)
- Check under File → Options → Add-Ins
- Ensure that the add-in is not in the Disabled state
If the add-in is still missing even though both of the above are fine, it is possible to expose the VSTO exceptions on Excel start-up:
- Create an environment variable, VSTO_SuppressDisplayAlerts, and set to 0
- Check www.powerpivotgeek.com for full details
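As a sketch, setting that variable from a Windows command prompt could look like the command below (restart Excel afterwards so it picks up the change):

```shell
:: Persist the variable in the user environment. Setting it to 0 stops VSTO
:: from suppressing the load-failure alerts, so the real error message is
:: displayed instead of the add-in silently disappearing.
setx VSTO_SuppressDisplayAlerts 0
```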
Debugging options for the client add-in can be set through PowerPivot → Settings. It’s possible to log events off to a log file, which can be opened and analysed in SQL Server Profiler. It is also possible to create a snapshot of the current memory usage, by clicking “Snapshot”.
PowerPivot provides a feature in SharePoint Central Admin, where administrators can see the usage history of published PowerPivot workbooks. This PowerPivot Management Dashboard is essentially a custom web part web page that shows a number of views on the current state of the system, including:
Infrastructure – Server Health
- Query Response Times
- Average CPU Usage
- Average Memory Usage
Workbook Activity – Chart and List
- This is the killer time-based chart showing an animated history of workbook activity
Data Refresh – Activity and Failures
- Essentially a document library that has been exposed through a web part
Elements that cannot currently be updated:
- Overall layout. The dashboard itself is not written in something like PerformancePoint, so the layout and content can’t be changed.
- The Silverlight control. Sigh. There are no plans to produce a separate reusable bubble chart control. Bit of a pity, as it would be great to embed in other dashboards. Maybe next release :)
Elements that we can update:
Any of the Excel-based workbooks, including the Workbook Activity workbook
- These workbooks are themselves PowerPivot workbooks, which feed data from the PowerPivot shared services database (the Usage schema tables).
The Reports document library (bottom-right section of the dashboard)
- This is simply a reference to a document library in All Site Content → PowerPivot Management → Instance ID → 1033
So I’m in New Orleans, Louisiana; attending the Microsoft Business Intelligence conference 2010. It starts officially tomorrow.
This is night #2 that we’ve been in this party city, and I figured that it’s about time I put down some of my initial thoughts about this place.
Firstly, this is definitely the city you want to visit with some mates. There are simply too many parties going on in the evenings, and you really want to be there (and let’s be honest, no one likes to party alone). It’s also a good thing to have a wingman to save you when offered things on Bourbon Street that you would rather avoid! (Thanks Mark, you legend you!)
Secondly, it’s a city where history and the present are joined at the hip. It’s great to see how the French Quarter has been preserved while the rest of the city has grown into palm-lined streets and mega hotels. Oh yeah, loving the palm-lined streets, Canal Street being a great example (and yes, we walked most of it in a desperate attempt to find a megamall for Mark).
Talking mega malls takes me to thought number three… New Orleans is not the city you come to for the mega Best Buy / Walmart type stores (unless you have a car of course, which we don’t for this trip). New Orleans itself is really driven by the French Quarter and the river attractions. These involve more artsy-type shops, where you can get a great massage, tarot card reading, or a nice fridge magnet; but not an iPod… sadly.
So, the first two days have been great so far. We’ve done all sorts of things including:
- Eating at the Crescent City Brew House. Hands down the best rib place on planet earth.
- Chilling up the Mississippi (did I spell that right??) on the Steamboat Natchez. Seriously like going back in time. Old school thick carpets in the lunch hall, honky jazz band, big steam wheel at the back. Definitely recommended!
- Eating waffles for breakfast. Who says they only need to be pudding!
- Watching 12 sports at once. Yip, one of the sports bars here has 12 massive screens going at once. A little ADD, but at least we got to see the French Open final. (Nice one, Nadal!)
- Drinking a Hurricane at Pat O’Briens. Legendary.
- Oh, and registering at the BI Conference, which is why we’re here :)
Analysis Services cubes provide a granular mechanism to maintain security rights, right from cube level through to individual dimensional elements. This security is maintained through the implementation of cube roles, where individual Windows Accounts or Groups can be assigned various access rights. These access rights are traditionally managed through SQL Management Studio, as and when required. See Technet for a full description on roles and how to maintain them.
There are cases however, where this maintenance can become tedious, especially in cases where a large number of individual users require granular rights against dimensions with many hundreds or thousands of members.
In traditional SQL-based systems, these types of scenarios are typically handled through SQL Server mapping tables. These tables would map individual Windows logins against the corresponding entities that each user has rights to see. It is possible to re-use these types of mapping tables in the context of Analysis Services to manage security access. (without writing extensive XMLA or C#-type code to manage roles!!)
Let’s take a version of the Adventure Works cube, and assume that we need to manage security rights to the Product Sub Category level in the Product dimension. The process would be as follows:
- Implement and Populate Mapping Table
As described above, let’s create a mapping table, which maps an individual user against the dimension key that he/she has rights to access; something like:
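Since the original illustration of the table didn’t survive, here is a minimal sketch of the idea using SQLite. The table and column names (FactSecurity, UserName, ProductSubcategoryKey) are assumptions for illustration; in practice the table lives in the relational source behind the cube’s Data Source View.

```python
# Sketch of a user-to-subcategory security mapping table, shown with SQLite.
# All names here are hypothetical; adapt them to your own relational source.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE FactSecurity (
        UserName              TEXT    NOT NULL,  -- Windows login, domain\\user
        ProductSubcategoryKey INTEGER NOT NULL   -- key into ProductSubcategory
    )
""")
conn.executemany(
    "INSERT INTO FactSecurity VALUES (?, ?)",
    [
        ("bi2010\\bob", 1),    # Bob may see subcategory key 1
        ("bi2010\\bob", 2),    # ...and subcategory key 2
        ("bi2010\\alice", 3),  # Alice sees a different slice
    ],
)

# The role's Allowed Member Set effectively asks: which subcategory keys
# exist in the mapping table for the connected user?
rows = conn.execute(
    "SELECT ProductSubcategoryKey FROM FactSecurity"
    " WHERE UserName = ? ORDER BY ProductSubcategoryKey",
    ("bi2010\\bob",),
).fetchall()
print([k for (k,) in rows])  # → [1, 2]
```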
- Include table into Cube Project
This table can then be incorporated into the cube Data Source View, linking to the ProductSubcategory table.
- User Dimension
From this table, we can create a User dimension with one level, namely User. So, the user represented in the above table would have a key of [User].[User].[bi2010\bob]
- Link Dimension to Cube
Firstly, let’s add our mapping table into the cube as an additional measure group, with a basic measure.
We can then add our new User dimension to the cube, and ensure that the relationships have been setup correctly.
- Create cube role
The final step in the process is to create a role in the cube which will use our underlying security mappings to limit user roles.
This role relies on some predefined MDX functionality.
- UserName. This function in MDX returns the NT name of the user that is connected to the cube, in the form domain\username, e.g. bi2010\bob
- StrToMember. This function takes string input, which can be a concatenated expression, and returns the underlying cube member with this name or key.
- Exists. This lesser-used function limits a given set, returning only those members that have actual values in a given measure group.
- Again, see Technet for a full description of Exists
We create a role, with the following properties:
- General → Read Definition
- Membership → NT AUTHORITY\Authenticated Users
- Cubes → Read access to the required cubes, in our case Sales.
- Dimensional Data → on the attribute against which we’re creating security, in our case Product Subcategory, add the following MDX (under the Advanced tab, Allowed Member Set):
  - EXISTS([Product].[Subcategory].Members, STRTOMEMBER("[User].[User].[" + UserName + "]"), "Fact Security")
- Enable Visual Totals. This will depend on your security policy: if checked, totals will only reflect the restricted set; if unchecked, the All levels will display totals as if security were not applied.
All done! Now we’re ready to deploy, process and test.
The table below outlines the different results when querying Order Qty by Subcategory, as an Administrator and as Bob.
Extending the Analysis Services security model to exploit SQL managed mappings is a tremendous way to leverage existing security mappings, as well as manage those scenarios which are simply too cumbersome to manage by hand.
Well, I’m going to write this particular blog post at the risk of looking incredibly silly, however it may help someone…
I’m in the process of creating some internal reports using Excel 2010 Beta on top of an Analysis Services back end. This requires the creation of a number of Pivot Tables, and some super slinky Slicers, and the report is looking really nice.
One small glitch in the process is that whenever I change Slicer values, the Pivot Table columns automatically resize, leaving me with a number of cells showing the inevitable “###” values. This requires me to auto-expand the column to see the values again. Not an ideal solution to present to my Financial Director :)
To overcome this small issue, simply go to the Pivot Table options, and unselect the “Autofit column widths on update” check box. Set the column widths to what you need, and you’re in the game.
When installing and configuring SharePoint 2010 Beta yesterday, I ran into a small bug. No big surprise, seeing as it is Beta software after all :)
I installed the SharePoint software successfully on a single server, on Windows Server 2008 R2 and SQL Server 2008 R2. However, when running the SharePoint 2010 Products Configuration Wizard, an error occurred at step 8.
The error reads as follows:
One or more configuration settings failed. Completed configuration settings will not be rolled back. Resolve the problem and run this configuration wizard again. The following contains detailed information about the failure:
Failed to create sample data.
An exception of type Microsoft.Office.Server.UserProfiles.UserProfileException was thrown. Additional exception information: Unrecognized attribute ‘allowInsecureTransport’. Note that attribute names are case-sensitive. (C:\Program Files\Microsoft Shared\Web Server Extensions\14\WebClients\Profile\client.config line 56)
The root cause seems to be a conflict with WCF that requires a hotfix, and Ian Campbell has written a nice blog entry about the error here.
However, I was unable to download the hotfix yesterday, so to resolve the problem I simply removed the offending XML lines from the config file mentioned above…
I then reran the SharePoint 2010 Products Configuration Wizard, and it configured successfully. Probably not the recommended route, as I’m sure something will look for this tag in future, but it seems to be a stable short-term fix so far.