SAP BW Interview Questions



Dear readers, these SAP BW Interview Questions have been designed specially to get you acquainted with the nature of questions you may encounter during your interview for the subject of SAP BW. As per my experience, good interviewers hardly plan to ask any particular question during your interview; normally questions start with some basic concept of the subject and later continue based on further discussion and what you answer.

The BI platform provides infrastructure and services such as −

  • OLAP Processor
  • Metadata Repository
  • Process Designer and other functions

Business Explorer (BEx) is a reporting and analysis tool that supports query, analysis, and reporting functions in BI. Using BEx, you can analyze historical and current data to different degrees of analysis.

BI supports the following types of source systems −
  • SAP systems (SAP Applications/SAP ECC)
  • Relational Database (Oracle, SQL Server, etc.)
  • Flat File (Excel, Notepad)
  • Multidimensional Source systems (Universe using UDI connector)
  • Web Services that transfer data to BI by means of push

In BW 3.5, you can load data into the Persistent Staging Area (PSA) and also into data targets from the source system, but in SAP BI 7.0 and later, data loads should be restricted to the PSA first.

An InfoPackage is used to specify how and when to load data into the BI system from different data sources. An InfoPackage contains all the information on how data is loaded from the source system into a DataSource or the PSA. It also contains the conditions for requesting data from a source system.

Note that with an InfoPackage in BW 3.5, you can load data into the PSA and also into data targets from the source system, but in SAP BI 7.0 and later, InfoPackage loads should be restricted to the PSA.

In the extended star schema, fact tables are connected to dimension tables, each dimension table is connected to SID tables, and the SID tables are connected to master data tables. The fact and dimension tables are inside the cube, whereas the SID tables and master data tables are outside the cube. When you load transactional data into an InfoCube, DIM IDs are generated based on the SIDs, and these DIM IDs are then used in the fact table.

In the extended star schema, one fact table can connect to 16 dimension tables, and each dimension table can be assigned a maximum of 248 SID tables (characteristics). Each characteristic can have master data tables such as ATTR, TEXT, etc.
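As an illustration, the chain of tables for a hypothetical sales cube with a customer characteristic might look like this (the table names are invented for the example, following the usual /BIC/ and /BI0/ naming pattern):

  Fact table    /BIC/FSALES    − DIMID_CUST, DIMID_TIME, ..., REVENUE
  Dim table     /BIC/DSALES1   − maps DIMID_CUST to SID_0CUSTOMER
  SID table     /BI0/SCUSTOMER − maps SID_0CUSTOMER to the CUSTOMER key
  Master data   /BI0/PCUSTOMER − CUSTOMER key plus attributes (city, region, ...)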

In the classic star schema, each dimension is joined to one single fact table. Each dimension is represented by a single dimension table, and this table is not further normalized.

A dimension table contains a set of attributes that are used to analyze the data.

InfoObjects are the smallest units in SAP BI and are used in InfoProviders, DSOs, MultiProviders, etc. Each InfoProvider contains multiple InfoObjects.

InfoObjects are used in reports to analyze the data stored and to provide information to decision makers.

InfoObjects can be categorized as below −

  • Characteristics like Customer, Product, etc.
  • Units like currency, unit of measure, etc.
  • Key Figures like Total Revenue, Profit, etc.
  • Time characteristics like Year, quarter, etc.

InfoAreas in SAP BI are used to group similar types of objects together. InfoAreas are used to manage InfoCubes and InfoObjects. Each InfoObject resides in an InfoArea; you can think of an InfoArea as a folder that is used to hold similar files together.

VirtualProviders are used to access data in a source system directly from BI, without extraction. VirtualProviders can be defined as InfoProviders in which the transactional data is not stored in the object itself. VirtualProviders allow only read access to the data.

  • VirtualProviders based on DTP
  • VirtualProviders with function modules
  • VirtualProviders based on BAPIs

VirtualProviders based on DTP

This type of VirtualProvider is based on a DataSource or an InfoProvider and takes on the characteristics and key figures of the source. The same extractors are used to select data in the source system as when you replicate data into the BI system.

When to use VirtualProviders based on DTP −

  • Only a small amount of data is used.
  • You need to access up-to-date data from an SAP source system.
  • Only a few users execute queries simultaneously on the database.

Virtual Provider with Function Module

This VirtualProvider is used to display data from a non-BI data source in BI without copying the data into the BI structures. The data can be local or remote. It is used primarily for SEM applications.

The transformation process is used to perform data consolidation, cleansing, and integration. When data is loaded from one BI object into another BI object, a transformation is applied to the data. A transformation is used to convert a field of the source into the target object format.

Transformation rules −

Transformation rules are used to map source fields to target fields. Different rule types can be used for a transformation; one of them is a routine written in ABAP, sketched below.
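As a minimal sketch of a rule of type "routine" − the enclosing method skeleton is generated by the system, and the field name MATNR is an assumption for the example − the routine body could convert a source field into the internal format of the target InfoObject:

  * Routine body only - the surrounding method skeleton is generated by BW.
  * Example: convert an external material number to the internal ALPHA format.
  CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
    EXPORTING
      input  = SOURCE_FIELDS-matnr   " source field of the rule (assumed name)
    IMPORTING
      output = RESULT.               " RESULT is written to the target field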

Real-time data acquisition is based on moving data into the Business Warehouse in real time. Data is sent to the delta queue or PSA table in real time.

Real time data acquisition can be achieved in two scenarios −

  • By using an InfoPackage for real-time data acquisition using the Service API.
  • By using a Web Service to load data into the Persistent Staging Area (PSA) and then using a real-time DTP to move the data to a DSO.

Real time Data Acquisition Background Process −

To transfer data to the InfoPackage and the data transfer process (DTP) at regular intervals, you can use a background process known as a daemon.

The daemon process gets all the information from the InfoPackage and DTP about which data is to be transferred and which PSA and DataStore objects are to be loaded with data.

InfoObjects are created in an InfoObject catalog. It is possible for an InfoObject to be assigned to more than one InfoObject catalog.

A DSO is a storage place for cleansed and consolidated transaction or master data at the lowest granularity level, and this data can be analyzed using a BEx query.

A DataStore object contains key figures and characteristic fields; data in a DSO can be updated using a delta update, from other DataStore objects, or from master data. DataStore objects are commonly stored in two-dimensional transparent database tables.

A standard DSO consists of three tables −

Activation Queue −

This is used to store data before it is activated. The key contains the request ID, package ID, and record number. Once activation is done, the request is deleted from the activation queue.

Active Data Table −

This table is used to store the current active data, and it contains the semantic key defined for data modeling.

Change Log −

When you activate the object, changes to the active data are stored in the change log. The change log is a PSA table and is maintained in the Administration Workbench under the PSA tree.
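As a small worked example of the overwrite mechanism (values invented for illustration): suppose a record with key ORDER = 4711 is first loaded with amount 100 and activated, then loaded again with amount 120. After the second activation, the active data table holds 4711/120, while the change log holds a before-image of -100 and an after-image of +120, so that delta-dependent targets receive a net change of +20.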

A DataStore object for direct update allows you to access data for reporting and analysis immediately after it is loaded. It differs from standard DSOs in the way it processes the data. Data is stored in the same format in which it was loaded into the DataStore object for direct update by the application.

It has one table for active data, and no change log area exists. Data is retrieved from external systems using APIs.

The following APIs exist (a call sketch follows this list) −

  • RSDRI_ODSO_INSERT: This is used to insert new data.

  • RSDRI_ODSO_INSERT_RFC: Similar to RSDRI_ODSO_INSERT and can be called up remotely.

  • RSDRI_ODSO_MODIFY: This is used to insert data having new keys. For data with keys already in the system, the data is changed.

  • RSDRI_ODSO_MODIFY_RFC: Similar to RSDRI_ODSO_MODIFY and can be called up remotely.

  • RSDRI_ODSO_UPDATE: This API is used to update existing data.

  • RSDRI_ODSO_UPDATE_RFC: This is similar to RSDRI_ODSO_UPDATE and can be called up remotely.

  • RSDRI_ODSO_DELETE_RFC: This API is used to delete the data.
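A minimal, hedged sketch of calling one of these APIs from ABAP: the function module name is from the list above, but the DSO name, table type, and parameter names are assumptions for illustration − verify the actual interface in transaction SE37 before use.

  * Hypothetical DSO 'ZSALES' whose active table is /BIC/AZSALES00.
  DATA lt_data TYPE STANDARD TABLE OF /bic/azsales00.

  * ... fill lt_data with records keyed by the DSO's semantic key ...

  CALL FUNCTION 'RSDRI_ODSO_INSERT'
    EXPORTING
      i_odsobject = 'ZSALES'    " technical name of the DSO (hypothetical)
    TABLES
      i_t_data    = lt_data.    " parameter names are assumptions - check SE37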

Because the structure of this DSO contains only a table of active data and no change log, it does not allow a delta update to InfoProviders.

In a write-optimized DSO, the data that is loaded is available immediately for further processing.

Write optimized DSO provides a temporary storage area for large sets of data if you are executing complex transformations for this data before it is written to the DataStore object. The data can then be updated to further InfoProviders. You only have to create the complex transformations once for all data.

Write-optimized DataStore objects are used as the EDW layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.

It contains only a table of active data, and there is no need to activate the data as is required with a standard DSO. This allows you to process the data more quickly.

InfoSets are defined as a special type of InfoProvider in which the data sources contain a join rule on DataStore objects, standard InfoCubes, or InfoObjects with master data characteristics. InfoSets are used to join data, and that data is then used in the BI system.

Temporal joins are used to map a period of time. At the time of reporting, other InfoProviders handle time-dependent master data in such a way that the record that is valid for a pre-defined unique key date is used each time. A temporal join is one that contains at least one time-dependent characteristic or a pseudo time-dependent InfoProvider.

InfoSets are used to analyze data in multiple InfoProviders by combining master data characteristics, DataStore objects, and InfoCubes.

You can use temporal join with InfoSet to specify a particular point of time when you want to evaluate the data.
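For example (values invented for illustration): if an employee's cost-center assignment is time-dependent − cost center CC1 valid from 01.01.2023 to 30.06.2023 and CC2 valid from 01.07.2023 − a temporal join evaluated with key date 15.05.2023 returns CC1, while the same query with key date 15.08.2023 returns CC2.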

You can use Business Explorer (BEx) reporting on DSOs without enabling the BEx indicator.

An InfoSet supports the following join types −

  • Inner Join
  • Left Outer Join
  • Temporal Join
  • Self Join

An InfoCube is defined as a multidimensional dataset which is used for analysis in a BEx query. An InfoCube consists of a set of relational tables which are logically joined to implement the star schema. A fact table in the star schema is joined with multiple dimension tables.

You can add data from one or more InfoSources or InfoProviders to an InfoCube. InfoCubes are available as InfoProviders for analysis and reporting purposes.

An InfoCube is used to store the data physically. It consists of a number of InfoObjects that are filled with data from staging. It has the structure of a star schema.

In SAP BI, an InfoCube uses the extended star schema described above.

An InfoCube consists of a fact table which is surrounded by up to 16 dimension tables, with the master data lying outside the cube.

Real-time InfoCubes are used to support parallel write access. Real-time InfoCubes are used in connection with the entry of planning data.

You can enter data into real-time InfoCubes in two different ways −

  • Transactions for entering planning data
  • BI staging

A real-time InfoCube can be created using the Real-Time Indicator check box.

Yes; when you want to report on characteristics or master data, you can make them an InfoProvider.

To convert a standard InfoCube to a real-time InfoCube, you have two options −

  • Conversion with loss of transaction data
  • Conversion with retention of transaction data

Yes. Double-click the InfoPackage group, choose the Process Chain Maintenance button, and type in the name and description.

  • H − Hierarchy
  • F − Fixed value
  • Blank

Yes.

MultiProvider

ODS

ODS objects provide granular data, allow overwrite, and store data in transparent tables; they are ideal for drilldown and RRI (report-to-report interface).

InfoCube

An InfoCube uses the star schema; data can only be appended; it is ideal for primary reporting.

MultiProvider

A MultiProvider does not contain any physical data; it allows access to data from different InfoProviders.

Start Routines

The start routine is run for each data package after the data has been written to the PSA and before the transfer rules are executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and store them in global data structures. These structures or tables can be accessed in the other routines. The entire data package in the transfer structure format is used as a parameter for the routine; a sketch follows.
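A minimal sketch of a BW 3.x-style start routine, assuming a hypothetical transfer structure TAB_TRANSTRU with a PLANT field; the actual FORM skeleton is generated by the system, so the names will differ:

  FORM startroutine
    USING    g_s_minfo TYPE rssm_s_minfo      " request metadata
    TABLES   datapak STRUCTURE tab_transtru   " entire data package (hypothetical structure)
    CHANGING abort LIKE sy-subrc.             " set non-zero to cancel the load

    " Preliminary cleanup before the transfer rules run:
    " drop records belonging to a test plant.
    DELETE datapak WHERE plant = 'TEST'.

    abort = 0.
  ENDFORM.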

Update Routines

Update routines are defined at the InfoObject level. An update routine is similar to the start routine but is independent of the DataSource. You can use it to define global data and global checks.

A rollup is used to load new data packages into the InfoCube aggregates. If we have not performed a rollup, the new InfoCube data will not be available while reporting on the aggregate.

During loading, perform the steps in the following order −

First load the master data in the following order: First attributes, then texts, then hierarchies.

Load the master data first and then the transaction data. By doing this, you ensure that the SIDs are created before the transaction data is loaded and not while the transaction data is being loaded.

To optimize performance when loading and deleting data from the InfoCube −

  • Indexes
  • Aggregates
  • Line item and high Cardinality
  • Compression

To achieve good activation performance for DataStore objects, you should note the following points −

Creating SID Values

Generating SID values takes a long time and can be avoided in the following cases −

Do not set the 'Generate SID values' flag if you only use the DataStore object as a data store. If you do set this flag, SIDs are created for all new characteristic values.

If you are using line items (document number or time stamp, for example) as characteristics in the DataStore object, set the flag in characteristic maintenance to show that they are "attribute only".

Partitioning is the method of dividing a table for report optimization. SAP uses fact table partitioning to improve performance. You can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster, as data is stored in the relevant partitions. Table maintenance also becomes easier.
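As a worked example (the exact partition count is an assumption worth verifying on your database platform): partitioning on 0CALMONTH with the range 01.2023 to 12.2024 yields 24 monthly partitions, plus two additional catch-all partitions for values before and after the range, so a query restricted to a single month only has to read one partition.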

An InfoCube is structured as a star schema in which a fact table is surrounded by different dimension tables that are linked with DIM IDs.

An ODS is a flat structure with no star schema concept; it holds granular (detailed-level) data and offers overwrite functionality.

A navigational attribute is used for drilling down in a report.

If separators are used inconsistently in a CSV file, the incorrect separator is read as a character, both fields are merged into one field, and the value may be shortened. Subsequent fields are then no longer in the correct order.
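For illustration (values invented), assume ';' is the configured separator −

  4711;Smith;1200    → read correctly as three fields
  4712,Jones,1300    → read as a single field '4712,Jones,1300'; all later columns shift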

Before you can transfer data from a file source system, the metadata must be available in BI in the form of a DataSource.

Yes.

In the form of PSA tables.

DB Connect is used to define database connections in addition to the default connection; these connections are used to transfer data into the BI system from tables or views.

To connect to an external database, you should have the following knowledge −

  • Tools
  • Source Application knowledge
  • SQL syntax in Database
  • Database functions

Universal Data Connect (UD Connect) allows you to access relational and multidimensional data sources and transfer the data in the form of flat data. Multidimensional data is converted to a flat format when UD Connect is used for data transfer.

UD Connect uses the J2EE connector to allow reporting on SAP and non-SAP data. Different BI Java connectors are available for various drivers and protocols as resource adapters −

  • BI ODBO Connector
  • BI JDBC Connector
  • BI SAP Query Connector
  • XMLA Connector

What is Next?

Further, you can go through the past assignments you have done with the subject and make sure you are able to speak confidently about them. If you are a fresher, the interviewer does not expect you to answer very complex questions; rather, you have to make your basic concepts very strong.

Second, it really doesn't matter much if you could not answer a few questions, but it matters that whatever you answered, you answered with confidence. So just feel confident during your interview. We at tutorialspoint wish you best luck to have a good interviewer and all the very best for your future endeavor. Cheers :-)
