Create stored procedure in Teradata

Here is an example of a simple stored procedure with a single output parameter.

Create procedure Test (out SubID Varchar(100))

Begin

Select top 1 Sub_ID into :SubID from Member;

End ;

Test – the stored procedure name.

Out – marks SubID as an output parameter.

To execute the stored procedure in Teradata:

CALL Test(SubID);

Create MDC Tables in DB2

We can create a Multidimensional Clustering (MDC) table in DB2. MDC tables are mostly used in data warehousing. If a table contains a large amount of data, we can convert it into an MDC table; queries then respond more quickly because the data is physically clustered along the chosen dimensions.



   ORGANIZE BY DIMENSIONS (c1, c3, c4)

In the above example, the clustering is created on three dimensions: c1, c3, and c4.

We can also create clustering on a combination of columns.



   ORGANIZE BY DIMENSIONS (c1, (c3, c4))

In the above example, two clustering dimensions are created: one on c1, and another on c3 and c4 together. The combination of c3 and c4 forms a single composite dimension.
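Putting the clause into a full table definition, a minimal sketch might look like the following (the table and column meanings are hypothetical; c2 is an ordinary measure column, not a dimension):

```sql
-- Hypothetical sales table clustered on three dimensions.
CREATE TABLE sales (
    c1 INTEGER  NOT NULL,  -- e.g. region code
    c2 DECIMAL(10,2),      -- e.g. sale amount (not a dimension)
    c3 SMALLINT,           -- e.g. year
    c4 SMALLINT            -- e.g. month
)
ORGANIZE BY DIMENSIONS (c1, c3, c4);
```

Each distinct combination of dimension values gets its own set of blocks (cells), so low-cardinality columns make the best dimension candidates.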

Operations Behind Alter Table in DB2

We frequently use ALTER TABLE queries in our daily operations. Below is the list of operations running behind a single alter query.

  1. The alter is carried out by the stored procedure SYSPROC.ALTOBJ. This procedure is also called from the DB2 Design Advisor.
  2. Drop all dependent objects of the table.
  3. Rename the table.
  4. Create the table using the new definition.
  5. Recreate all dependent objects of the table.
  6. Transform the existing table data into the new table. If any transformation is required while moving the data (for example, when a data type changes), the transformation logic is applied during the data-movement step.
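As a sketch, SYSPROC.ALTOBJ can be invoked directly; the invocation below follows the documented calling convention (an execution mode, the ALTER statement text, -1 as the alter-id for a new run, and an output parameter marker), but the table name is hypothetical and the exact signature should be verified against your DB2 version:

```sql
-- Run an ALTER that changes a column's data type via ALTOBJ.
-- 'APPLY_CONTINUE_ON_ERROR' applies the change, continuing past
-- non-fatal errors; the ? receives the output of the call.
CALL SYSPROC.ALTOBJ(
    'APPLY_CONTINUE_ON_ERROR',
    'ALTER TABLE myschema.member ALTER COLUMN sub_id SET DATA TYPE VARCHAR(200)',
    -1,
    ?
);
```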

DB2 Table Types

DB2 offers several table types that make table design easier and queries faster. By choosing the right table type we can optimize query performance.

  1. Regular tables
  2. Append mode tables
  3. Multidimensional clustering tables
  4. Range-clustered tables
  5. Partitioned tables

Regular Tables.

            This is the default table type in DB2. Regular tables support indexes.

Append Mode Tables.

            This kind of table is used when the table is involved in a large number of insert transactions per day. In append mode, new rows are always added at the end of the table, so inserts are faster because DB2 does not search for free space in existing pages.
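Append mode is switched on per table with ALTER TABLE; a minimal sketch, assuming a hypothetical table named staging_events:

```sql
-- Enable append mode before a heavy insert workload.
ALTER TABLE staging_events APPEND ON;

-- Turn it back off once the bulk-load window is over.
ALTER TABLE staging_events APPEND OFF;
```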

Multidimensional Clustering tables.

            This kind of table supports multiple clustering dimensions: we can cluster the data along several columns at once. This design is mostly used in data warehousing, where it makes handling large amounts of data easier.

Partitioned tables.

            This table design is used for very large tables: the data is divided by ranges of one or more columns into separate data partitions, which makes rolling data in and out easier, and indexes can be partitioned along with the table.
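A minimal sketch of a range-partitioned table, assuming a hypothetical fact table split by month (syntax as in recent DB2 LUW versions; verify against yours):

```sql
-- One data partition per month of 2024.
CREATE TABLE sales_fact (
    sale_date DATE NOT NULL,
    amount    DECIMAL(10,2)
)
PARTITION BY RANGE (sale_date)
(STARTING FROM ('2024-01-01') ENDING ('2024-12-31') EVERY 1 MONTH);
```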

Show List of Tables in DB2

Here I share the query used to get the list of tables from DB2.

select * from syscat.tables where type = 'T'

Here you can change the type of object listed: 'T' for tables, 'V' for views, and 'S' for summary (materialized query) tables.

The above query returns the full catalog details for each table.
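In practice we usually filter by schema and select only a few useful catalog columns; a sketch, with a hypothetical schema name:

```sql
-- List only user tables in one schema, with row-count estimate
-- (CARD) and creation time from the catalog.
SELECT tabschema, tabname, card, create_time
FROM   syscat.tables
WHERE  type = 'T'
AND    tabschema = 'MYSCHEMA'
ORDER BY tabname;
```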

Informatica Interview questions

Which transformation should we use to normalize the COBOL and relational sources?

Normalizer Transformation. When we drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation appears automatically, creating input and output ports for every column in the source.

Difference between static cache and dynamic cache?

With a dynamic cache, when you insert a new row the lookup checks the cache to see whether the row already exists; if not, it inserts the row into both the target and the cache. With a static cache, when you insert a new row the lookup checks the cache and writes to the target, but the cache itself is not updated.

If you cache the lookup table, you can choose to use a dynamic or static cache. By default, the lookup cache remains static and does not change during the session. With a dynamic cache, the Informatica Server inserts or updates rows in the cache during the session. When you cache the target table as the lookup, you can look up values in the target and insert them if they do not exist, or update them if they do.


Informatica Interview Questions

What are the differences between Connected and Unconnected Lookup?

Connected Lookup vs. Unconnected Lookup:

  1. A connected lookup participates in the data flow and receives input directly from the pipeline; an unconnected lookup receives input values from the result of a :LKP expression in another transformation.
  2. A connected lookup can use both dynamic and static caches; an unconnected lookup cache cannot be dynamic.
  3. A connected lookup can return more than one column value (output port); an unconnected lookup can return only one column value, i.e. its return port.
  4. A connected lookup caches all lookup columns; an unconnected lookup caches only the lookup ports used in the lookup condition and the return port.
  5. A connected lookup supports user-defined default values (the value to return when the lookup condition is not satisfied); an unconnected lookup does not.

What is the difference between Router and Filter?

Router vs. Filter:

  1. A Router transformation divides the incoming records into multiple groups based on conditions; the groups can be mutually inclusive (different groups may contain the same record). A Filter transformation restricts or blocks the incoming record set based on a single condition.
  2. The Router itself does not block any record: if a record does not match any routing condition, it is routed to the default group. The Filter has no default group: if a record does not match the filter condition, it is blocked.
  3. Router acts like a CASE..WHEN statement in SQL (or a switch..case statement in C); Filter acts like a WHERE condition in SQL.
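The CASE/WHERE analogy can be sketched in SQL (table and column names are hypothetical):

```sql
-- Router: every row is labelled with some group (or the default
-- group), like a CASE expression.
SELECT order_id,
       CASE
           WHEN amount >= 1000 THEN 'LARGE'
           WHEN amount >= 100  THEN 'MEDIUM'
           ELSE 'DEFAULT_GROUP'
       END AS route_group
FROM orders;

-- Filter: rows not matching the condition are simply dropped,
-- like a WHERE clause.
SELECT order_id, amount
FROM orders
WHERE amount >= 1000;
```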

What can we do to improve the performance of Informatica Aggregator Transformation?

Aggregator performance improves dramatically if records are sorted before passing to the aggregator and “sorted input” option under aggregator properties is checked. The record set should be sorted on those columns that are used in Group By operation.

It is often a good idea to sort the record set at the database level, e.g. inside a Source Qualifier transformation, since the database can usually sort more efficiently than Informatica can; the exception is when the already sorted records from the Source Qualifier could become unsorted again before reaching the aggregator.
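For example, if the Aggregator groups by customer_id and order_date, the Source Qualifier's SQL override might sort on exactly those columns (names hypothetical):

```sql
-- Sort on the Group By columns before the rows reach the
-- Aggregator, so "sorted input" can be enabled safely.
SELECT customer_id, order_date, amount
FROM   orders
ORDER BY customer_id, order_date;
```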

What are the different lookup cache?

Lookups can be cached or uncached (no cache). A cached lookup can be either static or dynamic. A static cache is one which is not modified once it is built and remains the same during the session run. A dynamic cache, on the other hand, is refreshed during the session run by inserting or updating records in the cache based on the incoming source data.

A lookup cache can also be classified as persistent or non-persistent, based on whether Informatica retains the cache after the session run completes.

How can we update a record in target table without using Update strategy?

A target table can be updated without using ‘Update Strategy’. For this, we need to define the key in the target table in Informatica level and then we need to connect the key and the field we want to update in the mapping Target. In the session level, we should set the target property as “Update as Update” and check the “Update” check-box.

Let’s assume we have a target table “Customer” with fields as “Customer ID”, “Customer Name” and “Customer Address”. Suppose we want to update “Customer Address” without an Update Strategy. Then we have to define “Customer ID” as primary key in Informatica level and we will have to connect Customer ID and Customer Address fields in the mapping. If the session properties are set correctly as described above, then the mapping will only update the customer address field for all matching customer IDs.
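Under these settings the session effectively issues an UPDATE keyed on the primary key defined at the Informatica level; roughly the following, using the column names from the example above as a sketch:

```sql
-- Statement generated per incoming row: the connected non-key
-- field is SET, the defined key drives the WHERE clause.
UPDATE customer
SET    customer_address = ?
WHERE  customer_id = ?;
```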

Deleting duplicate row using Informatica

Q1. Suppose we have duplicate records in the source system and we want to load only the unique records into the target system, eliminating the duplicate rows. What will be the approach?
Let us assume that the source system is a relational database and the source table has duplicate rows. To eliminate the duplicates, we can check the Distinct option on the Source Qualifier of the source table and load the target accordingly.
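Checking the Distinct option makes the Source Qualifier generate a SELECT DISTINCT, along the lines of the following sketch (table and column names hypothetical):

```sql
-- Identical source rows collapse to a single row each.
SELECT DISTINCT customer_id, customer_name, customer_address
FROM   source_customer;
```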