Channel: Process Integration (PI) & SOA Middleware

How to Configure Agent as Configurable Parameter in PI ccBPM


To use decision steps and deliver alerts in ccBPM processes, you have to configure an 'Agent' as a configurable parameter. This blog briefly explains how to do that.

 

1. First create a new configurable parameter (in this example called Middleware) of the type Agent. Save and activate the ccBPM process.

blog1.png

blog2.png

blog3.png

2. Go to the Integration Directory/Builder and open the integration process containing the agent.

blog4.png

 

3. Set the object to edit mode and enter the value (the real user account as known in SAP Business Workflow) for the agent.

blog6.png

4. Press ENTER and activate your changes. Please note: if you don't press ENTER, your changes will not be reflected in the configuration.

blog7.png

 

From now on, the approver (Middleware) can see his/her decision-step requests as workflow items in the inbox of transaction SBWP, where he/she can take appropriate action. If configured, alerts will also be delivered to this user's e-mail inbox.


Certificate and Troubleshooting - Guide for Seeburger - AS2 - Adapter


Hello,

 

For many years now, the Seeburger AS2 adapter has been available for PI and is heavily used throughout different industries. The documentation that comes with the adapter is quite comprehensive and already covers some potential configuration issues (to help out if someone is not completely familiar with the AS2 protocol).

 

Here I now want to provide you with a "HowTo" guide that has been created based on the experience of Seeburger consultants in various projects. It is not complete, but it covers many different issues that you might encounter with certificates in your AS2 setup.

 

Please be aware: this document does not replace the AS2 adapter documentation, but should be used as an additional FAQ reference.

It is not an "official" document, but it will be updated with feedback provided in this blog.

 

 

The following (temporary) link allows for viewing and downloading the document (as PDF).


https://mft.seeburger.de/portal-seefx/~public/c034b83f-2591-4e21-b994-b4d5cf7d6756?download

 

Let me know if you encounter any difficulties. Looking forward to your feedback.

Insert value from Request message to Response message using GetPayloadValueBean and PutPayloadValueBean


There is a very frequent requirement for scenarios like the following:

1. An asynchronous interface (IDoc/file) calls a synchronous interface (RFC/web service)

2. Some values from the IDoc/file request message are merged with the response message, and the merged message is sent on to create another file

 

Earlier, it was impossible to achieve the above scenario without BPM. Then came the async-sync bridge using the RequestResponse and ResponseOneway beans.

However, it was still not possible to merge values from the request message into the response message without BPM, as PI offered no provision to store the value of a field across the call.

 

That's not the case now:

One can use GetPayloadValueBean and PutPayloadValueBean to achieve this.

 

I will not focus on the async-sync bridge using the RequestResponse and ResponseOneway beans, as this is already well described on SCN.

 

To continue with the above scenario, we will take an example where:

1. The asynchronous request message sends an SAP username and a reference username

2. The SAP ID is fetched for the SAP username via a synchronous RFC call

3. The SAP ID and the reference username are merged and sent in a response file

 

We can achieve this as follows:

 

1. Create a File-to-RFC synchronous scenario with:

     a request message having 2 fields, SAP username and reference username

     a response message having 2 fields, SAP ID and reference username

 

2. Configure the async-sync bridge in the sender file adapter using the RequestResponse and ResponseOneway beans

 

3. In the RFC adapter, configure the GetPayloadValueBean, RemovePayloadValueBean and PutPayloadValueBean as follows:

 

 

Module.png

We need the value of the reference username to be merged with the SAP ID. However, we don't pass this field to the RFC, so how can we store it?

For this purpose, I added one field to the RFC structure in PI:

a. Export the XSD of the RFC request structure

b. Edit the XSD and add the field WEBREFERENCEUSER

c. Import the XSD as an external definition and use it as the target message in the request message mapping

The operation mapping will still use the RFC interface imported from SAP; the external definition is used only in the message mapping.

 

Now we use the GetPayloadValueBean to store the value of WEBREFERENCEUSER in the module context. This acts like a variable holding the value of the field (similar to a container variable in BPM). In the example above, it is named test.

Define the parameter as get:/ns1:BAPI_USER_GET_DETAIL/WEBREFERENCEUSER and the parameter value as test.

The parameter xmlns is used to define the namespace ns1; multiple namespaces can be defined, separated by spaces.

We can define multiple parameters to store multiple values.
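For illustration, the xmlns entry could look like this (the namespace URI is an assumption - use the namespace of your imported RFC):

parameter name : xmlns

parameter value : ns1=urn:sap-com:document:sap:rfc:functions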

 

So we have successfully stored the value for request message.

However, we don't want to pass WEBREFERENCEUSER to the RFC call, as there is no such field in the RFC structure. So we remove the element before calling the RFC, using the RemovePayloadValueBean:

parameter name : remove:/ns1:BAPI_USER_GET_DETAIL/WEBREFERENCEUSER

The same module key is used as for the GetPayloadValueBean.

 

The RFC itself is called by the adapter's own module, so the GetPayloadValueBean and RemovePayloadValueBean must be placed before the adapter's module call.

 

Next, we need to retrieve the value stored earlier, so we add the PutPayloadValueBean after the adapter's module:

parameter name - put:/ns1:BAPI_USER_GET_DETAIL.Response/ADDRESS/ADDR_NO

parameter value - test ----> the same module context where the value from the request message was stored earlier

Here as well, the same module key is used as for the GetPayloadValueBean.
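Consolidating the parameters described above, the module configuration might look like this (the module key and the namespace value are illustrative):

Module Key   Parameter Name                                            Parameter Value
get          get:/ns1:BAPI_USER_GET_DETAIL/WEBREFERENCEUSER            test
get          remove:/ns1:BAPI_USER_GET_DETAIL/WEBREFERENCEUSER
get          put:/ns1:BAPI_USER_GET_DETAIL.Response/ADDRESS/ADDR_NO    test
get          xmlns                                                     ns1=urn:sap-com:document:sap:rfc:functions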

 

We can add a new field to Response structure or use existing one.

The response mapping then maps the value as required.

 

Below are the message log details:

 

auditlog.png

The response mapping:

 

RespMapping.png

 

This is the response:

 

<?xml version="1.0" encoding="UTF-8"?>

<ns1:mt_test_in xmlns:ns1="http://abc.com/test">

<id>0000067833</id>

<name>TestReferenceUser</name>

</ns1:mt_test_in>

 

Reference links:

http://help.sap.com/saphelp_nw73ehp1/helpdata/en/09/14324ca86f4fa8b0ccbd4e5aaa7139/content.htm

http://help.sap.com/saphelp_nw73ehp1/helpdata/en/d7/d0ee447cfe43d6b44fbe7845781a14/content.htm?frameset=/en/09/14324ca86f4fa8b0ccbd4e5aaa7139/frameset.htm

http://help.sap.com/saphelp_nw73ehp1/helpdata/en/03/f9286f7b284928b1c41025d4ba1cf4/content.htm?frameset=/en/09/14324ca86f4fa8b0ccbd4e5aaa7139/frameset.htm

Resolving Connection Issue in Seeburger Workbench Mapping Variables


Brief Description

In one of our projects, we had a couple of outbound interfaces wherein IDoc is the sender and an EDI subsystem is the receiver. The message mapping was completed and we did rigorous testing in development; the interface worked successfully. The IDoc was triggered and the EDI file was generated without any issues.

In the ISA segment, the field D_I12 is the interchange control number. This is how an EDI subsystem identifies the envelope. This control number must not be random: it should be an incrementing number, 9 digits long, padded with zeroes.

The UDF below, along with a combination of further UDFs, maintains a counter in the Seeburger Workbench, increments it every time this interface is executed, and returns the counter value to the mapping and thus to the output EDI file. This was all working perfectly fine when we tested in our development environment. Later we decided to move the interface to Quality and conduct further testing before moving it to production.

When the IDoc was triggered in the Quality system, the outbound interface failed at the mapping level.

1.jpg

Below is a screenshot of the Seeburger Workbench where the counter variables are maintained and are incremented on every execution.

 

3.jpg

 

Error

Usually, this UDF would establish a connection to the Seeburger Workbench, increment the mapping variable and return the incremented value. After debugging in Quality, we found the source of the error.

The UDF could not establish a connection to the Seeburger Workbench. This was the root cause of the error. Why a connection could not be established required deeper investigation.

2.jpg

Investigation

There are Java classes provided by Seeburger, usually found in your Seeburger software component under "Imported Archives"; there I found the Java classes used for the connection. After identifying the properties file containing values for parameters like host, port, etc., I found it pretty strange that the port, "50000", was hardcoded rather than dynamic, but I am sure there must be a reason why this was done. I immediately checked the port of my Quality server: it was a different port, not 50000, and hence did not match the port hardcoded in the properties file. There were 2 properties files containing the hardcoded port.

1)  com.seeburger.functions.permstore.CounterFactory.properties

 

5.jpg

2)   com.seeburger.functions.permstore.VariableFactory.properties  

    

file1.jpg
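Conceptually, the relevant entry in each of these files looked like this (the property names here are illustrative; the point is the fixed port value):

# com.seeburger.functions.permstore.CounterFactory.properties
host=<PI host>
port=50000    <- hardcoded; must match the port of the target environment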

 

Solution

The port in these 2 files had to be changed to match the port of my Quality system. I exported the archive from Imported Archives, opened the properties files in Notepad, changed the port number to my port and saved them. The files were then imported back, and I activated the imported archives containing the changed properties files. I re-triggered an IDoc, and this time there was no error: the UDF made a successful connection to the Seeburger Workbench and returned the incremented counter value.

We later applied the same solution in the production environment and corrected the same issue there.

I hope this solution will help you in case your ports are different for each environment.

XPI Inspector


Hi Folks,

Yesterday I read about XPI Inspector and used it for the first time. I found it very interesting. Many of you may already be aware of it, but I feel there are also many people who are not, so I thought I would share a few points I learnt.

To start with, XPI Inspector is a tool developed by SAP: a web application for collecting information about XI-related configurations and traces.

The tool also performs a certain number of configuration checks, such as SSL client/server verification.

Which checks are executed depends on the selected example or on the type and properties of the selected XI communication channels.

The tool does not make any configuration changes on the system. All collected information is saved on the file system of the central Java instance and is available for review.

Note that in order to perform some checks, such as verification of SSL client connections, the tool automatically opens a dummy HTTPS or FTPS connection to the remote SSL server.

Several additional general options are available for selection:

 

1. Collect debug traces from Messaging System

2. Collect debug trace from XI Module Processor

3. Collect HTTP Traces

4. Collect Open SQL Traces

5. Collect JCo Traces

6. Collect information about the system state.

Download the file named "xpi_inspector_ear.ear" from the sap marketplace.
Deploy the tool on the XI Adapter Engine which you would like to inspect. Note that if you download the file via a web browser, the extension may be changed from EAR to ZIP; in this case, rename it from ZIP back to EAR right after the download.
To deploy on 6.40 or 7.00 server version use SDM.
To deploy on 7.10 or above server versions use one of the following options:

    • Deploy View Plug-in from SAP NWDS.
    • Telnet command: deploy <xpi_inspector_ear.ear file path> version_rule=all
    • JSPM tool
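For example, a deployment via the telnet administration console could look like this (host, telnet port and file path are placeholders):

telnet pihost 50008
deploy /tmp/xpi_inspector_ear.ear version_rule=all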


Open a new browser window and load the "XPI Inspector" by using the following url address: http(s)://<host>:<port>/xpi_inspector
The recommended browser is Microsoft IE.

You will need administrator's credentials to access the XPI Inspector.
Select the example to use. SAP support will inform you in advance which example you need.
Follow all the instructions displayed on the screen.
Basically, you need to perform the following three steps:

    1. Select the example and start the inspection.
    2. Reproduce the problem and stop the inspection right after the problem occurs.
    3. Download the zip file generated by the tool and attach the zip file to the CSS message.


SAP will continue to extend the set of automatic checks performed by the tool and will update the version of the tool in this note accordingly.
You can check the version of the tool already deployed on your system by using "About" dialog in the UI.
In case of problems send a notification e-mail to the author by using the link inside the same dialog.

 

 

List of examples available for use:
  Example 1   (CPA Cache)
  Example 11  (Authentication & SSL)
  Example 18  (RWB)
  Example 19  (Mapping Runtime)
  Example 30  (XI Adapter)
  Example 40  (XI Message)
  Example 50  (XI Channel)
  Example 51  (Performance Problem)
  Example 52  (Authorization & Session Management)
  Example 60  (JEE Service)
  Example 70  (JEE Application)
  Example 80  (Default Trace)
  Example 100 (Custom)

 

Hope  this helps.

Regards,

Amarnath

Building a Custom Lookup Service for cross referencing table in PI 7.3.1 Single Stack.


Every SAP integration implementation has the challenge of converting legacy values to the corresponding SAP values and vice versa. There are various approaches, and each has pros and cons.

 

The ideal solution would be to maintain the value mapping table in the Integration Directory. But this only serves well when the lookup entries form a small data set; as the data set grows, the cache size keeps increasing and consumes heap memory.

 

The next approach is to save the entries in a custom ABAP table on ECC and look them up using an RFC lookup, or in a database table using a JDBC lookup. Again, we know the overheads involved here.

 

Recently I came across an option with minimal overhead compared to the above solutions: UKMS (Unified Key Mapping Service). It has been available for a long time, but I hadn't come across it in my short career in PI consulting (http://scn.sap.com/people/rudolf.yaskorski/blog/2009/11/04/accessing-ukms-from-message-mappings-single-value-implementation--part-1). The idea is to keep the data set close to PI, i.e. on the ABAP stack of PI, and use the API provided by SAP in the ESR, which internally does an RFC lookup. Although it can be argued that the gain over an RFC lookup against a table on the ECC system is not significant, there is a gain, since all data-translation calls are executed inside PI.

 

Now, with PI 7.3.1 single stack, this can't be done, as there is no ABAP stack on which to keep the data set close to PI.

 

I considered the following points.

 

  1. Keep the data set close to PI. So why not maintain the table in the database of the AS Java on which PI is installed? Use the Java Dictionary to create the table and deploy it on the PI server. The table has 6 columns, just like the value mapping table (Source Agency (SA), Source Scheme (SS), Source Value (SV), Target Agency (TA), Target Scheme (TS) and Target Value (TV)), with SA, SS, SV, TA and TS as the primary key.
  2. Instead of going for a graphical lookup (a JDBC lookup in this case), why not fetch directly from the DB table? It's usually not recommended, but all we are doing here is value retrieval: no transactions, no keeping the connection open for a long time, just a single hit per lookup and then close. The overhead of converting to the JDBC structure and the whole round trip through the JDBC adapter is saved. In the example I have used the CPA cache's data source, as I expect it to be the least used one at runtime compared to the AF data source, since the CPA cache keeps its objects in memory.
  3. Use a simple cache implementation. In the example I have used a synchronized LinkedHashMap with maximum entries = 10000 and eldest-entry removal as the eviction policy. This can be replaced with any of the available cache implementations (ehcache, which I tried with some errors, or Google Guava, which should be straightforward). I was facing a problem keeping the cache accessible across mapping invocations: a static variable does not work, perhaps due to the way the mapping classes are loaded. So I used JNDI to bind my cache object and do a JNDI lookup, which I found very useful. The gain from this cache may be minimal for large data sets, but locality of reference underlies all the computation we do. Following is the sample code.

    

         

package com.ibm.cust.lookup;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;


public class CustLookupService {

  public String getMapping(String sa, String ss, String sv, String ta, String ts) {

    Connection con;
    DataSource ds;
    InitialContext ctx;
    PreparedStatement stmt;
    ResultSet rs;
    String targetValue = "";
    final int maxEntries = 10000;
    Map<String, String> cache = null;
    Properties properties = new Properties();

    // Try to fetch the cache bound into JNDI by an earlier mapping invocation.
    try {
      ctx = new InitialContext(properties);
      cache = (Map<String, String>) ctx.lookup("LRUCache");
    } catch (NamingException e) {
      // Not bound yet (first invocation on this server node); created below.
    }

    if (cache == null) {
      // Synchronized LinkedHashMap that evicts the eldest entry once maxEntries is exceeded.
      cache = Collections.synchronizedMap(new LinkedHashMap<String, String>(maxEntries, .90F, false) {
        private static final long serialVersionUID = 1L;
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
          return size() > maxEntries;
        }
      });
    }

    targetValue = cache.get(sa + ":" + ss + ":" + sv + ":" + ta + ":" + ts);

    if (targetValue == null) {
      // Cache miss: read the target value from the table via the CPA data source.
      try {
        ctx = new InitialContext(properties);
        ds = (DataSource) ctx.lookup("jdbc/notx/SAP/BC_XI_AF_CPA");
        con = ds.getConnection();
        stmt = con.prepareStatement("select TV from UKMS_ENTRIES where (SA = ? AND SS = ? AND SV = ? AND TA = ? AND TS = ?)");
        stmt.setString(1, sa);
        stmt.setString(2, ss);
        stmt.setString(3, sv);
        stmt.setString(4, ta);
        stmt.setString(5, ts);

        rs = stmt.executeQuery();
        if (rs.next())
          targetValue = rs.getString(1);
        else
          targetValue = "result set was empty";

        // Release the JDBC resources: a single hit per lookup, then close.
        rs.close();
        stmt.close();
        con.close();

        // Remember the value and publish the cache for the next invocation.
        cache.put(sa + ":" + ss + ":" + sv + ":" + ta + ":" + ts, targetValue);
        ctx.rebind("LRUCache", cache);

      } catch (NamingException e) {
        targetValue = "DS Naming Ex" + e.getMessage() + e.getCause();
      } catch (SQLException e) {
        targetValue = "SQL Ex" + e.getMessage() + e.getCause();
      }
    }
    return targetValue;
  }
}

     

 

  4. To maintain the table, it's a simple problem with lots of options available; choose according to your convenience.
    1. Use BPM: build a UI task and post to PI, then insert into the DB using the JDBC adapter.
    2. File to JDBC: a simple CSV for mass loading.
    3. As in the case of value mapping replication: proxy to JDBC.

     

 

Implementation Steps:

 

  1. Create a Java Dictionary project to store the mapping entries, build the SDA and deploy it on PI.
test1.JPG
  2. Use any of the options (discussed in point 4) to insert the values into the table.
  3. Use the code provided to build the ZIP containing the class file, and import the archive in the ESR.
test2.JPG
  4. Use it in a UDF, as sketched below.

test3.JPG
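For illustration, a UDF delegating to the lookup class might look like this (the agency/scheme literals are made up for the example):

public String xref(String sourceValue, Container container) throws StreamTransformationException {
  // Delegate to the custom lookup service from the imported archive.
  return new CustLookupService().getMapping("LEGACY", "MATNR", sourceValue, "SAP", "MATNR");
}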

 

Any thoughts on the feasibility of this idea in a productive implementation would be great. This is my first blog, so please let me know if I missed anything in putting my idea forward.

Request/Response Bean for IDOC_AAE adapter


Hi,

Here is my first blog in SCN.

In SAP PI 7.31, we have the option of using adapter modules in the IDOC_AAE adapter, which is a very interesting option. Generally, IDoc processing is asynchronous, but in this scenario we will see how it can work synchronously.

This is a scenario for Request/Response Bean for IDOC_AAE adapter.

Here I am using my scenario for test purposes; it is not a real-life scenario. My only focus in this blog is the Request/Response Bean for IDOC_AAE.

My scenario is that I want to take an IDoc from ECC, send it to an RFC (which is synchronous), and place the RFC response data into a file in another location.

My Scenario:

I have imported the IDoc and RFCs from the SAP R/3 system and created one data type and message type for the file structure.

Here, the trick for the Request/Response Bean is to first create one operation mapping with only the asynchronous IDoc and RFC interfaces, and then to create the ID part.

In the ID part, perform the steps below.

Import the business systems from the SLD for both IDoc and RFC, and create one business component for FILE.

Create one Sender IDOC Communication Channel and one RFC receiver Channel and one File Receiver Channel.

I am concentrating only on the IDoc communication channel for the Request/Response Bean; the RFC and file channels remain the same as in other general cases.

Sender IDOC Communication Channel Configuration:

For the sender IDOC_AAE channel, use the RFC parameters as they are; you can also use the defaults for the RFC server parameters.

In the module tab of the IDOC_AAE adapter, add the steps below.

 

 

Here we have to use:

Untitled.png

Adapter Namespace: http://sap.com/xi/XI/System

Adapter Type: File

Otherwise, a CPACacheLookup error is thrown.
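For orientation, the module chain of the sender channel follows the usual async-sync bridge pattern. A sketch (module keys are illustrative, and the exact position of the adapter's standard module may differ on your release):

Number  Module Name                      Module Key   Parameters
1       AF_Modules/RequestResponseBean   rrb          passThrough = true
2       <adapter's standard module>
3       AF_Modules/ResponseOnewayBean    row          receiverChannel = <file receiver channel>, receiverService = <file business component>, adapterType = File, adapterNamespace = http://sap.com/xi/XI/System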

Now create the ICO for this object as usual, with the asynchronous operation mapping created before.

Coming back to the ESR, create one synchronous service interface for the IDoc and the response message, as follows:

Sender Service Interface:

Untitled.png

Here

Category:  Outbound.

MODE:  Synchronous.

Request Message: use your IDoc message type.

Response Message: use your file message type.

Now change the operation mapping for IDoc-to-RFC used in the ICO:

a. Remove the IDoc interface as sender

b. Enter the synchronous interface created above, using the IDoc and response message types

c. Maintain both the request and the response mapping

 

Untitled.png

Untitled.png

Testing:

Trigger the IDOC from ECC.

Now go to Communication Channel and check your interface.

Untitled.png

 

 

Check In Message Monitoring

Here you will see two messages, one for the request and one for the response.

 

Untitled.png

 

Finally, the file has been created in the target directory.

Untitled.png

This is it for the Request/Response Bean for IDOC_AAE.

Note: If we try to change the ICO now, it will throw an error, as the operation mapping is now synchronous with a different sender interface. So if the ICO needs to be changed, first point the operation mapping to the asynchronous interface again and activate the ICO object; then change the operation mapping back to the synchronous interface and activate the operation mapping.

 

 

Regards,

Sreenivas Veldanda

PI Alerting on AAE/AEX


The new component-based message alerting for PI, available with release 7.31 (http://help.sap.com/saphelp_nw73ehp1/helpdata/en/2c/f0a3d4540c4c9a9af65139801ef826/content.htm), eliminates the dependency on the ABAP PI stack and lets you benefit from alerting on Java-only PI installations like AEX. Even more, by starting a dedicated Java scheduler job on your AEX, you are in a position to receive e-mails for erroneous situations almost out of the box. The older alerting solution is completely based on the PI Integration Server (http://help.sap.com/saphelp_nw73ehp1/helpdata/en/4b/b30db925cc3c1de10000000a42189b/content.htm?frameset=/en/4b/b30db925cc3c1de10000000a42189b/frameset.htm).

 

The new solution decomposes alert evaluation to each PI component, where a local alert engine generates alerts based on rules maintained in and distributed from the PI Directory. The local alert engine then keeps the alerts in a dedicated store until some consumer (e.g. an alert inbox solution) processes them.

 

I will briefly cover some aspects of component-based message alerting, with emphasis on its Java side.

 

Alert store

 

Local alert stores are actually defined at configuration time, inside alert rules. Any consumer in an alert rule will result in a local alert store once the rule is distributed to the corresponding PI component during activation and the standard PI cache refresh procedure (an alert rule is a configuration object in the PI Directory). If an alert rule is enabled, it is distributed to the local CPA cache of the components selected in the rule. It is then accessible to the local alert engine when it evaluates alerts for erroneous events coming from the PI runtime.

alert_rule.png

 

Local alert stores are created dynamically at runtime by the local alert engine, with the first alert evaluated for a particular consumer. On AAE/AEX, the alert store is a JMS queue. Each alert is actually a JMS text message whose text content is encoded in JSON format (the alert payload) and instrumented with custom JMS header fields for the error label, component, rule ID and scenario ID. The latter allows fast access (browse or consume) to alerts, for example by a particular error label or component, ignoring the natural queue order, through the use of a JMS selector.

 

Any JMS destination (including the alert queues) can be accessed remotely via P4 using the standard JMS API. There is also a web service on each AAE/AEX for consuming alerts - http://host:port/AlertRetrieveAPI_Service/AlertRetrieveAPIImplBean?wsdl.

 

For troubleshooting, there might be situations where you just want to scan the alert store without touching the alerts in it, so the WS is not quite useful. The recommended way is to use an open-source tool for browsing JMS destinations, for example Hermes JMS (http://www.hermesjms.com). I tried it and, to be honest, it is really buggy. To save you time, I recommend using the preconfigured configuration file attached to this post as hermes-config.xml (replace the one in your .hermes home folder after installation). I figured it out with installation version 1.15, but I hope it is compatible with any version. Just adjust the principal, credentials, host and P4 port (the providerURL property), as well as the local path to the required J2EE client JAR libraries. If you would like to browse more alert queues besides ALERT-TO-MAIL, add them too. You may also browse with a selector based on the ErrLabel, ScenarioId, RuleId and Component header fields of the alerts. Another important remark: you do not need to trigger and wait for a JNDI lookup; in the file I have already preconfigured what is necessary, which really saves hours.

browse_with_selector.png

 

Just to mention that every (remote) alert consumer must have the role SAP_XI_ALERT_CONSUMER (or Administrator); otherwise an exception is thrown when trying to access the alert store, for example:
“javax.jms.JMSSecurityException: You do not have permissions: action alertingVP.queue and instance:name alertingVP.queue
action browse
instance jms/queue/xi/monitoring/alert/ALERT-TO-MAIL”.
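For illustration, a minimal remote browse over P4 with the plain JMS API might look like the sketch below. The connection-factory JNDI name and the P4 port are assumptions (the queue name follows the exception text above), and the SAP J2EE client libraries must be on the classpath:

import java.util.Enumeration;
import java.util.Properties;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class AlertQueueScanner {
  public static void main(String[] args) throws Exception {
    Properties env = new Properties();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sap.engine.services.jndi.InitialContextFactoryImpl");
    env.put(Context.PROVIDER_URL, "pihost:50004");    // P4 port of the AAE/AEX (placeholder)
    env.put(Context.SECURITY_PRINCIPAL, "alertuser"); // needs role SAP_XI_ALERT_CONSUMER
    env.put(Context.SECURITY_CREDENTIALS, "secret");
    InitialContext ctx = new InitialContext(env);

    QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jmsfactory/default/QueueConnectionFactory"); // assumed JNDI name
    Queue alertQueue = (Queue) ctx.lookup("jms/queue/xi/monitoring/alert/ALERT-TO-MAIL");

    QueueConnection con = qcf.createQueueConnection();
    con.start();
    QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

    // Browse (do not consume) the alerts of one component, using the custom JMS header fields as selector.
    QueueBrowser browser = session.createBrowser(alertQueue, "Component = 'af.qx2.uctvt783'");
    Enumeration<?> alerts = browser.getEnumeration();
    while (alerts.hasMoreElements()) {
      TextMessage alert = (TextMessage) alerts.nextElement();
      System.out.println(alert.getText()); // the JSON alert payload
    }
    con.close();
  }
}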

 

In this example I am using a User-Defined Message Search (UDS) scenario with a failed PI message, which results in an alert containing the UDS attributes as part of the JSON. This feature can be enabled or disabled in the alert rule definition at configuration time. Please note that you cannot have the whole payload of the failed PI message in the alert JSON, only the UDS attributes, if there are any. This is how the main tools involved (PI Directory alert rule, UDS configuration in NWA, PI message payload and the alert payload displayed with Hermes JMS) look:

uds_alert_example.png

 

Default traces example

 

In case you are not receiving alerts for a particular erroneous situation, you might want to know what is going on and where processing got stuck. For this I am attaching sample traces that help validate all the important steps during alert evaluation. Please check the attached text document default_traces_example.txt for more details.

 

Housekeeping alerts

Housekeeping alerts mark a situation which you might want to address, as it indicates that either too many alerts were received within a short time interval, or the consumption of alerts is too slow (or completely missing). Both situations might lead to a lot of memory being consumed to store the alerts. A housekeeping alert might look like this:

 

{
"Component": "af.qx2.uctvt783",
"ErrCat": "HOUSEKEEPING_ERROR_CATEGORY",
"ErrCode": "HOUSEKEEPING_ERROR_CODE",
"ErrLabel": "1033",
"ErrText": "109 alerts were deleted for component af.qx2.uctvt783 and
rule ID fd2ab54d40f933b5ad2a2454e49e80d0",
"RuleId": "fd2ab54d40f933b5ad2a2454e49e80d0",
"Timestamp": "2013-02-05T20:17:15Z"
}

 

If housekeeping is triggered, the alerts are removed from the store, and aggregation based on component and rule produces just a few reserved alerts instead. This way the store is protected from overload and the consumer is notified about the potentially dangerous situation. In the example above, 109 alerts for the specified component and rule were removed and replaced by this single alert when housekeeping was triggered at the timestamp specified. In general this means that either some scenario or set of scenarios is producing lots of errors and alerts within a short time interval, or alerts have not been consumed from the store for quite a long time.

The boundaries for detecting a housekeeping situation are configurable as service properties of the alerting service:

housekeeping.png

By default, housekeeping checks are performed at each chunk end (by default every 200 alerts). If two consecutive chunks are sent faster than the threshold interval (by default about 2*200 alerts per 4 seconds, i.e. roughly 100 alerts per second), housekeeping is triggered. The other condition that triggers housekeeping is when no alert has been consumed for more than 15 minutes (900000 milliseconds); in this case the assumption is that the consumer is completely missing or consumes too rarely. As mentioned, all these intervals and numbers are configurable as online-modifiable service properties, and no AS Java restart is necessary to apply them. Just change them and press the "Save" button; they are applied immediately.

 

Known fixed issues

 

 

And finally, I hope you have already found this quite nice troubleshooting guide:

http://wiki.sdn.sap.com/wiki/display/TechTSG/%28PI%29+Component-Based+Message+Alerting

 

PS: In case you have ABAP proxies which still generate error events and you would like to benefit from this new alerting, despite it not being downported to releases lower than 7.31, do not worry: it is still possible to configure them to send their error events to any AAE/AEX, where the local alert engine will evaluate alerts for those older ABAP components, and the alerts can still be consumed in the same fashion from the local alert store of the AAE/AEX - http://help.sap.com/saphelp_nw73ehp1/helpdata/en/ce/d9b40646464dc78d750169d25d7278/content.htm?frameset=/en/2c/f0a3d4540c4c9a9af65139801ef826/frameset.htm


PGP and SFTP : FAQ Sheet


If you have read the following blogs on using the SFTP and PGP solutions in PI and still have unanswered questions, this blog will look at addressing the common queries on these subjects:

 

1. SFTP Adapter - SAP SFTP Adapter: A Quick Walkthrough

2. PGP Module

          a. PGPEncryption Module: A Simple How to Guide

          b. PGPDecryption Module: A Simple How to Guide

 

 

Note: The below list will be updated with further questions and answers appropriately.

 

FAQ - SFTP Adapter

 

Q1. My file is not getting picked. What is going wrong?

Ans. Unlike the normal FTP adapter, the SFTP adapter expects a regular expression for the file name. Cross-check your configuration and provide a correct regular expression, e.g. .*\.csv rather than the FTP-style wildcard *.csv.

 

Q2. I am getting the error, "Could not process message, Internal PGP Error (org.bouncycastle.openpgp.PGPException: Exception creating cipher)"

Ans: It could be an unlimited-strength JCE issue. Try the settings described in the 'Unlimited JCE' section of this document.

 

 

Q3. I am facing issues using the ASMA in the Receiver SFTP adapter.

Ans: Try changing the namespace to http://sap.com/xi/XI/System/File and the File Name Attribute to FileName.

 

scn_27march2013.JPG

 

FAQ - PGP Module

 

Q1. When I have to do Encryption, what do I need to have?

Ans: You will need a public key, along with confirmation of which algorithm needs to be configured.

 

Q2. Who will provide me the public key?

Ans: Usually, an encryption is used in scenarios where PI is supposed to send files to external or third party systems (vendors, suppliers, customers etc). In these cases, the public keys are provided by the respective vendor/supplier/customer.

 

Q3. When I have to Sign and Encrypt how is it different from Q1 and Q2?

Ans: To sign, PI will also need a private key along with its passphrase.

 

Q4: Who will provide me with the key for Signing?

Ans: Since this is a private key, your organization is responsible.

 

Q5. When I have to do Decryption, what do I need to have?

Ans: You will need a private key and the passphrase associated with it.

 

Q6. Who will provide me the private key for decryption?

Ans: Usually, decryption is used in scenarios where PI is receiving files from external or third party systems (vendors, suppliers, customers etc). Your organization would have provided the public key to the third party and will own the private key. Hence your organization should be providing you with the private key for you to configure the adapter.

 

Q7. When I have to Decrypt and Verify how is it different from Q5 and Q6?

Ans: To verify, PI will also need a public key usually provided by the third party involved in the exchange of files.

 

Q8. Can I manage my keys using the PI Keystore?

Ans: No. At the time of writing this blog, SAP does not provide an option to do this. The keys are managed at the OS file-system level; the default location is '/usr/sap/<System ID>/<Instance ID>/sec'.

 

Q9. Can I use PGP only for the File adapter?

Ans. No. The PGP module is compatible with other adapters like Mail, JMS, etc.

 

 

 

Step-by-Step Use of FCC, RD Based on File Name, CP in RD, ASMA, Dynamic Configuration, and StrictXML2PlainBean in a Single Scenario


Today I got a chance to learn FCC on the sender side, receiver determination based on a file name condition, use of context objects, use of CP (contains pattern) in receiver determination, use of dynamic configuration, use of the ASMA property on both sender and receiver side, and use of StrictXML2PlainBean on the receiver side, ALL in a single scenario. So I am sharing it here.

Hope you enjoy this. :-)

 

Scenario Description:

A file is picked from a local directory and either sent to SAP ECC via proxy together with an archive link for that file, OR sent to another system as-is, depending on the name of the file.

Solution: We need to provide the archive link in a field of the ECC receiver structure, so we use dynamic configuration. We need to send the file to two different systems based on the file name, so we use the ASMA property of the sender channel and define conditions in the receiver determination. We also give the receiver file the same name as the sender file, for which we use the receiver-side ASMA property as well. On the receiver side of the second system I am also using StrictXML2PlainBean, because we need the file as-is and the sender side uses FCC, so we have to convert back to the same flat structure. For that we can go for FCC or use a module; I am going for the module.

 

What is in this doc?

FCC for Sender Side

Use of receiver determination based on a file name condition

Use of Contains Pattern (CP) in Condition editor

Use of Context object to define the file name in RD condition

Use of Dynamic Configuration

Use of ASMA property for both Sender and Receiver Side

Use of StrictXML2PlainBean at Receiver side

 

 

What is not in this Doc?

The receiver structure for the proxy

How the proxy is implemented in ECC

The FILE-to-proxy mapping in depth

All other proxy-related configurations

-------------------------------------------------------------------------------------------------------

The sender structure looks like this:

 

In the mapping we provide the archive link in the following field (VALUE) using dynamic configuration; the other fields are mapped with constants:

 

 

Code for dynamic Configuration:

 

******************************************************

 

String str1 = "http:/" +  "/sap.com/xi/XI/System/File";

 

//Instantiate Dynamic configuration

DynamicConfiguration conf = (DynamicConfiguration) container.getTransformationParameters().get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);

 

//Instantiate MappingTrace.

MappingTrace trace = container.getTrace();

 

DynamicConfigurationKey key = DynamicConfigurationKey.create( str1, "FileName" );

//Get the filename

String ourSourceFileName = conf.get(key);

 

DynamicConfigurationKey key1 = DynamicConfigurationKey.create( str1, "Directory" );

//Get the filelocation

String ourSourceDirName = conf.get(key1);

 

return ourSourceDirName+"\\archive\\"+ourSourceFileName;

 

 

*************************************************************************

 

Interface mapping for FILE to ECC

 

 

No mapping is needed for the second system, because we are just transferring the file from one system to another as-is.

Mapping objects are like below:

 

 

 

ID part:

Sender CC:

 

The FCC part is needed to execute the dynamic configuration required for the FILE-to-ECC scenario.

 

Setting ASMA property for file name and directory name:

 

 

Receiver determination where we define the condition:

 

Define the Receiver Systems

 

Select the condition editor for the first one and open the value help for the left operand

 

Now select Context Object and open its value help, as below:

 

Select FileName

 

 

 

Now define the condition on the file name

 

Here you can also define a pattern search, like below:

Select CP (contains pattern) instead of the equals operator and then use a pattern such as *.

 

Do the same for the second system:

 

Once done, the conditions look like this:

 

Interface determination for ECC

 

 

Interface Determination for Second System:

 

No interface mapping is needed; keep the inbound interface name the same as the sender interface name

 

 

 

Sender Agreement

 

 

Receiver Agreement for ECC

 

 

Receiver Agreement for second system

 

Receiver CC for Second system:

 

 

Here I am using the StrictXML2PlainBean module instead of FCC

 

Receiver CC for ECC Proxy:

 

 

Hope you like this! :)

 

 

Regards

GAGAN

Placing file in two different directories using single receiver communication Channel


Hi All,

 

The purpose of this document is to show how to place a file in two different directories using a single receiver communication channel.

 

Steps:

 

The main purpose of this blog is to deliver the file into two different directories in the target system: one folder where the target system processes the file, and another folder where PI archives it. If processing of the file fails, the target system can fetch the same file again from the archive folder.

 

Design and Configuration Process:

 

As per the requirement, we have to create two data types, two message types, three service interfaces (one for the sender and two for the receivers), one message mapping and two operation mappings.

 

In the message mapping, please do the following steps…

Untitled.png

In the mapping we replace "+" with "\", because "\" is not accepted in the parameter value of a parameterized mapping; the directory is therefore maintained with "+" and converted back at runtime.

 

We need a UDF to supply the different directories, and a parameterized mapping has to be maintained.

 

UDF:

 

 

 

udf.png

 

As the parameter we use a variable called directory, which supplies the different directories at runtime.

 

In the Signature tab, maintain:

sign.png
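The UDF itself is shown only as a screenshot above. As a rough sketch (an assumption, not the exact code from the screenshot), a parameterized-mapping UDF of this kind could take the imported parameter as an input, restore the "\" characters, and hand the directory onwards, for example via the file adapter's adapter-specific attributes:

public String setTargetDirectory(String directory, Container container) throws StreamTransformationException {
  // "directory" is bound to the imported parameter maintained in the Signature tab;
  // "+" is used in the parameter value and converted back to "\" here.
  String dir = directory.replace('+', '\\');

  // One possible way to pass the directory to the receiver channel: the file adapter's Directory attribute.
  DynamicConfiguration conf = (DynamicConfiguration) container.getTransformationParameters().get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
  DynamicConfigurationKey key = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "Directory");
  conf.put(key, dir);
  return dir;
}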

 

Maintain two operation mappings with parameters.

 

 

ID Part:

 

●      Configure the designed objects, the interfaces created in the ESR, and the applications.

●      Configure the communication channels for the sender and receiver applications.

●      In the sender file channel configuration, specify the source directory and file name information.

●      In the receiver channel configuration, specify the receiver path and file name details, and specify the parameterized information.

●      In the receiver interface, specify the parameter values.

 

 

Create the ICO as per the steps below:

 

For 1st Service interface

service interfac1.png

 

 

 

 

For 2nd Service interface:

 

service interfac2.png

 

After testing you will get

 

Output.png

 

Here are the 2 message IDs

 

Message Log for  d41d97a6-d235-11e1-bba0-000000294da2.

Message log for  d41d97a6-d235-11e1-bba0-000000294da2.

 

 

 

NOTE: This works only for NFS, not for FTP scenarios.

Process Integration 7.11 SOAP, SSL and Payload D/Encryption using SOAPUI


This article is the product of a requirement where we had to secure our communication using SSL, but on top of SSL we were required to secure the payload as well, with public/private key encryption. During my research I realized that the information on this is very limited and scattered, so I decided to write a consolidated article hoping it will help others understand this concept better.

pic1.jpg

Option 1: HTTP – Plain and simple HTTP communication

Option 2: HTTPS Without Client Authentication – choose this option if you are not planning to authenticate your client based on a certificate. It is equivalent to one-way SSL in the generic security world. Note that this is different from authenticating a client using basic user/password; all sender SOAP web services in PI inherently authenticate clients based on user/password.

Option 3: HTTPS with Client Authentication – if you would like to go one step further and authenticate your clients (the callers of your web service) based on certificates, then this is the option. You can see this nice blog, which talks about it in more detail. This option is equivalent to two-way SSL in the generic security world.

 

Also, note that options 2 and 3 encrypt the tunnel between PI and the client, which also protects the user/password, since the tunnel is established before the user/password is sent over it. For the majority of interfaces these options provide ample security. But in certain cases you will want to go a step further and encrypt the payload. Below are some cases where you would want to go beyond SSL and use payload encryption.

  1. The payload contains sensitive PII (Personally Identifiable Information), e.g. credit card number, SSN, DOB, address, etc.
  2. You have multiple hops (systems) between PI and the partner, and they store your data as PI does in its database.
  3. You have multiple hops (systems) between PI and the partner, and they use a non-HTTP protocol like MQ to transfer data.
  4. There is a requirement from the Information Assurance Office that the payload must be encrypted "at rest". "At rest" means data stored in database tables in PI or in MQ. SSL alone won't help here, since SSL only encrypts data "in flight"; in other words, SSL encrypts the communication tunnel between the partner system and PI, but once the data is in PI it is already decrypted and will be stored in the PI database.

 

 

Now that we have cleared up some basic concepts, let's go into developing and testing one web service interface. Below are the interface requirements.

  1. Synchronous
  2. SOAP to RFC
  3. Use Integrated Configuration
  4. Test using SOAPUI
  5. Use public and private keys to encrypt/decrypt
  6. Secure the payload "in flight" as well as "at rest"

 

 

Here are our Assumptions:

  1. The keypair has been downloaded from the NWA keystore.
  2. Firewalls are open between PI and the end system.
  3. A service user ID has been created and is available for testing.
  4. SOAPUI has been downloaded. Note: I am testing using SOAPUI 4.5.1.
  5. The interface has been developed and configured, except for the SOAP sender adapter and the Integrated Configuration -> Inbound Processing tab.

    

One of the nice things about PI 7.11 is that you don't have to use the ABAP stack for certain interfaces; instead, you can use integrated configurations. In our case we are using the SOAP and RFC adapters, which are part of the Java stack. Below is the web service flow.

 

Request: Partner -> (SOAP) PI (RFC) -> SAP

Response: SAP -> (RFC) PI (SOAP) -> Partner

 

I will cover:

  1. Applying SSL for an integrated configuration of a SOAP-to-RFC synchronous web service.
  2. Applying payload encryption and decryption in the integrated configuration.
  3. Configuring SOAPUI to use encryption and decryption. Please note that if you are looking to use certificates for authentication, then this blog will help you.

 

 

1. SOAP Sender Adapter Settings

pic2.jpg

 

2. Integrated configuration “Inbound Processing” tab settings.

pic3.jpg

 

3. Load WSDL in SOAP UI

 

4. Add PI private key to SOAPUI keystore

 

5. Right click on the project and select “Show Project View”

 

6. Click on Keystores tab.

pic4.jpg

 

7. Add certificate to the Outgoing WS-Security Configuration

pic5.jpg

 

8. Add to the Incoming WS-Security Configuration

pic6.jpg

 

9. Configure Outgoing and Incoming WSS on “Request” window.

pic7.jpg

 

10. Hit “Submit Request” green play button on top left corner.

11. You can confirm on the right-hand screen that encryption works.

Principal Propagation using SAP Assertion Ticket CRM -> PO7.31 Single Stack


In a typical integration landscape involving PI, we use a service user for communication between the involved systems. This configuration works most of the time, except in scenarios where we are required to pass the logged-in user's information as user context to PI.

 

In this blog I will try to describe the configuration of principal propagation using SAP assertion tickets between CRM and SAP PO 7.31.

 

Configuring Trust Relationship in CRM for issuing Assertion Ticket:


Call transaction STRUST to check whether a system PSE is maintained.

             By default, a self-signed system PSE should exist, which is sufficient.

 

 

STRUST1.JPG

 

 

 

Call transaction RZ11 and check the parameter login/create_sso2_ticket.

             The default value is '0'; change it to '2'.

 

 

 

parameter.JPG

 

 

Configuring PO7.31 to accept assertion ticket:

NWA -> Configuration->Trusted Systems-> Add Trusted System->By Querying Trusted System

 

Provide the details of the ticket-issuing system (CRM).

Trusted Systems.JPG

 

Once the import is complete, click Finish.

 

Trusted Systems - 2.JPG

 

 

 

Configuring the Login Module Stack:

NWA -> Configuration-> Authentication and Single Sign-On

I have created a custom login module template called "assertionTicket".

The login modules below are added. This configuration means: if the assertion ticket is evaluated successfully, the message is passed to PI; if the assertion ticket is unsuccessful, basic authentication is required.

EvaluateAssertionTicketLoginModule: SUFFICIENT

BasicPasswordLoginModule: REQUIRED

 

To create a template, click Add.

login policy template.JPG


 

 

policy config1.JPG

 

 

 

Add this custom template to SOAP adapter policy configuration

 

 


policy config2.JPG

 

 

 

Enabling Principal Propagation in CRM system:

Execute T-Code SXMB_ADM-> Configure Principal Propagation->Restore

It will create PIPPUSER & RFC destination SAPXIPP<clnt no.>

 

PP config.JPG

 

 

Now add the interface and user ID using the second tab, "Interface Conf. for Transfer of User IDs".

 

You can use * for all the entries to include all interfaces.

 

PP2.JPG

 

 

Configure the RFC destination to P2D as per the screenshot below.

 

RFC destination.JPG

 

 

Set ASMA and Variable Transport Binding in the sender communication channel.

 

comm channel.JPG

 

Now we are ready to execute the interface.

 

Using transaction SE80, I executed one proxy; below is a screenshot from the PI log. Here the SOAP channel is using the custom template I created.

 

nwa log.JPG

 

The user ID is passed to the dynamic configuration of PI.

 

MM Log.JPG

 

 

 

Below is one of the use cases. This UDF gets the logged-in user from the dynamic configuration and fetches that user's last name from the UME.

 

 

// Required imports (maintained in the UDF's import list):
// com.sap.aii.mapping.api.*, com.sap.security.api.*

public String getName(Container container) throws StreamTransformationException {

  AbstractTrace trace = container.getTrace();

  // Read the propagated user from the SOAP adapter's dynamic configuration
  DynamicConfiguration conf = (DynamicConfiguration) container.getTransformationParameters().get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);

  String user = "unknown";
  String name = "initial";

  if (conf != null) {
    DynamicConfigurationKey keyUser = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/SOAP", "SRemoteUser");
    user = conf.get(keyUser);
  }

  // Look the user up in the UME and return the last name
  IUserFactory iuf = UMFactory.getUserFactory();
  try {
    IUser iu = iuf.getUserByLogonID(user);
    name = iu.getLastName();
  } catch (UMException e) {
    name = e.getMessage();
    trace.addDebugMessage(e.getMessage());
  }
  return name;
}

Creating File Name from Mail Attachment using Standard Beans

Business Driver
Purchase and sales information is sent from a partner via e-mail as a CSV attachment.
PI has to poll the e-mail server, retrieve these mails and write the CSV files to a PI server directory with exactly the same name as the mail attachment.
 
Solution
   
This requirement has popped up many times in the SDN forums, and the usual solution is to deploy a custom adapter module. But the same requirement can be met with absolutely no coding or creation of adapter modules.
This can be achieved with 3 standard beans: MultipartHeaderBean, PayloadSwapBean and DynamicConfigurationBean.
In this blog I have highlighted only the specific settings that have to be maintained in the communication channels to achieve our requirement; all the other ESR & ID objects should be created as usual.
Sender Email Channel Settings
In the sender e-mail communication channel, make sure to check the 'Keep Attachments' option, which is required to retain the attachment.
Also check the 'Set Adapter-Specific Message Attributes' option so that the dynamic configuration attributes are available.
Pic 1.1
New Picture.png
Pic 1.2
New Picture (1).png
The MultipartHeaderBean enables us to access the attributes of the other payloads that are appended to the XI message as additional attachments.

In the 'Module' tab add the MultipartHeaderBean with the following parameters:

Parameter Name      Parameter Value
requiredHeaders     All
dcNamespace

Using the PayloadSwapBean we can replace the application payload of the XI message (i.e. the e-mail content) with another payload that is appended to the XI message (i.e. the attachment).

Add the PayloadSwapBean with the following parameters:

Parameter Name      Parameter Value
swap.keyName        Payload-Name
swap.keyValue       MailAttachment-1
Pic 1.3
New Picture.png
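Taken together, the module order in the sender mail channel would conceptually be as follows (module keys and exact module paths are illustrative; both beans sit before the adapter's standard mail module):

1  MultipartHeaderBean   (requiredHeaders = All, dcNamespace as needed)
2  PayloadSwapBean       (swap.keyName = Payload-Name, swap.keyValue = MailAttachment-1)
3  <standard mail adapter module>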

 

 

 

 

Receiver File Channel Settings

 

Now in the Receiver File adapter we are going to use the DynamicConfigurationBean to retrieve the attachment name.

The attachment file name is available in the Dynamic configuration attribute Part[1].Content-Description, which we will write to the PI message interface name.

 

 

To the 'Module' tab add the  DynamicConfigurationBean with the following parameters:

 

Parameter Name      Parameter Value
key.0               write http://sap.com/xi/XI/System/Mail Part[1].Content-Description
value.0             message.interface

 

Pic 2.1

New Picture (1).png

 

 

Using variable substitution, create a variable fname referencing the message interface name (reference: message:interface_name) and access this variable in the File Name field as %fname%.

 

Pic 2.2

New Picture.png

Pic 2.3

New Picture (1).png

 

 

   

Testing

 

 

The configurations are complete. It’s time to test our scenario.

A sample mail is sent to the mail account which the sender e-mail adapter is polling, with the attachment name MyAttachment1.CSV.

 

Pic 3.1

New Picture.png

 

 

 

Integrated configuration was used in this scenario, so we can see the Dynamic Configuration message attributes in message monitoring.

Note that Part[1].Content-Description has been set to MyAttachment1.CSV at runtime.

Pic 3.2

New Picture (1).png

 

 

The payload was swapped successfully, and the attachment data has been set as the main payload, as shown below.

 

Pic 3.3

New Picture.png

 

 

We can see the output file in the output folder path with the same name as our attachment, MyAttachment1.CSV.

 

New Picture (1).png

 

Setup Multiple Operations Scenario in ESR and ID


I came across a situation where a client was looking for one WSDL with multiple operations in it. Let's say a client program, depending on the incoming values from the legacy system, wants to route messages to a particular endpoint which performs some operation on the R/3 side - for example, if the data is specific to insert/update/delete in the legacy system.

We often receive requirements to create separate interfaces for insert/modify/delete, and normally we follow approaches like these:

  1. If the source structures are almost the same, create a generic structure with an indicator field specifying whether the incoming data is an insert/modify/delete.
  2. If the structures are different, import the three structures into the source mapping and do an n:1 multi-mapping using BPM, which hits performance and complicates the solution.

and so on..

 

Likewise, we think about different solutions depending on the requirement. We came across a situation where the client required one WSDL with multiple operations rather than separate WSDLs for all end-to-end scenarios. In the actual scenario, we created a SOAP-to-proxy synchronous call and achieved this solution successfully.

In this weblog, I am creating one asynchronous SOAP-to-file scenario, mainly focusing on how to handle multiple operations in the ESR and ID.

We are working on SAP PI 7.31, SP05.

Below is the development done for this scenario; I am just showing the important steps.

 

ESR –

I assume we are done with the creation of the data types and message types below.

pic1.jpg

Service Interface

Create one service interface with multiple operations in it. Have a look at the screenshot below.

 

pic2.jpg

The picture above depicts the three operations Add/Modify/Delete, with the message type for the Add operation selected. Similarly, we have to provide a message type for each operation created in the last step.

The next step is to create three different message and operation mappings, one for each operation we created in the service interface.

 

pic3.jpg

 

ID-

In the Integration Directory, create one SOAP sender communication channel and one file receiver channel. Once done, create an ICO like this.

 

Inbound Processing – provide the sender communication channel and other values.

 

Receiver – the Receiver tab contains the trick for operation-specific receiver determination. Have a look at the screenshot below.

 

pic4.jpg

 

As we can see in the screenshot above, we have to use the Operation-Specific option to use multiple operations; depending on the operation, we can send the data to multiple receivers (as per our requirements). In our scenario, we are sending all files to the same receiver.

Receiver Interfaces – the next step is to provide the receiver interfaces (operation mappings) for each operation. In the ESR we created one operation mapping per operation; we will use them in this step. Have a look.

 

pic5.jpg

This receiver interface contains the data for the Add operation; similarly, we can assign an operation mapping to each operation.

In outbound processing, provide the receiver file channel and activate these developments. The next step is to download the WSDL and publish it to the Services Registry.

Once you open the downloaded WSDL in SOAP UI, it will look like the screenshot below.

 

pic6.jpg

Similarly, you can check these operations in the WS Navigator too, if you have published the WSDL to the SR.

 

pic7.jpg

This is just an example scenario; we can use this feature to provide different solutions, and it reduces complexity. In the ID part, the operation-specific option provides various gates for enhanced receiver determination and for integrating various things into one.

 

Keep us posted on the requirements where you are using a similar approach.


SWIFT…… What is it?


Hi Experts,

 

 

Today I learnt about the SWIFT Integration Package. This is a package provided by SAP which can be readily imported as ESR content; you just have to install and configure it to be able to use it.

 

Let's start from the basics: what is SWIFT, and what role does it play? Many of you may already be aware of this, but a few may not be; I wasn't, until the need to work on a SWIFT development came up in our project. So I just thought of sharing a few of my thoughts.

In this blog, I will cover the basics of SWIFT; in the next blog, I will cover the technicalities of the SWIFT package, such as the installation and configuration part.

You are all aware of how quickly bank-to-bank transactions take place, no matter in which part of the world you live. Unlike before, nowadays it all happens online and is quite easy. From the comfort of your home, you just need to log in to your internet banking account and make a few clicks, and the money will reach a bank account in another country within a couple of days.
Your bank makes this possible through SWIFT!

SWIFT, or the Society for Worldwide Interbank Financial Telecommunication, is a worldwide network for financial messages through which its members (financial institutions such as banks) can exchange messages related to money transfers on behalf of their customers. The messages are sent securely and reliably to the target member institution.


By the way, SWIFT is just a messaging service; it does not perform the actual cash transfer between banks. For that, the banks that exchange authorization messages for a money transfer must have a banking relationship with each other, and they normally settle the actual cash transfer in parallel.


But the point is, once the authorization for the release of funds is sent through SWIFT, the target bank can release the money to the end user's account, assured of the money from the sending bank. Sometimes the target bank has a branch in the sending bank's country, or vice versa, and they may settle it within a single country.

Thus, the end user receives the money without having to deal with the hassles of exchange-rate conversion and the various other formalities that happen in parallel between the banks, and irrespective of the time all of this takes.

 

SWIFT does not facilitate funds transfer; rather, it sends payment orders, which must be settled via the correspondent accounts that the institutions hold with each other. To exchange banking transactions, each financial institution must have a banking relationship, either by being a bank or by affiliating itself with one (or more).

 

SWIFT means several things in the financial world: a secure network for transmitting messages between financial institutions; a set of syntax standards for financial messages (for transmission over SWIFTNet or any other network); and a set of connection software and services that allow financial institutions to transmit messages over the SWIFT network.

 

Over 8,700 banking organizations, securities institutions and corporate customers in more than 209 countries use SWIFT for exchanging financial messages, making it the most widely used network for international financial messaging. Each financial institution registered with SWIFT is identified by a bank identifier code, popularly known as the 'SWIFT code'.

 

Through SWIFT, the transfer of funds to other countries can be completely automated: the bank's core banking solution can communicate directly with SWIFT to execute the transfer. This makes money transfer more efficient, more secure and cheaper. Thus, SWIFT makes transferring funds across the globe a lot easier.

 

The SWIFT secure messaging network is run from two redundant data centers, one in the United States and one in the Netherlands. These centers share information in near real-time. In case of a failure in one of the data centers, the other is able to handle the traffic of the complete network.

 

SWIFT opened a third data center in Switzerland, which started operating in 2009. Since then, data from European SWIFT members is no longer mirrored to the US data center. The distributed architecture partitions messaging into two zones, European and Trans-Atlantic: European-zone messages are stored in the Netherlands and in a part of the Switzerland operating center, while Trans-Atlantic-zone messages are stored in the US and in a part of the Switzerland operating center that is segregated from the European-zone messages. Countries outside Europe were by default allocated to the Trans-Atlantic zone but could choose to have their messages stored in the European zone.

 

SWIFT provides a centralized store-and-forward mechanism, with some transaction management. For bank A to send a message to bank B with a copy or authorization with institution C, it formats the message according to the standard and securely sends it to SWIFT. SWIFT guarantees its secure and reliable delivery to B after the appropriate action by C. SWIFT's guarantees are based primarily on high redundancy of hardware, software, and people.

 

Hope you got some idea about SWIFT.

 

Regards,

Amarnath

Easy Log Configuration - How To Guide for Seeburger Adapters


Hello,

 

Over the last years, I have encountered several cases where the Seeburger EDI/B2B adapters were correctly installed in a customer environment, but the logs were not configured.

If no configuration is made, the Seeburger adapters will by default write their information into the default trace / application log of the PI system.

 

However, since the adapters provide the option to create a separate log per adapter (e.g. AS2, X400, OFTP, SFTP), I would always recommend doing the log configuration as part of the initial adapter installation/configuration. In case of any trouble, you can then easily view all necessary information and set the trace level per adapter. The logs can also be formatted in different ways for easy viewing.

 

Although the log configuration is explained in the Seeburger adapter documentation, I would like to share the following document, created by Seeburger consultants, which presents all steps of this log configuration in an easily readable way with several additional screenshots.

 

 

The following (temporary) Link allows for viewing and downloading the document (as pdf).

 

Log Configuration - How To.pdf:

https://mft.seeburger.de/portal-seefx/~public/5ccd9eb9-eb89-4f05-8023-8e859bf18297?download

 

Let me know if you encounter any difficulties. Looking forward to your feedback to further update this document.

Multiple Idoc Segment Occurrences to Multiple Files – 1 to N Multimapping


We received a requirement where, based on the multiple occurrences of one segment of an IDoc, we needed to create multiple files as output. Normally we change the occurrence on the receiver side to 0..n to accommodate this kind of requirement. But in this scenario the XSDs used were standard XSDs with occurrence 1, and the client did not want to customize them, as the receiving system is bound to receive one message/file at a time. Because of this, we had to build a 1-to-n multi-mapping scenario based on the multiple segments of the incoming IDoc.

We are not using BPM in this case. We are on PI 7.31 SP05, single stack. Earlier we were on SP01 and ran into a lot of issues with multi-mapping for this scenario; we found out that on single stack, if you are doing multi-mapping, it is better to be on SP04 or above.

 

Assumption –

  1. Communication between SAP ECC and the SAP PI Java stack is in place (RFC destinations for IDoc communication are created in ECC and PI).
  2. All IDoc-related setup is done in PI 7.31 for single stack.
  3. Product, Software Component and Software Component Version are created and imported into the ESR in PI.
  4. Standard XSDs (OAGIS) for the receiver side and IDocs for the source side are already imported into the ESR in PI.
  5. The sender IDOC_AAE and receiver SOAP adapters are already configured with the correct parameters, including the correct URL and action for SOAP.

 

This is an asynchronous IDoc to SOAP requirement in which multiple XML messages are posted to a third-party system by the SOAP receiver channel. The third-party system has an internal queue mechanism to handle each XML message separately.

Below is the development done for this scenario.

 

ESR –

There is no need to create Message Types for the IDoc or the imported XSDs. Only create an inbound service interface for the imported XSD.

Message Mapping –

Create the message mapping as per the screenshot below. We have to assign the source and target messages and then change the occurrence of the target message to 0..unbounded to prepare for multi-mapping.

 

Signature Tab –

pic1.jpg

On the target side of the mapping, PI will add the extra Messages wrapper tags for multi-mapping, as shown below.

pic2.jpg
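
At runtime the mapping result is wrapped in exactly these tags. A payload with two qualifying segment occurrences would look roughly like this (the SplitAndMerge namespace is the standard one added by PI; the inner document name is hypothetical):

<ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
   <ns0:Message1>
      <!-- one target document per qualifying VWERK segment -->
      <TargetDocument>...</TargetDocument>
      <TargetDocument>...</TargetDocument>
   </ns0:Message1>
</ns0:Messages>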

In the IDoc, the VWERK field is part of a repeating segment, and based on it we map the target root node as per the screenshot below. The remaining fields can be mapped one to one depending on the requirement.

pic3.jpg

The context of VWERK is set as below.

pic4.jpg

Now create the operation mapping and change the target-side occurrence to 0..unbounded for multi-mapping.

pic5.jpg

 

ID Configuration –

 

Now create one IDOC_AAE sender communication channel and one SOAP receiver communication channel. Once the channel configuration is done, create an ICO with the receiver and receiver interface for inbound and outbound processing, as in the screenshot below.

pic6.jpg

The screenshot above shows that the multiplicity of the operation mapping for this interface should be 0..unbounded.

 

We are done with the configuration of the 1..n scenario in SAP PI 7.31 SP05.

One Communication Channel for placing a file into two different directories


The purpose of this document is to show the step-by-step process of placing one file in two different directories using a single receiver communication channel in SAP PI 7.1.

Design Process:

·         Open the ESR and ID Components.

·         Create the Data Types, Message Types and Service Interfaces.

·         Create Message Mapping and Operation Mapping.

As per the requirement, we have to create 2 data types, 2 message types, 3 service interfaces (1 for the sender and 2 for the receiver), 1 message mapping and 2 operation mappings.

DATA TYPE:

Create a data type for sender and receiver as below

Sender/Source Data Type: DT_Sender_Test

 

Sender DT.jpg

Receiver/Target Data Type: DT_Receiver_Test

 

Receiver DT.jpg

 

MESSAGE TYPE:

Here we need to create one source and one target message type based on the data types created above.

Source Message Type:

MT_Sender_Test                      

Target Message Type:

MT_Receiver_Test                   

Service Interface:

We have to create one sender service interface and two receiver service interfaces, one for each target directory.

 

Source Service Interface:

 

Service Interface for Receiver 1: SI_Receiver_In

 

Receiver SI1.jpg

 

Service Interface for Receiver 2: SI_Receiver_In_1

 

ReceiverSI2.jpg

 

Message Mapping:

All fields are mapped 1:1 except Row.

Row has the mapping shown below.

Here we use dynamic configuration to pass the directory, replacing "+" with "\" (check the Receiver Interfaces tab of the Integrated Configuration to see the purpose of this replacement).

 

MM.jpg

 

 

Parameter: go to the Signature tab and add the parameter as shown below.

Use the code below in the UDF that sets the dynamic configuration for the directory:

Dynamic Configuration:

try {
    // Access the dynamic configuration of the message currently being mapped
    DynamicConfiguration conf = (DynamicConfiguration) container
        .getTransformationParameters()
        .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);

    // Key for the target directory attribute evaluated by the receiver file adapter
    // (adapter-specific message attributes must be enabled on the channel)
    DynamicConfigurationKey targetDirectorykey =
        DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "Directory");

    // targetDirectory is the imported mapping parameter; its "+" characters are
    // converted back to path separators in the mapping before this UDF is called
    conf.put(targetDirectorykey, targetDirectory);

    return targetDirectory;
}
catch (Exception e) {
    return e.toString();
}

Operation Mappings:

We need to create two operation mappings, because we have created two receiver interfaces: one for placing the file in the first directory and one for the second directory.

Operation Mapping 1:

 

OP1.jpg

 

Operation Mapping 2:

Add the parameter Directory; select Category as Simple, Type as xsd:string and Parameter as Import.

 

OP2.jpg

 

 

Integration Directory:

●      Configure the designed objects and interfaces created in the ESR.

●      Configure the communication channels for sender and receiver.

●      In the sender file channel configuration, specify the source directory and file name.

●      In the receiver channel configuration, specify the receiver path and file name details.

●      Specify the parameter value in the Receiver Interfaces tab.

 

 

Sender Communication Channel:

 

Create the sender file communication channel and assign it in the Inbound Processing tab.

 

Inb.png

 

 

Receiver Tab:

 

Add the receiver in this tab.

 

rec.jpg

 

Receiver Interfaces tab:

 

Add the two operation mappings in this tab: one for the first directory and one for the second. The first directory is the one already maintained in the receiver communication channel; the second directory is passed at runtime via dynamic configuration.

For the second directory, pass the parameter value with "+" in place of each path separator. For example, if your directory is testing/receiver/sap/pi/, you have to pass it as:

++testing+receiver+sap+pi++
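
In the mapping, these "+" placeholders are converted back into path separators before the UDF writes the value to the dynamic configuration (see the UDF above). A standalone sketch of that conversion (hypothetical helper class; use '\\' instead of '/' for Windows-style paths):

public class DirectoryParam {

    // "++testing+receiver+sap+pi++" -> "//testing/receiver/sap/pi//"
    static String toDirectory(String param) {
        return param.replace('+', '/');
    }

    public static void main(String[] args) {
        System.out.println(toDirectory("++testing+receiver+sap+pi++"));
    }
}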

 

RCV Inter.jpg

 

Outbound Processing:

Add the two receiver interfaces and the receiver communication channel in this tab.

 

Outbnd.jpg

 

After testing, you will find the same message ID in the two different directories.

Accessing the ResultList object, applicable from PI 7.1 onwards


This blog aims at explaining the ResultList object/variable and its behavior/API methods with respect to PI versions, and also with respect to some special mapping requirements (I faced one such issue, as per the thread below) wherein the ResultList object needs to be handled within the same UDF.

 

Thread discussion: “Is ResultList not compatible with java.util.List?” -  http://scn.sap.com/thread/3255482

 

As you are aware, up to PI 7.0 the ResultList variable could simply be typecast to java.util.List and then iterated over using the iterator() / listIterator() / get(int) methods of java.util.List:

 

List list = (List) result;
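
After the typecast, the values could be walked with a plain iterator, for example (a sketch of that older style; the loop body is a placeholder):

for (Iterator it = list.iterator(); it.hasNext();) {
    String value = (String) it.next();
    // inspect or modify value as required
}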

 

However, from PI 7.1 onwards the ResultList API seems to have changed, and it is no longer accessible via the methods mentioned above. In PI projects there can be mapping requirements (e.g., the one I faced in my migration project, as per the thread discussion above) wherein the ResultList object needs to be handled within the same UDF. After some deeper analysis, I found another solution besides the one I mentioned in the thread.

 

Code snippet and method details in brief

 

  1. Use the ResultListImpl class to typecast the ResultList object.
  2. Use the get() method to fetch the current value from the ResultListImpl variable, then use the pop() method to move to the next element.
  3. Note that pop() removes the value from the ResultList object, so after all iterations the ResultList is empty.
  4. Hence we add each fetched value into a temporary ArrayList for further manipulation as per the mapping requirement (changes to existing values or addition of new values).
  5. Finally, add all the required values from the ArrayList back to the ResultList object.

 

ResultListImpl rlImpl = (ResultListImpl) result;
ArrayList<String> arrayList = new ArrayList<String>();
String tempStr;
int i = 0;
do {
    tempStr = (String) rlImpl.get();   // fetch the current value
    rlImpl.pop();                      // remove it and advance to the next one
    if (tempStr != null) {
        arrayList.add(tempStr);        // collect it for later manipulation
        i++;
        this.getTrace().addInfo("rlImpl.pop(): " + i + " : " + tempStr);
    }
} while (tempStr != null);

// requirement-specific manipulation logic on arrayList goes here

for (String value : arrayList) {
    result.addValue(value);            // finally repopulate the ResultList
}

 

Maybe there are other, easier ways to access the ResultList object. Please share your valuable inputs.

 

Thanks,

Praveen Gujjeti
