Practical example: more than 500 target systems

Today we have selected an interesting enterprise integration example straight from practice. A customer runs decentralized information systems (a dedicated IS for each branch) that must operate in 24/7 mode, i.e. around the clock. The systems have to work independently in the individual branches, and seamless, reliable data distribution is a matter of course. The customer’s requirement: whenever key data changes, the central information system must send the updated information to every branch.

When a large number of systems communicate with one central system, the integration scenario typically works the other way around. Ideally, the central system exposes its web services, for example via the SAP Cloud API Management tool. In that case it is sufficient to create a single service on the central system’s side, and the other systems that want to retrieve data connect to that service. The problem is that the client systems do not know when the data changes on the central system’s side, so they would have to ask the central system periodically whether any change has occurred. However, this was not how the customer wanted to handle it: they wanted the central system to send data only when it changed.
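
For illustration, here is a minimal Java sketch of what the pull-based alternative would mean for each branch: a client that polls the central service on a schedule and checks whether anything has changed. The URL, the polling interval and the use of an ETag header for change detection are illustrative assumptions, not details of the customer’s systems.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

/**
 * Minimal sketch of the pull-based alternative: each branch system would have
 * to poll the central service on a schedule to detect changes. The endpoint
 * URL, interval and change-detection header are hypothetical placeholders.
 */
public class BranchPollingClient {

    private static final String CENTRAL_SERVICE_URL = "https://central.example.com/api/keydata"; // hypothetical
    private static final Duration POLL_INTERVAL = Duration.ofMinutes(5);                          // hypothetical

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String lastSeenVersion = null;

        while (true) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(CENTRAL_SERVICE_URL)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            // Naive change detection: compare a version header against the last seen value.
            String version = response.headers().firstValue("ETag").orElse("");
            if (!version.equals(lastSeenVersion)) {
                lastSeenVersion = version;
                System.out.println("Central data changed, processing update...");
            }

            Thread.sleep(POLL_INTERVAL.toMillis()); // most polls find no change -> wasted traffic
        }
    }
}
```

The drawback is visible in the last line: most polling cycles find no change at all, which is exactly the traffic the customer wanted to avoid.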

If the central system has to trigger the communication and actively push data to dozens or hundreds of systems, a simple task turns into a much more complex one. Approached traditionally, we would have to set up a separate communication channel for each target system, each with its own URL, its own certificate, and so on. And would we really have to do all of this 500 times? Such a setup would require a disproportionate amount of work, and every change would mean repeated adjustments to the integration platform’s configuration. Any key change, such as a new certificate or domain, would have to be applied across all the interfaces. Clicking through hundreds of settings would take enormous effort and, logically, produce a high error rate.

So we did not take this traditional route, we did some thinking 😊 and did the following. To simplify the task, we took advantage of the integration platform’s dynamic parameters functionality. In the end we created only a single integration scenario with a dynamically generated URL. When properly configured, this functionality can, for example, insert the IP address of the decentralized system at a particular branch into the destination URL based on a suitably chosen identifier.
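
To illustrate the principle independently of any specific integration platform, the following Java sketch shows the core idea behind the dynamic endpoint: a single dispatcher reads a branch identifier, looks up the branch’s address in centrally managed configuration and assembles the destination URL at runtime. The branch IDs, IP addresses, URL path and payload are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

/**
 * Minimal sketch of the dynamic-endpoint idea: one flow serves every branch
 * because the destination URL is built at runtime from a branch identifier.
 * Branch IDs, addresses, the URL path and the payload are hypothetical.
 */
public class DynamicBranchDispatcher {

    // In the real solution this mapping lives in centrally managed configuration,
    // not in code; a hard-coded map keeps the sketch self-contained.
    private static final Map<String, String> BRANCH_ADDRESSES = Map.of(
            "BRANCH-001", "10.0.1.15",
            "BRANCH-002", "10.0.2.15"
    );

    private final HttpClient client = HttpClient.newHttpClient();

    public void dispatch(String branchId, String payload) throws Exception {
        String address = BRANCH_ADDRESSES.get(branchId);
        if (address == null) {
            throw new IllegalArgumentException("Unknown branch: " + branchId);
        }

        // The destination URL is assembled dynamically from the branch identifier,
        // so one scenario replaces 500 separately configured channels.
        URI target = URI.create("https://" + address + "/branch-api/keydata");

        HttpRequest request = HttpRequest.newBuilder(target)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(branchId + " -> " + target + " : HTTP " + response.statusCode());
    }

    public static void main(String[] args) throws Exception {
        new DynamicBranchDispatcher().dispatch("BRANCH-001", "{\"keyData\":\"changed\"}");
    }
}
```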

In addition to the standard integration scenario setup, a little custom development was needed in the end. We created a small custom application to manage the individual communication parameters, which simplifies the management of branch-specific settings.
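
As a rough illustration of what such a parameter-management application keeps track of, the sketch below stores one set of connection settings per branch and lets the integration scenario look them up by branch identifier. The field names and the in-memory storage are assumptions made for the example; the source does not describe the real application’s data model.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Minimal sketch of a branch parameter store: one record of connection settings
 * per branch, editable in one place and read by the integration scenario at
 * runtime. Field names and in-memory storage are illustrative assumptions.
 */
public class BranchParameterStore {

    /** Connection settings for a single branch. */
    public record BranchParameters(String branchId, String host, int port, String certificateAlias) {}

    private final Map<String, BranchParameters> store = new ConcurrentHashMap<>();

    public void save(BranchParameters params) {
        store.put(params.branchId(), params);
    }

    public Optional<BranchParameters> find(String branchId) {
        return Optional.ofNullable(store.get(branchId));
    }

    public static void main(String[] args) {
        BranchParameterStore parameters = new BranchParameterStore();
        parameters.save(new BranchParameters("BRANCH-001", "10.0.1.15", 443, "branch-001-cert"));
        parameters.find("BRANCH-001")
                  .ifPresent(p -> System.out.println("Endpoint: https://" + p.host() + ":" + p.port()));
    }
}
```

Keeping these settings in one place is what allows a key change, such as a new certificate or domain, to be made once instead of hundreds of times.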

The final solution is very transparent, and its settings can be managed centrally. In addition, it is easy to monitor and easy to extend further.
