Service bus: data exchange between different systems

Datareon ESB (former name: Axelot Datareon ESB) is designed to build a distributed information landscape for an enterprise. The software ensures that all integrated applications interact through a single center, combining existing sources of information and providing centralized data exchange between different information systems.

The Datareon ESB enterprise service bus ensures the stability and completeness of information exchange, increases the overall performance of the information system, and reduces the labor costs of administering it.

The Datareon ESB software product is officially included in the unified register of Russian programs for electronic computers and databases, which allows it to be purchased by state and municipal institutions.

Functionality

  • Supports various standards and integration scenarios
  • Centralized management of the integration landscape via the Eclipse ecosystem
  • Data transformation (multistep data transformation algorithms with control of various conditions)
  • Transfer of data of any size (vertical and horizontal scaling)
  • Easy integration with products on the 1C:Enterprise 8 platform
  • Ensuring secure data transfer
  • Diagnostics and monitoring of the state of the entire data transmission network

Problems to be solved

  • Data transfer between different information systems (with routing or point-to-point)
  • Formation of a single information space in heterogeneous environments
  • Construction of a distributed system based on the event model in the following options:
    • building applications with end-to-end business processes based on an event model;
    • creation of a system with synchronization of business applications in various information systems
  • Obtaining a scalable enterprise/holding level management architecture
  • Deployment of a data exchange system at the transport level and at the business logic level
  • Delegating the task of building information flows to analytical departments
  • Reducing the overall complexity of the integration circuit and lowering channel bandwidth requirements
  • Increasing the overall stability of transport-layer data transmission
  • Reducing transaction costs when exchanging data between different departments

2017

Axelot Datareon ESB 2.1.0.0

The AXELOT Datareon ESB solution is included in the Gold Application Development competency list, confirming the high quality of the product and its compatibility with Microsoft products.

AXELOT Datareon ESB provides a number of key benefits for businesses:

  • Possibility of integration;
  • Reliability and reusability of resources;
  • Obtaining a scalable enterprise/holding level management architecture;
  • Delegating the task of building information flows to analytical departments;
  • Reducing the overall complexity of the integration scheme and reducing the requirements for channel throughput;
  • Increasing the overall stability of the transport data transfer layer;
  • Reducing transaction costs when exchanging data between different departments;
  • Reducing the overall costs of maintaining and supporting the information system.

Main features of the system:

  • A large number of connectors to various systems: 1C:Enterprise 8, SOAP services, REST services, MS SQL, IBM DB2, Oracle DB, PostgreSQL, SharePoint, OData, TCP, Siemens TeamCenter and others;
  • Plugin mechanism for self-development of connectors;
  • Support for various programming languages and technologies when developing interaction scenarios: 1C:Enterprise 8, JavaScript, T-SQL;
  • Setting up multi-step data transformation scenarios using visual mapping mechanisms and custom XSLT transformations (a minimal sketch of such a transformation step follows this list);
  • Work with various data formats (XML, JSON, XLS, DBF, CSV, Base64 and others);
  • Static and dynamic routing of information packages;
  • High speed of interaction and fault tolerance: reduced requirements for network bandwidth, load balancing, isolation of information domains, ability to monitor the status of integration nodes;
  • Event model support, synchronous and asynchronous calls, guaranteed delivery;
  • Changing the integration scenarios of subscriber systems (export/import, transformation and routing mechanisms) in "hot" mode, without stopping them (including configurations on the 1C:Enterprise 8 platform);
  • Diagnostics and monitoring of all integration processes, the ability to debug and trace information packages.
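As an illustration of the XSLT transformation step mentioned in the list above, here is a minimal Java sketch that applies a custom stylesheet to an XML message using the standard javax.xml.transform API. The file names are hypothetical, and in the product itself such steps are configured visually as part of a multi-step transformation scenario rather than coded by hand.

```java
import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Minimal sketch of one transformation step: apply a custom XSLT stylesheet
// to an incoming XML message. File names are hypothetical; in Datareon ESB
// such steps are normally configured visually, not coded.
public class XsltTransformationStep {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer =
                factory.newTransformer(new StreamSource(new File("order-to-erp.xslt")));
        transformer.transform(
                new StreamSource(new File("incoming-order.xml")),   // message from the source system
                new StreamResult(new File("outgoing-order.xml")));  // message for the destination system
    }
}
```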

Particular attention is paid to the integration of applications on the 1C:Enterprise 8 platform. The delivery includes a special subsystem that can be embedded into any standard configuration on the 1C:Enterprise 8 platform and provides all the mechanisms needed for quick and convenient setup and administration of the integration. "AXELOT: ESB Service Data Bus" interacts with configurations on the 1C:Enterprise 8 platform via SOAP and REST services.

The server components of "AXELOT: ESB Service Data Bus" are developed in C++. Administration and configuration of "AXELOT: ESB Service Data Bus" are carried out in the Eclipse development environment and can be performed alongside the development of systems on the 1C:Enterprise 8 platform in 1C:Enterprise Development Tools. "AXELOT: ESB Service Data Bus" is cross-platform and supports Microsoft Windows and Linux.

AXELOT Datareon ESB is a completely Russian development and is in the process of being included in the unified register of Russian programs for electronic computers and databases, which can be purchased by state and municipal institutions to solve certain problems.

The integration data bus is intended for building composite applications that use various standards and interaction technologies and are built according to different principles. Particular attention is paid to the integration of applications on the 1C:Enterprise platform.

Support for various standards and integration scenarios using the integration data bus

When building composite applications, you quite often have to deal with situations where different types of applications are designed for different standards and integration schemes. It is also not uncommon that changing the integration mechanisms of existing applications is impossible or time-consuming for a number of reasons: no developer available, no source code, and so on. The integration bus allows such applications to be combined into a single whole, hiding the differences in integration behind the mechanisms and settings of standard connectors and bringing application interaction into a single, controlled integration scheme.

The following types of connectors exist in DATAREON ESB:

  • Connector for SOAP services, including web services "1C:Enterprise 8"
  • Connector for REST services, including web services "1C:Enterprise 8"
  • MS SQL connector
  • IBM DB2 Connector
  • Oracle Connector
  • PostgreSQL connector
  • SharePoint Connector
  • OData 1C connector
  • TCP connector
  • Siemens Team Center connector
  • SAP connector and others.

All connectors can be parametrically configured to connect to the source system and interact with it.

The list of available connectors is constantly expanding; a complete list should be checked with DATAREON.

DATAREON ESB contains a mechanism that allows you to develop your own connectors in Java or in .NET platform languages. In this way, any custom scenario for connecting to source systems can be implemented.
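Since the plugin API itself is not described here, the following Java sketch only illustrates the general shape a custom connector might take: a parametrically configured connection plus read and send operations. The interface and class names are assumptions for illustration, not the actual DATAREON ESB API.

```java
import java.util.Map;

// Hypothetical shape of a custom connector; the names below are assumptions,
// not the actual DATAREON ESB plugin API.
interface Connector {
    void connect(Map<String, String> connectionParams); // parametric connection settings
    String receive();                                    // read a message from the source system
    void send(String message);                           // push a message to the source system
    void close();
}

// A trivial in-memory implementation used only to illustrate the contract.
class EchoConnector implements Connector {
    private String last = "";

    @Override public void connect(Map<String, String> connectionParams) {
        System.out.println("Connecting with " + connectionParams);
    }
    @Override public String receive() { return last; }
    @Override public void send(String message) { last = message; }
    @Override public void close() { System.out.println("Connection closed"); }
}

public class ConnectorDemo {
    public static void main(String[] args) {
        Connector connector = new EchoConnector();
        connector.connect(Map.of("host", "example.local", "user", "esb"));
        connector.send("<ping/>");
        System.out.println(connector.receive());
        connector.close();
    }
}
```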

In Moscow there was a 3rd Stroiteley Street from 1958, but in 1963 it was renamed and is now Maria Ulyanova Street; building 25 on that street is a five-story Khrushchev-era block. In Leningrad (St. Petersburg), a 3rd Street of Builders never existed...


I'm talking about application integration again. Today I read the domestic standard for interdepartmental document flow, GOST R 53898-2010. The standard seems "correct": it is written in XML, it has all sorts of useful fields, and all the details are spelled out across 53 pages. I remember that at the end of the last century I strongly advocated the emergence of standards for electronic messages on the pages of Computer magazine, in the article The Internet Factor in the Development of Client-Bank Systems. At the end of the last century everything looked more optimistic than at the beginning of this one. The dot-coms had not yet collapsed, the sky was higher, the grass was greener, social sites were credible, and Fielding had not yet defended his dissertation entitled Representational State Transfer. What has happened in just over ten years, and why does the idea of standardizing the format of an electronic document no longer bother me? Nothing important; the application integration paradigm has simply changed.

How was it before? One bank sent another an electronic message (sorry, I was working at Inkombank at the time, so I'll talk about banks). The second bank received the message and sent back a receipt. All of this was encrypted ten times over and certified with a digital signature; the receipt included the hash of the original message... incoming and outgoing numbers, timestamps (also cryptographic, by the way), and so on. The most discussed question of those years was whether a receipt for the receipt was needed and what to do when a receipt was not delivered. In general, it is frightening to recall the level of complexity reached by the process of synchronizing the states of the many virtual images of one real document. It was easier with paper. At least until the invention of the copier.

Let's return to modern times. If message queues existed in order to deliver messages safely and reliably, then the service bus appeared in order to eliminate message exchange. And don't tell me that this very bus is what exchanges messages. I know that, we do it ourselves, but it is not quite right. The original idea of the service bus, especially the Enterprise Service Bus (ESB), is not about passing messages, but about making sure that no application has to worry about creating its own local copy of an object. The point of the service is that such an object can always be obtained. If you need a document, take its URL and read the document with HTTP GET. If you want to change the document, change it at the same URL with HTTP PUT. POST adds, DELETE removes; what could be simpler? Give your document a URL. Use a WebDAV-style protocol to take the document, work on it, and return it to its place in a new status, to the same URL you took it from, which remains the master copy.
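A minimal Java sketch of that resource-style interaction, using the JDK's built-in HttpClient; the document URL and the content change are made up for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of the resource-style interaction described above:
// read a document with GET, then return the changed version to the same
// URL with PUT. The URL and the edit are hypothetical.
public class DocumentResource {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI documentUrl = URI.create("https://example.local/documents/42");

        // GET: fetch the current (master) copy of the document
        HttpResponse<String> current = client.send(
                HttpRequest.newBuilder(documentUrl).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // PUT: put the modified document back at the same URL
        String updated = current.body().replace("draft", "approved");
        client.send(
                HttpRequest.newBuilder(documentUrl)
                        .header("Content-Type", "application/xml")
                        .PUT(HttpRequest.BodyPublishers.ofString(updated))
                        .build(),
                HttpResponse.BodyHandlers.discarding());
    }
}
```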

Otherwise, it's an apocalypse. Receipts and notifications of status changes are not so bad. The need to interpret document fields identically, and to synchronize reference books for that purpose, is a real problem. The Third Street of Builders in Moscow and the 3rd Street of Builders in St. Petersburg, as we know from the main New Year's film, are far from the same thing. Perhaps the only reference book interpreted identically in different departments is the Gregorian calendar, and even then I'm not completely sure. Another example: my name in my international passport does not match my name on the British visa pasted into that same passport. The passport says MAXIM, the visa says MAKSIM. Because of this I'm afraid to cross the border :) Add to this the differences in the sets of document states in different systems, different transition graphs, compound documents that include sets of other documents, electronic envelopes, and so on, and we get a problem of incredible combinatorial complexity. And what if the document goes not to one department but to several at once? One will execute it, another will reject it, a third will lose it. So the process people will very soon attach a route to this document, laconically expressed in BPMN notation over a dozen pages. Exceptions, returns, cancellations, failed digital signature checks, undelivered receipts, expired keys... The Matrix is resting (but the programmers keep working).

Application integration is an issue that sooner or later confronts the IT department of any organization that has more than one application. Here is a far from complete list of tasks that fall under the concept of "integration":

  • the need to maintain general directories (for example, directories of clients or employees);
  • launching activities in one information system when events occur in another;
  • a business process (an organized sequence of actions performed by both people and information systems) that spans several applications;
  • information interaction with business partners (for example, automatic request for prices for components from the supplier);
  • unification of information exchanges and business processes in company branches.

If this kind of interaction occurs rarely in an enterprise (for example, once a day), it can be organized in a makeshift way, for example by manually exporting data from one application to Excel and loading it into another, or even by entering the information into both systems by hand. However, if the need for information exchange between applications arises many times a day, the question of inefficient use of human resources arises and, as a result, so does the need to automate this procedure.

Point-to-point integration

The task of point-to-point integration is relatively simple. You need to understand how each of the two interacting systems is ready to transmit and receive data, create the technical means to access those interfaces, and implement a mechanism for converting data from the source system's format to the destination system's format. At best, the information systems provide a dedicated integration programming interface (API); at worst, information has to be read from and written to the application database directly. The result is a local integration solution: a separate, custom-built software module with all the ensuing requirements for maintaining it and keeping it up to date.

Point-to-point integration

This is not a big problem as long as there are few point-to-point integrations, one or two. However, practice shows that the number of point-to-point integrations tends to grow, while the quality of managing them rapidly declines. There are many reasons for this: the number of integration modules increases, the developers who wrote a given module leave the organization, data formats in the integrated systems change, and so on. The sad result of the evolutionary development of point-to-point integrations is a highly tangled "mincemeat" of integration interactions between enterprise applications, toward which IT staff most often take the attitude: "as long as it works, better not to touch it." However, this situation suits neither the IT department itself nor the business customers.

Integration "mincemeat"

Single service bus

Having lived through several generations of different approaches to application integration, the global software industry arrived at the concept of a single enterprise service bus (Enterprise Service Bus, ESB). From an architectural point of view, an ESB is a software solution that ensures all integrated applications interact through a single point, in a uniform way, providing developers and administrators with unified, centralized means of developing, testing and monitoring the progress of all integration scenarios.

The main components that make up a modern service bus are:

  • a message broker - a high-performance backbone for exchanging messages in a unified format between applications in real time (a minimal sketch of handing a message to a broker follows this list);
  • adapters - technological adapters and adapters to business systems that interact with applications in a format acceptable to them and present the information in the unified format understood by the broker; the more different adapters a given integration platform provides, the greater the chance that no additional work will be required to create adapters specific to your systems;
  • an environment for developing integration scenarios - the simpler and faster the development of integration scenarios, the smaller the investment in that development and the faster the return on it; a modern integration bus gives the developer visual tools for constructing integration scenarios that in most cases make low-level coding unnecessary;
  • SOA tools - adherence to the principles of service-oriented architecture is the unconditional standard for all integration solutions of the "single service bus" type (as the name suggests); information systems are treated as providers and consumers of services, and all services published on the bus are placed in a single registry with the ability to reuse them and manage the policies associated with them;
  • various control and management tools (audit, logging, centralized monitoring, tracking compliance with service level agreements, etc.).
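To make the message-broker component more concrete, here is a hedged Java sketch of how a subscriber application might hand a message in a unified (XML) format to a broker over the standard JMS API. The queue name, message payload and routing property are assumptions; a real bus would supply its own client libraries or adapters.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Sketch: handing a message in a unified XML format to a message broker via
// the standard JMS API. The ConnectionFactory is supplied by the concrete
// broker; the queue name, payload and property below are assumptions.
public class BrokerPublisher {
    public static void publish(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("esb.inbound.orders");    // hypothetical queue name
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage("<order id=\"42\"/>");
            message.setStringProperty("sourceSystem", "CRM");           // hint the bus could use for routing
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```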

The advantages of using a single service bus include:

  • scaling - the ability to build solutions of any size and load;
  • flexibility - the ability to implement and change integration scenarios without significant involvement of developers;
  • security - built-in authentication and authorization tools provide access control to services at the level of the bus itself, relieving developers of integration scenarios from the task of implementing these mechanisms;
  • the use of open standards - reduces dependence on expensive specialists in proprietary technologies;
  • centralization of control and administration tools - avoids "blurring" the point of responsibility for integration scenarios and ensures real-time monitoring and early warning in case of failures.

One more important requirement for the functionality of an ESB environment is the ability to integrate with external organizations: business partners, suppliers, corporate clients, remote branches. The peculiarities of such integration are the unpredictable quality of channels, the absence of delivery guarantees and poor readiness for integration as such; as a rule, a partner organization offers a very limited range of data exchange formats. In this case the integration bus must contain a tool for building B2B interaction that allows information exchange according to open (including industry) standards, ensures guaranteed delivery, provides the means to configure the exchange for a specific business partner and, of course, works in full accordance with the principles of the integration platform itself, isolating the developer of integration scenarios from the technical details of interaction with the partner.

Enterprise Service Bus

Business process management

A significant proportion of integration scenarios involve not only applications acting as sources or receivers of information, but also people: employees of the organization who perform various tasks or make decisions. In this case we can speak of going beyond "pure" integration and of a new entity coming into focus, business processes, and of new functionality required of the integration platform: business process management (Business Process Management, BPM). If there are BPM requirements, the integration platform must provide the developer with:

  • visual design tools for business processes - ideally these tools can be used by people far removed from IT, such as business analysts or methodologists; the ability to transfer business process models from specialized modeling tools into the development environment is also extremely useful; the same tool should make it possible to design task forms for process participants while shielding developers from programming as much as possible;
  • a business process execution environment - a special engine that processes business rules, transfers tasks between users and information systems in accordance with the developed business process models, and handles exceptional situations (for example, when an executor exceeds the time allotted for completing a task);
  • a portal for business process participants - a specialized portal that allows users to launch processes, participate in them, monitor the progress of running processes and perform administrative actions within the rights granted to them;
  • monitoring and controlling tools - the ability to analyze the flow of business processes both in real time and retrospectively is an important part of any BPM platform.

At the moment, many software vendors tend to combine the BPM environment and the integration bus into a single middleware platform, removing the strict separation that existed for several years between BPM systems and application integration tools. This approach is very progressive. Some vendors go even further and add professional business process modeling tools to the platform; Software AG is pioneering this with a solution that combines the renowned ARIS Platform modeling tool and the webMethods integration/BPM environment.

Comprehensive use of the integration platform

Offers on the market

Currently, there are three groups of software offerings for building an ESB. These groups differ both in price and in the functionality offered.

The first group comprises offerings from companies whose products lead analyst rankings in all the categories mentioned in this article (ESB, SOA Governance, BPM, B2B). This group includes:

  • IBM with its WebSphere product line;
  • Software AG with webMethods integration platform;
  • Oracle with a whole series of proposals;
  • Tibco with Business Integration line.

In principle, those who do not like compromises can choose any of these manufacturers: all of the listed companies offer full-fledged product lines (although in Oracle's case it is not always clear which product is being discussed, since after acquiring a number of companies Oracle often offers several products at once, not always sufficiently integrated with one another). Tibco stands somewhat apart, since it is much smaller than the other three members of this group, which may raise some doubts about its stability. Software AG is not yet a well-known manufacturer on the Russian market, but the webMethods platform, today the company's key offering, has great potential. IBM and its products are already known and used by many enterprises, although some of them have complaints about the cost of implementing and maintaining the system.

The second group comprises companies that focus mainly on "pure" ESB functionality and have achieved success there. This group includes Sun (GlassFish), Progress (Sonic) and Fujitsu.

Offerings from these companies are a good choice if you do not intend to expand the scope of your platform toward BPM and/or B2B. Otherwise, you risk being left with insufficiently developed functionality and significantly increasing your costs to bring it up to your needs.

The third group is the most numerous and includes all offerings not covered by the previous two groups. Listing every ESB offering in this article is pointless; such a list can be obtained from any search engine. If your integration budget is limited and you are inclined to experiment, you may well try your luck with any of them. However, the risks related to insufficiently developed functionality, as well as possible problems with reliability, technical support and the product's development prospects, are yours to assume.

Conclusion

In conclusion, I would like to give readers a few simple tips on choosing an integration bus:

  • think about building an integration solution before application interoperability issues push you to the wall; the larger the rubble, the harder it is to clear;
  • Choose your platform carefully. Look for a vendor that satisfies you in all respects, since now there is plenty to choose from. You should be interested in both the technological parameters of the platform and the methodological aspects of implementation;
  • think about the future. The functional requirements that you realize now may change significantly in a year, and if the platform does not satisfy them, then you will have to “move” to another. And one move, as you know, is equal to two fires.
    With this article I would like to open a series dedicated to IBM WebSphere ESB (hereinafter referred to as ESB) in the context of development for this product. First of all, we need to become more familiar with this kind of technology.
    An enterprise service bus is connecting middleware that provides centralized, unified, event-oriented messaging between different information systems based on the principles of service-oriented architecture.
    Of course, you can build a corporate system on this approach without special software (although you may still have to develop something shared) and call the resulting product a service bus. But the product from IBM offers not only a ready-made apparatus for centralized messaging and control of this process, but also a full set of capabilities for developing flexible service-oriented applications specifically for the ESB. As a result, we can highlight the following capabilities and advantages of IBM WebSphere ESB:

    • Order and uniformity of architectural connections
    • Centralized management
    • Server side application configuration
    • Implementation of Service Component Architecture (SCA) technology in the spirit of service-oriented architecture principles
    • Protocol independence of the developed program code
    • Extensive bus and application configuration options
    At the same time, the ESB provides transaction control, data conversion, security and guaranteed delivery of messages. Access to all services through a single point makes it possible to configure service interaction centrally. Failed events can also be managed centrally for bulk error handling.
    The classic ESB deployment topology is a cluster, which provides horizontal scalability and fault tolerance. According to the official recommendations, adding cluster members improves performance more effectively than increasing the power of a server in a stand-alone topology. In addition, the cluster can be restarted (or part of it can fail) without a service outage.
    Typically the ESB is used as the service layer in IBM BPM, but it can also play the leading role in building a model of interaction between corporate systems as a powerful integration device (here ESB means the add-on to IBM WebSphere Application Server).
    This, in fact, is what is required of the ESB, since it is a "service collection point": if you need a service that will work with other services (possibly external ones), then the most logical place to integrate these services is on the ESB. External or heterogeneous services can be wrapped in an ESB service. Let us briefly illustrate the convenience of a "single home" for services:

    Order
    The bigger the system, the more important order and uniformity are within it. A complex of systems in a large enterprise can certainly be called a big system. Of course, you can always find an administrator who keeps in his head a diagram of how hundreds of servers interact, or a pile of unrelated documentation volumes for each software module describing what interacts with what and how.


    But it is much easier to have a service (ESB) that allows all interactions to occur through itself. With this approach, part of the interaction architecture in any subsystem is already clear - there is no mess in the connections between systems, servers and applications: everything is connected to the ESB and the ESB is connected to everything.

    Centralized management
    It is always more convenient to configure systems centrally - whether it is configuration, adaptation to moving servers, ensuring fault tolerance, load distribution, error handling, or monitoring and analytics.


    For example, when a database server is moved, you do not need to go into the configuration of every application server, let alone the settings of individual applications: it is enough to have a single environment variable on the ESB that specifies the database address, and the change then has to be made in just one place.
    Or, if one of the external systems was unavailable for a long time and not a single request to it should be lost, you can use the failed event service to "replay" undelivered messages at a convenient time.
    If you need to regulate the number of simultaneous requests to a system, monitor those requests, analyze the load or look for bottlenecks, you go to the messaging control center: the ESB server console.
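A hedged sketch of the database example above: instead of hard-coding the address, the application looks up a data source that is defined centrally on the server, so moving the database means changing one server-side resource definition. The JNDI name is an assumption.

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Sketch of centralized configuration: the application looks up a data source
// defined on the server (for example, in the server console) instead of
// hard-coding the database address. The JNDI name below is an assumption;
// moving the database then means changing one server-side resource, not
// every application.
public class CentralDataSourceLookup {
    public static Connection openDatabaseConnection() throws Exception {
        InitialContext context = new InitialContext();
        DataSource dataSource = (DataSource) context.lookup("jdbc/CrmDatabase"); // hypothetical name
        return dataSource.getConnection();
    }
}
```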

    Server side configuration
    A "single home" for services achieves several useful goals from a configuration perspective. First, it enables configuration reuse (similar to the code and module reuse that is so valuable in SOA), since different modules and applications can share the same database connection parameters, resources, authentication parameters, environment variables, and so on.


    Second, with server-side configuration it is the application's operating environment that largely shapes its behavior, which makes it possible to move applications between different environments (test and production), tune them and even fix bugs without changing the application itself.

    Taking advantage of all these benefits, applications become chameleon-like: they are so flexible that they become part of the environment in which they run, while still delivering the functionality that matters.

    But the flexibility of applications running on IBM WebSphere ESB is not limited to the environment in which they run. Development capabilities make a huge contribution as well. Since systems not only need somewhere to run but also need to be developed and refined, the following points should not be missed:

    SCA
    This architecture is based on the principle that a component provides its functionality as a service available to other components. Within a module, components are software blocks (Java code) that fully implement certain functionality described by a corresponding interface. The execution logic is implemented by linking components into a structure based on interfaces and references (partner references).

    This module structure is very convenient to develop, test, extend, change and maintain. The atomicity of the functionality implemented in the components lets you operate on components as whole units without descending to the code level. It is also logically necessary, because components execute in a transactional context.
    Each component has one or more interfaces whose implementation it provides. Thus, when interconnecting components there is no need to know their internal details: it is enough that they implement the required interfaces.
    Using this architecture, it is also possible to solve all tasks that require parallel work, without “manual” thread management (for example, you can make asynchronous calls to several components with a delayed response).
    Non-Java components, such as the Export and Import types, let you expose services for external use or consume external services, respectively; the Mediation Flow component provides low-level access to the messages exchanged between other components and allows various transformations when working with heterogeneous interfaces.
    In addition to interfaces, IBM's business object framework provides very useful capabilities. Business objects (BOs), described by XSD schemas, are used as data transfer objects in interfaces, both between components and for communication between modules. They are directly integrated, for example, into the WSDL description of web services. That is, if module "A" exposes its functionality as a web service, module "B" only needs to connect the interface and the ready-made BOs to work fully with that service, without creating any additional Java objects for data transfer. BOs are also convenient when exchanging data with a database, if that data is used by other components (this, of course, goes against the DAO pattern, but it eliminates unnecessary Java objects and the rewriting of data back and forth).
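To make the component/interface/reference idea concrete, here is a plain-Java sketch, not the actual WebSphere ESB artifacts (there, interfaces are WSDL-based and the wiring between components is done visually): one component exposes its functionality through an interface, and another consumes it only through a reference typed by that interface.

```java
// Plain-Java sketch of the SCA idea described above, not actual WebSphere ESB
// artifacts (there, interfaces are WSDL-based and components are wired visually).
// One component exposes functionality through an interface; another uses it only
// through a reference to that interface, without knowing its internals.

// Interface whose implementation the providing component offers.
interface CreditCheck {
    boolean approve(String customerId, double amount);
}

// Component implementation: a self-contained block of logic behind the interface.
class SimpleCreditCheck implements CreditCheck {
    @Override public boolean approve(String customerId, double amount) {
        return amount < 10_000;   // illustrative rule only
    }
}

// Consuming component: holds a "partner reference" typed by the interface.
class OrderProcessing {
    private final CreditCheck creditCheck;   // reference, satisfied by the wiring

    OrderProcessing(CreditCheck creditCheck) { this.creditCheck = creditCheck; }

    String process(String customerId, double amount) {
        return creditCheck.approve(customerId, amount) ? "ACCEPTED" : "REJECTED";
    }
}

public class ScaStyleWiring {
    public static void main(String[] args) {
        // The "wiring": connecting the reference to a concrete component.
        OrderProcessing orders = new OrderProcessing(new SimpleCreditCheck());
        System.out.println(orders.process("C-42", 2500.0));
    }
}
```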

    Protocol-independence of program code
    As you can see, protocol independence of the code is achieved by using the Export and Import components. Since communication with these components occurs via interfaces and references, the program code is completely independent of the protocol used for interaction. The same functionality can easily be made available over any number of supported protocols and through any required interfaces. The following figure shows an export with an SCA binding being added to a component that already exposes its interface over HTTP, JMS and as a web service.


    The advantages are obvious - flexibility, versatility, code reuse, speed of development and modification.
    By the way, SCA binding uses a special protocol and is intended for communication between modules within the same server/cluster. Communication through this binding is less resource-intensive and faster than other protocols.

    Configuration
    Server and application configuration is carried out through the server's IBM console.
    The ESB, like IBM WebSphere in general, has quite a lot of specific capabilities and artifacts. For example, with the same imports and exports you can reconfigure the endpoints of the corresponding services on the fly. For service calls, you can configure policy sets with various rules (for example, enabling support for the WS-AT mechanism, which allows a web service to be called in the same transaction in which the client is running, although transactionality is a topic for a separate article), set authentication parameters, attach certificates, and so on.
    Configuration also covers mechanisms for responding automatically to exceptional situations (for example, automatically retrying component execution after errors). Component tracing can be enabled and logging levels changed on the fly. A failed event management service is also available and can be used deliberately for bulk error handling.
    And, of course, you can configure a lot of other things according to the Java2EE specification, which is, sometimes quite strictly, implemented in the IBM Application Server.

    All of the above establishes the ESB as a convenient, powerful and flexible integration device, albeit one that is not always easy to learn. From here, it only remains to learn how to use it.
