MuleSoft Integrations Design Patterns

Author: Ashish Shivatare

Why do we need Integration Design Patterns?

Integration Design Patterns provide a set of reusable solutions to common integration problems that arise when connecting different systems or applications. These patterns help to simplify the integration process, reduce development time and cost, and improve the overall reliability and performance of the integrated system.

There are many benefits to using Integration Design Patterns, including:

  1. Standardization: Integration patterns provide a standardized way of integrating different systems or applications, making it easier for developers to understand and work with the system.
  2. Reusability: Integration patterns can be reused across different projects, saving development time and cost.
  3. Flexibility: Integration patterns can be customized to meet the specific requirements of different integration scenarios.
  4. Scalability: Integration patterns can be designed to support scalable and high-performance integration solutions.
  5. Reliability: Integration patterns help to ensure the reliability of the integrated system by providing proven solutions to common integration problems.

Integration design patterns prove to be essential for building robust and efficient integration solutions that can meet the needs of modern businesses.

This blog covers the most common integration patterns used in Mule, the integration problems each one solves, and how to apply them when designing solutions to complex integration challenges.

1. API-led connectivity pattern

2. Batch Processing

3. System Synchronization

4. Scatter Gather

5. Messaging pattern

  • Reliability pattern
  • Publish and subscribe pattern

6. Large File Processing

These are the most common patterns used to solve complex integration problems.

1. API-led Connectivity Pattern: API-led integration is a methodology for connecting and integrating different applications and systems using APIs (Application Programming Interfaces). This approach enables organizations to create a flexible, scalable, and reusable integration architecture that can support their evolving business needs.

API-led integration consists of three layers: System APIs, Process APIs, and Experience APIs.

  • System APIs: These are the lowest-level APIs that connect to individual systems or applications. They provide a standardized interface for accessing the capabilities of the underlying systems, making it easier to integrate them into the overall architecture.
  • Process APIs: These APIs orchestrate multiple system APIs to create new business processes or workflows. They are responsible for defining the sequence of steps required to complete a task or transaction.
  • Experience APIs: These are the highest-level APIs that provide a unified interface to external consumers, such as mobile apps or web applications. They aggregate data and services from multiple underlying systems, providing a consistent user experience across different channels.

API-led integration patterns provide a modular and decoupled architecture that enables organizations to easily add or modify systems and services as their business needs evolve. This approach also supports a culture of reuse, where common APIs can be developed once and reused across different projects and initiatives.
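The three layers above can be sketched as plain functions. This is a hedged, language-agnostic illustration of the layering idea only; the system names (CRM, ERP) and field names are hypothetical stand-ins, not real Mule APIs.

```python
# Hypothetical sketch of API-led layering: System -> Process -> Experience.

def crm_system_api(customer_id: str) -> dict:
    """System API: standardized access to a single backend (e.g. a CRM)."""
    return {"id": customer_id, "name": "Acme Corp"}  # stand-in for a real call

def erp_system_api(customer_id: str) -> dict:
    """System API: standardized access to another backend (e.g. an ERP)."""
    return {"id": customer_id, "open_invoices": 2}

def customer_process_api(customer_id: str) -> dict:
    """Process API: orchestrates multiple System APIs into one business view."""
    profile = crm_system_api(customer_id)
    billing = erp_system_api(customer_id)
    return {**profile, **billing}

def mobile_experience_api(customer_id: str) -> dict:
    """Experience API: shapes the Process API result for one channel."""
    data = customer_process_api(customer_id)
    return {"displayName": data["name"], "invoices": data["open_invoices"]}

print(mobile_experience_api("42"))
```

Note how the Experience API never talks to a backend directly: swapping the CRM only affects the System API layer, which is the decoupling the pattern is after.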

Scenario 1: Web Applications and Mobile Applications requesting aggregated data from the same set of Systems (Ex. Salesforce, NetSuite, Database)

Scenario 2: Design an API Strategy where multiple internal systems across the organization must interact with a legacy system database. No validations and transformations are required.

2. Batch Processing

Batch processing is a method of processing data where a large amount of data is collected, processed, and stored in batches at once. This approach is used when data needs to be processed in bulk or when processing real-time data is not necessary.

a. The Batch processing integration pattern is a design pattern that is used to integrate batch processing systems with other applications or systems. This pattern is useful when data needs to be moved from one system to another in a batch or when data needs to be processed in batches.

b. There are several ways to implement batch processing integration patterns:

  • File-Based Integration: This pattern involves exchanging files between systems or applications. Data is stored in a file format that can be easily transferred between systems. The receiving system processes the file and produces an output file that can be sent back to the sending system.
  • Database-Based Integration: This pattern involves exchanging data between systems or applications using databases. Data is stored in a database and can be accessed by other systems using standard database protocols.
  • Message-Based Integration: This pattern involves exchanging messages between systems or applications. Messages are sent between systems using message queues, which ensure that messages are delivered reliably.
  • REST-Based Integration: This pattern involves exchanging data between systems or applications using REST APIs. REST APIs use standard HTTP protocols to send and receive data.

c. The Batch processing integration pattern can be useful in many different scenarios, such as data warehousing, ETL (extract, transform, load) processes, and data migration. It can also help to reduce the processing time and resources required to process large amounts of data.

Scenario: An organization needs to pick up daily employee changes from a database and synchronize them with Salesforce (the target system). These changes can be synced on a daily basis and need not be real-time. There are over 200K employee records in the DB.
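The core of the scenario above is splitting a large record set into fixed-size batches, as a Mule batch job would. The sketch below is a hedged illustration of that chunking idea; the record source and the bulk-upsert step are simulated placeholders.

```python
# Hypothetical sketch: process a large record set in fixed-size batches,
# analogous to a Mule batch job splitting 200K DB rows into chunks.

from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

def sync_employees(records, batch_size=100):
    pushed = 0
    for batch in chunked(records, batch_size):
        # In a real flow this would be a bulk upsert to the target system.
        pushed += len(batch)
    return pushed

print(sync_employees(range(250), batch_size=100))  # → 250
```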

3. System Synchronization – The system synchronization integration pattern is a software design pattern that is used to ensure that multiple systems or components operate in a coordinated and synchronized manner. This pattern is used to ensure that data, processes, and actions between systems are consistent and up-to-date. The following are some of the common integration patterns used for system synchronization:

  • Request-Reply: In this pattern, a system sends a request to another system or component and waits for a response. This pattern is used when a system needs to ensure that it receives a response from the other system before proceeding.
  • Polling: In this pattern, a system periodically checks another system or component for updates or changes. This pattern is useful when real-time synchronization is not required, and it is sufficient to periodically check for changes.
  • Event-Driven Architecture: In this pattern, events are used to trigger actions in multiple systems or components. This pattern is useful when multiple systems or components need to respond to a particular event or change in real-time. The publish-subscribe pattern can be one such example of Event Driven where a system or component publishes an event, which is then received by one or more subscribing systems or components. This pattern is useful when multiple systems or components need to be notified of a particular event or change.
  • Data replication: In this pattern, data is replicated between systems or components to ensure that they are in sync. This pattern is useful when multiple systems need to access the same data, and it is important that the data is consistent across all systems.
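Of the patterns above, Polling is the easiest to show in a few lines. The sketch below is a hedged illustration using a "watermark" (the last-seen update timestamp) so that each poll only picks up records changed since the previous one; the record shape is an assumption for illustration.

```python
# Hypothetical sketch of the Polling pattern with a watermark: each poll
# returns only records updated after the last-seen timestamp.

def poll_changes(source, watermark):
    """Return records updated after `watermark`, plus the new watermark."""
    fresh = [r for r in source if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

records = [
    {"id": 1, "updated_at": 10},
    {"id": 2, "updated_at": 25},
]
fresh, wm = poll_changes(records, watermark=15)
print(len(fresh), wm)  # → 1 25
```

On the next poll, passing `wm` back in guarantees record 2 is not processed twice.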

Scenario: An organization needs to pick up closed opportunities from Salesforce and generate the invoice and sales order in Netsuite. This process should be near real-time and an invoice cannot be generated unless the customer/client is added to the Netsuite system.

4. Scatter-Gather – The Scatter-Gather integration pattern is used in enterprise integration to aggregate results from multiple sources into a single response. It is also known as the Fan-Out/Fan-In or Map-Reduce pattern.

  • In this pattern, a message or request is sent to multiple recipients or systems, and each recipient processes the message independently, producing a partial response. The responses are then collected and aggregated to form a single response that is returned to the original sender.
  • The Scatter phase is responsible for distributing the message to multiple recipients or systems. Each recipient performs their work independently, without any knowledge of the other recipients. This phase can be implemented using a message broker or a multicast protocol.
  • The Gather phase is responsible for aggregating the responses received from each recipient. The aggregation can be performed using different strategies such as summing, averaging, or selecting the highest or lowest value. Once the aggregation is complete, the result is sent back to the original sender.
  • The Scatter-Gather pattern is useful in situations where a request needs to be processed by multiple systems or services in parallel, and the responses need to be combined into a single result. This pattern is commonly used in distributed systems, where multiple services must be coordinated to complete a task.
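The scatter and gather phases described above can be sketched with a thread pool: the same request fans out to several services in parallel, and the partial responses are aggregated into one result. This is a hedged illustration only; the provider names, rates, and the "pick the lowest quote" aggregation strategy are assumptions for the example.

```python
# Hypothetical sketch of Scatter-Gather: fan a request out in parallel,
# then aggregate the partial responses into a single result.

from concurrent.futures import ThreadPoolExecutor

def quote_from(provider: str, amount: float) -> dict:
    # Stand-in for an independent remote call to one recipient system.
    rates = {"providerA": 1.10, "providerB": 1.08, "providerC": 1.12}
    return {"provider": provider, "quote": amount * rates[provider]}

def scatter_gather(amount: float) -> dict:
    providers = ["providerA", "providerB", "providerC"]
    # Scatter phase: each recipient works independently, in parallel.
    with ThreadPoolExecutor(max_workers=len(providers)) as pool:
        results = list(pool.map(lambda p: quote_from(p, amount), providers))
    # Gather phase: aggregate - here, select the lowest quote.
    return min(results, key=lambda r: r["quote"])

print(scatter_gather(100.0))
```

Other aggregation strategies (summing, averaging, merging payloads) slot into the gather step without touching the scatter step.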

5. Messaging Integration Pattern

a. Reliability Pattern

1. Reliability integration patterns are a set of design patterns that help ensure the reliability of an application’s integration points. 

2. These patterns provide solutions for common challenges related to integration reliability, such as failures, timeouts, retries, and error handling.

3. Some common reliability integration patterns include:

  • Bulkhead Pattern: This pattern is used to isolate failures in a distributed system. It works by partitioning the system into multiple isolated components, each with its own resources and processing capacity. This way, if one component fails, it does not affect the others.
  • Circuit Breaker Pattern: The Circuit Breaker pattern can be applied to protect the web application from sudden spikes in traffic or server downtime. A circuit breaker component can be inserted between the web application and the backend services it relies on, such as a database or payment gateway. The circuit breaker can monitor the performance of these services and, if it detects any failures, it can open the circuit and stop sending requests to the service until it recovers. This helps prevent cascading failures and improves the overall reliability of the system.
  • Retry Pattern: The Retry pattern can be applied to improve the availability of backend services that are prone to transient errors. For example, if a database query fails due to a temporary network issue, the Retry pattern can be used to retry the query after a short delay. This can reduce the number of failed requests and improve the responsiveness of the system.
  • Timeouts Pattern: The Timeouts pattern can be applied to prevent long-running requests from tying up system resources and causing performance issues. For example, if a user uploads a large file that takes a long time to process, the system can use timeouts to limit the time the request is allowed to run. If the request exceeds the timeout, it can be canceled, and the user can be notified of the failure.
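The Retry pattern above is the simplest to sketch. The example below is a hedged illustration of retry with exponential backoff, similar in spirit to Mule's until-successful scope; the flaky operation is simulated, and the attempt count and delays are arbitrary example values.

```python
# Hypothetical sketch of the Retry pattern with exponential backoff.

import time

def retry(operation, attempts=3, base_delay=0.01):
    """Call `operation`; on failure wait (base_delay * 2**n) and retry."""
    for n in range(attempts):
        try:
            return operation()
        except Exception:
            if n == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** n))

calls = {"count": 0}

def flaky():
    """Simulated transient failure: fails twice, then succeeds."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(retry(flaky))  # → ok
```

The backoff matters: retrying immediately against an already-struggling service can make the outage worse, which is also why this pattern is often paired with a circuit breaker.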

4. By using these reliability integration patterns, developers can ensure that their applications can handle integration failures and provide a more robust and reliable user experience.

Scenario 1: Transactions and Payment Services with no loss of data – Design an API strategy wherein the transactions for Customer Onboarding and Admission-fee payment complete with no loss of data. Because the customer is new, they should not have to submit the request twice, and every transaction matters.

b. Publish and Subscribe Integration Pattern (Pub-Sub)

i. The Publish-Subscribe integration pattern is a messaging pattern that decouples components or services in a system.

ii. Publishers send messages to a message broker or middleware, which delivers messages to interested subscribers.

iii. Subscribers register with the message broker to receive messages of interest.

iv. The message broker acts as a mediator between publishers and subscribers, decoupling them from each other.

v. The Publish-Subscribe pattern enables flexible and scalable communication styles, such as one-to-many, many-to-many, and topic-based communication.

vi. MQTT is a common implementation of the Publish-Subscribe pattern, used in IoT and other distributed systems.

vii. The Publish-Subscribe pattern provides a scalable and flexible way of integrating components and services in a distributed system by decoupling components.
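The decoupling described in points i–vii can be sketched with a minimal in-memory broker. This is a hedged illustration only; a real deployment would use an actual message broker (e.g. an MQTT broker or a managed queue service), and the topic name and payload here are made up for the example.

```python
# Hypothetical sketch of topic-based Publish-Subscribe with an
# in-memory broker mediating between publishers and subscribers.

from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler to receive messages on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publisher never knows who (or how many) will receive it.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("price.updated", received.append)          # partner system 1
broker.subscribe("price.updated", lambda m: received.append(m))  # partner 2
broker.publish("price.updated", {"sku": "A1", "price": 9.99})
print(len(received))  # → 2
```

Adding a third partner system is just one more `subscribe` call; the publisher is untouched, which is exactly the one-to-many decoupling the pattern promises.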

Scenario 1: Publish Price of Products to Multiple Trading Partner Systems – Design an API strategy wherein the price of a product, once updated, is replicated to multiple partner systems.

Scenario 2: Publish Price of Products to Multiple Trading Partner Systems, with guaranteed delivery – Same as Scenario 1, but this time data delivery is critical, so the Reliability pattern must be combined with the Pub-Sub pattern.

6. Large File Processing – When dealing with large files in MuleSoft, it’s important to consider the processing and handling of these files to ensure that the integration flow remains performant and reliable. Here are some points to consider that can be used for large file processing in MuleSoft:

  • Streaming: When dealing with large files, it’s important to stream the data in chunks rather than loading the entire file into memory at once. This can be achieved by using MuleSoft’s streaming capabilities, which allow you to read data from a file in small chunks and process it in real time.
  • Parallel processing: For very large files, it can be beneficial to split the file into smaller chunks and process them in parallel. This can be achieved using MuleSoft’s batch processing capabilities, which allow you to split a large file into smaller parts and process them independently.
  • Asynchronous processing: When processing large files, it’s important to avoid blocking the integration flow and slowing down the entire system. By using MuleSoft’s asynchronous processing capabilities, you can process large files in the background while still allowing other integration flows to run concurrently.
  • File compression: To reduce the size of large files and make them easier to handle, you can compress them using MuleSoft’s compression capabilities. This can help to reduce the amount of data that needs to be processed and improve overall performance.
  • Error handling: When dealing with large files, it’s important to have robust error handling in place to ensure that any errors or issues are detected and handled appropriately. This can be achieved using MuleSoft’s error-handling capabilities, which allow you to define custom error-handling logic for different scenarios.

By using the above points, you can ensure that large files are processed efficiently and reliably in your MuleSoft integration flows.
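The streaming point above can be sketched by reading a CSV in fixed-size chunks instead of loading the whole file into memory, which mirrors the combination of Mule's streaming and batch processing. This is a hedged illustration; the in-memory file stands in for the SFTP source, and the chunk size is an arbitrary example value.

```python
# Hypothetical sketch of streaming a large CSV in small chunks rather
# than loading the entire file into memory at once.

import csv
import io

def stream_rows(fileobj, chunk_size=2):
    """Yield lists of up to `chunk_size` parsed CSV rows at a time."""
    reader = csv.DictReader(fileobj)
    chunk = []
    for row in reader:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk  # flush the final partial chunk

# In-memory stand-in for the SFTP file; a real flow would open a stream.
data = io.StringIO("id,name\n1,Ann\n2,Bob\n3,Cid\n")
chunks = list(stream_rows(data, chunk_size=2))
print([len(c) for c in chunks])  # → [2, 1]
```

Each chunk can then be handed to the parallel or asynchronous processing steps described above, so memory use stays flat regardless of file size.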

Scenario: Let’s consider a use case where we get data from a CSV file on an SFTP server and load it into Salesforce. Here, we are moving data from one system to another. The source is the SFTP file, and the destination is Salesforce.
