

Cache Scope and Object Store in Mule 4

Before learning about the Cache scope and Object Store in Mule 4, let us first understand what caching is in general, along with some related concepts.


Caching (pronounced “cashing”) is the process of storing frequently used data in memory, a file system, or a database, which saves the processing time and load that would otherwise be incurred by fetching the data from its original source location every time.

In computing, a cache is a high-speed data storage layer which stores a subset of data, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data.

How does Caching work?

The data in a cache is generally stored in fast access hardware such as RAM (Random-access memory) and may also be used in correlation with a software component. A cache’s primary purpose is to increase data retrieval performance by reducing the need to access the underlying slower storage layer.
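The lookup just described (serve from the fast layer when possible, and fall back to the slower primary storage only on a miss) can be sketched as follows; the names and the slow lookup function are illustrative stand-ins, not any particular library:

```python
# Minimal read-through cache sketch: serve from fast storage when possible,
# fall back to the slow primary store only on a cache miss.
cache = {}  # stands in for fast-access storage such as RAM

def slow_primary_lookup(key):
    # Placeholder for a database, file-system, or remote call.
    return f"value-for-{key}"

def get(key):
    if key in cache:                      # cache hit: no trip to the slow layer
        return cache[key]
    value = slow_primary_lookup(key)      # cache miss: go to primary storage
    cache[key] = value                    # populate the cache for next time
    return value
```

The second request for the same key is served entirely from the cache, which is the whole point of the pattern.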

Different Types of Caching

There are two main types of caching:

  • Client-side caching
  • Server-side caching

Caching Benefits

  • Decreased network costs: Content can be cached at various points in the network path between the content consumer and content origin. When the content is cached closer to the consumer, requests will not cause much additional network activity beyond the cache.
  • Improved responsiveness: Caching enables content to be retrieved faster because an entire network round trip is not necessary. Caches maintained close to the user, like the browser cache, can make this retrieval nearly instantaneous.
  • Increased performance on the same hardware: For the server where the content originated, more performance can be squeezed from the same hardware by allowing aggressive caching. The content owner can leverage the powerful servers along the delivery path to take the brunt of certain content loads.
  • Availability of content during network interruptions: With certain policies, caching can be used to serve content to end-users even when it may be unavailable for short periods of time from the origin servers.

General Cache Use Cases

  • In-memory data lookup: If you have a mobile/web app front end, you might want to cache information such as user profiles, historical/static data, or some API responses, according to your use cases. Caching helps in storing such data. Also, when you create a dynamic programming solution for a problem, a 2-dimensional array or a hash map works as a cache.
  • RDBMS Speedup: Relational databases are slow when it comes to working with millions of rows. Unnecessary, old, or high-volume data can make their indexes slower; irrespective of whether you do sharding or partitioning, a single node can always experience delay and latency in query response when the database is at full capacity.
  • Session Store: Active web sessions are very frequently accessed data. Whether you want to do API authentication or store recent cart information in an e-commerce app, a cache can serve sessions well.
  • Token Caching: API tokens can be cached in memory to deliver high-performance user authentication and validation.
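As an illustration of the token-caching use case above, here is a minimal in-memory token cache with a time-to-live; the class and method names are hypothetical, not a real auth library:

```python
import time

class TokenCache:
    """In-memory token cache with a time-to-live (TTL), as used for
    caching API tokens to speed up authentication and validation."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._entries = {}  # token -> (claims, expiry timestamp)

    def put(self, token, claims):
        self._entries[token] = (claims, time.time() + self.ttl)

    def get(self, token):
        entry = self._entries.get(token)
        if entry is None:
            return None
        claims, expires_at = entry
        if time.time() >= expires_at:   # expired entry: evict and report a miss
            del self._entries[token]
            return None
        return claims
```

A validated token's claims are served from memory until the TTL elapses, avoiding a round trip to the identity provider on every request.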

Eviction Policy

  • Least Recently Used (LRU): In most caching use cases, applications access the same data again and again. In such use cases, cached data that has not been used recently (cold data) can be safely evicted.
  • Least Frequently Used (LFU): If you know that the data is pretty repetitive, LFU is a good choice to avoid cache misses.
  • Most Recently Used (MRU): This strategy removes the most recently used items first, on the assumption that they will not be required again in the near future.
  • First In, First Out (FIFO): It is similar to MRU, but it follows the strict ordering of inserted data items; MRU does not honour insertion order.
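For concreteness, LRU (the most common of these policies) can be sketched in a few lines using Python's `OrderedDict` to track recency; the class is illustrative, not a production cache:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used eviction: when the cache overflows,
    drop the entry that was touched longest ago."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the least recently used entry
```

Reading a key refreshes it, so a recently read entry survives an eviction that removes an older, colder one.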

Caching Strategy

  • Single Node (In-Process) Caching: A caching strategy for non-distributed systems. Applications instantiate and manage their own or third-party cache objects; both the application and the cache live in the same memory space.
  • Distributed Caching: Internet-scale web applications handle millions of requests per minute and terabytes or petabytes of data. A single dedicated machine, however powerful, cannot handle such a humongous scale, so multiple machines (from tens to hundreds) form a cluster, and at very high scale we need multiple such clusters.

Caching in MULE 4

In Mule 4, caching can be achieved using the Cache scope and/or the Object Store. The Cache scope internally uses an Object Store to store its data. The Cache scope and Object Store have their own specific use cases where each can be used effectively.

What is Cache Scope

The Cache scope is for storing and reusing frequently called data. You can use a Cache scope to reduce the processing load on the Mule instance and to increase the speed of message processing within a flow. It is particularly effective for these tasks:

  • Processing repeated requests for the same information.
  • Processing requests for information that involve large, non-consumable payloads.

The Cache scope only caches non-consumable payloads. It does not cache consumable payloads (such as a streaming payload), which can only be read once before they are lost.

What is Object Store

Object Store lets applications store data and state across batch processes, Mule components, and applications, from within an application. If used on CloudHub, the object store is shared between applications deployed in a cluster.

Object Store v2 features

  • Persists keys for 30 days unless updated. If a key is updated, the TTL (time-to-live) is extended by another 30 days. In Object Store v2 there is no limit on the number of keys per app.
  • It allows an unlimited number of entries; there is no limit on the total size of v2 object stores.
  • Stores values up to 10 MB (when Base64 encoded) in size. You can estimate the Base64 size of a payload as CEILING(base10size / 3) * 4, where base10size is the object size in bytes. Example: a base10 payload size of 7.5 MB converts to 10 MB in Base64.
  • It is available in all supported regions and availability zones within each region.
  • It is co-located in the same region as your workers. For example, workers hosted in Singapore would use Object Store v2 hosted in Singapore.
  • It provides a Mule connector and REST interface for access by external applications.
  • Provides end-to-end secure TLS-transport.
  • Encrypts persistent storage to FIPS 140-2 compliant standards.
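As a quick sanity check on that size estimate: Base64 encodes every 3 raw bytes as 4 output characters, so a payload grows by roughly a third when encoded. A small sketch (pure arithmetic, no Mule APIs):

```python
import math

def base64_size(base10_size_bytes):
    """Estimate the Base64-encoded size of a payload: every 3 raw bytes
    become 4 Base64 characters, with the final group padded up."""
    return math.ceil(base10_size_bytes / 3) * 4

# A 7.5 MB payload encodes to exactly 10 MB of Base64 text,
# which is the Object Store v2 per-value limit.
raw = int(7.5 * 1024 * 1024)
print(base64_size(raw) / (1024 * 1024))  # 10.0
```

This is why a raw payload only slightly above 7.5 MB will already exceed the 10 MB encoded limit.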

Object Store File Location for On-Premises

While running the project from Anypoint Studio, the object store file is created at the below-mentioned location.


While running the project on-premises, the object store file is created at the below-mentioned location.


How Cache Scope Works

The Cache scope works as per the algorithm mentioned below.

  1. Start Cache scope.
  2. Check whether a filter expression is present.
    1. If a filter expression is present, evaluate the expression.
      1. If the evaluation result is true, go to step 3.
      2. If the evaluation result is false, go to step 9.
  3. Generate the key for the request.
    1. If Default, generate a key using SHA256KeyGenerator.
    2. If Key Expression, generate a key using a custom expression.
    3. If Key Generator, generate a key using a custom Java class.
  4. Check whether the key is present in the cache.
    1. If the key is present, get the corresponding response, use it, and go to step 10 (this is called a cache hit).
    2. If the key is not present, go to step 5 (this is called a cache miss).
  5. Execute the outbound processors.
  6. Is the response consumable?
    1. If yes, go to step 10.
    2. If no, go to step 7.
  7. Generate the response key using a SHA-256 digest.
  8. Store the response in the cache and go to step 10.
    1. If a custom Object Store is configured, store it using the specified configuration.
    2. Else, store the response in memory/a local file.
  9. Execute the outbound processors.
  10. End Cache scope.
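The steps above can be sketched as follows. The key generation mirrors the default SHA-256 behaviour, while the outbound call and the consumability check are illustrative stand-ins for what the Cache scope does internally:

```python
import hashlib

cache = {}  # stands in for the configured object store

def is_consumable(response):
    # Stand-in: Mule treats streaming payloads as consumable (read-once);
    # plain values like strings are non-consumable and safe to cache.
    return False

def call_outbound_processors(request):
    # Stand-in for the processors wrapped by the Cache scope.
    return f"response-for-{request}"

def cache_scope(request, filter_expression=None):
    # Steps 1-2: if a filter expression is present and evaluates to false,
    # bypass the cache entirely and just execute the processors (step 9).
    if filter_expression is not None and not filter_expression(request):
        return call_outbound_processors(request)
    # Step 3: generate the cache key (default: SHA-256 digest of the message).
    key = hashlib.sha256(str(request).encode()).hexdigest()
    # Step 4: cache hit -> return the stored response.
    if key in cache:
        return cache[key]
    # Step 5: cache miss -> execute the wrapped processors.
    response = call_outbound_processors(request)
    # Steps 6-8: only non-consumable responses are stored in the cache.
    if not is_consumable(response):
        cache[key] = response
    return response
```

Calling it twice with the same request produces one outbound call and one cache hit; a false filter expression skips the cache altogether.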

Mule 4 Project


The project has four flows:

  • helloworldFlow, which listens on /hello and returns the JSON payload “Hello World”.
  • byeworldFlow, which listens on /bye and returns the JSON payload “Bye World”.
  • scheduler, which calls helloworldFlow and byeworldFlow over HTTP from inside the Cache scope.
  • objectStoreFlow, which calls helloworldFlow over HTTP, stores the response using a custom key, and retrieves the response using the same key.

Here is the configuration XML for the flows:

<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:os="http://www.mulesoft.org/schema/mule/os" xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd http://www.mulesoft.org/schema/mule/os http://www.mulesoft.org/schema/mule/os/current/mule-os.xsd">

  <http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="ed8128b6-5001-4964-9828-9246c1d2a4b9">
    <!-- The host value was lost in the original extraction; "0.0.0.0" is the usual default -->
    <http:listener-connection host="0.0.0.0" port="8081"/>
  </http:listener-config>

  <http:request-config name="HTTP_Request_configuration" doc:name="HTTP Request configuration" doc:id="79f2b2b7-1811-4aa0-9286-2778ac9049a6">
    <http:request-connection host="localhost" port="8081"/>
  </http:request-config>

  <os:object-store name="Object_store" doc:name="Object store" doc:id="ec62ef70-080b-4a1a-86d2-9466b4c40a35"/>

  <ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy" doc:id="628bf63b-c6a9-4a21-8e50-135f4fafe317" objectStore="Object_store"/>

  <flow name="helloworldFlow" doc:id="0288e2b0-5cf6-41b7-a29e-3cb8ba1effcc">
    <http:listener doc:name="Listener" doc:id="38bc54c8-7d8d-477d-8f9d-88e2f31bb965" config-ref="HTTP_Listener_config" path="/hello">
      <http:response statusCode="200">
        <http:body><![CDATA[#[output application/json --- payload]]]></http:body>
        <http:headers><![CDATA[#[output application/java --- { "Content-Type" : "application/json" }]]]></http:headers>
      </http:response>
    </http:listener>
    <set-payload value="#[{ &quot;response&quot;: &quot;Hello World&quot; }]" doc:name="Set Payload" doc:id="2f87aff3-a56f-4da4-ba1d-d0feadba3685"/>
    <logger level="INFO" doc:name="Logger" doc:id="847b059e-bfea-496f-b49c-bbdf092fae35" message="#[&quot;Hello Called&quot;]"/>
  </flow>

  <flow name="byeworldFlow" doc:id="28528dc4-bfea-460e-b91d-26ccd3c005ac">
    <http:listener doc:name="Listener" doc:id="39e03cf9-cd3b-4308-879b-73bdc41fe2af" config-ref="HTTP_Listener_config" path="/bye">
      <http:response statusCode="200">
        <http:body><![CDATA[#[output application/json --- payload]]]></http:body>
        <http:headers><![CDATA[#[output application/java --- { "Content-Type" : "application/json" }]]]></http:headers>
      </http:response>
    </http:listener>
    <set-payload value="#[{ &quot;response&quot;: &quot;Bye World&quot; }]" doc:name="Set Payload" doc:id="fd66f58b-ee9c-431e-9529-3242c49a3420"/>
    <logger level="INFO" doc:name="Logger" doc:id="de70255d-b2c4-42e9-84c4-2ce0a72720b1" message="#[&quot;Bye Called&quot;]"/>
  </flow>

  <flow name="scheduler" doc:id="cd8c639d-f471-4b46-a5d2-6f83dd6d733e">
    <scheduler doc:name="Scheduler" doc:id="5e8514a3-ef66-4554-b824-c470dcb60fa3">
      <scheduling-strategy>
        <fixed-frequency frequency="10" timeUnit="SECONDS"/>
      </scheduling-strategy>
    </scheduler>
    <ee:cache doc:name="Cache" doc:id="eaeefc29-5c3d-47cc-8cf3-632c98b2088b" cachingStrategy-ref="Caching_Strategy">
      <http:request method="GET" doc:name="hello" doc:id="c918c0f2-bf73-46bf-b862-ca8646b2de41" config-ref="HTTP_Request_configuration" path="/hello"/>
      <http:request method="GET" doc:name="bye" doc:id="36ff7b9e-917f-44d8-888b-7a6caa9052d4" config-ref="HTTP_Request_configuration" path="/bye"/>
    </ee:cache>
    <logger level="INFO" doc:name="Logger" doc:id="deaa1204-09e2-4cd6-9c5a-746d7177b8ea" message="#[&quot;Scheduler Called&quot;]"/>
  </flow>

  <flow name="objectStoreFlow" doc:id="7bef0ebd-89b3-4e13-9aab-13df96c33430">
    <http:listener doc:name="Listener" doc:id="8b93b995-7807-40cb-8b8e-0f92561c85b4" config-ref="HTTP_Listener_config" path="/objectStore"/>
    <http:request method="GET" doc:name="hello" doc:id="31b8bc93-cf9e-43b0-8cee-f28328e817cc" config-ref="HTTP_Request_configuration" path="/hello"/>
    <os:store doc:name="Store" doc:id="14ce71ea-daea-4b51-a998-2871f8a5db48" key="output"/>
    <os:retrieve doc:name="Retrieve" doc:id="5a534ae0-0f89-4964-af5c-32a4a4fbe85b" key="output"/>
    <logger level="INFO" doc:name="Logger" doc:id="8f04d071-81f7-48cb-ac25-273650037ca7"/>
  </flow>

</mule>

Hello World Flow

Bye World Flow

Scheduler Flow

Object Store Flow

Standard Output Logs

When we run the project, the scheduler flow is triggered every 10 seconds. It calls the hello world flow and the bye world flow from inside the Cache scope. When the scheduler flow is triggered the first time, it calls both flows over HTTP and stores the responses in the cache, i.e., the object store. From the second trigger onwards, it uses the cached responses and does not call the flows.

Cache Configuration Details

Default Cache Configuration without Caching Strategy.

Caching Strategy Details

Caching strategy configuration for how to handle Key Generation.

Object Store Configuration

Caching strategy with Object Store to customize the Object Store behavior like Max Entries, Entry TTL, etc.

Use Cases for Cache Scope and Object Store

The Cache scope is used in the below-mentioned cases:

  • Need to store the whole response from the outbound processors.
  • Data returned from the outbound processors does not change very frequently.
  • As the Cache scope internally handles the cache hit and cache miss scenarios, it is more readable.

Object Store is used in the below-mentioned cases:

  • Need to store custom/intermediary data.
  • To store watermarks.
  • Sharing data/state across applications, schedulers, and batch jobs.
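As a sketch of the watermarking use case, with a plain dict standing in for Mule's Object Store (a retrieve-with-default followed by a store; the record shape is illustrative):

```python
# Watermarking with a key-value store: remember the last processed position
# so the next scheduled run only picks up new records.
store = {}  # stands in for Mule's Object Store

def poll_new_records(all_records):
    # Retrieve the watermark, defaulting to 0 on the very first run
    # (analogous to os:retrieve with a default value).
    watermark = store.get("last_seen_id", 0)
    new = [r for r in all_records if r["id"] > watermark]
    if new:
        # Advance the watermark past everything just processed
        # (analogous to os:store).
        store["last_seen_id"] = max(r["id"] for r in new)
    return new
```

The first poll returns every record; a second poll over the same data returns nothing, because the stored watermark already covers it.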

Conclusion: In this blog, we have seen what caching is, its use cases, and the different eviction policies and caching strategies. We have also seen how to implement caching using Mule 4.
