Spring Cache with In-Memory Data Grid

The Spring Framework is a very popular lightweight dependency-injection container for Java. It simplifies Java development and reduces the overall complexity of your enterprise applications. These Java applications frequently need to handle a high volume of transactions, which demands scalability and robustness. And although they can scale out their application tier, their database or data storage soon becomes a bottleneck.

To overcome these challenges, people are turning to In-Memory Data Grids (IMDG) like TayzGrid. An In-Memory Data Grid is a linearly scalable key value store that lets you cache frequently used data in a distributed manner and helps reduce those expensive database trips. As a result, the pressure on your database is significantly reduced and it no longer remains a bottleneck.

Fortunately, Spring provides a project named Spring Cache that not only lets you integrate these IMDGs but also simplifies the way you use them. Instead of making caching API calls throughout your program, you simply annotate the methods whose output you want to cache and Spring takes care of the rest.

How Spring Cache Works

To understand how to use Spring Cache with an IMDG, let's take a look at the following method named getById, which fetches some information from the database (or performs some computationally expensive calls):

@Cacheable("Customer") //Results of this method will be cached
  public Customer getById(String customerId) {
    //Database calls or Expensive Computations
    return customer;

When you use the @Cacheable annotation on the getById method, the Spring Framework starts intercepting all calls to it. It first checks the cache to see whether the method has previously been executed with the same parameters and whether its results are already cached. If they are, Spring returns those cached results instead of executing the method again. If not, it executes the method, obtains the results, caches them for future reference, and then returns control to the caller along with the results.

To enable this annotation-driven cache management, you need to place @EnableCaching on your main or configuration class. It registers all the necessary Spring components responsible for caching, which power the framework's aspect-oriented mechanism, i.e. intercepting calls to methods annotated with @Cacheable.

Plugging in an In-Memory Data Grid

The next and last step is to plug in an IMDG. TayzGrid is one such In-Memory Data Grid and is extremely fast and scalable. It provides a Spring Cache integration that you can find in the installation folder. To plug it in, all you need to do is provide the TayzGrid CacheManager to the Spring Framework and add the references from the TayzGrid lib folder. For example, see the following code:

@Configuration
@EnableCaching // enables interception of @Cacheable methods
public class Main {

  public void run(String... args) throws Exception { … }

  @Bean // registers the TayzGrid CacheManager with Spring
  public CacheManager cacheManager() {
    // TayzGrid configuration (see tayzgrid-spring.xml below)
    TayzGridConfigurationManager tgconfig = new TayzGridConfigurationManager();
    TayzGridCacheManager cacheManager = new TayzGridCacheManager();
    return cacheManager;
  }
}


Now all calls to @Cacheable-annotated methods will be intercepted and directed to TayzGrid from here on, with the method results stored in a distributed cache.

The above example relies on an XML file, tayzgrid-spring.xml, which configures how the TayzGrid caches are accessed and gives you more control over them. Create an XML file named tayzgrid-spring.xml in your project and enter the following configuration:

<application-config enable-cache-exception="true">
    <cache name="Customer" />
    <cache name="DefaultCache"
           expiration-period="15" />
</application-config>

You can control which cache the data is stored in by changing the name in the cache tag. The @Cacheable annotation I used earlier was given the string "Customer", and that is how you select where your method results are cached, as the sketch below shows. Each cache can have its own cluster, giving you complete control over your application architecture. Just remember to start those caches from the TayzGridCacheManager before you test your code. You can also configure the behavior of each of your caches separately. For more detail, check out the TayzGrid Programmer's Guide.
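
For illustration, here is a minimal sketch of two methods targeting the two caches configured above. The CustomerService class and the getRecentOrderIds method are hypothetical names used only for this example; Customer is the entity type from the earlier snippet.

import java.util.Collections;
import java.util.List;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class CustomerService {

    @Cacheable("Customer")      // results cached in the cache named "Customer"
    public Customer getById(String customerId) {
        // database call or expensive computation goes here
        return new Customer();
    }

    @Cacheable("DefaultCache")  // results cached in the cache named "DefaultCache"
    public List<String> getRecentOrderIds(String customerId) {
        // another expensive lookup, cached separately
        return Collections.emptyList();
    }
}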

That's it! With a few changes you can now cache your method results and avoid expensive database trips, all with the help of Spring Cache. Plugging in TayzGrid gives you the ability to scale your architecture under peak loads. Your data is distributed across a cluster, providing high availability and high scalability to your applications. You can also integrate TayzGrid without any code change.

TayzGrid is Open Source (Apache 2.0 license) and is totally free to use. Learn more about TayzGrid or download a full working copy.


Using Hibernate Second Level Cache with TayzGrid

Hibernate is one of the most widely used ORM solutions for Java. It simplifies your database programming and enables you to develop persistence code following natural object-oriented idioms. Hibernate is built to support high-transaction environments and provides scalability at the application tier, but it faces scalability bottlenecks in the database tier.

To address this issue, Hibernate provides a sophisticated caching mechanism. This helps applications cache frequently used data and take load off the database. In Hibernate, you get two kinds of caching:

  • First level cache
  • Second level cache

Hibernate First Level Cache

Hibernate has its own implementation of a First Level Cache that is enabled by default (it cannot be disabled). This cache resides in memory within the application process and is scoped to the session. Its primary purpose is to limit the number of SQL queries Hibernate needs to execute within a given transaction, and it does that very well (see the sketch after the list below). However, this First Level Cache has some limiting factors:

  • When running multiple instances of an application in a web farm, the first level caches cannot sync with each other and therefore cannot maintain data integrity.
  • The cache lives within the session scope and gets destroyed when that scope goes away. And, since this happens frequently within Hibernate, the cache is created and destroyed over and over again, which is costly.
  • Tomcat and Jetty worker processes recycle frequently. This causes the cache to be flushed and all the cached queries are lost. This results in these queries getting executed again against the database.
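
To make the session-scoped behavior concrete, here is a minimal sketch, assuming a mapped Customer entity with a Long identifier: two get calls on the same Session for the same identifier issue only one SQL query, and the first level cache disappears as soon as the session is closed.

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class FirstLevelCacheSketch {
    // Customer is assumed to be a mapped Hibernate entity with a Long identifier
    public static void demo(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        try {
            // First call hits the database and stores the entity in the session cache
            Customer first = (Customer) session.get(Customer.class, 1001L);

            // Second call in the same session is served from the first level cache; no SQL runs
            Customer second = (Customer) session.get(Customer.class, 1001L);
        } finally {
            // Closing the session discards its first level cache
            session.close();
        }
    }
}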

Hibernate Second Level Cache

In addition to the First Level Cache, Hibernate provides a pluggable Second Level Cache architecture. This allows you to plug in a third-party second level cache provider simply by modifying the configuration files. No coding is required.

Hibernate Second Level Cache is created in the Session Factory scope. This is different from the First Level Cache because it enables the second level cache objects to be available to all of the sessions within an application.

TayzGrid is an Open Source In-Memory Data Grid ready for production environments, which implements a Hibernate Second Level Cache provider. You can plug it in easily without any code changes. And since TayzGrid lives outside the application process, the second level cache outlives the application lifetime. This allows you to keep the cache even when your application process restarts. Additionally, multiple application processes in a multi-server web farm can share a common second level cache.

The following configuration example shows how easy it is to plug TayzGrid into Hibernate:

     <property name="hibernate.cache.use_second_level_cache">
     <property name="hibernate.cache.region.factory_class">
    <property name="tayzgrid.application_id">
    <property name="hibernate.cache.use_query_cache">

  <application-config application-id="myapp"
      <region name="DefaultRegion"
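
You also mark the entities you want cached as second-level cacheable in their mappings. The following is a minimal sketch assuming Hibernate/JPA annotation mappings and a hypothetical Customer entity, pointing at the DefaultRegion region configured above:

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable // JPA marker: this entity may be stored in the second level cache
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "DefaultRegion")
public class Customer {
    @Id
    private Long id;
    private String name;

    // getters and setters omitted for brevity
}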

Take a look at some of the benefits TayzGrid provides when used as a Hibernate Second Level Cache:

  • Cache lives outside your application: This keeps the cache and application machines separate, ensuring consistent access across multiple servers. There are no data integrity issues because cache data is shared among all the applications.
  • Scalable cache architecture: You can add new servers on the fly in a production environment. TayzGrid’s robust architecture distributes data at runtime to accommodate more traffic on high loads.
  • High availability & Linear Scalability: Scaling out helps to keep data available at times of server failure. The performance gained from adding server is linear, meaning; the more the servers you add the more the performance guaranteed and reducing your database traffic by 80%. Handle higher transactions without worrying about the database bottleneck.
  • In-process client caches: TayzGrid provides support of client cache that is very close to your application. In fact, you can have keep this cache within your application process (InProc mode) or in a local but separate process. The Client Cache keeps itself synchronized with the TayzGrid clustered to avoid data integrity issues. All of this gives you a super fast cache very close to your application that boosts your application performance greatly.
  • Easy Management: Use simple and easy to learn TayzGrid Manager to manage your cluster and add or remove machines in an active production environment to increase performance.

To make the most of Hibernate, enable the second level cache and plug a distributed cache like TayzGrid (Open Source) into it. You can do this with zero code change and still run in a multi-server configuration. Using TayzGrid as a Hibernate Second Level Cache lets you scale out your application and handle high loads.


Using an In-Memory Key Value Store to scale Java Apps

Businesses today are developing high-traffic web applications that serve tens of thousands or even millions of concurrent users. A majority of these applications are developed in Java and deployed in load-balanced web farms consisting of Linux or Windows servers, and increasingly they incorporate an In-Memory Key Value Store to handle the load.

In such a high traffic environment, two things are very important. First, your application should maintain its response times even under peak loads (scalability). Second, your application should stay up all the time and never crash (high availability).

Incidentally, Java web application architectures can easily accommodate both needs. A load-balanced web farm allows you to add more web/app servers seamlessly. Having more than one web/app server means you don’t have a single point of failure, hence high availability.

Despite all of this, the data storage tier and databases just cannot cope with this much load. That is why you see your application slowing down under peak loads: the database becomes the bottleneck because it cannot distribute its data and processing across multiple servers the way your application tier can. Although you can scale up your database server by purchasing more expensive hardware, you cannot scale out by adding more and more database servers, and scaling up only takes you so far.

Java In-Memory Key Value Store

So, what is the solution to all of this? Fortunately, you can use an In-Memory Key Value Store to keep frequently used data in the cluster and reduce those expensive database trips. An In-Memory Key Value Store can be distributed to multiple servers which means you can linearly scale to handle greater transaction loads. This is what sets an In-Memory Key Value Store apart from your database server.

An In-Memory Key Value Store like TayzGrid works by simply storing all data as "key" and "value" pairs. With multiple servers, the key determines which server the data is stored on, typically by hashing the key just as a HashMap data structure picks a bucket. The interface you work with is essentially a simple HashMap. And don't be confused by the terms "key value store" and "key value database"; they are one and the same thing.
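
To illustrate the idea (this is not TayzGrid's actual partitioning code), here is a minimal sketch of how a key can be mapped to a server by hashing, the same way a HashMap picks a bucket; the server names are hypothetical:

import java.util.Arrays;
import java.util.List;

public class KeyToServerSketch {
    // Hypothetical cache server names, for illustration only
    private static final List<String> SERVERS =
            Arrays.asList("cache-server-1", "cache-server-2", "cache-server-3");

    // Map a key to a server the same way a HashMap maps a key to a bucket
    static String serverFor(String key) {
        int bucket = (key.hashCode() & 0x7fffffff) % SERVERS.size();
        return SERVERS.get(bucket);
    }

    public static void main(String[] args) {
        // The same key always lands on the same server
        System.out.println(serverFor("customer:1001"));
        System.out.println(serverFor("customer:2002"));
    }
}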

The way to benefit from an In-Memory Key Value Store is to keep your frequently used data in it. This reduces your expensive database trips because 80% of the time you find your data in this Key Value Store. As a result, your database is no longer a bottleneck for you.

Example of an In-Memory Key Value Store

try {
    Cache cache = TayzGrid.initializeCache("myDistributedCache");
    Customer customer = (Customer) cache.get(customerId);
    //If the key is not found in the cache, load from the database
    if (customer == null) {
        //Your database logic goes here (stmt is an open java.sql.Statement)
        ResultSet rs = stmt.executeQuery(" ... ");
        if (rs.next()) {
            customer = new Customer();

            //Cache the item for future use
            cache.insert(customerId, customer,
                  null, null, null, CacheItemPriority.Default);
        }
    }
} catch (Exception exp) {
    throw exp;
}

TayzGrid, a fast and scalable Open Source In-Memory Data Grid for Java released under the Apache 2.0 license, is one easy and feature-rich solution. It is used as an In-Memory Key Value Store by Java applications. Because it is 100% JCache (JSR 107) compliant, you can plug it into your JCache-based application without any code changes.
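
Since JCache (JSR 107) is a standard API, code written against it works with any compliant provider. Here is a minimal sketch using only standard javax.cache classes; the cache name "customers" is just an example:

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheSample {
    public static void main(String[] args) {
        // Resolves whichever JCache provider is on the classpath (e.g. TayzGrid)
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager();

        // "customers" is an example cache name
        MutableConfiguration<String, String> config =
                new MutableConfiguration<String, String>().setTypes(String.class, String.class);
        Cache<String, String> cache = manager.createCache("customers", config);

        cache.put("1001", "John Doe");          // standard JCache put
        System.out.println(cache.get("1001"));  // standard JCache get
    }
}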

TayzGrid is developed from NCache source code into a native Java product. NCache is a powerful .NET Distributed Cache which has been the market leader for the last 10 years. This makes TayzGrid also a mature product.

Since TayzGrid is Open Source, it is totally free to use. If your application is business sensitive, you can choose to use the Professional or Enterprise Editions that come with full support and more features.

Learn more about TayzGrid from the link below or download a fully working copy:

Download TayzGrid | Edition Comparison


Using Java Distributed Cache to Scale Java Applications

Java is very popular for developing high-traffic, enterprise-level applications that often require a Java distributed cache. These may be Java web applications, Java web services, or other Java server-type applications. A wide range of these applications belong to the Online Transaction Processing (OLTP) category, involving millions of transactions.

You can handle this high transaction load by adding more web/app servers at the application tier, but you cannot do the same at the database tier. The most you can do is buy more expensive hardware for your database server, and that only takes you so far. Therefore, the database always becomes a bottleneck that slows down your Java applications and may even grind them to a halt if too much load is put on it.

Database Scalability Problem in High Traffic Applications

Figure 1: Database Scalability Problem in High Traffic Applications

So, what can you do about this? A Java Distributed Cache has become quite popular for handling such situations. It lets you cache application data in memory and reduce the expensive database trips that are overwhelming your database server. Unlike a database, a Java Distributed Cache can scale linearly: you add more servers at run time as you need to handle greater transaction loads. This ensures that your Java cache never becomes a scalability bottleneck.

A Java Distributed Cache also lets you store your Java web sessions (servlet sessions) in it. These sessions are then replicated in a multi-server environment, so you don't lose any session data if a cache server ever goes down. Plus, a Java cache is a much faster and more scalable session persistence store than what a JSP/Servlet container like Tomcat or JBoss provides.

TayzGrid is an in-memory data grid and a Java Distributed Cache for Java applications. It is extremely fast and provides linear scalability.

Below is an example of using TayzGrid from a Java application:

import com.alachisoft.TayzGrid.web.caching.Cache;
import com.alachisoft.TayzGrid.runtime.*;
import java.sql.ResultSet;

public class MyCachingSample {
    private String url = "jdbc:mysql://";

    public Employee GetEmployee(String empId) throws Exception {
        Employee emp = null;
        try {
            Cache cache = TayzGrid.initializeCache("MyDistributedCache");
            emp = (Employee) cache.get(empId);
            //If the key is not found in the cache, load from the database
            if (emp == null) {
                //Your database logic goes here (stmt is an open java.sql.Statement)
                ResultSet rs = stmt.executeQuery("SELECT * FROM Employee WHERE EmpId ='" + empId + "'");
                if (rs.next()) {
                    emp = new Employee();
                    emp.EmpId = rs.getString("EmpId");

                    //Cache the item for future use
                    cache.insert(empId, emp, null, Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration, CacheItemPriority.Default);
                }
            }
        } catch (Exception exp) {
            throw exp;
        }
        //return the required object
        return emp;
    }
}

TayzGrid supports the industry standard JCache API (JSR 107) so you are not locked into a proprietary caching solution. It also provides powerful GUI based and command-line configuration and monitoring tools. This makes TayzGrid management very simple.

Please download a fully working copy of TayzGrid Enterprise and try it out for yourself.

