How to Configure Preloading in a Distributed Cache?

 

Today’s applications need to scale to handle extreme transaction loads. But databases are unable to scale and therefore become a bottleneck. To resolve this, many people are turning to in-memory distributed caches, which scale linearly and remove the database bottleneck.

A typical distributed cache contains two types of data: transactional data and reference data. Transactional data changes very frequently and is therefore cached for a very short time. But caching it still provides a considerable boost to performance and scalability.

Reference data on the other hand does not change very frequently. It may be static data or dynamic data that changes perhaps every hour, day, etc. At times, this data can be huge (in gigabytes).

It would be really nice if this reference data could be preloaded into a distributed cache upon cache start-up, because then your applications would not need to load it at runtime. Loading reference data at runtime would slow down your application performance especially if it is a lot of data.

Download NCache free trial - Extremely fast and scalable in-memory distributed cache

How Should Reference Data be Preloaded into a Distributed Cache?

One approach is to design your application in such a way that during application startup, it fetches all the required reference data from the database and puts it in the distributed cache.

However, this approach raises some other issues. First, it slows down your application startup because your application is now involved in preloading the cache. Second, if you have multiple applications sharing a common distributed cache, then you either have code duplication in each application or all your applications depend on one application preloading the distributed cache. Finally, embedding cache preloading code inside your application corrupts your application design because you’re adding code that does not belong there. Of course, none of these situations is desirable.

What if we give this preloading responsibility to the distributed cache itself? In this case, preloading could be part of the cache startup process and therefore does not involve your application at all. You can configure the cache to preload all the required data upon startup so it is available for all the applications to use from the beginning. This simplifies your application because it no longer has to worry about preloading logic.

NCache provides a very powerful and flexible cache preloading capability. You can develop a cache loader and register it with NCache and then NCache calls this custom code developed by you upon cache startup. Let me demonstrate how to do this below:

  • Implement a simple interface named ICacheLoader. It is called at cache startup to answer the question “How and which data should be loaded?”
class CacheLoader : ICacheLoader
{
    public void Init(System.Collections.IDictionary parameters)
    {
        // Initialize the data source connection, read parameters, etc.
    }

    public bool LoadNext(
        ref System.Collections.Specialized.OrderedDictionary data,
        ref object index)
    {
        // Fill the ref objects with the data that should be loaded into the cache.
        // Return true while more data remains; return false when loading is complete.
        return false;
    }

    public void Dispose()
    {
        // Dispose connections and other resources.
    }
}
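For illustration, here is a hedged sketch of what a concrete loader might look like, loading product names from a database in chunks. The `Products` table, the key format, and the chunk size of 100 rows are illustrative assumptions, not part of NCache, and the NCache namespace usings are omitted:

```csharp
using System.Collections;
using System.Collections.Specialized;
using System.Data.SqlClient;

class ProductCacheLoader : ICacheLoader
{
    private SqlConnection _connection;
    private SqlDataReader _reader;

    public void Init(IDictionary parameters)
    {
        // The "connectionString" parameter comes from the cache config shown below.
        _connection = new SqlConnection((string)parameters["connectionString"]);
        _connection.Open();
        SqlCommand cmd = new SqlCommand(
            "SELECT ProductID, Name FROM Products", _connection);
        _reader = cmd.ExecuteReader();
    }

    public bool LoadNext(ref OrderedDictionary data, ref object index)
    {
        // Load up to 100 rows per call; NCache keeps calling LoadNext
        // until it returns false.
        int count = 0;
        while (count < 100 && _reader.Read())
        {
            data.Add("Product:" + _reader["ProductID"], _reader["Name"].ToString());
            count++;
        }
        // If fewer than 100 rows were read, the reader is exhausted.
        return count == 100;
    }

    public void Dispose()
    {
        _reader.Close();
        _connection.Close();
    }
}
```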

  • The next step is to register the startup loader implemented above with the cache. You can do this by using the NCache Manager that comes with NCache, or simply by adding the following configuration to the cache config:
<cache-loader retries="3" retry-interval="0" enable-loader="True">
   <provider assembly-name="TestCacheLoader, Version=1.0.0.0, Culture=neutral,
    PublicKeyToken=null"
    class-name="TestCacheLoader.CacheLoader"
    full-name="TestCacheLoader.dll"></provider>
   <!-- parameters that will be passed to the Init method of the loader -->
   <parameters name="connectionString"
   value="Data Source=SQLEXPRESS;
   Initial Catalog=testdb;Integrated Security=True;Pooling=False"></parameters>
</cache-loader>

Any exception that occurs during startup loader processing is logged without causing any problem for your application. Simple and effective!

As you can see, NCache provides you a powerful mechanism to preload your distributed cache and keep the performance of your applications always high.

Download NCache Trial | NCache Details


How to use a Distributed Cache for ASP.NET Output Cache?

ASP.NET Output Cache is a mechanism provided by Microsoft that allows you to keep an in-memory copy of the rendered content of an ASP.NET page. Because of this, ASP.NET is able to serve subsequent user requests for this page from an in-memory cached copy instead of re-executing the page, which can be quite expensive because it usually also involves database calls.

So, ASP.NET Output Cache not only improves your application performance, but also reduces a lot of expensive database trips. This improves your ASP.NET application scalability because otherwise the database would become a scalability bottleneck if all those ASP.NET pages were executed again and again.

But, the problem with the ASP.NET Output Cache is that it resides in your ASP.NET worker process address space. And, the worker process resets or recycles quite frequently. When that happens, the entire ASP.NET Output Cache is lost. Secondly, in the case of a web garden (meaning multiple worker processes), the same page output is cached multiple times, once in each worker process, and this consumes a lot of extra memory. Also, did you know that one of the most common support calls for ASP.NET is out-of-memory errors caused by ASP.NET Output Cache?

Also read: ASP.NET Output Cache in Microsoft Azure to Improve Performance

To overcome these limitations of ASP.NET Output Cache and to take advantage of its benefits, try using a distributed cache for storing all of ASP.NET Output Cache content. In this regard, NCache has implemented an ASP.NET Output Cache provider to enable the caching of ASP.NET rendered output in out-of-process (out-proc) cache instead of worker process address space. This way, the output of your rendered ASP.NET page is available to all other web servers in the web farm without even rendering the same ASP.NET page locally in each worker process.

By using NCache as ASP.NET Output Cache provider you can not only cache more data in out-proc cache,  but can also dramatically reduce the load on your database. This is because each rendered ASP.NET page output is accessible to all web servers in web farm without executing the page rendering process in each worker process which involves expensive database trips.

Further, NCache as ASP.NET Output Cache provider gives you the flexibility to even cache the output of certain parts of your ASP.NET page instead of complete page. This approach is very helpful in scenarios where you want certain parts of your ASP.NET to render each time. In addition, NCache also provides you high availability because even if your worker process resets or recycles, your data is not lost because it is not part of your worker process address space and resides on separate caching servers.
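As a hedged sketch of the partial-page approach just described, in ASP.NET you move the cacheable portion of the page into a user control and put the OutputCache directive on the control instead of the page. The control name below is illustrative:

```aspx
<%-- ProductList.ascx: only this control's rendered output is cached;
     the rest of the hosting page renders on every request. --%>
<%@ Control Language="C#" %>
<%@ OutputCache Duration="300" VaryByParam="ID" %>
```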

Steps to Configure NCache Output Caching Provider

1. Register NCache as ASP.NET Output Cache Provider: Modify your ASP.NET application’s web.config to register the NCache output caching provider as follows:


<caching>
  <outputCache defaultProvider="NOutputCacheProvider">
    <providers>
      <add name="NOutputCacheProvider"
           type="NCOutputCache.NOutputCacheProvider"
           exceptionsEnabled="true" enableLogs="false"
           cacheName="mypartitionofReplicaCache" />
    </providers>
  </outputCache>
</caching>

<compilation debug="true" targetFramework="4.0">
   <assemblies>
      <add assembly="Alachisoft.NCache.OutputCache,
           Version=4.1.0.0, Culture=neutral"/>
   </assemblies>
</compilation>

2. Add ASP.NET Output Cache tag: Add the following OutputCache directive to those pages whose output you want to cache.


<%@ OutputCache VaryByParam="ID" Duration="300" %>

By the way, ASP.NET versions earlier than 4.0 do not provide support for custom Output Cache providers. Therefore, to support these earlier versions, NCache has also implemented another version of its ASP.NET Output Cache provider using an HttpModule. This HttpModule-based provider enables you to use a distributed cache to store rendered ASP.NET page output even if your application uses an ASP.NET version earlier than 4.0.

In summary, by using NCache output caching provider you can easily boost your ASP.NET application response time and can reduce database load.

Download NCache Trial | NCache Details


How to Share Different Object Versions using Distributed Cache?

It is a fact of life that organizations have different applications using different versions of the same classes because they were developed at different points in time and are unable to keep up. It is also a fact of life that these applications often need to share data with each other.

You can do this through the database.  However, that is slow and not scalable. What you really want is to share objects through a distributed cache that is fast and scalable. But, object sharing at runtime immediately raises the issue of version compatibility.

One way to do this is through XML in which you can transform one version of an object to another. But it is extremely slow. Another way is to implement your own custom transformation that takes one object version and transforms it to another version. But, you have to maintain this, which is a lot of effort for you.

Wouldn’t it be nice if the distributed cache somehow took care of version compatibility for you? Well, NCache does exactly that. NCache provides you a binary-level object transformation between different versions. You can map different versions through an XML configuration file and NCache understands how to transform from one version to another.

Additionally, since NCache stores all these different versions in a binary format (rather than XML), the data size stays very compact and small and therefore fast. In a high traffic environment, object size adds up to be a lot of extra bandwidth consumption, which has its own cost associated with it.

Here is an example of NCache config.ncconf with class version mapping:

<cache-config name="myInteropCache" inproc="False" config-id="0"
    last-modified="" type="clustered-cache" auto-start="False">
...
    <data-sharing>
      <type id="1001" handle="Employee" portable="True">
        <attribute-list>
          <attribute name="_Name" type="System.String" order="1"/>
          <attribute name="_Age" type="System.Int32" order="2"/>
          <attribute name="_Address" type="System.String" order="3"/>
          <attribute name="_ID" type="System.String" order="4"/>
          <attribute name="_PostalAddress" type="System.String" order="5"/>
        </attribute-list>
        <class name="DataModel.Employee:1.0.0.0" handle-id="1"
               assembly="DataModel, Version=1.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="_Name" type="System.String" order="1"/>
          <attribute name="_Age" type="System.Int32" order="2"/>
          <attribute name="_Address" type="System.String" order="3"/>
        </class>
        <class name="DataModel.Employee:2.0.0.0" handle-id="2"
               assembly="DataModel, Version=2.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="_ID" type="System.String" order="4"/>
          <attribute name="_Name" type="System.String" order="1"/>
          <attribute name="_Age" type="System.Int32" order="2"/>
          <attribute name="_PostalAddress" type="System.String" order="3"/>
        </class>
      </type>
    </data-sharing>
...
  </cache-config>

How does NCache do it?

In the config.ncconf file that you see above, you’ll notice that the Employee class has a set of attributes defined first. These are version-independent attributes and appear in all versions. This is actually a superset of all attributes that appear in different versions. Below that, you specify version-specific attributes and map them to the version-independent attributes above.

Let’s say that you saved Employee version 1.0.0.0, which had a subset of the Employee version 2.0.0.0. Now, when another application tries to fetch the same Employee, but it wants to see it as version 2.0.0.0, NCache knows which version 2.0.0.0 attributes to fill with data and which ones to leave blank.

Secondly, in the above sample config you will notice that Employee version 2.0.0.0 does not have the Address field in it even though version 1.0.0.0 has it. So, in this case, when NCache reads an Employee 1.0.0.0 stored in the cache and transforms it to Employee version 2.0.0.0, it knows not to copy the Address field because it is not there in the newer version.
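To make this flow concrete, here is a hedged sketch of the two applications involved. The cache key and the property names are illustrative; the cache calls are the standard NCache Add/Get API:

```csharp
// Application A, compiled against DataModel 1.0.0.0
Employee emp = new Employee();   // v1 has _Name, _Age, _Address
emp.Name = "John Smith";
emp.Age = 35;
emp.Address = "12 Main St";
_cache.Add("emp:1001", emp);

// Application B, compiled against DataModel 2.0.0.0
// NCache transforms the cached v1 object using the mapping above:
// _Name and _Age are filled; _ID and _PostalAddress are left blank/default.
Employee emp2 = (Employee)_cache.Get("emp:1001");
```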

There are many other scenarios that NCache handles seamlessly for you. Please read the online product documentation for more detail on this.

Finally, the best part in all of this is that you don’t have to write any serialization code or make any code changes to your application in order to use this NCache feature. NCache has implemented a runtime code generation mechanism, which generates in-memory serialization and deserialization code of your classes at runtime, and uses the compiled form which is very fast.

In summary, using NCache you can now share different object versions between your applications without even modifying your application code.

Download NCache Trial | NCache Details


Query a Distributed Cache with SQL-Like Aggregate Functions

Today, distributed caches are widely used to achieve scalability and performance in high traffic applications. A distributed cache offloads your database servers by serving cached data from an in-memory store. In addition, a few distributed caches also provide a SQL-like query capability with which you can query your distributed cache the way you query your database, e.g. “SELECT employee WHERE employee.city=‘New York’”.

First of all, most distributed caches don’t even provide SQL-like querying capabilities. Even the few that do have very limited support for it: they only allow searching the distributed cache on simple criteria. However, there are several scenarios where you need results based on aggregate functions, e.g. “SELECT COUNT(employee) WHERE salary > 1000” or “SELECT SUM(salary) WHERE employee.city = ‘New York’”. To achieve this, you have to first query the distributed cache and then calculate the aggregate function on the fetched cache data.

This approach has two major drawbacks. First, you have to execute the query on the distributed cache, which involves fetching all the matching data from the distributed cache to the cache client. This data may vary from MBs to GBs, and the operation becomes even more expensive when you are also paying for the consumed network bandwidth. Moreover, you usually don’t need this data once you are done with the aggregate function calculations.

The second drawback is that it involves custom programming for the aggregate function calculation. This adds extra man-hours, and most complex scenarios still cannot be covered. It would be much nicer if you could continue developing your application for the purpose it is being built, and not worry about designing and implementing these extra features yourself.

These are the reasons why NCache gives you the flexibility to query the distributed cache using aggregate functions like COUNT, SUM, MIN, MAX and AVG as part of its Object Query Language (OQL). Using NCache OQL aggregate functions, you can easily perform the required aggregate calculations inside the distributed cache domain. This approach not only avoids the network traffic, but also gives you much better performance in terms of aggregate function calculation, because all the selections and calculations are done inside the distributed cache domain and no network trips are involved.

Here is the code sample to search NCache using OQL aggregate queries:

public void Main(string[] args)
{
    ...
    NCache.InitializeCache("myPartitionReplicaCache");

    string query = "SELECT COUNT(Business.Product) WHERE " +
                   "this.ProductID > ? AND this.Price < ?";

    Hashtable values = new Hashtable();
    values.Add("ProductID", 100);
    values.Add("Price", 50);

    // Fetch the cache items matching this search criteria
    IDictionary searchResults = _cache.SearchEntries(query, values);
    ...
}

To further reduce query execution time, NCache runs the SQL-like query in parallel by distributing it to all the cache servers, much like a map-reduce mechanism. In addition, you can use NCache OQL aggregate queries in both .NET and Java applications.
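As a hedged sketch of another aggregate, here is what a SUM query might look like in the same style as the sample above. The query text and the assumption that the aggregate value comes back as the single entry of the returned dictionary are illustrative; check the NCache documentation for the exact result shape:

```csharp
// Illustrative: sum of salaries for one city, computed inside the cache.
string sumQuery = "SELECT SUM(Business.Employee.Salary) WHERE this.City = ?";
Hashtable args = new Hashtable();
args.Add("City", "New York");

IDictionary result = _cache.SearchEntries(sumQuery, args);
// Assumed: the aggregate value is the dictionary's single entry.
foreach (DictionaryEntry e in result)
    Console.WriteLine("SUM(Salary) = {0}", e.Value);
```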

In summary, NCache provides you not only scalability and performance, but also the flexibility of searching the distributed cache using SQL-like aggregate functions.

Download NCache Trial | NCache Details


Class Versioning in Runtime Data Sharing with Distributed Cache

Today many organizations use .NET and Java technologies to develop different high traffic applications. At the same time, these applications not only need to share data with each other, but also want to support runtime sharing of different versions of the same class for backward compatibility and cost reduction.

The most common way to support runtime sharing of different class versions between .NET and Java applications is through XML serialization. But, as you know, XML serialization is an extremely slow and resource-hungry process. It involves XML validation, parsing, and transformations, which really hamper your application performance and use extra resources in terms of memory and CPU.

The other approach widely used to support sharing of different class versions between .NET and Java is through database. However, the problem with this approach is that it’s slow and also doesn’t scale very well with the growing transactional load. Therefore, your database quickly becomes a scalability bottleneck because you can linearly scale your application tier by adding more application servers, but you cannot do the same at the database tier.


This is where a distributed cache like NCache comes in really handy. NCache provides you a binary-level object transformation between different versions not only of the same technology but also between .NET and Java. You can map different versions through an XML configuration file, and NCache understands how to transform from one version to another.

NCache’s class version sharing framework implements a custom interoperable binary serialization protocol. Based on the specified mapping, it generates the byte stream in such a format that any new or old version of the same class can easily deserialize it, regardless of whether it was developed in .NET or Java.

Here is an example of NCache config.ncconf with class version mapping:

<cache-config name="InteropCache" inproc="False" config-id="0" last-modified="" type="local-cache" auto-start="False">
 ...
    <data-sharing>
      <type id="1001" handle="Employee" portable="True">
        <attribute-list>
          <attribute name="Name" type="Java.lang.String" order="1"/>
          <attribute name="SSN" type="Java.lang.String" order="2"/>
          <attribute name="Age" type="int" order="3"/>
          <attribute name="Address" type="Java.lang.String" order="4"/>
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
          <attribute name="Address" type="System.String" order="7"/>
        </attribute-list>
        <class name="com.samples.objectmodel.v1.Employee:1.0" handle-id="1"
   assembly="com.jar" type="Java">
          <attribute name="Name" type="Java.lang.String" order="5"/>
          <attribute name="SSN" type="Java.lang.String" order="2"/>
        </class>
        <class name="com.samples.objectmodel.v2.Employee:2.0" handle-id="2"
   assembly="com.jar" type="Java">
          <attribute name="Name" type="Java.lang.String" order="5"/>
          <attribute name="Age" type="int" order="6"/>
          <attribute name="Address" type="Java.lang.String" order="7"/>
        </class>
        <class name="Samples.ObjectModel.v2.Employee:2.0.0.0" handle-id="3"
   assembly="ObjectModelv2, Version=2.0.0.0, Culture=neutral,
   PublicKeyToken=null" type="net">
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
          <attribute name="Address" type="System.String" order="7"/>
        </class>
        <class name="Samples.ObjectModel.v1.Employee:1.0.0.0" handle-id="4"
               assembly="ObjectModelv1, Version=1.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
        </class>
      </type>
    </data-sharing>
    ...
  </cache-config>

How does NCache do Class Versioning in Runtime Data Sharing?

In the config.ncconf file that you see above, you’ll notice that the Employee class has a set of attributes defined first. These are version-independent attributes and appear in all versions of the .NET and Java classes. This is actually a superset of all attributes that appear in different versions. Below that, you specify version-specific attributes and map them to the version-independent attributes above.

Now, let’s say that you saved a .NET Employee version 1.0.0.0. When another application tries to fetch the same Employee but wants to see it as Java version 1.0 or 2.0, NCache knows which attributes to fill with data and which ones to leave blank, and vice versa.

There are many other scenarios that NCache handles seamlessly for you. Please read the online product documentation for how NCache runtime data sharing works.

Finally, the best part is that you don’t have to write any serialization and deserialization code or make any code changes to your application in order to use this NCache feature. NCache has implemented a runtime code generation mechanism, which generates the in-memory serialization and deserialization code of your interoperable classes at runtime, and uses the compiled form so it is super-fast.

In summary, using NCache you can now share different class versions between your .NET and Java applications without even modifying your application code.

Download NCache Trial | NCache Details


Distributed Cache Continuous Query for Developing Real Time Applications

High traffic real-time applications are widely used in the enterprise environment. In real-time applications, information is made available to you the moment it’s produced, and any delay in doing so can cause serious financial loss. The main challenge these high traffic real-time applications face is getting notified about any changes in a data set so that the corresponding views can be updated.

But, these high traffic real-time applications cannot rely on a traditional database, because databases only support queries on residing data; to get an updated data set, you have to re-execute the query after a specific interval, which is not instantaneous. This periodic polling also causes performance and scalability issues, because you are making expensive database trips even when there are no changes in the data set.

SqlDependency is provided by Microsoft in SQL Server, and Oracle on Windows also supports it. SqlDependency allows you to specify a SQL statement, and SQL Server monitors this data set in the database for any additions, updates, or deletions and notifies you when this happens. But the problem with SqlDependency is that once it fires, it gets unregistered from the database. Therefore, all future changes in your data set are missed and you don’t get notified.

Moreover, SqlDependency does not provide details of the record where the change occurred. So, to find the change in the data set, you have to fetch the complete data set again instead of directly fetching only the specific record that was added, updated, or removed. And, of course, this is not efficient.

In addition to the limitations of SqlDependency, your database is also unable to cope with the transactional demands of these high traffic real-time applications, where tens of thousands of queries are executed every second and the database quickly becomes a scalability bottleneck. This is because, although you can linearly scale your application tier by adding more application servers, you cannot do the same with your database server.

This is where a distributed cache like NCache comes in because it allows you to cache data and reduce those expensive database trips which are causing scalability bottleneck.

NCache has a powerful Continuous Query capability that enables you to register a SQL-like query with the cache cluster. This Continuous Query remains active in the cache cluster, and if there is any change in this query’s data set, NCache notifies your real-time application. This approach saves you from periodically executing the same expensive query against the database to poll for changes.

Here is sample code for NCache Continuous Query:

public void Main(string[] args)
{
    ...
    NCache.InitializeCache("myPartitionReplicaCache");
    string queryString = "SELECT NCacheQuerySample.Business.Product WHERE " +
                         "this.ProductID > ?";

    Hashtable values = new Hashtable();
    values.Add("ProductID", 100);
    ...
    onItemAdded = new ContinuousQueryItemAddedCallback(OnQueryItemAdded);
    onItemUpdated = new ContinuousQueryItemUpdatedCallback(OnQueryItemUpdated);
    onItemRemoved = new ContinuousQueryItemRemovedCallback(OnQueryItemRemoved);

    ContinuousQuery query = new ContinuousQuery(queryString, values);
    query.RegisterAddNotification(onItemAdded);
    query.RegisterUpdateNotification(onItemUpdated);
    query.RegisterRemoveNotification(onItemRemoved);
    _cache.RegisterCQ(query);
    ...
}
//data set item is removed
void OnQueryItemRemoved(string key){
   ...
   Console.WriteLine("Removed key: {0}", key);
   ...
}
//data set item is updated
void OnQueryItemUpdated(string key){
   ...
   Console.WriteLine("Updated key: {0}", key);
   ...
}
//data set item is added
void OnQueryItemAdded(string key){
   ...
   Console.WriteLine("Added key: {0}", key);
   ...
}

And, unlike SqlDependency, NCache Continuous Query remains active and doesn’t get unregistered on each change notification. So, your high traffic real time applications continue to be notified across multiple changes.

NCache Continuous Query also provides you the flexibility to be notified separately on ADD, UPDATE, and DELETE. And, you can specify these events at runtime even after creating a Continuous Query, something SqlDependency doesn’t allow you to do. This also reduces events traffic from the cache cluster to your real time application.

In summary, NCache provides you a very powerful event driven Continuous Query that no other database has. And, NCache is also linearly scalable for your high traffic real time applications.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


.NET and Java Data Sharing with Binary Serialization

Many organizations today use a variety of .NET and Java technologies to develop different high traffic applications. At the same time, these organizations have a need to share data at runtime between .NET and Java applications.

One way to share data is through the database, but that is slow and also doesn’t scale very well. A much better approach is to use an in-memory distributed cache as a common data store between multiple applications. It’s fast and also scales linearly.

As you know, Java and .NET types are not compatible. Therefore, you end up transforming the data into XML for sharing. Additionally, most distributed caches either don’t provide any built-in mechanism for sharing data between .NET and Java applications or only provide XML based data sharing. If a cache doesn’t provide a built-in data sharing mechanism, then you have to define the XML schema and use a third party XML serialization to construct and read all the XML data.

But, XML serialization is an extremely slow and resource-hungry process. XML serialization involves XML validation, parsing, and transformations, which really hamper application performance and use extra resources in terms of memory and CPU.

Distributed cache by design is used to improve your application performance and scalability. It allows your applications to cache your application data and reduce those expensive database trips that are causing a scalability bottleneck. And, XML based data sharing goes against these performance and scalability goals for your application. If you increase transaction load on your application, you’ll see that XML manipulation ends up becoming a performance bottleneck.

A much better way is to share data between .NET and Java applications at the binary level, where you would not have to do any XML transformations. NCache is a distributed cache that provides runtime data sharing between .NET and Java applications using binary serialization.

How does NCache provide runtime data sharing between .NET and Java?

Well, before that, you need to understand why native .NET and Java binary serialization are not compatible. Java and .NET each have their own binary serialization that interprets objects in its own way, and they have different type systems. Moreover, the serialized byte stream of an object also includes the data type details as a fully qualified name, which again differs between .NET and Java. This, of course, also hinders data type compatibility between .NET and Java.

To handle this incompatibility, NCache has implemented its own interoperable binary serialization that is common to both .NET and Java. NCache interoperable binary serialization identifies objects by type-ids that are consistent across .NET and Java, instead of fully qualified names, which are technology specific. This approach not only provides interoperability but also reduces the size of the generated byte stream. Secondly, NCache interoperable binary serialization implements a custom protocol that generates the byte stream in a format that both its .NET and Java implementations can easily interpret.

Here is an example of NCache config.ncconf with data interoperable class mapping:

  <cache-config name="InteropCache" inproc="False" config-id="0" last-modified=""
   type="clustered-cache" auto-start="False">
    ...
    <data-sharing>
      <type id="1001" handle="Employee" portable="True">
        <attribute-list>
          <attribute name="Age" type="int" order="1"/>
          <attribute name="Name" type="java.lang.String" order="2"/>
          <attribute name="Salary" type="long" order="3"/>
          <attribute name="Age" type="System.Int32" order="4"/>
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Salary" type="System.Int64" order="6"/>
        </attribute-list>
        <class name="jdatainteroperability.Employee:0.0" handle-id="1"
               assembly="jdatainteroperability.jar" type="java">
          <attribute name="Age" type="int" order="1"/>
          <attribute name="Name" type="java.lang.String" order="2"/>
          <attribute name="Salary" type="long" order="3"/>
        </class>
        <class name="DataInteroperability.Employee:1.0.0.0" handle-id="2"
               assembly="DataInteroperability, Version=1.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="Age" type="System.Int32" order="1"/>
          <attribute name="Name" type="System.String" order="2"/>
          <attribute name="Salary" type="System.Int64" order="3"/>
        </class>
      </type>
    </data-sharing>
    ...
  </cache-config>

As a result, NCache can serialize a .NET object and deserialize it in Java, as long as a compatible Java class is available. This binary-level serialization is more compact and much faster than any XML transformation.

Finally, the best part of all this is that you don't have to write any serialization code or make any code changes to your application to use this feature. NCache implements a runtime code generation mechanism that generates the in-memory serialization and deserialization code of your interoperable classes at runtime and uses the compiled form, so it is very fast.

In summary, with NCache you can scale and boost your application's performance by avoiding slow, resource-hungry XML serialization.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.



JSP Session Persistence and Replication with Distributed Cache

As you know, JSP applications use a Session object to handle multiple HTTP requests. This is because the HTTP protocol is stateless, and the Session maintains the user's state across multiple HTTP requests.

In a single web server configuration there is no issue, but as soon as you have a multi-server load-balanced web farm running a JSP application, you immediately run into the question of where to keep the Session. A load balancer ideally wants to route each HTTP request to the most appropriate web server. But if the Session is kept on only one web server, you are forced to use the load balancer's sticky session feature, which sends all requests for a given Session only to the web server where that Session resides.

This means one web server might get overwhelmed even while other web servers sit idle, simply because the Sessions of its users reside on it. This, of course, hampers scalability. Additionally, keeping the Session object on only one web server means losing the Session data if that web server goes down for any reason.

To avoid this data loss, you need a mechanism that replicates the Session to more than one web server. The leading Servlet engines (Tomcat, WebLogic, WebSphere, and JBoss) all provide some form of Session persistence, and even some form of Session replication, but all of them have issues. For example, file-based and JDBC persistence are slow and cause performance and scalability problems. Session replication is also weak because it replicates all sessions to all web servers, creating unnecessary copies when you have more than two web servers, even though fault tolerance can be achieved with only two copies.


In such situations, a Java distributed cache like NCache is your best bet to ensure that session persistence across multiple servers in the web cluster is done intelligently and without hampering scalability. NCache has a caching topology called Partition-Replica that not only provides high availability and failover through replication but also provides a large in-memory session store through data partitioning. Data partitioning lets you cache large amounts of data by breaking the cache into partitions and storing each partition on a different cache server in the cluster.

The NCache Partition-Replica topology replicates the session data of each node to only one other node in the cluster. This approach eliminates the performance cost of replicating session data to all server nodes without compromising reliability.

In addition, the session persistence provided by web servers and Servlet engines (Apache, Tomcat, WebLogic, and WebSphere) consumes the resources and memory of the web cluster itself. This hinders application performance because the web cluster nodes responsible for processing client requests are also doing the extra work of session replication and in-memory storage. With NCache, you can instead run the cache on separate boxes outside your web cluster, freeing web cluster resources to handle more client requests.
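To see why one replica per partition is enough, here is a toy Java sketch of a ring-style replica assignment. The server names and the round-robin assignment are illustrative assumptions, not NCache's actual partitioning algorithm.

```java
import java.util.*;

public class ReplicaRingSketch {
    // Toy model: each server's partition is replicated to the next server in
    // the ring, so the total memory cost is 2x the data (not Nx, as with
    // replicate-to-all), yet any single node failure loses nothing.
    static Map<String, String> replicaOf(List<String> servers) {
        Map<String, String> replica = new LinkedHashMap<>();
        for (int i = 0; i < servers.size(); i++) {
            replica.put(servers.get(i), servers.get((i + 1) % servers.size()));
        }
        return replica;
    }

    public static void main(String[] args) {
        List<String> servers = Arrays.asList("web1", "web2", "web3");
        // Each partition exists in exactly 2 copies, versus 3 copies each if
        // every session were replicated to every server.
        System.out.println(replicaOf(servers));
    }
}
```

Two copies are the minimum needed to survive one node failure, which is why replicating each partition to exactly one other node keeps reliability while capping memory overhead.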

For JSP session persistence, NCache implements a session module, NCacheSessionProvider, as a Java Servlet filter. The NCache JSP Servlet filter dynamically intercepts requests and responses and handles session persistence behind the scenes, so you don't have to change any of your JSP application code.

Here is a sample NCache JSP Servlet filter configuration you need to define in your application's deployment descriptor (web.xml) to use NCache for JSP Session persistence:


<filter>
  <filter-name>NCacheSessionProvider</filter-name>
  <filter-class>
    com.alachisoft.ncache.web.sessionstate.NSessionStoreProvider
  </filter-class>
  <init-param>
    <param-name>cacheName</param-name>
    <param-value>PORCache</param-value>
  </init-param>
  <init-param>
    <param-name>configPath</param-name>
    <param-value>/usr/local/ncache/config/</param-value>
  </init-param>
</filter>

<filter-mapping>
  <filter-name>NCacheSessionProvider</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Hence, NCache provides a much better mechanism for session persistence in a web cluster, along with a boost in performance and scalability.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.



Scale Java Spring Applications with Distributed Cache

Spring is a popular lightweight dependency injection and aspect-oriented container and framework for Java. It reduces the overall complexity of J2EE development and promotes high cohesion and loose coupling. Because of these benefits, many developers use Spring to build high-traffic applications, from small projects to enterprise systems.

Here is an example of a Spring Java application:

import java.util.List;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MySampleApp {
   ...
   // Load the bean definitions from spring.xml
   ApplicationContext ctx = new ClassPathXmlApplicationContext("spring.xml");
   Department dept = (Department) ctx.getBean("department");

   List<Employee> employees = dept.getEmployees();
   for (Employee emp : employees) {
      System.out.println("Employee Name : " + emp.getName());
   }
   ...
}

But these high-traffic Spring applications face a major scalability problem. Although they can scale by adding more servers to the application server farm, the database server cannot scale in the same fashion to handle the growing transaction load.

In such situations, a Java distributed cache is your best bet for handling the database scalability problem. It offloads your database by reducing the expensive database trips that cause scalability bottlenecks, and it improves application performance by serving data from an in-memory cache store instead of the database.

NCache is a Java distributed cache that implements the Spring cache provider. It introduces a generic Java caching mechanism with which you can easily cache the output of the CPU-intensive, time-consuming, and database-bound methods of your Spring application. This not only reduces database load but also cuts down the number of method executions and improves application performance.


NCache provides a set of annotations, including @Cacheable, to handle cache-related tasks. Using these annotations, you can easily mark the methods to be cached, along with cache expiration, key generation, and other strategies.

When a Spring method marked @Cacheable is invoked, NCache checks the cache store to see whether the method has already been executed with the given set of parameters. If it has, the results are returned from the cache; otherwise, the method is executed and its results are cached as well. This way, expensive CPU-, I/O-, and database-bound methods are executed only once, and their results are reused without re-executing the method.

A Java distributed cache is essentially an in-memory key-value store, so each method invocation must translate to a suitable unique key for the lookup. To generate these cache keys, the NCache Spring cache provider uses a combination of the class name, the method, and the arguments. However, you can also implement a custom key generator via the com.alachisoft.ncache.annotations.key.CacheKeyGenerator interface.
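As an illustration of both ideas, here is a minimal, self-contained Java sketch of a class-name/method/arguments key scheme combined with the check-then-execute flow behind @Cacheable. The key format and the in-process map are assumptions for demonstration; NCache's actual key generator and distributed store differ.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheKeySketch {
    // Illustrative key scheme: class name + method name + arguments.
    static String cacheKey(Class<?> cls, String method, Object... args) {
        return cls.getName() + "." + method + Arrays.deepToString(args);
    }

    static final Map<String, Object> cache = new ConcurrentHashMap<>();
    static int dbCalls = 0; // counts the "expensive" executions

    // Cache-aside flow behind @Cacheable: look up by key, execute on miss.
    static Object findMessage(int id) {
        String key = cacheKey(CacheKeySketch.class, "findMessage", id);
        return cache.computeIfAbsent(key, k -> {
            dbCalls++;                      // pretend this hits the database
            return "message-" + id;
        });
    }

    public static void main(String[] args) {
        findMessage(7);
        findMessage(7); // same class/method/arguments -> same key -> cache hit
        System.out.println("database executions: " + dbCalls); // stays at 1
    }
}
```

Including the arguments in the key is what lets calls with different parameters cache separate results while repeated identical calls are served from the cache.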

Here are the steps to integrate the NCache cache provider into your application:

    1. Add Cache Annotations: Add NCache Spring annotations to the methods that require caching. Here is a sample Spring method using an NCache annotation:
@Cacheable(
   cacheID = "r2-DistributedCache",
   slidingExpirator = @SlidingExpirator(expireAfter = 15000)
)
public Collection<Message> findAllMessages()
{
   ...
   Collection<Message> values = messages.values();
   Set<Message> result = new HashSet<Message>();
   synchronized (result) {
      Iterator<Message> iterator = values.iterator();
      while (iterator.hasNext()) {
         result.add(iterator.next());
      }
   }
   ...
   return Collections.unmodifiableCollection(result);
}

    2. Register the NCache Spring Provider: Update your Spring application's spring.xml to register the NCache Spring provider as follows:
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:p="http://www.springframework.org/schema/p"
 xmlns:context="http://www.springframework.org/schema/context"
 xmlns:oxm="http://www.springframework.org/schema/oxm"
 xmlns:mvc="http://www.springframework.org/schema/mvc"
 xmlns:ncache="http://www.alachisoft.com/ncache/annotations/spring"
 xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
 http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
 http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd
 http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
 http://www.alachisoft.com/ncache/annotations/spring http://www.alachisoft.com/ncache/annotations/spring/ncache-spring-1.0.xsd">

   <ncache:annotation-driven>
      <ncache:cache id="r1-LocalCache" name="myCache"/>
      <ncache:cache id="r2-DistributedCache" name="myDistributedCache"/>
   </ncache:annotation-driven>

</beans>

Hence, by using NCache as a Spring cache provider, you can scale your Spring applications linearly and boost performance.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.



Using NCache as Hibernate Second Level Java Cache

Hibernate is an object-relational mapping library for the Java language. It maps Java classes to database tables and shortens the overall development cycle. Because of the benefits Hibernate provides, more and more high-transaction applications are being developed with it. Here is an example of Hibernate in a Java application.


import org.hibernate.*;

public class HibernateSample
{
   ...
   Session session = factory.openSession();
   Transaction tx = session.beginTransaction();
   Query query = session.createQuery("from Customer c");
   Iterator it = query.list().iterator();
   while (it.hasNext()) {
      Customer customer = (Customer) it.next();
      ...
   }
   tx.commit();
   session.close();
}

But these high-traffic Hibernate applications run into a major scalability issue: although they can scale at the application tier, their database or data storage cannot scale with the growing transaction load.

Java distributed caching is the best technique for solving this problem because it reduces the expensive database trips that cause the scalability bottleneck. For this reason, Hibernate provides a caching infrastructure that includes a first-level and a second-level cache.

The Hibernate first-level cache is a basic standalone (in-proc) cache associated with the Session object and limited to the current session only. The problem is that it does not allow objects to be shared between different sessions. If different sessions need the same object, each of them makes a database trip to load it, which increases database traffic and causes a scalability problem. Moreover, when the session is closed, all of its cached data is lost, and you have to fetch it from the database again next time.

High-traffic Hibernate applications that use only the first-level cache also face a cross-server cache synchronization problem when deployed in a web farm. In a web farm, each node runs a web server (Apache, Oracle WebLogic, etc.) with multiple worker processes serving requests, and the Hibernate first-level cache in each worker process holds its own version of the same data, cached directly from the database.


That is why Hibernate provides a second-level cache with a provider model. The Hibernate second-level cache allows you to plug in a third-party distributed (out-proc) caching provider to cache objects across sessions and servers. The second-level cache is associated with the SessionFactory object and is available to the entire application, instead of a single session.

When you enable the Hibernate second-level cache, you end up with two caches: the first-level cache and the second-level cache. Hibernate always tries to retrieve objects from the first-level cache first; if that fails, it tries the second-level cache. If that also fails, the objects are loaded directly from the database and cached as well. This configuration significantly reduces database traffic because most of the data is served by the second-level distributed cache.
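The lookup order can be sketched with two plain maps standing in for the two cache levels. In a real deployment the second level would be a shared distributed cache such as NCache, not a static map; this is a conceptual sketch only.

```java
import java.util.HashMap;
import java.util.Map;

public class TwoLevelCacheSketch {
    // Second-level: shared across all "sessions" (stands in for NCache).
    static final Map<String, Object> secondLevel = new HashMap<>();
    // First-level: private to one session, discarded when the session closes.
    final Map<String, Object> firstLevel = new HashMap<>();
    static int dbLoads = 0;

    Object get(String key) {
        Object v = firstLevel.get(key);      // 1. try the first-level cache
        if (v == null) {
            v = secondLevel.get(key);        // 2. try the second-level cache
            if (v == null) {
                dbLoads++;                   // 3. load from the "database"
                v = "row:" + key;
                secondLevel.put(key, v);     // populate the shared cache
            }
            firstLevel.put(key, v);          // populate the session cache
        }
        return v;
    }

    public static void main(String[] args) {
        TwoLevelCacheSketch sessionA = new TwoLevelCacheSketch();
        TwoLevelCacheSketch sessionB = new TwoLevelCacheSketch();
        sessionA.get("customer:42"); // misses both levels -> database load
        sessionB.get("customer:42"); // served by the shared second level
        System.out.println("database loads: " + dbLoads); // stays at 1
    }
}
```

The second session never touches the database, which is exactly the traffic reduction the second-level cache provides across sessions and servers.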

NCache implements a Java Hibernate second-level caching provider by extending org.hibernate.cache.CacheProvider. You can plug the NCache Java Hibernate distributed caching provider into your Hibernate application without any code changes. NCache lets you scale your Hibernate application to multi-server configurations without the database becoming a bottleneck. It also provides enterprise-level distributed caching features such as cache size management and data synchronization across servers and with the database.

You can plug in the NCache Java Hibernate caching provider by simply modifying your hibernate.cfg.xml and ncache.xml as follows:

<hibernate-configuration>
  <session-factory>
    <property name="cache.provider_class">
      alachisoft.ncache.integrations.hibernate.cache.NCacheProvider,
      alachisoft.ncache.integrations.hibernate.cache
    </property>
  </session-factory>
</hibernate-configuration>

<ncache>
  <region name="default">
    <add key="cacheName" value="myClusterCache"/>
    <add key="enableCacheException" value="false"/>
    <class name="hibernator.BLL.Customer">
      <add key="priority" value="1"/>
      <add key="useAsync" value="false"/>
      <add key="relativeExpiration" value="180"/>
    </class>
  </region>
</ncache>

Hence, by using the NCache Java Hibernate distributed cache provider, you can linearly scale your Hibernate applications without any code changes.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

