Event Driven .NET and Java Data Sharing through Distributed Cache

These days, many companies run both .NET and Java applications in their enterprise environment, and these applications often need to share data with each other at runtime. The most common way they do that today is by storing the data in a database and having the other application poll for it. Some companies also develop web services solely for the purpose of letting Java applications obtain data from .NET applications and vice versa.

The problem with the first approach is that data sharing cannot happen instantaneously because the “consumer” application only polls the database at some predetermined interval. It also has the performance and scalability issues of any application that hits the database for its data. As you know, a database cannot scale the way today’s applications can: although you can linearly scale the application tier by adding more application servers, you cannot do the same at the database tier.

The second approach requires a lot of custom programming and essentially changes your application’s architecture just so you can share data with other applications, whether they’re .NET or Java. It would be much nicer if you could keep developing each application for the purpose it is being built for and not worry about creating a custom framework for data sharing.

Runtime Data Sharing through a Distributed Cache

Ideally, you would want to have an event driven model where a .NET application can be notified whenever a Java application has some data for it and vice versa. But, as you know, .NET and Java are not inherently compatible for this kind of use.

This is where a distributed cache like NCache comes in really handy. NCache provides events that are platform independent and can be shared between .NET and Java. NCache also provides binary-level data type compatibility between .NET and Java. This allows you to receive not only events but also the corresponding data in the form of objects, all without going through any XML-based transformation for data sharing purposes.

The NCache event notification framework enables you to register to be notified when different types of events occur within the cache cluster. This way, whenever any changes are made to the data, whether by .NET or Java applications, your application gets notified. Here is sample code using an NCache item-based event for data sharing in Java:

import com.alachisoft.ncache.web.caching.*;

public void AddToCache()
{
    CacheEventListener onItemRemoved = new CacheEventListener();
    Cache cache = NCache.initializeCache("PORCache");

    Employee emp = new Employee();
    emp.setDept("Mechanical");

    // Register a remove callback on this item so we are notified when it leaves the cache
    CacheItem cItem = new CacheItem(emp);
    cItem.setItemRemoveCallback(onItemRemoved);
    cache.insert("EMP-1000-ENG", cItem);
}

public class CacheEventListener implements CacheItemRemovedCallback
{
    ...
    public void itemRemoved(String key, Object value,
                            CacheItemRemovedReason reason)
    {
        // Cast the value (the removed object), not the key
        Employee emp = (Employee) value;
        System.out.println("Employee Removed " + key + " Dept " + emp.getDept());
    }
    ...
}

NCache provides different cache item-level notifications such as item-added, item-removed and item-updated. Applications can register interest in various cached item keys (which may or may not exist in the cache yet), and they are notified separately whenever that item is added, updated or removed from the distributed cache by anybody for any reason. For example, even if an item is removed due to expiration or eviction, the item-removed event notification is fired.

Both .NET and Java applications can register interest for the same cached items and be notified about them. The notification includes the affected cached item as well, which is transformed into either .NET or Java, depending on the type of application.

Here is sample code using an NCache item-based event for data sharing in .NET:

public void AddToCache()
{
    Cache cache = NCache.InitializeCache("PORCache");

    Employee emp = new Employee();
    emp.Name = "David Rox";
    emp.Dept = "Engineering";
    ...

    cache.Insert("EMP-1000-ENG", emp, null,
                 Cache.NoAbsoluteExpiration,
                 Cache.NoSlidingExpiration, CacheItemPriority.Default);

    // Register callbacks to get notified of changes related to the provided key
    cache.RegisterKeyNotificationCallback("EMP-1000-ENG",
        new CacheItemUpdatedCallback(OnItemUpdated),
        new CacheItemRemovedCallback(OnItemRemoved));
}
...
void OnItemRemoved(string key, object value, CacheItemRemovedReason reason)
{
    // Item has been removed; the removed object arrives as the value parameter
    Employee emp = (Employee) value;
    Console.WriteLine("Employee Removed {0}, Dept {1}", key, emp.Dept);
}
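
Both samples assume an Employee class available on each side. The class is not shown in the original post, so the following .NET sketch of its assumed shape is illustrative only; a matching Java class would expose the same fields through getters and setters, and NCache’s binary-level compatibility handles the conversion between the two:

[Serializable]
public class Employee
{
    // Only the members referenced in the samples above are shown
    public string Name { get; set; }
    public string Dept { get; set; }
}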

In summary, with NCache you can not only share data between .NET and Java applications at runtime but also use distributed events to notify applications of any change in that data.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


Storing ASP.NET Session State in Azure Distributed Cache

Microsoft Azure provides a platform for ASP.NET applications in the cloud. Very often, these applications are highly transactional and mission critical in nature. Therefore, it is very important that they can scale and that there is no data loss if a web server goes down at any time.

ASP.NET Session State needs to be stored somewhere, and its storage becomes a major performance and scalability bottleneck. In Microsoft Azure, you can store ASP.NET Session State InProc, in SQL Azure (database), in Azure Table Storage, or in a distributed cache.

InProc Option: The InProc session storage option doesn’t work well in the Microsoft Azure architecture. First of all, ASP.NET Session State is not shared between multiple instances of the Web Role in InProc mode. Secondly, you end up using sticky sessions in Microsoft Azure, which may result in uneven load distribution. Additionally, sticky sessions involve extra configuration on your part because Microsoft Azure doesn’t use sticky sessions by default. Moreover, any Web Role instance going down due to failure or maintenance results in session data loss, which is obviously not acceptable.

Azure Table Option: Azure Table Storage is a file-based ASP.NET Session State provider which is provided on an ‘as-is basis’ as a code sample, meaning it is not officially supported by Microsoft. It is intended for storing structured entities. And, even though it is a cheaper option, it is still not an ideal place to store ASP.NET Session State, primarily because of performance, as it is file based.

SQL Database Option: Microsoft Azure SQL Database can also be used as storage for ASP.NET Session State by using the conventional ASP.NET SQLServer mode. But, the ASP.NET Session State object is stored in the database as a BLOB, and relational databases were never really designed for BLOB storage. This causes performance issues and is definitely a major scalability bottleneck for your Microsoft Azure ASP.NET application.

Distributed Cache Option: A distributed cache provides ideal storage for ASP.NET Session State in Microsoft Azure. For example, you can use NCache for Azure, which is a Microsoft Azure distributed cache for .NET applications. It is extremely fast and more scalable than all the other Microsoft Azure options mentioned above, and it also replicates sessions so there is no data loss if a cache server ever goes down. Moreover, you eliminate all issues related to session sharing and gain even load balancing that ensures full utilization of all of your Azure Web Role instances.

How to Configure NCache for Azure ASP.NET Session State provider:

NCache for Azure implements an ASP.NET Session State provider that can be used by Microsoft Azure ASP.NET applications. NCache for Azure uses Microsoft Azure VMs and forms a dedicated caching tier. ASP.NET applications in Microsoft Azure can then be directed to use this Azure distributed cache for ASP.NET Session State storage.

And, the nice thing about the NCache for Azure ASP.NET Session State provider is that it doesn’t require any application code changes. Simply modify your application’s web.config file as follows to use NCache for Azure as your distributed cache for ASP.NET Session State.

   <assemblies>
      <add assembly="Alachisoft.NCache.SessionStoreProvider,
           Version=x.x.x.x, Culture=neutral,
           PublicKeyToken=CFF5926ED6A53769"/>
   </assemblies>

<sessionState cookieless="false" regenerateExpiredSessionId="true" mode="Custom"
              customProvider="NCacheSessionProvider" timeout="20">

   <providers>
      <add name="NCacheSessionProvider"
           type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
           sessionAppId="NCacheTest" cacheName= "TestCache"
           writeExceptionsToEventLog="false" />
   </providers>
</sessionState>

Here are a few important benefits you achieve when you use NCache for Azure as your Distributed Cache for storing ASP.NET Session State.

  • Linear Scalability and Performance: NCache for Azure is based on a dynamic clustering protocol which allows you to add more servers to your cache at runtime. Your application can scale out linearly by adding more servers to your Azure distributed cache as your application load grows, without changing the application architecture.
  • Session Replication: NCache for Azure provides reliability through replication. You can take application instances offline for maintenance, patching and new releases without having to worry about any session data loss.
  • High Availability: NCache for Azure provides fault tolerance and high availability because it is based on a hundred percent peer-to-peer architecture. You will not lose any data or suffer any application downtime if a cache node fails.

Conclusion: An Azure distributed cache such as NCache for Azure is the best option for storing ASP.NET Session State in Microsoft Azure, primarily because of its performance, scalability, reliability and high availability. The Microsoft Azure distributed cache offered by NCache for Azure is very easy to use and doesn’t require any application code changes.

Download NCache for Azure Trial | NCache for Azure Details


Share ASP.NET Session State across Multiple Azure Regions

Many high traffic ASP.NET applications in Microsoft Azure are deployed over multiple Microsoft Azure regions in order to handle geographically separated traffic. In these situations, the load balancer always sends traffic to the Microsoft Azure region closest to the user for faster response time.

In this scenario, you may run into a situation where you have to redirect some of your traffic from one Microsoft Azure region to another. This may happen because you have too much traffic in one Microsoft Azure region and another region is underutilized. Another reason may be that you need to bring a region down for maintenance.

When you redirect traffic, your users normally lose their ASP.NET sessions because your ASP.NET Session State is not available in the other Microsoft Azure region. And, this is obviously not good. Ideally, you want to redirect traffic without causing any interruptions for your users.

In Microsoft Azure, the only way you can achieve this is if you keep a common ASP.NET Session State storage across multiple Microsoft Azure regions. This allows you to redirect traffic without losing any ASP.NET Session State. But, this option has severe performance issues because a large percentage of ASP.NET sessions are being accessed across the WAN.

NCache for Azure is an extremely fast and scalable Microsoft Azure distributed cache for .NET applications. NCache for Azure provides an intelligent multi-region ASP.NET Session State support for your ASP.NET applications deployed in multiple Microsoft Azure regions.

NCache for Azure intelligently detects and then automatically moves your ASP.NET sessions from one Microsoft Azure region to another when a user request is redirected between regions. All subsequent requests are served from this new Microsoft Azure region. This allows your ASP.NET applications to seamlessly share ASP.NET sessions across Microsoft Azure regions without negatively impacting performance or causing session data loss.

NCache for Azure allows you to achieve this multi-region ASP.NET Session State capability by defining primary and secondary caches in each Microsoft Azure region. Additionally, you also specify a “sid-prefix” attribute which is prefixed to all session IDs in each Microsoft Azure region. This helps the NCache for Azure SSP module identify which ASP.NET sessions belong to which Microsoft Azure region, and NCache for Azure then decides to move ASP.NET sessions when a request is redirected to another Microsoft Azure region.

Here is a sample config to use NCache for your ASP.NET Session State storage

   <assemblies>
      <add assembly="Alachisoft.NCache.SessionStoreProvider,
           Version=x.x.x.x, Culture=neutral,
           PublicKeyToken=CFF5926ED6A53769"/>
   </assemblies>

<sessionState cookieless="false" regenerateExpiredSessionId="true" mode="Custom"
              customProvider="NCacheSessionProvider" timeout="20">

   <providers>
      <add name="NCacheSessionProvider"
           type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
           sessionAppId="NCacheTest" cacheName= "London_Cache"
           writeExceptionsToEventLog="false" />
   </providers>
</sessionState>

Additionally, you need the location affinity configuration below for Azure multi-region ASP.NET Session State support.

<configSections>
     <section name="ncache" type="Alachisoft.NCache.Web.SessionStateManagement.NCacheSection,
              Alachisoft.NCache.SessionStateManagement,
              Version=x.x.x.x, Culture=neutral, PublicKeyToken=CFF5926ED6A53769"/>
</configSections>

<ncache>
   <sessionLocation>
      <primaryCache id="London_Cache" sid-prefix="LDC"/>
      <secondaryCache id="NewYork_Cache" sid-prefix="NYC"/>
      <secondaryCache id="Tokyo_Cache" sid-prefix="TKC"/>
   </sessionLocation>
</ncache>

Please note that the <ncache> section in each Azure region will be different, meaning each region will have its own “primaryCache” and will define all other region caches as “secondaryCache”.

All ASP.NET sessions originating from a Microsoft Azure region are initially stored in the primary cache of that region. However, when a request from another Microsoft Azure region is redirected to the current region, the NCache for Azure multi-region SSP module detects that the ASP.NET Session State resides in one of the other regions (using the “sid-prefix” attached to the ASP.NET Session ID), automatically contacts the corresponding secondary cache in the remote region, and moves the session to the primary cache of the current region. All subsequent requests are then served from this new location.

Say, for example, you have defined London_Cache as your primary cache while NewYork_Cache and Tokyo_Cache are defined as secondary caches for the London site. You also specify “LDC”, “NYC” and “TKC” as the sid-prefix values attached to session IDs corresponding to London_Cache, NewYork_Cache and Tokyo_Cache sessions respectively. Now, all ASP.NET sessions originating from the London region have “LDC” prepended to their ASP.NET Session IDs and are stored in and served from London_Cache, which is the primary cache for the London region. But, if a request is redirected from another Microsoft Azure region such as New York or Tokyo to the London region, this ASP.NET Session State is immediately identified based on its sid-prefix and is transferred from NewYork_Cache or Tokyo_Cache to London_Cache. Once the ASP.NET Session State has moved to the London region, all subsequent requests are served locally from London_Cache.
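
To make the mechanics concrete, here is a purely illustrative C# sketch (not NCache’s actual implementation, and the session ID value is made up) of how a location prefix on the session ID lets a provider decide whether the session lives in the local primary cache or in a remote region’s cache:

// Illustrative only: the cache names and 3-character prefixes follow the configuration
// shown above. In reality the NCache for Azure SSP module performs this lookup for you.
string sessionId = "NYCa1b2c3d4e5";          // a session ID issued by the New York region
string prefix = sessionId.Substring(0, 3);

string sourceCache;
switch (prefix)
{
    case "LDC": sourceCache = "London_Cache";  break; // originated in London; serve it locally
    case "NYC": sourceCache = "NewYork_Cache"; break; // fetch from New York, then move to London_Cache
    case "TKC": sourceCache = "Tokyo_Cache";   break; // fetch from Tokyo, then move to London_Cache
    default:    sourceCache = "London_Cache";  break;
}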

Conclusion:

NCache for Azure multi-region ASP.NET Session State support allows you to have your ASP.NET applications deployed in two or more active Microsoft Azure regions and be able to redirect traffic between Microsoft Azure regions without impacting performance or causing any application downtime. You can seamlessly redirect requests between Microsoft Azure regions to handle traffic overflows and site maintenance.

Download NCache for Azure Trial | NCache for Azure Details


Why Distributed Cache as NHibernate Second Level Cache?

NHibernate is a very popular object-relational mapping (ORM) solution for .NET applications because it simplifies database programming. Many applications that use NHibernate are high traffic in nature and therefore face scalability bottlenecks in the database.

To tackle this, NHibernate provides a caching infrastructure so that applications can use an in-memory cache store instead and the database is no longer exhausted by such a high request load. NHibernate caching includes a first level cache and a second level cache.

NHibernate First Level Cache and its Limitations

The NHibernate First Level (1st level) Cache is a basic standalone (in-proc) cache which is associated with the Session object and is limited to the current session only. The 1st level cache is used by default to reduce the number of SQL queries issued against the database per session. But, this 1st level cache has a number of limitations:

  • Each process has its own 1st level cache that is not synchronized with other 1st level caches, so data integrity cannot be maintained.
  • Cache size is limited to the process memory and cannot scale.
  • Worker process recycling causes the cache to be flushed. Then, you have to reload it, which reduces your application performance.

Solution: A Distributed Cache for NHibernate

The NHibernate Second Level Cache exists at the Session Factory level, which means multiple user sessions can access a shared cache. Additionally, the NHibernate 2nd level cache has a pluggable architecture, so you can plug a third-party distributed cache into it without any programming. NCache has implemented an NHibernate Second Level Cache provider, and you can use it as your distributed cache for NHibernate without any code changes.

The following example shows how to use NCache in your NHibernate application as its 2nd level cache.

<hibernate-configuration>
    <session-factory>
        <property name="cache.provider_class">Alachisoft.NCache.Integrations.NHibernate.Cache.NCacheProvider, Alachisoft.NCache.Integrations.NHibernate.Cache
        </property>
        <property name="proxyfactory.factory_class"> NHibernate.Bytecode.DefaultProxyFactoryFactory, NHibernate </property>
        <property name="cache.use_second_level_cache">true
        </property>
    </session-factory>
</hibernate-configuration>

<ncache>
    <cache-regions>
        <region name="AbsoluteExpirationRegion" cache-name="myCache" priority="Default" expiration-type="sliding" expiration-period="180" use-async="false" />
        <region name="default" cache-name="myCache" priority="default" expiration-type="none" expiration-period="0" />
    </cache-regions>
</ncache>
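
Once the provider is plugged in, NHibernate itself decides when to consult the second level cache. As a rough usage sketch (the Product entity and its mapping are assumptions, and caching query results additionally requires the cache.use_query_cache property to be enabled), entities and queries are marked cacheable like this:

using NHibernate;
using NHibernate.Cfg;

ISessionFactory factory = new Configuration().Configure().BuildSessionFactory();

using (ISession session = factory.OpenSession())
{
    // Entities mapped with <cache usage="read-write"/> go through the 2nd level cache
    var product = session.Get<Product>(1001);

    // Query results can be cached too, in the region configured above
    var lowStock = session
        .CreateQuery("from Product p where p.UnitsInStock < :units")
        .SetInt32("units", 100)
        .SetCacheable(true)
        .SetCacheRegion("default")
        .List<Product>();
}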

Here are some important benefits of NCache as NHibernate 2nd level cache.

  • NCache synchronizes across processes & servers: NCache is a shared cache across multiple processes and servers. This ensures that the cache is always consistent across multiple servers and that all cache updates are synchronized correctly, so no data integrity issues arise.
  • Scalable cache size: NCache pools the memory of all cache servers together, so you can grow the cache size simply by adding more servers to the cache cluster. This means your cache can grow to hundreds of gigabytes and even terabytes.
  • Application process recycling does not affect the cache: Your application processes become stateless because all the data is kept in an out-proc distributed cache. So, application process recycling has no impact on the cache.
  • Linear scalability to handle larger transaction loads: You can handle larger transaction loads without worrying about your database becoming a bottleneck. This is because the distributed cache scales linearly and reduces your database traffic by as much as 90%.
  • Use Client Cache to keep data in-proc: NCache provides a Client Cache, which is a synchronized local cache that can also run in-proc within your worker process. You can use it to gain even more performance in your NHibernate applications.

As you can see, a distributed cache like NCache enables you to use NHibernate and still run your application in a multi-server configuration. You can also scale your application and handle high transaction loads by using NCache as NHibernate Second Level Cache.

Download a fully working 60-day trial of NCache Enterprise and try it out for yourself.


How to Replicate a Distributed Cache across the WAN

Web applications today are handling millions of users a day. And, even with this high traffic, you have to maintain high performance for your users. To achieve this performance goal, many people are using an in-memory distributed cache because it is faster than going to the database. A distributed cache also provides scalability by allowing you to add cache servers at runtime as you need to handle a higher transaction load.

This is all good. But, you may need to deploy and run your web applications in multiple data centers. You may do this for a variety of reasons. One is to have a disaster recovery option. In this option, you have one active and one passive data center. If the active data center goes down, the passive data center quickly picks up the traffic without causing any interruptions for the users.

Another reason is to locate data centers closer to your customers especially if they are geographically dispersed. This greatly speeds up the application response time for your users. In this option, you may have two or more active data centers rather than an active-passive data center.

If you have multiple data centers, either for handling region-specific traffic or for disaster recovery purposes, then you need to replicate your cache across all data centers. And, this replication across the WAN would be very slow due to latency unless you do it asynchronously.


NCache is an extremely fast in-memory distributed cache that also provides WAN replication for your multi-datacenter needs. With NCache, there is no performance degradation while replicating your cache across the WAN because NCache does it asynchronously.

The NCache Bridge Topology maintains an in-memory mirrored queue which holds all the cache updates. The Bridge then applies the same updates to one or more target caches, whether they’re active or passive. You can read more about the Bridge Topology at WAN Replication Topologies (Bridge).

Now, let’s see how to handle WAN replication of your distributed cache for different scenarios with NCache.

Active-passive datacenters: The NCache Bridge Topology can be configured for active-passive replication. The active distributed cache submits cache changes to the Bridge so it can apply them to the passive datacenter cache.

Active-active datacenters: In this scenario, the Bridge also has to handle the case where both datacenters update the same data, because both are active. This is called conflict resolution and is discussed later in this blog.

One active, multiple passive datacenters: In this scenario, the active distributed cache submits its changes to the Bridge, and from there all changes are replicated to all the other backup data centers. No conflict can occur here because all changes flow one way, from the one active distributed cache to the passive ones.

Three or more active data centers: In this scenario, there is one central active data center where the Bridge is located. Then, all changes from all data centers are submitted to this central Bridge which then propagates them to all the other active data centers. And, because there are multiple active data centers, you can have conflicts when the same cached item is updated in multiple data centers simultaneously. The Bridge is responsible for handling these conflicts and making sure all data centers are updated with the same data.

You can use any Bridge Topology according to your needs. For example, you may be storing ASP.NET Session State in your distributed cache, and these sessions need to be replicated across the WAN to the other data center.

When the same data element/item is updated in multiple distributed caches and submitted to the Bridge for replication, a conflict occurs. To handle conflicts, NCache adds a timestamp to every operation before submitting it to the Bridge. This helps the Bridge determine which operation was performed last for the “last update wins” rule.

NCache also provides you the flexibility to implement your own custom conflict resolver and plug it in with your distributed cache.

Here is an example of a conflict resolver:


public class BridgeConflictResolverImpl : IBridgeConflictResolver
{
   private bool _enableLogging;
   private TextWriter _textLogger;

   // to initialize any parameters if needed.
   public void Init(System.Collections.IDictionary parameters) { ... }

   // To resolve entry on the basis of old and new entries.
   public ConflictResolution Resolve(ProviderBridgeItem oldEntry,
                                     ProviderBridgeItem newEntry)
   {
      ConflictResolution conflictResolution =
                                        new ConflictResolution();
      switch (oldEntry.BridgeItemVersion)
      {
         case BridgeItemVersion.OLD:
              conflictResolution.ResolutionAction =
              ResolutionAction.ReplaceWithNewEntry;
              break;
         case BridgeItemVersion.LATEST:
              conflictResolution.ResolutionAction =
              ResolutionAction.KeepOldEntry;
              break;
         case BridgeItemVersion.SAME:
              if (oldEntry.OpCode == BridgeItemOpCodes.Remove)
              {
                    conflictResolution.ResolutionAction =
                    ResolutionAction.ReplaceWithNewEntry;
              }
              else
              {
                    conflictResolution.ResolutionAction =
                    ResolutionAction.KeepOldEntry;
              }
              break;
      }

      return conflictResolution;
   }

   // To dispose parameters if needed.
   public void Dispose() {...}
}

As you can see, the custom conflict resolver allows you to make content driven decisions about which update should “win”.

In summary, the Bridge Topology allows you to run your applications in multiple data centers without having to worry about your cache getting out of sync with the other location. This ensures that your application will always be up even if one data center goes down or you will be able to reroute traffic to the other data center if one datacenter gets overwhelmed with it.

Download NCache Trial | NCache Details


How to Handle ASP.NET Session State Storage in Multi-Site Deployments

ASP.NET has become really popular for developing high traffic web applications. And, many of these applications are deployed to multiple geographical locations. This is done either for disaster recovery purposes or for handling regional traffic by having the ASP.NET application closer to the end user.

In case of disaster recovery, there is usually one active site and one passive site. The passive site becomes active as soon as the active site goes down for any reason. In other cases, two or more sites can all be active but handling traffic closer to their region (e.g. New York, London, and Tokyo).

ASP.NET keeps user-specific information in the ASP.NET Session State object on the web server. This ASP.NET Session State is created when the user first uses the ASP.NET application and stays active as long as the user is actively using the application. And, by default, after 20 minutes of inactivity by the user, ASP.NET expires this session.

ASP.NET Session State object is either stored in-memory on the web server (InProc), in-memory on any server (StateServer), in a SQL Server database, or in a third-party store by using ASP.NET Session State Provider (SSP) architecture. The third-party option usually is an in-memory distributed cache.

An in-memory distributed cache like NCache is a great place to store ASP.NET Session State. The reasons are faster performance, greater scalability, and better reliability of ASP.NET Session State due to session replication. Below is an example of how you can specify a “custom” session storage option in the web.config file, which results in NCache being used as the ASP.NET Session State storage:

<sessionState cookieless="false" regenerateExpiredSessionId="true"
              mode="Custom" customProvider="NCacheSessionProvider" timeout="20">
   <providers>
      <add name="NCacheSessionProvider"
           type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
           useInProc="false" cacheName="myReplicatedCache"/>
   </providers>
</sessionState>

Read more about NCache support for ASP.NET Session State storage.

Active-Passive Multi-Site Deployment

But, if your application is running in a multi-site deployment, then the question you have to address is what to do about ASP.NET Session State storage.

If your ASP.NET application is deployed in an active-passive multi-site configuration, then all your ASP.NET Session State is being created and stored in the active site. And, the passive site does not have any session data. So, if the active site goes down and all the traffic is redirected to the passive site, then all users will suddenly lose their sessions and will have to start over.


You can avoid this by using NCache Bridge Topology. Once you create a Bridge between the active and the passive sites, the active site starts submitting all adds, updates, and removes of ASP.NET Session State objects to the Bridge. And, the Bridge asynchronously replicates them across the WAN to the passive site. The nice thing about the Bridge architecture is that it doesn’t block the active site operations at all so you don’t see any performance degradation because of it.

The only issue you have to keep in mind is that since the Bridge replicates asynchronously, there may be some sessions in the “replication queue” that won’t make it to the passive site if the active site abruptly shuts down. But, this is usually a very small number.

Read more about NCache Bridge Topology and all the situations where you can use it.

Active-Active Multi-Site Deployment

If your ASP.NET application is deployed to two or more active sites simultaneously then I recommend that you not replicate ASP.NET Session State to all the sites to save on bandwidth cost.

Additionally, most likely, each of your active sites is geographically separated, and it might make sense to keep each region’s traffic entirely on its own site. However, you probably want the ability to route some of the traffic to other sites to handle overflow situations. Or, you may need to bring one of the sites down for maintenance without any interruption for the users.

In this case, you can use the multi-site ASP.NET Session State Storage feature in NCache that lets you handle these cases. It lets you specify in web.config to generate session-ids with a location prefix for this session’s “primary” site. Then, even if this session request is routed to another site, that site knows where to find this session.

In this approach, the sessions do not move from their primary location even if the user request is routed to the other site. But, the other site is able to access this session from its “primary” site. Actually, each site specifies its “primary” and all others as “secondary” sites.

Below are the steps you take to achieve this goal.

  1. Add a <machineKey> entry in web.config. It is required to generate the ASP.NET session-id in the same manner on all servers and sites. You can use the genmackeys utility available with the NCache installation to generate the keys.

<machineKey validationKey="A01D6E0D1A5D2A22E0854CA612FE5C5EC4AECF24"
            decryptionKey="ACD8EBF87C4C8937" validation="SHA1"/>
  2. To enable location affinity of a session-id, add the configuration mentioned below.

<configSections>
   <section name="ncache"
            type="Alachisoft.NCache.Web.SessionStateManagement.NCacheSection,
                  Alachisoft.NCache.SessionStateManagement, Version=4.1.0.0,
                  Culture=neutral, PublicKeyToken=CFF5926ED6A53769"/>
</configSections>

<ncache>
   <sessionLocation>
      <primaryCache id="LondonCache" sid-prefix="LND"/>
      <secondaryCache id="NewYorkCache" sid-prefix="NYC"/>
      <secondaryCache id="TokyoCache" sid-prefix="TOK"/>
   </sessionLocation>
</ncache>
  3. Specify the custom session-id manager using the sessionIDManagerType attribute of the sessionState element in web.config.

<sessionState cookieless="false" regenerateExpiredSessionId="true"
              mode="Custom" customProvider="NCacheSessionProvider" timeout="60"
              sessionIDManagerType="Alachisoft.NCache.Web.SessionStateManagement.CustomSessionIdManager,
                                    Alachisoft.NCache.SessionStateManagement">
   <providers>
      <add name="NCacheSessionProvider"
           type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
           cacheName="myCache" />
   </providers>
</sessionState>

Please note that in the above example, the <ncache> section in each site will be different, meaning each site will have its own “primary” and will consider all other sites as “secondary”.

In the above example, you can see that ASP.NET is being asked to generate session-ids in a specific format so the sid-prefix can be prepended to the session-id. Once this is done, the provider knows where this ASP.NET Session State was actually created, so the cache for that site is accessed.

With this configuration, if you route any requests from one site to another for overflow, the other site fetches the ASP.NET Session State from its original “primary” site because this is part of the session-id as a location prefix. So, your WAN bandwidth consumption is minimized and you only pay for overflow traffic.

The other situation is where you want to bring a site down. In this case, just redirect all of its traffic to other sites but don’t shut down the cache servers of this site; you can shut down the web servers. And, then wait for all existing ASP.NET Session State objects to expire after their users have stopped using the application. Once this is done, just shut down the cache servers as well. With this, your users will not feel any downtime.

Take a look at how NCache helps you achieve this goal. Download a fully working 60-day trial of NCache.

Download NCache Trial | NCache Details


How to Use Custom Dependency in Distributed Cache?

Today, web applications are increasingly using a distributed cache to boost performance and scalability by caching frequently used data and thereby reducing expensive database trips. A distributed cache spans multiple cache servers and keeps them synchronized, letting you scale in a linear fashion.

A good distributed cache usually has a Cache Dependency feature to let you expire cached items when something they depend on changes. A Cache Dependency can be key-based, file-based, or database-based. So, in essence, you can make a cached item dependent on another item in the cache (key-based), a file in the file system (file-based), or a row or a dataset in a SQL Server database (database-based). When data in any of these sources changes, your cached item is automatically removed from the cache because the “dependency has expired”. This keeps your cached data fresh and correct at all times.
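
For reference, this is roughly what key-based and file-based dependencies look like with NCache’s ASP.NET-style dependency API. The Add overload matches the one used in the custom dependency example later in this post, but the order and pricingTable objects are assumptions and the exact CacheDependency constructor overloads may vary by NCache version, so treat this as a sketch:

using Alachisoft.NCache.Web.Caching;
using Alachisoft.NCache.Runtime.Dependencies;

Cache cache = NCache.InitializeCache("myCache");

// Key-based: "Order:1" is removed whenever the "Customer:ALFKI" item changes
cache.Add("Order:1", order,
    new CacheDependency(null, new string[] { "Customer:ALFKI" }),
    Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
    Alachisoft.NCache.Runtime.CacheItemPriority.Default);

// File-based: "Config:Pricing" is removed when the file on disk is modified
cache.Add("Config:Pricing", pricingTable,
    new CacheDependency(@"C:\data\pricing.xml"),
    Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
    Alachisoft.NCache.Runtime.CacheItemPriority.Default);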

This is all well and good, but what if you want your cached items to be dependent on data in sources other than the ones mentioned above? For example, you might have an RSS (Rich Site Summary) feed that reports data changes. And, you have your own program to read this feed and want to expire certain cached items based on whatever data changes you see in the RSS feed. There are many other similar situations where the data source is “custom”.

So, to handle these situations, a good distributed cache should provide you the flexibility to implement your own Custom Cache Dependency for your cached items so they can be expired when data in your custom data source changes.


NCache provides such a Custom Cache Dependency feature. NCache is a powerful distributed cache for all kinds of .NET and Java applications, and it lets you implement your own custom dependencies. Let me demonstrate below how easily you can implement a custom dependency with NCache. Here are the steps you have to take:

  1. Add a using Alachisoft.NCache.Runtime.Dependencies; reference in your custom dependency implementation.
  2. NCache provides an extensible abstract class named ExtensibleDependency which is the base class of all dependencies. You just have to inherit your custom dependency class from ExtensibleDependency and override its HasChanged property. When this property returns true, the dependent item is expired from the cache.

Here is a full example of a custom dependency implementation in which a dependency change is triggered if the available units of a specified product fall below 100.

using System;
using System.Data.OleDb;
using System.Globalization;
using Alachisoft.NCache.Runtime.Dependencies;

[Serializable]
public class CustomDependency : ExtensibleDependency
{
   private string _connString;
   private int _productID;

   public CustomDependency(int productID, string connStr)
   {
      _connString = connStr;
      _productID = productID;
   }

   public override bool Initialize() { return false; }

   // The dependency is considered "changed" once available units drop below 100
   internal bool DetermineExpiration()
   {
      return GetAvailableUnits(_productID) < 100;
   }

   internal int GetAvailableUnits(int productID)
   {
      int availableUnits = -1;
      try
      {
         using (OleDbConnection connection = new OleDbConnection(_connString))
         {
            connection.Open();

            OleDbCommand cmd = connection.CreateCommand();
            cmd.CommandText = String.Format(CultureInfo.InvariantCulture,
                "Select UnitsInStock From Products where ProductID = {0}",
                productID);

            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
               if (reader.Read())
               {
                  availableUnits =
                     Convert.ToInt32(reader["UnitsInStock"].ToString());
               }
            }
         }
         return availableUnits;
      }
      catch (Exception)
      {
         return availableUnits;
      }
   }

   public override bool HasChanged
   {
      get { return DetermineExpiration(); }
   }
}
  3. Once you have implemented your custom dependency and deployed it with the NCache service, all you need to do is register this dependency with the dependent cache items in your application wherever needed.
using Alachisoft.NCache.Web.Caching; //add namespace

//Add the following code in your application
Cache _cache = NCache.InitializeCache("myCache");

string connString =
       "Provider=SQLOLEDB;Data Source=localhost;" +
       "User ID=sa;password=;Initial Catalog=Northwind";

CustomDependency hint = new CustomDependency(123, connString);

_cache.Add("Product:1001", "Value", new CacheDependency(hint),
       Cache.NoAbsoluteExpiration, new TimeSpan(0, 0, 10),
       Alachisoft.NCache.Runtime.CacheItemPriority.Default);

Now, when data in your custom data source changes, NCache automatically expires the dependent cached items from the cache. NCache is responsible for running your custom dependency code, so you don’t need to implement your own separate program and host it in some reliable process. Try it out and explore it for your application-specific scenarios.

Download NCache Trial | NCache Details


How to Configure .NET 4.0 Cache to use a Distributed Cache?


In-memory distributed cache today has become really popular for applications running in a multi-server environment because it helps improve application scalability and performance.

Until .NET Framework 3.5, the ASP.NET Cache object was available only for web applications under the System.Web.Caching namespace. But, in .NET Framework 4.0, the .NET 4.0 Cache was added under the System.Runtime.Caching namespace for all types of .NET applications. The .NET 4.0 Cache has functionality similar to ASP.NET Cache. But, unlike ASP.NET Cache, it has an abstract class ObjectCache that can be implemented in a customized way as needed. So, in essence, the .NET 4.0 Cache can be extended whereas ASP.NET Cache cannot. And, MemoryCache is the default in-memory cache implementation of the .NET 4.0 Cache. Here is an example:

private static ObjectCache cache = MemoryCache.Default;
private CacheItemPolicy policy = null;
private CacheEntryRemovedCallback callback = null;

// Registering callbacks and policies…
callback = new CacheEntryRemovedCallback(this.MyCachedItemRemovedCallback);

policy = new CacheItemPolicy();
policy.Priority = (MyCacheItemPriority == MyCachePriority.Default) ?
                  CacheItemPriority.Default : CacheItemPriority.NotRemovable;
policy.RemovedCallback = callback;

// HostFileChangeMonitor watches a list of file paths for changes
HostFileChangeMonitor changeMonitor =
    new HostFileChangeMonitor(new List<string> { FilePath });
policy.ChangeMonitors.Add(changeMonitor);

// Add inside cache…
cache.Set(CacheKeyName, CacheItem, policy);

One limitation of the .NET 4.0 Cache’s default implementation, MemoryCache, is that it is a stand-alone in-process cache. And, if your .NET application runs in a multi-server environment, then you cannot use it because you need a distributed cache that can synchronize the cache across multiple servers. Fortunately, the .NET 4.0 Cache architecture allows us to plug in a third-party distributed cache solution and extend it.


To address this need, Alachisoft has implemented an easy to use .NET 4.0 Cache provider that solves data synchronization, distribution and scalability issues, especially in the case of a web farm/garden. This provider basically integrates NCache with the .NET 4.0 Cache. NCache is a very popular enterprise-level distributed cache for .NET.

Through NCache’s .NET 4.0 Cache Provider you can plug in NCache with your application to achieve the benefits of a distributed cache.

Let me show you how easily it can be done with NCache in a few steps.

  1. Create a clustered (distributed) cache through the GUI-based NCache Manager. I created a clustered cache named “MyClusterCache”.
  2. Start the cache to make it ready to use.
  3. Add a reference to the Alachisoft.NCache.ObjectCacheProvider library in your application from “NCacheInstallDir/NCache/integration/DotNet4.0 Cache Provider”.
  4. Include the following namespace in your project.
    using Alachisoft.NCache.ObjectCacheProvider;
    
  5. Initialize your CacheProvider (inherited from ObjectCache) and pass your cache name to the provider as shown below.
    ObjectCache _cache;
    string _cacheId = "MyClusterCache" ;
    _cache = new CacheProvider(_cacheId);
    
    
  6. Now you can perform all cache related operations on your cache using CacheProvider commands.

Here is a full example of the .NET 4.0 Cache extended with NCache:


ObjectCache _cache;
string _cacheId = "MyClusterCache";

// Initialize with NCache’s .NET 4.0 Cache Provider.
_cache = new CacheProvider(_cacheId);

// Registering callbacks and policies…
NCacheFileChangeMonitor changeMonitor =
    new NCacheFileChangeMonitor(fileNames);

CacheItemPolicy ciPolicy = new CacheItemPolicy();
ciPolicy.ChangeMonitors.Add(changeMonitor);
ciPolicy.RemovedCallback +=
    new CacheEntryRemovedCallback(onCacheEntryRemoved);

//Add the dependent items in the cache.
_cache.AddItems(ciPolicy, 0, totalKeys);

NCache’s implementation of the .NET 4.0 Cache also includes custom ChangeMonitor implementations (NCacheEntryChangeMonitor, NCacheFileChangeMonitor, NCacheSqlChangeMonitor and NCacheOracleChangeMonitor) for entry, file, SQL and Oracle based changes respectively.

Through NCache’s implementation of .NET 4.0 Cache interface, you can now adopt .NET 4.0 Cache as your standard and at the same time benefit from an enterprise level distributed cache for your .NET applications running in a multi-server environment.

Download NCache Trial | NCache Details


How to Cache ASP.NET View State in a Distributed Cache?

ASP.NET today is widely used for high traffic web applications that need to handle millions of users and are deployed in load-balanced web farms. One important part of ASP.NET that many applications use is View State. ASP.NET View State is a very powerful mechanism that stores page, control and custom values between multiple HTTP requests across the client and the web server.

ASP.NET View State values are stored in a hidden field on the page and encoded as a Base64 string. An ASP.NET View State looks like this:

<input id="__VIEWSTATE" type="hidden"
name="__VIEWSTATE"
value="/wEPDwUJNzg0MDMxMDA1D2QWAmYPZBYCZg9kF..." />

Although very useful, ASP.NET View State frequently becomes quite large, especially when your ASP.NET application has grid controls and many other controls on its pages. This View State is added to every HTTP request and response, and that really slows down your ASP.NET response time to unbearable levels.

Another downside of a heavy ASP.NET View State is increased bandwidth usage, which raises your bandwidth cost considerably. For example, if each HTTP request ends up carrying 60-100k of additional ASP.NET View State data, just multiply that by the total number of transactions and you’ll quickly see how much more it costs you in bandwidth consumption.

Finally, in some situations, there is a security risk with sending confidential data as part of ASP.NET View State. And, encrypting it before sending is also costly.


To resolve all these problems, you can cache the ASP.NET View State on the web server and assign a GUID as its key in the cache. This GUID is then sent to the browser in a hidden field and comes back with the next HTTP request, where it is used to fetch the corresponding ASP.NET View State from the cache. This reduces the payload sent to the browser, which not only improves ASP.NET response time but also reduces your bandwidth consumption cost dramatically.
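
Conceptually, the pattern works like the sketch below. This is not the NCache View State module (which, as shown next, needs only configuration changes); it is just an illustration of the GUID-in-a-hidden-field idea, and ViewStateCache is a stand-in for whatever server-side cache you use:

public class CachedViewStatePage : System.Web.UI.Page
{
    private const string HiddenFieldName = "__VIEWSTATE_KEY";

    protected override void SavePageStateToPersistenceMedium(object state)
    {
        string key = Guid.NewGuid().ToString();

        var writer = new System.IO.StringWriter();
        new System.Web.UI.LosFormatter().Serialize(writer, state);

        ViewStateCache.Insert(key, writer.ToString());           // server-side cache, assumed helper
        ClientScript.RegisterHiddenField(HiddenFieldName, key);  // only the GUID goes to the browser
    }

    protected override object LoadPageStateFromPersistenceMedium()
    {
        string key = Request.Form[HiddenFieldName];
        string serialized = ViewStateCache.Get(key);             // fetch the stored View State by its GUID
        return new System.Web.UI.LosFormatter().Deserialize(serialized);
    }
}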

And, if your ASP.NET application is running in a load-balanced web farm, then you must use a distributed cache; a stand-alone cache like ASP.NET Cache won’t work. NCache is an enterprise-level distributed cache that provides an ASP.NET View State caching module. To use it, you don’t need to do any programming. Just modify your ASP.NET web.config for it.

Here are the steps to use NCache for caching ASP.NET View State:

  1. Create an app.browser file in your ASP.NET application. Create it under the App_browsers directory. Plug the page adapters into the app.browser file as follows:
    <browser refID="Default">
       <controlAdapters>
          <adapter controlType="System.Web.UI.Page"
                   adapterType="Alachisoft.NCache.Adapters.PageAdapter"/>
       </controlAdapters>
    </browser>
    
  2. Then add the following assembly reference in the compilation section of the web.config file.

    <compilation defaultLanguage="C#" debug="true">
       <assemblies>
          <add assembly="Alachisoft.NCache.Adapters, Version=1.0.0.0,
               Culture=neutral, PublicKeyToken=cff5926ed6a53769"/>
       </assemblies>
    </compilation>
    
  3. Register your config section in the web.config file.
    <configSections>
       <sectionGroup name="ncContentOptimization">
          <section name="settings"
                   type="Alachisoft.NCache.ContentOptimization.Configurations.ContentSettings"
                   allowLocation="true" allowDefinition="Everywhere"/>
       </sectionGroup>
    </configSections>
    
    
  4. Specify settings for your config section in the web.config file (registered above).
    <ncContentOptimization>
       <settings enableMinification="true"
                 enableViewstateCaching="true"
                 groupedViewStateWithSessions="true"
                 viewstateThreshold="0"
                 enableTrace="true">
          <cacheSettings cacheName="mycache">
             <expiration type="Sliding" duration="300"/>
          </cacheSettings>
       </settings>
    </ncContentOptimization>
    
  5. In the end, register the HTTP handler in the httpHandlers section of web.config as follows:
    <add verb="GET,HEAD" path="NCResource.axd" validate="false"
         type="Alachisoft.NCache.Adapters.ContentOptimization.ResourceHandler,
               Alachisoft.NCache.Adapters, Version=1.0.0.0,
               Culture=neutral, PublicKeyToken=cff5926ed6a53769"/>
    

After configuring NCache, you can see the ASP.NET View State tag in your application as:

<input type="hidden"
       name="__NCPVIEWSTATE"
       id="__NCPVIEWSTATE"
       value="vs:cf8c8d3927ad4c1a84da7f891bb89185" />
<input type="hidden"
       name="__VIEWSTATE" id="__VIEWSTATE" value="" />

Notice that another hidden tag has been added alongside the ASP.NET View State. It contains the unique key assigned to your page’s ASP.NET View State in the distributed cache. So whenever your application server needs the ASP.NET View State value, it can easily get it from the cache.

With this, you will see a remarkable performance boost in your ASP.NET response times and your bandwidth consumption cost is reduced significantly as well.

Please explore more about ASP.NET View State caching by trying NCache ASP.NET View State module yourself.

Download NCache Trial | NCache Details


How to Configure Preloading in a Distributed Cache?

Today’s applications need to scale and handle extreme levels of transaction loads. But, databases are unable to scale and therefore become a bottleneck. To resolve this, many people are turning to in-memory distributed cache because it scales linearly and removes the database bottlenecks.

A typical distributed cache contains two types of data: transactional data and reference data. Transactional data changes very frequently and is therefore cached for a very short time. But, even so, caching it provides a considerable boost to performance and scalability.

Reference data on the other hand does not change very frequently. It may be static data or dynamic data that changes perhaps every hour, day, etc. And, at times, this data can be huge (in gigabytes).

So, it would be really nice if this reference data could be preloaded into a distributed cache upon cache start-up, because then your applications would not need to load it at runtime. Loading reference data at runtime would slow down your application performance especially if it is a lot of data.


So, how should we preload this reference data into a distributed cache? One approach is to design our application in such a way that during application startup, it fetches all the required reference data from the database and puts it in the distributed cache.

But, this approach raises some other issues. First, it slows down your application startup because your application is now involved in preloading the cache. Second, if you have multiple applications sharing a common distributed cache, then you either have code duplication in each application or all your applications depend on one application preloading the distributed cache. And, finally, embedding cache preloading code inside your application corrupts your application design because you’re adding code that does not belong in your application. Of course, neither of these situations is very desirable.

What if we give this preloading responsibility to the distributed cache itself? In this case, preloading could be part of the cache startup process and therefore does not involve your application at all. You can configure the cache to preload all the required data upon startup so it is available for all the applications to use from the beginning. This simplifies your application because it no longer has to worry about preloading logic.

NCache provides a very powerful and flexible cache preloading capability. You can develop a cache loader and register it with NCache and then NCache calls this custom code developed by you upon cache startup. Let me demonstrate how to do this below.

  • Implement a simple interface named ICacheLoader. It is called by the cache to answer the question “how, and which, data should be loaded?”
class CacheLoader : ICacheLoader
{
    public void Init(System.Collections.IDictionary parameters)
    {
        // Initialization of data source connection, assigning parameters etc.
    }

    public bool LoadNext(
        ref System.Collections.Specialized.OrderedDictionary data,
        ref object index)
    {
        // Fill the ref objects with the data that should be loaded into the cache.
        // Return true while more data remains to load; return false when loading is complete.
        return false;
    }

    public void Dispose()
    {
        // Disposing connections and other required objects.
    }
}

  • The next step is to configure the startup loader implemented above with the cache. You can do this by using the NCache Manager that comes with NCache or simply by adding the following configuration to the cache config.
<cache-loader retries="3" retry-interval="0" enable-loader="True">
   <provider
    assembly-name="TestCacheLoader, Version=1.0.0.0, Culture=neutral,
    PublicKeyToken=null"
    class-name="TestCacheLoader.CacheLoader"
    full-name="TestCacheLoader.dll" />
   <!-- parameters that will be passed to the Init method of the loader -->
   <parameters name="connectionString"
    value="Data Source=.\SQLEXPRESS;Initial Catalog=testdb;Integrated Security=True;Pooling=False"/>
</cache-loader>

Any exception that occurs while the startup loader runs is logged without causing any problems for your application. Simple and effective!

As you can see, NCache provides you a powerful mechanism to preload your distributed cache and keep the performance of your applications always high.

Download NCache Trial | NCache Details


How to use a Distributed Cache for ASP.NET Output Cache

ASP.NET Output Cache is a mechanism provided by Microsoft that allows you to keep an in-memory copy of the rendered content of an ASP.NET page. With it, ASP.NET can serve subsequent user requests for this page from an in-memory cached copy instead of re-executing the page, which can be quite expensive because it usually also involves database calls.

So, ASP.NET Output Cache not only improves your application performance but also reduces a lot of expensive database trips. And, this improves your ASP.NET application scalability because otherwise the database would become a scalability bottleneck if all those ASP.NET pages were executed again and again.

But, the problem with ASP.NET Output Cache is that it resides in your ASP.NET worker process address space. The worker process resets or recycles quite frequently, and when that happens, all of the ASP.NET Output Cache is lost. Secondly, in the case of a web garden (meaning multiple worker processes), the same page output is cached multiple times, once in each worker process, and this consumes a lot of extra memory. And, did you know that one of the most common support calls for ASP.NET is about out-of-memory errors caused by ASP.NET Output Cache?


So, to overcome these limitations of ASP.NET Output Cache while keeping its benefits, try using a distributed cache to store all ASP.NET Output Cache content. For this, NCache has implemented an ASP.NET Output Cache provider that caches ASP.NET rendered output in an out-of-process (out-proc) cache instead of the worker process address space. This way, the rendered output of your ASP.NET page is available to all other web servers in the web farm without each worker process rendering the same ASP.NET page locally.

By using NCache as the ASP.NET Output Cache provider you can not only cache more data in the out-proc cache but also dramatically reduce the load on your database. This is because each rendered ASP.NET page output is accessible to all web servers in the web farm without executing the page rendering process in each worker process, which involves expensive database trips.

Further, NCache as the ASP.NET Output Cache provider gives you the flexibility to cache the output of certain parts of your ASP.NET page instead of the complete page. This is very helpful in scenarios where you want certain parts of your ASP.NET page to be rendered on every request. In addition, NCache also provides high availability: even if your worker process resets or recycles, your data is not lost because it is not part of the worker process address space and resides on separate caching servers.

Here are the steps to configure NCache output caching provider:

1. Register NCache as ASP.NET Output Cache Provider: Modify your ASP.NET application's web.config to register the NCache output caching provider as follows:


<caching>
  <outputCache defaultProvider="NOutputCacheProvider">
    <providers>
      <add name="NOutputCacheProvider"
           type="NCOutputCache.NOutputCacheProvider"
           exceptionsEnabled="true" enableLogs="false"
           cacheName="mypartitionofReplicaCache" />
    </providers>
  </outputCache>
</caching>

<compilation debug="true" targetFramework="4.0">
   <assemblies>
      <add assembly="Alachisoft.NCache.OutputCache,
           Version=4.1.0.0, Culture=neutral"/>
   </assemblies>
</compilation>

2. Add ASP.NET Output Cache tag: Add the following OutputCache directive to the pages whose output you want to cache:


<%@ OutputCache VaryByParam="ID" Duration="300" %>
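
For the partial-page scenario mentioned earlier, standard ASP.NET fragment caching puts the OutputCache directive in a user control (.ascx) rather than in the page itself, so that only that fragment is cached while the rest of the page renders on every request. A minimal sketch is below; the control name is hypothetical, and whether fragment output is routed through the custom provider depends on your ASP.NET version and configuration.

<%-- MyNewsWidget.ascx (hypothetical user control) --%>
<%@ Control Language="C#" ClassName="MyNewsWidget" %>
<%@ OutputCache Duration="120" VaryByParam="None" %>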

By the way, ASP.NET versions earlier than 4.0 do not provide support for custom ASP.NET Output Cache providers. Therefore, to support all earlier versions of ASP.NET, NCache has also implemented another version of its ASP.NET Output Cache provider using an HttpModule. This HttpModule based ASP.NET Output Cache provider enables you to use a distributed cache to store rendered ASP.NET page output even if your application is using an ASP.NET version earlier than 4.0.

In summary, by using the NCache output caching provider you can easily boost your ASP.NET application response time and reduce database load.

Download NCache Trial | NCache Details

How to Share Different Object Versions using Distributed Cache

It is a fact of life that organizations have different applications using different versions of the same classes, because these applications were developed at different points in time and cannot all be upgraded at the same pace. And, it is also a fact of life that these applications often need to share data with each other.

You can do this through the database but that is slow and not scalable. What you really want is to share objects through a distributed cache that is fast and scalable. But, object sharing at runtime immediately raises the issue of version compatibility.

One way to do this is through XML, in which you transform one version of an object into another, but that is extremely slow. Another way is to implement your own custom transformation that takes one object version and transforms it into another. But then you have to maintain that transformation code, which is a lot of effort.

Wouldn’t it be nice if the distributed cache somehow took care of version compatibility for you? Well, NCache does exactly that. NCache provides you a binary-level object transformation between different versions. You can map different versions through an XML configuration file and NCache understands how to transform from one version to another.

Additionally, since NCache stores all these different versions in a binary format (rather than XML), the data size stays compact and small, and therefore fast. In a high traffic environment, object size adds up to a lot of extra bandwidth consumption, which has its own cost associated with it.

Here is an example of NCache config.ncconf with class version mapping:

<cache-config name="myInteropCache" inproc="False" config-id="0"
    last-modified="" type="clustered-cache" auto-start="False">
...
    <data-sharing>
      <type id="1001" handle="Employee" portable="True">
        <attribute-list>
          <attribute name="_Name" type="System.String" order="1"/>
          <attribute name="_Age" type="System.Int32" order="2"/>
          <attribute name="_Address" type="System.String" order="3"/>
          <attribute name="_ID" type="System.String" order="4"/>
          <attribute name="_PostalAddress" type="System.String" order="5"/>
        </attribute-list>
        <class name="DataModel.Employee:1.0.0.0" handle-id="1"
               assembly="DataModel, Version=1.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="_Name" type="System.String" order="1"/>
          <attribute name="_Age" type="System.Int32" order="2"/>
          <attribute name="_Address" type="System.String" order="3"/>
        </class>
        <class name="DataModel.Employee:2.0.0.0" handle-id="2"
               assembly="DataModel, Version=2.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="_ID" type="System.String" order="4"/>
          <attribute name="_Name" type="System.String" order="1"/>
          <attribute name="_Age" type="System.Int32" order="2"/>
          <attribute name="_PostalAddress" type="System.String" order="3"/>
        </class>
      </type>
    </data-sharing>
...
  </cache-config>

So, how does NCache do it? Well, in the config.ncconf file that you see above, you'll notice that the Employee class has a set of attributes defined first. These are version-independent attributes and appear in all versions. This is actually a superset of all attributes that appear in the different versions. Below that, you specify version-specific attributes and map them to the version-independent attributes above.

Let's say that you saved Employee version 1.0.0.0, which has a subset of the attributes of Employee version 2.0.0.0. Now, when another application fetches the same Employee but wants to see it as version 2.0.0.0, NCache knows which version 2.0.0.0 attributes to fill with data and which ones to leave blank.

Secondly, in the above sample config you will notice that Employee version 2.0.0.0 does not have the Address field even though version 1.0.0.0 has it. So, when NCache reads an Employee 1.0.0.0 stored in the cache and transforms it into Employee version 2.0.0.0, it knows not to copy the Address field because it does not exist in the newer version.
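
As a rough illustration of what this looks like from application code, here is a sketch in which one application, compiled against DataModel version 1.0.0.0, stores an Employee, and another application, compiled against version 2.0.0.0, reads it back. The cache API calls follow the other samples in this post; the key, the field values, and the public properties on the Employee class are hypothetical.

// Application A - references DataModel, Version=1.0.0.0
Cache cache = NCache.InitializeCache("myInteropCache");

Employee emp = new Employee();      // v1 Employee: Name, Age, Address
emp.Name = "John Smith";
emp.Age = 35;
emp.Address = "12 Main Street";
cache.Insert("EMP-1000", emp);

// Application B - references DataModel, Version=2.0.0.0
Cache cache2 = NCache.InitializeCache("myInteropCache");
Employee empV2 = (Employee) cache2.Get("EMP-1000");
// Attributes that exist in v1 (Name, Age) are filled in; attributes that
// v1 never had are left blank, as described above.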

There are many other scenarios that NCache handles seamlessly for you. Please read the online product documentation for more detail on this.

Finally, the best part in all of this is that you don’t have to write any serialization code or make any code changes to your application in order to use this NCache feature. NCache has implemented a runtime code generation mechanism, which generates in-memory serialization and deserialization code of your classes at runtime, and uses the compiled form which is very fast.

In summary, using NCache you can now share different object versions between your applications without even modifying your application code.

Download NCache Trial | NCache Details

Query a Distributed Cache with SQL-Like Aggregate Functions

Today, distributed caches are widely used to achieve scalability and performance in high traffic applications. A distributed cache offloads your database servers by serving cached data from an in-memory store. In addition, a few distributed caches also provide a SQL-like query capability with which you can query the distributed cache much as you would query your database, e.g. "SELECT employee WHERE employee.city = 'New York'".

First of all, most distributed caches don't even provide SQL-like querying capabilities, and the few that do offer very limited support for it. They only allow searching the distributed cache with simple criteria. Yet there are several scenarios where you need results based on aggregate functions, e.g. "SELECT COUNT(employee) WHERE salary > 1000" or "SELECT SUM(salary) WHERE employee.city = 'New York'". To achieve this, you have to first query the distributed cache and then calculate the aggregate function over the fetched cache data yourself.

This approach has two major drawbacks. The first is that you have to execute the query on the distributed cache and fetch all the matching data to the cache client. This data may range from megabytes to gigabytes, and the operation becomes even more expensive when you are also paying for the consumed network bandwidth. Moreover, you usually don't need this data once the aggregate calculation is done.

The second drawback is that it involves custom programming for the aggregate function calculation, which adds extra man-hours and still leaves most complex scenarios uncovered. It would be much nicer if you could continue to develop your application for the purpose it is being built for and not worry about designing and implementing these extra features yourself.

These are the reasons why NCache gives you the flexibility to query the distributed cache using aggregate functions like COUNT, SUM, MIN, MAX and AVG as part of its Object Query Language (OQL). Using NCache OQL aggregate functions, you can easily perform the required aggregate calculations inside the distributed cache itself. This approach not only avoids the network traffic but also gives you much better performance for aggregate calculations, because all the selection and calculation is done inside the cache cluster and no network trips are involved.

Here is the code sample to search NCache using OQL aggregate queries:

public void Main(string[] args)
{
    ...
    Cache cache = NCache.InitializeCache("myPartitionReplicaCache");

    string query = "SELECT COUNT(Business.Product) WHERE " +
                   "this.ProductID > ? AND this.Price < ?";

    Hashtable values = new Hashtable();
    values.Add("ProductID", 100);
    values.Add("Price", 50);

    // Run the aggregate query inside the cache cluster
    IDictionary searchResults = cache.SearchEntries(query, values);
    ...
}


To reduce query execution time even further, NCache runs the SQL-like query in parallel by distributing it to all the cache servers, much like a map-reduce mechanism. In addition, you can use NCache OQL aggregate queries in both .NET and Java applications.

In summary, NCache provides you not only scalability and performance but also the flexibility of searching the distributed cache using SQL-like aggregate functions.

Download NCache Trial | NCache Details

Class Versioning in Runtime Data Sharing with Distributed Cache

Today, many organizations use .NET and Java technologies to develop different high traffic applications. At the same time, these applications not only need to share data with each other but also want to support runtime sharing of different versions of the same class, for backward compatibility and cost reduction.

The most common way to support runtime sharing of different class versions between .NET and Java applications is through XML serialization. But, as you know, XML serialization is an extremely slow and resource hungry process. It involves XML validation, parsing and transformations, which really hamper your application performance and use extra resources in terms of memory and CPU.

The other approach widely used to share different class versions between .NET and Java is through the database, but that is slow and also does not scale very well with growing transactional load. Your database quickly becomes a scalability bottleneck because, although you can linearly scale your application tier by adding more application servers, you cannot do the same at the database tier.

Well, this is where a distributed cache like NCache comes in really handy. NCache provides a binary-level object transformation between different versions, not only within the same technology but also across .NET and Java. You can map different versions through an XML configuration file and NCache understands how to transform one version into another.

NCache's class version sharing framework implements a custom interoperable binary serialization protocol that, based on the specified mapping, generates the byte stream in a format that any newer or older version of the same class can easily deserialize, regardless of whether it is written in .NET or Java.

Here is an example of NCache config.ncconf with class version mapping:

<cache-config name="InteropCache" inproc="False" config-id="0" last-modified="" type="local-cache" auto-start="False">
 ...
    <data-sharing>
      <type id="1001" handle="Employee" portable="True">
        <attribute-list>
          <attribute name="Name" type="java.lang.String" order="1"/>
          <attribute name="SSN" type="java.lang.String" order="2"/>
          <attribute name="Age" type="int" order="3"/>
          <attribute name="Address" type="java.lang.String" order="4"/>
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
          <attribute name="Address" type="System.String" order="7"/>
        </attribute-list>
        <class name="com.samples.objectmodel.v1.Employee:1.0" handle-id="1"
               assembly="com.jar" type="java">
          <attribute name="Name" type="java.lang.String" order="5"/>
          <attribute name="SSN" type="java.lang.String" order="2"/>
        </class>
        <class name="com.samples.objectmodel.v2.Employee:2.0" handle-id="2"
               assembly="com.jar" type="java">
          <attribute name="Name" type="java.lang.String" order="5"/>
          <attribute name="Age" type="int" order="6"/>
          <attribute name="Address" type="java.lang.String" order="7"/>
        </class>
        <class name="Samples.ObjectModel.v2.Employee:2.0.0.0" handle-id="3"
   assembly="ObjectModelv2, Version=2.0.0.0, Culture=neutral,
   PublicKeyToken=null" type="net">
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
          <attribute name="Address" type="System.String" order="7"/>
        </class>
        <class name="Samples.ObjectModel.v1.Employee:1.0.0.0" handle-id="4"
               assembly="ObjectModelv1, Version=1.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
        </class>
      </type>
    </data-sharing>
    ...
  </cache-config>

So, how does NCache do it? Well, in the config.ncconf file that you see above, you'll notice that the Employee class has a set of attributes defined first. These are version-independent attributes and appear in all versions of the .NET and Java classes. This is actually a superset of all attributes that appear in the different versions. Below that, you specify version-specific attributes and map them to the version-independent attributes above.

Now, let's say that you saved a .NET Employee version 1.0.0.0. When another application fetches the same Employee but wants to see it as Java version 1.0 or 2.0, NCache knows which attributes of that Java version to fill with data and which ones to leave blank, and vice versa.

There are many other scenarios that NCache handles seamlessly for you. Please read the online product documentation for more detail on this.

Finally, the best part is that you don’t have to write any serialization and deserialization code or make any code changes to your application in order to use this NCache feature. NCache has implemented a runtime code generation mechanism, which generates the in-memory serialization and deserialization code of your interoperable classes at runtime, and uses the compiled form so it is super-fast.

In summary, using NCache you can now share different class versions between your .NET and Java applications without even modifying your application code.

Download NCache Trial | NCache Details

Develop Real Time Applications using Distributed Cache Continuous Query

High traffic real time applications are widely used in enterprise environments. In a real time application, information is made available to you the moment it is produced, and any delay can cause serious financial loss. The main challenge these high traffic real time applications face is getting notified about any changes in their data set so that the corresponding views can be updated.

But, these high traffic real time applications cannot rely on a traditional database because it only supports queries on the data residing in it; to get an updated data set you have to re-execute the query after a specific interval, which is not instantaneous. This periodic polling also causes performance and scalability issues because you are making expensive database trips even when there are no changes in the data set.

SqlDependency is provided by Microsoft in SQL Server, and Oracle on Windows also supports it. SqlDependency allows you to specify a SQL statement, and SQL Server monitors this data set in the database for any additions, updates, or deletions and notifies you when this happens. But the problem with SqlDependency is that once it is fired, it gets unregistered from the database. Therefore, all future changes in your data set are missed and you don't get notified.

Moreover, SqlDependency does not provide details about the record where the change occurred. So, to find the change in the data set, you have to fetch the complete data set again instead of directly fetching only the specific record that was added, updated or removed. And, of course, this is not efficient.

In addition to the limitations of SqlDependency, your database is also unable to cope with the transactional demands of these high traffic real time applications, where tens of thousands of queries are executed every second, and the database quickly becomes a scalability bottleneck. This is because, although you can linearly scale your application tier by adding more application servers, you cannot do the same with your database server.

This is where a distributed cache like NCache comes in, because it allows you to cache data and reduce those expensive database trips that are causing the scalability bottleneck.

NCache has a powerful Continuous Query capability that enables you to register a SQL-like query with the cache cluster. This Continuous Query remains active in the cache cluster, and if there is any change in the data set of this query, NCache notifies your real time application. This approach saves you from periodically executing the same expensive query against the database to poll for changes.

Here is sample code for NCache Continuous Query:

public void Main(string[] args)
{
    ...
    Cache cache = NCache.InitializeCache("myPartitionReplicaCache");
    string queryString = "SELECT NCacheQuerySample.Business.Product WHERE " +
                         "this.ProductID > ?";

    Hashtable values = new Hashtable();
    values.Add("ProductID", 100);
    ...
    onItemAdded = new ContinuousQueryItemAddedCallback(OnQueryItemAdded);
    onItemUpdated = new ContinuousQueryItemUpdatedCallback(OnQueryItemUpdated);
    onItemRemoved = new ContinuousQueryItemRemovedCallback(OnQueryItemRemoved);

    ContinuousQuery query = new ContinuousQuery(queryString, values);
    query.RegisterAddNotification(onItemAdded);
    query.RegisterUpdateNotification(onItemUpdated);
    query.RegisterRemoveNotification(onItemRemoved);
    cache.RegisterCQ(query);
    ...
}
//data set item is removed
void OnQueryItemRemoved(string key){
   ...
   Console.WriteLine("Removed key: {0}", key);
   ...
}
//data set item is updated
void OnQueryItemUpdated(string key){
   ...
   Console.WriteLine("Updated key: {0}", key);
   ...
}
//data set item is added
void OnQueryItemAdded(string key){
   ...
   Console.WriteLine("Added key: {0}", key);
   ...
}

And, unlike SqlDependency, NCache Continuous Query remains active and does not get unregistered on each change notification. So, your high traffic real time applications continue to be notified across multiple changes.

NCache Continuous Query also provides you the flexibility to be notified separately on ADD, UPDATE, and DELETE. And, you can specify these events at runtime even after creating a Continuous Query, something SqlDependency does not allow you to do. This also reduces event traffic from the cache cluster to your real time application.

In summary, NCache provides you a very powerful event driven Continuous Query capability that traditional databases do not offer. And, NCache is also linearly scalable for your high traffic real time applications.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details

.NET and Java Data Sharing with Binary Serialization in Distributed Cache

Many organizations today use a variety of .NET and Java technologies to develop different high traffic applications. At the same time, these organizations have a need to share data at runtime between .NET and Java applications.

One way to share data is through the database but that is slow and also does not scale very well. A much better approach is to use an in-memory distributed cache as a common data store between multiple applications. It is fast and also scales linearly.

As you know, Java and .NET types are not compatible, so you end up transforming the data into XML for sharing. Additionally, most distributed caches either don't provide any built-in mechanism for sharing data between .NET and Java applications or only provide XML based data sharing. If a cache does not provide a built-in data sharing mechanism, then you have to define the XML schema and use third party XML serialization to construct and read all the XML data.

But, XML serialization is an extremely slow and resource hungry process. It involves XML validation, parsing and transformations, which really hamper application performance and use extra memory and CPU.

A distributed cache is by design meant to improve your application performance and scalability. It allows your applications to cache their data and reduce those expensive database trips that cause scalability bottlenecks. XML based data sharing goes against these performance and scalability goals. If you increase the transaction load on your application, you'll see that XML manipulation ends up becoming a performance bottleneck.

A much better way is to share data between .NET and Java applications at the binary level, where you don't have to do any XML transformations. NCache is a distributed cache that provides runtime data sharing between .NET and Java applications through binary serialization.

How does NCache provide runtime data sharing between .NET and Java through binary serialization? Well, first you need to understand why the native .NET and Java binary serialization formats are not compatible. Java and .NET each have their own binary serialization that interprets objects in its own way, and they also have different type systems. Moreover, the serialized byte stream of an object includes the data type details as fully qualified names, which again differ between .NET and Java. This, of course, also hinders data type compatibility between .NET and Java.

To handle this incompatibility, NCache has implemented its own interoperable binary serialization that is common to both .NET and Java. NCache interoperable binary serialization identifies objects based on type ids that are consistent across .NET and Java, instead of fully qualified names that are technology specific. This approach not only provides interoperability but also reduces the size of the generated byte stream. Secondly, NCache interoperable binary serialization implements a custom protocol that generates the byte stream in a format that both its .NET and Java implementations can easily interpret.

Here is an example of NCache config.ncconf with data interoperable class mapping:

  <cache-config name="InteropCache" inproc="False" config-id="0" last-modified=""     
   type="clustered-cache" auto-start="False">
    ...
      <data-sharing>
       <type id="1001" handle="Employee" portable="True">
         <attribute-list>
           <attribute name="Age" type="int" order="1"/>
           <attribute name="Name" type="java.lang.String" order="2"/>
           <attribute name="Salary" type="long" order="3"/>
           <attribute name="Age" type="System.Int32" order="4"/>
           <attribute name="Name" type="System.String" order="5"/>
           <attribute name="Salary" type="System.Int64" order="6"/>
         </attribute-list>
   <class name="jdatainteroperability.Employee:0.0" handle-id="1"   
    assembly="jdatainteroperability.jar" type="java">
           <attribute name="Age" type="int" order="1"/>
           <attribute name="Name" type="java.lang.String" order="2"/>
           <attribute name="Salary" type="long" order="3"/>
          </class>
          <class name="DataInteroperability.Employee:1.0.0.0" handle-id="2" 
           assembly="DataInteroperability, Version=1.0.0.0, Culture=neutral, 
           PublicKeyToken=null" type="net">
           <attribute name="Age" type="System.Int32" order="1"/>
           <attribute name="Name" type="System.String" order="2"/>
           <attribute name="Salary" type="System.Int64" order="3"/>
          </class>
       </type>
      </data-sharing>
    ...
  </cache-config>

As a result, NCache is able to serialize a .NET object and deserialize it in Java as long as there is a compatible Java class available. This binary level serialization is more compact and much faster than any XML transformations.
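
To give a feel for the application code, here is a minimal sketch of the .NET side inserting an Employee into the interoperable cache defined above. A Java application that has the mapped class from com.jar on its classpath would then simply fetch the same key and cast it to its own Employee class. The key, field values, and public properties are hypothetical, and the API calls follow the other samples in this post.

// .NET application - uses DataInteroperability.Employee, Version=1.0.0.0
Cache cache = NCache.InitializeCache("InteropCache");

Employee emp = new Employee();
emp.Age = 30;
emp.Name = "Jane Doe";
emp.Salary = 90000;

// NCache serializes this with its interoperable binary format, so a Java
// client can read the same key back as jdatainteroperability.Employee.
cache.Insert("EMP-2000", emp);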

Finally, the best part in all of this is that you don’t have to write any serialization code or make any code changes to your application in order to use this feature in NCache. NCache has implemented a runtime code generation mechanism, which generates the in-memory serialization and deserialization code of your interoperable classes at runtime, and uses the compiled form so it is super fast.

In summary, using NCache you can scale and boost your application performance by avoiding the extremely slow and resource hungry XML serialization.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details

JSP Session Persistence and Replication with Distributed Cache

As you know, JSP applications have the concept of a Session object to handle multiple HTTP requests. This is because the HTTP protocol is stateless and the Session is used to maintain the user's state across multiple HTTP requests.

In a single web server configuration, there is no issue but as soon as you have a multi-server load balanced web farm running a JSP application, you immediately run into the issue of where to keep the Session. This is because a load balancer ideally likes to route each HTTP request to the most appropriate web server. But, if the Session is kept only on one web server then you’re forced to use the sticky session feature of the load balancer where it sends requests related to a given Session only to the web server where the Session resides.

This means one web server might get overwhelmed even when you have other web servers sitting idle, because the Session for that user resides on that one web server. This of course hampers scalability. Additionally, keeping the Session object on only one web server also means loss of Session data if that web server goes down for any reason.

To avoid this data loss problem, you must have a mechanism where the Session is replicated to more than one web server. Here, the leading Servlet engines (Tomcat, WebLogic, WebSphere, and JBoss) all provide some form of Session persistence. They even include some form of Session replication, but all of them have issues. For example, file based and JDBC persistence are slow and cause performance and scalability issues. Session replication is also very weak because it replicates all sessions to all web servers, thereby creating unnecessary copies of the Session when you have more than two web servers, even though fault tolerance can be achieved with only two copies.

In such situations, a Java distributed cache like NCache is your best bet to ensure that session persistence across multiple servers in the web cluster is done intelligently and without hampering your scalability. NCache has a caching topology called Partition-Replica that not only provides high availability and failover through replication but also provides a large in-memory session store through data partitioning. Data partitioning enables you to cache large amounts of data by breaking up the cache into partitions and storing each partition on a different cache server in the cache cluster.

The NCache Partition-Replica topology replicates the session data of each node to only one other node in the cluster. This approach eliminates the performance penalty of replicating session data to all the server nodes, without compromising reliability.

In addition, the session persistence provided by the web servers (Apache, Tomcat, WebLogic and WebSphere) uses the resources and memory of the web cluster. This hinders your application performance because the web cluster nodes that are responsible for processing client requests also handle the extra work of session replication and its in-memory storage. NCache, on the other hand, can run on separate boxes outside your web cluster. This frees up web cluster resources, which can then be used to handle more client requests.

For JSP session persistence, NCache has implemented a session module, NCacheSessionProvider, as a Java Servlet filter. The NCache JSP Servlet filter dynamically intercepts requests and responses and handles session persistence behind the scenes. And, you don't have to change any of your JSP application code.

Here is a sample NCache JSP Servlet Filter configuration you need to define in your application deployment descriptor to use NCache for JSP Session persistence:


<filter>
<filter-name>NCacheSessionProvider</filter-name>
<filter-class> com.alachisoft.ncache.web.sessionstate.NSessionStoreProvider
</filter-class>
</filter>
<filter-mapping>
<filter-name>NCacheSessionProvider</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>

<init-param>
<param-name>cacheName</param-name>
<param-value>PORCache</param-value>
</init-param>

<init-param>
<param-name>configPath</param-name>
<param-value>/usr/local/ncache/config/</param-value>
</init-param>

Hence, NCache provides you a much better mechanism to achieve session persistence in a web cluster, along with a performance and scalability boost.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details

Scaling your Java Spring Applications with Distributed Cache

Spring is a popular lightweight dependency injection and aspect oriented container and framework for Java. It reduces the overall complexity of J2EE development and promotes high cohesion and loose coupling. Because of the benefits Spring provides, many developers use it to create high traffic applications, from small to enterprise level.

Here is an example of a Spring Java application:

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MySampleApp {
   ...
   ApplicationContext ctx = new ClassPathXmlApplicationContext("spring.xml");
   // The "department" bean is defined in spring.xml
   Department dept = (Department) ctx.getBean("department");

   List<Employee> employees = dept.getEmployees();
   for (int i = 0; i < employees.size(); i++)
   {
      Employee emp = employees.get(i);
      System.out.println("Employee Name: " + emp.getName());
   }
   ...
}

But, these high traffic Spring applications face a major scalability problem. Although these applications can scale by adding more servers to the application server farm, their database server cannot scale in the same fashion to handle the growing transactional load.

In such situations, a Java distributed cache is your best bet to handle the database scalability problem. Java distributed cache offloads your database by reducing those expensive database trips that are causing scalability bottlenecks. And, it also improves your application performance by serving data from in-memory cache store instead of the database.

NCache is a Java distributed cache and has implemented the Spring cache provider. The NCache Spring provider introduces a generic Java caching mechanism with which you can easily cache the output of the CPU intensive, time consuming, and database bound methods of your Spring application. This approach not only reduces database load but also reduces the number of method executions and improves application performance.

The NCache Spring provider has a set of annotations, including @Cacheable, to handle cache related tasks. Using these annotations you can easily mark the methods to be cached, along with cache expiration, key generation and other strategies.

When a Spring method marked as @Cacheable is invoked, NCache checks the cache store to see whether the method has already been executed with the given set of parameters. If it has, the result is returned from the cache. Otherwise, the method is executed and its result is cached. That is how expensive CPU, I/O and database bound methods are executed only once and their results are reused without re-executing the method.

A Java distributed cache is essentially a key-value store, therefore each method invocation should translate to a suitable unique key for the access. To generate these cache keys, the NCache Spring provider uses a combination of the class name, method and arguments. However, you can also implement your own custom key generator by implementing the com.alachisoft.ncache.annotations.key.CacheKeyGenerator interface.

Here are the steps to integrate NCache cache provider in your application: 

  1. Add Cache Annotations: Add the NCache Spring annotations to the methods that require caching. Here is a sample Spring method using an NCache annotation:

     @Cacheable
     (cacheID="r2-DistributedCache",
      slidingExpirator = @SlidingExpirator (expireAfter=15000)
     )
     public Collection<Message> findAllMessages()
     {
        ...
        Collection<Message> values = messages.values();
        Set<Message> result = new HashSet<Message>();
        synchronized (messages) {
           Iterator<Message> iterator = values.iterator();
           while (iterator.hasNext()) {
              result.add(iterator.next());
           }
        }
        ...
        return Collections.unmodifiableCollection(result);
     }

  2. Register NCache Spring Provider: Update your Spring application's spring.xml to register the NCache Spring provider as follows:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:oxm="http://www.springframework.org/schema/oxm"
xmlns:mvc="http://www.springframework.org/schema/mvc"
xmlns:ncache="http://www.alachisoft.com/ncache/annotations/spring"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd
http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
http://www.alachisoft.com/ncache/annotations/spring http://www.alachisoft.com/ncache/annotations/spring/ncache-spring-1.0.xsd">

<ncache:annotation-driven>
        <ncache:cache id="r1-LocalCache" name="myCache"/>
        <ncache:cache id="r2-DistributedCache" name="myDistributedCache"/>
   </ncache:annotation-driven>
</beans>

Hence, by using the NCache Spring cache provider you can scale your Spring applications linearly and boost performance.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details

Using NCache as Hibernate Second Level Java Cache

Hibernate is an Object-Relational Mapping library for the Java language. It maps Java classes to database tables and shortens the overall development cycle. Because of the benefits Hibernate provides, more and more high transaction applications are developed using it. Here is an example of Hibernate in a Java application.


import org.hibernate.*;

public class HibernateSample
{
    ...
    Session session = factory.openSession();
    Transaction tx = session.beginTransaction();
    Query query = session.createQuery("from Customer c");
    Iterator it = query.list().iterator();
    while (it.hasNext()) {
        Customer customer = (Customer) it.next();
        ...
    }
    tx.commit();
    session.close();
}

But, these high traffic Hibernate applications encounter a major scalability issue. Although they are able to scale at the application tier, their database or data storage is not able to scale with the growing transactional load.

Java distributed caching is the best technique to solve this problem because it reduces the expensive trips to the database that cause the scalability bottleneck. For this reason, Hibernate provides a caching infrastructure that includes a first-level and a second-level cache.

The Hibernate first-level cache provides a basic standalone (in-proc) cache which is associated with the Session object and is limited to the current session only. The problem with the first-level cache is that it does not allow the sharing of objects between different sessions. If the same object is required by different sessions, each of them makes a database trip to load it, which increases database traffic and causes a scalability problem. Moreover, when the session is closed, all of its cached data is lost and the next time you have to fetch it again from the database.

High traffic Hibernate applications that rely only on the first-level cache also face a cross-server cache synchronization problem when deployed in a web farm. In a web farm, each node runs a web server – Apache, Oracle WebLogic, etc. – with multiple worker processes serving the requests. And, the Hibernate first-level cache in each worker process may hold a different version of the same data cached directly from the database.

That is why Hibernate provides a second-level cache with a provider model. The Hibernate second-level cache allows you to plug in a third-party distributed (out-proc) caching provider to cache objects across sessions and servers. The second-level cache is associated with the SessionFactory object and is available to the entire application, instead of a single session.

When you enable the Hibernate second-level cache, you end up with two caches: one is the first-level cache and the other is the second-level cache. Hibernate always tries to retrieve objects from the first-level cache first; if that fails, it tries the second-level cache. If that also fails, the objects are loaded directly from the database and cached as well. This configuration significantly reduces database traffic because most of the data is served by the second-level distributed cache.

NCache Java has implemented a Hibernate second-level caching provider by extending org.hibernate.cache.CacheProvider. You can easily plug the NCache Java Hibernate distributed caching provider into your Hibernate application without any code changes. NCache allows you to scale your Hibernate application to multi-server configurations without the database becoming the bottleneck. NCache also provides all the enterprise level distributed caching features like data size management, data synchronization across servers and with the database, etc.

You can plug in the NCache Java Hibernate caching provider by simply modifying your hibernate.cfg.xml and ncache.xml as follows:

<hibernate-configuration>
  <session-factory>
<property name = "cache.provider_class">
alachisoft.ncache.integrations.hibernate.cache.NCacheProvider,
alachisoft.ncache.integrations.hibernate.cache
</property>
  </session-factory>
</hibernate-configuration>

<ncache>
<region name = "default">
   <add key = "cacheName" value = "myClusterCache"/>
   <add key = "enableCacheException" value = "false"/>
   <class name = "hibernator.BLL.Customer">
<add key = "priority" value = "1"/>
<add key = "useAsync" value = "false"/>
<add key = "relativeExpiration" value = "180"/>
   </class>
</region>
</ncache>
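
Depending on your Hibernate version and mappings, you typically also have to turn the second-level cache on and mark the entities you want cached; the exact property names and mapping style vary between Hibernate releases, so treat the following as a hedged sketch using standard Hibernate settings rather than NCache-specific configuration.

<!-- hibernate.cfg.xml: enable the second-level cache -->
<property name="hibernate.cache.use_second_level_cache">true</property>

<!-- Customer mapping: mark the entity as cacheable (table name is hypothetical) -->
<class name="hibernator.BLL.Customer" table="Customers">
   <cache usage="read-write"/>
   ...
</class>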

Hence, by using the NCache Java Hibernate distributed cache provider you can linearly scale your Hibernate applications without any code changes.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details

How to Synchronize Distributed Cache with Database with CLR Stored Procedures

Distributed caching has become a very important part of any high transaction application in order to ensure that the database does not become a scalability bottleneck. But, since a distributed cache keeps a copy of your application data, you must always ensure that it is kept synchronized with your database. Without this, the distributed cache holds stale data that causes data integrity problems.

SQL Server provides an event notification mechanism where a distributed cache like NCache can register itself for change notification through SqlCacheDependency and then receive notifications from SQL Server when the underlying data changes in the database. This allows NCache to immediately invalidate or reload the corresponding cached item, which keeps the cache always synchronized with the database.

However, SqlCacheDependency can become a very resource intensive way of synchronizing the cache with the database. First of all, you have to create a separate SqlCacheDependency for each cached item, and this could easily go into the tens of thousands if not hundreds of thousands. And, SQL Server maintains data structures for each SqlCacheDependency separately so it can monitor any data changes related to it. This consumes a lot of extra resources and can easily choke the database server.

Secondly, SQL Server fires separate .NET events for each data change and NCache catches these events. These .NET events can be quite heavy and could easily overwhelm the network and the overall performance of NCache and your application.

There is a better alternative. It involves writing a CLR stored procedure that connects to NCache from within SQL Server and directly updates or invalidates the corresponding cached item. You then call this CLR stored procedure from an update or delete trigger of your table (a sample trigger is shown after the steps below). You can do this with SQL Server 2005 or 2008, and also with Oracle 10g or later, but only if it is running on Windows.

A CLR stored procedure is more resource efficient because it does not create the data structures related to SqlCacheDependency. And, it also does not fire .NET events to NCache. Instead, it opens an NCache client connection and directly tells NCache whether to invalidate a cached item or reload it. This connection with NCache is highly optimized and much faster and lighter than .NET events. Below is an example of how to use a CLR stored procedure.

1. Copy log4net and protobuf-net from the Windows GAC to the NCache/bin/assembly/2.0 folder (choose 4.0 if the target platform is .NET 4.0).

2. Register NCache and the following assemblies in SQL Server. An example is given below; it uses Northwind as the sample database.

use Northwind

alter database Northwind
set trustworthy on;
go

drop assembly SMdiagnostics
drop assembly [System.Web]
drop assembly [System.Messaging]
drop assembly [System.ServiceModel]
drop assembly [System.Management]

CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo
FROM N'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo
FROM N'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.Web.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY [System.Management] AUTHORIZATION dbo
FROM N'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.management.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo
FROM N'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo
FROM N'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY NCache
FROM N'C:\Program Files\NCache\bin\assembly\2.0\Alachisoft.NCache.Web.dll'
WITH permission_set = unsafe

3. Open Visual Studio and create a SQL CLR Database project, as shown below, to write a stored procedure against NCache. Add a reference to the NCache assembly that you registered in the previous step; it will appear under SQL Server with the name "NCache".

[Screenshot: creating a SQL CLR Database project in Visual Studio]

4. Write your stored procedure. Here is a sample:

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void TestSProc(string cacheName)
    {
        SqlPipe sp = SqlContext.Pipe;

        try
        {
            sp.Send("Starting .....");

            if (string.IsNullOrEmpty(cacheName))
                cacheName = "mycache";

            // Connect to the cache and update (or invalidate) the corresponding item
            Cache _cache = NCache.InitializeCache(cacheName);
            _cache.Insert("key", DateTime.Now.ToString());
            sp.Send("Test is completed ...");
        }
        catch (Exception exception)
        {
            sp.Send("Error: " + exception.Message);
        }
    }
}

5. Enable CLR integration on the database as shown below:

sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO

6. Deploy the stored procedure from Visual Studio and test it.
7. After deploying the stored procedure, you need to place your stored procedure assembly in the C:\Program Files\NCache\bin\assembly\2.0 folder, because NCache does not resolve assembly references directly from the Windows GAC and needs them locally.
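
Once the stored procedure is deployed, you would typically call it from an update or delete trigger on the table whose changes you want pushed to the cache, as mentioned earlier. Here is a minimal T-SQL sketch; the table, trigger name, and the way you would derive the cache key from the inserted/deleted rows are all hypothetical.

CREATE TRIGGER trgProductsChanged
ON dbo.Products
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Call the CLR stored procedure created above so it can invalidate
    -- or reload the corresponding cached item(s).
    EXEC dbo.TestSProc @cacheName = 'mycache';
END
GO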

CLR based stored procedures or triggers can greatly improve application performance compared to SqlCacheDependency, which is relatively slower and can be overwhelming for large data sets.

Download NCache Trial | NCache Details
