How Does a .NET Cache Improve Performance and Scalability?

To stay competitive in today's market, .NET applications need to be extremely responsive and scalable. The bottleneck standing in the way of these goals is usually your relational database.

This bottleneck is twofold. First, reads from disk are inefficient and time consuming. Second, you cannot scale out the database tier by adding more database servers. A .NET distributed cache, on the other hand, provides fast data access because it is in-memory, and it scales out linearly the same way your application tier does.

NCache is a .NET distributed cache that provides performance and scalability for your applications. It comes with a rich set of features including, but not limited to, cache elasticity, high availability, data replication, seamless integration with existing technologies, and ease of management. Let's focus on performance and scalability, the two fundamental metrics identified above that .NET applications need to survive in today's world, and see how NCache is positioned to deliver both.

NCache gets its performance edge over a relational database because it keeps its data in memory rather than on disk. The performance boost over relational databases is ten times or higher, depending on your hardware and on where the .NET cache sits in your network. For example, if you deploy NCache as a local in-proc cache, data access becomes lightning fast.

NCache provides scalability by letting you add more cache servers as your transaction load grows. If you see your application getting overwhelmed by transaction load, just add a new cache server at runtime; you don't even have to stop your application. With the new cache server added, you can serve more requests, and all of this happens transparently to the user. That is what scalability means here.
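
To make this concrete, below is a minimal sketch of the cache-aside pattern using NCache's standard .NET API (InitializeCache, Get, and Insert). The cache name "demoCache", the Customer type, and the database helper are placeholder assumptions for illustration; the application code stays the same whether the cache cluster has one server or ten.

using System;
using Alachisoft.NCache.Web.Caching;

public class CustomerRepository
{
    // "demoCache" is a placeholder; use the name of your own clustered cache.
    private readonly Cache _cache = NCache.InitializeCache("demoCache");

    public Customer GetCustomer(string customerId)
    {
        string key = "Customer:" + customerId;

        // In-memory lookup; no disk or database round trip on a cache hit.
        Customer customer = _cache.Get(key) as Customer;
        if (customer == null)
        {
            customer = LoadCustomerFromDatabase(customerId); // your data access code
            _cache.Insert(key, customer);                    // cache it for subsequent requests
        }
        return customer;
    }

    private Customer LoadCustomerFromDatabase(string customerId)
    {
        // Stubbed out for the sketch.
        return new Customer { Id = customerId };
    }
}

[Serializable] // cached objects must be serializable when the cache runs out-of-process
public class Customer
{
    public string Id { get; set; }
}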

In-Memory Data Grid Architecture

NCache offers a number of caching topologies to choose from, depending on your specific needs. A caching topology defines how your data is stored and how the individual cache servers in the cluster interact with one another. The available topologies are Partitioned Cache, Partition-Replica Cache, Replicated Cache, and Mirrored Cache.

If your primary concern is scalability rather than reliability, use the Partitioned Cache topology. If your primary focus is reliability rather than scalability, go for the Replicated Cache topology. The Partition-Replica Cache topology combines the two and gives you the best of both worlds: reliability and scalability at the same time, with some trade-offs.

To conclude, if you want your application to keep pace with today's growing performance and scalability demands, a .NET distributed cache is the way to go.


Couchbase Alternatives Open Source Distributed Cache

The database tier based on traditional RDBMS has proven to be the biggest bottleneck in the way of achieving competitive response times for applications. This has forced the application vendors to look for alternatives that can provide improved performance. One such alternative is storing data in a distributed cache.

From the available caching technologies you need to pick one that answers most, if not all, of the major questions in this domain. In this post I compare two products in this arena: Couchbase and NCache – Open Source Distributed Cache.

1 – ASP.NET Sessions

ASP.NET Session State storage has come a long way, from keeping session information in-memory on the Web server (the default), to a State Server, to SQL Server. All of these options share one limitation: a single point of failure. Session state is lost if the Web server, the State Server, or the SQL Server goes down.

NCache addresses these concerns by saving session state in its Open Source Distributed Cache. Since the cache is distributed, there is no single point of failure. Despite its importance, Couchbase does not support saving ASP.NET sessions.

2 – ASP.NET View State

ASP.NET uses View State to store page, control, and custom values between multiple HTTP requests. When a page contains complex controls, e.g. a Data Grid control, the string representing the View State grows very large. In that case you consume extra bandwidth passing this string back and forth without any real benefit, and you also open up a security loophole.

What are the ways to address these issues? All we need is a distributed cache that can store the View State text and pass back an identifier that can be used to retrieve the View State from the store. NCache provides this exact functionality in the form of ASP.NET View State Caching, whereas Couchbase does not.

3 – Memcached Smart Wrapper

NCache provides a Memcached integration that lets existing Memcached applications use NCache transparently.

A few words about Memcached: it is a popular distributed cache that is widely used in the market but offers only very basic caching features. It provides no support for high availability, data replication, cache elasticity, or ease of management.

Couchbase does not provide any such integration, so for someone using Memcached there is only one way to adopt Couchbase: rewrite your code from scratch!

4 – Security & Encryption

One of the fundamental requirements of applications that need fast response times is that their data also be secure. This makes security and encryption must-haves for any distributed caching provider.

NCache provides comprehensive support for both of these features. Couchbase, on the other hand, fails to provide data encryption and Active Directory/LDAP authentication. Read more on NCache encryption here.

5 – Read-through & Write-through

Read-through means that your application always asks the cache for data; if the cache doesn't have it, it fetches the data from your data source and keeps it for future accesses. This greatly simplifies your application code, because the cache API is much simpler to use than the database.

Similarly, write-through allows your application to write to the cache, and the cache then writes the same data to the database either synchronously or asynchronously. Together, these features let you treat the cache as your enterprise data store and have all applications read from it and write to it.
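
As a rough illustration of the pattern, here is a sketch of a read-through/write-through handler that a cache calls on a miss or on a write. The interface and method names below are assumptions made for the sketch, not the exact NCache provider API; NCache ships its own provider interfaces that you implement and register against a cache.

using System;

// Hypothetical provider interface, for illustration only.
public interface IDataSourceProvider
{
    object LoadFromSource(string key);             // read-through: called by the cache on a miss
    void WriteToSource(string key, object value);  // write-through: called by the cache on a write
}

public class OrderDataSourceProvider : IDataSourceProvider
{
    public object LoadFromSource(string key)
    {
        // The cache calls this when "key" is not found; the result is cached for future reads.
        return LoadOrderFromDatabase(key);
    }

    public void WriteToSource(string key, object value)
    {
        // The cache calls this after updating its own copy, synchronously or asynchronously.
        SaveOrderToDatabase(key, (Order)value);
    }

    // Stand-ins for your own data access code.
    private Order LoadOrderFromDatabase(string key) { return new Order { Id = key }; }
    private void SaveOrderToDatabase(string key, Order order) { /* persist to the database */ }
}

[Serializable]
public class Order
{
    public string Id { get; set; }
}

The point is that the application itself only ever talks to the cache; the database access lives inside the provider.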

NCache provides full support for both read-through and write-through, but Couchbase does not and hence lags behind NCache here as well. More on Read-through & Write-through.

Further Reading

For a feature-by-feature comparison of Couchbase with NCache, please take a look at the following link:

Couchbase vs. NCache


AppFabric Alternatives Open Source Distributed Cache

Today's applications need to scale and handle extreme data loads due to the explosion of web technologies and the Internet of Things. The biggest bottleneck in the way of achieving this is the data tier with its relational databases. The need of the hour is to make this data available to applications in a faster and more efficient way.

You can achieve this by using a distributed cache, which enables much faster response times. There are a few big names in this domain with less-than-competitive products, e.g. Microsoft with its AppFabric caching product. In contrast, NCache is an Open Source Distributed Cache that offers cutting-edge technology and features.

An interesting point to note here is that Microsoft is no longer pushing AppFabric in its flagship cloud environment, Microsoft Azure. Instead, it recommends another Open Source cache called Redis (see the NCache comparison with Redis). Why Microsoft chose to do so will become clearer as you read this article.

In the next few paragraphs, I’m taking a somewhat detailed look at some of the core differences between NCache and AppFabric.

1 – High Availability of Cache (Peer to Peer Architecture)

A distributed cache runs in your production environment and as part of your application deployment. Therefore, you should treat it just like a database when it comes to high availability requirements.

Any distributed cache that does not provide a peer-to-peer architecture for the cache cluster becomes very limiting. AppFabric does not provide such a clustering architecture and is therefore not as flexible as NCache in providing high availability. NCache provides truly peer-to-peer clustering.

2 – Synchronization with Data Sources

A core requirement of caching data is keeping it from becoming stale. In plain words, this means the cache needs to refresh its data every time there is an update or remove operation in the master database. NCache provides a very strong mechanism for this, called Data Synchronization, through three types of dependencies:

  •        SqlDependency (SQL Server 2005-2012 events)
  •        OracleDependency (Oracle events)
  •        DbDependency (polling for OLEDB databases)

This is a feature that AppFabric lacks even though it is core to a powerful distributed cache.
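
For illustration, here is a minimal sketch of attaching a SQL Server dependency while caching an item with NCache. The cache name, the Product type, the query, and the exact namespaces are assumptions for the sketch; check the class names and the Insert overload against your NCache version.

using System;
using Alachisoft.NCache.Runtime;
using Alachisoft.NCache.Runtime.Dependencies;
using Alachisoft.NCache.Web.Caching;

public class ProductCache
{
    // "demoCache" is a placeholder; use the name of your own clustered cache.
    private readonly Cache _cache = NCache.InitializeCache("demoCache");

    public void CacheWithDatabaseSync(Product product, string connectionString)
    {
        // If this row changes in SQL Server, the cache is notified and the item is invalidated.
        SqlCacheDependency dependency = new SqlCacheDependency(
            connectionString,
            "SELECT ProductID, UnitPrice FROM dbo.Products WHERE ProductID = " + product.Id);

        _cache.Insert("Product:" + product.Id, product, dependency,
                      Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
                      CacheItemPriority.Default);
    }
}

[Serializable]
public class Product
{
    public int Id { get; set; }
    public decimal UnitPrice { get; set; }
}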

3 – WAN Replication

As the name suggests, WAN replication deals with the availability of data in geographically dispersed data centers. NCache provides powerful WAN replication capability for its distributed cache in the following data center configurations:

  •        Active – Passive
  •        Active – Active

The first configuration applies when you want one data center to handle all user requests (the active data center) while a backup data center exists for disaster recovery (the passive data center). All data operations on the active data center are replicated to the passive data center asynchronously, so if the active data center goes down, the passive one becomes active and starts serving user requests.

The second configuration applies when you want two active data centers, each serving users in its nearby geographical area. Since both data centers are active, data is replicated in both directions. If the need arises to reroute all traffic to one data center, because the other is overwhelmed or goes down, this can be done without any data integrity issues.

AppFabric fails to provide this much-needed functionality in either configuration.

4 – Search Cache with SQL

The real power of a cache lies in making the data it holds easily searchable and accessible. NCache comes with very handy tools to accomplish this:

  •        Object Query Language (OQL)
  •        Group-by for Queries
  •        Delete Statement in Queries
  •        LINQ Queries

AppFabric provides none of this and hence lags behind NCache in this area as well.
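
As a rough sketch of what searching the cache looks like, the snippet below runs an OQL query. The Search/SearchEntries method names follow the older NCache query API, and the fully qualified Product class name is illustrative; queried attributes must be configured as queryable indexes on the cache, so treat the details as assumptions.

using System.Collections;
using Alachisoft.NCache.Web.Caching;

public class ProductQueries
{
    private readonly Cache _cache = NCache.InitializeCache("demoCache"); // placeholder name

    public IDictionary FindExpensiveProducts()
    {
        // OQL queries run against object attributes that are configured as indexes.
        string query = "SELECT MyApp.Product WHERE this.UnitPrice > 100";

        // SearchEntries returns the matching keys and values; Search (not shown) returns keys only.
        return _cache.SearchEntries(query);
    }
}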

5 – Memcached Smart Wrapper

NCache provides a Memcached integration that lets existing Memcached applications use NCache seamlessly.

For those of you not familiar with Memcached, it is a popular distributed cache that is widely used in the market but offers only very basic caching features. It provides no support for high availability, data replication, cache elasticity, or ease of management.

AppFabric does not provide any such integration, so for someone using Memcached there is only one way to adopt AppFabric: rewrite your code from scratch!

6 – Cache Size Management – Priority Evictions

To use the memory available to the cache efficiently, we have the concept of evictions. Simply put, eviction removes relatively old data from the cache to make space for newer items. NCache employs a number of algorithms to carry out this important task, such as:

  •        Least Recently Used (LRU) Evictions
  •        Least Frequently Used (LFU) Evictions
  •        Priority Evictions
  •        Do Not Evict Option

Each of these options has its own advantages and disadvantages, with one being most suitable for a particular use case. AppFabric only supports LRU evictions; this is a big limitation, as some scenarios demand keeping older data and evicting by some other metric.
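
To illustrate priority evictions, the sketch below caches two items with different priorities so that, under a priority eviction policy, the low-priority item is evicted first when memory runs low. The cache name is a placeholder, and the CacheItemPriority values are assumed to mirror the familiar ASP.NET priority levels.

using Alachisoft.NCache.Runtime;
using Alachisoft.NCache.Web.Caching;

public class ReferenceDataLoader
{
    private readonly Cache _cache = NCache.InitializeCache("demoCache"); // placeholder name

    public void LoadData(object countryList, object temporaryReport)
    {
        // Long-lived lookup data: keep it in the cache as long as possible.
        _cache.Insert("lookup:countries", countryList, null,
                      Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
                      CacheItemPriority.High);

        // Cheap-to-rebuild data: let the cache evict this first under memory pressure.
        _cache.Insert("report:temp", temporaryReport, null,
                      Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
                      CacheItemPriority.Low);
    }
}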

Further Reading

For a feature-by-feature comparison of NCache with AppFabric, please take a look at the following link:

AppFabric vs NCache


Redis Alternatives Open Source Distributed Cache

The demand for high-speed data access, along with data integrity and fault tolerance, keeps increasing in the contemporary application arena. Traditional disk-based RDBMS systems have failed to address these concerns comprehensively. This has paved the way for in-memory data solutions that cover all the bases listed above.

One such solution is Redis, which tackles many of the critical issues but falls short in some fundamental aspects. NCache is an Open Source Distributed Cache for .NET that addresses these concerns effectively and also provides additional features.

Here are some of the core differences between Redis and NCache.

1 – WAN Replication

First and foremost, Redis has no support for WAN replication of cached data. This feature becomes indispensable when your application is deployed in multiple data centers across the country or the world. NCache provides powerful WAN replication capability for its distributed cache in the following data center configurations:

  •        Active – Passive
  •        Active – Active

The first configuration is suitable when you have an active data center and a disaster-recovery backup data center. The backup data center only becomes active if the active one goes down, and for this to be possible, the distributed cache must be replicated to the passive data center so the data is available when that data center takes over.

The second configuration applies when you have two active data centers serving geographically dispersed users to improve their access time. Here you want the ability to reroute some of the traffic from one data center to another in case of overflow, or to route all traffic to one data center if the other goes down.

In this second case, you need the cache updates from each data center to be replicated to the other, and you also need conflicts to be handled. This is the capability that NCache provides and Redis does not support.

2 – Security & Encryption

Many of the applications needing a distributed cache are dealing with sensitive and highly confidential data. Therefore, security and encryption are two areas that hold fundamental importance when talking about data storage and retrieval.

Redis lacks support for both: it provides neither authentication nor encryption. NCache, in contrast, provides authentication and authorization through Active Directory/LDAP. NCache also provides very strong encryption options for the stored data:

  •        3DES
  •        AES-128
  •        AES-192
  •        AES-256

Read more on NCache encryption here.

3 – Read-through & Write-through

Read-through and write-through are familiar concepts in the domain of distributed caching, but for people just getting familiar with caching, brief definitions won't hurt.

Read-through means that your application always asks the cache for data, and the cache gets it from your data source if it doesn't already have it. This greatly simplifies your application code, because the cache API is much simpler to use than the database.

Similarly write-through allows your application to write to the cache and the cache then writes the same data to the database either synchronously or asynchronously.

Both features allow you to designate the distributed cache as your enterprise data store and have all applications read from it and write to it; the cache then deals with the database. As a result, the cache is always synchronized with your database.

Despite its importance, Redis lacks this feature, whereas NCache covers it in depth.

4 – Cache Administration

The effectiveness of a distributed cache also depends on your ability to administer and monitor it through user-friendly GUI tools. In this regard, Redis does not provide any GUI tools for cache administration or monitoring; the only things available to you are command-line tools.

NCache, on the other hand, provides powerful GUI-based tools, NCache Manager and NCache Monitor, for cache administration and monitoring. Some people prefer command-line tools because they can be used in scripts for automation; NCache provides all cache administration through command-line tools as well.

5 – ASP.NET View State Caching

View State is a mechanism that ASP.NET employs to store page, control, and custom values between multiple HTTP requests across the client and the server. The View State is passed as an encoded string, which becomes very large for pages that involve a lot of controls, a Data Grid control, or other complex controls. This raises two concerns:

  •        Security risk
  •        Bandwidth usage

Both of these concerns are addressed by a distributed cache that stores the View State text and passes back an identifier that can be used to retrieve the View State from the store. NCache provides this exact functionality in the form of ASP.NET View State Caching, whereas Redis does not.

6 – Memcached Smart Wrapper

Memcached is a popular distributed cache used by a host of applications to get the performance boost discussed here. It does, however, have a number of limitations in the areas of high availability, data replication, cache elasticity, and ease of management.

The easiest way for Memcached customers to address these issues is to use NCache's Memcached integration, which lets them plug NCache into their existing Memcached applications without any code change. Users only need to change their application configuration files to take advantage of NCache's distributed caching system.

Redis does not provide any such integration and hence falls behind on this as well.

Further Reading

For a feature-by-feature comparison of NCache with Redis, please take a look at the following link:

Redis vs. NCache


ASP.NET View State Caching in Microsoft Azure for Better Performance

ASP.NET View State is a client-side state management mechanism used to save page and control values. The View State is stored in a hidden field on the page as a Base64-encoded string. It is sent to the client as part of every response and is returned to the server by the client as part of a postback.

<input id="__VIEWSTATE" type="hidden"
name="__VIEWSTATE" value="wEPDwUJNzg0MDMxMDA1D2QWAmYPZBYCZg9kFgQ
CQ9kFgICBQ9kFgJmD2QWAgIBDxYCHhNQcm2aW91c0NvbnRyb2xNb2RlCymIAU1pY3
Jvc29mdC5TaGFyZVBvaW50LldlYkNvbnRyb2xzLlNQQ29udHJbE1vZGUsIE1pY3Jv
29mdC5TaGFyZVBvaW50LCBWZXJzaW9uPTEyLjAuMC4wLCBDdWx0dXJlPW5ldXRyWw
sIFB1YmxpY0tleVRva2VuPTcxZTliY2UxMTFlOTQyOWMBZAIDD2QWDgIBD2QWBgUm
Z19lMzI3YTQwMF83ZDA1XzRlMjJfODM3Y19kOWQ1ZTc2YmY1M2IPD2RkZAUmZ18yN
DQ3NmI4YV8xY2FlXzRmYTVfOTkxNl8xYjIyZGYwNmMzZTQPZBYCZg8PZBYCHgVjbG
DQWBgUmZ19lMzI3YTQwMF83ZDA1XzRlMjJfODM3Y19kOWQ1ZTc2YmY1M2IPD2...."/>

Problems with ASP.NET View State in Microsoft Azure

ASP.NET View State is a very important feature for applications deployed as web/worker roles in Microsoft Azure. But, View state comes with some issues that you need to understand and resolve in order to take full benefit from it.

First of all, ASP.NET View State becomes very large when your Microsoft Azure ASP.NET application has a lot of heavy controls on its pages. This results in a heavy View State payload that travels back and forth between the browser and your application on each request. A heavy View State payload slows down performance and consumes extra bandwidth, especially when an average ASP.NET View State runs into hundreds of kilobytes and millions of such requests are processed by your Microsoft Azure application.

ASP.NET View State is also a security risk when sending confidential data as part of view state to client. This data is vulnerable to attacks and can be tampered with by an attacker, which is a serious security threat.

Solution to ASP.NET View State Problems

You can resolve these ASP.NET View State issues in Microsoft Azure applications by storing the actual ASP.NET View State on the server side in a distributed cache and never sending it to the browser as part of the page payload.

NCache for Azure is an extremely fast and scalable distributed cache for Microsoft Azure. It allows you to store the actual ASP.NET View State in the distributed cache on the server side and send only a small token to the client in its place, which dramatically reduces the payload size. On postbacks, that token is used on the server side to find the corresponding ASP.NET View State in the NCache for Azure distributed cache. The smaller payload resolves the performance and bandwidth issues, because you are no longer shipping a huge View State on each request in your Microsoft Azure application. Moreover, the View State stored on the server side in the NCache for Azure distributed cache is never exposed to clients, which addresses the security concerns mentioned above.

Here is an example of a token being used in place of ASP.NET View State with NCache for Azure ASP.NET View State provider:

<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE"
value="cf8c8d3927ad4c1a84dsadfgsdsfdsda7f891bb89185" />

Using NCache for Azure ASP.NET View State Caching

Step 1: Create an app.browser file in App_browsers directory. Plug in page adapters in the app.browser file as follows:

   File: App_browsers\app.browser 

<browser refID="Default">
   <controlAdapters>
      <adapter controlType="System.Web.UI.Page"
       adapterType="Alachisoft.NCache.Adapters.PageAdapter" />
   </controlAdapters>
</browser>

Step 2: Add the following assembly reference in compilation section of web.config file.

   File: web.config

<compilation defaultLanguage="c#" debug="true" targetFramework="4.0">
   <assemblies>
      <add assembly="Alachisoft.NCache.Adapters, Version=1.0.0.0,
       Culture=neutral, PublicKeyToken=CFF5926ED6A53769"/>
   </assemblies>
</compilation>

Step 3: Register NCache for Azure config section in your web.config file.

   File: web.config

<configSections>
   <sectionGroup name="ncContentOptimization">
      <section name="settings"
       type="Alachisoft.NCache.ContentOptimization.Configurations.ContentSettings"
       allowLocation="true" allowDefinition="Everywhere"/>
   </sectionGroup>
</configSections>

Step 4: Specify settings for your config section in web.config file (that was registered above). These settings control NCache for Azure ASP.NET View State Caching features.

   File: web.config

   <ncContentOptimization>
      <settings viewstateThreshold="12" enableViewstateCaching="true"
       enableTrace="false" groupedViewStateWithSessions="false">
         <cacheSettings cacheName="myCache"
                maxViewStatesPerSession="3">
            <expiration type="Absolute" duration="1"/>
         </cacheSettings>
      </settings>
   </ncContentOptimization>

Conclusion

NCache for Azure provides a no code change option for your Microsoft Azure applications to store ASP.NET View State on the server side in a Distributed Cache. NCache for Azure ASP.NET View State provider optimizes performance by reducing request payload and bandwidth consumption while addressing security issues related to client side View State.

Download NCache Open Source and run it in Microsoft Azure.


ASP.NET Output Caching in Microsoft Azure to Improve Performance

Microsoft ASP.NET Output Cache provides functionality to cache rendered content of ASP.NET pages or user controls for a specified duration. This allows your ASP.NET application to serve all subsequent requests from the cache instead of re-rendering and re-execution of a page.

You add the <%@ OutputCache %> directive on the page to use ASP.NET Output Cache.

<%@ Page ... %>
<%@ OutputCache Duration="duration" VaryByParam="paramList" %>

ASP.NET Output Cache is a very useful feature, especially for situations when a page is accessed more frequently than it changes and you serve it from cache. This improves application performance by avoiding page re-executions and also by reducing your expensive database trips especially when page involves a lot of heavy database operations. This also improves application scalability because the database generally becomes a scalability bottleneck when there are millions of such pages and requests involving database operations.

Problems with ASP.NET Output Cache in Microsoft Azure Environment

When you use ASP.NET Output Cache in Microsoft Azure, page output is stored InProc within your Microsoft Azure Web Role by default. The first issue with InProc ASP.NET Output Cache is that it limits you to the memory available on your Web Role instance, which can lead to out-of-memory issues when you cache a large amount of page output data. Another issue is that your application runs on multiple load-balanced Microsoft Azure Web Role instances; the next request might go to another Web Role instance, which creates a new copy of the ASP.NET Output Cache data in that instance as well. These redundant copies of page output in each Web Role instance consume a lot of extra memory.

Microsoft Azure Web Role instances also recycle quite frequently for maintenance and patching. When this happens, all page output is lost and you have to re-execute all pages to repopulate your page Output Cache, which negatively impacts the performance of your Azure application.

Solution to ASP.NET Output Cache Problems

One way to resolve all these issues in Microsoft Azure is to use a distributed cache, which runs out-of-process and serves as a common store for all Microsoft Azure Web Role instances. ASP.NET 4.0 introduced an extensibility point that lets developers use any distributed cache of their choice as their ASP.NET Output Cache store.

A distributed cache is shared by all Microsoft Azure Web Roles for page output, so no redundant copies are made within individual Web Role instances. Microsoft Azure Web Roles become purely stateless, so data is never lost when Web Roles are recycled. You can cache a huge amount of data in the distributed cache by pooling the memory of all cache servers together. Moreover, a distributed cache reduces the load on your database, because you don't have to go through page executions involving database calls in each Microsoft Azure Web Role instance separately.

NCache for Azure is an in-memory distributed cache for .NET applications deployed in the Microsoft Azure cloud. NCache for Azure implements an ASP.NET Output Cache provider that you can use to store ASP.NET page output in NCache for Azure and resolve all the issues mentioned above. Additionally, NCache for Azure provides data reliability through replication and improves application scalability.

How to use NCache for Azure ASP.NET Output Cache Provider

You can use NCache for Azure ASP.NET Output Cache provider as follows without any code change to your Microsoft Azure ASP.NET application.

Step 1: Add a reference to the NCache for Azure ASP.NET Output Cache provider assembly.

File: web.config

<compilation debug="true" targetFramework="4.0">
   <assemblies>
      <add assembly="Alachisoft.NCache.OutputCache, Version=x.x.x.x, Culture=neutral"/>
   </assemblies>
</compilation>

Step 2: Register NCache for Azure ASP.NET Output Cache Provider under <configuration> section and provide cache settings.

File: web.config

<caching>
   <outputCache defaultProvider="NOutputCacheProvider">
      <providers>
         <add name="NOutputCacheProvider" type="NCOutputCache.NOutputCacheProvider"
          exceptionsEnabled="true" enableLogs="false" cacheName="mypartitionofReplicaCache"/>
      </providers>
   </outputCache>
</caching>

Step 3: Add ASP.NET Output Cache directive on the page that you want to cache. 

<%@ OutputCache VaryByParam="ProductCategory" Duration="300" %>

NCache for Azure ASP.NET Output Cache Features

NCache for Azure provides a rich set of features for caching and managing ASP.NET Output Cache. Below is a list of them:

  1. Specify duration for page output: NCache for Azure allows you to specify the duration for which you want to cache ASP.NET page output.
  2. Cache different versions of a page: NCache for Azure can cache different versions of a page depending on the various ASP.NET Output Cache directives such as VaryByParam, VaryByCustom, and VaryByControl. Another version of the page output is stored in the distributed cache if a different parameter is received for a page request.
  3. Cache different portions of a page: You can also cache only portions of a page instead of the entire page. This is for situations where you cache only the static portion of the page and leave the dynamic part to be rendered at runtime.
  4. Implement custom hooks for ASP.NET Output Cache: NCache for Azure allows you to implement and register custom hooks (an interface) for page output. This lets you attach extended attributes to your page output, such as NCache for Azure database dependencies, Tags, and Groups.

Conclusion

As you have seen, a distributed cache allows you to cache ASP.NET page output, which resolves your ASP.NET application's issues with multiple load-balanced Azure Web Roles. The NCache for Azure ASP.NET Output Cache provider helps improve ASP.NET application performance, scalability, and reliability.

Download NCache Trial | NCache for Azure Details


Event Driven .NET and Java Data Sharing using In-Memory Distributed Cache

These days, many companies are running both .NET and Java applications in their enterprise environment.  Often these applications need to share data with each other at runtime. The most common way they do that today is by storing the data in the database and having the other application poll and look for it. Some people also develop web services solely for the purpose of allowing Java applications to obtain data from .NET applications and vice versa.

The problem with the first approach is that data sharing cannot happen instantaneously, because the "consumer" application has to poll the database at some predetermined interval. It also has the performance and scalability issues of any application that hits the database for its data. As you know, a database cannot scale the same way today's applications can: you can linearly scale the application tier by adding more application servers, but you cannot do the same at the database tier.

The second approach requires a lot of custom programming and essentially changing your application’s architecture just so you can share data with other applications, whether they’re .NET or Java. It would be much nicer if you could continue to develop each application for the purpose that it is being built and not worry about creating a custom framework for data sharing.

Runtime Data Sharing through a Distributed Cache


Ideally, you would want to have an event driven model where a .NET application can be notified whenever a Java application has some data for it and vice versa. But, as you know, .NET and Java are not inherently compatible for this kind of use.

This is where a distributed cache like NCache comes in really handy. NCache provides events that are platform independent and can be shared between .NET and Java. NCache also provides binary level data type compatibility between .NET and Java. This allows you to not only receive events, but also corresponding data in the form of objects and all of that without having to go through any XML based transformation for data sharing purposes.

The NCache event notification framework enables you to register to be notified when different types of events occur within the cache cluster. This way, whenever data is changed, by either a .NET or a Java application, your application gets notified. Here is sample code using an NCache item-based event for data sharing in Java:

import com.alachisoft.ncache.web.caching.*;

public void AddToCache()
{
    // Register a callback so we are notified when this item is removed from the cache.
    CacheEventListener onItemRemoved = new CacheEventListener();
    Cache cache = NCache.initializeCache("PORCache");

    Employee emp = new Employee();
    emp.setDept("Mechanical");

    CacheItem cItem = new CacheItem(emp);
    cItem.setItemRemoveCallback(onItemRemoved);
    cache.insert("EMP-1000-ENG", cItem);
}

public class CacheEventListener implements CacheItemRemovedCallback
{
  ...
  public void itemRemoved(String key, Object value,
                          CacheItemRemovedReason reason)
  {
      Employee emp = (Employee) value;
      System.out.println("Employee Removed " + key + " Dept " + emp.getDept());
  }
  ...
}

NCache provides different item-level notifications such as item-added, item-removed, and item-updated. Applications can register interest in various cached item keys (which may or may not exist in the cache yet), and they are notified whenever that item is added, updated, or removed from the distributed cache by anybody for any reason. For example, even if an item is removed due to expiration or eviction, the item-removed event notification is fired.

Both .NET and Java applications can register interest for the same cached items and be notified about them. The notification includes the affected cached item as well, which is transformed into either .NET or Java, depending on the type of application.

Here is a sample code of using NCache item-based event for data sharing in .NET:

public void AddToCache()
{
    Cache cache = NCache.InitializeCache("PORCache");
    Employee emp = new Employee();
    emp.Name = "David Rox";
    emp.Dept = "Engineering";
    ...

    cache.Insert("EMP-1000-ENG", emp, null,
                 Cache.NoAbsoluteExpiration,
                 Cache.NoSlidingExpiration, CacheItemPriority.Default);

    // Register callbacks to get notified of changes related to the provided key
    cache.RegisterKeyNotificationCallback("EMP-1000-ENG",
        new CacheItemUpdatedCallback(OnItemUpdated),
        new CacheItemRemovedCallback(OnItemRemoved));
}
...
void OnItemRemoved(string key, object value, CacheItemRemovedReason reason)
{
    // Item is removed. Do something.
    Employee emp = (Employee) value;
    Console.WriteLine("Employee Removed {0}, Dept {1}", key, emp.Dept);
}

In summary, with NCache you can not only share data between .NET and Java applications at runtime but can also use distributed events to notify applications of any change in data.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


ASP.NET Session State Store in Microsoft Azure

Microsoft Azure provides a platform for ASP.NET applications in the cloud. Very often, these applications are highly transactional and mission critical in nature. Therefore, it is very important that they are able to scale and that there is no data loss if a web server goes down at any time.

ASP.NET Session State needs to be stored somewhere, and its storage can become a major performance and scalability bottleneck. In Microsoft Azure, you can store ASP.NET Session State in-process (InProc), in SQL Azure (a database), in Azure Table Storage, or in a distributed cache.

InProc Option

The InProc session storage option doesn't work well in the Microsoft Azure architecture. First of all, ASP.NET Session State is not shared between multiple instances of the Web Role in InProc mode. Secondly, you end up using sticky sessions in Microsoft Azure, which may result in uneven load distribution, and sticky sessions involve extra configuration on your part as Microsoft Azure doesn't use them by default. Moreover, any Web Role instance going down due to failure or maintenance results in session data loss, which is obviously not acceptable.

Azure Table Option

Azure Table Storage is a file-based ASP.NET Session State provider, which is provided on an 'as-is' basis as a code sample, meaning it is not officially supported by Microsoft. It is intended for storing structured entities. Even though it is a cheaper option, it is still not an ideal place to store ASP.NET Session State, primarily because of performance, as it is file based.

SQL Database Option

Microsoft Azure SQL Database can also be used as storage for ASP.NET Session State by using the conventional ASP.NET SQLServer mode. But the ASP.NET Session State object is stored in the database as a BLOB, and relational databases were never really designed for BLOB storage. This causes performance issues and is definitely a major scalability bottleneck for your Microsoft Azure ASP.NET application.

Distributed Cache Option

A distributed cache provides ideal storage for ASP.NET Session State in Microsoft Azure. For example, you can use NCache for Azure, a Microsoft Azure distributed cache for .NET applications. It is extremely fast and more scalable than all the other Microsoft Azure options mentioned above, and it also replicates sessions so there is no data loss if a cache server ever goes down. Moreover, you eliminate all issues related to session sharing and get even load balancing that ensures full utilization of all of your Azure Web Role instances.

How to Configure the NCache for Azure ASP.NET Session State Provider?

NCache for Azure implements the ASP.NET Session State provider that Microsoft Azure ASP.NET applications can use. NCache for Azure uses Microsoft Azure VMs and forms a dedicated caching tier. ASP.NET applications in Microsoft Azure can then be directed to use this Azure distributed cache for ASP.NET Session State storage.

The nice thing about the NCache for Azure ASP.NET Session State provider is that it doesn't require any application code changes. Simply modify your application's web.config file as follows to use NCache for Azure as your distributed cache for ASP.NET Session State:

   <assemblies>
      <add assembly="Alachisoft.NCache.SessionStoreProvider,
           Version=x.x.x.x, Culture=neutral,
           PublicKeyToken=CFF5926ED6A53769"/>
   </assemblies>

<sessionState cookieless="false" regenerateExpiredSessionId="true" mode="Custom"
              customProvider="NCacheSessionProvider" timeout="20">

   <providers>
      <add name="NCacheSessionProvider"
           type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
           sessionAppId="NCacheTest" cacheName= "TestCache"
           writeExceptionsToEventLog="false" />
   </providers>
</sessionState>

Here are a few important benefits you achieve when you use NCache for Azure as your distributed cache for storing ASP.NET Session State.

  • Linear Scalability and Performance: NCache for Azure is based on a dynamic clustering protocol, which allows you to add more servers to your cache at runtime. Your application can scale out linearly by adding more servers to your Azure distributed cache as the application load grows, without changing the application architecture.
  • Session Replication: NCache for Azure provides reliability with the help of replication. You can take application instances offline for maintenance, patching, and new releases without having to worry about any session data loss.
  • High Availability: NCache for Azure provides fault-tolerant high availability because it is based on a hundred percent peer-to-peer architecture. You will not lose any data or suffer any application downtime if a node in the distributed cache fails.

Conclusion

An Azure distributed cache such as NCache for Azure is the best option for storing ASP.NET Session State in Microsoft Azure, primarily because of its performance, scalability, reliability, and high availability. The Microsoft Azure distributed cache offered by NCache for Azure is very easy to use and doesn't require any application code changes.

Download NCache for Azure Trial | NCache for Azure Details


ASP.NET Session State Sharing across Multiple Azure Regions

Many high traffic ASP.NET applications in Microsoft Azure are deployed over multiple Microsoft Azure regions in order to handle geographically separated traffic. In these situations, the load balancer always sends traffic to the Microsoft Azure region closest to the user for faster response time.

In this scenario, you may run into a situation where you have to redirect some of your traffic from one Microsoft Azure region to another. This may happen because you have too much traffic in one Microsoft Azure region while another region is underutilized, or because you need to bring a region down for maintenance.

When you redirect traffic, your users normally lose their ASP.NET sessions because your ASP.NET Session State is not available in the other Microsoft Azure region. And, this is obviously not good. Ideally, you want to redirect traffic without causing any interruptions for your users.

In Microsoft Azure, the only way you can achieve this is if you keep a common ASP.NET Session State storage across multiple Microsoft Azure regions. This allows you to redirect traffic without losing any ASP.NET Session State. But, this option has severe performance issues because a large percentage of ASP.NET sessions are being accessed across the WAN.

NCache for Azure is an extremely fast and scalable Microsoft Azure distributed cache for .NET applications. It provides intelligent multi-region ASP.NET Session State support for ASP.NET applications deployed across multiple Microsoft Azure regions.

NCache for Azure intelligently detects when a user request is redirected from one Microsoft Azure region to another and automatically moves that user's ASP.NET session to the new region. All subsequent requests are served from this new Microsoft Azure region. This allows your ASP.NET applications to seamlessly share ASP.NET sessions across Microsoft Azure regions without negatively impacting performance or causing session data loss.

NCache for Azure achieves multi-region ASP.NET Session State capability by letting you define primary and secondary caches in each Microsoft Azure region. Additionally, you specify a "sid-prefix" attribute, which is prefixed to all session IDs in each Microsoft Azure region. This helps the NCache for Azure SSP module identify which ASP.NET sessions belong to which Microsoft Azure region, so it can decide to move an ASP.NET session when a request is redirected to another Microsoft Azure region.

Here is a sample config to use NCache for your ASP.NET Session State storage:

   <assemblies>
      <add assembly="Alachisoft.NCache.SessionStoreProvider,
           Version=x.x.x.x, Culture=neutral,
           PublicKeyToken=CFF5926ED6A53769"/>
   </assemblies>

<sessionState cookieless="false" regenerateExpiredSessionId="true" mode="Custom"
              customProvider="NCacheSessionProvider" timeout="20">

   <providers>
      <add name="NCacheSessionProvider"
           type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
           sessionAppId="NCacheTest" cacheName= "London_Cache"
           writeExceptionsToEventLog="false" />
   </providers>
</sessionState>

Additionally, you need location affinity configurations for Azure multi-region ASP.NET Session State support.

<configSections>
     <section name="ncache" type="Alachisoft.NCache.Web.SessionStateManagement.NCacheSection,
              Alachisoft.NCache.SessionStateManagement,
              Version=x.x.x.x, Culture=neutral, PublicKeyToken=CFF5926ED6A53769"/>
</configSections>

<ncache>
   <sessionLocation>
      <primaryCache id =  "London_Cache"  sid-prefix = "LDC"/>
      <secondaryCache id ="NewYork_Cache" sid-prefix = "NYC"/>
      <secondaryCache id ="Tokyo_Cache"   sid-prefix = "TKC"/>
   </sessionLocation >
</ncache>

Please note that the <ncache> section in each Azure region will be different: each region will have its own primary cache and will define all the other regions' caches as secondary caches.

All ASP.NET sessions originating in a Microsoft Azure region are initially stored in that region's primary cache. When a request from another Microsoft Azure region is redirected to the current region, the NCache for Azure multi-region SSP module detects, using the "sid-prefix" attached to the ASP.NET Session ID, that the session resides in one of the other Microsoft Azure regions. It then automatically contacts the corresponding secondary cache in the remote region and moves the session to the primary cache in the current region. All subsequent requests are then served from this new location.

Say, for example, you have defined London_Cache as your primary cache, while NewYork_Cache and Tokyo_Cache are defined as secondary caches for the London site. You also specify "LDC", "NYC", and "TKC" as the sid-prefix values attached to each session ID for London_Cache, NewYork_Cache, and Tokyo_Cache sessions respectively. Now, all ASP.NET sessions originating from the London region have "LDC" prefixed to their ASP.NET Session IDs and are stored in and served from London_Cache, the primary cache for the London region. If a request is redirected from another Microsoft Azure region, such as New York or Tokyo, to the London region, the ASP.NET session is immediately identified by its sid-prefix and transferred from NewYork_Cache or Tokyo_Cache to London_Cache. Once the session has moved to the London region, all subsequent requests are served locally from London_Cache.

Conclusion:

NCache for Azure multi-region ASP.NET Session State support allows you to have your ASP.NET applications deployed in two or more active Microsoft Azure regions and be able to redirect traffic between Microsoft Azure regions without impacting performance or causing any application downtime. You can seamlessly redirect requests between Microsoft Azure regions to handle traffic overflows and site maintenance.

Download NCache for Azure Trial | NCache for Azure Details


Distributed Cache as NHibernate Second Level Cache

NHibernate is a very popular object-relational mapping (ORM) solution for .NET applications because it simplifies your database programming. As a result, many applications using NHibernate are high traffic in nature and therefore face scalability bottlenecks in the database.

To tackle this, NHibernate provides a caching infrastructure so that applications can use an in-memory cache store instead, and the database is no longer exhausted by such high request loads. NHibernate caching includes a first level cache and a second level cache.

NHibernate First Level Cache and its Limitations

The NHibernate First Level (1st level) Cache is a basic standalone (in-proc) cache that is associated with the Session object and is limited to the current session only. The 1st level cache is used by default to reduce the number of SQL queries issued against the database per session. But this 1st level cache has a number of limitations:

  • Each process has its own 1st level cache that is not synchronized with other 1st level caches so data integrity cannot be maintained.
  • Cache size is limited to the process memory and cannot scale.
  • Worker process recycling causes cache to be flushed. Then, you have to reload it which reduces your application performance.

Solution: A Distributed Cache for NHibernate

The NHibernate Second Level Cache exists at the Session Factory level, which means multiple user sessions can access a shared cache. Additionally, the NHibernate 2nd level cache has a pluggable architecture, so you can plug a third-party distributed cache into it without any programming. NCache implements an NHibernate Second Level Cache provider, and you can use it as your distributed cache for NHibernate without any code changes.

The following example shows how to use NCache in your NHibernate application as its 2nd level cache:

<hibernate-configuration>
    <session-factory>
        <property name="cache.provider_class">Alachisoft.NCache.Integrations.NHibernate.Cache.NCacheProvider, Alachisoft.NCache.Integrations.NHibernate.Cache
        </property>
        <property name="proxyfactory.factory_class"> NHibernate.Bytecode.DefaultProxyFactoryFactory, NHibernate </property>
        <property name="cache.use_second_level_cache">true
        </property>
    </session-factory>
</hibernate-configuration>

<ncache>
    <cache-regions>
        <region name="AbsoluteExpirationRegion" cache-name="myCache" priority="Default" expiration-type="sliding" expiration-period="180" use-async="false" />
        <region name="default" cache-name="myCache" priority="default" expiration-type="none" expiration-period="0" />
    </cache-regions>
</ncache>

Here are some important benefits of NCache as NHibernate 2nd level cache:

  • NCache synchronizes across processes & servers: NCache is a shared cache across multiple processes and servers. This ensures that the cache is always consistent across servers and that all cache updates are synchronized correctly, so no data integrity issues arise.
  • Scalable cache size: NCache pools the memory of all cache servers together, so you can grow the cache size by simply adding more servers to the cache cluster. This means your cache can grow to hundreds of gigabytes and even terabytes.
  • Application process recycling does not affect the cache: Your application processes become stateless because all the data is kept in an out-of-process distributed cache, so application process recycling has no impact on the cache.
  • Linear scalability to handle larger transaction loads: You can handle larger transaction loads without worrying about your database becoming a bottleneck, because the distributed cache scales linearly and reduces your database traffic by as much as 90%.
  • Use a Client Cache to keep data in-proc: NCache also provides a client cache, a synchronized local cache that can run in-proc within your worker process. You can use it to gain even more performance in your NHibernate applications.

As you can see, a distributed cache like NCache enables you to use NHibernate and still run your application in a multi-server configuration. You can also scale your application and handle high transaction loads by using NCache as NHibernate Second Level Cache.

Download a fully working 60-day trial of NCache Enterprise and try it out for yourself.


How to Replicate a Distributed Cache across the WAN?

Web applications today are handling millions of users a day. Even with this high traffic, you have to maintain high performance for your users. To achieve this performance goal, many people are using in-memory distributed cache because it is faster than going to the database. A distributed cache also provides scalability by allowing you to add cache servers at runtime as you need to handle higher transaction load.

This is all good. But, you may need to deploy and run your web applications in multiple data centers. You may do this for a variety of reasons. One is to have a disaster recovery option. In this option, you have one active and one passive data center. If the active data center goes down, the passive data center quickly picks up the traffic without causing any interruptions for the users.

Another reason is to locate data centers closer to your customers especially if they are geographically dispersed. This greatly speeds up the application response time for your users. In this option, you may have two or more active data centers rather than an active-passive data center.

If you have multiple data centers either for handling region specific traffic or for disaster recovery purpose, then you need to replicate your cache across all data centers. This replication is across the WAN and would be very slow due to latency unless you do asynchronous replication.


NCache is an extremely fast in-memory distributed cache that also provides WAN replication for your multi-datacenter needs. With NCache, there is no performance degradation while replicating your cache across the WAN because NCache does it asynchronously.

NCache Bridge Topology maintains an in-memory mirrored queue which has all the cache updates. The Bridge can then apply the same updates to one or more target caches whether they’re active or passive. You can read more about Bridge Topology at WAN Replication Topologies (Bridge)

How to Handle WAN replication for Different Scenarios?

Active-Passive Data Centers

NCache Bridge Topology can be configured for active-passive replication. The active distributed cache submits cache changes to the Bridge so the Bridge can apply them to the passive data center's cache.

Active-Active Data Centers

In this scenario, the Bridge also has to handle conflict resolution, because both data centers are active and could be updating the same data. Conflict resolution is discussed later in this blog.

You can use any Bridge Topology according to your needs. For example, you may be storing ASP.NET Session State in your distributed cache and these sessions need to be replicated across the WAN to the other data center.

When the same data element is updated in multiple distributed caches and submitted to the Bridge for replication, a conflict occurs. To handle conflicts, NCache adds a timestamp to every operation before submitting it to the Bridge. This helps the Bridge determine which operation was performed last for the "last update wins" rule.

NCache also gives you the flexibility to implement your own custom conflict resolver and plug it into your distributed cache.

Here is an example of a conflict resolver:


public class BridgeConflictResolverImpl : IBridgeConflictResolver
{
   private bool _enableLogging;
   private TextWriter _textLogger;

   // to initialize any parameters if needed.
   public void Init(System.Collections.IDictionary parameters) { ... }

   // To resolve entry on the basis of old and new entries.
   public ConflictResolution Resolve(ProviderBridgeItem oldEntry,
                                     ProviderBridgeItem newEntry)
   {
      ConflictResolution conflictResolution =
                                        new ConflictResolution();
      switch (oldEntry.BridgeItemVersion)
      {
         case BridgeItemVersion.OLD:
              conflictResolution.ResolutionAction =
              ResolutionAction.ReplaceWithNewEntry;
              break;
         case BridgeItemVersion.LATEST:
              conflictResolution.ResolutionAction =
              ResolutionAction.KeepOldEntry;
              break;
         case BridgeItemVersion.SAME:
              if (oldEntry.OpCode == BridgeItemOpCodes.Remove)
              {
                    conflictResolution.ResolutionAction =
                    ResolutionAction.ReplaceWithNewEntry;
              }
              else
              {
                    conflictResolution.ResolutionAction =
                    ResolutionAction.KeepOldEntry;
              }
              break;
      }

      return conflictResolution;
   }

   // To dispose parameters if needed.
   public void Dispose() {...}
}

As you can see, the custom conflict resolver allows you to make content-driven decisions about which update should "win".

In summary, the Bridge Topology allows you to run your applications in multiple data centers without having to worry about your cache getting out of sync with the other location. Your application stays up even if one data center goes down, and you can re-route traffic to the other data center if one of them gets overwhelmed.

Download NCache Trial | NCache Details

Posted in Bridge Topology, Distributed Cache, Distributed Cache Replication | Tagged , , | Leave a comment

How to Store ASP.NET Session State in Multi-Site Deployments?

ASP.NET has become really popular for developing high traffic web applications. Many of these applications are deployed to multiple geographical locations. This is done either for disaster recovery purposes or for handling regional traffic by having the ASP.NET application closer to the end user. In case of disaster recovery, there is usually one active site and one passive site. The passive site becomes active as soon as the active site goes down for any reason. In other cases, two or more sites can all be active, each handling traffic closer to its region (e.g. New York, London, and Tokyo).

ASP.NET keeps user specific information in the ASP.NET Session State object on the web server. This ASP.NET Session State is created when the user first uses the ASP.NET application and stays active as long as the user is actively using the application. By default, after 20 minutes of inactivity by the user, ASP.NET expires this session. The ASP.NET Session State object is either stored in-memory on the web server (InProc), in-memory on a separate server (StateServer), in a SQL Server database, or in a third-party store by using the ASP.NET Session State Provider (SSP) architecture. The third-party option is usually an in-memory distributed cache.

An in-memory distributed cache like NCache is a great place to store ASP.NET Session State. The reasons are faster performance, greater scalability, and better reliability of ASP.NET Session State due to session replication. Below is an example of how you can specify a "custom" session storage option in the web.config file, which results in NCache being used as the ASP.NET Session State storage:
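(The snippet below is a minimal sketch: the cache name is illustrative, and the exact provider type and attributes should be verified against the NCache documentation for your version.)

<sessionState mode="Custom" customProvider="NCacheSessionProvider" timeout="20">
  <providers>
    <!-- "myDistributedCache" is an illustrative cache name; provider type as documented by NCache -->
    <add name="NCacheSessionProvider"
         type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
         cacheName="myDistributedCache"/>
  </providers>
</sessionState>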


Active-Passive Multi-Site Deployment

But, if your application is running in a multi-site deployment, then the question you have to address is what to do about ASP.NET Session State storage. If your ASP.NET application is deployed in an active-passive multi-site configuration, then all your ASP.NET Session State is being created and stored in the active site. And, the passive site does not have any session data. So, if the active site goes down and all the traffic is redirected to the passive site, then all users will suddenly lose their sessions and will have to start over.


You can avoid this by using NCache Bridge Topology. Once you create a Bridge between the active and the passive sites, the active site starts submitting all adds, updates, and removes of ASP.NET Session State objects to the Bridge. The Bridge asynchronously replicates them across the WAN to the passive site. The nice thing about the Bridge architecture is that it doesn’t block the active site operations at all so you don’t see any performance degradation because of it. The only issue you have to keep in mind is that since the Bridge replicates asynchronously, there may be some sessions in the “replication queue” that won’t make it to the passive site if the active site abruptly shuts down. But, this is usually a very small number. Read more about NCache Bridge Topology and all the situations where you can use it.

Active-Active Multi-Site Deployment

If your ASP.NET application is deployed to two or more active sites simultaneously, then I recommend that you not replicate ASP.NET Session State to all the sites, to save on bandwidth cost. Additionally, each of your active sites is most likely geographically separated, and it makes sense to keep each region's traffic entirely on its own site. However, you probably want the ability to route some of the traffic to other sites to handle overflow situations. Or, you may need to bring one of the sites down for maintenance without any interruptions for the users.

In this case, you can use the multi-site ASP.NET Session State storage feature in NCache that lets you handle these cases. It lets you specify in web.config to generate session-ids with a location prefix for the session's "primary" site. Then, even if a session request is routed to another site, that site knows where to find the session. In this approach, sessions do not move from their primary location even if the user request is routed to another site; instead, the other site accesses the session from its "primary" site. Each site specifies its own cache as "primary" and lists all the others as "secondary" sites. Below are the steps you take to achieve this goal:

  1. Add a machineKey entry in web.config so that ASP.NET session-ids are generated in the same manner on all servers and all sites. You can use the genmackeys utility available with the NCache installation to generate the keys.

  2. To enable location affinity of session-ids, add the <ncache> configuration section that lists this site's "primary" cache and the other sites' "secondary" caches, as shown in the sketch after this list.

  3. Specify the custom session-id manager using the sessionIDManagerType attribute of the sessionState element in web.config.
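Here is a rough sketch of how these pieces can look in web.config. The cache names, sid-prefix values, and the session-id manager type are illustrative placeholders based on the description above, so consult the NCache documentation for the exact schema of your version:

<!-- 1. Same machineKey on every web server in every site (generate with genmackeys). -->
<machineKey validationKey="A54D..." decryptionKey="3A5C..." validation="SHA1"/>

<!-- 2. Location affinity: this site's primary cache plus the other sites as secondaries.
     Element and attribute names are illustrative. -->
<ncache>
  <sessionLocation>
    <primaryCache id="LondonCache" sid-prefix="LDN"/>
    <secondaryCache id="NewYorkCache" sid-prefix="NYC"/>
    <secondaryCache id="TokyoCache" sid-prefix="TKY"/>
  </sessionLocation>
</ncache>

<!-- 3. Custom session-id manager so the sid-prefix is embedded in every session-id.
     The type name below is a placeholder for the manager class shipped with NCache. -->
<sessionState mode="Custom" customProvider="NCacheSessionProvider" timeout="20"
              sessionIDManagerType="Alachisoft.NCache.Web.SessionStateManagement.NCacheSessionIdManager,
                                    Alachisoft.NCache.SessionStateManagement">
  <!-- the providers registration is the same as in the earlier session storage example -->
</sessionState>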

Please note that in the above example, the <ncache> section in each site will be different, meaning each site will have its own “primary” and will consider all other sites as “secondary”. In the above example, you can see that ASP.NET is being asked to generate session-id in a specific format so the sid-prefix can be prepended with the session-id. Once this is done, it helps ASP.NET know where this ASP.NET Session State was actually created so the cache for that site is accessed. With this configuration, if you route any requests from one site to another for overflow, the other site fetches the ASP.NET Session State from its original “primary” site because this is part of the session-id as a location prefix. So, your WAN bandwidth consumption is minimized and you only pay for overflow traffic. The other situation is where you want to bring a site down. In this case, just redirect all of its traffic to other sites but don’t shut down the cache servers of this site; you can shut down the web servers. And, then wait for all existing ASP.NET Session State objects to expire after their users have stopped using the application. Once this is done, just shut down the cache servers as well. With this, your users will not feel any downtime. Take a look at how NCache helps you achieve this goal. Download a fully working 60-day trial of NCache.

Download NCache Trial | NCache Details

Posted in ASP.Net, Distributed Cache | Tagged , , | Leave a comment

How to Use Custom Dependency in Distributed Cache?

Today, web applications are increasingly using a distributed cache for boosting performance and scalability by caching frequently used data so as to reduce expensive database trips. A distributed cache spans and synchronizes across multiple cache servers to let you scale in a linear fashion. A good distributed cache usually has a Cache Dependency feature to let you expire cached items when something they depend on changes. A Cache Dependency can be key-based, file-based, or database-based. So, in essence, you can specify a cached item to be dependent on another item in the cache (key-based), a file in the file system (file-based), or a row or a dataset in a SQL Server database (database-based). And, when data in any of these sources changes, your cached item is automatically removed from the cache because the dependency has expired. This allows you to keep your cached data always fresh and correct.
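To make the three kinds concrete, here is a minimal sketch that uses the familiar ASP.NET Cache API from System.Web.Caching purely for illustration (NCache exposes equivalent, and richer, dependency types through its own API); the keys and file path are made up:

using System.Web;
using System.Web.Caching;

public static class DependencyKinds
{
    public static void CacheWithDependencies()
    {
        Cache cache = HttpRuntime.Cache;

        // An item that other items will depend on.
        cache.Insert("customer:1", "customer data");

        // Key-based: "order:1" is removed as soon as "customer:1" changes or is removed.
        cache.Insert("order:1", "order data",
            new CacheDependency(null, new[] { "customer:1" }));

        // File-based: "price-list" is removed when the underlying file is modified.
        cache.Insert("price-list", "price list data",
            new CacheDependency(@"C:\data\prices.xml"));

        // Database-based dependencies work the same way conceptually; in ASP.NET they are
        // expressed with SqlCacheDependency (which needs SQL notifications or polling
        // configured), and NCache ships its own SQL dependency support.
    }
}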

This is all well and good, but what if you want your cached items to be dependent on data in data sources other than the ones mentioned above? For example, you might have an RSS feed (Rich Site Summary) that notifies you of data changes, and you have your own program that reads this feed and wants to expire certain cached items based on whatever changes it sees. There are many other similar situations where the data source is "custom". So, to handle these situations, a good distributed cache should provide you the flexibility to implement your own Custom Cache Dependency for your cached items so they can be expired when data in your custom data source changes.


NCache, a powerful distributed cache for all kinds of .NET applications, provides such a Custom Cache Dependency feature. Let me demonstrate below how easily you can implement a custom dependency with NCache. Here are the steps you have to take:

  1. Add a using Alachisoft.NCache.Runtime.Dependencies; directive to your custom dependency implementation.
  2. NCache provides an extensible abstract class named ExtensibleDependency, which is the base class of all custom dependencies. You just have to inherit your custom dependency class from ExtensibleDependency and override its HasChanged property. When this property returns true, the dependent item is expired from the cache.

Here is a full example of a custom dependency implementation that triggers a dependency change whenever the available units of a specified product drop below 100.

using System;
using System.Data.OleDb;
using System.Globalization;
using Alachisoft.NCache.Runtime.Dependencies;

[Serializable]
public class CustomDependency : ExtensibleDependency
{
   private string _connString;
   private int _productID;

   public CustomDependency(int productID, string connStr)
   {
     _connString = connStr;
     _productID = productID;
   }

   public override bool Initialize() { return false; }

   // The dependency has "changed" once the available units drop below 100.
   internal bool DetermineExpiration()
   {
     return GetAvailableUnits(_productID) < 100;
   }

   internal int GetAvailableUnits(int productID)
   {
     int availableUnits = -1;
     OleDbConnection connection = new OleDbConnection(_connString);
     OleDbDataReader reader = null;
     try
     {
        connection.Open();
        OleDbCommand cmd = connection.CreateCommand();
        cmd.CommandText =
          String.Format(CultureInfo.InvariantCulture,
          "Select UnitsInStock From Products" +
          " where ProductID = {0}", productID);

        reader = cmd.ExecuteReader();
        if (reader.Read())
        {
          availableUnits =
            Convert.ToInt32(reader["UnitsInStock"].ToString());
        }
        return availableUnits;
     }
     catch (Exception)
     {
        return availableUnits;
     }
     finally
     {
        if (reader != null) reader.Close();
        connection.Close();
     }
   }

   // NCache polls this property; returning true expires the dependent item.
   public override bool HasChanged
   {
     get { return DetermineExpiration(); }
   }
}
  3. Once you have implemented your custom dependency and deployed it with the NCache service, all you need to do is register this dependency with the dependent cache items in your application wherever needed.
using Alachisoft.NCache.Web.Caching; //add namespace

//Add following code in your application
Cache _cache = NCache.InitializeCache("myCache");

string connString =
       "Provider=SQLOLEDB;Data Source=localhost;" +
       "User ID=sa;password=;Initial Catalog=Northwind";

CustomDependency hint = new CustomDependency(123, connString);

_cache.Add("Product:1001", "Value", new CacheDependency(hint),
       Cache.NoAbsoluteExpiration, new TimeSpan(0, 0, 10),
       Alachisoft.NCache.Runtime.CacheItemPriority.Default);

Now, when data in your custom data source changes, NCache automatically expires the dependent cached items from the cache. NCache is responsible for running your custom dependency code, so you don't need to worry about implementing your own separate program and hosting it in some reliable process. Try and explore it for your application-specific scenarios.

Download NCache Trial | NCache Details

Posted in Cache dependency, Custom Dependency, Distributed Cache, Distributed caching | Tagged , , , | 2 Comments

How to Configure .NET 4.0 Cache to use a Distributed Cache?

In-memory distributed cache has become really popular today for applications running in a multi-server environment because it helps improve application scalability and performance. Until .NET Framework 3.5, there was only the ASP.NET Cache object, available to web applications under the System.Web.Caching namespace. But in .NET Framework 4.0, .NET 4.0 Cache was added under the System.Runtime.Caching namespace for all types of .NET applications.

.NET 4.0 Cache has functionality similar to ASP.NET Cache. But, unlike ASP.NET Cache, it has an abstract class ObjectCache that can be implemented in a customized way as needed. So, in essence, .NET 4.0 Cache can be extended whereas ASP.NET Cache cannot. And, MemoryCache is the default in-memory cache implementation of .NET 4.0 Cache. Here is an example:

private static ObjectCache cache = MemoryCache.Default;
private CacheItemPolicy policy = null;
private CacheEntryRemovedCallback callback = null;

// Registering callbacks and policies…
callback = new CacheEntryRemovedCallback(this.MyCachedItemRemovedCallback);

policy = new CacheItemPolicy();
policy.Priority = (MyCacheItemPriority == MyCachePriority.Default) ?
                  CacheItemPriority.Default : CacheItemPriority.NotRemovable;
policy.RemovedCallback = callback;

HostFileChangeMonitor changeMonitor = new HostFileChangeMonitor(FilePath);
policy.ChangeMonitors.Add(changeMonitor);

// Add inside cache…
cache.Set(CacheKeyName, CacheItem, policy);

One limitation of .NET 4.0 Cache’s default implementation MemoryCache is that it is a stand-alone in-process cache. If your .NET application runs on a multi-server environment, then you cannot use this because you need a distributed cache that can synchronize the cache across multiple servers. But fortunately, .NET 4.0 Cache architecture allows us to plug in a third party distributed cache solution and extend it.


To address this need, Alachisoft has implemented an easy-to-use .NET 4.0 Cache Provider that can solve data synchronization, distribution and scalability issues, especially in the case of a web farm/garden. This provider basically integrates NCache with .NET 4.0 Cache. NCache is a very popular enterprise-level distributed cache for .NET. Through NCache's .NET 4.0 Cache Provider you can plug NCache into your application to achieve the benefits of a distributed cache. Let me show you how easily it can be done with NCache in a few steps.

  1. Create a clustered (distributed) cache through the GUI-based NCache Manager. I created a clustered cache named "MyClusterCache".
  2. Start the cache to make it ready to use.
  3. Add a reference to the Alachisoft.NCache.ObjectCacheProvider library in your application from "NCacheInstallDir/NCache/integration/DotNet4.0 Cache Provider".
  4. Include the following namespace in your project.
    using Alachisoft.NCache.ObjectCacheProvider;
    
  5. Initialize your CacheProvider (inherited from ObjectCache) and pass your cache name to the provider as shown below.
    ObjectCache _cache;
    string _cacheId = "MyClusterCache" ;
    _cache = new CacheProvider(_cacheId);
    
    
  6. Now you can perform all cache related operations on your cache using CacheProvider commands.

Here is a full example of .NET 4.0 Cache extended with NCache:


ObjectCache _cache;
string _cacheId = "MyClusterCache";

// Initialize with NCache's .NET 4.0 Cache Provider.
_cache = new CacheProvider(_cacheId);

// Registering callbacks and policies…
NCacheFileChangeMonitor changeMonitor = new NCacheFileChangeMonitor(fileNames);

CacheItemPolicy ciPolicy = new CacheItemPolicy();
ciPolicy.ChangeMonitors.Add(changeMonitor);
ciPolicy.RemovedCallback += new CacheEntryRemovedCallback(onCacheEntryRemoved);

// Add the dependent items in the cache.
_cache.AddItems(ciPolicy, 0, totalKeys);

NCache's implementation of .NET 4.0 Cache also includes custom implementations of ChangeMonitor, namely NCacheEntryChangeMonitor, NCacheFileChangeMonitor, NCacheSqlChangeMonitor and NCacheOracleChangeMonitor, for entry-, file-, SQL- and Oracle-based changes respectively. Through NCache's implementation of the .NET 4.0 Cache interface, you can now adopt .NET 4.0 Cache as your standard and at the same time benefit from an enterprise-level distributed cache for your .NET applications running in a multi-server environment.

Download NCache Trial | NCache Details

Posted in .NET 4.0 Cache, Distributed Cache, Distributed caching | Tagged , , | Leave a comment

How to Cache ASP.NET View State in a Distributed Cache?

ASP.NET today is widely used for high traffic web applications that need to handle millions of users and are deployed in load-balanced web farms. One important part of ASP.NET that many applications use is View State. ASP.NET View State is a very powerful mechanism that stores page, control, and custom values between multiple HTTP requests, across the client and the web server.

ASP.NET View State values are stored in a hidden field on the page and encoded as a Base64 string. An ASP.NET View State looks like this:

<input id="__VIEWSTATE" type="hidden"
name="__VIEWSTATE"
value="/wEPDwUJNzg0MDMxMDA1D2QWAmYPZBYCZg9kF..." />

Although very useful, ASP.NET View State frequently becomes quite large, especially in situations where your ASP.NET pages have grid controls and many other controls. All of this is added to your HTTP request and response, which really slows down your ASP.NET response time.

Another downside of heavy ASP.NET View State is the increased bandwidth usage, which increases your bandwidth cost considerably. For example, if for each HTTP request you end up appending 60-100 KB of additional ASP.NET View State data, just multiply it by the total number of transactions and you'll quickly see how much it costs you in bandwidth consumption: at 80 KB per request and one million requests a day, that is roughly 80 GB of extra transfer per day.

Finally, in some situations, there is a security risk with sending confidential data as part of ASP.NET View State. Encrypting it before sending is also costly.


To resolve all these problems you can cache ASP.NET View State on the web server and assign a GUID as its key in the cache. This GUID is then sent to the browser in a hidden field and it comes back along with the next HTTP request and is used to fetch the corresponding ASP.NET View State from the cache. This reduces your payload sent to the browser, which not only improves ASP.NET response time, but also reduces your bandwidth consumption cost dramatically.
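To show the underlying idea, here is a minimal sketch using the standard ASP.NET page persistence hooks; it uses the in-process HttpRuntime.Cache and an illustrative hidden-field name purely for demonstration, whereas the NCache View State module does all of this for you, against the distributed cache, through configuration alone, as the steps below show:

using System;
using System.IO;
using System.Web;
using System.Web.UI;

// Illustrative base page: View State stays server-side under a GUID key and
// only the small GUID travels to the browser in a hidden field.
public class CachedViewStatePage : Page
{
    private const string ViewStateKeyField = "__VIEWSTATE_KEY"; // illustrative field name

    protected override void SavePageStateToPersistenceMedium(object state)
    {
        string key = Guid.NewGuid().ToString("N");

        // Serialize the view state and keep it on the server side.
        // HttpRuntime.Cache stands in here for a distributed cache such as NCache.
        StringWriter writer = new StringWriter();
        new LosFormatter().Serialize(writer, state);
        HttpRuntime.Cache.Insert("vs:" + key, writer.ToString());

        // Only the GUID goes to the browser.
        ClientScript.RegisterHiddenField(ViewStateKeyField, key);
    }

    protected override object LoadPageStateFromPersistenceMedium()
    {
        string key = Request.Form[ViewStateKeyField];
        string serialized = (string)HttpRuntime.Cache.Get("vs:" + key);
        return serialized == null ? null : new LosFormatter().Deserialize(serialized);
    }
}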

If your ASP.NET application is running in a load balanced web farm, then you must use a distributed cache. A stand-alone cache like ASP.NET Cache won’t work. NCache is an enterprise level distributed cache that provides ASP.NET View State caching module. In order to use it, you don’t need to do any programming. Just modify your ASP.NET web.config for it.

Here are the steps to use NCache for caching ASP.NET View State:

  1. Create an app.browser file in your ASP.NET application under the App_Browsers directory. Plug in the page adapter in the app.browser file as follows:
    <browser refID="Default">
    <controlAdapters>
    <adapter controlType="System.Web.UI.Page"
     adapterType="Alachisoft.NCache.Adapters.PageAdapter"/>
    </controlAdapters></browser>
    
  2. Then add the following assembly reference in the compilation section of the web.config file:
    
    <compilation defaultLanguage="C#" debug="true">
    <assemblies>
    <add assembly="Alachisoft.NCache.Adapters,
    Version=1.0.0.0,Culture=neutral,
    PublicKeyToken=cff5926ed6a53769"/></assemblies>
    </compilation>
    
  3. Register your config section in the web.config file:
    <configSections>
    <sectionGroup name="ncContentOptimization">
    <section name="settings"
    type=
    "Alachisoft.NCache.ContentOptimization
    .Configurations.ContentSettings"
    allowLocation="true" allowDefinition="Everywhere"/>
    </sectionGroup>
    </configSections>
    
    
  4. Specify settings for your config section (registered above) in the web.config file:
    <ncContentOptimization>
    <settings enableMinification="true"
    enableViewstateCaching="true"
    groupedViewStateWithSessions="true"
    viewstateThreshold="0"
    enableTrace="true">
    <cacheSettings cacheName="mycache">
    <expiration type="Sliding" duration="300"/>
    </cacheSettings></settings>
    </ncContentOptimization>
    
  5. Finally, register the HTTP handler in the httpHandlers section of web.config as follows:
    <add verb="GET,HEAD" path="NCResource.axd"
     validate="false"
    type="Alachisoft.NCache.Adapters
    .ContentOptimization.ResourceHandler,
    Alachisoft.NCache.Adapters, Version=1.0.0.0,
    Culture=neutral,PublicKeyToken=cff5926ed6a53769"/>
    

After configuring NCache, the ASP.NET View State tags in your application look like this:

<input type="hidden"
name="__NCPVIEWSTATE"
id="__NCPVIEWSTATE"
value="vs:cf8c8d3927ad4c1a84da7f891bb89185" />
<input type="hidden"
name="__VIEWSTATE" id="__VIEWSTATE" value="" />

Notice that another hidden tag is added alongside the ASP.NET View State field. It contains the unique key assigned to your page's ASP.NET View State in the distributed cache. So whenever your application server needs the ASP.NET View State value, it can easily get it from the cache.

With this, you will see a remarkable performance boost in your ASP.NET response times and your bandwidth consumption cost is reduced significantly as well.

Please explore more about ASP.NET View State caching by trying NCache ASP.NET View State module yourself.

Download NCache Trial | NCache Details

Posted in ASP .NET performance, ASP.Net, ASP.NET Cache, Distributed Cache, Distributed caching | Tagged , , | 2 Comments

How to Configure Preloading in a Distributed Cache?

Today’s applications need to scale and handle extreme levels of transaction loads. But, databases are unable to scale and therefore become a bottleneck. To resolve this, many people are turning to in-memory distributed cache because it scales linearly and removes the database bottlenecks.

A typical distributed cache contains two types of data, transactional data and reference data. Transactional data changes very frequently and is therefore cached for a very short time. But, caching it still provides considerable boost to performance and scalability.

Reference data on the other hand does not change very frequently. It may be static data or dynamic data that changes perhaps every hour, day, etc. At times, this data can be huge (in gigabytes).

It would be really nice if this reference data could be preloaded into a distributed cache upon cache start-up, because then your applications would not need to load it at runtime. Loading reference data at runtime would slow down your application performance especially if it is a lot of data.


How Should Reference Data be Preloaded into a Distributed Cache?

One approach is to design your application in such a way that during application startup, it fetches all the required reference data from the database and puts it in the distributed cache.

However, this approach raises some other issues. First, it slows down your application startup because your application is now involved in preloading the cache. Second, if you have multiple applications sharing a common distributed cache, then you either have code duplication in each application or all your applications depend on one application preloading the distributed cache. Finally, embedding cache preloading code inside your application corrupts your application design because you're adding code that does not belong in your application. Of course, none of these situations is very desirable.

What if we give this preloading responsibility to the distributed cache itself? In this case, preloading could be part of the cache startup process and therefore does not involve your application at all. You can configure the cache to preload all the required data upon startup so it is available for all the applications to use from the beginning. This simplifies your application because it no longer has to worry about preloading logic.

NCache provides a very powerful and flexible cache preloading capability. You can develop a cache loader and register it with NCache and then NCache calls this custom code developed by you upon cache startup. Let me demonstrate how to do this below:

  • Implement a simple interface named ICacheLoader. It is called to assist the cache in answering the question “How and which data to load?”
class CacheLoader : ICacheLoader
{
   public void Init(System.Collections.IDictionary parameters)
   {
     // Initialize the data source connection, read the passed parameters, etc.
   }

   public bool LoadNext(
     ref System.Collections.Specialized.OrderedDictionary data,
     ref object index)
   {
     // Fill the ref objects with the data that should be loaded into the cache.
     // The return value tells NCache whether there is more data to load
     // (this empty skeleton returns false, meaning loading is complete).
     return false;
   }

   public void Dispose()
   {
     // Dispose connections and other resources.
   }
}

  • The next step is to configure the startup loader implemented above with the cache. You can do this by using the NCache Manager that comes with NCache, or simply by adding the following configuration to the cache config:
<cache-loader retries="3" retry-interval="0" enable-loader="True">
   <provider
    assembly-name="TestCacheLoader, Version=1.0.0.0, Culture=neutral,
    PublicKeyToken=null"
    class-name="TestCacheLoader.CacheLoader"
    full-name="TestCacheLoader.dll" />
   <!-- parameters that will be passed to the Init method of the loader -->
   <parameters name="connectionString"
    value="Data Source=SQLEXPRESS;Initial Catalog=testdb;Integrated Security=True;Pooling=False"/>
</cache-loader>

Any exception that occurs during startup loader processing is logged without causing any problem for your application. Simple and effective!

As you can see, NCache provides you a powerful mechanism to preload your distributed cache and keep the performance of your applications always high.

Download NCache Trial | NCache Details

Posted in Distributed Cache | Tagged , , | Leave a comment

How to use a Distributed Cache for ASP.NET Output Cache?

ASP.NET Output Cache is a mechanism provided by Microsoft that allows you to keep an in-memory copy of the rendered content of an ASP.NET page. Due to this, ASP.NET is able to serve subsequent user requests for this page from the in-memory cached copy instead of re-executing the page, which can be quite expensive because it usually also involves database calls.

So, ASP.NET Output Cache not only improves your application performance, but also reduces a lot of expensive database trips. This improves your ASP.NET application scalability because otherwise the database would become a scalability bottleneck if all those ASP.NET pages were executed again and again.

But, the problem with the ASP.NET Output Cache is that it resides in your ASP.NET worker process address space. And, the worker process resets or recycles quite frequently. When that happens, all of the ASP.NET Output Cache is lost. Secondly, in the case of a web garden (meaning multiple worker processes), the same page output is cached multiple times, once in each worker process, and this consumes a lot of extra memory. Also, did you know that one of the most common support calls for ASP.NET is out-of-memory errors caused by ASP.NET Output Cache?


To overcome these limitations of ASP.NET Output Cache and to take advantage of its benefits, try using a distributed cache for storing all of ASP.NET Output Cache content. In this regard, NCache has implemented an ASP.NET Output Cache provider to enable the caching of ASP.NET rendered output in out-of-process (out-proc) cache instead of worker process address space. This way, the output of your rendered ASP.NET page is available to all other web servers in the web farm without even rendering the same ASP.NET page locally in each worker process.

By using NCache as the ASP.NET Output Cache provider, you can not only cache more data in the out-proc cache, but also dramatically reduce the load on your database. This is because each rendered ASP.NET page output is accessible to all web servers in the web farm without executing the page rendering process, and its expensive database trips, in each worker process.

Further, NCache as the ASP.NET Output Cache provider gives you the flexibility to cache the output of certain parts of your ASP.NET page instead of the complete page. This approach is very helpful in scenarios where you want certain parts of your ASP.NET page to be rendered each time. In addition, NCache also provides high availability: even if your worker process resets or recycles, your data is not lost because it is not part of your worker process address space and resides on separate caching servers.

Steps to Configure NCache Output Caching Provider

1. Register NCache as ASP.NET Output Cache Provider: Modify your ASP.NET application's web.config to register the NCache output caching provider as follows:


<caching>
  <outputCache defaultProvider="NOutputCacheProvider">
    <providers>
      <add name="NOutputCacheProvider"
           type="NCOutputCache.NOutputCacheProvider"
           exceptionsEnabled="true" enableLogs="false"
           cacheName="mypartitionofReplicaCache" />
    </providers>
  </outputCache>
</caching>

<compilation debug="true" targetFramework="4.0">
   <assemblies>
      <add assembly="Alachisoft.NCache.OutputCache,
           Version=4.1.0.0, Culture=neutral"/>
   </assemblies>
</compilation>

2. Add ASP.NET Output Cache directive: Add the following OutputCache directive to those pages whose output you want to cache.


<%@ OutputCache VaryByParam="ID" Duration="300" %>

By the way, ASP.NET versions earlier than 4.0 do not support custom ASP.NET Output Cache providers. Therefore, to support earlier versions of ASP.NET, NCache has also implemented another version of its ASP.NET Output Cache provider using an HttpModule. This HttpModule-based ASP.NET Output Cache provider by NCache enables you to use a distributed cache to store rendered ASP.NET page output, even if your application is using an ASP.NET version earlier than 4.0.

In summary, by using NCache output caching provider you can easily boost your ASP.NET application response time and can reduce database load.

Download NCache Trial | NCache Details

Posted in ASP.Net, ASP.NET Output Cache, Distributed Cache | Tagged , , | Leave a comment

How to Share Different Object Versions using Distributed Cache?

It is a fact of life that organizations have different applications using different versions of the same classes because they were developed at different points in time and are unable to keep up. It is also a fact of life that these applications often need to share data with each other.

You can do this through the database.  However, that is slow and not scalable. What you really want is to share objects through a distributed cache that is fast and scalable. But, object sharing at runtime immediately raises the issue of version compatibility.

One way to do this is through XML, in which you can transform one version of an object into another. But it is extremely slow. Another way is to implement your own custom transformation that takes one object version and transforms it into another. But then you have to maintain this, which is a lot of effort on your part.

Wouldn’t it be nice if the distributed cache somehow took care of version compatibility for you? Well, NCache does exactly that. NCache provides you a binary-level object transformation between different versions. You can map different versions through an XML configuration file and NCache understands how to transform from one version to another.

Additionally, since NCache stores all these different versions in a binary format (rather than XML), the data size stays very compact and small and therefore fast. In a high traffic environment, object size adds up to be a lot of extra bandwidth consumption, which has its own cost associated with it.

Here is an example of NCache config.ncconf with class version mapping:

<cache-config name="myInteropCache" inproc="False" config-id="0"
    last-modified="" type="clustered-cache" auto-start="False">
...
    <data-sharing>
      <type id="1001" handle="Employee" portable="True">
        <attribute-list>
          <attribute name="_Name" type="System.String" order="1"/>
          <attribute name="_Age" type="System.Int32" order="2"/>
          <attribute name="_Address" type="System.String" order="3"/>
          <attribute name="_ID" type="System.String" order="4"/>
          <attribute name="_PostalAddress" type="System.String" order="5"/>
        </attribute-list>
        <class name="DataModel.Employee:1.0.0.0" handle-id="1"
               assembly="DataModel, Version=1.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="_Name" type="System.String" order="1"/>
          <attribute name="_Age" type="System.Int32" order="2"/>
          <attribute name="_Address" type="System.String" order="3"/>
        </class>
        <class name="DataModel.Employee:2.0.0.0" handle-id="2"
               assembly="DataModel, Version=2.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="_ID" type="System.String" order="4"/>
          <attribute name="_Name" type="System.String" order="1"/>
          <attribute name="_Age" type="System.Int32" order="2"/>
          <attribute name="_PostalAddress" type="System.String" order="3"/>
        </class>
      </type>
    </data-sharing>
...
  </cache-config>

How does NCache do it?

In the config.ncconf file that you see above, you'll notice that the Employee class has a set of attributes defined first. These are version independent attributes and appear in all versions. This is actually a superset of all attributes that appear in different versions. Below that, you specify version-specific attributes and map them to version-independent attributes above.

Let’s say that you saved Employee version 1.0.0.0, which had a subset of the Employee version 2.0.0.0. Now, when another application tries to fetch the same Employee, but it wants to see it as version 2.0.0.0, NCache knows which version 2.0.0.0 attributes to fill with data and which ones to leave blank.

Secondly, in the above sample config you will notice that Employee version 2.0.0.0 does not have the _Address field even though version 1.0.0.0 has it. So, in this case, when NCache reads an Employee 1.0.0.0 stored in the cache and transforms it into Employee version 2.0.0.0, it knows not to copy the _Address field because it is not there in the newer version.
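At the API level, none of this requires any special handling in your code; both applications just use normal cache calls, and NCache applies the mapping when the object is read. Here is a minimal sketch, assuming the Employee type and cache name from the mapping above (in reality the two methods would live in two different applications compiled against different DataModel versions):

using Alachisoft.NCache.Web.Caching;

public class VersionSharingExample
{
    // Application A, built against DataModel 1.0.0.0 (_Name, _Age, _Address)
    public static void StoreV1(Employee employee)
    {
        Cache cache = NCache.InitializeCache("myInteropCache");
        cache.Add("Employee:1", employee);
    }

    // Application B, built against DataModel 2.0.0.0 (adds _ID and _PostalAddress)
    public static Employee FetchAsV2()
    {
        Cache cache = NCache.InitializeCache("myInteropCache");
        // NCache maps the stored 1.0.0.0 object onto the 2.0.0.0 shape using the
        // config.ncconf mapping above; attributes that exist only in 2.0.0.0
        // (such as _ID and _PostalAddress) simply come back unset.
        return (Employee)cache.Get("Employee:1");
    }
}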

There are many other scenarios that NCache handles seamlessly for you. Please read the online product documentation for more detail on this.

Finally, the best part in all of this is that you don’t have to write any serialization code or make any code changes to your application in order to use this NCache feature. NCache has implemented a runtime code generation mechanism, which generates in-memory serialization and deserialization code of your classes at runtime, and uses the compiled form which is very fast.

In summary, using NCache you can now share different object versions between your applications without even modifying your application code.

Download NCache Trial | NCache Details

Posted in Binary Serialization, Data Sharing with Distributed Cache, Distributed Cache, Object Versions Sharing, Serialization | Tagged , , , | Leave a comment

Query a Distributed Cache with SQL-Like Aggregate Functions

Today, distributed cache is widely used to achieve scalability and performance in high traffic applications. A distributed cache offloads your database servers by serving cached data from an in-memory store. In addition, a few distributed caches also provide a SQL-like query capability, with which you can query your distributed cache the way you query your database, e.g. "SELECT employee WHERE employee.city = 'New York'".

First of all, most distributed caches don't even provide SQL-like querying capabilities, and even the few that do have very limited support for it. They only provide searching of the distributed cache based on simple criteria. However, there are several scenarios where you have to compute a result based on aggregate functions, e.g. "SELECT COUNT(employee) WHERE salary > 1000" or "SELECT SUM(salary) WHERE employee.city = 'New York'". In such caches, to achieve this you have to first query the distributed cache and then calculate the aggregate function on the fetched cache data.
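As a rough sketch of what that client-side approach looks like (the query, the Product type, and the cache name are illustrative, and the API calls mirror the NCache sample further below):

public decimal TotalPriceWithoutAggregates()
{
    // Without aggregate queries: fetch every matching entry to the client,
    // then compute the aggregate locally.
    Cache _cache = NCache.InitializeCache("myPartitionReplicaCache");

    string query = "SELECT Business.Product WHERE this.Price > ?";
    Hashtable values = new Hashtable();
    values.Add("Price", 0);

    // Pulls the full result set (possibly MBs or GBs) across the network.
    IDictionary results = _cache.SearchEntries(query, values);

    decimal totalPrice = 0;
    foreach (DictionaryEntry entry in results)
    {
        totalPrice += ((Product)entry.Value).Price;   // aggregation done on the client
    }
    return totalPrice;
}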

This approach has two major drawbacks. The first is that you have to execute the query on the distributed cache, which involves fetching all the matching data from the distributed cache to the cache client. This data may vary from MBs to GBs, and the operation becomes even more expensive when you are also paying for the consumed network bandwidth. Moreover, you mostly don't need this data once you are done with the aggregate function calculations.

The second drawback is that it involves custom programming for the aggregate function calculation. This adds extra man-hours, and still most of the complex scenarios cannot be covered. It would be much nicer if you could continue to develop your application for the purpose it is being built for and not worry about designing and implementing these extra features yourself.

These are the reasons why NCache provides you the flexibility to query the distributed cache using aggregate functions like COUNT, SUM, MIN, MAX and AVG as part of its Object Query Language (OQL). Using NCache OQL aggregate functions, you can easily perform the required aggregate calculations inside the distributed cache domain. This approach not only avoids the network traffic, but also gives you much better performance in terms of aggregate function calculation, because all the selections and calculations are done inside the distributed cache domain and no bulky network trips are involved.

Here is the code sample to search NCache using OQL aggregate queries:

public void Main(string[] args)
{
    ...
    Cache _cache = NCache.InitializeCache("myPartitionReplicaCache");

    string query = "SELECT COUNT(Business.Product) WHERE " +
                   "this.ProductID > ? AND this.Price < ?";

    Hashtable values = new Hashtable();
    values.Add("ProductID", 100);
    values.Add("Price", 50);

    // Execute the aggregate query inside the cache
    IDictionary searchResults = _cache.SearchEntries(query, values);
    ...
}

To further reduce query execution time, NCache runs the SQL-like query in parallel by distributing it to all the cache servers, much like the map-reduce mechanism. In addition, you can use NCache OQL aggregate queries in both .NET and Java applications.

In summary, NCache provides you not only scalability and performance, but also the flexibility of searching the distributed cache using SQL-like aggregate functions.

Download NCache Trial | NCache Details

Posted in Aggregate Functions, Distributed Cache, Object Query | Tagged , , , | Leave a comment

Class Versioning in Runtime Data Sharing with Distributed Cache

Today many organizations use .NET and Java technologies to develop different high traffic applications. At the same time, these applications not only need to share data with each other, but also want to support runtime sharing of different versions of the same class for backward compatibility and cost reduction.

The most common way to support runtime sharing of different class versions between .NET and Java applications is through XML serialization. But, as you know, XML serialization is an extremely slow and resource-hungry process. It involves XML validation, parsing, and transformations, which really hamper your application performance and use extra resources in terms of memory and CPU.

The other approach widely used to support sharing of different class versions between .NET and Java is through the database. However, the problem with this approach is that it's slow and doesn't scale very well with a growing transactional load. Your database quickly becomes a scalability bottleneck, because you can linearly scale your application tier by adding more application servers, but you cannot do the same at the database tier.


This is where a distributed cache like NCache comes in really handy. NCache provides you a binary-level object transformation between different versions not only of the same technology but also between .NET and Java. You can map different versions through an XML configuration file, and NCache understands how to transform from one version to another.

NCache's class version sharing framework implements a custom interoperable binary serialization protocol that, based on the specified mapping, generates the byte stream in such a format that any new or old version of the same class can easily deserialize it, regardless of its development language, which can be .NET or Java.

Here is an example of NCache config.ncconf with class version mapping:

<cache-config name="InteropCache" inproc="False" config-id="0" last-modified="" type="local-cache" auto-start="False">
 ...
    <data-sharing>
      <type id="1001" handle="Employee" portable="True">
        <attribute-list>
          <attribute name="Name" type="Java.lang.String" order="1"/>
          <attribute name="SSN" type="Java.lang.String" order="2"/>
          <attribute name="Age" type="int" order="3"/>
          <attribute name="Address" type="Java.lang.String" order="4"/>
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
          <attribute name="Address" type="System.String" order="7"/>
        </attribute-list>
        <class name="com.samples.objectmodel.v1.Employee:1.0" handle-id="1"
   assembly="com.jar" type="Java">
          <attribute name="Name" type="Java.lang.String" order="5"/>
          <attribute name="SSN" type="Java.lang.String" order="2"/>
        </class>
        <class name="com.samples.objectmodel.v2.Employee:2.0" handle-id="2"
   assembly="com.jar" type="Java">
          <attribute name="Name" type="Java.lang.String" order="5"/>
          <attribute name="Age" type="int" order="6"/>
          <attribute name="Address" type="Java.lang.String" order="7"/>
        </class>
        <class name="Samples.ObjectModel.v2.Employee:2.0.0.0" handle-id="3"
   assembly="ObjectModelv2, Version=2.0.0.0, Culture=neutral,
   PublicKeyToken=null" type="net">
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
          <attribute name="Address" type="System.String" order="7"/>
        </class>
        <class name="Samples.ObjectModel.v1.Employee:1.0.0.0" handle-id="4"
               assembly="ObjectModelv1, Version=1.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
        </class>
      </type>
    </data-sharing>
    ...
  </cache-config>

How does NCache do Class Versioning in Runtime Data Sharing?

In the config.ncconf file that you see above, you'll notice that the Employee class has a set of attributes defined first. These are version independent attributes and appear in all versions of the .NET and Java classes. This is actually a superset of all attributes that appear in different versions. Below that, you specify version-specific attributes and map them to the version-independent attributes above.

Now, let's say that you saved a .NET Employee version 1.0.0.0. When another application fetches the same Employee but wants to see it as Java version 1.0 or 2.0, NCache knows which attributes of the requested version to fill with data from the stored .NET 1.0.0.0 object and which ones to leave blank, and vice versa.

There are many other scenarios that NCache handles seamlessly for you. Please read the online product documentation for how NCache runtime data sharing works.

Finally, the best part is that you don’t have to write any serialization and deserialization code or make any code changes to your application in order to use this NCache feature. NCache has implemented a runtime code generation mechanism, which generates the in-memory serialization and deserialization code of your interoperable classes at runtime, and uses the compiled form so it is super-fast.

In summary, using NCache you can now share different class versions between your .NET and Java applications without even modifying your application code.

Download NCache Trial | NCache Details

Posted in Data Sharing with Distributed Cache, Distributed Cache | Tagged , , | Leave a comment