How to Use Locking in a Distributed Cache for Data Consistency?

Businesses today are developing high-traffic ASP.NET web applications that serve tens of thousands of concurrent users. To handle this kind of load, multiple application servers are deployed in a load-balanced environment. In such a highly concurrent environment, multiple users often try to access and modify the same data, triggering a race condition: two or more users access and change the same shared data at the same time but end up doing so in the wrong order. This creates a high risk of losing data integrity and consistency. This is where a distributed lock mechanism comes in very handy for achieving data consistency.

Distributed Locking for Data Consistency

NCache provides you with a mechanism of distributed locking in .NET that allows you to lock selective cache items during such concurrent updates. This helps ensure that correct update order is always maintained. NCache is a distributed cache for .NET that helps your applications handle extreme transaction loads without your database becoming a bottleneck.

But before going into the details of distributed locking, you first need to know that all operations within NCache are themselves thread-safe. NCache operations also avoid race conditions[1] when updating multiple copies of the same data within the cache cluster. Multiple copies of the data occur due to data replication and NCache ensures that all copies are updated in the same correct order, thereby avoiding any race conditions.

Now that we have that part cleared up, consider the following code to understand how, without a distributed locking service, data integrity could be violated:

// User 1: Fetch BankAccount object from NCache
BankAccount account = cache.Get("Key") as BankAccount; // balance = 30,000
Money withdrawAmount = 15000;
if (account != null && account.IsActive)
{
    // Withdraw money and reduce the balance
    account.Balance -= withdrawAmount;
    // Update cache with new balance = 15,000
    cache.Insert("Key", account);
}

// User 2 (running concurrently): Fetch BankAccount object from NCache
BankAccount account2 = cache.Get("Key") as BankAccount; // balance = 30,000
Money depositAmount = 5000;
if (account2 != null && account2.IsActive)
{
    // Deposit money and increment the balance
    account2.Balance += depositAmount;
    // Update cache with new balance = 35,000
    cache.Insert("Key", account2);
}

In this example, consider the following scenario:

  • Two users simultaneously access the same Bank Account with balance = 30,000
  • One user withdraws 15,000 whereas the other user deposits 5,000.
  • If done correctly, the end balance should be 20,000.
  • If a race condition occurs and is not handled, the balance would be either 15,000 or 35,000, as shown above. Here is how this race condition occurs:
    • Time t1: User 1 fetches Bank Account with balance = 30,000
    • Time t2: User 2 fetches Bank Account with balance = 30,000
    • Time t3: User 1 withdraws 15,000 and updates Bank Account balance = 15,000
    • Time t4: User 2 deposits 5,000 and updates Bank Account balance = 35,000
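The lost update above can be reproduced without any cache at all. The sketch below is a hypothetical simulation (not NCache code) that replays the four steps t1 through t4 against a plain dictionary standing in for the shared store:

```csharp
using System;
using System.Collections.Generic;

class RaceConditionDemo
{
    static void Main()
    {
        // A plain dictionary stands in for the shared cache
        var store = new Dictionary<string, decimal> { ["Key"] = 30000m };

        // t1: User 1 fetches Bank Account with balance = 30,000
        decimal user1Balance = store["Key"];
        // t2: User 2 fetches Bank Account with balance = 30,000
        decimal user2Balance = store["Key"];

        // t3: User 1 withdraws 15,000 and writes back balance = 15,000
        store["Key"] = user1Balance - 15000m;
        // t4: User 2 deposits 5,000 and writes back balance = 35,000,
        //     silently overwriting User 1's withdrawal
        store["Key"] = user2Balance + 5000m;

        Console.WriteLine(store["Key"]); // 35000, not the correct 20000
    }
}
```

Because both users read the same starting balance, whichever write lands last wins and the other update is simply lost.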

In either case, this code block would be disastrous for the bank. To maintain data consistency in such cases, NCache acts as a distributed lock manager[2] and provides you with two types of locking:

1. Optimistic Locking (Item Versions)

In optimistic locking, NCache uses cache item versioning. On the server side, every cached object has a version number associated with it, which is incremented on every update to the item. When you fetch a CacheItem object from NCache, it comes with a version number. When you try to update this item in the cache, NCache checks whether your version is the latest. If it is not, NCache rejects your update. This way, only one user's update succeeds and the others fail. Take a look at the following code, which implements the scenario presented above:

// CacheItem encapsulates the value and its metadata, including the item version
CacheItem cacheItem = cache.GetCacheItem("Key");
BankAccount account = cacheItem.Value as BankAccount;

if (account != null && account.IsActive)
{
    // Withdraw money (or deposit: account.Balance += depositAmount;)
    account.Balance -= withdrawAmount;
    try
    {
        // Update the balance w.r.t. the item version held by the application.
        // cacheItem.Version is the item version your application holds.
        // OperationFailedException is thrown on a version mismatch with the NCache server.
        cache.Insert("Key", account, cacheItem.Version);
    }
    catch (OperationFailedException operationExcep)
    {
        // The item has been updated by another application.
        // Re-fetch the latest version and retry.
    }
}
In the above example, if your CacheItem version is the latest, NCache performs the operation successfully. If not, an OperationFailedException is thrown with a detailed message. In this case, you should re-fetch the latest version and redo your withdrawal or deposit operation.
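The re-fetch-and-retry pattern can be sketched as follows. This is a sketch built only from the APIs shown above; the retry limit is illustrative:

```csharp
int retries = 3;
while (retries-- > 0)
{
    // Re-fetch the item and its current version on every attempt
    CacheItem cacheItem = cache.GetCacheItem("Key");
    BankAccount account = cacheItem.Value as BankAccount;
    if (account == null || !account.IsActive)
        break;

    account.Balance -= withdrawAmount;
    try
    {
        // Succeeds only if cacheItem.Version is still the latest
        cache.Insert("Key", account, cacheItem.Version);
        break; // Update applied; stop retrying
    }
    catch (OperationFailedException)
    {
        // Version mismatch: another client updated the item first.
        // Loop around to re-fetch the latest version and retry.
    }
}
```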

With optimistic locking, NCache ensures that every write to the distributed cache is consistent with the version each application holds.

2. Pessimistic Locking (Exclusive Locking)

The other way to ensure data consistency is to acquire an exclusive lock on the cached data. This mechanism is called Pessimistic locking. It is essentially a writer-lock that blocks all other users from reading or writing the locked item.

To clarify further, take a look at the following code:

// LockHandle is used to lock and unlock cache items in NCache
LockHandle lockHandle = new LockHandle();

// Specify a time span of 10 sec for which the item remains locked;
// NCache will auto-release the lock after 10 seconds.
TimeSpan lockSpan = new TimeSpan(0, 0, 10);
try
{
    // If the item fetch is successful, lockHandle will be populated
    // and is later used to unlock the cache item.
    // acquireLock should be true if you want to acquire the lock.
    // If the item does not exist, account will be null.
    BankAccount account = cache.Get("Key", lockSpan,
        ref lockHandle, acquireLock) as BankAccount;

    // Lock acquired; otherwise a LockingException is thrown
    if (account != null && account.IsActive)
    {
        // Withdraw money (or deposit: account.Balance += depositAmount;)
        account.Balance -= withdrawAmount;

        // Insert the data in the cache and release the lock simultaneously.
        // The LockHandle initially used to lock the item must be provided.
        // releaseLock should be true to release the lock, otherwise false.
        cache.Insert("Key", account, lockHandle, releaseLock);
    }
    else
    {
        // The item either does not exist or could not be cast.
        // Explicitly release the lock in case of errors.
        cache.Unlock("Key", lockHandle);
    }
}
catch (LockingException lockException)
{
    // The lock could not be acquired; wait and try again
}

Here, we first try to obtain an exclusive lock on the cache item. If successful, we get the object along with the lock handle. If another application had already acquired the lock, a LockingException is thrown. In that case, you must retry fetching the item after a small delay.

Upon successfully acquiring the lock while fetching the item, the application can now safely perform operations knowing that no other application can fetch or update this item as long as you have this lock. To finally update the data and release the lock, we will call the insert API with the same lock handle. Doing so, it will insert the data in the cache and release the lock, all in one call. After releasing the lock, the cached data will be available for all other applications.
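Putting these pieces together, a retry-with-delay loop around the lock acquisition might look like the sketch below. It uses only the APIs shown above; the retry count and delay are illustrative:

```csharp
LockHandle lockHandle = new LockHandle();
TimeSpan lockSpan = new TimeSpan(0, 0, 10);
int attempts = 5;
while (attempts-- > 0)
{
    try
    {
        // Try to fetch the item and acquire an exclusive lock on it
        BankAccount account = cache.Get("Key", lockSpan,
            ref lockHandle, true) as BankAccount;
        if (account != null && account.IsActive)
        {
            account.Balance -= withdrawAmount;
            // Update the item and release the lock in one call
            cache.Insert("Key", account, lockHandle, true);
        }
        else
        {
            // Nothing to update; release the lock explicitly
            cache.Unlock("Key", lockHandle);
        }
        break; // Done either way
    }
    catch (LockingException)
    {
        // The item is locked by another client; wait briefly and retry
        Thread.Sleep(100);
    }
}
```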

Just remember that you should acquire all locks with a timeout. By default, if no timeout is specified, NCache locks the item for an indefinite amount of time, and if the application crashes without releasing the lock, the item remains locked forever. As a workaround you could forcefully release the lock, but this practice is ill-advised.

Failover Support in Distributed Locking

Since NCache is an in-memory distributed cache, it also provides complete failover support so that there is no data loss: in case of a server failure, your client applications keep working seamlessly. In the same fashion, your distributed locks are replicated and maintained by the replica nodes. If a node fails while one of your applications holds a lock, the lock is automatically propagated to a new node along with its properties, e.g., lock expiration.


So which locking mechanism is best for you, optimistic or pessimistic? It depends on your use case and what you want to achieve. Optimistic locking provides a performance benefit over pessimistic locking, especially when your applications are read-intensive, whereas pessimistic locking is safer from a data-consistency perspective. Choose your locking mechanism carefully. For more details, head over to the website; in case of any questions, visit the support page or post a question either on StackOverflow or on the Alachisoft forum.



Posted in Distributed Cache

How is a .NET Distributed Cache Superior to a Key-Value Store?

ASP.NET web applications, .NET web services applications, and other .NET server applications need to handle extreme transaction loads without slowing down. And, although their application tier scales linearly, the data storage and database tier does not scale and therefore becomes a bottleneck. As a result, the entire application cannot scale.

Originally, simple in-memory distributed key-value stores like Memcached and later Redis were introduced on Unix/Linux platforms to help resolve this scalability problem. They quickly became quite popular mainly because they provided linear scalability just like the application tiers and removed the database bottleneck.

Limitations in Key Value Stores

But, despite their popularity, these solutions were very simple and basic in nature and didn't really solve many of the problems facing real-life applications. Some of the areas where these solutions were weak included:

  • Lack of high availability
  • Lack of intelligent ways of keeping the cache fresh
  • Lack of SQL searching
  • Lack of server-side caching code (e.g. Read-through)

For example, high availability was so poor in Memcached that third parties started developing high availability add-ons for it. But the underlying architecture was simply not designed for high availability, so these solutions remained quite limited in nature.

Redis had the same high availability issues but later re-architected the product to incorporate some high availability features like data replication and some failover support.

But, the other areas are still big holes in all key-value store products like Memcached and Redis. This is where distributed cache solutions came to the rescue.

.NET Distributed Cache as 2nd Generation Key Value Store

A .NET distributed cache like NCache, on the other hand, was designed from day one to address all the above-mentioned limitations and more. So, in essence, NCache is a second-generation successor to the original key-value stores like Memcached and Redis. NCache is a popular, 10-year-old distributed cache for .NET.

Dynamic Cache Cluster & Data Replication

A distributed cache like NCache has a self-healing dynamic cache cluster that pools all the CPU and memory resources from all cache servers in the cluster. At the same time, NCache provides a variety of caching topologies with different data distribution and replication strategies. This allows NCache to linearly scale without compromising on high availability. And, even if a cache server goes down, no data loss occurs, the cache cluster continues running, and all the applications using the cache also continue without any interruptions.

Keeping Cache Fresh

Another area where a distributed cache like NCache shines is keeping the data fresh and consistent with the database. NCache does this through a variety of features, including expirations, event-driven SqlDependency, polling-based DbDependency, and support for CLR stored procedures in relational databases. Expirations work just like in key-value stores, but SqlDependency and DbDependency allow NCache to synchronize the cache with any database changes to the related data. And CLR stored procedures allow you to update the cache directly from your SQL Server database when the corresponding data changes.

All of this means that even if a third party application changes data in the database, NCache immediately updates itself accordingly. And, the benefit of all this is that you can now cache almost all your application data instead of only caching read-only data. And this provides huge and real performance and scalability gains.

Searching Cache with SQL

So, when you’re able to cache almost all your data due to “keep cache fresh” features, you run into the issues of not being able to find data easily if the only mechanism is key-value. But, if you could search data based on attributes, then a distributed cache like NCache becomes as easy to search as a database. NCache provides you SQL and LINQ searching for this purpose.

In addition to SQL searching based on object attributes, you can assign groups/subgroups, tags, and named tags to cached items and later include them in your SQL searches. Below is an example of SQL searching in C#:

using Alachisoft.NCache.Runtime;
using Alachisoft.NCache.Runtime.Exceptions;
using Alachisoft.NCache.Web.Caching;

public void SearchDataUsingSQL()
{
    Cache cache  = NCache.InitializeCache("mypartitionreplica");
    string query = "SELECT this.Category, "
                    + "MAX(Prod.Product.ProductID) "
                    + "WHERE this.Category = ? "
                    + "GROUP BY this.Category";
    Hashtable values = new Hashtable();
    values.Add("Category", 4);
    ICacheReader reader = cache.ExecuteReader(query, values);
    if (reader.FieldCount > 0)
    {
        while (reader.Read())
        {
            // Get the value through the field's ordinal (index)
            object category = reader.GetValue(reader.GetOrdinal("Category"));
            // Perform operations on the result
        }
    }
}

Server-Side Code

Finally, there is server-side code like Read-through, Write-through, Custom Dependency, and Cache-Loader that is very useful for you. This code is developed by you but is called by the distributed cache and runs in the cache cluster. With the help of this code, you can simplify your applications and move a lot of commonly used code to the caching tier.

For example, NCache calls your Read-through handler when your application asks for some data that is not in the cache and the application tells NCache to call Read-through in that case. Similarly, you can combine Read-through with expirations and database synchronizations to auto-reload the cached item instead of removing it from the cache.

using Alachisoft.NCache.Runtime.Caching;
using Alachisoft.NCache.Runtime.DatasourceProviders;
using Alachisoft.NCache.Runtime.Dependencies;

public class SampleReadThruProvider : IReadThruProvider
{
    public void Init(IDictionary parameters, string cacheId)
    {
        // Create the SQL connection and perform other server-side initializations
    }

    // Responsible for loading an item from the external data source
    public void LoadFromSource(string key, out ProviderCacheItem cacheItem)
    {
        // LoadFromDataSource is a placeholder method that loads data from the data source
        object value = LoadFromDataSource(key);

        // Attach a SQL dependency to the object
        string query = "SELECT ProductID FROM dbo.Products WHERE ProductID = 1001";
        cacheItem = new ProviderCacheItem(value);
        cacheItem.Dependency = new SqlCacheDependency(connectionString, query);

        // Set expirations
        cacheItem.SlidingExpiration = new TimeSpan(0, 5, 0);

        // Indicates whether the item should be reloaded on expiration
        // if a Read-through provider is specified
        cacheItem.ResyncItemOnExpiration = true;
    }

    public void Dispose() { /* ... */ }
}

Write-through works in the same fashion as Read-through but for updates. It updates your database when your application updates the cache. And, if you prefer, Write-behind updates the database asynchronously even though the cache gets updated synchronously. Finally, Cache Loader is called when the cache is started so you can pre-load it with the desired data.


As you can see, NCache, an open source .NET distributed cache, offers a lot more power than a simple key-value store like Redis or Memcached. Below is a detailed comparison of a key-value store with a distributed cache service, i.e., NCache:

Posted in Cache dependency, CLR procedures, Distributed Cache, LINQ Query, Uncategorized

NCache Celebrating 10 Years of Success


A Look Back and Our Future Vision

The Launch

With enthusiasm and stars in our eyes, NCache was launched in July 2005. And wow have we all come a long way since then!  We’ve grown up with the .NET community and its adoption of caching solutions.  Back then, caching was pretty new for .NET.  I remember we wrote articles, created slideware, went to shows, meetups and prospects everywhere educating developers and managers about the benefits of caching, how to architect it and what it can do for the enterprise.  At the time, .NET itself was not used as much in high transaction, high traffic applications as it is today. But it took hold.

And then the explosion in ASP.NET and .NET Web Services took off.  Because .NET is such an easy environment to program in, and developers love the tools they get, use of these technologies kept on growing.  Today Internet of Things drives further growth, especially with the integration of smart devices.  So we’ve pretty much been through the entire .NET ‘scalability movement’ where server side applications need to scale because of booming web technologies, with more and more users and now…Big Data.

More and more important

With .NET reaching as much as 30% share of the server market in some industries, there is sizeable, growing demand for caching.  We've seen NCache become more and more important for customers.  We are blessed that NCache has top brand recognition in the .NET developer community.  Because of this, when the need arises for caching, .NET developers often turn to NCache first.

Enterprises that don’t have budget – but need scalability – choose the NCache Open Source version released at the start of this year.  Previously these folks would look at Microsoft AppFabric – a free caching solution.  When Microsoft announced AppFabric support would be discontinued, NCache saw another round of interest from some of the biggest and fastest data companies we’ve seen.  Thanks to all of you, we are celebrating 10 years of NCache success.

There When You Needed It

Since 2005 these 10 happy years have shown steady progression with the .NET development community.  With high transaction apps growing and just a few full featured distributed caching options in the market, we’ve had a pretty smooth time convincing development shops they can use, and rely on, NCache for mission critical apps.

Overall NCache has been at the right place at the right time… in fact a little ahead of the curve.  It gave just enough time for the product to mature.  When the need for fast data spiked, our customers were relieved to have an ‘already stable, battle-tested product’.  Now after 10 years we serve hundreds and hundreds of customers in every possible industry.  Many run NCache non-stop, and some have run non-stop for years.  We call that success!!

NCache is loaded with distributed caching features that are typically available in In-Memory Data Grids (IMDGs) built on Java.  So we developed a Java version of NCache from the ground up, called TayzGrid.  TayzGrid is an In-Memory Data Grid with the same maturity as NCache: the NCache source code was converted into native Java code, so it incorporates all the bug fixes and feature implementations from 10 years of NCache evolution.  From its first release, TayzGrid has been a fully mature product, making it a strong option among the IMDGs available today.

More to Come

Building on business’ comfort level of NCache – and success of TayzGrid with its Big Data analytics, we are evolving more.  So far, Big Data capabilities have been driven by the Java projects Hadoop, Apache Ignite and Apache Spark.  As Big Data grows in popularity in .NET, I expect Microsoft will be entering this space and am seriously looking forward to this.  When Microsoft provides their equivalent of Hadoop and Spark, we will be filling the requirements for Big Data with NCache, replicating what Apache Ignite offers for Java.

Look for .NET Big Data in 2016.  And we look forward to serving your organization with NCache – to make your enterprise data run faster, with zero data loss, in the years to come.  Thank you for helping us achieve 10 years of success with NCache!

Iqbal Khan – President and Technology Evangelist

Posted in News

Microsoft Ends AppFabric Support, NCache Keeps on Going!

April 2nd, 2016 is proving to be an important date for Alachisoft's product NCache. This is the date when Microsoft ends AppFabric support, essentially withdrawing the product from the market. See the details in Microsoft AppFabric 1.1 for Windows Server Ends Support 4/2/2016.

AppFabric was Microsoft’s answer to .NET industry’s need for an in-memory distributed cache to provide performance and scalability boost for .NET server applications. But, first it was late to the party with AppFabric 1.0 released in June 2010 (five years after NCache). And, second, it struggled to provide the features and functionalities considered de rigueur for in-memory distributed caches. Eventually, Microsoft decided to pull the plug on a product that essentially provided less features than other .NET caching solutions like NCache.

Read AppFabric vs NCache comparison and see for yourself how much AppFabric lacked in features and capabilities.

NCache is the most popular Open Source In-Memory Distributed Cache for .NET. It is available through GitHub or can be downloaded from our Alachisoft website. NCache is available for use either on-premises or in the Cloud (like Azure, Amazon, and others).

Interestingly, Microsoft’s proposed AppFabric replacement, Azure Redis Cache, is a cloud-based managed cache service. The on-premises version of Redis is not supported by Microsoft. And, you should also know that using Azure Redis Cache as a managed cache service means you lose all fine grained control over your cache and you are restricted to a simpler version of the client-side API.

On the other hand, NCache allows you to deploy it in Azure on a virtual machine as part of your infrastructure and gain full control over client-side and server-side code on an in-memory distributed cache. You can also use the exact same version of NCache on-premises that you're using in the Cloud. This way, you can start on-premises and later migrate to the Cloud without any code changes, or you can choose a hybrid of Cloud and on-premises application deployment. NCache works in all these scenarios seamlessly, making it a superior Redis alternative.

On top of all this, NCache provides many more powerful features and capabilities than even on-premises Redis, which itself has more features than Azure Redis Cache. Here is a detailed Redis vs NCache comparison.

We have several hundred satisfied name-brand customers using NCache for years, including customers in over 30 countries, dozens of industries and some of the biggest ecommerce and financial institutions on the planet.

So, if you want to replace AppFabric with a 100% native .NET distributed caching solution, try NCache. We are convinced that you will be impressed with its combination of powerful features and yet very simple and easy to use interface and tools. And, rest assured that NCache will be fully supported by Alachisoft including 24×7 support and you won’t be left hanging.

Fortunately, we’ve made it really easy for you to migrate from AppFabric to NCache. We’ve implemented an AppFabric Wrapper for NCache that allows you to keep your existing application code and migrate to NCache without any code changes (except changing namespaces). This takes all the risk out of the AppFabric-to-NCache migration.

NCache is the industry's oldest and most popular distributed cache. See some details on it.

NCache Details                  Edition Comparison                         Performance Benchmarks

Posted in ASP .NET performance, ASP.NET Cache, Distributed Cache

In-Memory NoSQL – Improve .NET App Scalability

Modern day .NET applications must be well equipped to cope with the scalability demands being put on them due to the explosion of ASP.NET web applications reaching millions of users, WCF web services applications handling Internet of Things (IoT) devices, and Big Data analytics applications processing huge amounts of data. Scalability here means to maintain the same high performance under extremely heavy loads that you see with no load.

The architectures of these .NET applications are very scalable because you can add more app servers as you need higher transaction loads. But, the problem is with data storage, especially relational databases like SQL Server and Oracle, and mainframe. Basically, data storage is unable to scale in the same manner as the application tier.

Is Migrating to NoSQL a Right Strategy?

Some people propose that these .NET applications should stop using relational databases or mainframe and instead use NoSQL databases. But that is not possible in most cases because of business and legacy reasons. Relational databases also have various technical merits over NoSQL databases in how they’re able to handle the complexity of real world data.

Although NoSQL databases don't have the same scalability problems as relational databases, they can only be used for storing unstructured data; all structured data must still be kept in a relational database.

So, the bottom line is that NoSQL databases cannot completely replace relational databases. As a result, you still need to work with relational databases like SQL Server and Oracle whether you like it or not, and they have major scalability issues.

In-Memory NoSQL Store is different from NoSQL Database

So, what is the answer to this problem? Well, the answer is to continue using relational databases but with another type of NoSQL, called an In-Memory NoSQL store. Although the name sounds very similar to NoSQL databases, it is in reality quite different. An In-Memory NoSQL store is also called a distributed cache or an In-Memory Data Grid.

An In-Memory NoSQL store does not persist any data to disk for permanent storage. It keeps it in memory and is therefore good only as a temporary store. As a result, it is not meant to replace relational databases, as NoSQL databases claim to do. Instead, it is intended to be used “along with” relational databases and helps reduce pressure on them so they're no longer a scalability bottleneck.

NCache as an In-Memory NoSQL Store

An In-Memory NoSQL store like NCache can be used for caching data coming from relational databases, reducing the database traffic that makes them a bottleneck. It can also be used to store transient (temporary) data created at runtime for a short period. One example of transient data is ASP.NET Session State storage in a load-balanced web farm deployment.

An In-Memory NoSQL store is linearly scalable, unlike relational databases, because it builds a dynamic cluster of in-memory storage servers and distributes all the data across them. And you can easily add more servers at runtime, without interrupting your applications, as you need to handle a bigger transaction load.

Figure 1: In-Memory NoSQL Store

Figure 1 shows you how an In-Memory NoSQL store like NCache is used by your application. NCache also synchronizes its store with relational databases to make sure the data in the In-Memory NoSQL store is always correct.

In most cases, your application uses a simple “cache API” to talk to the In-Memory NoSQL store and sees it as a cache. In other cases like ASP.NET Session State or ASP.NET View State caching, In-Memory NoSQL store usually has a pluggable module that plugs in directly without any code changes.
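That "simple cache API" usage typically follows the cache-aside pattern: check the store first and fall back to the database on a miss. Below is a minimal sketch; the Product type, the key format, and the LoadProductFromDatabase helper are hypothetical:

```csharp
// Try the In-Memory NoSQL store first
Product product = cache.Get("Product:1001") as Product;
if (product == null)
{
    // Cache miss: load from the relational database...
    product = LoadProductFromDatabase(1001);
    // ...and add it to the store for subsequent requests
    cache.Insert("Product:1001", product);
}
```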

So, by using an In-Memory NoSQL store like NCache, your application can truly scale in a linear fashion and handle extreme transaction load. Now, whether you have web applications, web services applications, big data analytics applications, or other server applications with scalability needs, you are all set.

If you have a .NET application with scalability requirements, I recommend you take a look at NCache. Here are some useful links:

NCache Details          Edition Comparison              Quick Start Demo

Posted in NoSQL In-memory Datastore

How a .NET Cache Improves Performance and Scalability

Some of the very basic needs for .NET applications to stay competitive in today’s market are to be extremely responsive and scalable. The bottleneck in the way of achieving these benchmarks is your relational database.

This is a twofold bottleneck: first, reads from disk are inefficient and time-consuming; second, you cannot scale out the database tier by adding more database servers. A .NET distributed cache, on the other hand, provides fast data access because it is in-memory, and it can also scale linearly the same way your application tier does.

NCache is a .NET distributed cache that provides performance and scalability for your applications. It comes with a rich set of features, including but not limited to cache elasticity, high availability, data replication, seamless integration with existing technologies, and ease of management. Let's focus on performance and scalability, as identified in the beginning; these are the two fundamental metrics .NET applications need to survive in today's world. Let's see how NCache is positioned to cater to both.

NCache gets its performance edge over a relational database because it keeps its data in memory and not on disk. The performance boost over relational databases is ten times or higher, depending on your hardware and the .NET cache's position in the network. For example, if you deploy NCache as a local in-proc cache in your environment, data access becomes lightning fast.

And NCache provides scalability by allowing you to add more cache servers when your transaction load grows. So, if you see your application getting overwhelmed by the transaction load, just add a new cache server at runtime; you don't even have to stop your application for this. With the new cache server added, you have the capacity to serve more requests, and all of this happens transparently to the user. Now, that's what I mean by scalability.



There are a number of caching topologies that NCache offers to choose from depending on your specific need. The caching topology defines how your data is stored and the way individual cache servers in the cluster interact with one another. For example, Partitioned Cache, Partition-Replica Cache, Replicated Cache, and Mirrored Cache are the caching topologies.

If your primary concern with your .NET cache is scalability and not reliability, you can use the ‘Partitioned Cache’ topology. On the other hand, if your primary focus is reliability and not scalability, you should go for the ‘Replicated Cache’ topology. The ‘Partition-Replica Cache’ is a combination of the two and gives you the best of both worlds: it provides reliability and scalability at the same time, with some trade-offs.

I would like to conclude by saying that if you want your application to keep pace with the growing demands of performance and scalability, a .NET distributed cache is the way to go.


Couchbase Alternatives Open Source Distributed Cache

The database tier based on traditional RDBMS has proven to be the biggest bottleneck in the way of achieving competitive response times for applications. This has forced the application vendors to look for alternatives that can provide improved performance. One such alternative is storing data in a distributed cache.

From the available caching technologies, you need to pick one that answers most, if not all, of the major questions in this domain. Below, I compare two products in this arena: Couchbase and NCache, an open source distributed cache.

1 – ASP.NET Sessions

ASP.NET Session State caching has come a long way, from keeping session information in-memory on the web server (the default), to keeping it on a State Server, to storing it in SQL Server. All of these share one limitation: a single point of failure. Session state is lost if the web server, the State Server, or the SQL Server goes down.

NCache addresses these concerns by saving session state in its open source distributed cache. Since the cache is distributed, there is no single point of failure. Despite the importance of this feature, Couchbase does not support saving ASP.NET sessions.
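For illustration, storing sessions in NCache is typically a configuration-only change in web.config, along these lines. The provider type, assembly details, and cache name below are representative examples; check the NCache documentation for the exact values for your version.

```xml
<!-- Representative example only; verify provider/assembly names for your NCache version -->
<sessionState cookieless="false" regenerateExpiredSessionId="true"
              mode="Custom" customProvider="NCacheSessionProvider" timeout="20">
  <providers>
    <add name="NCacheSessionProvider"
         type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
         cacheName="myDistributedCache" />
  </providers>
</sessionState>
```

Because the session store is now the cache cluster, any web server in the farm can serve any request, and sessions survive the loss of an individual server.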

2 – ASP.NET View State

ASP.NET uses View State to store page, control, and custom values between multiple HTTP requests. When a page has complex controls, e.g. a Data Grid control, the string representing the View State grows very large. In that case you use extra bandwidth passing this string back and forth without any real benefit, and you also open up a security loophole.

How can these issues be addressed? All we need is a distributed cache that can store the View State text and pass back an identifier that can later be used to retrieve the View State from the store. NCache provides exactly this functionality in the form of ASP.NET View State Caching, whereas Couchbase does not.

3 – Memcached Smart Wrapper

NCache can integrate with Memcached transparently through its Memcached integration.

A few words about Memcached: it is a popular distributed cache widely used in the market, but it offers only very basic caching features. It does not provide high availability, data replication, cache elasticity, or ease of management.

Couchbase does not provide any such integration, so for someone using Memcached there is only one way to adopt Couchbase: rewrite your code from scratch!

4 – Security & Encryption

One of the fundamental requirements of applications needing fast response times is that the data be secured. This makes security and encryption must-haves for providers of distributed caching.

NCache provides comprehensive support for both of these features; Couchbase, on the other hand, fails to provide support for data encryption and Active Directory/LDAP authentication. Read more on NCache encryption here.

5 – Read-through & Write-through

Read-through means that your application always asks the cache for data; if the cache doesn't have it, it gets the data from your data source and caches it for future accesses. This greatly simplifies your application code because the cache API is much simpler to use than the database.

Similarly, write-through allows your application to write to the cache, and the cache then writes the same data to the database, either synchronously or asynchronously. Both features let you use the cache as your enterprise data store, with all applications reading from it and writing to it.

NCache provides full support for both read-through and write-through, but Couchbase does not and hence lags behind NCache here as well. More on read-through and write-through.
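As a rough sketch of the idea, compare this with the usual cache-aside pattern where the application itself talks to the database on a miss. The cache and loader interfaces below are simplified stand-ins for illustration, not the actual NCache API:

```csharp
using System.Collections.Generic;

// Simplified stand-ins for illustration only -- not the actual NCache API.
public interface IDataLoader
{
    object LoadFromSource(string key);
}

public class ReadThroughCache
{
    private readonly Dictionary<string, object> _store = new Dictionary<string, object>();
    private readonly IDataLoader _loader;

    public ReadThroughCache(IDataLoader loader) { _loader = loader; }

    // The application only ever calls Get(); on a miss the cache itself
    // loads the item from the data source and keeps it for next time.
    public object Get(string key)
    {
        object value;
        if (!_store.TryGetValue(key, out value))
        {
            value = _loader.LoadFromSource(key); // the cache, not the app, hits the database
            _store[key] = value;
        }
        return value;
    }
}
```

The point is that the database-access code lives in one loader implementation instead of being repeated in every application that reads the data.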

Further Reading

For feature by feature comparison of Couchbase with NCache please take a look at the following link:

Couchbase vs. NCache


AppFabric Alternatives Open Source Distributed Cache

Looking for Appfabric Alternatives?

Today’s applications need to scale and handle extreme data loads due to the explosion of web technologies and the Internet of Things. The biggest bottleneck in the way of achieving this is the data tier with relational databases. The need of the hour is to make this data available to applications in a faster and more efficient way.

You can achieve this by using distributed caching, which facilitates faster response times. There are a few big names in this domain with less-than-competitive products, e.g. Microsoft with its AppFabric caching product. In contrast, NCache is an open source distributed cache that offers cutting-edge technology and features.

An interesting point to note here is that Microsoft is no longer pushing AppFabric in its flagship cloud environment, Microsoft Azure. Instead, it recommends another open source cache called Redis (read the NCache vs. Redis comparison). Why Microsoft chose to do so will become clearer as you read this article.

Appfabric Alternative

In the next few paragraphs, I’m taking a somewhat detailed look at some of the core differences between NCache and AppFabric.

1 – High Availability of Cache (Peer to Peer Architecture)

A distributed cache runs in your production environment and as part of your application deployment. Therefore, you should treat it just like a database when it comes to high availability requirements.

Any distributed cache that does not provide a peer-to-peer architecture for the cache cluster becomes very limiting. AppFabric does not provide such a clustering architecture and is therefore not as flexible as NCache for high availability. NCache provides truly peer-to-peer clustering.

2 – Synchronization with Data Sources

A core requirement of caching data is to keep it from getting stale. In plain words, this means that the cache needs to refresh its data whenever there is an update or remove operation in the master database. NCache provides a strong mechanism for this, called data synchronization, through three types of dependencies:

  •        SqlDependency (SQL Server 2005-2012 events)
  •        OracleDependency (Oracle events)
  •        DbDependency (polling for OLEDB databases)

This is a feature that AppFabric lacks even though it is core to a powerful distributed cache.
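As an illustrative sketch, a cached item can be tied to a SQL Server query so the cache removes it when matching rows change. Exact class names, namespaces, and overloads vary between NCache versions, so treat this as an approximation rather than copy-paste code; `cache` is assumed to be an already-initialized cache handle and `product` an object fetched from the database.

```csharp
// Approximate sketch -- verify class/namespace names against your NCache version.
using Alachisoft.NCache.Runtime.Dependencies;
using Alachisoft.NCache.Web.Caching;

string connectionString =
    "Data Source=.;Initial Catalog=Northwind;Integrated Security=true;";

// The item is removed from the cache when SQL Server fires a change
// notification for rows matched by this query.
SqlCacheDependency dependency = new SqlCacheDependency(
    connectionString,
    "SELECT ProductID, UnitPrice FROM dbo.Products WHERE ProductID = 1001");

CacheItem item = new CacheItem(product);
item.Dependency = dependency;
cache.Insert("Product:1001", item);
```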

3 – WAN Replication

As the name suggests, WAN replication deals with the availability of data in geographically dispersed data centers. NCache provides powerful WAN replication capability for its distributed cache in the following data center configurations:

  •        Active – Passive
  •        Active – Active

The first configuration applies when you want one data center, the active one, to handle all user requests, while a backup data center stands by for disaster recovery, i.e. the passive one. All data operations on the active data center are replicated to the passive data center asynchronously, so that if the active DC goes down, the passive one becomes active and starts serving user requests.

The second configuration applies when you want two active data centers, each serving users in its nearby geographical area. Since both DCs are active, data is replicated in both directions. If the need arises to reroute all traffic to one data center, because one of the DCs is overwhelmed or goes down, this can be done without any data integrity issues.

AppFabric fails to provide this much-needed functionality at any level.

4 – Search Cache with SQL

The real power of a cache lies in the fact that once it has the data, the data should be easily searchable and accessible. NCache comes with very handy tools to accomplish this:

  •        Object Query Language (OQL)
  •        Group-by for Queries
  •        Delete Statement in Queries
  •        LINQ Queries

AppFabric provides none of this and hence lags behind NCache on this front as well.

5 – Memcached Smart Wrapper

NCache can integrate with Memcached seamlessly through its Memcached integration.

For those of you not familiar with Memcached, it is a popular distributed cache widely used in the market, but it offers only very basic caching features. It does not provide high availability, data replication, cache elasticity, or ease of management.

AppFabric does not provide any such integration, so for someone using Memcached there is only one way to adopt AppFabric: rewrite your code from scratch!

6 – Cache Size Management – Priority Evictions

To use the memory available to the cache efficiently, we have the concept of evictions. Simply put, this means removing relatively old data from the cache to make space for newer items. NCache employs a number of algorithms to carry out this important task:

  •        Least Recently Used (LRU) Evictions
  •        Least Frequently Used (LFU) Evictions
  •        Priority Evictions
  •        Do Not Evict Option

Each of these options has its own advantages and disadvantages, with each being most suitable for a particular use case. AppFabric only supports LRU evictions; this is a big limitation, as some scenarios demand keeping older data and evicting based on some other metric.
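With priority evictions, for instance, the application hints at which items should be evicted last. Something along these lines, where `cache`, `countries`, and `results` are assumed to already exist, and the exact enum and overload names may differ between NCache versions:

```csharp
// Sketch only -- check the NCache API reference for the exact types and overloads.
using Alachisoft.NCache.Runtime;
using Alachisoft.NCache.Web.Caching;

// Reference data that is expensive to rebuild: evict it last.
CacheItem countryList = new CacheItem(countries);
countryList.Priority = CacheItemPriority.High;
cache.Insert("ref:countries", countryList);

// Transient data: evict it first when the cache runs out of space.
CacheItem searchResult = new CacheItem(results);
searchResult.Priority = CacheItemPriority.Low;
cache.Insert("search:banking", searchResult);
```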

Further Reading

For feature by feature comparison of NCache with AppFabric please take a look at the following link: AppFabric vs NCache

Also read: Step-by-Step Appfabric Migration


Redis Alternatives Open Source Distributed Cache

Redis is an in-memory key-value store developed in C, with clients for various programming languages such as .NET, Java, and C. Redis has features to tackle critical issues but falls short in some fundamental aspects. NCache, a .NET distributed caching solution, answers all of these concerns effectively when compared with Redis and also provides additional features that are absent in Redis, making it an ideal replacement.

The demand for high-speed data access, along with data integrity and fault tolerance, is ever increasing in the contemporary application arena. Traditional disk-based RDBMS systems have failed to answer these concerns in a comprehensive manner. This paved the way for in-memory data solutions that cover all the bases listed above.

Here’s why you want to use NCache as an alternative to Redis.

1 – WAN Replication

First and foremost, Redis has no support for WAN replication of cached data. This feature becomes indispensable when your application is deployed in multiple data centers across the country or around the world. NCache provides powerful WAN replication capability for its distributed cache in the following data center configurations:

  •        Active – Passive
  •        Active – Active

The first configuration suits the case where you have an active data center and a disaster-recovery backup data center. The backup data center only becomes active if the active one goes down; for this to be possible, the cache must be replicated to the passive data center so the data is available when that data center becomes active.

The second configuration applies where you have two active data centers serving geographically dispersed users to improve their access times, and you want the ability to reroute some traffic from one data center to the other in case of overflow, or all of it if one data center goes down.

In this second case, cache updates from each data center must be replicated to the other, and conflict situations must be handled. This is the capability that NCache provides and Redis does not.

2 – Security & Encryption

Many of the applications needing a distributed cache are dealing with sensitive and highly confidential data. Therefore, security and encryption are two areas that hold fundamental importance when talking about data storage and retrieval.

Redis lacks support for both, providing neither authentication nor encryption. NCache, in contrast, provides authentication and authorization through Active Directory/LDAP. NCache also provides very strong options for encrypting the stored data:

  •        3DES
  •        AES-128
  •        AES-192
  •        AES-256

Read more on NCache encryption here.

3 – Read-through & Write-through

Read-through and write-through are familiar concepts in the domain of distributed caching, but for people just getting familiar with caching, brief definitions won't hurt.

Read-through means that your application always asks the cache for data, and the cache fetches it from your data source if it doesn't have it. This greatly simplifies your application code because the cache API is much simpler to use than the database.

Similarly, write-through allows your application to write to the cache, and the cache then writes the same data to the database, either synchronously or asynchronously.

Both features let you designate the distributed cache as your enterprise data store, with all applications reading from it and writing to it while the cache deals with the database. As a result, the cache is always synchronized with your database.

Despite its importance, Redis lacks this feature, while NCache covers it in length and breadth.
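A write-through provider is typically a small class you implement and register with the cache; the cache then calls it whenever an item is written. The interface below is a simplified stand-in for illustration, not the actual NCache API:

```csharp
// Simplified stand-in for a write-through provider -- not the actual NCache API.
public interface IWriteThroughProvider
{
    void WriteToSource(string key, object value);
}

public class OrderWriteThroughProvider : IWriteThroughProvider
{
    // Called by the cache -- synchronously, or from a background thread
    // in write-behind mode -- after the application writes to the cache.
    public void WriteToSource(string key, object value)
    {
        // Persist the cached item to the master database here,
        // e.g. with an INSERT/UPDATE against SQL Server.
    }
}
```

Because the cache owns the database write, every application using the cache gets the same persistence behavior without duplicating that code.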

4 – Cache Administration

The effectiveness of a distributed cache also depends on your ability to administer and monitor it through user-friendly GUI tools. In this regard, Redis does not provide any GUI tools for cache administration or monitoring; the only things available to you are command-line tools.

On the other hand, NCache provides powerful GUI based tools like NCache Manager and NCache Monitor for cache administration and monitoring. Some people however prefer to use command line tools because you can use them in scripts for automation. In this regard, NCache also provides all the cache administration through command line tools.

5 – ASP.NET View State Caching

View State is a powerful mechanism that ASP.NET employs to store page, control, and custom values between multiple HTTP requests across the client and the server. The View State is passed as encoded text that becomes very large for forms with a lot of controls, a Data Grid control, or other complex controls. This raises two concerns:

  •        Security risk
  •        Bandwidth usage

Both of these concerns are addressed if we have a distributed cache that can store the View State text and pass back an identifier that can later be used to retrieve the View State from the store. NCache provides exactly this functionality in the form of ASP.NET View State Caching, whereas Redis does not.

6 – Memcached Smart Wrapper

Memcached is a popular distributed cache that is being used by a host of applications to benefit from the performance boost being discussed herein. It however has a number of limitations in the areas of high availability, data replication, cache elasticity, and ease of management.

The easiest way for customers using Memcached to address these issues is NCache's Memcached integration, which lets users plug NCache into their existing Memcached applications without any code change. They only need to change their application configuration files to take advantage of NCache's distributed caching system.

Redis does not provide any such integration and hence falls behind on this as well.

Further Reading

For feature by feature comparison of NCache with Redis please take a look at the following link:

Redis vs. NCache


ASP.NET View State Caching in Microsoft Azure for Better Performance

ASP.NET View State is a client-side state management mechanism used to save page and control values. The View State is kept in a hidden field on the page as an encoded Base64 string; it is sent to the client as part of every response and returned to the server by the client as part of a postback.

<input id="__VIEWSTATE" type="hidden" value="..." />

Problems with ASP.NET View State in Microsoft Azure

ASP.NET View State is a very important feature for applications deployed as web/worker roles in Microsoft Azure. But View State comes with some issues that you need to understand and resolve in order to benefit fully from it.

First of all, ASP.NET View State becomes very large, especially when your Microsoft Azure ASP.NET application has a lot of heavy controls on its pages. This results in a heavy View State payload traveling back and forth between the browser and your application on each request. A heavy payload slows down performance and consumes extra bandwidth, especially when an average View State runs to hundreds of kilobytes and millions of such requests are processed by your Microsoft Azure application.

ASP.NET View State is also a security risk when confidential data is sent to the client as part of the View State. This data is vulnerable to attack and can be tampered with, which is a serious security threat.

Solution to ASP.NET View State Problems

You can resolve these ASP.NET View State issues in Microsoft Azure applications by storing the actual View State on the server side in a distributed cache and never sending it back to the browser with the response payload.

NCache for Azure is an extremely fast and scalable distributed cache for Microsoft Azure. It stores the actual ASP.NET View State in the distributed cache on the server side and instead sends a small token to the client as the View State. This dramatically reduces the request payload size. On postbacks, the token is used on the server side to find the right View State in the NCache for Azure distributed cache. The smaller payload resolves the performance and bandwidth issues, because you no longer deal with a huge View State on each request in your Microsoft Azure application. Moreover, View State stored on the server side in the NCache for Azure distributed cache is never exposed to clients, which addresses the security concerns mentioned above.

Here is an example of a token being used in place of ASP.NET View State with NCache for Azure ASP.NET View State provider:

<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE"
value="cf8c8d3927ad4c1a84dsadfgsdsfdsda7f891bb89185" />

Using NCache for Azure ASP.NET View State Caching

Step 1: Create an app.browser file in App_browsers directory. Plug in page adapters in the app.browser file as follows:

   File: App_browsers\app.browser

<browser refID="Default">
        <adapter controlType="System.Web.UI.Page"
            adapterType="Alachisoft.NCache.Adapters.PageAdapter" />
</browser>

Step 2: Add the following assembly reference in the compilation section of the web.config file.

   File: web.config

<compilation defaultLanguage="c#" debug="true" targetFramework="4.0">
    <assemblies>
        <add assembly="Alachisoft.NCache.Adapters, Version=,
            Culture=neutral, PublicKeyToken=CFF5926ED6A53769"/>
    </assemblies>
</compilation>

Step 3: Register the NCache for Azure config section in your web.config file.

   File: web.config

<sectionGroup name="ncContentOptimization">
    <section name="settings"
        allowLocation="true" allowDefinition="Everywhere"/>
</sectionGroup>

Step 4: Specify settings for the config section registered above in the web.config file. These settings control the NCache for Azure ASP.NET View State Caching features.

   File: web.config

<settings viewstateThreshold="12" enableViewstateCaching="true"
    enableTrace="false" groupedViewStateWithSessions="false">
    <cacheSettings cacheName="myCache">
        <expiration type="Absolute" duration="1"/>
    </cacheSettings>
</settings>


NCache for Azure provides a no-code-change option for your Microsoft Azure applications to store ASP.NET View State on the server side in a distributed cache. The NCache for Azure View State provider optimizes performance by reducing request payload and bandwidth consumption while addressing the security issues of client-side View State.

Download NCache Open Source and run it on Microsoft Azure.
