Query a Distributed Cache with SQL-Like Aggregate Functions

Today, distributed caches are widely used to achieve scalability and performance in high traffic applications. A distributed cache offloads your database servers by serving cached data from an in-memory store. In addition, a few distributed caches also provide SQL-like query capability, with which you can query your distributed cache the way you query your database, e.g. “SELECT employee WHERE employee.city = ‘New York’”.

First of all, most distributed caches don’t even provide SQL-like querying capabilities. Even the few that do offer only limited support for it: they only let you search the distributed cache based on simple criteria. However, there are several scenarios where you have to compute a result based on aggregate functions, e.g. “SELECT COUNT(employee) WHERE salary > 1000” or “SELECT SUM(salary) WHERE employee.city = ‘New York’”. With such caches, you first have to query the distributed cache and then calculate the aggregate function yourself on the fetched data.

This approach has two major drawbacks. First, you have to execute the query on the distributed cache, which involves fetching all the matching data from the distributed cache to the cache client. This data may vary from MBs to GBs, and the operation becomes even more expensive when you are also paying for the consumed network bandwidth. Moreover, in most cases you don’t need this data once the aggregate calculation is done.

The second drawback is that it involves custom programming for the aggregate function calculation. This adds extra man-hours, and most of the complex scenarios still cannot be covered. It would be much nicer if you could continue to develop your application for the purpose it is being built for and not worry about designing and implementing these extra features yourself.

These are the reasons why NCache provides you the flexibility to query the distributed cache using aggregate functions like COUNT, SUM, MIN, MAX, and AVG as part of its Object Query Language (OQL). Using NCache OQL aggregate functions, you can easily perform the required aggregate calculations inside the distributed cache domain. This approach not only avoids the network traffic, but also gives you much better performance in terms of aggregate function calculation. This is because all the selections and calculations are done inside the distributed cache domain and no network trips are involved.

Here is the code sample to search NCache using OQL aggregate queries:

public static void Main(string[] args)
{
    ...
    Cache _cache = NCache.InitializeCache("myPartitionReplicaCache");

    string query = "SELECT COUNT(Business.Product) WHERE " +
                   "this.ProductID > ? AND this.Price < ?";

    // Parameter values for the placeholders in the WHERE clause
    Hashtable values = new Hashtable();
    values.Add("ProductID", 100);
    values.Add("Price", 50);

    // Fetch the cache items matching this search criteria
    IDictionary searchResults = _cache.SearchEntries(query, values);
    ...
}

To further reduce query execution time, NCache runs the SQL-like query in parallel by distributing it to all the cache servers, much like a map-reduce mechanism. In addition, you can use NCache OQL aggregate queries in both .NET and Java applications.
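For the Java side, here is a minimal sketch of the same kind of aggregate query. It assumes the NCache Java client exposes initializeCache/searchEntries calls analogous to the .NET sample above; the exact package, class, and method names may differ, so treat it as illustrative only:

import java.util.HashMap;
import java.util.Map;

public class AggregateQuerySample {
    public static void main(String[] args) throws Exception {
        // NCache client imports omitted; API assumed to mirror the .NET sample above
        Cache cache = NCache.initializeCache("myPartitionReplicaCache");

        String query = "SELECT COUNT(Business.Product) WHERE "
                     + "this.ProductID > ? AND this.Price < ?";

        // Parameter values for the placeholders in the WHERE clause
        HashMap values = new HashMap();
        values.put("ProductID", 100);
        values.put("Price", 50);

        // Assumed to mirror the .NET SearchEntries call; the COUNT result
        // comes back inside the returned map
        Map results = cache.searchEntries(query, values);
        System.out.println("Query result: " + results);
    }
}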

In summary, NCache provides you not only scalability and performance, but also the flexibility of searching the distributed cache using SQL-like aggregate functions.

Download NCache Trial | NCache Details


Class Versioning in Runtime Data Sharing with Distributed Cache

Today many organizations use .NET and Java technologies to develop different high traffic applications. At the same time, these applications not only need to share data with each other, but also want to support runtime sharing of different versions of the same class for backward compatibility and cost reduction.

The most common way to support runtime sharing of different class versions between .NET and Java applications is through XML serialization. But, as you know, XML serialization is an extremely slow and resource-hungry process. It involves XML validation, parsing, and transformations, which really hamper your application performance and consume extra resources in terms of memory and CPU.

The other approach widely used to support sharing of different class versions between .NET and Java is through the database. However, the problem with this approach is that it’s slow and also doesn’t scale very well with the growing transactional load. Therefore, your database quickly becomes a scalability bottleneck, because you can linearly scale your application tier by adding more application servers, but you cannot do the same at the database tier.


This is where a distributed cache like NCache comes in really handy. NCache provides you a binary-level object transformation between different versions not only of the same technology but also between .NET and Java. You can map different versions through an XML configuration file, and NCache understands how to transform from one version to another.

The NCache class version sharing framework implements a custom interoperable binary serialization protocol. Based on the specified mapping, it generates the byte stream in a format that any new or old version of the same class can easily deserialize, regardless of its development language, which can be .NET or Java.

Here is an example of NCache config.ncconf with class version mapping:

<cache-config name="InteropCache" inproc="False" config-id="0" last-modified="" type="local-cache" auto-start="False">
 ...
    <data-sharing>
      <type id="1001" handle="Employee" portable="True">
        <attribute-list>
          <attribute name="Name" type="java.lang.String" order="1"/>
          <attribute name="SSN" type="java.lang.String" order="2"/>
          <attribute name="Age" type="int" order="3"/>
          <attribute name="Address" type="java.lang.String" order="4"/>
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
          <attribute name="Address" type="System.String" order="7"/>
        </attribute-list>
        <class name="com.samples.objectmodel.v1.Employee:1.0" handle-id="1"
               assembly="com.jar" type="java">
          <attribute name="Name" type="java.lang.String" order="5"/>
          <attribute name="SSN" type="java.lang.String" order="2"/>
        </class>
        <class name="com.samples.objectmodel.v2.Employee:2.0" handle-id="2"
               assembly="com.jar" type="java">
          <attribute name="Name" type="java.lang.String" order="5"/>
          <attribute name="Age" type="int" order="6"/>
          <attribute name="Address" type="java.lang.String" order="7"/>
        </class>
        <class name="Samples.ObjectModel.v2.Employee:2.0.0.0" handle-id="3"
   assembly="ObjectModelv2, Version=2.0.0.0, Culture=neutral,
   PublicKeyToken=null" type="net">
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
          <attribute name="Address" type="System.String" order="7"/>
        </class>
        <class name="Samples.ObjectModel.v1.Employee:1.0.0.0" handle-id="4"
               assembly="ObjectModelv1, Version=1.0.0.0, Culture=neutral,
               PublicKeyToken=null" type="net">
          <attribute name="Name" type="System.String" order="5"/>
          <attribute name="Age" type="System.Int32" order="6"/>
        </class>
      </type>
    </data-sharing>
    ...
  </cache-config>

How does NCache do Class Versioning in Runtime Data Sharing?

In the config.ncconf file shown above, you’ll notice that the Employee class has a set of attributes defined first. These are version-independent attributes that form a superset of all attributes appearing in the different .NET and Java versions of the class. Below that, you specify version-specific attributes and map them to the version-independent attributes above.

Now, let’s say that you saved .NET Employee version 1.0.0.0. Now, when another application tries to fetch the same Employee, but it wants to see it as Java version 1.0 or 2.0. NCache knows which attributes of .NET version 1.0.0.0 to fill with data and which ones to leave blank and vice versa.

There are many other scenarios that NCache handles seamlessly for you. Please read the online product documentation for how NCache runtime data sharing works.

Finally, the best part is that you don’t have to write any serialization and deserialization code or make any code changes to your application in order to use this NCache feature. NCache has implemented a runtime code generation mechanism, which generates the in-memory serialization and deserialization code of your interoperable classes at runtime, and uses the compiled form so it is super-fast.

In summary, using NCache you can now share different class versions between your .NET and Java applications without even modifying your application code.

Download NCache Trial | NCache Details


Distributed Cache Continuous Query for Developing Real Time Applications

High traffic real time applications are widely used in the enterprise environment. In real-time applications, information is made available to you within moments of being produced, and any delay can cause serious financial loss. The main challenge these high traffic real time applications face is getting notified about any changes in the data set so that the corresponding views can be updated.

But, these high traffic real time applications cannot rely on a traditional database, because it only supports queries on the data residing in it; to get an updated data set, you have to execute the query again after a specific interval, which is not instantaneous. This periodic polling also causes performance and scalability issues because you are making expensive database trips even when there are no changes in the data set.

SqlDependency is provided by Microsoft for SQL Server, and Oracle on Windows also supports it. SqlDependency allows you to specify a SQL statement, and SQL Server monitors this data set in the database for any additions, updates, or deletions and notifies you when this happens. But the problem with SqlDependency is that once it fires, it gets unregistered from the database. Therefore, all future changes in your data set are missed and you don’t get notified.

Moreover, SqlDependency does not provide details of the record where the change occurred. So, to find the change in the data set, you have to fetch the complete data set again instead of directly fetching only the specific record that was added, updated, or removed. And, of course, this is not efficient.

In addition to the limitations of SqlDependency, your database is also unable to cope with the transactional demands of these high traffic real time applications, where tens of thousands of queries are executed every second, and it quickly becomes a scalability bottleneck. This is because although you can linearly scale your application tier by adding more application servers, you cannot do the same with your database server.

This is where a distributed cache like NCache comes in, because it allows you to cache data and reduce those expensive database trips that are causing the scalability bottleneck.

NCache has a powerful Continuous Query capability that enables you to register an SQL-like query with the cache cluster. This Continuous Query remains active in the cache cluster, and if there is any change in the data set of this query, NCache notifies your real time application. This approach saves you from periodically executing the same expensive query against the database just to poll for changes.

Here is sample code for NCache Continuous Query:

public static void Main(string[] args)
{
    ...
    Cache _cache = NCache.InitializeCache("myPartitionReplicaCache");

    string queryString = "SELECT NCacheQuerySample.Business.Product WHERE " +
                         "this.ProductID > ?";

    Hashtable values = new Hashtable();
    values.Add("ProductID", 100);
    ...
    onItemAdded = new ContinuousQueryItemAddedCallback(OnQueryItemAdded);
    onItemUpdated = new ContinuousQueryItemUpdatedCallback(OnQueryItemUpdated);
    onItemRemoved = new ContinuousQueryItemRemovedCallback(OnQueryItemRemoved);

    ContinuousQuery query = new ContinuousQuery(queryString, values);
    query.RegisterAddNotification(onItemAdded);
    query.RegisterUpdateNotification(onItemUpdated);
    query.RegisterRemoveNotification(onItemRemoved);
    _cache.RegisterCQ(query);
    ...
}
//data set item is removed
void OnQueryItemRemoved(string key){
   ...
   Console.WriteLine("Removed key: {0}", key);
   ...
}
//data set item is updated
void OnQueryItemUpdated(string key){
   ...
   Console.WriteLine("Updated key: {0}", key);
   ...
}
//data set item is added
void OnQueryItemAdded(string key){
   ...
   Console.WriteLine("Added key: {0}", key);
   ...
}

And, unlike SqlDependency, NCache Continuous Query remains active and doesn’t get unregistered on each change notification. So, your high traffic real time applications continue to be notified across multiple changes.

NCache Continuous Query also provides you the flexibility to be notified separately on ADD, UPDATE, and DELETE. And, you can specify these events at runtime even after creating a Continuous Query, something SqlDependency doesn’t allow you to do. This also reduces event traffic from the cache cluster to your real time application.

In summary, NCache provides you a very powerful event-driven Continuous Query capability that traditional databases don’t give you. And, NCache is also linearly scalable for your high traffic real time applications.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


.NET and Java Data Sharing with Binary Serialization

Many organizations today use a variety of .NET and Java technologies to develop different high traffic applications. At the same time, these organizations have a need to share data at runtime between .NET and Java applications.

One way to share data is through the database, but that is slow and also doesn’t scale very well. A much better approach is to use an in-memory distributed cache as a common data store between multiple applications. It’s fast and also scales linearly.

As you know, Java and .NET types are not compatible. Therefore, you end up transforming the data into XML for sharing. Additionally, most distributed caches either don’t provide any built-in mechanism for sharing data between .NET and Java applications or only provide XML-based data sharing. If a cache doesn’t provide a built-in data sharing mechanism, then you have to define the XML schema and use a third-party XML serialization library to construct and read all the XML data.

But, XML serialization is an extremely slow and resource-hungry process. It involves XML validation, parsing, and transformations, which really hamper application performance and consume extra resources in terms of memory and CPU.

Distributed cache by design is used to improve your application performance and scalability. It allows your applications to cache your application data and reduce those expensive database trips that are causing a scalability bottleneck. And, XML based data sharing goes against these performance and scalability goals for your application. If you increase transaction load on your application, you’ll see that XML manipulation ends up becoming a performance bottleneck.

A much better way is to share data between .NET and Java applications at the binary level, where you don’t have to do any XML transformations. NCache is a distributed cache that provides runtime data sharing between .NET and Java applications with binary serialization.

How does NCache provide runtime data sharing between .NET and Java?

Well, before that you need to understand why the native .NET and Java binary serializations are not compatible. Java and .NET each have their own binary serialization that interprets objects in its own way, and the two have different type systems. Moreover, the serialized byte stream of an object includes the data type details as fully qualified names, which again differ between .NET and Java. This, of course, also hinders data type compatibility between .NET and Java.

To handle this incompatibility, NCache has implemented its own interoperable binary serialization that is common to both .NET and Java. NCache interoperable binary serialization identifies objects based on type-ids that are consistent across .NET and Java, instead of fully qualified names that are technology-specific. This approach not only provides interoperability but also reduces the size of the generated byte stream. Secondly, NCache interoperable binary serialization implements a custom protocol that generates the byte stream in a format that both its .NET and Java implementations can easily interpret.

Here is an example of NCache config.ncconf with data interoperable class mapping:

  <cache-config name="InteropCache" inproc="False" config-id="0" last-modified=""
   type="clustered-cache" auto-start="False">
    ...
      <data-sharing>
       <type id="1001" handle="Employee" portable="True">
         <attribute-list>
           <attribute name="Age" type="int" order="1"/>
           <attribute name="Name" type="java.lang.String" order="2"/>
           <attribute name="Salary" type="long" order="3"/>
           <attribute name="Age" type="System.Int32" order="4"/>
           <attribute name="Name" type="System.String" order="5"/>
           <attribute name="Salary" type="System.Int64" order="6"/>
         </attribute-list>
   <class name="jdatainteroperability.Employee:0.0" handle-id="1"
    assembly="jdatainteroperability.jar" type="java">
           <attribute name="Age" type="int" order="1"/>
           <attribute name="Name" type="java.lang.String" order="2"/>
           <attribute name="Salary" type="long" order="3"/>
          </class>
          <class name="DataInteroperability.Employee:1.0.0.0" handle-id="2"
           assembly="DataInteroperability, Version=1.0.0.0, Culture=neutral,
           PublicKeyToken=null" type="net">
           <attribute name="Age" type="System.Int32" order="1"/>
           <attribute name="Name" type="System.String" order="2"/>
           <attribute name="Salary" type="System.Int64" order="3"/>
          </class>
       </type>
      </data-sharing>
    ...
  </cache-config>

As a result, NCache is able to serialize a .NET object and de-serialize it in Java as long as there is a compatible Java class available. This binary level serialization is more compact and much faster than any XML transformations.
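The reverse direction works the same way. Here is a minimal sketch of a Java application storing an Employee that a .NET application can later read back as DataInteroperability.Employee, again assuming the NCache Java client mirrors the .NET API shown elsewhere in this blog (the getter/setter names and the cache key are illustrative):

import jdatainteroperability.Employee;

public class ShareWithDotNet {
    public static void main(String[] args) throws Exception {
        // NCache client imports omitted; API assumed to mirror the .NET samples
        Cache cache = NCache.initializeCache("InteropCache");

        Employee emp = new Employee();
        emp.setName("John Doe");
        emp.setAge(35);
        emp.setSalary(75000L);

        // Stored through NCache's interoperable binary serialization; a .NET client
        // mapped to the same type id (1001) reads it back as
        // DataInteroperability.Employee
        cache.insert("Employee:2000", emp);
    }
}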

Finally, the best part in all of this is that you don’t have to write any serialization code or make any code changes to your application in order to use this feature in NCache. NCache has implemented a runtime code generation mechanism, which generates the in-memory serialization and de-serialization code of your interoperable classes at runtime, and uses the compiled form so it is super fast.

In summary, using NCache you can scale and boost your application performance by avoiding the extremely slow and resource hungry XML serialization.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


JSP Session Persistence and Replication with Distributed Cache

As you know, JSP applications have the concept of a Session object to handle multiple HTTP requests. This is because the HTTP protocol is stateless, and the Session is used to maintain a user’s state across multiple HTTP requests.

In a single web server configuration, there is no issue but as soon as you have a multi-server load balanced web farm running a JSP application, you immediately run into the issue of where to keep the Session. This is because a load balancer ideally likes to route each HTTP request to the most appropriate web server. But, if the Session is kept only on one web server then you’re forced to use the sticky session feature of the load balancer where it sends requests related to a given Session only to the web server where the Session resides.

This means one web server might get overwhelmed even when you have other web servers sitting idle, because the Session for that user resides on that web server. This of course hampers scalability. Additionally, keeping the Session object only on one web server also means loss of Session data if that web server goes down for any reason.

To avoid this data loss problem, you must have a mechanism where the Session is replicated to more than one web server. Here, the leading Servlet engines (Tomcat, WebLogic, WebSphere, and JBoss) all provide some form of Session persistence. They even include some form of Session replication, but all of them have issues. For example, file-based and JDBC persistence are slow and cause performance and scalability issues. Session replication is also very weak because it replicates all sessions to all web servers, thereby creating unnecessary copies of the Session when you have more than two web servers, even though fault tolerance can be achieved with only two copies.


In such situations, a Java distributed cache like NCache is your best bet to ensure that session persistence across multiple servers in the web cluster is done intelligently and without hampering your scalability. NCache has a caching topology called “Partition-Replica” that not only provides high availability and failover through replication but also provides large in-memory session storage through data partitioning. Data partitioning enables you to cache large amounts of data by breaking up the cache into partitions and storing each partition on a different cache server in the cache cluster.

The NCache Partition-Replica topology replicates the session data of each node to only one other node in the cluster. This approach eliminates the performance cost of replicating session data to all server nodes, without compromising reliability.

In addition, the session persistence provided by the web servers (Apache, Tomcat, WebLogic, and WebSphere) uses the resources and memory of the web cluster itself. This hinders your application performance because the web cluster nodes responsible for processing client requests are also handling the extra work of session replication and its in-memory storage. With NCache, you can instead run the cache on separate boxes outside your web cluster. This frees up web cluster resources, which the cluster can then use to handle more client requests.

For JSP session persistence, NCache has implemented a session module, NCacheSessionProvider, as a Java servlet filter. The NCache JSP servlet filter dynamically intercepts requests and responses and handles session persistence behind the scenes. And, you don’t have to change any of your JSP application code.

Here is a sample NCache JSP Servlet Filter configuration you need to define in your application deployment descriptor to use NCache for JSP Session persistence:


<filter>
  <filter-name>NCacheSessionProvider</filter-name>
  <filter-class>
    com.alachisoft.ncache.web.sessionstate.NSessionStoreProvider
  </filter-class>
  <init-param>
    <param-name>cacheName</param-name>
    <param-value>PORCache</param-value>
  </init-param>
  <init-param>
    <param-name>configPath</param-name>
    <param-value>/usr/local/ncache/config/</param-value>
  </init-param>
</filter>

<filter-mapping>
  <filter-name>NCacheSessionProvider</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
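Once the filter is registered, your servlets and JSP pages keep using the standard HttpSession API and NCache persists the session behind the scenes. The sketch below uses only the standard Servlet API; the servlet and attribute names are illustrative:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CartServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The session returned here is transparently persisted to NCache by the
        // NCacheSessionProvider filter configured above
        HttpSession session = request.getSession(true);

        Integer itemCount = (Integer) session.getAttribute("itemCount");
        if (itemCount == null) {
            itemCount = 0;
        }
        session.setAttribute("itemCount", itemCount + 1);

        response.getWriter().println("Items in cart: " + session.getAttribute("itemCount"));
    }
}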

Hence, NCache provides you a much better mechanism to achieve session persistence in a web cluster, along with a performance and scalability boost.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


Scaling your Java Spring Applications with Distributed Cache

Spring is a popular lightweight dependency injection and aspect-oriented container and framework for Java. It reduces the overall complexity of J2EE development and promotes high cohesion and loose coupling. Because of the benefits Spring provides, a lot of developers use it to create high traffic applications ranging from small to enterprise level.

Here is an example of a Spring Java application:

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import java.util.List;

public class MySampleApp {
    public static void main(String[] args) {
        ...
        // Beans such as "department" are defined in spring.xml
        ApplicationContext ctx = new ClassPathXmlApplicationContext("spring.xml");
        Department dept = (Department) ctx.getBean("department");

        List<Employee> employees = dept.getEmployees();
        for (int i = 0; i < employees.size(); i++) {
            Employee emp = employees.get(i);
            System.out.println("Employee Name: " + emp.getName());
        }
        ...
    }
}

But, these high traffic Spring applications face a major scalability problem. Although these applications can scale by adding more servers to the application server farm, their database server cannot scale in the same fashion to handle the growing transactional load.

In such situations, a Java distributed cache is your best bet to handle the database scalability problem. A Java distributed cache offloads your database by reducing those expensive database trips that are causing scalability bottlenecks. And, it also improves your application performance by serving data from an in-memory cache store instead of the database.

NCache is a Java distributed cache that has implemented a Spring cache provider. The NCache Spring provider introduces a generic Java cache mechanism with which you can easily cache the output of the CPU-intensive, time-consuming, and database-bound methods of your Spring application. This approach not only reduces the database load but also reduces the number of method executions and improves application performance.


The NCache Spring provider has a set of annotations, including @Cacheable, to handle cache related tasks. Using these annotations you can easily mark the methods whose results should be cached, along with cache expiration, key generation, and other strategies.

When a Spring method marked as @Cacheable is invoked, NCache checks the cache store to see whether the method has already been executed with the given set of parameters. If it has, the results are returned from the cache. Otherwise, the method is executed and its results are cached as well. That is how expensive CPU-, I/O-, and database-bound methods are executed only once and their results are reused without re-executing the method.

A Java distributed cache is essentially a key-value store, therefore each method invocation should translate to a suitable unique key for the access. To generate these cache keys, the NCache Spring provider uses a combination of the class name, method, and arguments. However, you can also implement your own custom key generator using the com.alachisoft.ncache.annotations.key.CacheKeyGenerator interface.
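As a rough illustration of that default key composition (class name + method + arguments), here is a small, self-contained sketch. A real custom generator would implement the CacheKeyGenerator interface mentioned above, whose exact method signature is defined by NCache; this helper only shows the idea:

import java.lang.reflect.Method;
import java.util.Arrays;

public class SimpleKeyComposer {

    // Builds a key such as "com.acme.MessageService.findAllMessages[10, true]"
    public static String composeKey(Class<?> targetClass, Method method, Object... args) {
        StringBuilder key = new StringBuilder();
        key.append(targetClass.getName())
           .append('.')
           .append(method.getName())
           .append(Arrays.deepToString(args));
        return key.toString();
    }
}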

Here are the steps to integrate NCache cache provider in your application: 

  1. Add Cache Annotations: Add the NCache Spring annotation to the methods that require caching. Here is a sample Spring method using the NCache annotation:

     @Cacheable(cacheID = "r2-DistributedCache",
                slidingExpirator = @SlidingExpirator(expireAfter = 15000))
     public Collection<Message> findAllMessages()
     {
         ...
         Collection<Message> values = messages.values();
         Set<Message> result = new HashSet<Message>();
         synchronized (values) {
             Iterator<Message> iterator = values.iterator();
             while (iterator.hasNext()) {
                 result.add(iterator.next());
             }
         }
         ...
         return Collections.unmodifiableCollection(result);
     }

  2. Register NCache Spring Provider: Update your Spring application's spring.xml to register the NCache Spring provider as follows:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:oxm="http://www.springframework.org/schema/oxm"
xmlns:mvc="http://www.springframework.org/schema/mvc"
xmlns:ncache="http://www.alachisoft.com/ncache/annotations/spring"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd
http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
http://www.alachisoft.com/ncache/annotations/spring http://www.alachisoft.com/ncache/annotations/spring/ncache-spring-1.0.xsd">

<ncache:annotation-driven>
    <ncache:cache id="r1-LocalCache" name="myCache"/>
    <ncache:cache id="r2-DistributedCache" name="myDistributedCache"/>
</ncache:annotation-driven>

</beans>

Hence, by using NCache Spring cache provider you can scale your Spring applications linearly and can boost performance.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


Using NCache as Hibernate Second Level Java Cache

Hibernate is an object-relational mapping library for the Java language. It provides mapping from Java classes to database tables and shortens the overall development cycle. Because of the benefits Hibernate provides, more and more high-transaction applications are developed using it. Here is an example of Hibernate in a Java application.


import org.hibernate.*;
import java.util.Iterator;

public class HibernateSample
{
    ...
    public void listCustomers(SessionFactory factory)
    {
        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();

        Query query = session.createQuery("from Customer c");
        Iterator it = query.list().iterator();
        while (it.hasNext()) {
            Customer customer = (Customer) it.next();
            ...
        }
        tx.commit();
        session.close();
    }
}

But, these high traffic Hibernate applications are encountering a major scalability issue. Although they are able to scale at the application tier, their database or data storage is not able to scale with the growing transactional load.

Java distributed caching is the best technique to solve this problem because it reduces the expensive database trips that cause scalability bottlenecks. For this reason, Hibernate provides a caching infrastructure that includes a first-level and a second-level cache.

The Hibernate first-level cache provides a basic standalone (in-proc) cache which is associated with the Session object and is limited to the current session only. The problem with the first-level cache is that it does not allow sharing of objects between different sessions. If the same object is required by different sessions, each of them makes a database trip to load it, which increases database traffic and causes a scalability problem. Moreover, when the session is closed, all its cached data is lost, and next time you have to fetch it again from the database.

High traffic Hibernate applications that use only the first-level cache also face a cross-server cache synchronization problem when deployed in a web farm. In a web farm, each node runs a web server (Apache, Oracle WebLogic, etc.) with multiple instances of the worker process to serve requests. And, the Hibernate first-level cache in each worker process ends up with a different version of the same data cached directly from the database.


That is why Hibernate provides a second-level cache with a provider model. The Hibernate second-level cache allows you to plug in a third-party distributed (out-proc) caching provider to cache objects across sessions and servers. The second-level cache is associated with the SessionFactory object and is available to the entire application, instead of a single session.

When you enable the Hibernate second-level cache, you end up with two caches: the first-level cache and the second-level cache. Hibernate always tries to retrieve objects from the first-level cache first; if that fails, it tries the second-level cache; if that also fails, the objects are loaded directly from the database and cached along the way. This configuration significantly reduces database traffic because most of the data is served by the second-level distributed cache.
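Here is a minimal sketch of that lookup order in plain Hibernate code, using the SessionFactory from the earlier sample. It assumes the Customer entity is mapped as cacheable and a second-level cache provider (such as NCache, configured below) is enabled; the identifier value is illustrative:

Session first = factory.openSession();
// Not in any cache yet: loaded from the database and placed in the second-level cache
Customer c1 = (Customer) first.get(Customer.class, 1);
first.close(); // the first-level cache dies with this session

Session second = factory.openSession();
// A new session starts with an empty first-level cache, so Hibernate falls back to
// the second-level cache, finds the Customer there, and skips the database
Customer c2 = (Customer) second.get(Customer.class, 1);
second.close();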

NCache Java has implemented a Hibernate second-level caching provider by implementing org.hibernate.cache.CacheProvider. You can easily plug the NCache Java Hibernate distributed caching provider into your Hibernate application without any code changes. NCache allows you to scale your Hibernate application to multi-server configurations without the database becoming a bottleneck. NCache also provides all the enterprise-level distributed caching features like data size management, data synchronization across servers and with the database, etc.

You can plug in the NCache Java Hibernate caching provider by simply modifying your hibernate.cfg.xml and ncache.xml as follows:

<hibernate-configuration>
  <session-factory>
<property name = "cache.provider_class">
alachisoft.ncache.integrations.hibernate.cache.NCacheProvider,
alachisoft.ncache.integrations.hibernate.cache
</property>
  </session-factory>
</hibernate-configuration>

<ncache>
<region name = "default">
   <add key = "cacheName" value = "myClusterCache"/>
   <add key = "enableCacheException" value = "false"/>
   <class name = "hibernator.BLL.Customer">
<add key = "priority" value = "1"/>
<add key = "useAsync" value = "false"/>
<add key = "relativeExpiration" value = "180"/>
   </class>
</region>
</ncache>

Hence, by using NCache Java Hibernate distributed cache provider you can linearly scale your Hibernate applications without any code changes.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


Synchronize Distributed Cache with Database using CLR Stored Procedures

Distributed caching has become a very important part of any high-transaction application in order to ensure that the database does not become a scalability bottleneck. But, since a distributed cache keeps a copy of your application data, you must always ensure that it is kept synchronized with your database. Without this, the distributed cache has older stale data that causes data integrity problems.

SQL Server provides an event notification mechanism where a distributed cache like NCache can register itself for change notification through SqlCacheDependency and then receive notifications from SQL Server when the underlying data changes in the database. This allows NCache to immediately invalidate or reload the corresponding cached item, which keeps the cache always synchronized with the database.

However, SqlCacheDependency can become a very resource-intensive way of synchronizing the cache with the database. First of all, you have to create a separate SqlCacheDependency for each cached item, and this could easily go into the tens of thousands, if not hundreds of thousands. SQL Server uses internal data structures to maintain each SqlCacheDependency separately so it can monitor any data changes related to it. This consumes a lot of extra resources and can easily choke the database server.

Secondly, SQL Server fires a separate .NET event for each data change and NCache catches these events. These .NET events can be quite heavy and could easily overwhelm the network and degrade the overall performance of NCache and your application.

There is a better alternative. It involves writing a CLR stored procedure that connects to NCache from within SQL Server and directly updates or invalidates the corresponding cached item. You then call this CLR stored procedure from an update or delete trigger of your table. You can do this with SQL Server 2005 or 2008, and also with Oracle 10g or later, but only if it is running on Windows.

A CLR stored procedure is more resource-efficient because it does not create the data structures related to SqlCacheDependency, and it does not fire .NET events to NCache. Instead, it opens an NCache client connection and directly tells NCache whether to invalidate a cached item or reload it. This connection to NCache is highly optimized and is much faster and lighter than .NET events.

Below is an example of how to use a CLR stored procedure.

1. Copy log4net and protobuf-net from the Windows GAC to the NCache/bin/assembly/2.0 folder (choose 4.0 if the target platform is .NET 4.0).

2. Register NCache and the following assemblies in SQL Server, as shown below. In this example, we are using Northwind as the sample database.

use Northwind

alter database Northwind
set trustworthy on;
go

drop assembly SMdiagnostics
drop assembly [System.Web]
drop assembly [System.Messaging]
drop assembly [System.ServiceModel]
drop assembly [System.Management]

CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo
FROM N'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo
FROM N'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.Web.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY [System.Management] AUTHORIZATION dbo
FROM N'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.management.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo
FROM N'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo
FROM N'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll'
WITH permission_set = unsafe

CREATE ASSEMBLY NCache
FROM N'C:\Program Files\NCache\bin\assembly\2.0\Alachisoft.NCache.Web.dll'
WITH permission_set = unsafe

3. Open Visual Studio to write a stored procedure against NCache and create a SQL CLR Database project as shown below. Add a reference to the NCache assembly that you created in the last step (the CREATE ASSEMBLY NCache statement above); it will appear under SQL Server with the name "NCache".

Figure: Creating a SQL CLR Database project in Visual Studio

4. Write your stored procedure. Here is a code sample:

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void TestSProc(string cacheName)
    {
        SqlPipe sp = SqlContext.Pipe;

        try
        {
            sp.Send("Starting .....");

            if (string.IsNullOrEmpty(cacheName))
                cacheName = "mycache";

            // Connect to NCache and insert a test item into the cache
            Cache _cache = NCache.InitializeCache(cacheName);
            _cache.Insert("key", DateTime.Now.ToString());

            sp.Send("Test is completed ...");
        }
        catch (Exception exception)
        {
            // Report the failure back through the SQL pipe
            sp.Send("Error: " + exception.Message);
        }
    }
}

5. Enable CLR integration on the database as shown below:

sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO

6. Deploy the stored procedure from Visual Studio and test it.
7. After deploying the stored procedure, you need to place your stored procedure assembly in the C:\Program Files\NCache\bin\assembly\2.0 folder, because NCache does not resolve assembly references directly from the Windows GAC and needs them locally.

CLR-based stored procedures or triggers can greatly improve application performance compared to SqlCacheDependency, which is relatively slower and can be overwhelming for large data sets.

Download NCache Trial | NCache Details


How Compact Object Serialization Speeds up Distributed Cache?

Serialization transforms an object into a byte-stream so it can be moved out of a process either for persistence or to be sent to another process. And de-serialization is the reverse process that transforms a byte-stream back into an object.

And, unlike a stand-alone cache, a distributed cache must serialize objects so it can send them to different computers in the cache cluster. But, the serialization mechanism provided by .NET framework has two major problems:

1.  Very slow: .NET Serialization uses Reflection to inspect type information at runtime. Reflection is an extremely slow process as compared to precompiled code.

2.  Very bulky: .NET Serialization stores the complete class name, culture, assembly details, and references to other instances held in member variables, all of which makes the serialized byte-stream many times the size of the original object.

Since a distributed cache is used to improve your application performance and scalability, anything hampering this becomes very critical. And, the regular .NET Serialization is a major performance overhead in a distributed cache because thousands of objects need to be serialized every second before being sent to distributed cache for in-memory storage. And, any slowdown here becomes a slowdown for the distributed cache.

The other issue is that a bulky serialized byte-stream consumes 2-3 times extra space and reduces the overall storage capacity of a distributed cache. An in-memory store can never be as big as disk storage, which makes this an even more sensitive issue for a distributed cache.

To overcome .NET serialization problems, NCache has implemented a Compact Serialization Framework. In this framework, NCache stores two-byte type-ids instead of fully qualified assembly/class names. It further reduces the serialized byte-stream by only serializing the field values and excluding their type details. Finally, NCache Compact Serialization Framework avoids the use of .NET Reflection because of its overhead by directly accessing fields and properties of the instance object.

There are two ways to use NCache Compact Serialization in your application.

  1. Let NCache generate Compact Serialization code at runtime
  2. Implement an ICompactSerializable interface yourself

In this blog, I will stick to the first approach only. I’ll discuss the second approach in a separate blog.

Let NCache generate Compact Serialization code at runtime

Identify the types of objects you are caching and register them with NCache as Compact Serialization types, as shown in Figure 1. That is all you have to do, and NCache takes care of the rest.

Figure 1: Register Types for Compact Serialization with NCache

NCache sends the registered types to the NCache client at the time of initialization. Based on the received types, the NCache client generates the runtime code to serialize and deserialize each type. This runtime code is generated only once, at initialization, and is used over and over again. It runs much faster than Reflection-based serialization.

Hence, using NCache Compact Serialization you can efficiently utilize your distributed cache memory and can boost your application performance.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


Entity Framework Applications Using Distributed Cache

Entity Framework is an object-relational mapping engine that provides abstraction from underlying relational database and therefore greatly simplifies development. Because of these benefits, more and more data-centric and high transactional applications and services are developed with Entity Framework.

But, these high traffic applications are facing scalability problems. Although the application tier is scalable, their database or data storage cannot keep up with the growing number of transactions being thrown at it.

This is where a distributed cache comes in, because it allows you to cache data and reduce those expensive database trips that are causing scalability bottlenecks. But, Entity Framework does not provide an out-of-the-box way to use a distributed cache in your application. There are, however, two ways in which you can incorporate a distributed cache into your Entity Framework application. One is to modify your Entity Framework application code and make direct API calls to the distributed cache. The second is to use a distributed cache that has implemented a custom ADO.NET provider that incorporates caching behind the scenes.

Entity Framework has a public provider model for ADO.NET providers where you can write providers for third-party databases. NCache has implemented a custom Entity Framework ADO.NET provider of its own through which it is able to make calls to the NCache API. This custom Entity Framework ADO.NET provider intercepts all the database query calls and puts the result-sets of these queries in a distributed cache. Then, the NCache custom Entity Framework provider intercepts all subsequent query calls and simply returns the results from its distributed cache rather than making that expensive database trip. If the result-set for a query does not exist in the distributed cache, the query is executed against the database and its result-set is then put in the distributed cache.

And, the NCache custom Entity Framework provider also needs to ensure that data in the distributed cache is always consistent and synchronized with the database. For that, NCache uses the SqlCacheDependency provided in .NET. SqlCacheDependency registers a SQL query with SQL Server so that if any row in the data set represented by this query is changed in the database, SQL Server throws a .NET event notification to NCache. NCache catches this .NET event and removes the corresponding result-set from the distributed cache.

Figure 1 shows how NCache Entity Framework Provider plugs into an Entity Framework application.

Figure 1: NCache Entity Framework Provider being used

 You can integrate NCache custom Entity Framework ADO.NET provider in your application in just four simple steps:

  1. Replace default provider: Replace your application's default provider with the NCache Entity Framework provider in app.config/web.config and the .edmx file.
  2. Register NCache provider: Register your application in the NCache Entity Framework config (efcaching.config). In efcaching.config, you can easily specify the log level, expiration policies, etc. for your Entity Framework application.
  3. Run app in analysis mode: Run your application in analysis mode. In analysis mode, the NCache Entity Framework provider logs the Entity Framework queries executed by your application along with their frequency. Based on these logs, you can decide which Entity Framework queries you want to cache.
  4. Run app normally: Switch to caching mode and run your application.

Hence, by using NCache Entity Framework caching provider you can easily achieve linear scalability without changing your Entity Framework application code.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


How to Use LINQ for Searching Distributed Cache?

Distributed caching is becoming really popular among developers of high transaction applications because it improves your application’s performance and scalability. And, this popularity means that developers are caching more and more data in it which they also want to be able to search just like they are able to search relational databases.

But, one major limitation of many distributed caches is that they only provide (key, value) Hashtable interface to you. This means that for you to fetch any cached item, you must know its key. But in real life this is not always possible and in many cases you need to search for data based on other search criteria (e.g. “Give me all customers from New York”). So, if you’re not able to search a distributed cache, you’re likely to only cache data for which you always know their keys. And, this prevents you from caching a lot of data that would otherwise really boost your application’s performance and scalability.

Fortunately, NCache provides you a very powerful SQL-like querying capability (called Object Query Language or OQL) to let you search the cache based on object attributes and not just the keys. And, for .NET applications, NCache provides LINQ integration and allows you to search the cache through LINQ.

LINQ is a set of extensions to the .NET Framework that encompass language-integrated query, set, and transform operations. And, it connects the object world and data world and makes searching very easy and manageable. Moreover, LINQ allows you to integrate your own data storage to it.

NCache provides a LINQ plug-in. So, you can now issue LINQ queries from your .NET application, and behind the scenes the query runs against the NCache distributed cache and a result set is returned to your application. NCache integrates with LINQ through a class named “NCacheQuery”, which implements the IQueryable interface provided by the .NET Framework. Through this integration you can run LINQ queries on cached items.
Here is a source code example of a LINQ query:


using System;
using System.Linq;

namespace NCacheLINQ
{
    class Program
    {
        static void Main(string[] args)
        {
            // _cache is an already initialized NCache cache instance
            IQueryable<Product> products = new NCacheQuery<Product>(_cache);
            try
            {
                var result1 = from product in products
                              where product.ProductID > 10
                              select product;

                if (result1 != null)
                {
                    foreach (Product p in result1)
                    {
                        Console.WriteLine("ProductID : " + p.ProductID);
                    }
                }
                else
                {
                    Console.WriteLine("No record found.");
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }
}

In summary, querying in-memory collections was never this easy and manageable before LINQ, and NCache’s LINQ integration requires no code changes; all you need to do is add a new assembly reference and a namespace to your application.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


How SQLCacheDependency Synchronizes Distributed Cache with Database?

Distributed caching has become a popular way of improving .NET application performance and scalability. That is why developers are caching more and more data in the distributed cache. However, along with this come a few challenges. One important challenge is to ensure that data in the cache is always synchronized with the database. This is because the cache keeps a copy of the data that already exists in the database.

If you have multiple applications updating the same data in the database but not all of them have access to the distributed cache, you’ll end up with a situation where data in the cache is older and different than its counterpart in the database. And, while this may be okay for some reference type of data, it is definitely not acceptable for transactional data. Reference data is one that you read a lot but don’t modify very frequently (e.g. product catalog) while transactional data is something you read and modify frequently (e.g. customer or account data).

How do you ensure that the distributed cache stays synchronized with the database?

The answer is SqlCacheDependency. SqlCacheDependency is part of ASP.NET Cache (System.Web.Caching) and allows you to specify a dataset in the database with an SQL statement and then receive .NET event notifications from SQL Server 2005/2008 whenever your dataset is modified in the database.

NCache has internally incorporated SqlCacheDependency for the purpose of synchronizing cache with SQL Server 2005/2008 or Oracle database. To you, NCache provides a similar interface called SqlDependency that allows you to specify an SQL statement representing one or more rows in a given table that make up your cached item. NCache then internally uses SqlCacheDependency to establish a link with the database against these rows.

So, if your data is updated in the database by one of your applications, SQL Server fires a .NET event notification which NCache catches and removes the corresponding item from the distributed cache. This resolves data inconsistency issue of having two different copies of the same data. This is because when your application wants the same data next time, it doesn’t find it in the cache and is forced to retrieve the latest copy from the database which it then caches as well. This way, NCache ensures that the data in the cache is always consistent with the data in the database.

Here is a source code example of using SqlDependency of NCache that internally uses SqlCacheDependency:


public class Program {

    // A standard Load method that loads a single row from database
    public Customer LoadCustomer(Customer cust)
    {
	String key = "Customer:CustomerID:" + cust.CustomerID;

	Customer cust2 = (Customer)NCache.Cache.Get(key);
	if (cust2 != null)
	   return cust2;

	CustomerFactory custFactory = new CustomerFactory();

	// Load a single customer from the database
	// SELECT * FROM Customers WHERE CustomerID = 'ALFKI'
	custFactory.Load(cust);

	// Create a SqlCacheDependency for this item
	CacheItem item = new CacheItem(cust);
	item.Dependency = SqlDependencyFactory(connectionString,
	    "SELECT CustomerID FROM Customers WHERE CustomerID = '"
		+ cust.CustomerID + "'");

	// Store item in the cache along with SqlCacheDependency
	NCache.Cache.Insert(key, item);
	return cust;
    }

    // A query method
    public List<Customer> FindCustomers(String city)
    {
	String key = "List<Customer>:City:" + city;
	List<Customer> custList = (List<Customer>)NCache.Cache.Get(key);
	if (custList != null)
	    return custList;

	CustomerFactory custFactory = new CustomerFactory();

	// Load a list of customers from database based on a criteria
	// SELECT * FROM Customers WHERE City = 'San Francisco'
	custList = custFactory.FindByCity(city);

	// Create a SqlCacheDependency for this list of customers
	CacheItem item = new CacheItem(custList);
	item.Dependency = SqlDependencyFactory(connectionString,
	    "SELECT CustomerID FROM Customers WHERE City = '" + city + "'");

	// Store list of customers in the cache along with SqlCacheDependency
	NCache.Cache.Insert (key, item);
	return custList;
    }
}

In summary, SqlDependency feature of NCache allows you to synchronize cache with the database and maintain data integrity. You can now start caching all data without the fear of using stale data from the cache. And, of course, the more data you cache, the better your application performance and scalability becomes.

So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details


How to Improve ASP.NET Performance with Distributed Caching?

If your ASP.NET application only has a few users, you probably don’t care how fast or slow it is and it is probably giving you pretty good performance anyway. But, as you add more load to your ASP.NET application, the chances are quite high that ASP.NET performance will drop significantly. It might even grind to a halt if enough load is put on it. And, ironically, all of that happens just when your business is seeing more activity so the impact is even greater.

ASP.NET has become really popular for high traffic apps, and it is now common to see load balanced web farms of 10-20 servers, and in some cases even 50-100 servers. In these situations, ASP.NET performance is an even more pressing issue to resolve.

The main reason ASP.NET performance drops as you increase the load is your database, which cannot handle larger loads the way your ASP.NET web farm can. This is because you can add more servers to the ASP.NET web farm, but you cannot do the same with your database.

So, in these situations, your best bet is to use a distributed cache like NCache. NCache is in-memory so it is much faster than the database. And, NCache builds a cluster of cache servers and you can grow the cluster linearly just like the web farm. As a result, with NCache, your ASP.NET performance remains great even under extreme transaction loads.

You can use NCache in two ways:

1. ASP.NET Session State Storage

You can configure your ASP.NET application to store ASP.NET Session State in NCache instead of InProc, State Server, or SQL Server. Please note that no programming is needed here. You only modify your web.config as follows:

  <sessionState cookieless="false"
                regenerateExpiredSessionId="true"
                mode="Custom"
                customProvider="NCacheSessionProvider"
                timeout="20">
     <providers>
         <add name="NCacheSessionProvider"
              type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
              exceptionsEnabled="true" enableSessionLocking="true"
              emptySessionWhenLocked="false" sessionLockingRetry="-1"
              sessionAppId="NCacheTest" useInProc="false" enableLogs="false"
              cacheName="myReplicatedCache" writeExceptionsToEventLog="false"
              AsyncSession="false"/>
     </providers>
  </sessionState>

2. ASP.NET Application Data Cache

The other way is for you to cache application data in a distributed cache like NCache so that the next time your ASP.NET application needs this data, it finds it in the cache instead of going to the database. Here is a small code sample showing how to cache application data:

using Alachisoft.NCache.Web.Caching;

      ...

      Cache cache = NCache.InitializeCache("myCache");

      // Create a key to look up in the cache
      // The key will be like "Employee:EmployeeId:1000"
      string key = "Employee:EmployeeId:" + emp.EmployeeId.ToString();

      Employee employee = (Employee)cache[key];
      if (employee == null) {
            // Item not found in the cache; load it from the database
            employee = LoadEmployeeFromDb(emp.EmployeeId);

            // Now, add it to the cache for future reference
            cache.Insert(key, employee, null,
                         Cache.NoAbsoluteExpiration,
                         Cache.NoSlidingExpiration,
                         CacheItemPriority.Default, null);
      }

The more data you cache, the less you have to go to the database, and the faster your ASP.NET application becomes.

Download NCache Trial | NCache Details

How Does Cache Dependency Manage Data Relationships?

Distributed cache is becoming very popular because it is a powerful way to boost your application performance and scalability and handle extreme transaction load without slowing down. Both .NET and Java applications are using it more and more each day.

But, one challenge people face with a distributed cache is how to map and store relational data in the hash-table style (key, value) store that a distributed cache essentially is. Most caches today do not provide any mechanism to handle this. Today, I will discuss Cache Dependency, which ASP.NET Cache provides and which NCache has incorporated from day one.

Just like ASP.NET Cache, NCache lets you use Cache Dependency to specify a dependency between two cached items in the distributed cache. Cached item A depends on cached item B. And, if B is ever updated or removed from the distributed cache, A is automatically removed as well. This ensures that if there is a referential integrity constraint between A and B in the database, it is also honored in the distributed cache. You can also specify a cascading Cache Dependency where A depends on B and B depends on C. Then, if you update or remove C, both A and B are removed. A brief code example appears after the relationship scenarios below.

Cache Dependency lets you create one-to-one, one-to-many, and many-to-many relationships in the distributed cache. Here is how you would handle different scenarios:

One-to-one relationship

A has a one-to-one relationship with B. Add B without any Cache Dependency. Then, add A and specify a Cache Dependency on B. If A and B have a mutual dependency, then update B afterwards and specify a dependency on A.

One-to-many relationship

A has a one-to-many relationship with B. Add A first without any Cache Dependency. Then, add one or more B items and specify a Cache Dependency on the given A for each of them. This way, if A is updated or removed, all of the B items are removed automatically by NCache (a sketch of this scenario is shown after the code example below).

Many-to-many relationship

A and B have a many-to-many relationship with each other. Add one or more A items. Then, add one or more B items and specify a Cache Dependency on the appropriate A items. Finally, go back and update the A items to specify a Cache Dependency on the appropriate B items.

public void CreateDependencies(Cache cache)
{
   string keyB = "objectB-1000";
   ObjectB objB = new ObjectB();

   string keyA = "objectA-1000";
   ObjectA objA = new ObjectA();

   // "null" third argument means objB does not depend on anything
   cache.Add(keyB, objB, null,
             Cache.NoAbsoluteExpiration,
             Cache.NoSlidingExpiration,
             CacheItemPriority.Default, null, null);

   // The third argument specifies that objA depends on objB
   string[] ADependsOn = { keyB };
   cache.Add(keyA, objA, new CacheDependency(null, ADependsOn),
             Cache.NoAbsoluteExpiration,
             Cache.NoSlidingExpiration,
             CacheItemPriority.Default, null, null);

   // Removing "objB" automatically removes "objA" as well
   cache.Remove(keyB);
}
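
For the one-to-many scenario described earlier, the pattern is the same, just repeated for each dependent item. Here is a minimal sketch, assuming hypothetical Customer and Order classes and the same Cache API used in the example above; the key names are only illustrative.

public void CreateOneToManyDependencies(Cache cache, Customer cust, List<Order> orders)
{
   // Add the "one" side (the customer) first, without any dependency
   string custKey = "Customer:" + cust.CustomerID;
   cache.Add(custKey, cust, null,
             Cache.NoAbsoluteExpiration,
             Cache.NoSlidingExpiration,
             CacheItemPriority.Default, null, null);

   // Add each "many" side item (the orders) with a dependency on the customer
   string[] dependsOnCustomer = { custKey };
   foreach (Order order in orders)
   {
      string orderKey = "Order:" + order.OrderID;
      cache.Add(orderKey, order, new CacheDependency(null, dependsOnCustomer),
                Cache.NoAbsoluteExpiration,
                Cache.NoSlidingExpiration,
                CacheItemPriority.Default, null, null);
   }

   // Now, updating or removing the cached customer removes all of its cached orders
}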

So, NCache allows you to take advantage of Cache Dependency and specify data relationships in the distributed cache. Download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details

When to Use Client Cache with Distributed Caching?

A distributed cache is essential for any application that demands fast performance under extreme transaction loads. An in-memory distributed cache is faster than a database. And, it can provide linear scalability in handling greater transaction loads because it lets you easily add more servers to the cache cluster, something a database server cannot do.

Despite all these benefits, there is still one problem. In most cases a distributed cache is hosted on a set of dedicated cache servers across the network, so your application has to make network trips to fetch any data. And, this is not as fast as accessing data locally, especially from within the application process. This is where a client cache comes in handy.

In NCache, a client cache keeps a connection open to the distributed cache cluster and receives event notifications from the cache cluster whenever the client cache's data changes there. The distributed cache cluster knows which data items are being kept in which client cache, so event notifications are sent only to the relevant client cache instead of being broadcast to all client caches.

How Does Client Cache Work?

A client cache is nothing but a local cache on your web/application server, but one that is aware of the distributed cache and is connected to it. A client cache can either be in-process, meaning it exists inside your application process, or out-of-process. This allows a client cache to deliver much faster read performance than even the distributed cache, while ensuring that client cache data is always kept synchronized with the distributed cache.

[Figure: ncache-client-cache]

However, the distributed cache notifies the client cache asynchronously, after it has successfully updated data in the distributed cache cluster. This means there is technically a small window of time (in milliseconds) during which some of the data in the client cache is older than in the distributed cache. In most cases, this is perfectly acceptable to applications. But, in some cases, applications demand 100% accuracy of data.

So, to handle such situations, NCache also provides a pessimistic synchronization model for the client cache. In this model, every time the application tries to fetch anything from the client cache, the client cache first checks whether the distributed cache has a newer version of the same cached item. If it does, the client cache fetches the newer version from the distributed cache. This trip to the distributed cache has a cost, but it is still faster than fetching the cached item entirely from the distributed cache.
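
As an illustration of how this is typically set up, the snippet below sketches a client cache entry in NCache's client-side configuration file (client.ncconf). The attribute names shown here (client-cache-id, client-cache-syncmode) are assumptions based on my reading of the NCache documentation and may differ between NCache versions, so treat this purely as a sketch and consult the documentation for your version; no application code changes should be needed either way.

<!-- client.ncconf (sketch): bind a local client cache to the clustered cache.
     Attribute names may vary by NCache version; consult the NCache docs. -->
<cache id="myPartitionReplicaCache"
       client-cache-id="myClientCache"
       client-cache-syncmode="pessimistic"
       load-balance="true">
    <!-- cache server list and other client settings go here -->
</cache>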

When to Use a Client Cache?

So, having known all of this, the main question that comes to mind is when to use a client cache and when not to. Well, the answer is pretty straightforward. Use a client cache if you're doing a lot more reads than writes, and especially if you're reading the same items over and over again. But, if you're doing a lot of updates, or at least as many updates as reads (e.g. in the case of ASP.NET Session State or JSP Servlet session storage in NCache), then don't use a client cache, because updates are actually slower with a client cache: you're now updating two different caches, the client cache and the distributed cache.

So, NCache allows you to take advantage of client cache with a distributed cache. Download a fully working 60-day trial of NCache Enterprise and try it out for yourself.

Download NCache Trial | NCache Details

When to Use ASP.NET Cache in Web Farms?

ASP.NET has now become a really popular technology for web apps and more and more people are developing high traffic applications in it. And, to handle higher traffic, these ASP.NET apps are deployed in load balanced web farms where you can add more servers as your traffic load increases. So, it is very scalable except for one problem.

And, that problem is the database and your data storage which cannot scale in the same fashion to handle the higher traffic loads. So, what you get very quickly is a bottleneck where your ASP.NET application slows down and can even grind to a halt.

In such situations, data caching is an excellent way of resolving this database and data storage bottleneck. Caching allows you to store application data close-by and reduce those expensive database trips.

What is ASP.NET Cache?

ASP.NET Cache allows you to cache application data and is actually a fairly feature-rich cache including the following features:

  • Expirations: Automatic absolute and sliding expirations
  • CacheDependency: To manage data relationships in the cache
  • SqlCacheDependency: To synchronize cache with database
  • Callbacks: To be notified when items are updated in cache

Here is some sample code showing ASP.NET Cache usage.

using System.Web;
using System.Web.Caching;

// Obtain the ASP.NET Cache instance
Cache cache = HttpRuntime.Cache;

// Create a key to look up in the cache
// The key will be like "Employee:EmployeeId:1000"
string key = "Employee:EmployeeId:" + emp.EmployeeId.ToString();

Employee employee = (Employee)cache[key];
if (employee == null) {
    // Item not found in the cache; load it from the database
    employee = LoadEmployeeFromDb(emp.EmployeeId);

    // Now, add it to the cache for future reference
    cache.Insert(key, employee, null,
                 Cache.NoAbsoluteExpiration,
                 Cache.NoSlidingExpiration,
                 CacheItemPriority.Default, null);
}
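
The other features listed above (expirations, dependencies, callbacks) hang off the same Insert call. As a rough illustration, here is a sketch that caches an item with a 10-minute sliding expiration and a removal callback; the EmployeeCacheHelper class and OnEmployeeRemoved handler are just illustrative names, and the Employee type is assumed from the sample above.

using System;
using System.Web;
using System.Web.Caching;

public static class EmployeeCacheHelper
{
    // Cache an employee with a sliding expiration and a removal callback
    public static void CacheEmployee(string key, Employee employee)
    {
        HttpRuntime.Cache.Insert(key, employee, null,
                                 Cache.NoAbsoluteExpiration,
                                 TimeSpan.FromMinutes(10),    // sliding expiration
                                 CacheItemPriority.Default,
                                 OnEmployeeRemoved);          // removal callback
    }

    // Called by ASP.NET Cache when the item expires or is removed
    private static void OnEmployeeRemoved(string key, object value,
                                          CacheItemRemovedReason reason)
    {
        // For example, log the removal or schedule a reload of the item
    }
}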

ASP.NET Cache Limitations in Web Farms

Despite very useful caching features, ASP.NET Cache has some serious limitations. They are:

  • Does not synchronize across servers or worker processes: It does not synchronize across multiple servers or even multiple worker processes. So, you cannot use it in a web farm or even a web garden unless your data is read-only, whereas you need to cache all kinds of data, including data that changes fairly frequently.
  • Cache size limitation: You cannot grow the ASP.NET Cache beyond what one ASP.NET worker process can contain. For 32-bit systems, this is 1GB, and that includes application code as well. Even on 64-bit systems, the cache remains confined to a single process, so its size cannot scale.

Use ASP.NET Cache Compatible Distributed Cache

The way to work around these limitations of ASP.NET Cache is to use a distributed cache like NCache for web farms. NCache provides the same features as ASP.NET Cache, plus more. And, as a distributed cache, NCache easily synchronizes across multiple servers. Here are some benefits you get from NCache:

  1. Scales transaction load very nicely: You can keep adding more cache servers to the cache cluster as your web farm grows from 2 to 200 servers. NCache never becomes a bottleneck in handling more traffic.
  2. Scales data storage nicely: As you add more cache servers, your cache storage capacity grows due to the Partitioned Cache topology.
  3. Replicates data for reliability: You can ensure that no data loss occurs even if a server goes down because data is replicated to other servers.
  4. Dynamic self-healing cache cluster: This gives NCache 100% uptime. You can add or remove cache servers at runtime without stopping the cache or your application.

[Figure: asp-net-cache-blog-figure1]

Well, if you have an ASP.NET application running in a web farm, take a look at NCache and see how it will help improve your application’s performance and scalability. Here are some useful links for NCache:

 Download NCache Trial  |  NCache Details

Configure ASP.NET Session State for Web Farms

There is absolutely no doubt about it. ASP.NET has come of age, and a good majority of ASP.NET applications are now high traffic and mission-critical. That means you cannot afford unscheduled downtime, whether of the entire website or of some of its servers, where many users get bumped out. These downtimes cost you dearly in lost revenue and in a damaged reputation that is hard to repair.

ASP.NET Session State storage, if not handled correctly, can cause unscheduled downtime. Microsoft offers four storage options for ASP.NET Session State:

  • InProc: Sessions kept inside worker process
  • StateServer: Session kept in a separate process
  • SqlServer: Session kept in SQL Server
  • Custom: Session kept in a third-party custom store

Neither InProc nor StateServer can replicate ASP.NET Session State to more than one server, so they cause session data loss if any web server goes down. In fact, if you have a single StateServer for the entire website and it goes down, you're totally hosed because your entire website will go down.

SqlServer is the third storage option for ASP.NET Session State, and it does provide server redundancy and data replication because you can build a database cluster, either a mirrored or a load-balanced one.

But, it is expensive to set up SQL Server clusters, and there are cheaper and more viable alternatives available. Additionally, SQL Server (like all relational databases) was designed to store structured relational data and not BLOBs, whereas ASP.NET Session State is stored in SQL Server as a BLOB. So, not only are the sessions slow to access, but the database quickly becomes a bottleneck as you try to scale your application.

A considerably better alternative to all of this is to use the Custom storage option of ASP.NET Session State and use an in-memory distributed cache (NCache) as your ASP.NET Session State storage. NCache replicates sessions across multiple servers so if any one server goes down, there’s no loss of session data. NCache is also much faster to access than SQL Server because it is in-memory. Finally, NCache allows you to easily scale the cache cluster as your web farm size grows. Simply add more cache servers to the cluster so there is never a bottleneck.

And, best of all, there is no programming needed to use NCache for ASP.NET Session State storage. Simply modify your web.config to specify the following:


<assemblies>
    <add assembly="Alachisoft.NCache.SessionStoreProvider, Version=4.4.0.0, Culture=neutral, PublicKeyToken=CFF5926ED6A53769"/>
</assemblies>

<sessionState cookieless="false"
              regenerateExpiredSessionId="true"
              mode="Custom"
              customProvider="NCacheSessionProvider"
              timeout="20">
    <providers>
        <add name="NCacheSessionProvider"
             type="Alachisoft.NCache.Web.SessionState.NSessionStoreProvider"
             sessionAppId="NCacheTest" enableLogs="false"
             cacheName="myReplicatedCache"/>
    </providers>
</sessionState>

Well, if you have an ASP.NET application running in a web farm, take a look at NCache and see how it will improve your ASP.NET Session State storage. Here are some useful links for NCache:

Download NCache Trial  |  NCache Details
