Many organizations today use a mix of .NET and Java technologies to develop high-traffic applications. At the same time, these organizations often need to share data at runtime between their .NET and Java applications.
One way to share data is through the database, but that is slow and doesn’t scale well. A much better approach is to use an in-memory distributed cache as a common data store between multiple applications: it is fast and scales linearly.
As you know, Java and .NET types are not compatible, so you end up transforming the data into XML for sharing. Moreover, most distributed caches either provide no built-in mechanism for sharing data between .NET and Java applications or offer only XML-based data sharing. If a cache provides no built-in data sharing mechanism, you have to define the XML schema yourself and use a third-party XML serializer to construct and read all the XML data.
But XML serialization is an extremely slow and resource-hungry process. It involves XML validation, parsing, and transformations, all of which hamper application performance and consume extra memory and CPU.
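You can see the size overhead of XML for yourself with a small, self-contained Java sketch. It serializes a simple Employee bean (modeled on the example used later in this article) once with the JDK's built-in `XMLEncoder` and once with native binary serialization, then compares the resulting byte counts. The class and values here are illustrative, not taken from NCache.

```java
import java.beans.XMLEncoder;
import java.io.*;

public class SerializationCost {
    // Simple Employee bean; fields mirror the article's example.
    public static class Employee implements Serializable {
        private int age;
        private String name;
        private long salary;
        public Employee() {}
        public Employee(int age, String name, long salary) {
            this.age = age; this.name = name; this.salary = salary;
        }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public long getSalary() { return salary; }
        public void setSalary(long salary) { this.salary = salary; }
    }

    public static void main(String[] args) throws IOException {
        Employee e = new Employee(30, "John Doe", 90000L);

        // XML serialization: produces a verbose, text-based stream
        // that must be parsed and validated on the reading side.
        ByteArrayOutputStream xmlOut = new ByteArrayOutputStream();
        try (XMLEncoder enc = new XMLEncoder(xmlOut)) {
            enc.writeObject(e);
        }

        // Native binary serialization: more compact, no text parsing on read.
        ByteArrayOutputStream binOut = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(binOut)) {
            oos.writeObject(e);
        }

        System.out.println("XML bytes:    " + xmlOut.size());
        System.out.println("Binary bytes: " + binOut.size());
    }
}
```

Even for this three-field object, the XML stream is several times larger than the binary one, and the gap widens once parsing and validation costs are added per operation.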
A distributed cache is, by design, meant to improve your application's performance and scalability. It lets your applications cache their data and avoid the expensive database trips that cause scalability bottlenecks. XML-based data sharing works against these goals: as the transaction load on your application grows, XML manipulation itself becomes a performance bottleneck.
A much better way is to share data between .NET and Java applications at the binary level, with no XML transformations at all. NCache is a distributed cache that provides runtime data sharing between .NET and Java applications through binary serialization.
How does NCache provide runtime data sharing between .NET and Java?
Well, before that you need to understand why native .NET and Java binary serialization are not compatible. Java and .NET each have their own binary serialization format, which interprets objects in its own way, and the two platforms have entirely different type systems. Moreover, the serialized byte stream of an object embeds the data type details as fully qualified names, which again differ between .NET and Java. This, too, hinders data type compatibility between .NET and Java.
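You can observe this on the Java side with a short self-contained snippet: the bytes produced by `ObjectOutputStream` literally contain the fully qualified Java class name as text, which a .NET deserializer has no way to resolve. The class name here is illustrative.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class StreamInspector {
    // Illustrative class; any Serializable type shows the same behavior.
    static class Employee implements Serializable {
        int age = 30;
        String name = "John Doe";
        long salary = 90000L;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(out)) {
            oos.writeObject(new Employee());
        }
        // Decode the raw bytes as Latin-1 so the embedded class-name
        // string becomes visible: the stream carries the fully qualified
        // Java type name, which is meaningless to a .NET deserializer.
        String raw = new String(out.toByteArray(), StandardCharsets.ISO_8859_1);
        System.out.println(raw.contains("StreamInspector$Employee")); // prints true
    }
}
```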
To handle this incompatibility, NCache has implemented its own interoperable binary serialization that is common to both .NET and Java. NCache's interoperable binary serialization identifies objects by type-ids that are consistent across .NET and Java, instead of by fully qualified names, which are technology specific. This approach not only provides interoperability but also reduces the size of the generated byte stream. Secondly, NCache's interoperable binary serialization implements a custom protocol that generates the byte stream in a format that both its .NET and Java implementations can easily interpret.
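To make the idea concrete, here is a toy Java sketch of type-id based serialization. It is illustrative only and is not NCache's actual wire format: both sides simply agree that type-id 1001 means an Employee whose fields are written in a fixed order (age, name, salary), so no class name ever appears in the stream.

```java
import java.io.*;

// Toy sketch of type-id based serialization (illustrative only; not
// NCache's actual protocol). Both sides agree that type-id 1001 means
// Employee with fields in the order: age, name, salary.
public class TypeIdSketch {
    static final short EMPLOYEE_TYPE_ID = 1001;

    static byte[] writeEmployee(int age, String name, long salary) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeShort(EMPLOYEE_TYPE_ID); // compact type-id, no class name
        out.writeInt(age);
        out.writeUTF(name);
        out.writeLong(salary);
        return buf.toByteArray();
    }

    static void readEmployee(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        short typeId = in.readShort();
        if (typeId != EMPLOYEE_TYPE_ID) {
            throw new IOException("unknown type-id " + typeId);
        }
        // Fields are read back in the agreed order.
        System.out.printf("age=%d name=%s salary=%d%n",
                in.readInt(), in.readUTF(), in.readLong());
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = writeEmployee(30, "John Doe", 90000L);
        System.out.println("stream size: " + bytes.length + " bytes");
        readEmployee(bytes);
    }
}
```

Because only a 2-byte type-id plus the raw field values go on the wire, the whole record fits in 24 bytes here, versus the hundreds of bytes an XML representation of the same object would need.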
Here is an example of an NCache config.ncconf with a data-interoperable class mapping:
<cache-config name="InteropCache" inproc="False" config-id="0" last-modified="" type="clustered-cache" auto-start="False">
  ...
  <data-sharing>
    <type id="1001" handle="Employee" portable="True">
      <attribute-list>
        <attribute name="Age" type="int" order="1"/>
        <attribute name="Name" type="java.lang.String" order="2"/>
        <attribute name="Salary" type="long" order="3"/>
        <attribute name="Age" type="System.Int32" order="4"/>
        <attribute name="Name" type="System.String" order="5"/>
        <attribute name="Salary" type="System.Int64" order="6"/>
      </attribute-list>
      <class name="jdatainteroperability.Employee:0.0" handle-id="1" assembly="jdatainteroperability.jar" type="java">
        <attribute name="Age" type="int" order="1"/>
        <attribute name="Name" type="java.lang.String" order="2"/>
        <attribute name="Salary" type="long" order="3"/>
      </class>
      <class name="DataInteroperability.Employee:188.8.131.52" handle-id="2" assembly="DataInteroperability, Version=184.108.40.206, Culture=neutral, PublicKeyToken=null" type="net">
        <attribute name="Age" type="System.Int32" order="1"/>
        <attribute name="Name" type="System.String" order="2"/>
        <attribute name="Salary" type="System.Int64" order="3"/>
      </class>
    </type>
  </data-sharing>
  ...
</cache-config>
As a result, NCache can serialize a .NET object and deserialize it in Java, as long as a compatible Java class is available. This binary-level serialization is more compact and much faster than any XML transformation.
Finally, the best part in all of this is that you don’t have to write any serialization code or make any code changes to your application in order to use this feature in NCache. NCache implements a runtime code-generation mechanism that generates the in-memory serialization and deserialization code for your interoperable classes at runtime and uses the compiled form, so it is very fast.
In summary, NCache lets you scale and boost your application's performance by avoiding extremely slow and resource-hungry XML serialization.
So, download a fully working 60-day trial of NCache Enterprise and try it out for yourself.