The Attack of Oracle Guest

Last October I published a post that identified the features that both JBoss Data Grid and Oracle Coherence provide (link). My goal was to establish a baseline for the features that a data grid should provide. It was not to state that one data grid was better than the other. Little did I know that an Oracle employee would respond by attacking Red Hat, its engineers, and me.

Is it fear? Is it hostility? I don’t know.

I have engaged in discussions with competitors before. Roman and I engaged in a competitive discussion in response to one of my posts comparing IBM WebSphere and JBoss EAP (link). However, we both conducted ourselves in a professional manner. I’ve engaged in competitive discussions with Spring evangelists, but we focused on the technology.

I always enjoy reading the discussions between Cameron, Nikita, and Nati on TheServerSide. I find their discussions to be insightful. They conduct themselves in a professional manner. To me, it looks like they respect each other and they respect each other’s products.

To be fair, this is just a single anonymous visitor. If they had not left a comment while connected to Oracle’s network, I would not have known that they work for Oracle. However, I would have inferred it.

Let the show begin.

Oracle Guest

I’m also interested in this question too RK. JDG lack of references that could corroborate performance majority against Coherence. Coherence in the other hand has a lot of public cases that shows how scalable, reliable and fast it is. It is the word’s first in-memory computing platform of the world, so this blog doesn’t offer credibility at all, mostly because Shane is a marketing guy from Red Hat.

He’s just using a old marketing technique to improve reliability of their offers product, comparing it with another one which is leader in its industry, like Coherence. Comparing with Coherence would pass the idea of “JDG is so good just like Coherence, so instead of buying Coherence, buy from Red Hat” but in fact it is not true. JDG should implement A LOT OF features to be comparable with Coherence.


Me

I published the results of a performance test (JDG 6.0.1) last December (link). I have written a technical white paper that includes the results of a number of performance tests (JDG 6.0.1). However, it is awaiting publication. I expect it to be made available via the Red Hat Customer Portal. In addition, I will be publishing the results of a few performance tests (JDG 6.1), executed on better hardware, on How to JBoss within the next two weeks.

I executed the performance tests with RadarGun (link), an open source project for data grid performance testing. When I published the results, I provided both the RadarGun and the JDG configuration files. The best way for an organization to select a data grid based on performance and reliability is to configure and execute their own performance tests based on their own requirements in their own controlled environment.
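To give a sense of what such a test measures, here is a minimal, hypothetical put / get micro-benchmark. It is not RadarGun, and the class name, payload size, and read / write ratio below are arbitrary choices of mine; a plain ConcurrentHashMap stands in for the data grid client.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical micro-benchmark sketch: rough put/get throughput against a
// local ConcurrentHashMap. A real evaluation (e.g. with RadarGun or YCSB)
// would target the data grid itself and use your own key sizes, value sizes,
// thread counts, and read/write ratios.
public class GridBenchSketch {

    private static final int ENTRIES = 100_000;
    private static final int OPERATIONS = 1_000_000;

    public static void main(String[] args) {
        Map<Integer, byte[]> cache = new ConcurrentHashMap<>();
        byte[] value = new byte[1024]; // 1 KB payload, an arbitrary choice

        // Preload the cache so reads have something to hit.
        for (int i = 0; i < ENTRIES; i++) {
            cache.put(i, value);
        }

        long start = System.nanoTime();
        for (int i = 0; i < OPERATIONS; i++) {
            int key = ThreadLocalRandom.current().nextInt(ENTRIES);
            if (i % 5 == 0) {          // 20% writes, 80% reads
                cache.put(key, value);
            } else {
                cache.get(key);
            }
        }
        long elapsedNanos = System.nanoTime() - start;

        double seconds = elapsedNanos / 1_000_000_000.0;
        System.out.printf("%d ops in %.2f s (%.0f ops/s, avg %.1f us/op)%n",
                OPERATIONS, seconds, OPERATIONS / seconds,
                (elapsedNanos / 1000.0) / OPERATIONS);
    }
}
```

The numbers such a toy produces say nothing about a distributed grid. The point is only that the workload parameters (payload size, read / write ratio, key distribution) are explicit and reproducible, which is exactly what the published configuration files provide.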

Oracle Guest

Those results compares JDG against Terracotta from Software AG, not with Coherence from Oracle. You cannot say at all that JDG is better than Coherence because you’ve never tested. Again, not reliable statements coming from you. You’ve tried to use a Terracotta comparison to generalize JDG performance results. Lets call Oracle, VMware, IBM, Gigaspaces and TIBCO to participate of the tests.

Me

I have never stated that JDG performs better than Oracle Coherence. Thus, I am unaware of these “not reliable statements”. Could you point them out? You are welcome to call Oracle, VMware, IBM, GigaSpaces, and TIBCO. They are welcome to use my RadarGun and JDG configuration files to configure and execute their own performance tests with RadarGun. This is essentially what I did after coming across the performance test results published by Terracotta. I simply used the parameters that they made available. As I mentioned previously, a number of organizations evaluating JDG are doing just that. They are executing performance tests against both JDG and Oracle Coherence using RadarGun or YCSB (Yahoo! Cloud Serving Benchmark).


Me

Tangosol Coherence was an innovative product in its day, but that day was several years ago. I do not question that it remains reliable. However, there have been a number of advancements in distributed systems over the past few years. JBoss Data Grid brings together the reliability of the previous generation of data grids and the innovation of the next generation of data grids.

Oracle Guest

For some unique features of Coherence like its non-blocking I/O TCP/IP network based on TCMP, which allow it to achive better results with distributed transactions, fail-over detection (the fastest of the industry) WAN replication with latency issues due geographical distribution and the HTTP Session offload from AppServers. Not mentioning integration with A LOT OF AppServers like WebLogic, GlassFish, Websphere, Tomcat, IIS, Resin and even your JBoss AS. JDG only gives support for which is from Red Hat. What a nice example of being “open” hãm ?! :)

Me

The TCMP features that you listed are not unique to TCMP. They are provided by JGroups as well. Those features include non-blocking I/O (NIO), failure detection, and cross site (WAN) replication. I would hope that Oracle Coherence*Web would support both Oracle WebLogic and Oracle GlassFish. Is that really a feature? If so, Red Hat provides it as well. JDG supports both JBoss EAP and JBoss EWS (Apache Tomcat). However, there is no reason for an organization to use JDG or Oracle Coherence*Web for session replication with IBM WebSphere.
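For readers unfamiliar with JGroups, here is a minimal sketch, assuming the JGroups 3.x API and its default protocol stack, of joining a cluster and reacting to membership changes, which is how failure detection surfaces to application code. The cluster name is hypothetical; the failure detection protocols (FD_SOCK / FD_ALL) and the NIO transport live in the protocol stack configuration rather than in the application.

```java
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

// Minimal JGroups sketch (3.x-era API): join a cluster with the default
// protocol stack and log membership changes as nodes join, leave, or fail.
public class ClusterNodeSketch {

    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel(); // default stack; pass an XML file to customize
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void viewAccepted(View view) {
                // Called whenever a node joins, leaves, or is suspected and removed.
                System.out.println("New view: " + view);
            }

            @Override
            public void receive(Message msg) {
                System.out.println("Received: " + msg.getObject());
            }
        });

        channel.connect("demo-cluster"); // hypothetical cluster name
        channel.send(new Message(null, "hello from " + channel.getAddress()));

        Thread.sleep(10_000);
        channel.close();
    }
}
```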


Me

Red Hat public references include both Chicago Board Options Exchange (CBOE) and Cisco, and they have both presented at Red Hat Summit / JBoss World. I can’t think of an environment with higher demands for both performance and reliability than financial trading. The Pentaho BI Platform / Server includes a plugin for Infinispan (link). There is no Oracle Coherence plugin.

Oracle Guest

Only this? Coherence has thousands of customer references, including mission critical ones that for years NEVER, I mean, NEVER restarted their servers. Come on, you can do better than this. Red Hat (you) should be a little bit more humble when talking about leaders like Oracle. Someday Red Hat will be a huge company, I don’t doubt that, but that didn’t happened so far and will take some time.

Me

Can you point out a list that includes thousands of public customer references for Oracle Coherence? The only list that I found includes 39 customer references, and that list includes duplicates (link). You state that thousands of Oracle Coherence customers have NEVER restarted their servers. That is a bold claim. Do you have evidence to support such a claim? After all, servers may be restarted to upgrade the hardware and / or operating system. That, and enterprise software typically has a finite life cycle. Has not just one of those thousands of Oracle Coherence customers ever upgraded their original version to the latest version? I suspect that you and I have different interpretations of “huge company”. I find it ironic that you demand humility while showing disrespect.


Me

Are you stating that because the company you work for (Oracle) productized (well, acquired) a data grid before the company I work for (Red Hat) did, and because my role is now in marketing, I lack credibility in the data grid domain? I would advise against such a statement. My technical knowledge of data grids is second to none, and it is not derived from my role in marketing. In my previous role, I worked in a developer / architect capacity with a number of enterprise organizations in the financial, telecommunications, and media sectors to integrate data grids into demanding environments.

Oracle Guest

Oh yes? Give me examples of data grid technologies you’ve worked with, scenarios of data partitioning and JVM tuning you’ve implemented for, entity domain versioning strategies you’ve designed it, hashCode algorithms strategies that you’ve proposed for a complex based key node, examples of KPIs that you retrieved from JMX and from the DG, and of course, examples of the following DG scenarios: average latency less than 600 microseconds, 5k TPS or higher considering a transaction with a minimum of 15KB of size, client applications both based on Java, C++, .NET and “the rest of world” that could be accessed with REST or SOAP, projects with more than 20K hours of duration (real one projects) instead of stupid POCs, usage of at least three serious data grids technologies including Coherence, GemFire, Websphere eXtreme Scale, Gigapaces, TIBCO ActiveSpaces, etc.

Me

I do not question your knowledge and experience, nor am I going to. I am dumbfounded as to why you feel justified in questioning mine.

I look at it like this. You have pilots, and you have mechanics. You have users, and you have engineers. A pilot knows how the controls work; a mechanic knows how the parts work. When it comes to JDG, I have been a full-time pilot and a part-time mechanic. However, the activities that you have mentioned are those of a user, not of an engineer. Further, they are not specific to data grids. It’s one thing to talk about metrics, latency, and throughput. It’s another to talk about concurrency, algorithms, and how distributed systems work.

JVM tuning. I have posted a handful of notes on both OS and JVM tuning (link / link / link). Instead of talking about JVM tuning, let’s talk about implementations of ConcurrentMap (link).

JMX. I have monitored and analyzed the performance of JDG with JBoss Operations Network, in-house tools, and BTrace. Here is a list of JMX attributes and operations for JDG (link).

An average latency of 600 microseconds is not particularly impressive in the financial trading industry. Nor is 5,000 transactions per second. Did I not mention that I collaborated with their engineers and co-presented with them at Red Hat Summit / JBoss World? Instead of talking about latency and throughput, let’s talk about data structures and eviction algorithms (link).

I’ll be honest: I have not worked on projects that required integration in a heterogeneous environment. Those that have, have done so with REST and memcached. Oracle Coherence doesn’t support the memcached protocol, does it? Instead of talking about REST and SOAP, let’s talk about local / remote transaction contexts and the number of remote procedure calls (RPC) required for optimistic / pessimistic locking.

Partitioning and hashing. JDG has implemented consistent hashing and virtual nodes, a modern solution. It uses an implementation of the excellent MurmurHash3 algorithm (link). It does not rely on a dated implementation based on centralized and / or manual hashing. Does Oracle Coherence? Instead of talking about hashing, let’s talk about vector clocks.
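Since I brought up vector clocks, here is a minimal sketch of one, assuming per-node integer counters keyed by node name. The class and method names are mine for illustration; real data grids and NoSQL stores wrap something like this in versioned cache entries and conflict resolution policies.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal vector clock sketch: one counter per node, merged by taking the
// per-node maximum. "Happens-before" holds when every counter in one clock
// is <= the corresponding counter in the other and at least one is strictly
// smaller; otherwise the two updates are concurrent (a potential conflict).
public class VectorClockSketch {

    private final Map<String, Long> counters = new HashMap<>();

    /** Record a local event on the given node. */
    public void increment(String node) {
        counters.merge(node, 1L, Long::sum);
    }

    /** Merge another clock into this one (e.g. on receiving a replica update). */
    public void merge(VectorClockSketch other) {
        other.counters.forEach((node, value) -> counters.merge(node, value, Math::max));
    }

    /** True if this clock happened strictly before the other. */
    public boolean happenedBefore(VectorClockSketch other) {
        boolean strictlySmaller = false;
        for (Map.Entry<String, Long> e : counters.entrySet()) {
            long mine = e.getValue();
            long theirs = other.counters.getOrDefault(e.getKey(), 0L);
            if (mine > theirs) {
                return false;
            }
            if (mine < theirs) {
                strictlySmaller = true;
            }
        }
        // Counters present only in the other clock also make it "later".
        for (String node : other.counters.keySet()) {
            if (!counters.containsKey(node)) {
                strictlySmaller = true;
            }
        }
        return strictlySmaller;
    }

    @Override
    public String toString() {
        return counters.toString();
    }
}
```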

Let’s talk about rebalancing and push / pull implementations.
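And to make the consistent hashing and rebalancing point concrete, here is a small, self-contained sketch of a hash ring with virtual nodes. It is not Infinispan's implementation, which uses MurmurHash3 and segment-based ownership; MD5 is used below only because it ships with the JDK, and the node names are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

// Illustrative consistent-hash ring with virtual nodes. Each server is
// placed on the ring many times; a key is owned by the first virtual node
// found walking clockwise from the key's hash.
public class ConsistentHashSketch {

    private final SortedMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodesPerServer;

    public ConsistentHashSketch(int virtualNodesPerServer) {
        this.virtualNodesPerServer = virtualNodesPerServer;
    }

    public void addServer(String server) {
        for (int i = 0; i < virtualNodesPerServer; i++) {
            ring.put(hash(server + "#" + i), server);
        }
    }

    public void removeServer(String server) {
        for (int i = 0; i < virtualNodesPerServer; i++) {
            ring.remove(hash(server + "#" + i));
        }
    }

    /** Walk clockwise from the key's position to the first virtual node. */
    public String serverFor(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        Long slot = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(slot);
    }

    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (d[i] & 0xFF);
            }
            return h;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        ConsistentHashSketch ch = new ConsistentHashSketch(100);
        ch.addServer("node-a");
        ch.addServer("node-b");
        ch.addServer("node-c");
        System.out.println("key42 -> " + ch.serverFor("key42"));
        ch.removeServer("node-b"); // only ~1/3 of keys should move
        System.out.println("key42 -> " + ch.serverFor("key42"));
    }
}
```

Because removing a server only removes that server's virtual nodes from the ring, only roughly 1/N of the keys are remapped when a node leaves, which is what keeps rebalancing inexpensive.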

Of course, that is the benefit of open source software. Users can be engineers. They can understand the implementation by studying the code. That is exactly what I did. I studied the code, I modified the code, I created and submitted patches, and I engaged in discussions with Red Hat engineers on implementation details. With proprietary software, users can only be users.

Asking how fast someone has flown will not reveal how much they know about planes.

Are you familiar with all of the projects that I have been on? I ask because I’m uncertain as to why you would describe them as “stupid POCs”. I do not think that the engineers at CBOE or any of the other organizations that I have collaborated with would appreciate you calling the work that they put into production “stupid POCs”. I know I don’t.


Me

How am I “improving the reliability” of JBoss Data Grid by identifying the functionality that both JBoss Data Grid and Oracle Coherence provide? Do you not believe that JBoss Data Grid has implemented A LOT OF features? The functionality described in this post represents nearly all of the features and benefits listed in the Oracle Coherence data sheet (link). JBoss Data Grid lacks a few features provided by Oracle Coherence. Oracle Coherence lacks a few features provided by JBoss Data Grid. Would you say that Oracle Coherence has not implemented A LOT OF features because it lacks a few features provided by JBoss Data Grid?

Oracle Guest

No! It just had integrated a couple open-source existing technologies into a new ecosystem and productized in a minimum level to take some money from the customers with subscriptions. Nothing really new, innovated, creative or respectable. The type of thing Red Hat likes to do: take existing technologies, combine them and make some money.

Me

What are these open-source, existing technologies that you are referring to? Could they be Infinispan? Of course they exist; Red Hat created them. It would be hard to productize something that does not exist. I find it both disrespectful and insulting to Red Hat engineers to describe their work as not new, innovative, creative, or respectable. You said “take existing technologies, combine them and make some money”. Interesting. Is that not what Oracle did with Tangosol Coherence? Oracle purchased its data grid. Red Hat created its data grid.

Oracle Guest

You really knows to play with words, starting with the usage of the word “nearly” :)

You forgot some key features that only Coherence has like: Elastic Data (off-heap and SSD storage of data), distributed GC against any type of storage and cache layout, ability to handle thousands of GB being able to handle even terabytes of data. Don’t came say to me that with on-heap allocation and regular JVM like HotSpot (or OpenJDK which is even worse) you could allocate terabytes of data. Native SDKs for C/C++ and .NET, Continuous Queries, support for many AppServers rather than only JBoss, integration with Java EE 6 using @Resource annotation, monitoring and management capabilities both integrated with the product and with other external tools like Enterprise Manager, integration with CEP world to enrich events and being the clustering enabled mechanism to handle fail-over scenarios, security features that could deal with scenarios of authentication, authorization, SSL and load-balancers (Eg: BigIP) integration. Pre-built filters and a powerful query language that could make easier for the developers to interact with the cache instead of force them to write Java code, support for Hibernate, Toplink, EclipseLink, GoldenGate, etc. Thousands of pre-implemented scenario patterns in the product and externally with the incubator strategy started by Tangosol and now owned by Oracle. Oh and of course: support for a high performance serialization strategy and a highly scalable TCP/IP implementation like TCMP. Not mentioning that support for InfiniBand based networks.

Me

I admit that off-heap storage is an interesting concept. However, I question how practical it is. I would not recommend storing a TB of data on a single node, with or without off-heap storage. I recommend partitioning physical servers into multiple virtual servers. It increases node portability while reducing the effects (e.g. rebalancing) of adding or removing nodes. JDG supports Java EE integration with both @Resource and @Inject. Does Oracle Coherence not support @Inject? JDG includes management and monitoring as well. JDG clients are smart clients. It is not practical to load balance requests. Can you point me to a list of these “thousands of pre-implemented scenario patterns”? JDG supports both high performance serialization (JBoss Marshalling) and a highly scalable TCP / IP implementation (JGroups). However, JDG does not require developers to write additional code to use high performance serialization, unlike Oracle Coherence and its Portable Object Format (POF) (link). I will give you InfiniBand, but it may not matter for long (link).
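To illustrate the @Resource / @Inject point, here is a rough CDI sketch that exposes an embedded cache through a producer so that application beans can simply inject it. The cache name and the bean names are hypothetical, and the exact producer wiring varies by Infinispan / JDG version.

```java
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

// Rough CDI sketch: a producer exposes an embedded Infinispan cache so that
// application beans can @Inject it. Cache name and configuration are
// hypothetical; exact APIs vary by Infinispan / JDG version.
@ApplicationScoped
class CacheProducer {

    private final EmbeddedCacheManager cacheManager = new DefaultCacheManager();

    @Produces
    public Cache<String, String> sessionCache() {
        return cacheManager.getCache("session-cache"); // hypothetical cache name
    }
}

@ApplicationScoped
class SessionService {

    @Inject
    private Cache<String, String> cache; // provided by the producer above

    public void remember(String sessionId, String user) {
        cache.put(sessionId, user);
    }

    public String lookup(String sessionId) {
        return cache.get(sessionId);
    }
}
```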

Update: I thought when you referred to off-heap storage that you were referring to off-heap memory. I had not realized that “off-heap storage” is the new marketing term for “disk storage”. It turns out that Elastic Data is marketing for “overflow to disk” (link). This was a feature provided by Ehcache 10 years ago. It’s a feature provided by JDG. You mentioned 1TB of data. However, Elastic Data can only support up to 100GB per node. It is not a persistence solution. It does not support eviction. It should not be used with aggregation (i.e. map / reduce) or entry processors.
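For what it is worth, the general “overflow to disk on eviction” pattern is simple enough to sketch. The code below is illustrative only, with a HashMap standing in for disk; it is not how JDG cache stores or Coherence Elastic Data are implemented, and the class name is mine.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative "overflow on eviction" sketch: a bounded LRU map that pushes
// evicted entries to a secondary store (a HashMap standing in for disk) and
// promotes them back into memory on access.
public class OverflowCacheSketch<K, V> {

    private final Map<K, V> overflowStore = new HashMap<>(); // stand-in for disk

    private final Map<K, V> hotEntries;

    public OverflowCacheSketch(int maxHotEntries) {
        this.hotEntries = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > maxHotEntries) {
                    overflowStore.put(eldest.getKey(), eldest.getValue());
                    return true; // evict from memory after writing "to disk"
                }
                return false;
            }
        };
    }

    public void put(K key, V value) {
        hotEntries.put(key, value);
    }

    public V get(K key) {
        V value = hotEntries.get(key);
        if (value == null) {
            value = overflowStore.remove(key);
            if (value != null) {
                hotEntries.put(key, value); // promote back into memory
            }
        }
        return value;
    }
}
```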

Oracle Guest

All of the “unique” features provided by JDG are not considered by real customers, independent analysts like Gartner, Forrester and IDC as really important. Are features that just align with the Red Hat strategy to force its entrance in the Big Data world, which on the other hand is a terrible strategy because to a real Big Data strategy Red Hat lacks A LOT OF technology stacks compared with real Big Data vendors like Oracle, EMC and IBM. Just an example, even Oracle does not consider Coherence as its Big Data strategy. When Oracle talk about Coherence, they’re talking about caching, grid and in-memory computing scenarios, which fits perfectly to elastic data grid technologies.

Me

I find it funny that you justify the lack of features by stating that they are not important to analysts. Will you go on record as stating that Oracle Coherence will never implement the JDG features that it lacks? Software evolves. New features become standard features. Personally, I think data grids will continue to incorporate features provided by NoSQL implementations. Eventual consistency comes to mind.

What is a “real” customer? Is there another kind of customer? What do these features have to do with Red Hat’s big data strategy? Did you see the Red Hat big data announcement (link)? It was quite clear on what is and what is not our big data strategy. Just as Coherence is not Oracle’s big data strategy, JDG is not Red Hat’s big data strategy. We too place our data grid in the context of in-memory distributed data and parallel processing.

I would expect a data grid to “fit perfectly to elastic data grid technologies” as it is, after all, a data grid, and one of the defining characteristics of a data grid is that it is elastic. However, there is some overlap between in-memory data grids, NoSQL, and big data platforms. They all distribute data and implement parallel processing. They provide data locality. As such, in-memory data grids fit perfectly inside of broader big data solutions.



