By Dana Gardner
December 3, 2012 06:00 AM EST
The latest BriefingsDirect IT trends discussion targets enterprise backup, why it’s broken, and how to fix it.
Nowadays the backup of enterprise information and associated data protection are fragmented, complex, and inefficient. But new approaches are helping to simplify the data-protection process, keep costs in check, and improve recovery speed and confidence.
Joining us to share insights on how data protection became such a mess -- and how new techniques are being adopted to gain comprehensive and standard control over the data lifecycle -- are John Maxwell, Vice President of Product Management for Data Protection at Quest Software, now part of Dell, and George Crump, Founder and Lead Analyst at Storage Switzerland, an analyst firm focused on the storage market. The chat is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: Why has something seemingly as straightforward as backup become so fragmented and disorganized?
Maxwell: Dana, I think it’s a perfect storm, to use an overused cliché. If you look back 20 years ago, we had heterogeneous environments, but they were much simpler. There were NetWare and UNIX, and there was this new thing called Windows. Virtualization didn’t even really exist. We backed up data to tape, and a lot of data was in terabytes, not petabytes.
Flash forward to 2012, and there’s more heterogeneity than ever. You have stalwart databases like Microsoft SQL Server and Oracle, but then you have new apps being built on MySQL. You now have virtualization, and, in fact, we're at the point this year where we're surpassing the 50 percent mark on the number of servers worldwide that are virtualized.
Now we're even starting to see people running multiple hypervisors, so it’s not even just one virtualization platform anymore, either. So the environment has gotten bigger, much bigger than we ever thought it could or would. We have numerous customers today that have data measured in petabytes, and we have a lot more applications to deal with.
And last, but not least, we now have more data that’s deemed mission critical, and by mission critical, I mean data that has to be recovered in less than an hour. Surveys 10 years ago showed that in a typical IT environment, 10 percent of the data was mission critical. Today, surveys show that it’s 50 percent and more.
Crump: I would dovetail into what he just mentioned about mission criticality. There are definitely more platforms, and that’s a challenge, but the expectation of the user is just higher. The term I use for it is IT is getting "Facebooked."
I've had many IT guys say to me, "One of the common responses I get from my users is, 'My Facebook account is never down.'" So there is this really high expectation on availability, returning data, and things of that nature that probably isn’t really fair, but it’s reality.
One of the reasons that more data is getting classified as mission critical is just that the expectation that everything will be around forever is much higher.
The other thing that we forget sometimes is that the backup process, especially a network backup, probably unlike any other, stresses every single component in the infrastructure. You're pulling data off of a local storage device on a server, it’s going through that server CPU and memory, it’s going down a network card, down a network cable, to a switch, to another card, into some sort of storage device, be it disk or tape.
So there are 15 things that happen in a backup and all 15 things have to go flawlessly. If one thing is broken, the backup fails, and, of course, it’s the IT guy’s fault. It’s just a complex environment, and I don’t know of another process that pushes on all aspects of the environment in one fell swoop like backup does.
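Crump's point about the fragility of that chain suggests a simple discipline: verify each hop before the job runs. As a purely illustrative sketch (the hostnames, port, and checks below are hypothetical, not anything from the discussion), a pre-flight script might walk the path and report which links are broken:

```python
# Hypothetical pre-flight check for the backup path Crump describes:
# every hop must pass, or the whole backup is considered at risk.
import socket

def can_resolve(host: str) -> bool:
    """DNS check: can we even find the target?"""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """TCP check: is the service listening?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed, illustrative endpoints -- not from the podcast.
checks = [
    ("source server reachable",  lambda: can_resolve("app01.example.com")),
    ("media server reachable",   lambda: can_resolve("backup01.example.com")),
    ("backup service listening", lambda: can_connect("backup01.example.com", 10000)),
]

failures = [name for name, check in checks if not check()]
if failures:
    print("Backup at risk; failed checks:", ", ".join(failures))
else:
    print("All path checks passed.")
```

The value isn't these particular checks; it's that a failure gets attributed to a specific hop instead of just "the backup failed."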
Gardner: So the stakes are higher, the expectations are higher, the scale and volume and heterogeneity are all increased. What does this mean, John, for those that are tasked with managing this, or trying to get a handle on it as a process, rather than a technology-by-technology approach?
Maxwell: There are two issues here. One, we expect today's storage administrator, or sysadmin, to be a database administrator (DBA), a VMware administrator, a UNIX sysadmin, and a Windows admin. That's a lot of responsibility, but that's the reality.
A lot of people think that they're going to have as deep a level of knowledge of how to recover a Windows server as they would an Oracle database. That's just not the case, and it's the same thing from a product and technology perspective.
Is there really such a thing as a backup product, a Swiss Army knife, that does the best of everything? Probably not, because being the best of everything means different things to different accounts. It means one thing for the small to medium-size business (SMB), and it could mean something altogether different for the enterprise.
We've now gotten into a situation where we have the typical IT environment using multiple backup products that, in most cases, have nothing in common. They have a lot of hands in the pot trying to manage data protection and restore data, and it has become a tangled mess.
Gardner: Before we dive a little bit deeper into some of these major areas, I'd like to just visit another issue that’s very top of mind for many organizations, and that’s security, compliance, and business continuity types of issues, risk mitigation issues. George Crump, how important is that to consider, when you look at taking more of a comprehensive or a holistic view of this backup and data-protection issue?
Crump: It's a really critical issue, and there are two ramifications. Probably the one that strikes fear in the heart of every CEO on the planet is all the disclosure laws that exist now that say that, when you lose a customer’s data, you have to let him know. Unfortunately, probably the only effective way to do that is to let everybody know.
I'm sure everybody listening to this podcast has gotten more than one letter already this year saying their Social Security number has been exposed, things like that. I can think of three or four I've already gotten this year.
So there's the downside of legally having to admit you made a mistake, and then there are the legal requirements of retaining information in case of a lawsuit. The traditional motivator was that, if a discovery motion was filed against me, I needed to be able to pull that information back. But the bigger motivator now is having to disclose that we lost data.
And there's a new one coming in. We're hearing about big data, analytics, and things like that. All of that is based on being able to access old information in some form, pull it back from something, and be able to analyze it.
That is leading many, many organizations to not delete anything, and if you don't delete anything, how do you store it? A disk-only solution kept forever, as an example, is pretty expensive. I know disk has gotten a lot cheaper, but forever is a really long time to keep the lights on, so to speak.
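To put a rough number on "forever," here is a back-of-the-envelope model. Every figure in it (growth rate, cost per terabyte per year) is an assumption chosen for illustration, not a number from this discussion:

```python
# Rough cost model for a keep-everything disk archive.
# All inputs are assumptions -- adjust to your environment.
initial_tb = 100.0        # new backup data in year 1, in TB (assumption)
annual_growth = 0.35      # 35% data growth per year (assumption)
cost_per_tb_year = 120.0  # disk + power + admin, $/TB/year (assumption)

total_cost = 0.0
retained_tb = 0.0
for year in range(1, 11):
    # nothing is ever deleted, so each year's data piles onto the archive
    retained_tb += initial_tb * (1 + annual_growth) ** (year - 1)
    total_cost += retained_tb * cost_per_tb_year
    print(f"Year {year:2d}: {retained_tb:8.0f} TB retained, "
          f"cumulative cost ${total_cost:,.0f}")
```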
Gardner: Let's look at this a bit more from the problem-solution perspective. We have multiple platforms, we have operating systems, hypervisors, application types, even appliances. What's the solution?
Maxwell: We need to step back, take inventory of what we've got, and choose the right solutions to solve the problem at hand, whether you're an SMB or an enterprise.
But the biggest thing we have to address is, with the amount and complexity of the data, how can we make sysadmins, storage administrators, and DBAs productive, and how can we get them all on the same page? Why do each one of these roles in IT have to use different products?
George and I were talking earlier. One of the things he brought up was that in a lot of companies, data is getting backed up over and over by the DBA, the VMware administrator, and the storage administrator, which is really inefficient. We have to look at a holistic approach, and that may not be one-size-fits-all. It may be choosing the right solutions, yet providing a centralized means for administration, reporting, monitoring, etc.
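One way to surface the duplication John describes is simply to compare what each team's tool claims to protect. A minimal sketch, with entirely hypothetical job inventories:

```python
# Hypothetical inventories of what each team's backup tool protects.
# Overlaps mean the same data is captured (and stored) more than once.
from collections import defaultdict

jobs = [
    ("DBA / database dumps",     "/u01/oradata/sales"),
    ("VMware admin / VM images", "/u01/oradata/sales"),  # whole-VM image covers it too
    ("Storage admin / file job", "/u01/oradata/sales"),
    ("Storage admin / file job", "/home"),
]

coverage = defaultdict(list)
for owner, path in jobs:
    coverage[path].append(owner)

for path, owners in coverage.items():
    if len(owners) > 1:
        print(f"{path} is protected {len(owners)} times: {', '.join(owners)}")
```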
Gardner: Is there anything different and specific about backup that makes this even harder to move from that point solution, best-of-breed mentality, into more of a comprehensive process standardization approach?
Demands and requirements
Crump: It really ties into what John said. Every line of business is going to have its own demands and requirements. To expect not even a backup administrator, but an Oracle administrator that’s managing an Oracle database for a line of business, to understand the nuances of that business and how they want to keep things is a lot to ask.
When backup is broken, the default survival mechanism is to throw everything out, buy the latest enterprise solution, put the stake in the ground, and force everybody to centralize on that one item. That works to a degree, but in every project we've been involved with, there are always three or four exceptions. That means it really didn’t work. You didn't really centralize.
Then there are covert operations, where people are backing up data and not telling anybody, because they still don't trust the enterprise application. Eventually, something new comes out. The most immediate example is virtualization, which spawned the birth of several virtualization-specific backup applications. So bringing all that back in again becomes very difficult.
I agree with John. What you need to do is give the users the tools they want. Users are too sophisticated now for you to say, "This is where we are going to back it up and you've got to live with it." They're just not going to put up with that anymore. It won't work.
So give them the tools that they want. Centralize the process, but not the actual software. I think that's really the way to go.
Gardner: So we recognize that one size fits all probably isn't going to apply here. We're going to have multiple point solutions. That means integration at some level or multiple levels. That brings us to our next major topic. How do we integrate well without compounding the complexity and the problem set? John?
Maxwell: We've been working on this for almost two years here at Quest, and now at Dell, and in November we're launching something called NetVault XA. “XA” stands for Extended Architecture. We have a portfolio of very rich products that span the SMB and the enterprise, with focus on virtual backup, heterogeneous backup, instantaneous snapshots, and deep application recovery, and we're keenly interested in leveraging those technologies for DBAs and sysadmins in ways that make their lives easier and make them more productive.
NetVault XA solves some really big issues. First of all, it unifies the user experience across products, and by user, I mean the sysadmin, the DBA, and the storage administrator. The initial release of NetVault XA will support both our vRanger and NetVault Backup products, as well as our NetVault SmartDisk product, and next year we'll be adding even more of our products under NetVault XA.
So now we've provided a common means of administration. We have one UI. You don’t have to learn something different. Everyone can work on the same product, yet based on your login ID, you will have access to different things, whether it's data or capabilities, such as restoring an Oracle or SQL Server database, or restoring a virtual machine (VM).
That's a common UI. A lot of vendors right now have a lot of solutions, but they look like they're from three, four, or five different companies. We want to provide a singular user experience, but that's just really the icing on the cake with NetVault XA.
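The podcast doesn't describe NetVault XA's internals, so the following is only a generic sketch of the role-based idea John outlines: one console for everyone, with capabilities filtered by the login's role. The role names and capability strings are invented for illustration:

```python
# Generic role-based capability filtering for a unified backup console.
# Roles and capabilities are illustrative, not NetVault XA's actual model.
ROLE_CAPABILITIES = {
    "dba":           {"restore_oracle_db", "restore_sqlserver_db", "view_db_jobs"},
    "vm_admin":      {"restore_vm", "view_vm_jobs"},
    "storage_admin": {"restore_vm", "restore_file_share", "view_all_jobs"},
    "executive":     {"view_dashboard"},
}

def allowed(role: str, action: str) -> bool:
    """One UI for everyone; what you can do depends on your login's role."""
    return action in ROLE_CAPABILITIES.get(role, set())

print(allowed("dba", "restore_oracle_db"))  # True
print(allowed("dba", "restore_vm"))         # False: outside the DBA's scope
```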
If we go down a little deeper into NetVault XA, once it's installed alongside vRanger, NetVault Backup, or both, it's going to self-identify that vRanger or NetVault environment, and it's going to let you manage it the way you've already set it up.
We're really delivering a new approach here, one we think is going to be unique in the industry. That's the ability to logically group data and applications within lines of business.
You gave an example earlier of Oracle. Oracle is not an application. Oracle is a platform for applications, and sometimes applications span databases, file systems, and multiple servers. You need to be looking at that from a holistic level, meaning what makes up application A, what makes up application B, C, D, etc.?
Then, what are the service levels for those applications? How mission critical are they? Are they in that 50 percent of data that we've seen from surveys, or are they data where a restore from a week ago wouldn't matter? And then, again, it's having one tool that everyone can use. So you now have a whole different user experience, and you're taking a whole different approach to data protection.
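John's distinction between a platform and an application suggests modeling an application as a logical group of resources carrying its own service level. A hypothetical sketch of that grouping, with the sub-hour threshold for "mission critical" taken from the discussion above:

```python
# Model an application as a logical group of resources with a recovery
# service level, rather than backing up servers one by one.
# All names and RTO values are illustrative.
from dataclasses import dataclass, field

@dataclass
class BusinessApplication:
    name: str
    rto_minutes: int                        # recovery time objective
    databases: list = field(default_factory=list)
    file_systems: list = field(default_factory=list)
    servers: list = field(default_factory=list)

    @property
    def mission_critical(self) -> bool:
        # "Mission critical" per the discussion: recoverable in under an hour.
        return self.rto_minutes < 60

order_entry = BusinessApplication(
    name="Order Entry",
    rto_minutes=30,
    databases=["ORCL_SALES", "ORCL_INVENTORY"],
    file_systems=["/exports/orders"],
    servers=["app01", "app02", "db01"],
)

print(order_entry.mission_critical)  # True: this app needs sub-hour recovery
```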
Gardner: There really seems to be a drilling down into these technologies and surfacing information to such a degree that it strikes me as similar to what IT service management (ITSM) did for managing IT systems at a higher level. We're now bringing that to a discrete portion of IT: backup and recovery. Does that sound about right, George, or did I overstate it?
Crump: No, that's dead-on. The benefits of that type of architecture are going to be substantial. Imagine if you are the vRanger programmer, when all this started. Instead of having to write half of the backend, you could just plug into a framework that already existed and then focus most of your attention on the particular application or environment that you are going to protect.
You could release the equivalent of vRanger 6 as vRanger 1, because you wouldn't have to go write this backend that already existed. Also, if you think about it, you end up with a much more reliable software product, because now you're building on a class library that has been well tested and proven.
Say you want to implement deduplication in a new version of the product or a new product. Instead of having to rewrite your own deduplication engine, just leverage the engine that's already there.
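George's framework argument can be sketched as a base class that owns the shared plumbing, so a new plugin only implements what is unique to its data source. This structure is entirely hypothetical, not Quest's actual architecture:

```python
# Sketch of the framework idea: shared backend services live in the base
# class; a plugin only supplies the application-specific snapshot logic.
from abc import ABC, abstractmethod

class BackupFramework(ABC):
    def run(self, target: str) -> None:
        data = self.snapshot(target)        # plugin-specific
        deduped = self.deduplicate(data)    # shared, well-tested engine
        self.store(deduped)                 # shared storage layer

    @abstractmethod
    def snapshot(self, target: str) -> bytes: ...

    def deduplicate(self, data: bytes) -> bytes:
        return data  # stand-in for the shared dedupe engine

    def store(self, data: bytes) -> None:
        print(f"stored {len(data)} bytes")

class VMwarePlugin(BackupFramework):
    def snapshot(self, target: str) -> bytes:
        # Only the hypervisor-specific part needs writing.
        return f"image-of-{target}".encode()

VMwarePlugin().run("vm-042")
```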
One common means
Maxwell: By having one common means -- whether you're a DBA, a sysadmin, a VMware administrator, or a storage administrator -- you are all on the same page. You can have people all buying into one way of doing things, so we don't have this data being backed up two or three times.
But the other thing you get, and this is a big issue now, is protecting multiple sites. When we talk about multiple sites, people sometimes assume we mean multiple data centers, but what about all those remote offices and branch offices? That right now is a big issue that we see customers running into.
The beauty of NetVault XA is I can now have various solutions implemented, whether it's vRanger running remotely or NetVault in a branch office, and I can be managing it. I can manage all aspects of it to make sure that those backups are running properly, or make sure replication is working properly. It could be halfway around the country or halfway around the world, and this way we have consistency.
Speaking of reporting, as you said earlier, what about a dashboard for management? One of our early users of NetVault XA is a large multinational company with 18 data centers and 250,000 servers. They have had to dedicate people to write service-level reports for their backups. Now, with NetVault XA, they can literally give their IT management, meaning their CIO and their CTOs, login IDs to NetVault XA, and they can see a dashboard that’s been color coded.
It can say, "Well, everything is green, so everything is protected," whether it's the Linux servers, Oracle databases, Exchange email, whatever the case. So by being able to reduce that level of complexity into a single pane of glass -- I know it's a cliché, but it really is -- it's really very powerful for large organizations and small.
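Behind any such dashboard sits a simple aggregation: roll individual job results up per site and surface the worst status. A toy sketch, with made-up sites and results:

```python
# Roll backup-job results up into the green/yellow/red view an executive
# dashboard would show. Site names and results are made up.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

job_results = {
    "London DC": ["green", "green", "green"],
    "Austin DC": ["green", "yellow", "green"],  # one job ran long
    "Branch 14": ["red"],                       # last backup failed
}

for site, results in job_results.items():
    worst = max(results, key=SEVERITY.get)
    print(f"{site:10s} -> {worst.upper()}")
```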
Even if you have two or three locations and you're only 500 employees, wouldn't it be nice to have the ability to look at your backups, your replicas, and your snapshots, whether they're in the data center or in branch offices, and, whether you're a sysadmin, DBA, or storage administrator, to be using one common interface and one common set of rules, so that everyone is basically on the same page?
So it's having a means to take an inventory and ensure that the servers are being maintained, that everything is being protected, because next to your employees, your data is the most important asset that you have.
Data is everywhere now. It's in mobile devices. It certainly could be in cloud-based apps. That's one of the things we didn't talk about. At Quest we use seven software-as-a-service (SaaS)-based applications that play big parts in our business, whether it's Salesforce.com, our helpdesk systems, or even Office 365. This is mission-critical corporate data that doesn't run in our own data center. How am I protecting that? Am I even cognizant of it?
The cloud has made things even more interesting, just as virtualization has made it more interesting over the past couple of years. With NetVault XA, we give you that one single pane of glass with which you can report, analyze, and manage all of your data.
Gardner: Just to be clear, John, this console is something you can view through a web interface, and I'm assuming therefore also through mobile devices. I'm going to guess that at some point there will perhaps even be a more native application for some of the prominent mobile platforms.
Maxwell: It's funny that you mention that. This is an HTML5-based application, so it's very new, very fresh, and very graphical. The UI was designed with tablets and laptops in mind. You can even work the controls with your thumbs, assuming you're running it on a tablet.
In-house, and with early-support customers, people log into it remotely from laptops and tablets. We even have some people using it on mobile phones, even though we're not quite there yet with that form factor and how the screens lay out, but we will definitely be going that way. So a sysadmin or storage administrator can have the status of the data-protection environment at their fingertips.
What's nice is that, because this is a thin client with a web UI, you can define user IDs not only for sysadmins, DBAs, and storage administrators, but also, like I said earlier, for IT management.
So if your boss, or your boss's boss, wants to dial in and see the health of things, such as how much data you're protecting, how much data is being replicated, and what data is being protected in the cloud versus on-prem, they can now have a dashboard approach to seeing it all. That's going to make everyone more productive, and it's going to give them a better sense that this data is being protected, so they can sleep at night.
Gardner: Looking ahead, is there anything, whether in storage efficiency such as deduplication or elsewhere, that will make having a process approach to the data lifecycle and backup and recovery even more important?
Maxwell: Dana, you hit on something that's really near and dear to my heart, which is data deduplication. We have a very broad strategy. We offer our own software-based dedupe. We support every major hardware-based dedupe appliance out there, and we're now adding support for Dell's DR Series DR4000 dedupe appliances. But we're still very much committed to tape, and we're building initiatives around storing data in the cloud, covering backup, replication, failover, and so forth.
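At its core, deduplication, whether in software or an appliance, means storing each unique block once and keeping a recipe of references. A toy fixed-block sketch; real engines use content-defined chunking and persistent on-disk indexes:

```python
# Toy fixed-block deduplication: hash each block and store unique ones once.
import hashlib

BLOCK = 4096
store: dict = {}  # fingerprint -> block contents

def dedupe(data: bytes) -> list:
    """Return the recipe (fingerprints) needed to reconstruct `data`."""
    recipe = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # store each unique block only once
        recipe.append(fp)
    return recipe

payload = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated content
recipe = dedupe(payload)
print(f"{len(recipe)} blocks referenced, {len(store)} unique blocks stored")
# -> 4 blocks referenced, 2 unique blocks stored
```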
One of the things that we built into NetVault XA that's separate from the policy management and online monitoring is that we now have historical data. This is going to give you the ability to do some capacity management and capacity planning and see what the utilization is.
How much storage are your backups taking? What's the most optimum number of generations? Where are you keeping that data? Is some data being kept too long? Is some data not being kept long enough?
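The job history John describes is what makes those questions answerable. As a hypothetical sketch, with assumed numbers of the kind such reporting might surface, you can estimate what each additional generation costs:

```python
# Estimate storage consumed by keeping N generations of a backup set,
# using made-up figures of the kind backup job history might surface.
full_backup_tb = 12.0      # size of one full backup (assumption)
daily_change_rate = 0.04   # 4% incremental change per day (assumption)

for generations in (7, 14, 30, 90):
    # one full plus (generations - 1) daily incrementals, before dedupe
    total_tb = full_backup_tb * (1 + daily_change_rate * (generations - 1))
    print(f"{generations:3d} generations = {total_tb:6.1f} TB")
```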
By offering a broad strategy that says we support a plethora of backup targets, whether it's tape, special-purpose backup appliances, software-based dedupe, or even the cloud, we're giving customers flexibility, because they have unique needs and they have different needs, based on service levels or budgets. We want to make them flexible, because, going back to our original discussion, one size doesn’t fit all.
Crump: Just to tie in with what John said, we need flexibility that doesn't add complexity. Almost everything we've done in the environment up to now has added flexibility, but for every ounce of flexibility, it feels like we've added two ounces of complexity, and that's something we just can't afford to deal with. So that's really the key thing.
Looking forward, at least on the horizon, I don't see a big shift, something like virtualization that we need to be overly concerned with. What I do see is the virtual environment becoming more and more challenging, as we stack more and more VMs on it. The amount of I/O and the amount of data protection process that will surround every host is going to continue to increase. So the time is now to really get the bull by the horns and institute a process that will scale with the business long-term.
You may also be interested in:
- For Dell’s Quest Software, BYOD puts users first -- and with IT’s blessing
- New Levels of Automation and Precision Needed to Optimize Backup and Recovery in Virtualized Environments
- Ocean Observatories Initiative: Cloud and Big Data come together to give scientists unprecedented access to essential climate insights
- Case Study: Strategic Approach to Disaster Recovery and Data Lifecycle Management Pays Off for Australia's SAI Global
- Columbia Sportswear extends deep server virtualization to improved ERP operations, disaster recovery efficiencies