SANs and NAS: Improved Efficiency Through Virtualization

SANs, NAS, iSCSI, virtualization, in-band, out-of-band: the terminology seems never-ending when it comes to storage and, what's worse, no one will tell you what's best.

Unfortunately, it's not that simple. The advent of SANs and the introduction of new technology have increased the number of options available, but there are no clear guidelines as to which one to use and when. There is no silver bullet or golden configuration that is good for everyone; the solution has to be tailored to the specific environment.

But all is not lost: a great deal has been written about storage and storage architectures, and if all else fails, look at what you are trying to achieve and how much money you have to spend.

While it is widely thought that SANs are for big enterprises and NAS for smaller ones, this is not true. Most enterprises, whether big or small, now have NAS servers, and many are using them for more than just file serving. The cost of SANs has fallen to the point that they are now a very real prospect for smaller organizations that want to take advantage of improved connectivity and performance with technologies such as third-party copy and clustered file systems.

So it is the applications and the business requirements that should drive the architecture, not the "latest and greatest" technology or the cheapest solution. Storage is not just about the online disk. Backup (which now might be to disk before going to tape), disaster recovery, and legislative compliance all have their part to play. Without a big picture of what needs to be achieved (from the business perspective), the decisions made will be insufficient.

Another factor to include is storage growth. If the space required in 12 months is 100% more than you have today, will that influence your architecture decision? What happens if it is 1000% more in three years? How long do you plan to remain with the architecture that has been defined? The immediate logical conclusion is to go for the biggest you can buy - now. But we know this is not a pragmatic business decision; the architecture should be designed so that it can grow - and this might mean starting with NAS and expanding into a SAN just as much as starting with a SAN and acquiring a NAS solution later.
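As a rough illustration, here is a minimal capacity-projection sketch in Python; the starting capacity and growth rates are purely hypothetical, but it shows how quickly requirements compound and why the growth assumption belongs in the architecture discussion.

```python
# Minimal capacity-planning sketch with hypothetical figures: project how much
# storage is needed after each year at a constant annual growth rate.

def project_capacity(current_tb: float, annual_growth: float, years: int) -> list:
    """Return the projected capacity requirement for years 0..years."""
    return [current_tb * (1 + annual_growth) ** year for year in range(years + 1)]

if __name__ == "__main__":
    start_tb = 10.0  # hypothetical starting point: 10 TB of online disk
    # 100% growth per year, and ~122% per year (roughly 1000% more over three years)
    for rate in (1.0, 1.22):
        needs = project_capacity(start_tb, rate, years=3)
        print(f"growth {rate:.0%}/yr:", [f"{tb:,.1f} TB" for tb in needs])
```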

Utility computing is a trend we are hearing a great deal about, with many vendors touting it as the next big thing. When it comes to storage, applying utility computing principles and creating a storage utility is a great place to start. By using storage virtualization tools, storage can be pooled and then provisioned when required; by attaching it to a SAN, it can be allocated to any server that needs it. Additional functionality allows file systems to be grown automatically without taking down the application that uses them.
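The sketch below (hypothetical classes, not any vendor's API) illustrates the pattern in Python: capacity sits in a virtualized pool, volumes are provisioned to whichever SAN-attached server needs them, and a volume can be grown online without taking down the application.

```python
# Hypothetical sketch of a storage utility: pooled capacity, on-demand
# provisioning, and online growth of an existing volume.

from dataclasses import dataclass, field

@dataclass
class Volume:
    name: str
    size_gb: int

@dataclass
class StoragePool:
    capacity_gb: int
    volumes: dict = field(default_factory=dict)

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - sum(v.size_gb for v in self.volumes.values())

    def provision(self, name: str, size_gb: int) -> Volume:
        """Carve a new volume out of the pool for any attached server."""
        if size_gb > self.free_gb:
            raise RuntimeError("pool exhausted - add disk to the pool")
        self.volumes[name] = Volume(name, size_gb)
        return self.volumes[name]

    def grow(self, name: str, extra_gb: int) -> None:
        """Extend a volume in place; the file system on top grows without downtime."""
        if extra_gb > self.free_gb:
            raise RuntimeError("not enough free capacity to grow volume")
        self.volumes[name].size_gb += extra_gb

# Example: a 2 TB pool, 500 GB provisioned to a database server, then grown by 100 GB.
pool = StoragePool(capacity_gb=2048)
pool.provision("db-server-data", 500)
pool.grow("db-server-data", 100)
print(f"free capacity left in pool: {pool.free_gb} GB")
```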

Business reporting tools enable departments (or lines of business) to see how much storage they are using. The IT organization can then choose to apply costs to the storage and could present each business with a bill (a.k.a. chargeback) if it so wished. More often than not, it is the insight into costs that is useful, and it can be an invaluable guide as to where best to invest money in IT to get the greatest return for the business. In addition, utility computing is all about improving efficiency through best practice and automation. Again, storage is a great place to begin, and putting in some best practices and simple automation - e.g., increasing space on servers when they are running out - can save a business a great deal of money, no matter what its size.
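A short sketch of both ideas follows, with made-up departments and an assumed internal rate: a per-department chargeback report, and a trivial automation rule that grows an allocation once utilization crosses a threshold.

```python
# Hypothetical chargeback report and a simple "grow when nearly full" rule.

PRICE_PER_GB_MONTH = 0.05  # assumed internal rate, for illustration only

usage_gb = {"finance": 1200, "engineering": 4800, "marketing": 650}  # sample data

def chargeback_report(usage):
    """Show what each department would be billed for the storage it consumes."""
    for dept, gb in sorted(usage.items()):
        print(f"{dept:<12} {gb:>6} GB  ${gb * PRICE_PER_GB_MONTH:,.2f}/month")

def maybe_grow(used_gb, allocated_gb, threshold=0.9, step_gb=100):
    """Return a larger allocation if utilization has crossed the threshold."""
    if used_gb / allocated_gb >= threshold:
        return allocated_gb + step_gb
    return allocated_gb

chargeback_report(usage_gb)
print("new allocation:", maybe_grow(used_gb=460, allocated_gb=500))  # 92% full -> 600
```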

The grid is also seen as the next big thing, and again, storage is a key component of a grid architecture. However, while most grid applications need a large amount of space to store data centrally, that data is then farmed out and generally processed in memory within the grid, so the actual storage requirements on the fringe nodes are virtually nonexistent. For the main central storage, ensuring that the application serving out the data is highly available and that the data is sufficiently protected, i.e., backed up or replicated, is generally adequate.

Outside of storage, a general comparison of grid versus utility computing is interesting because, while the two have very different applications running on the architecture and so look very different from 30,000 feet, from ground level there are many similarities: what is being used, how much is it being used, and can it be used more - to improve efficiency, utilization, or both?

More Stories By Guy Bunker

Dr. Guy Bunker, an Independent Expert at Bunker and Associates, is co-author with Gareth Fraser-King of "Data Leaks For Dummies" (John Wiley & Sons, February 2009). He holds a PhD in Artificial Neural Networks from King’s College London, several patents, and is a Chartered Engineer with the IET.
