Serverless | @CloudExpo #Serverless #IoT #ML #Lambda #OpenWhisk

Over a year ago we tuned into “the need for speed” and how a concept like “serverless computing” was increasingly catering to it

Tune into: Hype Hopping

About a year ago we tuned into “the need for speed” and how a concept like “serverless computing” was increasingly catering to it. A year on, the term “serverless” is taking on unexpected proportions, with some even seeing it as the successor to cloud in general, or at least as the successor to the cloud’s poorer cousin in terms of revenue, hype and adoption: PaaS.

The question we need to ask is whether this constitutes an example of Hype Hopping: effortlessly pivoting to the next new thing once the previous one turns out to be a bit less attractive, and certainly a lot more complex, than we all thought at first. The Gartner Hype Cycle has for years called this the trough of disillusionment, a valley that only the strongest innovations manage to cross in order to reach the slope of enlightenment, or even the plateau of productivity that lies beyond.

But even before the trough there are pitfalls that hypes need to survive in order to thrive. Cloud computing itself once started its journey as “on demand” or “utility” computing, terms that in retrospect were not sexy enough to survive. Whether serverless will pass the market’s sexiness test remains to be seen, also because, whereas there really are no clouds in cloud computing, there are plenty of real servers in what is called serverLESS computing.

Whether serverless will indeed be sufficiently different to be deemed the next generation of “how to do IT” is not easy to answer. After all, we saw plenty of earlier generations of different approaches, such as Structured Programming, Object Orientation, Service Oriented Architectures and now Microservices, all lay claim to such a change-agent role. But deep down they were all just similar enough to allow a grey-bearded mainframe type to claim: “been there, done that, on our sixties S/360”. And let’s face it, when it comes to serverless, did not virtualization, software appliances, containers and everything delivered “as a service” already take many steps to remove any physical servers from our direct field of vision?

The most visible incarnation of serverless is currently Amazon Web Services’ Lambda, although this was by most accounts not the first implementation of the idea: Manta from IaaS provider Joyent (recently acquired by consumer electronics giant Samsung) and Iron.io’s IronWorker arguably came earlier. Nor is Lambda any longer one of only a few, thanks to the introduction in rapid succession of new offerings such as Azure Functions, Google Cloud Functions and IBM OpenWhisk. Although many of these newcomers are still in beta or even alpha, the term “functions” is rapidly becoming a standard when it comes to naming serverless offerings, and Functions as a Service (FaaS) or Function Platform as a Service (fPaaS) is even used broadly as a more precise (but therefore more confined) alternative to the term serverless altogether.

Most of today’s serverless implementations enable users to execute user-defined functions in response to various event triggers. For example, one can have a thumbnail created every time someone saves a picture, send a bill every time someone streams a song, or verify the identity of a user every time he triggers an event. Behind the scenes the invoked function usually runs in a container (mainly because containers are so fast to start up), while the individual containers often run inside an isolated, secured, user-dedicated environment, in most cases a virtual machine (mainly because VMs provide proven isolation and safety). On some platforms you declare your functions by inserting a piece of code or a script; on others you can insert functions in the form of a ready-to-run (binary) container. The latter (with a container basically being a portable machine incarnation) somehow feels a lot less “serverless” than the script approach.
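To make the trigger model a bit more concrete, here is a minimal Python sketch of the thumbnail example above, written in the style of an AWS Lambda handler reacting to an S3 upload event. It is not taken from any vendor’s documentation; the use of Pillow for resizing and the “thumbnails/” prefix are assumptions made purely for illustration.

```python
# Sketch of an event-triggered function: create a thumbnail whenever an
# image lands in an S3 bucket. Assumes an AWS Lambda function with an S3
# trigger configured; Pillow is assumed to be packaged with the function.
import io

import boto3                     # AWS SDK, available in the Lambda runtime
from PIL import Image            # Pillow, an illustrative choice for resizing

s3 = boto3.client("s3")

def handler(event, context):
    # The platform decides WHEN to run this; the event says on WHAT.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]   # may need URL-decoding in practice

        # Fetch the original image that triggered the event.
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Shrink it to a 128x128 thumbnail.
        image = Image.open(io.BytesIO(original))
        image.thumbnail((128, 128))
        buffer = io.BytesIO()
        image.save(buffer, format="PNG")
        buffer.seek(0)

        # Store the result under an illustrative prefix in the same bucket.
        s3.put_object(Bucket=bucket, Key=f"thumbnails/{key}.png", Body=buffer)
```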

The essence of serverless, in my view, is however that in addition to no longer having to worry about WHERE (on which server or which virtual machine) your functionality will run, you also no longer have to worry about WHEN your function will be performed. This is taken care of by the trigger or event engine of the serverless platform. That may make classic programming constructs such as loops and infinite, nested and complex “if-then-else” trees a thing of the past. And we all know how much code it can take to handle the logistics, versus the core transformation, in any real-world application, not to mention how hard it is to debug such logistical flow code. Not surprisingly, one of the most frequent comments heard about serverless computing is how amazingly little code you have to write to get something done.
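A hypothetical before-and-after sketch may help to show what moving the WHEN out of your own code looks like; the queue client and the order-processing stub below are made up purely for illustration.

```python
import time

def process_order(order):
    # The core transformation we actually care about (stubbed here).
    print("billing for", order)

# Hypothetical "before": our own code owns the WHEN, so it needs a polling
# loop, back-off and retry logistics wrapped around that transformation.
def poll_forever(queue):
    while True:                       # run until the process is killed
        message = queue.receive()     # imaginary queue client
        if message is None:
            time.sleep(5)             # nothing to do yet, back off
            continue
        try:
            process_order(message)
        except Exception:
            queue.retry(message)      # hand-rolled retry handling

# Hypothetical "after": the platform's event engine owns the WHEN and calls
# this function once per event; only the WHAT remains to be written.
def handle_order(event, context):
    process_order(event)
```

In the second version the loop, the back-off and the retry logistics simply disappear from the user’s code; whatever scheduling and retry behavior applies is a property of the platform rather than of the function.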

With serverless, the provider/operator of the platform is responsible for the WHERE and the WHEN, and the user/developer only needs to determine the WHAT. In some way this sounds like a familiar promise: did not non-procedural and event-driven fourth-generation languages make similar claims? And if so, could serverless in the long term turn out to have the same disadvantages as those predecessors? I don’t mean just the increased lock-in that such platforms brought, but the fact that, when performance issues arose, tuning (let alone refactoring) was almost impossible, because the environment almost fully abstracted the user/developer from the HOW.

Whether we will all be shredding our recently printed business cards and updated LinkedIn profiles claiming our newfound role as “Cloud Something” to replace them with “Serverless Whatever” is therefore questionable, if only because “it runs in the cloud” still sounds so much better than “it runs at the serverless”.

“At the Hop” by Danny & the Juniors rose straight to the top of the charts in 1958 and turned out to be the band’s biggest, but certainly not its only, hit song. Others were the largely forgotten “Dottie” and the more persistent “Twistin’ U.S.A.”. Although the dance moves of the Hop were clearly different from the Twist and from subsequent Rock & Roll and Hip Hop variants, to many older people it all seemed like just more of the same pointless hopping around.

More Stories By Gregor Petri

Gregor Petri is a regular expert and keynote speaker at industry events throughout Europe and wrote the cloud primer “Shedding Light on Cloud Computing”. He was also a columnist at ITSM Portal, a contributing author to the Dutch “Over Cloud Computing” book and a member of the Computable expert panel, and his LeanITmanager blog is syndicated across many sites worldwide. Gregor was named by Cloud Computing Journal as one of The Top 100 Bloggers on Cloud Computing.

Follow him on Twitter @GregorPetri or read his blog at blog.gregorpetri.com
