Musings on Neural Networking By @DaveGraham | @CloudExpo #Cloud

Given my last post was in November of 2013 (trust me, I’ve been busy), I figured I’d start out with a heady topic like “Neural Networking” in an age where Deep Machine Learning and perhaps its lesser cousin, assisted Machine Learning (I’ll define in a bit), seem to be all the rage. However, before we begin, I want to make a few things clear:

  • I’m no expert in these fields.
  • I’m musing out loud here.  You’re my audience and what you determine to be salient and what you deem junk is, well, your problem, not mine.
  • DML/AML, Neural Networking, and a whole host of other terms, acronyms, mindf**k level events, etc. are here. Deal with it.

So with such an illustrious preface, I suppose we should let the party begin.

I’ve always had a fascination with the way information is acquired and processed. Reading back through the history of this site, you can see this tendency towards more fanciful thinking, e.g., GPGPU-assisted network analytics and future storage systems using Torrenza-style processing. What was once theory has made its way into the realm of praxis; look no further than ICML 2015, for example, to see the forays into DML that nVidia is making with its GPUs. And on the story goes. Having said all this, there are elements of data, of data networking, and of data processing which, to date, have NOT gleaned all the benefits of this type of acceleration. To that end, what I am going to attempt to posit today is that Neural Networking (or at least its benefits) can be usefully applied to something we interact with every single nanosecond of every day: the network.

Before we get much further, we should probably have a definition of some terms that I will be using:

  • Deep Machine Learning (DML): a burgeoning area of machine learning research focused on machine intelligence built on the underlying principles of neural networking
  • Assisted Machine Learning (aka Hybrid; AML): a half-step towards DML in which prepended processing is done by fixed systems in a rough grid approach, and learning takes place on these pre-processed chunks of data.
  • Neural Networking: “a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.” (Maureen Caudill, “Neural Network Primer: Part I,” AI Expert, Feb. 1989) (see the sketch just after this list)
  • Packet Forwarding Engines (PFE): the base level of forwarding hardware in a contemporary network switch
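
Before we get to switches, a quick illustration of Caudill’s definition may help. Below is a minimal, purely illustrative Python sketch (the weights are random and the inputs are made up; it is not tied to any product discussed later) of a handful of simple processing elements, each connected to every input, whose output is nothing more than a dynamic response to external inputs:

    import math
    import random

    random.seed(0)

    def neuron(inputs, weights, bias):
        """One simple processing element: a weighted sum squashed by a sigmoid."""
        activation = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    def layer(inputs, n_neurons):
        """A set of neurons, each connected to every input (the 'highly interconnected' part)."""
        return [neuron(inputs, [random.uniform(-1, 1) for _ in inputs], random.uniform(-1, 1))
                for _ in range(n_neurons)]

    external_input = [0.2, 0.7, 0.1]             # made-up external inputs
    hidden = layer(external_input, n_neurons=4)  # hidden layer of four elements
    output = layer(hidden, n_neurons=1)          # the network's dynamic state response
    print(output)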

State of the Union: Networks
To talk about the future, we first need to take stock of the current state of affairs in systems networking.

Packet Forwarding Engines (PFEs) are the muscle of networking switches. Today’s PFEs are routinely more powerful, both custom and mainline/merchant silicon. Companies like Cisco, Broadcom, XPliant, Intel, Marvell, and Juniper have propagated designs and delivered increasingly scalable devices that can process billions of bits of information at a time. The traceable curve here closely follows an analog of Moore’s law without staying exactly within the same bounds (e.g., Broadcom’s Trident/Trident+ and the currently shipping Trident 2 are not all that far removed from each other in frequency, scale, latency, or processing power). If we allow for interstitial comparisons across vendors, the story changes somewhat and, to my mind, the curve becomes even more pronounced. Comparing custom silicon from Juniper or Cisco to Broadcom’s, for example, shows a higher level of capability in these more custom designs, albeit with a slower time to market. All of this is said by way of pointing out that, compared to host-level processor development (like Intel’s Xeon/Core and AMD’s APU/CPU lineups), these specialized processing units scale in and scale out on a different cadence. Consequently, their application has been mostly stagnant; a switch line or two is released roughly every 18 months, interspersed with the next important part of networking: the software.

Software development is as critical to the current state of networking as the hardware is. Relying on fixed-pipeline devices (as the Trident 2 is) requires a certain level of determinism to be designed into the software that controls them. With the advent of software development kits (SDKs), this decoupling has allowed vendors to write against a known set of functions with a healthy separation from the underlying hardware. The abstraction has both enabled increasing functionality and capability within the systems (e.g., Broadcom’s concept of a programmable unified forwarding table (UFT)) and allowed for agile development of the overlaying software (e.g., quicker time to market for a network operating system (NOS) built on top of said SDK). Having this level of functionality is important, as it allows more agile decisions to be made as standards or protocols are ratified for implementation. An NOS is only as capable as the hardware it runs upon, however, and that leads us to the third part of the current network: control plane processing.
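
To make that decoupling concrete, here is a minimal Python sketch of the idea. Every class and function name below is hypothetical; real vendor SDKs are far larger (and typically C APIs), but the shape is the same: the NOS writes against a known, abstract function set, and a vendor backend maps those calls onto its silicon, including something UFT-like.

    from abc import ABC, abstractmethod

    class ForwardingSDK(ABC):
        """Hypothetical stand-in for the known function set a NOS writes against."""

        @abstractmethod
        def add_l2_entry(self, mac, vlan, port):
            """Install a MAC/VLAN-to-port forwarding entry."""

        @abstractmethod
        def partition_uft(self, l2_pct, l3_pct):
            """Carve a unified forwarding table between L2 and L3 use."""

    class ToyMerchantBackend(ForwardingSDK):
        """Imaginary merchant-silicon backend; a real one would program the ASIC."""

        def __init__(self):
            self.l2_table = {}           # (mac, vlan) -> port
            self.uft_split = (50, 50)

        def add_l2_entry(self, mac, vlan, port):
            self.l2_table[(mac, vlan)] = port

        def partition_uft(self, l2_pct, l3_pct):
            assert l2_pct + l3_pct == 100
            self.uft_split = (l2_pct, l3_pct)

    # The NOS never touches hardware registers; it only sees the abstraction.
    sdk = ToyMerchantBackend()
    sdk.partition_uft(l2_pct=25, l3_pct=75)
    sdk.add_l2_entry(mac="00:11:22:33:44:55", vlan=10, port=3)

Swapping the toy backend for a different one is, in spirit, how a single NOS can target different PFEs without rewriting its upper layers.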

The control plane of a network switch is the brain of the operation. A PFE is useless as a commodity processor; examine its structure closely and you will find functional blocks designed for very purpose-driven applications. That type of processing, while essential for the datagrams it serves, is useless for running more banal applications like an NOS. Generic processing hardware, however, like PowerPC, MIPS, ARM, or even x86 cores, can be harnessed to manage this type of workload very effectively. In recent years there has been increasing momentum toward moving these control plane processors from more archaic and proprietary architectures like PPC and MIPS to more modern, commercially available standards like ARM and x86. This move has modernized the control plane from an embedded system to a discrete “system on a switch” running a modern operating system and either virtualizing the NOS (as in Juniper’s QFX5100 switch line) or partitioning it via containers or some other level of abstraction. The benefits of such systems cannot be ignored, as time to market and feature development again become more agile. (Side note: the role of ARM as a valid control plane foundation cannot be overlooked and will be the subject of another post in the not-so-distant future.)

In summary, the networking switch present in today’s data center comprises a PFE, a network operating system (NOS), and a control plane to run the NOS. This is not unlike a commodity server with lots of physical interfaces designed for the ingress and egress of data. These switches are increasingly complex and performant, and they provide a robust foundation upon which to build neural networks.

Becoming Neural, not Neurotic
When you walk into your living room, tell your Xbox One to turn itself on (“Xbox On!”), and watch as the always-listening machine powers up your TV and itself and then quickly scans you to determine your identity, you’re watching machine learning in action. The process uses both audio and visual cueing and the localization of data (a core component of neural networking) to derive identity and causality. You had to walk through a setup process to capture both your image and your voice; these were stored in a local database and used as a reference point. The system is given rough control points to operate against but is functionally able to interact against this baseline; case in point, depending on my level of beard growth, my Xbox has varying levels of success in determining who I am by sight. The same goes for my iPhone, my Android, my Amazon Echo, etc. Each of these machines has a minimal local database connected to a backend process (the “cloud” or another hosted platform) and performs a fixed function (voice recognition, facial recognition). All of this is to demonstrate that we’re already in the throes of neural networks without even realizing it. If we look at the network as a necessary part of this process, it becomes the springboard for incredible capability.

So how can a transport layer become “neural”? Looking back at our definition of “neural networks,” we see that at its very foundation is the concept of connectedness. A network is a collection of interconnected devices using some sort of medium, whether copper, optical, or radio frequency, that allows them to interoperate and exchange data. Transporting data, whether electrical, radio frequency, or optical, is just that: transport. It implies neither intelligence nor insight. The sender and the receiver, however, can operate on data and make decisions with some level of determinism, and this is where we will focus. Historically, one would look to the systems attached to the transport layer as the true members of the network. However, as noted previously, with the advent of “system on a switch” control planes, the switches themselves suddenly appear as joining points, not just transport pipes.

Moving further: if these transport junctions or pipes suddenly develop the intelligence, based on no other inputs but data, to route “conversations” or data in ways that logically make sense and deliver value to the sender, the receiver, or both, have we achieved a neural network? We can see a basic inkling of this in the use of LLDP (Link Layer Discovery Protocol) as a low-level exchange of “who are you?” information, but that exchange is derived from extant specifications of what a datagram should look like. It doesn’t exemplify the concepts of neural networking so much as it shows that the data, exclusive of content and context, is already known. So the next logical leap is how that data is interpreted.
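
To make the LLDP point concrete, here is a small Python sketch of that “who are you?” exchange (TLV layout per IEEE 802.1AB; the MAC address, port name, and TTL are invented). Everything in it is pre-specified structure rather than learned behavior, which is exactly the limitation described above:

    import struct

    def tlv(tlv_type, value):
        """Pack one LLDP TLV: 7-bit type, 9-bit length, then the value bytes."""
        header = (tlv_type << 9) | len(value)
        return struct.pack("!H", header) + value

    def build_lldpdu(chassis_mac, port_name, ttl):
        """Chassis ID (subtype 4 = MAC), Port ID (subtype 5 = ifname), TTL, End."""
        return (tlv(1, bytes([4]) + chassis_mac)
                + tlv(2, bytes([5]) + port_name.encode())
                + tlv(3, struct.pack("!H", ttl))
                + tlv(0, b""))

    def parse_lldpdu(frame):
        """Walk the TLV list back out; a real agent would interpret each type."""
        tlvs, offset = [], 0
        while offset < len(frame):
            (header,) = struct.unpack_from("!H", frame, offset)
            tlv_type, length = header >> 9, header & 0x1FF
            tlvs.append((tlv_type, frame[offset + 2:offset + 2 + length]))
            offset += 2 + length
            if tlv_type == 0:              # End of LLDPDU
                break
        return tlvs

    # Invented identity values for one switch advertising itself to a neighbor.
    pdu = build_lldpdu(b"\x00\x11\x22\x33\x44\x55", "et-0/0/1", ttl=120)
    print(parse_lldpdu(pdu))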

Let’s presuppose that LLDP has provided two neighboring switches with each other’s identity, capabilities, and proximity. What then? As hosts are connected on either side, data will flow based on the hosts’ requirements for connectedness and data. The transport layer, at that point, is nothing more than transport: simple forwarding devices. However, let’s also assume that each of these switches has a system attached to its control plane that constantly watches traffic as it flows across and is “learning.” What these switches are learning can be treated as raw input and can be manipulated and quantified as such. In a neural networking world, these systems are nascent: raw, with no heuristic capability yet designed.
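
As a purely illustrative sketch of such a nascent learner (not any vendor’s implementation; the flow key, sampling units, and threshold are all invented), a control-plane agent might do nothing more than build a running statistical profile of each flow counter it observes and flag deviations from that baseline:

    import math
    from collections import defaultdict

    class FlowProfile:
        """Welford's online mean/variance over per-flow rate samples."""

        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def observe(self, value):
            self.n += 1
            delta = value - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (value - self.mean)

        def is_unusual(self, value, sigmas=3.0):
            if self.n < 10:                      # not enough history to judge yet
                return False
            std = math.sqrt(self.m2 / (self.n - 1))
            return std > 0 and abs(value - self.mean) > sigmas * std

    profiles = defaultdict(FlowProfile)          # flow key -> learned profile

    def on_counter_sample(flow_key, pkts_per_sec):
        """Called for each counter sample the control plane polls from the PFE."""
        profile = profiles[flow_key]
        if profile.is_unusual(pkts_per_sec):
            print(f"flow {flow_key}: {pkts_per_sec:.0f} pps deviates from its learned baseline")
        profile.observe(pkts_per_sec)

    # Synthetic samples for a single invented flow; the last one stands out.
    for rate in [100, 110, 95, 105, 102, 98, 101, 99, 103, 100, 97, 5000]:
        on_counter_sample(("10.0.0.1", "10.0.0.2", 443), rate)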

The situation described above is precisely why networking systems function so reliably today: they’re not tasked with anything beyond fixed parameters or inspection. Think of it: the IETF and IEEE have specified what a datagram should look like. It should have Layer-2 source and destination media access control (MAC) addresses along with a payload, for example. But beyond this, what is accomplished? The PFE looks for datagrams that conform to these standards and passes them along; anything else is malformed and dropped. You quickly reach a situation where, heuristically, you’re limiting the overall potential of these machines to that of simple engines, receiving parameters and doing as they’re told. What, then, could be done?
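
As a toy illustration of that fixed-parameter behavior (a sketch only, nothing like a real PFE pipeline), the logic amounts to checks like the following: the code knows only what the specification says a frame must contain, and anything else is dropped.

    import struct

    ETH_HEADER = struct.Struct("!6s6sH")   # destination MAC, source MAC, EtherType
    MIN_FRAME = ETH_HEADER.size + 46       # 802.3 minimum payload; FCS not modeled

    def forward_or_drop(frame):
        """Accept only frames that match the layout the standard prescribes."""
        if len(frame) < MIN_FRAME:
            return "drop: runt/malformed frame"
        dst, src, ethertype = ETH_HEADER.unpack_from(frame)
        # A fixed-pipeline PFE does little more than this: match the fields the
        # specification defines, look up the destination, and move on.
        return f"forward: {src.hex(':')} -> {dst.hex(':')} ethertype=0x{ethertype:04x}"

    conforming = (bytes.fromhex("001122334455") + bytes.fromhex("665544332211")
                  + struct.pack("!H", 0x0800) + bytes(46))
    print(forward_or_drop(conforming))     # matches the expected layout
    print(forward_or_drop(b"\x00" * 10))   # too short, so it is dropped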

Vision Casting
I could sit here and postulate any number of ideas that my peers have already explored. I’m more interested in what we can do with the data that is already present. We can argue that daemons running in the kernel, statistics packages that collect PFE-published data points, and other such utilities are useful. In a way they are, but they represent a subset of capabilities and are mostly human-driven (AML at its finest). What if, instead, each time a request is made, the switch learned which data points are being requested and viewed, and selectively fed only the most salient points back to its consumers without flooding them with tons of useless information? What if that salience were known a priori to a receiver (as in the classic SNMP use case)? What if this were machine-driven (DML) and became part of the flow?
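
A rough sketch of that idea, assuming a hypothetical agent and made-up counter names (and admittedly closer to AML-style tallying than true DML), might look like this: the switch keeps score of which counters its consumers actually request and then pushes only that salient subset.

    from collections import Counter

    class LearningTelemetryAgent:
        """Hypothetical agent that learns which data points its consumers want."""

        def __init__(self, min_requests=3):
            self.request_counts = Counter()
            self.min_requests = min_requests

        def on_request(self, counter_name):
            """Record that a consumer polled this data point (the learning input)."""
            self.request_counts[counter_name] += 1

        def salient_counters(self):
            """Counters requested often enough to be pushed without being asked."""
            return {name for name, n in self.request_counts.items()
                    if n >= self.min_requests}

        def publish(self, all_stats):
            """Feed back only the salient subset of what the PFE exported."""
            salient = self.salient_counters()
            return {k: v for k, v in all_stats.items() if k in salient}

    agent = LearningTelemetryAgent()
    for name in ["if_in_octets", "if_in_octets", "if_in_octets", "crc_errors"]:
        agent.on_request(name)             # simulated consumer requests

    pfe_stats = {"if_in_octets": 10**9, "crc_errors": 2, "buffer_drops": 0}
    print(agent.publish(pfe_stats))        # only the counters consumers care about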

For a network to become “aware” and fully realized as neural in nature (presupposing, as my conclusion, the eventual coupling of machine state to machine state through a hyperaware network), it must be able to functionally process data on its own, either by simple heuristic learning (profiling, as noted above, is just one method) or through the contrived mechanisms of its NOS in a non-rigid manner (i.e., not L2 learning, etc.). Certainly the use of standardized protocols for initial communication is encouraged, since it lets heterogeneous systems engage one another without proprietary lower-level protocols like HiGig, but beyond this initial negotiation, the hope and desire is that learning, forwarding, reporting, and engaging become autonomous and self-forming. As systems interact, decisions will be made based on what the datagram contains, on how the PFE is responding to traffic flows and utilization, and on what the next connected device is doing. This capability is present, to some extent, today in systems that use a network management system (NMS) that can see the network holistically for what it is, but that external intelligence is, again, driven from the outside in and not organic to the devices themselves.

I’ve laid out what I hope is the framework for an ongoing discussion of neural networks (without delving into AML/DML this go around) and their role within the actual network space.  I’m curious as to your thoughts (constructive, please).


More Stories By Dave Graham

Dave Graham is a Technical Consultant with EMC Corporation, where he focuses on designing and architecting private cloud solutions for commercial customers.
