An intimate but disconnected pairing, Red Hat on edge complexity


Edge is complex. Once we get past the shuddering enormity and shattering reality of understanding this basic statement, we can perhaps start to build frameworks, architectures and services around the task in front of us. Last year’s State Of The Edge report from The Linux Foundation said it succinctly: “The edge, with all of its complexities, has become a fast-moving, forceful and demanding industry in its own right.”

Red Hat appears to have taken a stoic appreciation of the complex edge management role that lies ahead for every enterprise now moving its IT stack to straddle this space. The company says it views edge computing as an opportunity to “extend the open hybrid cloud” all the way to the data sources and end users that populate our planet.

Pointing to edge endpoints as divergent as those found on the International Space Station and your local neighborhood pharmacy, Red Hat now aims to clarify and validate the portions of its own platform that address specific edge workload challenges.

At the bleeding edge of edge

The mission, although edge and cloud are intimately tied, is to enable compute decisions outside of the data center, at the bleeding edge of edge.

“Organizations are looking at edge computing as a way to optimize performance, cost and efficiency to support a variety of use cases across industries ranging from smart city infrastructure, patient monitoring, gaming and everything in between,” said Erica Langhi, senior solution architect at Red Hat.


Clearly, the concept of edge computing presents a new way of looking at where and how information is accessed and processed to build faster, more reliable and secure applications. Langhi advises that although many software application developers may be familiar with the concept of decentralization in the wider networking sense of the term, there are two key considerations to focus on for an edge developer.

“The first is around data consistency,” said Langhi. “The more dispersed edge data is, the more consistent it needs to be. If multiple users try to access or modify the same data at the same time, everything needs to be synced up. Edge developers need to think about messaging and data streaming capabilities as a powerful foundation to support data consistency for building edge-native data transport, data aggregation and integrated edge application services.”
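To make that data streaming point a little more concrete, the sketch below shows an edge device publishing sensor readings to a stream that a central aggregation service can consume in order, a common foundation for keeping dispersed data in sync. It is a minimal illustration only: the article names no specific technology, so Kafka via the kafka-python client, the broker address and the topic name are all assumptions, not Red Hat tooling.

```python
# Minimal sketch (illustrative only) of data streaming as a foundation for
# consistency: an edge device publishes readings to a stream so a downstream
# aggregator can reconcile state in order. Broker and topic are hypothetical.
import json
import time

from kafka import KafkaProducer

BROKER = "broker.example.com:9092"   # hypothetical edge-site broker
TOPIC = "site-42.telemetry"          # hypothetical topic for one location


def read_sensor() -> float:
    """Stand-in for a real device read on the edge node."""
    return 21.5


def main() -> None:
    producer = KafkaProducer(
        bootstrap_servers=BROKER,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        acks="all",  # wait for broker acknowledgement before treating data as sent
    )
    while True:
        producer.send(TOPIC, {"ts": time.time(), "celsius": read_sensor()})
        producer.flush()
        time.sleep(10)


if __name__ == "__main__":
    main()
```

Acknowledged, ordered delivery of this kind is one small building block for the “edge-native data transport, data aggregation and integrated edge application services” Langhi describes.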

Edge’s sparse requirements

This need to highlight the intricacies of edge environments stems from the fact that this is a different kind of computing. There is no customer handing over a “requirements specification” document and a list of user interface preferences; at this level, we are working with more granular, machine-level technology constructs.

The second key consideration for edge developers is addressing security and governance.

“Operating across a large surface area of data means the attack surface is now extended beyond the data center with data at rest and in motion,” explained Langhi. “Edge developers can adopt encryption techniques to help protect data in these scenarios. With increased network complexity as thousands of sensors or devices are connected, edge developers should look to implement automated, consistent, scalable and policy-driven network configurations to support security.”

Finally, she says, by selecting an immutable operating system, developers can enforce a reduced attack surface, helping organizations deal with security threats more efficiently.
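As one concrete illustration of the encryption point above, the sketch below encrypts a telemetry payload before it leaves the device, so the data is protected in motion and when cached locally at rest. It uses the Fernet recipe from the Python cryptography package as a generic example; the key handling shown is deliberately simplified and nothing here reflects a specific Red Hat product.

```python
# Minimal sketch of protecting edge data in motion and at rest with symmetric
# encryption, using the Fernet recipe from the "cryptography" package.
# Key management is simplified on purpose; in practice the key would come from
# a secrets store or hardware-backed keystore, not a local file.
import json
import time

from cryptography.fernet import Fernet


def load_or_create_key(path: str = "edge.key") -> bytes:
    """Load the device key, generating one on first run (simplified)."""
    try:
        with open(path, "rb") as fh:
            return fh.read()
    except FileNotFoundError:
        key = Fernet.generate_key()
        with open(path, "wb") as fh:
            fh.write(key)
        return key


def encrypt_reading(celsius: float, fernet: Fernet) -> bytes:
    payload = json.dumps({"ts": time.time(), "celsius": celsius}).encode("utf-8")
    return fernet.encrypt(payload)  # safe to transmit or cache on local disk


if __name__ == "__main__":
    f = Fernet(load_or_create_key())
    token = encrypt_reading(21.5, f)
    # The receiving service, holding the same key, recovers the reading:
    print(f.decrypt(token))
```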

But what truly changes the game from traditional software development to edge infrastructures for developers is the variety of target devices and their integrity. This is the view of Markus Eisele in his role as developer strategist at Red Hat.

“While developers usually think about frameworks and architects think about APIs and how to wire everything back together, a distributed system that has computing units at the edge requires a different approach,” said Eisele.

What is needed is a comprehensive and secured supply chain. This starts with integrated development environments — Eisele and team point to Red Hat OpenShift Dev Spaces, a zero-configuration development environment that uses Kubernetes and containers — that are hosted on secured infrastructures to help developers build binaries for a variety of target platforms and computing units.

Binaries on the base

“Ideally, the automation at work here goes way beyond successful compilation, onward into tested and signed binaries on verified base images,” said Eisele. “These scenarios can become very challenging from a governance perspective but need to be repeatable and minimally invasive to the inner and outer loop cycles for developers. While not much changes at first glance, there is even less margin for error. Especially when thinking about the security of the generated artifacts and how everything comes together while still enabling developers to be productive.”

Eisele’s inner and outer loop reference acknowledges the complexity at work here. The inner loop is the individual developer’s workflow, where code can be changed and tested quickly; the outer loop is the point at which code is committed to a version control system or moves through a software pipeline closer to production deployment. For further clarification, the software artifacts referenced above cover the whole panoply of elements a developer might use or create to build code, so this could include documentation and annotation notes, data models, databases, other forms of reference material and the source code itself.
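To ground the idea of “tested and signed binaries on verified base images”, the sketch below hashes a build artifact, signs the digest in the pipeline and verifies the signature before an edge node would run it. It is a generic illustration using an Ed25519 key from the Python cryptography package, not a description of Red Hat’s actual supply chain; real pipelines would typically rely on dedicated signing tooling, protected keys and a trust policy rather than a locally generated key and the hypothetical artifact name used here.

```python
# Generic sketch of the "signed binaries" step in a secured supply chain:
# hash the built artifact, sign the digest, verify before deployment.
# Key distribution and trust policy are omitted; artifact name is hypothetical.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def digest(path: str) -> bytes:
    """SHA-256 digest of a build artifact (e.g. a container image tarball)."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


def sign_artifact(path: str, key: Ed25519PrivateKey) -> bytes:
    return key.sign(digest(path))


def verify_artifact(path: str, signature: bytes, pub: Ed25519PublicKey) -> bool:
    try:
        pub.verify(signature, digest(path))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()      # in practice: a protected CI key
    sig = sign_artifact("app-binary", key)  # "app-binary" is a placeholder name
    print(verify_artifact("app-binary", sig, key.public_key()))
```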


What we know for sure is that unlike data centers and the cloud, which have been in place for decades now, edge architectures are still evolving, and at a far faster rate.

Parrying purpose-builtness

“The design decisions that architects and developers make today will have a lasting impact on future capabilities,” stated Ishu Verma, technical evangelist of edge computing at Red Hat. “Some edge requirements are unique to each industry; however, it’s important that design decisions are not purpose-built just for the edge, as doing so may limit an organization’s future agility and ability to scale.”

The edge-centric Red Hat engineers insist that a better approach involves building solutions that can work on any infrastructure — cloud, on-premises and edge — as well as across industries. The consensus here appears to be solidly gravitating towards choosing technologies like containers, Kubernetes and lightweight application services that can help establish future-ready flexibility.

“The common elements of edge applications across multiple use cases include modularity, segregation and immutability, making containers a good fit,” said Verma. “Applications will need to be deployed on many different edge tiers, each with their unique resource characteristics. Combined with microservices, containers representing instances of functions can be scaled up or down depending on underlying resources or conditions to meet the needs of customers at the edge.”
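As a rough illustration of that scale-up, scale-down idea, the sketch below adjusts the replica count of a containerized edge service based on an observed utilization figure, using the official Kubernetes Python client. The deployment name, namespace and thresholds are hypothetical, how utilization is measured is left out, and in many real deployments Kubernetes’ own HorizontalPodAutoscaler would fill this role; this is a sketch of the pattern Verma describes, not a production autoscaler.

```python
# Rough sketch: scale a containerized edge workload up or down from an observed
# utilization figure, via the official Kubernetes Python client. Deployment
# name, namespace and thresholds are hypothetical; an HPA would usually do this.
from kubernetes import client, config

DEPLOYMENT = "telemetry-aggregator"   # hypothetical edge workload
NAMESPACE = "edge-site-42"            # hypothetical namespace for one location


def desired_replicas(cpu_utilization: float, current: int) -> int:
    """Very simple policy: add a replica above 80% CPU, drop one below 20%."""
    if cpu_utilization > 0.80:
        return current + 1
    if cpu_utilization < 0.20 and current > 1:
        return current - 1
    return current


def scale(cpu_utilization: float) -> None:
    config.load_kube_config()  # or load_incluster_config() when running on-cluster
    apps = client.AppsV1Api()

    current = apps.read_namespaced_deployment_scale(DEPLOYMENT, NAMESPACE)
    target = desired_replicas(cpu_utilization, current.spec.replicas)

    if target != current.spec.replicas:
        apps.patch_namespaced_deployment_scale(
            DEPLOYMENT, NAMESPACE, {"spec": {"replicas": target}}
        )


if __name__ == "__main__":
    scale(cpu_utilization=0.9)  # e.g. a reading from local node metrics
```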

Edge, but at scale

All of these challenges lie ahead of us, then. The message may be don’t panic, but the task is made harder by the need to engineer software applications for edge environments that can scale securely. Edge at scale comes with the challenge of managing thousands of edge endpoints deployed at many different locations.

“Interoperability is key to edge at scale, since the same application must be able to run anywhere without being refactored to fit a framework required by an infrastructure or cloud provider,” said Salim Khodri, edge go-to-market specialist of EMEA at Red Hat.

Khodri makes his comments in line with the fact that developers will want to know how they can harness edge benefits without modifying how they develop, deploy and maintain applications. That is, they want to understand how they can accelerate edge computing adoption and combat the complexity of a distributed deployment by making the experience of programming at the edge as consistent as possible using their existing skills.

“Consistent tooling and modern application development best practices including CI/CD pipeline integration, open APIs and Kubernetes-native tooling can help address these challenges,” explained Khodri. “This is in order to provide the portability and interoperability capabilities of edge applications in a multi-vendor environment along with application lifecycle management processes and tools at the distributed edge.”

It would be tough to count the key points of advice here on one hand; two hands would be a challenge, and it might require some toes as well. The watchwords are perhaps open systems, containers and microservices, configuration, automation and, of course, data.

Decentralized edge might start from data center DNA and consistently retain its intimate relationship with the cloud-native IT stack backbone, but it remains, in essence, an intimate yet disconnected pairing.
