Edge computing has a bright future, although no one really knows what it looks like • The Register
Edge computing is easy to sell but hard to define. More a philosophy than any single architecture, edge and cloud sit on a spectrum: the current cloud service model often depends on in-browser processing at the client, and even the most sophisticated edge deployments still rely on central infrastructure.
That philosophy, as most Reg readers will know, means pushing as much processing and computation as possible out to the points where data is collected and used.
If biology is any guide, edge computing is a good evolutionary strategy. The octopus has a central brain, but each tentacle can analyze its environment, make decisions, and react to events. The human gut largely takes care of itself, with roughly the same processing power as a deer, while the eyes and ears do local processing before relaying data back. All of these natural systems deliver efficiency, robustness, and flexibility – attributes that IT should expect of edge deployments too.
But these natural analogies also illustrate another of edge's most important aspects: its diversity. 5G is often cited as the quintessential edge use case. It owes much of its potential to a design built around edge principles, shifting decision making about the configuration and management of connections out to distributed control systems. The combination of high bandwidth, low latency, and prioritized traffic management, all for moving targets, simply cannot work unless as much processing as possible takes place as close as possible to the radios (and therefore to the users).
But another high-profile edge application, transportation, requires a very different approach. An aircraft can generate a terabyte of performance and diagnostic data on a single flight, far beyond the capacity of in-flight data links.
Spread that across an ever-changing global fleet, and central control is not an option. Autonomous on-board processing, prioritization of immediate safety information – such as moment-to-moment engine parameters – over whatever real-time links are available, and efficient bulk data retrieval when possible lead to design decisions far removed from 5G engineering.
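The on-board pattern described above – safety data first, bulk diagnostics when bandwidth allows – can be sketched as a simple priority queue. This is an illustrative toy, not any real avionics system; the priority levels, class names, and telemetry fields are all invented.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch of an on-board telemetry scheduler: safety-critical
# readings go over the narrow real-time link first; bulk diagnostics wait
# for retrieval on the ground. Lower number = higher priority.
SAFETY, DIAGNOSTIC, BULK = 0, 1, 2

@dataclass(order=True)
class Reading:
    priority: int
    timestamp: float
    payload: dict = field(compare=False)  # payload never affects ordering

class TelemetryScheduler:
    def __init__(self):
        self._queue = []

    def collect(self, reading: Reading):
        heapq.heappush(self._queue, reading)

    def drain_uplink(self, link_budget: int):
        """Send up to link_budget readings, highest priority first."""
        sent = []
        while self._queue and link_budget > 0:
            sent.append(heapq.heappop(self._queue))
            link_budget -= 1
        return sent

sched = TelemetryScheduler()
sched.collect(Reading(BULK, 1.0, {"vibration_trace": "..."}))
sched.collect(Reading(SAFETY, 2.0, {"egt_celsius": 640}))
sched.collect(Reading(DIAGNOSTIC, 3.0, {"fuel_flow": 1.2}))

# With a link budget of two, safety data jumps the queue even though
# it was collected after the bulk trace.
sent = sched.drain_uplink(link_budget=2)
```

The same shape – local buffering plus priority-aware draining – recurs in most intermittently connected edge designs.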
Each industry presented as a natural fit for edge – IoT, digital health, manufacturing, energy, logistics – challenges the idea of edge as a single discipline, a point the OpenStack project makes in its paper "Edge Computing: Next Steps in Architecture, Design and Testing". Yet by focusing on common characteristics, and their shared benefits and challenges, we can see where it could be heading.
Edge computing needs scalable and flexible networking. Even if a particular deployment is stable in size and resources over a long period, to be economical it must be built from general-purpose tools and techniques capable of meeting a wide variety of demands. To this end, software-defined networking (SDN) has become a priority for future edge developments, although a run of recent research has identified areas where it is not yet quite up to the task.
SDN's typical approach divides networking into two tasks, control and data transfer, handled by a control plane and a data plane, the first managing the second through dynamic reconfiguration based on a combination of rules and monitoring. This seems a good fit for edge computing, but SDN typically has a centralized control plane that expects a holistic view of all network activity. As researchers from Imperial College London point out in a recent paper [PDF], that is neither scalable nor robust, two key requirements. Various approaches – multiple control planes, more intelligence in edge switching hardware, dynamic network partitioning by demand, geography and flow – are being explored, as are the interactions between security and SDN in edge network management.
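The control-plane/data-plane split can be shown in miniature: a central controller holds the policy and pushes match/action rules down to switches that do nothing but look them up. This is a deliberately toy sketch – class names and behavior are hypothetical, and real controllers (OpenFlow-based or otherwise) are vastly richer.

```python
# Toy illustration of the SDN split: policy lives in a central control
# plane; data-plane switches only consult their installed flow tables.

class DataPlaneSwitch:
    """Forwards packets using only its installed flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination) -> action (output port)

    def forward(self, dst):
        # In real SDN an unknown flow is punted to the controller;
        # here we simply drop it.
        return self.flow_table.get(dst, "drop")

class Controller:
    """Central control plane with a (problematic, per the article)
    holistic view of every switch it manages."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def install_rule(self, switch_name, dst, out_port):
        # Dynamic reconfiguration: rules change here, not in the switch code.
        self.switches[switch_name].flow_table[dst] = out_port

ctl = Controller()
sw = DataPlaneSwitch("edge-sw-1")
ctl.register(sw)
ctl.install_rule("edge-sw-1", "10.0.0.7", out_port="port2")

print(sw.forward("10.0.0.7"))  # port2
print(sw.forward("10.0.0.9"))  # drop -- no rule installed
```

The scaling problem the Imperial researchers describe falls out of this shape directly: every switch depends on one `Controller`, so its reachability and capacity bound the whole network.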
The conclusion, here as elsewhere, is that this is a very active area of research, and that although the potential is not yet realized, these techniques will form the basis of effective edge networking.
That conclusion leads to another aspect of edge development: what rules should generally apply to developing and managing infrastructure, services, and applications.
Edge development and management
As edge architectures continue to evolve, extending DevOps principles to infrastructure brings more visibility into how things work, adoption of common, proven open source components and approaches, and the benefits of rapid reconfiguration and deployment practices.
With the everything-as-code approach, exemplified by SDN and container management and deployment tools like Kubernetes, the variety of edge architectures, from highly centralized to highly distributed, can be managed with the same tools – an important consideration as technologies evolve and find their place in the market.
Kubernetes provides a common abstraction layer over physical resources such as compute, storage, and networking, enabling standard deployment anywhere, including to heterogeneous edge devices across varied infrastructures. This dovetails with the growing capability of cross-platform development tools, enabling a device-independent approach suited to the economics of the edge and its need to cultivate diverse ecosystems.
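Concretely, the same declarative spec can be applied to a cloud cluster or an edge cluster, with scheduling constraints steering workloads onto particular hardware. A minimal sketch, assuming an Arm-based edge fleet – the app name, image URL, and replica count below are invented, though `kubernetes.io/arch` is a standard well-known node label:

```yaml
# Hypothetical Deployment: the same manifest works in cloud or on-prem;
# the nodeSelector pins pods to Arm64 edge nodes only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-aggregator
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sensor-aggregator
  template:
    metadata:
      labels:
        app: sensor-aggregator
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64   # schedule only onto Arm edge hardware
      containers:
      - name: aggregator
        image: example.org/sensor-aggregator:1.0
```

The abstraction is the point: the operator describes desired state once, and the scheduler reconciles it against whatever heterogeneous nodes happen to be present.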
With all of this, monitoring and testing must follow. One approach to creating maintainable edge deployments is artifact review, where everything that is created to be part of an overall system is sufficiently well documented to be tested and developed by others, with reproducible results.
In general, all the ideas of DevOps best practice – communication between teams, standardization of practices, automation where possible, instrumentation – need to be strengthened to cope with the new scale, ever-changing composition, and variety of business demands that edge brings.
The problem of managing edge deployments which, in many cases such as IoT, have end nodes of varying age, capability, and technology quickly leads to extremely complex permutations of configurations. Mobile app developers know this all too well, with constant decisions about what minimum configuration to support, how to handle different geographies, and how to support customers who deviate from the norm. They are a good, if unwitting, test bed for the realities of some aspects of the edge.
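The combinatorics behind that complaint are easy to demonstrate. The attribute axes and values below are invented for illustration; real fleets have many more of both.

```python
from itertools import product

# Back-of-envelope sketch of how edge configuration permutations explode.
# Four modest, made-up attribute axes already yield over a hundred
# distinct device configurations to test against.
axes = {
    "hardware_rev": ["v1", "v2", "v3"],
    "os_version":   ["10.1", "10.2", "11.0", "11.1"],
    "radio":        ["lte", "5g", "wifi"],
    "region":       ["eu", "us", "apac"],
}

configs = list(product(*axes.values()))
print(len(configs))  # 3 * 4 * 3 * 3 = 108 distinct combinations
```

Each new axis multiplies the count again, which is why exhaustive test matrices give way to techniques like pairwise coverage and staged rollouts long before fleets reach real-world scale.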
Edge standards are being developed to keep this under control. ETSI, the European telecommunications standards body, and the 3GPP mobile standards group have collaborated [PDF] on integrating cloud services and mobile networks at the edge, including how applications discover edge services. Internet systems like DNS carry an underlying assumption that the entities they expose stay where they are; the edge, especially the mobile edge, does not work that way.
Another major focus of activity is LF Edge, the Linux Foundation’s edge group, which has just released EdgeX 2.0 Ireland, a major update to its nascent standards package. This includes secure APIs to connect devices and networks and manage data channels, and the Open Retail Reference Architecture (ORRA), a common deployment platform to manage apps, devices, and services.
Although the EdgeX standard has seen some churn, the intention is to use it as the basis for a Long Term Support (LTS) release later in 2021. The standards package ships in a Docker container, underlining the consensus that edge will need to be built along DevOps lines to be viable.
Edge’s hidden flaws
For edge to make a good business case, it has to be the most effective way to solve a problem. For the poster children – 5G, transport, IoT – it is often the only possible solution. But in more general cases, it has to beat the cloud-first, device-second model on efficiency, and the big cloud providers are fierce competition here, with their hyper-efficient internal management systems and economies of scale.
A review [PDF] of the technological, economic, and industrial future of edge computing in the European Union notes that Google claims its administrators each monitor 10,000 servers, against one administrator per hundred servers in standard enterprise-class data centers, and that Amazon’s data centers come out three and a half times more energy efficient in a similar comparison. If your edge deployment is to take much data processing away from the cloud, these are economies of scale it may have to contend with. Raw economics like these are what make the clouds so dominant, and they are not changing.
Security is also very hard. Moving workloads from the data center to the edge removes physical protection against theft and vandalism, and managing security credentials for thousands or hundreds of thousands of nodes, when connectivity or power may be intermittent for some, is not a solved problem. With care, edge can be more secure than standard approaches – many IoT sensors have few spare resources for strong encryption, but a local control node can add it before data is sent on.
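That gateway pattern – constrained sensors emit raw readings, a better-resourced local node wraps them before anything leaves the site – can be sketched with the standard library. To keep the example self-contained it adds an HMAC-SHA256 authenticity tag rather than full encryption (Python's stdlib has no AES; a real gateway would also encrypt, e.g. with AES-GCM via a crypto library). The key, field names, and sensor IDs are all hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical pre-shared key, provisioned to the gateway at install time.
GATEWAY_KEY = b"shared-secret-provisioned-at-install"

def wrap(reading: dict) -> dict:
    """Gateway side: attach an authenticity tag to a plaintext reading."""
    body = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(GATEWAY_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify(envelope: dict) -> bool:
    """Upstream side: recompute the tag and compare in constant time."""
    expected = hmac.new(GATEWAY_KEY, envelope["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["tag"])

env = wrap({"sensor": "temp-03", "celsius": 21.4})
print(verify(env))   # True

# Any tampering in transit breaks the tag.
env["body"] = env["body"].replace("21.4", "99.9")
print(verify(env))   # False -- tampering detected
```

The sensor itself never needs the spare cycles or key material: the gateway absorbs that cost on behalf of its whole local fleet, which is exactly the division of labor the paragraph above describes.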
But deployment at the edge increases a system’s attack surface, so active monitoring and log analysis for signs of trouble must be scaled to match.
The future of edge computing, more than most growing technologies, depends on everyone in the business. From top academic researchers to the DevOps machine room, all layers of the industry need to be aware of what the others are doing. For edge to work, a whole mesh of existing ideas on infrastructure, management, development, monitoring, security, and architecture has to explore the options together.
No organization, not even the tech giants, can take it where it doesn’t fit, and none can hold it back once it clicks into place as viable innovation. Even without the hype, life on the edge is going to be interesting. ®