Realize opportunities at the edge with a distributed cloud database

Image: Studio Deemerwha/Adobe Stock

The hype around edge computing is growing, and rightly so. By bringing compute and storage closer to where data is generated and consumed, such as IoT devices and end-user applications, organizations can deliver low-latency, reliable and highly available experiences, even for the most bandwidth- and data-intensive applications.

While delivering fast, reliable, immersive and seamless customer experiences is one of the biggest drivers of the technology, another often underestimated reason is that edge computing helps organizations comply with strict data privacy and governance laws that hold companies accountable for moving sensitive information to central cloud servers.

Improved network resiliency and lower bandwidth costs are also driving adoption. In short, without breaking the bank, edge computing can enable compliant, always-on, always-fast applications anywhere in the world.

SEE: Research: Digital transformation initiatives focus on collaboration (TechRepublic Premium)

It’s no surprise that market research firm IDC predicts that edge networks will account for more than 60% of all cloud infrastructure deployed by 2023, and that global edge computing spending will reach $274 billion by 2025.

Additionally, with the influx of IoT devices – the State of IoT Spring 2022 report estimates that approximately 27 billion devices will be connected to the internet by 2025 – businesses have an opportunity to leverage the technology to innovate at the edge and stand out from their competitors.

In this article, I’ll walk through the progress of edge computing deployments and discuss ways to develop an edge strategy for the future.

From on-premises servers to the edge of the cloud

The first instantiations of edge computing deployments were custom hybrid clouds. Backed by a cloud data center, applications and databases ran on on-premises servers that a company was responsible for deploying and managing. In many cases, a basic batch file transfer system was used to move data between the on-premises servers and the backing cloud data center.

Between capital costs and operational expenses, scaling and managing on-premises data centers can be out of reach for many organizations. Not to mention, there are use cases such as offshore oil rigs and aircraft where setting up on-premises servers is simply not possible due to factors such as space and power requirements.

To address concerns about the cost and complexity of managing distributed edge infrastructure, it is important that the next generation of edge computing workloads take advantage of the managed edge infrastructure solutions offered by the leading cloud service providers, including AWS Outposts, Google Distributed Cloud and Azure Private MEC.

Rather than deploying multiple on-premises servers to store and process data, organizations can let these managed edge infrastructure offerings do the job. They save money by reducing the expense of managing distributed servers, while still benefiting from the low latency that edge computing offers.

Additionally, offerings such as AWS Wavelength enable edge deployments to utilize the high bandwidth and low latency features of 5G access networks.

Using managed cloud edge infrastructure and accessing high-bandwidth edge networks solves only part of the problem. Another key part of the edge technology stack is the database and data synchronization.

In edge deployments that rely on outdated file-based data transfer mechanisms, for example, edge applications run the risk of operating on stale data. Therefore, it is important for organizations to develop an edge strategy that accounts for a database suited to today’s distributed architectures.

Using an edge-ready database to bolster edge strategies

Organizations can store and process data at multiple tiers in a distributed architecture: in central cloud data centers, at cloud edge locations, and on end-user devices. Service performance and availability improve at each tier that sits closer to where data is generated and consumed.

To this end, integrating a database with the on-device application provides the highest levels of reliability and responsiveness, even when network connectivity is unreliable or non-existent.
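To make the on-device, local-first pattern concrete, here is a minimal sketch in Python. The LocalStore class and its outbox queue are hypothetical stand-ins for an embedded, sync-capable database; no specific vendor API is implied.

```python
import json
import sqlite3
import time


class LocalStore:
    """Hypothetical local-first store: writes land on the device immediately,
    and a pending-sync outbox is drained whenever connectivity returns."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, body TEXT)")
        self.db.execute("CREATE TABLE IF NOT EXISTS outbox (id TEXT, body TEXT, ts REAL)")

    def put(self, doc_id, doc):
        body = json.dumps(doc)
        # The app reads and writes locally, so it stays responsive offline.
        self.db.execute("INSERT OR REPLACE INTO docs VALUES (?, ?)", (doc_id, body))
        # Queue the change for later replication to the edge/cloud tier.
        self.db.execute("INSERT INTO outbox VALUES (?, ?, ?)", (doc_id, body, time.time()))
        self.db.commit()

    def drain_outbox(self, push):
        """Call when the network is available; `push` sends one change upstream."""
        rows = self.db.execute("SELECT id, body FROM outbox ORDER BY ts").fetchall()
        for doc_id, body in rows:
            push(doc_id, json.loads(body))
        self.db.execute("DELETE FROM outbox")
        self.db.commit()


store = LocalStore()
store.put("order::1001", {"status": "placed", "items": 3})
# Later, when a connection to the edge tier is available:
store.drain_outbox(lambda doc_id, doc: print("pushed", doc_id, doc))
```

The essential property is that reads and writes never wait on the network: the application stays responsive offline, and changes replicate opportunistically once connectivity returns.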

However, there are cases where local data processing is not sufficient to obtain relevant information or where devices are unable to store and process local data. In such cases, distributed applications and databases at the edge of the cloud can process data from all downstream edge devices while taking advantage of the edge network’s low-latency, high-bandwidth channels.

Of course, hosting a database in central cloud data centers is essential for long-term persistence and aggregation of data across edge locations. In this tiered architecture, processing most of the data at the edge minimizes the amount of data routed over the internet to central databases.
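As an illustration of processing most data at the edge, the following Python sketch shows an edge node collapsing high-frequency device readings into per-minute summaries, so that only compact aggregates travel upstream to the central database. The payload shapes and function name are assumptions for illustration, not a prescribed design.

```python
from statistics import mean


def summarize_readings(readings, window_s=60):
    """Collapse raw per-second sensor readings into one summary per time window,
    so only the summaries need to cross the WAN to the central database."""
    buckets = {}
    for r in readings:  # r = {"device": str, "ts": float, "value": float}
        key = (r["device"], int(r["ts"] // window_s))
        buckets.setdefault(key, []).append(r["value"])
    return [
        {
            "device": device,
            "window_start": window * window_s,
            "count": len(values),
            "avg": round(mean(values), 3),
            "max": max(values),
        }
        for (device, window), values in sorted(buckets.items())
    ]


raw = [{"device": "pump-7", "ts": t, "value": 40 + (t % 5)} for t in range(0, 120)]
print(summarize_readings(raw))  # 120 raw readings become 2 upstream records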

With the right distributed database, organizations can ensure that data is consistent and in sync at every tier. This does not mean duplicating or replicating all data at each tier; rather, it is about transferring only the relevant data, in a way that is unaffected by network disturbances.

Take retail, for example. Only store-related data, such as in-store promotions, is transferred to each store’s edge location, and promotions can be synchronized in real time. This ensures that each store works only with the data relevant to that location.
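Here is a minimal sketch of that filtering idea, assuming documents are tagged with the stores they apply to. It loosely resembles the channel- or filter-based replication found in sync-capable databases, but the data shapes are illustrative only.

```python
PROMOTIONS = [
    {"id": "promo-1", "stores": ["nyc-014"], "offer": "10% off espresso"},
    {"id": "promo-2", "stores": ["sfo-002", "nyc-014"], "offer": "BOGO pastries"},
    {"id": "promo-3", "stores": ["chi-101"], "offer": "Free refills"},
]


def changes_for_store(store_id, documents):
    """Filtered replication: only documents tagged for this store location
    are shipped to its edge node; everything else stays upstream."""
    return [doc for doc in documents if store_id in doc["stores"]]


# The edge node for store nyc-014 receives just its own promotions.
print(changes_for_store("nyc-014", PROMOTIONS))
```

In a real deployment this filtering would typically run inside the replication layer rather than in application code, but the effect is the same: each edge location receives only its own slice of the data.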

SEE: Microsoft Power Platform: What you need to know about it (free PDF) (TechRepublic)

It is also important to understand that in distributed environments, data governance can become a challenge. At the edge, organizations often deal with ephemeral data, and the need to enforce data access and retention policies at the granularity of an edge location makes things extremely complex.

That’s why organizations planning their edge strategies should consider a data platform that can grant access to specific subsets of data only to authorized users and enforce data retention standards across all tiers and devices, while ensuring that sensitive data never leaves the edge.

An example of this would be a cruise line granting a ship’s crew access to voyage-related data for the duration of a voyage. At the end of the voyage, that access is automatically revoked, with or without an internet connection, to ensure data protection.
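A minimal sketch of such a time-bounded grant follows, assuming a hypothetical VoyageGrant object whose expiry is evaluated locally on the device, so access lapses even when the ship is offline.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class VoyageGrant:
    """Hypothetical access grant scoped to a single voyage. Because the expiry
    is checked locally, access lapses even without an internet connection."""
    role: str
    voyage_id: str
    expires_at: datetime

    def allows(self, user_role, doc, now=None):
        now = now or datetime.now(timezone.utc)
        return (
            user_role == self.role
            and doc.get("voyage_id") == self.voyage_id
            and now < self.expires_at
        )


grant = VoyageGrant(
    role="crew",
    voyage_id="voyage-2023-042",
    expires_at=datetime(2023, 6, 30, tzinfo=timezone.utc),
)
doc = {"voyage_id": "voyage-2023-042", "manifest": "..."}

print(grant.allows("crew", doc, now=datetime(2023, 6, 15, tzinfo=timezone.utc)))  # True: voyage underway
print(grant.allows("crew", doc, now=datetime(2023, 7, 1, tzinfo=timezone.utc)))   # False: voyage ended, access revoked
```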

Moving forward, edge first

The right edge strategy enables organizations to capitalize on the growing ocean of data emanating from edge devices. And with the increase in applications at the edge, organizations looking to be at the forefront of innovation should extend their core cloud strategies with edge computing.

Priya Rajagopal, Director of Product Management at Couchbase

Priya Rajagopal is Director of Product Management at Couchbase (NASDAQ: BASE), provider of a leading modern database for enterprise applications that 30% of Fortune 100 companies depend on. With more than 20 years of experience building software solutions, Priya is a co-inventor on 22 technology patents.