As edge computing becomes a familiar feature of modern network architectures, many companies are turning to edge data centers to implement these plans. The rising demand has pushed data center providers to reconsider how they position themselves in the growing markets that stand to benefit most from edge computing.
Edge data centers are somewhat difficult to define, but they are generally smaller facilities that extend the edge of the network to deliver cloud computing resources and cached streaming content to local end users. Because they are positioned closer to those users, they can deliver faster service with minimal latency thanks to edge caching. For internet of things (IoT) networks, edge data centers also serve as clearinghouses for IoT-generated data that requires additional processing but is too time-sensitive to be transmitted back to a centralized cloud server.
From tier-1 markets (major cities like New York) with immense content demands to tier-2 markets (mid-sized cities like Pittsburgh, St. Louis, or Austin) looking to rapidly expand service and delivery speed, edge data centers have the potential to address needs in a variety of situations.
But for companies hoping to transform their operations and take advantage of this burgeoning IT infrastructure, it can be difficult to identify what distinguishes an edge data center from a conventional or hyperscale facility. While some organizations use micro data centers for their IoT needs, those facilities should not be confused with true edge data centers.
Checklist: How to Define an Edge Data Center
Does It Provide Extensive Local User Service?
Almost by definition, edge data centers should be located close to end users. Although many are managed remotely with minimal on-site staff, they should form an important part of the local network. Most edge data centers are located in tier-2 markets that lack easy access to larger, more powerful colocation facilities. According to Cisco's estimates, about one-third of all traffic will use these data centers and IoT devices to bypass the network core altogether by 2022, keeping data at the edge, near end users.
Simply being situated locally isn't enough to make a facility an "edge" data center. Unless a large percentage of local users rely on its services (streaming content, accessing cloud applications, playing games, implementing Industry 4.0 practices, etc.), the facility doesn't handle enough local traffic to be viably considered part of any network edge.
Is it Part of a Larger Network?
Edge data centers may provide a range of services on their own, but they typically connect back to a larger data center deployment that provides cloud resources and centralized data processing like machine learning or analytics. In some instances, they are even connected to multiple additional edge data centers, each one storing and caching data to deliver content as quickly as possible.
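The caching pattern described above can be sketched in a few lines: serve a request from the local edge cache when possible, and fall back to the central origin only on a miss, evicting the least recently used entry when space runs out. This is an illustrative sketch, not a real provider API; the names `EdgeCache` and `fetch_from_origin` are assumptions of my own.

```python
from collections import OrderedDict

def fetch_from_origin(key: str) -> str:
    """Stand-in for a slow request back to the central data center."""
    return f"content-for-{key}"

class EdgeCache:
    """Tiny LRU cache standing in for an edge facility's local store."""

    def __init__(self, capacity: int = 2):
        self.capacity = capacity
        self._store: "OrderedDict[str, str]" = OrderedDict()

    def get(self, key: str) -> str:
        if key in self._store:
            self._store.move_to_end(key)       # mark as recently used
            return self._store[key]            # fast local hit
        value = fetch_from_origin(key)         # slow path back to the core
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)    # evict least recently used
        return value

cache = EdgeCache(capacity=2)
cache.get("video-1")   # miss: fetched from the origin, then cached locally
cache.get("video-1")   # hit: served from the edge with minimal latency
```

The design choice mirrors the article's point: the edge facility answers most requests itself and only reaches back to the larger deployment when it has to.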
This scattered arrangement isn’t without potential downsides, however. Since each edge data center can be managed locally, there is a greater possibility of disruption throughout the network due to conflicting on-site processes and lack of coordination. If one data center receives software or server updates, it could implement changes that cause problems for other data centers in the network. These problems could result in system downtime, which can have serious consequences for even the biggest companies.
Is It Fast?
Speed should be the hallmark of any edge data center. The whole purpose of moving data processing to the edge of the network is to speed up response times by reducing latency. Since edge data centers are physically closer to end users, their performance should be faster in almost every situation. That improved performance shouldn't come at increased cost, either. Edge computing doesn't deliver better service by laying better cables or boosting power (although it will help improve the performance of 5G networks once they're fully implemented); it's simply a more efficient architecture for transferring and processing data, one that delivers content to local users with minimal latency.
If a local data center claiming to offer edge computing doesn’t deliver better performance at a better value, then it’s probably not providing a true edge service. A good edge data center should be able to provide measurable results demonstrating how it helps its customers deliver content faster and cheaper to local end users.
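A back-of-the-envelope calculation shows why physical proximity alone cuts latency. The sketch below assumes a signal speed in fiber of roughly 200,000 km/s (about two-thirds the speed of light in vacuum); real-world paths add routing hops and processing delay on top, and the distances are hypothetical examples.

```python
FIBER_KM_PER_MS = 200  # ~200 km of fiber traversed per millisecond

def propagation_rtt_ms(distance_km: float) -> float:
    """Minimum round-trip time due to distance alone, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_MS

edge_rtt = propagation_rtt_ms(50)      # edge facility ~50 km from the user
core_rtt = propagation_rtt_ms(2000)    # distant centralized cloud region
print(f"edge: {edge_rtt:.2f} ms  core: {core_rtt:.2f} ms")  # edge: 0.50 ms  core: 20.00 ms
```

Even ignoring congestion and server load, a nearby edge facility starts with a latency floor an order of magnitude lower than a distant core region.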
How Big is Its Footprint?
Many of today’s edge data centers are relatively new, either built within the last few years to meet rising demand or converted from outdated facilities. Regardless of their origins, edge data centers tend to be significantly smaller than traditional data centers, so they have to make careful use of their limited space and choose environmental cooling strategies accordingly. Companies hoping to use their services need to keep these constraints in mind when deciding which IT assets to deploy there.
Is It Reliable?
Being located on the edge of the network doesn’t make an edge data center any less mission-critical than its bigger, cloud-hosting cousins. Since a true edge center usually provides at least 75% of local internet content to the surrounding market, even a temporary loss of service can be devastating.
Because reliable uptime is so important, anything below a tier-3 data center should not be considered viable for edge computing. Tier-3 centers offer 99.982% uptime and are considered the standard for content-heavy media providers like Netflix and Facebook as well as companies in the healthcare and financial service industries.
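The uptime percentage cited above translates directly into an annual downtime budget. A quick sketch of the arithmetic (the helper name is my own; the tier figure comes from the text):

```python
def max_downtime_hours(uptime_pct: float, hours_per_year: float = 8760) -> float:
    """Maximum downtime per year implied by an uptime percentage."""
    return (1 - uptime_pct / 100) * hours_per_year

tier3 = max_downtime_hours(99.982)  # tier-3 standard cited above
print(f"Tier 3 (99.982%): {tier3:.2f} hours/year")  # ~1.58 hours/year
```

In other words, a tier-3 facility can be unavailable for only about an hour and a half per year, which is why it is treated as the floor for edge workloads.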
Data centers can take a number of forms, but not all of them qualify as true edge data centers. By asking these key questions, organizations can better determine whether a given facility will address their edge computing needs.