As the networking market continues its evolution to 5G, companies face new challenges in ensuring that performance meets objectives and, for service providers, that Service Level Agreements (SLAs) are met. In addition to typical network performance monitoring, edge computing environments add a few new wrinkles.

Monitoring edge computing environments can pose several challenges and issues. Here are some common ones:

  1. Distributed nature. Edge computing involves a distributed network of devices and systems located at various edge locations. Monitoring such a diverse and geographically dispersed infrastructure can be challenging. Ensuring consistent and reliable monitoring across all edge nodes can be difficult.
  2. Network connectivity. Edge devices may operate in remote or hostile environments with limited or intermittent network connectivity. Monitoring solutions must be designed to handle network disruptions gracefully and provide effective monitoring even when devices experience intermittent connectivity.
  3. Scalability. Edge computing environments can scale to accommodate a large number of edge devices and locations. Monitoring systems must be able to scale accordingly and handle the increasing volume of data generated by these devices without sacrificing performance or accuracy.
  4. Security and privacy. Edge computing involves processing sensitive data at the edge, which raises concerns about security and privacy. Monitoring solutions need to ensure that monitoring data is securely transmitted, stored, and accessed. Additionally, monitoring tools should not compromise the privacy of the data being processed at the edge.
  5. Heterogeneous infrastructure. Edge computing environments often consist of a variety of devices, operating systems, and software stacks from different vendors. Monitoring solutions should be compatible with this heterogeneous infrastructure and provide a unified view of the entire edge ecosystem.
  6. Latency and real-time monitoring. Edge computing is often deployed to reduce latency and enable real-time processing. Monitoring tools need to capture and analyze data in near real-time to provide timely insights and alerts. Any delays or latency introduced by the monitoring process itself should be minimized.
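The connectivity challenge above (point 2) is often handled by buffering telemetry locally and flushing when the uplink returns. The sketch below is a minimal, hypothetical illustration of that pattern; the class name, fields, and `send` callback are assumptions, not any particular product's API:

```python
import json
import time
from collections import deque

class EdgeMetricsBuffer:
    """Buffer metric samples locally when the uplink is down; flush on reconnect."""

    def __init__(self, max_samples=10_000):
        # Bounded buffer: the oldest samples are dropped first if an outage is long.
        self.buffer = deque(maxlen=max_samples)

    def record(self, name, value):
        self.buffer.append({"ts": time.time(), "name": name, "value": value})

    def flush(self, send):
        """Try to send every buffered sample; stop and re-queue on failure."""
        sent = 0
        while self.buffer:
            sample = self.buffer.popleft()
            try:
                send(json.dumps(sample))
                sent += 1
            except ConnectionError:
                # Uplink still down: put the sample back and try again later.
                self.buffer.appendleft(sample)
                break
        return sent
```

The bounded deque is the key design choice: it caps memory on a constrained edge device, trading away the oldest samples during a prolonged outage rather than crashing the node.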

Addressing these challenges requires specialized monitoring solutions designed specifically for edge computing environments, considering the unique characteristics and requirements of the edge and the expectations of 5G.  Below are three key metrics that must be monitored in these new networks.

5G infrastructure: causes of delays

Three key measurements capture the delays that most affect 5G performance:

  1. Jitter is the variation in the arrival time of packets across your network. It is often caused by network congestion and sometimes by automated route changes. The more packet arrival times vary, the more jitter degrades video and audio quality. Jitter is measured in milliseconds (ms); a delay of 30 ms or more can result in distortion and disruption to a call or video.
  2. Network latency is the time it takes for a data packet to travel from the sender to the receiver and back. 5G provides much lower latency than previous generations of technology. This enables low-latency applications such as AR/VR, but it is imperative to ensure these levels are met and maintained on a consistent basis.
  3. Packet loss occurs when transmitted packets fail to reach their intended destination, forcing a retransmission. This results in slow service and even total loss of network connectivity. Video is the application most likely to be disrupted by packet loss because it relies on real-time processing. Packet loss should not exceed 1%.
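To make the three definitions above concrete, here is a rough sketch of how they might be computed from per-packet records. The record format (sequence number, send/receive timestamps) and the function name are assumptions for illustration, not tied to any specific monitoring tool:

```python
def compute_metrics(packets, rtts_ms):
    """packets: list of (seq, send_ts_ms, recv_ts_ms) tuples.
    rtts_ms: list of measured round-trip times in ms.
    Returns (mean_jitter_ms, mean_latency_ms, loss_pct)."""
    # One-way transit time for each packet that arrived.
    transits = [recv - send for _, send, recv in packets]
    # Jitter: mean absolute difference between consecutive transit times.
    diffs = [abs(b - a) for a, b in zip(transits, transits[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    # Latency: mean round-trip time (sender to receiver and back).
    latency = sum(rtts_ms) / len(rtts_ms)
    # Loss: sequence numbers missing from the observed range.
    seqs = {s for s, _, _ in packets}
    expected = max(seqs) - min(seqs) + 1
    loss_pct = 100.0 * (expected - len(seqs)) / expected
    return jitter, latency, loss_pct
```

A threshold check against these values (jitter above 30 ms, loss above 1%) would then drive the alerting described earlier.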

What is the cause?

All these issues can create havoc in an operation. The culprit can be difficult to find because it can be transitory: an automated routing change, a hardware failure, bandwidth limits, distance, or congestion. In fact, 80% of a technician's time is spent on discovery and only 20% on remediation.

How can I improve the discovery time and MTTR?

Network flow, or traffic, is the amount of data transmitted across a network over a specific period. Monitoring network flows is key to understanding your network's typical behavior and performance for each user/device, using real session data rather than hypothetical synthetic data. DART generates Intellidata to represent the traffic on the network for each session. Intellidata exceeds traditional NetFlow generation and analysis by evaluating every packet in a flow, rather than the sampled data typically used by traditional NetFlow generators. This makes it possible to provide detailed metrics for each flow, including latency, jitter, and error conditions encountered along the flow path for each user.
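The per-packet (rather than sampled) flow accounting described above can be sketched as grouping every observed packet by its 5-tuple and accumulating counters per flow. This is a generic illustration under assumed field names, not the DART or Intellidata implementation:

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group every packet by 5-tuple and accumulate per-flow metrics.
    packets: dicts with src, dst, sport, dport, proto, ts_ms, size, error."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "errors": 0,
                                 "first_ts": None, "last_ts": None})
    for p in packets:
        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        f = flows[key]
        f["packets"] += 1
        f["bytes"] += p["size"]
        f["errors"] += int(p.get("error", False))
        if f["first_ts"] is None:
            f["first_ts"] = p["ts_ms"]
        f["last_ts"] = p["ts_ms"]
    # Session duration falls out of first/last timestamps per flow.
    for f in flows.values():
        f["duration_ms"] = f["last_ts"] - f["first_ts"]
    return dict(flows)
```

Because every packet contributes to the counters, error and timing metrics are exact per session, which is the advantage over sampled NetFlow that the paragraph above describes.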

If you are deploying edge computing servers in your network to eliminate distance and reduce bandwidth requirements, then strategically deploy this type of full-visibility sensor as well, because you can't fix what you can't see. 100% visibility into the network edge is critical to your network performance.


5G network edge computing platforms will handle 75% of network data by 2025, so now is the time to review your network visibility software. Make sure it can monitor 100% of packets at the edge, so you know what each user/device is experiencing and have a roadmap to any issues encountered in a flow.
