Using big data to ensure application and network performance

Let’s take a hypothetical example: your CRM application’s performance is lagging. End-users and call center agents are complaining, customers endure longer waits as agents struggle to respond to their queries, the contact center call queue keeps growing, and both your reputation and employee productivity take a hit.

While the application and the impact may be different in your environment, this is a familiar example to most organizations and one most of us fear because of what happens next.

The way this usually plays out is that one or more support tickets get raised, scrambling the support team, and the network and application teams end up pointing fingers at each other, each claiming the problem is not in their domain based on what they can observe within it.

Sound familiar?

Well, then you may be happy to hear that there is a solution, and it is not that complex: it simply applies a technology that has been with us for a while in a new and innovative manner.

But let’s take a step back and have a look at what we have here.

In our hypothetical example, the poor end-user experience and “slowness” may have one or more root causes that are typically not simple to isolate and pinpoint. They can range from poor design of (parts of) the database, to poorly implemented web or other software modules, to cloud, application server, and storage performance problems, right through to LAN and WAN networking issues and more. Or, even more likely, a combination of the above.

Add to this the increasing complexity of our IT environments, with multiple locations, SaaS services, cloud services, and agile development, and you have a complex puzzle with many moving parts.

The fundamental problem in identifying root causes, however, is that the different teams evaluating the incident rarely possess a common dataset to analyze. Each team has limited data and is often forced to rely on indicative rather than authoritative information (e.g. log files, system performance counters) instead of real evidence of what went over the wire.

While logs and system performance data are essential and may well help pinpoint a problem, they often provide an indication at best and can even produce false positives.

That’s partly why modern application and network performance management suites collect a range of data from the network using NetFlow and other technologies, as well as ingesting log files and configurations, to piece together a picture of what may be causing the degraded performance.

Advanced network and performance management suites, such as Maltem’s Insight Performance suite, also deploy agents to generate synthetic data streams and even apply a level of machine learning to predict possible future issues.

Using such tools is a big step up and provides a far more effective means to measure performance and pinpoint root causes of specific issues.

However, these tools still rely on metadata and in many cases do not have access to the one authoritative data set: the actual network data that went over the wire. Only this source allows network and application engineers to fully understand what is happening.

Used correctly, this data is extremely useful: it contains all traffic, decoded and timestamped, making it possible to determine whether the root cause of poor application performance lies in the network, in the application and servers, or in both.
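As a sketch of how that determination can work: the TCP handshake (SYN to SYN/ACK) involves no application code, so its duration measures pure network round-trip time, while the gap between a request and the first response byte adds server processing on top. The snippet below illustrates the idea on a hypothetical, hand-built capture of one transaction; the `Packet` record and the timings are illustrative assumptions, not output of any real capture tool.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    ts: float      # capture timestamp in seconds
    kind: str      # "SYN", "SYNACK", "REQ", or "RESP"

def latency_breakdown(packets):
    """Split the observed delay into network round trip vs. server time.

    SYN -> SYN/ACK measures pure network RTT (no application involved);
    request -> first response byte includes server processing on top of
    roughly one more round trip, which we subtract as an approximation.
    """
    ts = {p.kind: p.ts for p in packets}
    network_rtt = ts["SYNACK"] - ts["SYN"]
    total_wait = ts["RESP"] - ts["REQ"]
    server_time = total_wait - network_rtt
    return network_rtt, server_time

# Hypothetical single transaction pulled from a packet capture
capture = [
    Packet(0.000, "SYN"),
    Packet(0.040, "SYNACK"),   # 40 ms network round trip
    Packet(0.045, "REQ"),
    Packet(1.285, "RESP"),     # 1240 ms until first response byte
]
rtt, server = latency_breakdown(capture)
print(f"network RTT: {rtt*1000:.0f} ms, server time: {server*1000:.0f} ms")
```

Here the network contributes only 40 ms while the server accounts for roughly 1.2 s, so the evidence points squarely at the application tier rather than the network, ending the finger-pointing.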

This is where full packet capture appliances come in.

They allow months of network data to be stored and searched if required, while integrating with a range of tools that network management, application management, and even cybersecurity teams use daily.

Whether it is a tool such as the Maltem Insight Performance suite, Splunk or Dynatrace, or even cybersecurity tools such as SIM/SIEM platforms and forensic investigation tools, all can share and benefit from a single source of “truth”: a full capture of what happened on the network.

Only with this data at hand can teams start to collaborate effectively, analyzing a single authoritative data source and matching it against other data available.

And the benefits don’t end there: the same “big data” can also be mined for effective capacity planning and a range of other analytics tasks. Implementation is easy; simply hook the appliance into a network tap or SPAN port and off you go.
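The capacity-planning angle can be sketched in a few lines: bucket the captured frames per hour and convert each bucket into an average link utilization. The records below are made-up `(timestamp, frame length)` pairs standing in for what an appliance would export; the function name and data are illustrative assumptions.

```python
from collections import defaultdict

def hourly_throughput_mbps(records):
    """Bucket captured (timestamp, byte_count) records per hour and
    return the average Mbit/s for each hour seen in the capture."""
    buckets = defaultdict(int)
    for ts, nbytes in records:
        buckets[int(ts // 3600)] += nbytes
    # bytes -> bits, averaged over the 3600 s bucket, scaled to Mbit/s
    return {hour: (total * 8) / 3600 / 1e6 for hour, total in buckets.items()}

# Hypothetical records: (epoch seconds, frame length in bytes)
records = [(10, 450_000_000), (3700, 900_000_000), (3800, 900_000_000)]
print(hourly_throughput_mbps(records))
# hour 0 averages 1 Mbit/s, hour 1 averages 4 Mbit/s
```

Run over months of stored traffic, the same aggregation reveals peak hours and growth trends, which is exactly the evidence needed to size links and servers before users notice a problem.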

By effectively applying readily available technology and giving all teams (network management, application management, and even cybersecurity) the same authoritative data source, one that often integrates seamlessly with their existing tools, organizations can resolve issues more effectively.

But more importantly, they can put measures in place to prevent the issue from recurring, because the root cause can be fully understood and hence addressed.

Isn’t it nice when an old technology is reborn to help us with current challenges?
