Tips on Selecting the Most Effective Alert Thresholds


One of the most common features Web application monitoring solutions promote is their ability to alert administrators to specific errors and issues. This capability is built on alert thresholds, and for many application managers it's the lifeblood of their entire operation. Because modern Web applications are continuously growing in complexity and purpose, the need for dynamic, highly advanced monitoring techniques grows with them. To meet this need and provide the level of monitoring most modern enterprises require, the best-equipped monitoring solutions offer customizable alert thresholds.

While this is an essential feature to look for in a monitoring solution, you won't achieve the level of monitoring your application requires unless you use it properly. Every monitoring solution differs in how its alert thresholds are designed and customized; however, several universal tips apply to all monitoring tools regardless of their size and function.

Tip #1 Be Specific With Metric Alerts

Typically, a monitoring solution ships with pre-set alert thresholds for a generic list of metrics. While this is a reasonable starting point, it is rarely a good fit for your particular Web application. It's therefore important to draw up your own list of system and application metrics for which thresholds should be set. For example, set performance thresholds for CPU performance, disk usage, network health, Web server health and database transactions, as in the sketch below.
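To make this concrete, here is a minimal sketch of what such a metric-specific threshold list could look like in Python. The metric names and limits are hypothetical illustrations, not values from any particular monitoring product, and should be replaced with figures appropriate to your own infrastructure.

```python
# Hypothetical threshold list: each entry names a system or application
# metric and the warning/critical limits chosen for it.
ALERT_THRESHOLDS = {
    "cpu_percent":          {"warning": 75.0,  "critical": 90.0},    # % utilization
    "disk_used_percent":    {"warning": 80.0,  "critical": 95.0},    # % of capacity
    "network_error_rate":   {"warning": 0.01,  "critical": 0.05},    # errors per packet
    "web_response_time_ms": {"warning": 500.0, "critical": 2000.0},  # page response time
    "db_transaction_ms":    {"warning": 200.0, "critical": 1000.0},  # query latency
}

def evaluate(metric: str, value: float) -> str:
    """Return 'ok', 'warning' or 'critical' for a sampled metric value."""
    limits = ALERT_THRESHOLDS[metric]
    if value >= limits["critical"]:
        return "critical"
    if value >= limits["warning"]:
        return "warning"
    return "ok"

# Example: evaluate("cpu_percent", 82.0) returns "warning".
```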

Tip #2 Set Threshold Values Based Upon Historical Trends

While this may not be applicable for new Web applications with no history of use and performance, teams implementing monitoring on established applications should reference the baseline performance of the aforementioned (and other) application metrics. As a general rule of thumb, the threshold value for a given metric should be based upon the established performance baseline of your unique infrastructure. If no baseline exists yet, don't be afraid to start from recommended values for specific metrics; this is especially important for those setting up threshold monitoring on new application deployments. One common way to turn historical data into a threshold is sketched below.
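A straightforward approach is to anchor the threshold on a high percentile of the historical baseline and add some headroom. The sketch below assumes you already have a list of historical measurements for a metric; the percentile and headroom values are illustrative choices, not fixed recommendations.

```python
def threshold_from_history(samples, percentile=95, headroom=1.2):
    """Derive a warning threshold from a metric's historical baseline.

    samples    -- historical measurements (e.g. response times in ms)
    percentile -- point in the baseline to anchor on (illustrative choice)
    headroom   -- multiplier that leaves room for normal variation
    """
    if not samples:
        raise ValueError("no history yet; start from recommended default values")
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
    return ordered[index] * headroom

# Example: a week of response-time samples (ms).
history = [220, 240, 310, 280, 260, 450, 230, 300]
print(threshold_from_history(history))  # ~540 ms with these samples
```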

Tip #3 Utilize Synthetic Monitoring to Establish Updated Thresholds

As your Web applications grow and become more complex, synthetic monitoring becomes essential for establishing new baselines. While you may have historical data for previously released versions of the application, a newer version may show different performance trends and different utilization rates. Until the new version has been used by a significant number of users, exact threshold values are unknown; this is where synthetic monitoring comes into play. With synthetic monitoring, the monitoring tool interacts with the application the way a real user would, which provides basic-level performance and operation data. Use this data to establish first-run baselines, as in the sketch below.
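A synthetic check can be as simple as a script that periodically requests a key page and records how the new version responds. The sketch below uses only Python's standard library; the URL is a placeholder, and real synthetic monitoring tools script far richer user journeys than a single page fetch.

```python
import time
import urllib.request

def synthetic_check(url: str, timeout: float = 10.0) -> dict:
    """Fetch a page the way a scripted 'user' would and record basic timings."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        body = response.read()
    elapsed_ms = (time.monotonic() - start) * 1000
    return {
        "status": response.status,
        "elapsed_ms": round(elapsed_ms, 1),
        "bytes": len(body),
    }

# Run this on a schedule against the new version and treat the collected
# samples as the first-run baseline for its alert thresholds.
# print(synthetic_check("https://example.com/login"))  # placeholder URL
```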

Web Application Monitoring Essentials – System Metrics


When it comes to monitoring the diverse and complex environments found within modern Web applications, there are several metrics that must be watched to ensure full functionality at both the code and user-interface levels of the infrastructure. Even though the specific metrics you decide to monitor are dependent on application function and architecture, monitoring a core set of system metrics helps ensure the stability of the underlying hardware. The following system metrics should be continuously monitored to reduce application errors and subpar performance.

Disk Usage Rates

When it comes to the various reasons an application under-performs, one of the most common hardware causes is a lack of available disk space. Much of the speed attributed to a Web application comes down to how the disk performs: as a general rule of thumb, the faster a hard disk can handle data, the faster your application will run. However, it's the amount of available disk space that sustains that performance; the more crowded a disk, the slower it performs. Not only does high disk usage hurt application performance, it can also create security risks as fragmented blocks of data end up scattered across the drive. A simple usage check is sketched below.
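A quick way to watch disk usage from a script is the standard library's shutil.disk_usage. The 80% figure below is an illustrative threshold; the right value should come from your own baseline.

```python
import shutil

def disk_usage_percent(path: str = "/") -> float:
    """Return the percentage of the disk holding `path` that is currently used."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

used = disk_usage_percent("/")
if used >= 80:  # illustrative threshold
    print(f"disk usage warning: {used:.1f}% of capacity in use")
```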

CPU Function

If you think of your hardware components as a human body, the CPU is its brain. The CPU, or Central Processing Unit, oversees the operation of every program, and because it performs the logical and arithmetic operations behind all of them, the functionality of an entire network ultimately rests on its performance. Factors such as application usage determine how hard this component works, so it's essential to monitor the CPU's health and activity. Even a free web application monitoring service can help you determine the precise needs of your server, allowing CPU resources and other allocations to be adjusted to accommodate peak usage periods. A brief sampling example follows.
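If the third-party psutil package is available, sampling CPU utilization takes only a few lines. The one-second window and 90% limit below are illustrative, not recommendations.

```python
import psutil  # third-party package, assumed to be installed

# Average CPU utilization over a one-second sampling window.
average = psutil.cpu_percent(interval=1)

# Per-core figures help spot a single saturated core hiding behind a healthy average.
per_core = psutil.cpu_percent(interval=1, percpu=True)

if average >= 90:  # illustrative limit; compare against your own baseline
    print(f"CPU alert: {average}% average, per core: {per_core}")
```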

Physical Memory Usage

Also referred to as RAM, or Random Access Memory, this component acts as temporary storage. Frequently accessed data is held here, which speeds up programs because they are loaded from this point before being handled by the system CPU. To look at system loads and other on-page elements, you can use a program like Every-Step, or choose another of the available options if you need something more capable.

As a general rule of thumb, the more physical memory you have, the faster your entire network will perform. Therefore, the majority of development experts suggest adding as much physical memory as your system supports. This reduces swapping, which occurs when data is pushed out of full physical memory to a swap space on disk. Swapping typically results in severe performance issues, some of which may be critical if they occur during high-usage periods. A basic memory and swap check is sketched below.
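To keep an eye on physical memory and swap from a script, psutil again gives a direct reading. The limits shown are placeholders to be replaced with values derived from your own baseline.

```python
import psutil  # third-party package, assumed to be installed

memory = psutil.virtual_memory()  # physical RAM
swap = psutil.swap_memory()       # swap space on disk

if memory.percent >= 90:  # illustrative limit
    print(f"RAM warning: {memory.percent}% of physical memory in use")

# Heavy swap use is the symptom described above: RAM is full and data
# is being pushed out to much slower disk storage.
if swap.percent >= 25:    # illustrative limit
    print(f"swap warning: {swap.percent}% of swap space in use")
```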