Myth #1: A free monitoring option is the best choice

You know the old adage, “you get what you pay for”? Well, it’s true. While it may be tempting to set up a free service to monitor your websites and services, in the end you’re better off spending the small monthly investment on a paid service. Nearly all free services limit the number of websites you can monitor, or they only check for outages every hour. Often they only offer notifications through email, while paid services can natively use Twitter, SMS, HipChat and everything in between.

Myth #2: I need to be notified the SECOND something goes down!

When you’re responsible for keeping servers and websites up, you probably lean toward wanting to be notified whenever the tiniest blip happens. If you’re already operating this way, you know how taxing it is: emails at 3 a.m., texts while you’re sitting down for dinner, and most of the time the reported outage turns out to be nothing at all. When you’re vetting a monitoring service, make sure it has functionality in place to prevent false notifications. The best services double-check outages from multiple locations to ensure accuracy.
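That multi-location double-check boils down to a quorum rule: only raise an alert when enough independent vantage points agree the target is unreachable. Here is a minimal sketch in Python — the location names and threshold are illustrative, not any vendor’s actual logic:

```python
def confirm_outage(results, quorum=2):
    """Treat a target as down only if at least `quorum` independent
    monitoring locations report a failed check.

    `results` maps a location name to True (check passed) or
    False (check failed)."""
    failures = sum(1 for ok in results.values() if not ok)
    return failures >= quorum

# One location sees a blip while two others reach the site fine:
checks = {"chicago": False, "frankfurt": True, "sydney": True}
print(confirm_outage(checks))  # False -- likely a false positive, no alert

# Multiple locations agree the site is unreachable:
checks = {"chicago": False, "frankfurt": False, "sydney": True}
print(confirm_outage(checks))  # True -- raise the alert
```

Tuning the quorum trades sensitivity for noise: a higher threshold means fewer 3 a.m. false alarms, at the cost of slightly slower detection.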

Myth #3: If I monitor my website, I’m good to go!

When you first set up monitoring for your business, you naturally think of the main website first. It makes sense that the customer-facing website needs as little downtime as possible, especially if you’re running something like an e-commerce business that depends directly on website revenue. While this is important, if you stop there, you’re forgetting about other important monitoring needs.

For example, do you have a client-access FTP server? What happens when it goes down and the client experience is compromised? Do you run your own email servers without monitoring them for downtime? What happens when they go down for hours? Maybe everyone in your office accesses a network drive for backups and file sharing. What would happen if that went down and you didn’t know about it right away?
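Checks for services like these often reduce to a simple TCP connection test against the service’s port (21 for FTP, 25 for SMTP, 445 for an SMB network drive). A rough sketch in Python — the hostnames below are placeholders, not real servers:

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical internal services you might monitor alongside the website:
services = [
    ("ftp.example.com", 21),     # client-access FTP server
    ("mail.example.com", 25),    # SMTP email server
    ("files.example.com", 445),  # network drive (SMB)
]
for host, port in services:
    status = "up" if port_is_open(host, port) else "DOWN"
    print(f"{host}:{port} is {status}")
```

A real monitoring service adds protocol-aware checks on top of this (e.g. an FTP banner or SMTP handshake), but even a bare port check catches the “server is off” class of outage.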

Monitoring beyond the website helps ensure that your business runs as usual even when the website itself is working perfectly.

Myth #4: I can just do it myself with open source!

This myth is similar to #1, but oftentimes it leads to even less satisfactory outcomes. Don’t get us wrong: open source software is amazing. It becomes a problem, though, when you use it to solve a problem that has already been solved in the name of “saving money.” Think about how many times you have approached a problem and attempted to “homebrew” a solution. Have you ever kept track of the upfront cost of setting it up? What about the ongoing costs when problems arise? No support forum can replace the dedicated team of developers behind a reliable monitoring product.

We hope that this article has helped you dispel some of the myths surrounding website monitoring and empowers you to make an informed choice! Know any other common myths around monitoring products or solutions? Share them in the comments so we can dispel them in future posts.

Visitors, conversion rates and security are critical, but if your site is down or slow, they become insignificant. Site availability and visitor experience are the sine qua non of e-commerce. We pulled from industry research and our own experience to share three reasons every e-commerce manager needs to monitor uptime and performance on their sites.

  1. Downtime can do serious damage to your brand.

In the world of retail, your brand is your promise. It’s how your customers know you. The same is true in e-commerce: if your brand does business online, every facet of your customers’ interaction needs to be top-notch. Making sure your website is up to snuff and mitigating any downtime is pivotal to keeping your brand consistent. Don’t just take our word for it: KISSmetrics published a study showing that 44% of online shoppers who have a bad experience on a website will tell others about it. Don’t keep spending to acquire new customers until you’re sure they’re having a great experience when they come to shop with you online.

  2. Customers don’t come back to sites that are slow or are down.

Tick, tock. Did you know that after three seconds of waiting time, 40% of visitors will leave your website? It makes sense that people don’t like to wait around online. In the age of Netflix, Amazon and Google, consumers expect on-demand services and results. Not only does waiting time make visitors leave, it can also prevent them from coming back. Research from KISSmetrics has shown that “79% of shoppers who are dissatisfied with website performance are less likely to buy from the same site again.” For e-commerce, lifetime customer value and brand loyalty are not things you can afford to ignore. Monitoring helps you keep your customers happy and coming back.

  3. You can actually turn customers over to your competitors.

Market share is all about getting a leg up on your competitors but what happens when all of your effort to get people to your site ends up helping the competition more than you? This isn’t just a marketer’s nightmare, it’s actually happening! If your website is frustrating your visitors, 72% of them will try a competitor instead! If your site is lagging or it’s down, all of your marketing efforts are actually helping the other guys.

These are just three great reasons why you need to monitor the uptime and performance of your sites. A monitoring solution that will let you know the second your site goes down, monitor overall health and performance, and provide actionable insights can be incredibly valuable for e-commerce managers.

For a high-level overview of uptime performance for some of the largest e-commerce brands, check out our Holiday Availability Index.


We are excited to announce Panopta now integrates with the application status page tool StatusCast. This integration will allow customers of both companies to easily:

  • Automatically push alerts from Panopta’s notification engine to your StatusCast page*
  • Provide end-users with subscription options for alerts and updates
  • Track and show the history of uptime for your application
  • Communicate service disruptions in a customer-friendly manner

*Important note: these alerts can be delayed and/or filtered so that only certain types of alerts are displayed in real-time.

Powerful, Real-Time Data for End-users

The same kind of information that your DevOps/IT team reviews in Panopta can be automatically pushed to your hosted status page as a real-time application availability and performance report visible to end-users. You can also configure rules-based automation and privacy settings so that the end-user sees the data they need to see and nothing more.

Talk to Your End-Users Where They’re Listening

You can now let your end-users know about application availability via their preferred means of communication whether it be Twitter, SMS, email, etc. They can sign up for simple subscriptions and stay up-to-date on both planned outages (e.g. maintenance) and unforeseen issues.

Prove Your SLA

StatusCast provides a simple view of the various system components you’re using Panopta to monitor, making it easy for end-users to see and understand what service disruptions, if any, have occurred over a period of time. You can use this feature to show how you’ve exceeded the requirements of your SLA and share the actual data with end-users at any time from your hosted status page.

Automatic and Understandable Customer Communication

The data your DevOps/IT team is getting from Panopta won’t make much sense to your end-users. StatusCast makes it easy to translate that data into terms that end-users, customer support and executives can all understand. Keeping your end-users in the loop on service disruptions is an easy way to regain some of the goodwill lost from the disruption itself. Moreover, the communication can be automated, so your DevOps/IT teams don’t have to take time or attention away from resolving the issue to keep your users informed as issues arise, progress is made, and full service is restored.

We supported the development of this integration because we believe it helps extend the value of Panopta for our customers by helping you better serve your end-users. Please let us know what you think in the comments below!

We are excited to announce that we are expanding our infrastructure. On March 16th, we will add 8 new monitoring nodes around the world. As our infrastructure expands, so too do the flexibility and accuracy of our monitoring.

We are very proud to add two monitoring nodes in Australia, one in Moscow, two in Frankfurt, Germany, one in Los Angeles, California, one in Milan, Italy, and one in Tel Aviv, Israel.

  • Sydney 3 –
  • Melbourne 2 –
  • Moscow –
  • Frankfurt 1 –
  • Frankfurt 2 –
  • Los Angeles –
  • Milan 2 –
  • Tel Aviv –

There are also three IP address changes, affecting Chicago 2, Jakarta and Hong Kong.

The monitoring node locations that have changed IP addresses are listed below:

  • Chicago 2 –  Previously:  – Will become:
  • Jakarta – Previously: – Will become:
  • Hong Kong – Previously: – Will become:

In addition to the two new nodes in Frankfurt, we are shutting down the old ones in Berlin and Nuremberg on the same day the new ones come online. Their IPs were:

  • Berlin –
  • Nuremberg –

Any of our customers that have firewall restrictions for our monitoring nodes should update their systems to account for these new IP addresses.
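For example, on a Linux host that whitelists monitoring traffic with iptables, the update might look like the following. The addresses shown are placeholders from the documentation range 203.0.113.0/24, not our actual node IPs — substitute the real ones:

```shell
# Remove the rule for a retired node (placeholder address)
iptables -D INPUT -s 203.0.113.10 -p tcp --dport 80 -j ACCEPT

# Allow each new monitoring node to reach the monitored service
# (placeholder addresses -- substitute the real node IPs)
iptables -A INPUT -s 203.0.113.20 -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -s 203.0.113.21 -p tcp --dport 80 -j ACCEPT
```

Remember to persist the rules (e.g. with your distribution’s iptables-save mechanism) so they survive a reboot.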

We will continue to add monitoring nodes periodically. If there is anywhere in the world that you would like us to expand to, please let us know in the comments below. After all, we love to improve our service.

Welcome back to our latest update on the internet’s top retailers. If you are new to our blog, we have been publishing our analysis of passing and failing uptime for e-commerce webpages. Last time, we found that over the course of the year, 26% of the internet’s top 132 retailers suffered less than 99.9% uptime. This holiday season, only 19 websites had less than 99.9% uptime. Overall, this is a positive sign for retailers across the country and their internet infrastructure, and it comes on the heels of our earlier report of fewer retailers having website downtime issues over the course of 2013. Of the 132 retailers we were tracking this season on our holiday page, only 14% had substantial downtime, far better than the 35% we have seen this year. With a total of 618 outages and 102 hours of downtime across those retailers, performance has improved a great deal compared to 1,006 outages and 155 hours of downtime in 2012. This upbeat news, we speculate, can be attributed to a few changes to internet retailers’ webpage infrastructure:

  1. For retailers who maintain their own web servers and web services, the cost of better server hardware and software has fallen from highs earlier in the decade, making it more feasible to do more with less. The growth of virtual servers has also expanded retailers’ ability to scale up.
  2. For retailers who have put their sites in the hands of cloud providers, there are now more and more providers offering a wide assortment of options to choose from. This also has the effect of making pricing more competitive for brick-and-mortar retailers who want a smaller relationship with their web presence. These expanded options make it simple for retailers to shop around for upgrades to their online infrastructure.
  3. More passively, the quality of internet resources has improved greatly since the beginning of the decade, making it easier for retailers to create, edit, and control their online web presence.

Despite this, the 19 retailers with less than 99.9% uptime all had persistent problems that put them on underperforming holiday season lists for either 2011 or 2012. The list includes: CDW, J&R, Victoria’s Secret, Shutterfly, Office Depot, Pixmania, Gamefly, Cabela’s, Sears, Backcountry, Blockbuster, Dell, Harry and David, Lululemon, Guess, Tiger Direct, Sony, Urban Outfitters, and Banana Republic. For these retailers, performance was below standard, but there is a silver lining: improvements are simple and easy to start as a New Year’s resolution. Good early steps include beefing up current online infrastructure with additional servers or virtual machines, adding network service monitoring and possibly agent-side monitoring, and developing a team system for dealing with outages.

We are excited to announce a new release full of improvements to our system. This release includes:

  • Updated Reporting Engine
  • Improvements to Compound Services
  • Template Support for Agent Manifest Files
  • Additional CPU Metric for the Linux Agent
  • Custom Agent Metrics

Reporting Engine

Our outage reports now include server resource data (collected by the Agent) and the reasons we capture for outages (connection refused, 404, 500, etc.). We now also compress our outage reports into a zip file for faster downloads.

CPU Resources

Our Linux monitoring agent now has an additional CPU check: you can monitor CPU usage as a percentage for each of the cores on your server. You can learn more about this plugin here.
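For context, per-core CPU utilization on Linux is conventionally derived from two samples of `/proc/stat`: the busy fraction is the change in non-idle time over the change in total time between samples. A standalone sketch of that calculation — this illustrates the metric, not the agent’s actual plugin code:

```python
def cpu_busy_fraction(sample1, sample2):
    """Compute a core's busy fraction from two `/proc/stat` lines
    taken some interval apart.

    Each line looks like: 'cpu0 4705 356 584 3699 23 0 12 0 0 0',
    where the fields are jiffies spent in user, nice, system, idle,
    iowait, irq, softirq, steal, guest, guest_nice."""
    t1 = [int(x) for x in sample1.split()[1:]]
    t2 = [int(x) for x in sample2.split()[1:]]
    total = sum(t2) - sum(t1)
    # idle time = idle + iowait (fields 4 and 5)
    idle = (t2[3] + t2[4]) - (t1[3] + t1[4])
    return (total - idle) / total

# Synthetic samples one interval apart:
before = "cpu0 100 0 50 850 0 0 0 0 0 0"
after  = "cpu0 160 0 70 870 0 0 0 0 0 0"
print(f"cpu0: {cpu_busy_fraction(before, after):.0%} busy")  # 80% busy
```

On a live system you would read `/proc/stat` twice with a short sleep in between and apply the same arithmetic to each `cpuN` line.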

Compound Service

We have improved compound services by allowing you to construct them from individual network service and agent resource checks. If you would like to try out the new compound services, check out this article.

Manifest Files

We have improved our Agent manifest file installations by adding the ability to specify server templates within the manifest file. Now, when a server is created after installing the agent with the manifest file, any templates specified will be automatically applied. To learn more, see our updated Agent manifest file documentation.

Custom Agent Metrics

You are now able to create custom metrics to report and monitor via our monitoring agent with a simple command line call. This can be seen as a simpler alternative to writing a custom plugin and allows you to integrate metric collection into your own system. To learn more about this new feature you can see this article.

In the past couple months, we have done a lot to improve our service and we are proud to announce our newest release. There are a ton of new features and improvements that come with this release and we hope you are as excited as we are. This release includes:

  • A new version of our monitoring Agent
  • New integrations
  • More robust server templates
  • The release of Panopta OnSight
  • A new support page


Agent 2.0

We have new versions of both the Linux and Windows Agents. We have streamlined installation of the Agent using Debian and Red Hat repositories. The new Agent now supports manifest files that allow for automatic configuration of server resource monitoring upon installation. In addition to all of this, you now have the ability to create as many alert thresholds as you would like (including none, if you just want data).

We’re continuing to extend the agent and will have a number of new plugins coming soon. If you would like to try out the new Agent see this article for instructions.


Server Templates

By popular demand, we have made our server templates more robust and customizable. Server templates now give you complete control over the monitoring locations to use for each check, including the ability to set up multiple checks from different locations. You can now add as many default templates to a server group as you wish, allowing these templates to stack on top of each other. Along with the new Agent manifest file and provisioning API, this gives you a number of ways to streamline configuration and enable very powerful automated deployment scenarios.

Check out our updated documentation to learn more about the template functionality, and contact our support team if you’d like to discuss custom deployment options for your infrastructure.


Panopta OnSight

The latest version of Panopta OnSight (formerly the “Monitoring Appliance”) is finished! With Panopta OnSight, you can securely monitor your servers with network service checks (HTTP, Ping, FTP, etc.) and agent monitoring from behind a firewall. The new OnSight is easier to use and install, and supports VMware, XenServer, Hyper-V and VirtualBox environments. If you would like to try it out today, check out our documentation here.


New Support Documentation

We have completely overhauled our support documentation. It is better looking, easier to navigate and more detailed than the last incarnation. There is a lot of new content on the site as well, including an introductory getting-started guide and a glossary.


Monitoring Network Expansion

We have also added a number of new monitoring nodes around the world. We are proud to expand our infrastructure to India and South America for the first time. All of our new monitoring nodes are listed below. You can see our full monitoring network here.

  • Sydney, Australia 2:
  • Adelaide, Australia:
  • Brisbane, Australia:
  • Beijing, China:
  • Santiago, Chile:
  • Sao Paulo, Brazil:
  • Istanbul, Turkey:
  • Chennai, India:
  • Seoul, South Korea:



In addition to these larger improvements we have a number of smaller upgrades.

  • We now integrate with OpsGenie, which allows you to merge notifications from Panopta with other monitoring and alerting systems and manage all of your incidents across a range of platforms and devices. If you would like to send your outage alerts to OpsGenie, you can learn how here.
  • We have also given you the ability to silence further alerts for all of your current outages – great for times when you’re firefighting and want some quiet so you can think clearly.


We are excited to announce that we are expanding our infrastructure. In one week, on May 28, we will add 7 new monitoring nodes around the world. As our infrastructure expands, so too do the flexibility and accuracy of our monitoring.

We are very proud to add two monitoring nodes in South America, as well as one in India. This is our first time expanding to either area, and we are looking forward to providing reliable server monitoring to the expanding economies in India and South America.

We have also added three monitoring nodes in Australia. Bandwidth leaving Australia has always been very expensive, and by expanding our infrastructure there, we can ensure affordable monitoring for our clients on the other side of the world.


The new monitoring node locations and IP addresses are listed below:

  • Sydney, Australia 2:
  • Adelaide, Australia:
  • Brisbane, Australia:
  • Beijing, China:
  • Santiago, Chile:
  • Sao Paulo, Brazil:
  • Istanbul, Turkey:
  • Chennai, India:
  • Seoul, South Korea:

Any of our customers that have firewall restrictions for our monitoring nodes should update their systems to account for these new IP addresses.

We will continue to add monitoring nodes periodically. If there is anywhere in the world that you would like us to expand to, please let us know in the comments below. After all, we love to improve our service.

With this winter’s historic snow still piling up here in Chicago, our developers
have been stuck indoors delivering lots of new functionality for all of our
customers. We’ve got a wide range of improvements this time, hitting most of
our major systems so there should be something for everyone in this release. It
looks like spring is still several months away (at least), so there will
definitely be more to come – keep an eye on our blog for details of
what’s coming up.

American politics is always a hectic affair, and the rollout of Healthcare.gov for Americans everywhere has been a bumpy path. In response, we would like to release some facts about the response time and availability of the website for bloggers and journalists to use as a resource in their own coverage. Using our own Panopta server monitoring system, we set up network checks on the Affordable Care Act’s Healthcare.gov website and found it was available for use by the American public only 86% of the time during the month of November!

That 86% availability is, by the standards of any online industry, abysmal. Now, it is understood that the rollout of Healthcare.gov was “fumbled,” but how and where was it fumbled? We checked the servers every minute, testing different aspects of the public-facing infrastructure, including authoritative DNS, HTTP availability and content checks.
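An HTTP availability check with a content assertion can be approximated in a few lines: fetch the page, verify the status code, and confirm an expected string appears in the body. The sketch below uses Python’s standard library; the URL and expected text are illustrative, not the checks we actually configured:

```python
from urllib.request import urlopen
from urllib.error import URLError

def check_page(url, expected_text, timeout=10):
    """Return 'up' if the page responds with HTTP 200 and contains
    the expected text; otherwise return a short failure reason."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return f"down (HTTP {resp.status})"
            body = resp.read().decode("utf-8", errors="replace")
    except URLError as exc:
        return f"down ({exc.reason})"
    if expected_text not in body:
        return "down (content check failed)"
    return "up"

# Illustrative only -- a real check would run every minute,
# from multiple locations, against the monitored site:
print(check_page("https://example.com", "Example Domain"))
```

The content check matters because a server can happily return HTTP 200 for an error page; asserting on expected body text catches the “up but broken” failure mode that a bare status check misses.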