Website developers and admins in today’s ever-expanding web have a number of solutions for high availability, failover, and performance. One of those solutions is Anycast, a routing scheme you can use to deal with the challenges of serving a global audience. By using Anycast, you can ensure users are routed to the node closest to them. It also provides fault tolerance in the event that one of your POPs (points of presence) is unavailable.

[Diagram: Anycast routing in the context of a CDN]

In the diagram above, you can see how an Anycast scheme works in the context of a CDN (Content Delivery Network). In a CDN, website visitors in different parts of the world have their requests routed to the server nearest them, while the results are the same everywhere. This helps with page load time and also takes much of the burden off of your origin servers. Anycast is what enables routing each request to the nearest datacenter/network.

However, the benefits of Anycast add significant complications to monitoring. Which Anycast POP you are monitoring is determined by the location of your monitoring probes. Running a test from a single probe leaves you with a sizable blind spot: you won’t know about problems with the other Anycast POPs because you’ll always be routed to the same location. In addition, any outage confirmation performed by other nodes will likely test the wrong location, causing the outage to be dismissed as a false positive.

So how do you embrace Anycast in your architecture but still effectively monitor your resources?

Network Coverage
First, determine with your Anycast/CDN provider which POPs your sites are being served from. You will then need to work with your monitoring provider to understand which of their probes terminate at each of those locations. This is where using a monitoring provider with a wide network footprint and diverse carrier backbone really helps. Upstream providers/carriers matter here because which probes terminate at which Anycast POPs is determined by more than just geographic location; it’s determined by routing and the networks the requests travel through.
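As a rough illustration of mapping probes to POPs: CloudFlare responses include a CF-RAY header whose trailing code is the IATA airport code of the data center that served the request, so a probe can discover which POP it terminates at. This is a sketch, not Panopta’s implementation; the URL would be your own CloudFlare-fronted site, and other CDN providers expose similar debug headers.

```python
# Sketch: identify which CloudFlare POP a probe reaches by inspecting the
# CF-RAY response header (e.g. "7d0f3a2b4c5d6e7f-ORD" -> served from ORD).
import urllib.request

def pop_from_ray(ray_header):
    """Extract the data-center code from a CF-RAY value."""
    return ray_header.rsplit("-", 1)[-1] if "-" in ray_header else None

def serving_pop(url):
    """Fetch `url` and return the CloudFlare POP code that served it."""
    with urllib.request.urlopen(url) as resp:
        return pop_from_ray(resp.headers.get("CF-RAY", ""))
```

Running `serving_pop()` from each monitoring probe against your site would tell you which probes land on which POP.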

“Safe” Outage Confirmations
If your monitoring system attempts to verify outages from multiple locations (like Panopta does), the outage could get incorrectly ruled out. Panopta uses an outage voting process to verify the authenticity of an outage: if one of our nodes detects an outage on a server or website, 3-5 nearby nodes instantly attempt to confirm the outage from other locations. This rules out any local network or server issues with the primary node. If a majority vote is reached, the site or server is considered down and we begin the alerting process. If not, we check it again in 60 seconds per the normal schedule. With Anycast monitoring, there is the danger of a confirming node checking the wrong server because of the routing scheme. One way to mitigate this is to fine-tune which probes get used for confirmation; you can determine which probes are safe to use with your monitoring provider.
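The voting step described above can be sketched as a simple majority count. This is an illustration of the concept, not Panopta’s actual code: each confirmation node reports whether it could still reach the target, and the outage is confirmed only when a majority could not.

```python
# Minimal sketch of majority-vote outage confirmation. `confirmations` is a
# list of booleans from the 3-5 confirming nodes: True means that node could
# still reach the target (suggesting the primary node hit a local issue).
def confirm_outage(confirmations, quorum=None):
    """Return True when a majority of confirmation nodes also saw the outage."""
    down_votes = sum(1 for reachable in confirmations if not reachable)
    if quorum is None:
        quorum = len(confirmations) // 2 + 1  # simple majority
    return down_votes >= quorum
```

If the vote fails, the check simply runs again on its normal schedule rather than triggering an alert.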

We’ve pre-determined the mappings of our monitoring probes for common CDN providers like CloudFlare and MaxCDN. If you are using either of these providers (or any other provider), feel free to get in touch with our support team in order to determine how to best set up your monitoring.

About Panopta: Panopta provides advanced network and server monitoring for online businesses and service providers. We go beyond providing basic monitoring to give operations teams the tools they need to detect issues before they occur and minimize the impact of outages or slow load time. Contact us with any questions you may have, or sign up for the free trial and see for yourself!

Online sales during the 2015 holiday season are expected to skyrocket, with a 38% increase compared to 2014. Panopta has been monitoring the web’s top e-commerce sites since 2011, and we’ve captured an immense amount of interesting data. The infographic below takes the mountain of data on which we’re sitting and merges it with some industry benchmarks on how downtime and performance impact online sales.

Click on the image below to download the infographic:


Monitoring the top e-commerce websites is just one aspect of what our new Performance Index site is all about. If you’d like to read more about the Performance Index, read our last blog post!

For the last few years, we’ve been monitoring the top e-commerce websites and publishing their uptime during the holiday season in our Holiday Availability Index. As a consumer, it can be especially frustrating during this time of year when you’re trying to shop for last-minute gifts and a site is unresponsive or slow. Today, we’re excited to announce the next generation of that effort, bringing the same visibility to other segments of the Internet. We just launched the Panopta Performance Index, which focuses not only on uptime and availability but also on speed! After all, slow is the new down.

Although holiday online shopping is a critical time for site operators, there is so much more than just e-commerce on the web. For that reason, we’ve extended the Performance Index to include information about uptime and speed for the Fortune 100, the SaaS 100, and the top 50 media/news sites. You’ll find the most well-known companies across each of the four indexes: Target, Costco, Zappos, Salesforce, IBM, Oracle, and more. The Index site is a powerful, free tool that you can use to see how your most visited sites compare to their competitors.
Here’s a small preview of what you’ll see when you go there:

[Screenshot: Performance Index page with an outage highlighted]

Click on any of the tabs along the top to see what’s happening across the different indexes. If any of the sites are down, you’ll see them highlighted in red. In the table, we show each site’s performance index score, uptime, and how quickly the site loads. Click on any of the sites to see a detailed report on how it’s performing, including up to a year’s worth of performance trending.

[Screenshot: Performance Index detail page for Sears]

All of this data is captured by the core Panopta monitoring engine, which our customers have come to trust and rely upon since 2007. Keep up with how each of these sites is performing by clicking on the “Performance Index” link at the top of the page. Want to stay on top of real-time events? We’ve created Twitter accounts for each segment where index activity will stream as outages happen. We’ll inform you when a site goes down, when it comes back, and how long it was unavailable. Follow @PanoptaSaaS, @PanoptaMedia, @PanoptaEcom, and @PanoptaF100 for the latest updates on how each of these top sites is performing.

Here is an example from the Panopta Media feed, showing what you can expect to see on our Twitter timeline after ABC News’ website had a 3-minute outage.

[Screenshot: @PanoptaMedia tweets for the ABC News outage]

In addition to the real-time stream of events on the index-specific accounts, we’ll be summarizing how each index is performing and highlighting notable outages on the main @Panopta Twitter account.

We’re excited to launch the new Performance Index and bring this level of visibility to the Internet. If we’ve caught your interest and you’d like information on how the data is captured, feel free to email us. We’d love to hear feedback as well! Tweet us at @Panopta to let us know when the site comes in handy.

There’s a lot that sets Panopta apart. From our promise to eliminate false notifications to our ability to scale with your business, there are many features that push Panopta ahead of our competitors. One of the features that we’re especially proud of is our robust notification system. It’s one of the cornerstones of the Panopta experience and we want to use this article to take a deep dive into advanced notifications. Whether you’re evaluating Panopta or currently using it, this should help you learn how to get the most out of the advanced notification capabilities.

There are three main aspects of Panopta’s advanced notifications: alert types, integrations and advanced features.

The first aspect of the notification system is the plethora of alert types at your fingertips. Currently we support e-mail, SMS, Twitter direct messages, phone calls, mobile push notifications (with the Panopta app on iOS or Android), webhooks, and instant messenger services from Google Hangouts to AIM. What’s the best part of this list? It’s always getting bigger. We believe that you should be notified in whatever way works best for you so that you can tackle a problem quickly and get back to uptime. That brings us to the next important aspect of our notification system …

Integrations! The best modern software tools communicate seamlessly with other industry-leading tools and support teams working with solutions from Slack and HipChat to PagerDuty and VictorOps. Our approach to integrations is to make your experience as seamless as possible. If that means you want downtime notifications blasted to the #Houstonwehaveaproblem channel on your Slack team, or integrated right into a StatusCast page, we can make it happen right out of the box.
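Under the hood, pushing a downtime alert into a Slack channel is just an HTTP POST to an incoming webhook. The sketch below is illustrative only; the webhook URL, channel name, and hostname are placeholders, and Panopta’s built-in integration handles all of this for you.

```python
# Illustrative sketch of posting a downtime alert to a Slack incoming webhook.
import json
import urllib.request

def build_alert(channel, host, status):
    """Construct the JSON payload Slack's incoming webhooks expect."""
    return {"channel": channel, "text": "ALERT: {0} is {1}".format(host, status)}

def post_alert(webhook_url, payload):
    """POST the payload to the webhook URL as JSON."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Example (placeholder URL and names):
# post_alert("https://hooks.slack.com/services/...",
#            build_alert("#Houstonwehaveaproblem", "web-01", "down"))
```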

Powerful integrations and a smorgasbord of notification types are both awesome, but we wanted to take the Panopta experience to the next level and provide an unprecedented level of customization. Using our advanced features, the possibilities are limitless. Let’s do an example:

Martin, Alyssa, and Ahmed work as part of a DevOps team where Alyssa supervises Martin and Ahmed. They want the best for their customers and decide to implement Panopta as their monitoring system. They use Slack to stay on the same page during their dev sprints. The team’s new web app just went live, and they’re using Panopta to keep track of its uptime and performance. At 15:35 CST, the team gets a notification about a resource spike in their #fixnow channel. Alyssa assigns Ahmed to the fix and they all head home for the night.

The next morning at 1:45 CST, a server goes down that has a direct impact on their app’s performance. After double-checking the outage from global monitoring nodes, Panopta notifies Martin and Ahmed. Martin likes getting texts, so his alert comes through SMS; Ahmed is a Twitter junkie, so he gets a DM. Unfortunately, Martin forgot to plug in his phone and it ran out of battery, and Ahmed’s wife put his phone on silent because his ringtone kept going off. Normally, this would be a disaster: the server could stay down until the next morning, and the team would scramble to figure out what happened and how to get back up and running quickly, all while dealing with angry members of the sales team handling customer complaints.

Lucky for Alyssa, she had put a contingency plan in place. With Panopta, she was able to set up an escalation plan that notified her in progressively aggressive ways when her team didn’t take care of a problem. Alyssa got a simple e-mail when the problem happened, and because it wasn’t addressed, she received an SMS 15 minutes later. That didn’t wake her up, but she had a phone call scheduled for whenever a problem wasn’t solved within 25 minutes. At 2:10 CST, Alyssa woke up to the phone call, solved the problem, and went back to bed. She had a sit-down with Martin and Ahmed in the morning.
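Alyssa’s escalation timeline boils down to a small schedule of delays and channels. The sketch below mirrors the story above (email immediately, SMS at 15 minutes, phone call at 25 minutes); the channel names and delays are from the example, not a fixed Panopta configuration.

```python
# Escalation schedule from the example: (minutes since outage, channel).
ESCALATION = [(0, "email"), (15, "sms"), (25, "phone_call")]

def due_channels(minutes_since_outage, resolved=False):
    """Return the notification channels that should have fired by now."""
    if resolved:
        return []  # nothing more to send once the problem is fixed
    return [channel for delay, channel in ESCALATION
            if minutes_since_outage >= delay]
```

At 25 minutes in, all three channels have fired, which is exactly the phone call that woke Alyssa up.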

This made-up scenario gives a peek into the power of Panopta and its advanced features for notifications like escalation alerts. The possibilities are endless and they put you in the driver’s seat when it comes to managing uptime and monitoring. In a few weeks we’ll explore on-call schedules and more powerful team features.

Myth #1: A free monitoring option is the best choice

You know the old adage, “you get what you pay for”? Well, it’s true. While it may be tempting to set up a free service to monitor your websites and services, in the end you’re better off spending the small monthly investment on a paid service. Nearly all free services limit the number of websites you can monitor, or they only check for outages every hour. Often they only offer notifications through email, while paid services can natively use Twitter, SMS, HipChat, and everything in between.

Myth #2: I need to be notified the SECOND something goes down!

When you’re responsible for making sure that servers and websites aren’t having uptime issues, you probably lean toward wanting to be notified whenever the tiniest blip happens. If you’re already operating this way, you know that it’s taxing: emails at 3 a.m., texts while you’re sitting down for dinner, and most of the time the outage was reported even though nothing actually happened. When you’re vetting a monitoring service, make sure it has functionality in place to prevent false notifications. The best services double-check outages from multiple locations to ensure accuracy.

Myth #3: If I monitor my website, I’m good to go!

When you’re first setting up monitoring for your business, you always think of the main website first. It makes sense that the customer-facing website needs as little downtime as possible, especially if you’re running something like an e-commerce business that depends directly on revenue from the website. While this is important, if you stop there, you’re forgetting about other important monitoring needs.

For example, do you have a client-access FTP server? What happens when it goes down and the client experience is compromised? Do you run your own email servers without monitoring them for downtime? What happens when they go down for hours? Maybe everyone in your office accesses a network drive for backups and file sharing. What would happen if that went down and you didn’t know about it right away?

Monitoring beyond the website helps ensure that your business runs as usual even if your website is working perfectly.

Myth #4: I can just do it myself with open source!

This myth is similar to #1, but oftentimes it leads to even less satisfactory outcomes. First off, don’t get us wrong: open source software is amazing. However, it can become a problem when you use it to solve a problem that has already been solved, all in the name of “saving money.” Think about how many times you have approached a problem and attempted to “homebrew” a solution. Have you ever kept track of the upfront cost of setting it up? What about the ongoing costs when problems arise? No support team or forum can replace the dedicated team of developers behind a reliable monitoring product.

We hope that this article has helped dispel some of the myths surrounding website monitoring and empowers you to make an informed choice! Know any other common myths about monitoring products or solutions? Share them in the comments so we can dispel them in future posts.

Visitors, conversion rates, and security are critical, but if your site is slow or down, they become insignificant. Site availability and visitor experience are the sine qua non of e-commerce. We compiled industry research and some of our own experience to come up with three big reasons every e-commerce manager needs to monitor uptime and performance on their sites.

  1. Downtime can do serious damage to your brand.

In the world of retail, your brand is your promise. It’s how your customers know you. The same is true in e-commerce: if your brand does business online, every facet of your customers’ interaction needs to be top-notch. Making sure your website is up to snuff and mitigating any downtime is pivotal to keeping your brand consistent. Don’t just take our word for it: KISSmetrics published a study showing that 44% of online shoppers who have a bad experience on a website will tell others about it. Don’t keep spending money on acquiring new customers until you make sure that they’re having a great experience when they come to shop with you online.

  2. Customers don’t come back to sites that are slow or are down.

Tick, tock. Did you know that after three seconds of waiting time, 40% of visitors will leave your website? It makes sense that people don’t like to wait around online. In the age of Netflix, Amazon, and Google, consumers expect on-demand services and results. Not only does waiting time make visitors leave, it can also prevent them from coming back. Research from KISSmetrics has shown that “79% of shoppers who are dissatisfied with website performance are less likely to buy from the same site again.” For e-commerce, lifetime customer value and brand loyalty is not something you can afford to ignore. Monitoring helps you keep your customers happy and coming back.

  3. You can actually turn customers over to your competitors.

Market share is all about getting a leg up on your competitors, but what happens when all of your effort to get people to your site ends up helping the competition more than you? This isn’t just a marketer’s nightmare; it’s actually happening! If your website frustrates your visitors, 72% of them will try a competitor instead. If your site is lagging or down, all of your marketing efforts are actually helping the other guys.

These are just three great reasons to monitor the uptime and performance of your sites. A monitoring solution that lets you know the second your site goes down, monitors overall health and performance, and provides actionable insights can be incredibly valuable for e-commerce managers.

For a high-level overview of uptime performance for some of the largest e-commerce brands, check out our Performance Index.


We are excited to announce that Panopta now integrates with StatusCast, an application status page tool. This integration allows customers of both companies to easily:

  • Automatically push alerts from Panopta’s notification engine to your StatusCast page*
  • Provide end-users with subscription options for alerts and updates
  • Track and show the history of uptime for your application
  • Communicate service disruptions in a customer-friendly manner

*Important note: these alerts can be delayed and/or filtered so that only certain types of alerts are displayed in real time.

Powerful, Real-Time Data for End-users

The same kind of information that your DevOps/IT team reviews in Panopta can be automatically pushed to your hosted status page as a real-time application availability and performance report visible to end-users. You can also configure rules-based automation and privacy settings so that the end-user sees the data they need to see and nothing more.

Talk to Your End-Users Where They’re Listening

You can now let your end-users know about application availability via their preferred means of communication whether it be Twitter, SMS, email, etc. They can sign up for simple subscriptions and stay up-to-date on both planned outages (e.g. maintenance) and unforeseen issues.

Prove Your SLA

StatusCast provides a simple view of the various system components you’re using Panopta to monitor, making it easy for end-users to see and understand what service disruptions, if any, have occurred over a period of time. You can use this feature to show how you’ve exceeded the requirements of your SLA and share the actual data with end-users at any time from your hosted status page.
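The arithmetic behind an SLA uptime report is straightforward: availability is the fraction of the period the service was up. As a quick illustration (the numbers are examples, not any particular SLA), roughly 21.6 minutes of downtime in a 30-day month works out to 99.95% uptime.

```python
# Uptime as a percentage of a reporting period, given total downtime.
def uptime_percent(period_hours, downtime_hours):
    return 100.0 * (period_hours - downtime_hours) / period_hours

# e.g. a 720-hour (30-day) month with 0.36 hours (21.6 minutes) of downtime
# comes out to about 99.95% availability.
```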

Automatic and Understandable Customer Communication

The data your DevOps/IT team gets from Panopta won’t make much sense to your end-users. StatusCast makes it easy to translate that data into terms that end-users, customer support, and executives can all understand. Keeping your end-users in the loop on service disruptions is an easy way to regain some of the goodwill lost from the disruption itself. Moreover, the communication can be automated, so your DevOps/IT team doesn’t have to take time or attention away from resolving the issue to keep your users informed as issues arise, progress is made, and full service is restored!

We supported the development of this integration because we believe it helps extend the value of Panopta for our customers by helping you better serve your end-users. Please let us know what you think in the comments below!

We are excited to announce that we are expanding our infrastructure. On March 16th, we will add 8 new monitoring nodes around the world. As our infrastructure expands, so too do the flexibility and accuracy of our monitoring.

We are very proud to add two monitoring nodes in Australia, one in Moscow, two in Frankfurt, Germany, one in Los Angeles, California, one in Milan, Italy, and one in Tel Aviv, Israel.

  • Sydney 3 –
  • Melbourne 2 –
  • Moscow –
  • Frankfurt 1 –
  • Frankfurt 2 –
  • Los Angeles –
  • Milan 2 –
  • Tel Aviv –

There are also three IP address changes, affecting Chicago 2, Jakarta, and Hong Kong.

The monitoring node locations that have changed IP addresses are listed below:

  • Chicago 2 –  Previously:  – Will become:
  • Jakarta – Previously: – Will become:
  • Hong Kong – Previously: – Will become:

In addition to the two new nodes in Frankfurt, we are shutting down the old nodes in Berlin and Nuremberg on the same day the new ones come online. Their IPs were:

  • Berlin –
  • Nuremberg –

Any customers who have firewall restrictions for our monitoring nodes should update their systems to account for these new IP addresses.

We will continue to add monitoring nodes periodically. If there is anywhere in the world you would like us to expand to, please let us know in the comments below. After all, we love to improve our service.

Welcome back to our latest update on the internet’s top retailers. If you are new to our blog, we have been publishing our analysis of passing and failing uptime for e-commerce webpages. Last time, we found that over the course of the year, 26% of the internet’s top 132 retailers suffered less than 99.9% uptime. Only 19 websites had less than 99.9% uptime this holiday season. Overall, this is a positive sign for retailers across the country and their internet infrastructure, and it comes on the heels of our earlier report of fewer retailers having website downtime issues over 2013. Of the 132 retailers we were tracking this season on our holiday page, only 14% had substantial downtime, far better than the 35% we saw earlier in the year. With a total of 618 outages and 102 hours of downtime across those retailers, performance has improved a great deal compared to 1,006 outages and 155 hours of downtime in 2012. This upbeat news, we speculate, can be attributed to a few changes to internet retailers’ website infrastructure:

  1. For retailers who maintain their own web servers and web services, the cost of better server hardware and software has fallen from highs earlier in the decade, making it more feasible to do more with less. The growth of virtual servers has also expanded retailers’ ability to scale up.
  2. For retailers who have put their sites in the hands of cloud providers, there are now more and more providers offering a wide assortment of options. This also has the effect of making pricing more competitive for brick-and-mortar retailers who want a smaller commitment to their web presence. These expanded options make it simple for retailers to shop around for upgrades to their online infrastructure.
  3. More passively, the quality of internet resources has improved greatly since the beginning of the decade, making it easier for retailers to create, edit, and control their online web presence.

Despite this, the 19 retailers with less than 99.9% uptime were all retailers with persistent problems that put them on underperforming holiday season lists for either 2011 or 2012. The list includes: CDW, J&R, Victoria’s Secret, Shutterfly, Office Depot, Pixmania, Gamefly, Cabela’s, Sears, Backcountry, Blockbuster, Dell, Harry and David, Lululemon, Guess, Tiger Direct, Sony, Urban Outfitters, and Banana Republic. For these retailers, performance was below standard, but there is a silver lining: improvements are simple and easy to start as a New Year’s resolution. Good early steps include beefing up current online infrastructure with additional servers or virtual machines, adding network service monitoring and possibly agent-side monitoring, and developing a team system for dealing with outages.

We are excited to announce a new release full of improvements to our system. This release includes:

  • Updated Reporting Engine
  • Improvements to Compound Services
  • Template Support for Agent Manifest Files
  • Additional CPU Metric for the Linux Agent
  • Custom Agent Metrics

Reporting Engine

Our outage reports now include server resource data (collected by the Agent) and the reasons we capture for outages (connection refused, 404, 500, etc.). We also now compress outage reports into a zip file to allow for faster downloads.

CPU Resources

Our Linux monitoring agent now includes an additional CPU check: you can monitor CPU usage, by percentage used, for each of the cores on your server. You can learn more about this plugin here.
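To give a feel for the metric itself (this is not the agent plugin, just a rough illustration): on Linux, per-core CPU utilization can be computed from two snapshots of /proc/stat, where each `cpuN` line lists jiffy counters and the idle/iowait fields tell you how much of the interval the core spent doing nothing.

```python
# Rough sketch of per-core CPU percentage on Linux via /proc/stat snapshots.
import time

def parse_proc_stat(text):
    """Parse per-core (total_jiffies, idle_jiffies) from /proc/stat contents."""
    cores = {}
    for line in text.splitlines():
        parts = line.split()
        # per-core lines are "cpu0", "cpu1", ...; skip the aggregate "cpu" line
        if parts and parts[0].startswith("cpu") and parts[0] != "cpu":
            fields = [int(x) for x in parts[1:]]
            idle = fields[3] + (fields[4] if len(fields) > 4 else 0)  # idle + iowait
            cores[parts[0]] = (sum(fields), idle)
    return cores

def per_core_percent(interval=0.5):
    """Percentage CPU used per core over `interval` seconds (Linux only)."""
    with open("/proc/stat") as f:
        before = parse_proc_stat(f.read())
    time.sleep(interval)
    with open("/proc/stat") as f:
        after = parse_proc_stat(f.read())
    usage = {}
    for core, (total, idle) in after.items():
        d_total = max(1, total - before[core][0])  # guard against a zero delta
        usage[core] = 100.0 * (1.0 - (idle - before[core][1]) / d_total)
    return usage
```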

Compound Service

We have improved compound services by allowing you to construct them from individual network service and agent resource checks. If you would like to try out the new compound services, check out this article.

Manifest Files

We have improved our Agent manifest file installations by adding the ability to specify server templates within the manifest file. Now, when a server is created after installing the agent with the manifest file, any templates specified will be automatically applied. To learn more, see our updated Agent manifest file documentation.

Custom Agent Metrics

You can now create custom metrics to report and monitor via our monitoring agent with a simple command-line call. This is a simpler alternative to writing a custom plugin and allows you to integrate metric collection into your own systems. To learn more about this new feature, see this article.