Archive for the ‘Clicky’ Category
By popular demand, local searches now support multiple parameters, as well as path-based "pretty" searches. You can mix the two types as well. To use more than one of either, just separate them with commas.
For path-based searches, what you want to enter is everything that goes between the domain name and the search term. For example, if someone searches for apples on your web site and they end up here:
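http://www.yoursite.com/search/apples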
Then you would enter /search/ into the box (without quotes, of course).
For good measure, here is an example of a comma-separated list of both types, pretending your site has both q and s as search parameters, and also does path-based search with the search term appearing after /search/, as shown above:
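q, s, /search/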
Note 1: DO NOT PANIC. You don’t need to change anything with the code installed on your site. We’ve simply made some changes to the way the code works, adding a couple of features and fixing a few bugs.
Note 2: Other than the tel: tracking, most of this only applies to advanced users.
- Tracking tel: URLs – This has been requested here and there over the years, but as Skype et al become more ubiquitous, these URLs are on more and more pages. So we just added support for automatically tracking them. Like mailto: links, these will show up in your outbound link report. You don’t need to do jack diddly, it should just work.
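For example, a link like <a href="tel:+15551234567">Call us</a> (a made-up number, obviously) will now automatically be logged as an outbound click.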
You should ONLY use this for events that don’t bubble up, or you will experience oddness.
The queue system was made to store events and goals in a cookie, which is processed every 5 seconds. If the visitor is still sitting on the same page, the queue will be processed shortly and send that event/goal to us. But if a new page is loaded instead, the cookie is still there holding the event/goal that wasn't logged on the last page, and it can be processed immediately on the new page view (which we do before processing the new page view itself, to keep things in the correct chronological order).
ANYWAYS… some customers were using clicky.goal() to log goals when visitors were leaving their site. The queue would intercept these goals, though, leaving a snowball's chance in hell of the goal ever being logged.
SO… we added a new parameter to clicky.goal() called no_queue, which will tell our code to skip the queue and just log the goal immediately. Check the docs for more.
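Here's a minimal sketch of the idea; the goal ID, revenue value, and element ID below are all placeholders, and the docs have the exact signature:

```javascript
// Log goal 123 immediately when a visitor clicks a link that takes them
// off the site, passing true for the new no_queue parameter so the goal
// isn't stuck in the queue when the page unloads. The goal ID, revenue
// value, and element ID are all placeholders.
document.getElementById('exit-link').onclick = function () {
    clicky.goal( 123, null, true );
};
```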
This doesn’t affect many of you, but if it does, the back story I’ve written above is probably worth a read.
- New method to check if a site ID has been init()'d – for customers using multiple tracking codes on a single site/page. This was a specific request from one customer, but we realized our code wasn't even doing this sanity check itself, so if you had the SAME code on your site multiple times, some minor bugs resulted.
If for some reason you think this applies to you, the new method is clicky.site_id_exists(123), which returns true or false indicating whether this site ID has been passed through the clicky.init() function yet. Note: 123 is an example site ID. Use a real one.
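For example, a snippet that might end up on a page more than once could guard itself like this (again, 123 is a placeholder site ID):

```javascript
// Only init this site ID if it hasn't already been init()'d, avoiding the
// double-tracking bugs described above. 123 is a placeholder site ID.
if ( !clicky.site_id_exists( 123 ) ) {
    clicky.init( 123 );
}
```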
Bug fixes for sites using multiple tracking codes
In addition to the item above about loading the same site ID multiple times causing oddities (now fixed), we've made another change to the way the init process works.
There are a number of things that happen when a site ID is init()'d, but it turns out most of those things only need to happen once, even if you have multiple site IDs on a single page. However, our code was executing the entire init process for every site ID on a page, which resulted in bugs such as:
- clicky_custom.goals and clicky_custom.split only working with the first site ID that was init()’d.
- The automatic pause that we inject for tracking downloads and outbound links was being called once for every site ID, rather than once per click (which is all that's needed).
- When loading heatmaps by clicking the heatmap link from clicky.com, the heatmap would sometimes load twice (making it extra dark).
There were a few other much more minor bugs, but those were the ones that were really irritating. So now we've split the setup procedure into a separate method, and we wait 100 milliseconds before calling it (just once), giving all site IDs a chance to be passed into the init process first. The actual init() method now just puts each site ID into an array, which we loop through whenever a request to log data is made.
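To illustrate, here's a simplified sketch of the pattern – not our actual source, and the function names are made up:

```javascript
var site_ids = [];          // every init()'d site ID lands here
var setup_scheduled = false;

function init( site_id ) {
    site_ids.push( site_id );
    if ( !setup_scheduled ) {
        setup_scheduled = true;
        // Wait 100ms so every tracking snippet on the page can register its
        // site ID first, then run the one-time page setup exactly once.
        setTimeout( setup_page_tracking, 100 );
    }
}

function setup_page_tracking() {
    // One-time work: attach the click handlers for downloads and outbound
    // links, read clicky_custom.goals / clicky_custom.split, and so on.
}

function log( data ) {
    // Any request to log data loops through all registered site IDs.
    for ( var i = 0; i < site_ids.length; i++ ) {
        // send 'data' to the tracking servers for site_ids[i]
    }
}
```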
This has been requested a number of times, and it's something we will definitely add in the coming months: when you set custom visitor data with clicky_custom.session (or utm_custom), we will store that data in a cookie so it gets applied to all future visits by that person. Even if they're not logged in, they'll still be tagged the way they were on their last logged-in / tagged visit.
We'll probably only do this for a few specific keys, though, since people use clicky_custom.session for all kinds of crazy purposes, many of which can be session specific. Likely candidates are keys like username, name, email, and a few others.
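For reference, this is the kind of data we're talking about – custom visitor properties set before the tracking code loads (the values here are made up):

```javascript
// Tag the current visit with custom visitor data. Today this applies only
// to the current session; the planned change would persist keys like these
// in a cookie for future visits. Values are made up.
var clicky_custom = clicky_custom || {};
clicky_custom.session = {
    username: 'jdoe',
    email: 'jdoe@example.com'
};
```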
Just something to watch out for. We think this will be a nice addition when it arrives.
Local search (searches performed with your site's own search engine) has been one of the biggest feature requests we've had over the years, so we're happy to finally support it!
First, you need to tell us what the search parameter is that your site uses. Common ones would be q or search. You can do that in your site preferences.
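For example, if searching for apples on your site lands visitors on yoursite.com/?q=apples, then q is the parameter you'd enter.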
You will then start seeing data in the new local searches report. You can click on any of the searches to filter down to the visitors who performed them. Searches will also show up in Spy, as well as in the actions log (both globally and when viewing a session). The action log can also be filtered down to show just local searches.
And that about covers everything!
Our old method for tracking Youtube was really ugly, requiring a good bit of custom code for every single video you wanted to track. We wanted it to be far more automatic.
So, now it does! The old method still works for those of you who already have it deployed, but the new method is great because it works with the default iframe embed code that Youtube gives you, and it requires pretty much no work on your end.
Head on over to the video analytics docs to see what you need to do to get it working (scroll down, click ‘youtube’).
We've got a handful of new features we hope to release this week. One is local search support, probably our biggest feature request of all time. Another is tracking clicks on tel: URLs. A third is the ability to click on any graph to view/segment visitors based on what you clicked. Last, Monitage (uptime monitoring) is being finalized, which also means we'll have 1-minute monitoring available and the ability to set up more than 3 checks. Monitage won't launch this week, but soon thereafter.
About once a year we have to perform major database maintenance on a number of servers, and that time has come.
16 of our 57 database servers will be affected. During maintenance, the servers will remain online; however, no traffic will be processed until the maintenance has completed. The process takes between 4 and 12 hours, depending on the server. When the maintenance is done, each server will begin processing its backlog, which usually takes about as long as the server was paused. For example, a server paused for 6 hours will typically take about 6 hours to catch back up with real time, though sometimes up to ~25% longer.
Here is the schedule:
Friday March 8, 10pm PST: db1, db5, db8, db11, db21
Saturday March 9, 10pm PST: db2, db10, db23, db25, db40
Friday March 15, 10pm PST: db3, db4, db24, db36, db37, db41
To see if you will be affected, you can look up the database server for each of your sites. This can be found on the main page of the site preferences area.
The analytics API has been a complete free-for-all in its almost 6 years of existence. This has rarely been an issue; maybe once or twice a year we'd have to ask someone to please relax.
But recently it’s become a serious ongoing problem. We’ve had at least 3 different people in the last few weeks all doing utterly massive exports of data, causing some of the database servers to lag quite badly (up to almost 2 hours in the most severe case).
When a server is lagging it affects thousands of customers. We can't have this anymore, so today we implemented some API throttling functionality, which is live now.
Throttling will only apply for visitors-list, actions-list, and segmentation requests, as those are by far the biggest drain on resources. All other requests are unaffected.
Here is how it works:
- Maximum of 1 simultaneous request per IP address at any point in time. Part of the issue recently has been people doing automated simultaneous requests for exporting data, in one case over 20 requests at the same time for the same site ID, spread across multiple dates. This will no longer work. You will receive an API error.
- Maximum of 3 simultaneous requests per site ID at any point in time. We will allow multiple requests for any given site ID at the same time (up to 3), to accommodate custom widgets, people developing apps, etc. But more than 3 will spit out an API error.
- Maximum of 500 results per request (down from 5,000), and a maximum date range of 1 day. This one is pretty strict, but we have to get API usage under control immediately. We will be monitoring things and plan to raise these limits as things calm down.
To repeat, these changes only apply for visitors-list, actions-list, and segmentation requests. No other types of requests are affected by anything mentioned here.
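For example, a request that stays within the new limits would look something like this (the site ID and sitekey are placeholders; note the single date and the limit of 500):

```
http://api.clicky.com/api/stats/4/?site_id=12345&sitekey=YOUR_SITEKEY&type=visitors-list&date=2013-03-01&limit=500
```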
We know this is pretty lame, but it's in the interest of keeping the service as close to real time as possible for all customers, and that's important. Hope you understand.
Many Clicky users have asked us about site monitoring. We are happy that this is top-of-mind for some of you because its importance should not be understated. Often we will receive emails from Clicky users asking why there was a dramatic lag in visitors tracked or a complete drop-off altogether. While there can be many explanations, it is not uncommon that the site itself went down unbeknownst to the site owner. Standard web analytics will not be able to tell you this, but site uptime monitoring and alerts will.
With this in mind, we are excited to announce a closed beta in partnership with Monitage, a newly-developed site uptime monitoring service by Roxr Software (that’s us). We have integrated Monitage into Clicky to give you a bigger picture of the health and activity of your web sites.
Monitage monitors web sites from five locations around the world (three in the US, one in Paris, one in Japan) and only declares a downtime event if a majority of its servers agree on it. This prevents network hiccups on the monitoring end from sending false alarms.
Pro Plus users and above receive access to the Monitage closed beta. When we officially launch, you will have the ability to create up to 30 checks per site with intervals as fast as 1 minute, but during testing we want to keep resource usage within a reasonable range. So for the time being, we are limiting you to 3 checks per site with intervals no faster than 5 minutes. We expect to officially launch within 4 weeks, at which point Monitage will also be available as a standalone service.
To access Monitage, go to your site dashboard and click the Uptime tab. You can create checks for HTTP, HTTPS, SSH, FTP, IMAP, IMAPS, and ICMP (ping). We’ve also created a dashboard module.
You can also access uptime stats from the API. Check the API docs and search for uptime. type=uptime will give you the current status of all of your checks for a site. type=uptime-list will give you a chronological list of all downtime events for your site for the date range requested.
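For example (again with a placeholder site ID and sitekey):

```
http://api.clicky.com/api/stats/4/?site_id=12345&sitekey=YOUR_SITEKEY&type=uptime-list&date=last-7-days
```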
Last, we also added uptime stats as an option in email reports.
When Monitage officially launches, we will determine what intervals and types of tests will be included with Clicky Pro Plus plans and above.
We are asking that you test Monitage, and let us know your thoughts, what you like, don’t like, and want to see. As Monitage is in its infancy, we want your feedback to help mature it into a stalwart companion to Clicky.
Note to white label customers: Monitage will be added as an option to white label service when it officially launches, but for now it is only available to Clicky users.
You may have noticed that we’re now live on clicky.com! Yay!
For 6 years, our brand has always been Clicky. Excluding our domain name, nowhere on our site has the brand GetClicky ever been mentioned, other than in some testimonials that we didn't edit. However, because of the old domain name, the majority of people have always referred to us as GetClicky. It didn't help that our Twitter handle was also @getclicky, but we wanted it to match our domain name (bad idea). It's always kind of driven us insane, but it's been an interesting lesson: if you can afford the right domain name, do yourself a favor and buy it as soon as possible. We paid a pretty freaking penny for clicky.com, but the second it was in our possession was pure, unadulterated joy.
We've had the @clicky Twitter handle from the beginning, but it was just a placeholder. We've now swapped @getclicky with @clicky so all of our followers and tweet history remain in place. Fun fact: Twitter does not offer an official method to do this – you have to actually relinquish your brand for a brief few seconds while you swap things around, which is extremely nerve-wracking to say the least. So @clicky became @clickyx, @getclicky became @clicky, and last, @clickyx became @getclicky. A scary 20 seconds or so. Anyways, please note: if you tweet at us in the future, use @clicky, not @getclicky! But if you were already following us, you will keep getting our tweets without having to do anything.
Any page viewed on getclicky.com is now redirected to clicky.com, with a little note at the top about updating your bookmarks. That note will go away in a few weeks, but for now we want as many people to notice the change as possible.
Don’t worry, getclicky.com isn’t going anywhere. Your existing tracking code, along with any integrations you have with the API, widgets, or whatever via getclicky.com, all will continue to work. Going forward, getclicky.com will only be seen in the tracking code (we are using it indefinitely for tracking). And we’ll continue to use it for email, for now at least, since that’s a nightmare to change. So… clicky.com = web site domain, getclicky.com = tracking domain, email, and indefinite backwards compatibility.
This was not as easy as just acquiring clicky.com and pointing it to a new IP address. For a service of our size and complexity, there's a LOT to deal with when changing your domain name. My checklist is ridiculous in length and I hope to go into it more in a later blog post.
There may be a few bugs lying around… we’ll squash them tomorrow as they come to our attention.
Just pushed a much-needed tracking update for sub-user accounts: they can now use on-site analytics, and their traffic will also be automatically ignored.
Previously this was not possible because of the way our tracking servers were caching information about each site in memcached. After banging heads on keyboards for a while, we finally figured out a way to do it (it’s more complicated than it sounds). So, now sub-users will have their traffic ignored automatically, and they will see the on-site analytics widget.
One caveat – the master account’s preference for disabling on-site analytics on an account wide basis will override a sub-user’s preference.
Making this change also addresses a bug that popped up after the four-week heatmap trial ended a few days ago. All sub-user accounts are created as Pro, since Pro has always had access to every feature and a Pro account was required to have sub-user accounts. But now that heatmaps require Pro Plus or higher, heatmap tracking broke for some of you who had created a sub-user account and assigned them admin access to a site in your account – their account features were overriding yours on the tracking servers for those sites. This update fixes that. We also went ahead and made sub-user account types mirror the master account's: any time the master account type changes, the sub-users are updated accordingly, to keep this scenario from coming up again.
Heatmaps launched four weeks ago. As we said in that post, everyone would have them for four weeks, after which a Pro Platinum account would be required to continue using them. The reason is that the extra bandwidth and storage requirements for this data are significant, which has proven to be the case as we've monitored resource consumption since launch.
After listening to your feedback, we’ve decided to add a new plan (Pro Plus) with pricing in between Pro and Pro Platinum. It is the exact same as standard Pro, except that it includes heatmaps.
As of now, these changes are live. So if you are on the standard Pro plan, you will need to upgrade to at least Pro Plus to keep heatmap tracking going forward.