13 October 2007

Pack Your Bags

In the last couple of weeks, I've spent some time doing very difficult and exciting web analytics research. I've also made trips out to Las Vegas and Napa, California, to gain perspective and insight into my practice. In the process, I had the pleasure of a lengthy discussion with Eric T. Peterson of Web Analytics Demystified.

We talked a little about the mobile web and web analytics 3.0. We touched on some philosophies and the evolution of the practice. Then we decided I would move my blog, along with some research and testing-methods contributions, to his site. That being the case, I will no longer be publishing new information at this address. There is a possibility this publication will continue here; however, at this point, it seems unlikely.

Moving to WebAnalyticsDemystified.com will hopefully provide Judah and Eric with a third useful contributor. From now on, please read up on analytics and testing methods there, and help us develop something akin to a practicum. As always, my contact is open for any interested parties.

That said, here is a link to the new blog: Daniel at Web Analytics Demystified

See you there!!!!

28 August 2007

Blackle.com: Energy Saving Screen.....Bahhhhhhhhh!!!!

We love testing. It's what makes an analyst get up in the morning and drive 15 miles through bumper-to-bumper traffic a different way every day to find the optimal path. It's what makes us count our gas mileage in our Honda. It makes sense. It's who we are. In that spirit, CableOrganizer has taken up some additional testing methodologies with regard to the internet, some of which have very real-world ecological implications. On this occasion, we set out to get the real story behind Blackle.com.

Blackle.com touts a running count of how much energy it has saved. The counter is based on the idea that non-illuminated pixels consume so much less power that a black page can actually help ebb the wasteful flow of energy from the 17" screens in front of our faces. Did we believe it? Hell no, baby...we question everything. At first thought, I said, I'll bet it's true. I admit, I knew nothing about LCD energy consumption. So Paul, myself, and Jason Hernandez, the sales associate at the company who brought this to our attention, slated and performed a test to find the truth. Hint: get Snopes.com on this right away.

Our test setup was one 19" flat-screen LCD (model E197FP), Firefox 2.0, and a Watts Up? EZ electrical consumption meter in our offices. We opened the browser to the Google.com page and opened a tab to the Blackle.com page. When in the primary position, Google.com consumed 28.2 watts, give or take one or two tenths of a watt. When we switched over to Blackle.com in the same browser, the same monitor consumed 29.4 watts, with the same margin of error. To help validate the test, we performed it again with the camera running for everyone to see. Keep in mind, we did not test a CRT (cathode ray tube); if CRTs were still the majority of screens in use, perhaps the energy-savings explanation would be viable. But given how pervasive LCDs have become, the expense of the site does not justify its existence.
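For what it's worth, the arithmetic on those readings is easy to check. Here's a quick sketch using the numbers from our test above; the helper function is purely illustrative:

```javascript
// How much MORE power the test LCD drew on Blackle.com than on
// Google.com, as a percentage of the Google.com baseline reading.
function percentIncrease(baseline, measured) {
  return ((measured - baseline) / baseline) * 100;
}

// 28.2 W on Google.com vs. 29.4 W on Blackle.com:
var extra = percentIncrease(28.2, 29.4);
// extra is roughly 4.3 -- the dark page cost us about 4% more power.
```

In other words, on this LCD the "energy saving" page was measurably more expensive, not less.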

Now, this has piqued our interest. The meter itself is fun. We can measure in volts, amps, or watts, or even set it to record how much money each item will cost. Business and residential users alike have to love being able to see Watt your electrical bill will be before you get it, and how much each appliance is costing. True, at least for me, our utility (Florida Power & Light) actually provides this data to help curb excessive energy consumption. But for some, this may actually make a difference in expense and aid in the battle for awareness of consumption. Imagine that...

27 August 2007

CableOrganizer.com INC. 500 # 126

I'd like to take an opportunity to brag a bit about the company where I work. I know many people out there have a great deal to hold over us. There are companies which sell very consumer-friendly things like purses or jewelry, and companies which sell consumer electronics. For those companies, success comes in the form of large revenues and the ability to say that they compete. For CableOrganizer.com, success is measured in the ability of 30+ geeks and nerds (insert loving and empathetic tone) to carve a living out of selling hand tools and server racks. Most of our market barely knows they can use our site, much less visit and purchase.

So, in honor of our being listed as an Inc. 500 company (number 126), and as a validation of our collective feeling of achievement, I have decided to indulge in grabbing your attention by including a link to a press release for our Big Little Business by the Beach. Please read and enjoy...

CableOrganizer.com Named to Inc. Magazine's 26th Annual List of America's 500 Fastest-Growing Private Companies — having realized astounding 3-year sales growth of 1,413.3%.

22 August 2007

Putting Limitations in Their Place in Google Website Optimizer

This appears to be an area where I keep finding new and exciting things to discuss: the ability to run multivariate tests on page elements within a site. In the spirit of Google, its pursuit of excellence in business and academia, and its willingness to let us push past the limitations we impose on ourselves, I've decided to open a post inviting people to respond with the obstacles they've conceived of that have prevented them from executing a test in GWO. Here's how it works:

You tell me the situation: You, your boss, or some smart-(insert posterior here) in your office has created a mental block to performing a high-level, very useful multivariate or A/B test which, if run, could produce a multitude of insights, win you office praise, and put the cherry on your bonus. You have Google Website Optimizer and a thorough understanding of the major points which promote your ability to test. If only this one thing, this tiny script or tag, or placement, or file type wasn't there, you would be all set. What is that thing, that Bucky Dent of an item which stands as a severe detriment to your intracubicle ticker-tape parade?

We Take a Look: You, me, and the boss....my boss. We sit down and examine the test. In doing so, we will carefully look into your problem and recreate a version of it outside of your live site. Then we'll go to work building the solution necessary for you to get exactly what you want out of the test without a serious loss of resources, mainly time and money.

We Form A Plan: When we've come up with some theoretical workarounds for the problems we plan on eliminating, we'll implement them in a basic operational test. At this point, we'll also choose the appropriate place to collect success (thank-you page, cart addition, proxy page, etc.). Let 'em rip without a sincere impact on your live traffic or their variations. This will cost you a spot on your Google Website Optimizer test list, which means nothing and costs less. But it will confuse you if you go back and wonder why you have multiple tests with the same name. Be ready.

Launch the Test: After the trials produce acceptable results, we'll launch the fully-prepared test. This includes serving your multivariate test to the thousands of visitors who will (or will not) react to the output, bringing valuable stats back to your interface. In a few days, we'll check up to ensure that you're getting the right stats and collecting what you need.

Allow for Completion: As a rule, we try not to look at the progress on these every day, and we suggest the same, simply for the preservation of sanity. Watching the way these items interact, change, and progress can be like watching a horserace on ultra-high-speed film. By that I mean, the suspense is terrible, and it goes on forever. You start to formulate hypotheses about events which aren't really occurring, along with a whole slew of other non-useful behaviors.

Analyze Results: Take a good look at the outcome. Go back and verify your files to ensure that you received all the collection tags as they were executed. Then use your information to build a follow-up experiment. Avinash implies in his book that you can narrow the field earlier than the completion of the test, but, for our methodology, we will not. Take the top five variations and a 'dog' or two, and run them again to verify the results.

Something for nothing is a great idea at family retreats and picnics at the park, but it doesn't work that way here. So, for the effort of putting together a test and its strategy, we're going to ask you for two things which, considering the payoff, may be extremely cheap for the return and praise you may receive. We'll discuss it when you have a chance to write me on the blog or in the Yahoo! Group.

11 August 2007

Update: Call-To-Action on Button Color Outcome

Quite a few weeks back, I wrote about the colors of buttons and their ability to appeal to buyers' impulses on an ecommerce site. Since then, I've placed 8 different buttons in two element areas on a deep-path page of the CableOrganizer site for testing. Using both Omniture and Google Website Optimizer, I was able to produce a positive correlation between our 'View Pricing' button on our Product Detail pages and sales conversion (represented by a proxy: the addition of an item to the shopping cart). The process is as follows:

  1. Using whichever metrics you choose to determine a page which can yield good results quickly, choose the page on which you expect to have the greatest impact by testing. (We actually use a few calculated metrics to determine these, including: primary content composition (the percentage of total entries for which this page is the landing page within the context of the path), bounce rate, and a highly complicated weighting algorithm (developed out of the idea of proxy measurements) to determine the importance of each page, within its context and in relation to its path, in the sequence of conversion.)

  2. When the page is chosen, set up the testing scenario in Google Website Optimizer for a multivariate test on the chosen elements. Ensure that all the tagging is done and there are no overlapping elements, tables, or div tags between the script/noscript tags.

  3. Create multiple variations for each of these sections. My suggestion is to always include at least one or two 'crummy' variations of the elements. That way, you can verify their relevance rating in the process. It may lengthen the test, but it will also strengthen the output.

  4. Using Omniture, or whichever provider you have, place a 'Custom Link' inside each of the element tags you hope to verify. Make sure each 'Custom Link' is unique so that when each is clicked, they report independently of each other. For our purposes, we named each with its color combination so as to immediately present us with an idea of what was happening. (Also, make sure you place these in the appropriate report suite or ASI segment as necessary. Internal traffic could influence these if you are not careful.)

  5. Set your tests up completely and hash out the previews before launching. Look at how they appear at different browser sizes. Know what your users will be seeing.

  6. Lastly, set your cookies to not expire for an extended period. Make this period equal to the average amount of time your customers take before making a purchase from the page you are testing. Remember, the Google code expresses this in seconds (i.e., 3 days equals 259,200 seconds).

Once you've ensured that all of this will work, you can feel comfortable letting the test loose.
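One note on step 6: since the Google code counts in seconds, a tiny conversion helper keeps the arithmetic honest. (The function name here is my own, not part of GWO.)

```javascript
// Convert an average days-to-purchase figure into the seconds value
// that Google Website Optimizer's cookie settings expect.
function daysToSeconds(days) {
  return days * 24 * 60 * 60;
}

var threeDays = daysToSeconds(3); // 259200, matching the example above
```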

Having received significant data from the test running in both Omniture and Google Website Optimizer, I'm confident that the results I can see now reflect a true correlational effect on our conversion. The really funny thing is, now that I see this occurring with such high relational specificity to this page, and to the context of the images and colors contained within it, I am met with more perplexing questions. These appeal to my interest in how people consume visual cues, much as with the composition of a painting or image: shapes and colors play off one another. Does the fact that the color dominating the top button is also present elsewhere on the page, with the button as the centermost position of that color, mean I may have to find the individually best color for each button on the page? If so, is there some concentric model which can explain how shades and variations of color intensity create visibility, appeal, and action? I'd be very interested to hear any explanations on this topic, or about tests which have been performed with regard to it.

As usual, I love to spend time thinking about and discussing advanced principles in testing, analytics, optimization and, well, anything about this field. If anyone has any ideas or suggestions, please feel free to contact me. I have been very busy for the past five weeks, but I think my posting may become more frequent for at least the next couple months. If I don't get back to you right away, it only indicates I am busy trying to keep my job.

24 July 2007

CableOrganizer.com Senior VP Featured in Carol Krol Article in B2B Magazine

Last week, Paul Holstein, the senior VP at CableOrganizer.com, was interviewed, along with Eric T. Peterson (CEO of Web Analytics Demystified), for an article on free internet analytics tools. The interview and subsequent article were compiled and written by Carol Krol. The full text of the article is available here.


The article discusses several of the tools currently used by web analytics practitioners. Anyone with a new or vested interest in this topic should feel free to read and respond on this site, or go directly to our website (CableOrganizer) and inquire about anything you wish.

17 July 2007

Usability Testing - TechSmith Morae Product Review

With significant buzz around website optimization methods, it makes sense that a certain level of interest would exist for things like usability testing. The people looking to take their site or business down this road to happiness may have questions about software, scripts (not JavaScript, but the actual scripts which help user and observer navigate through the test and hit on the major issues), importance, value, finding candidates, and even setup and implementation. The purpose of this post is to point out the seemingly unlimited value which can be uncovered by performing usability tests, provide some insights, and report on our experiences with the TechSmith software called Morae.

I have written about usability before; there is an article in the blog from May 22 if you want to supplement this one (http://danalytics.blogspot.com/2007/05/usability-testing-driving-office-unity.html).

Usability testing is the means by which a company or individual can assess user experience on their internet site or internal applications by observing the way subjects interact with them. It can be as simple as sitting and watching a person use the site in a normal environment, or as complicated as recording eye-tracking, facial responses, and audio communication from users in a scientifically controlled environment. In the first case, you may want some simple impressions of navigability issues or quick reactions to appearance; in the latter, you can extract more detailed insights, though costs increase with complexity. Some agencies have even begun consulting in this area, providing an avenue to these tests when it does not make sense to do them in-house.

At CableOrganizer.com, we're lucky enough to have had the space to create a usability lab. We spent a couple hundred bucks on paint and office furniture and bought a new computer to load up with software. After looking at several alternatives, we decided on TechSmith Morae. Morae is a three-part suite: the Recorder, which is loaded on the test machine; the Remote Viewer, which allows up to four observers to participate in watching and making notes on the session from remote locations; and the Manager, the video parsing and editing software which should be installed on the computer of the analyst who will be creating the final usability videos. The total cost for the whole shebang was probably a couple thousand dollars, maybe close to $3,000.00 by my estimate, plus the cost of the space.

Many people we've spoken to recently, some at the eMetrics Summit in San Francisco, were starting to discuss the benefits of usability testing. We hasten to push people in that direction for a couple of reasons. First, we've experienced its success and value first hand. Using usability tests, we've uncovered problems with our search, navigation, calls to action, and shopping cart which immediately led to recovering the costs of the investment. Second, as internet frequenters, we're quite sick of fumbling around and guessing at what a site is trying to present us with for navigation. In other words, on too many sites, getting anywhere is impossible unless you are already familiar with them.

As a web analytics and usability practitioner, I have come up with a fairly significant amount of data which may help you in your efforts to ease your navigation and, in the end, produce a more positive user experience and a higher rate of conversion. Over the past 4 months I have been compiling, analyzing, parsing, and re-analyzing dozens of tests performed by all different kinds of subjects, which has led me to these conclusions. Any of these particular issues can help you know where and how to look at your site through your customers' eyes:

  1. Focus first on the areas of the site which are universal across many different types of sites. Things like your home page, on-site search, structure, detail pages, shopping cart, and every single button people may or may not click on all have a look and feel to the user. Find out what that is and how yours needs to look to make the user feel comfortable. Test and retest until all your methods are producing the positive results you desire, or can at least live with.
  2. Help the user connect to the site. Not all your testing subjects are going to be familiar with your site unless you are a giant. To get the user engaged in the scenarios and providing greater perspective on the cues and content, try to find a way to relate how your site would fit into some aspect of their life. It will make them feel more confident and comfortable in the test and provide you with more input.
  3. Find candidates with or without a connection to your industry. If you have a site dedicated to one product or subject, it doesn't mean a subject must know that industry to yield revelations. Like every medium, there is a great deal of inter-related cues which people react to across cultures and geography. Occasionally bringing in the 'blind-taste-test' subject can have its virtues for uncovering things which will help to broaden your market.
  4. Pay close attention to when and how people use your search. This is a sweet little side effect of your usability test. You can snag wonderful little keyword combinations which, in the long run, may be very valuable. In addition, you can get a sense of how long each 'type' of user takes before resigning or executing (depending on the mood of the action) an on-site search. You WILL notice a difference between people who use search by choice and those who submit to search out of navigational frustration. If you find that a majority of your usability tests involve the latter, you need to find and get some data from the former.
  5. Keep users talking. Just sitting and watching is sometimes not enough. When you need to get value out of each and every test, you need to coax continued communication from the subject. The only way you can learn what a customer or subject is thinking is to ask them. On each major action on and throughout the site, ask a question or make a comment that will open discussion on the appearance, or issues concerning what the user is seeing or feeling.

Each of the above, and a great deal more will make your usability testing program produce value minutes after you begin your first test. Adhering to these principles and creating more customized options will help you build a unique set of scripts and paths on which to travel and make huge gains.

In our context, usability is made possible by the Morae software described above. With it, we are able to compile a growing number of tests in a single place, with observers' notes on each test. This gives me the chance to provide useful data to the people making decisions about the site in our office, as well as to provide insight to the companies we hire to handle certain applications on our site. We have compiled portions of videos together to report on issues which continue to arise for people using our search or shopping cart, or even smaller applications like our reviews.

TechSmith seems to have gotten everything right with Morae. It provides a close-to-real-time interface for observers to add little notes flagged for the Manager. It highlights things like clicks and other actions on areas of the site. It also includes synchronized audio and picture-in-picture video, so you can see a screenshot and the simultaneous facial reaction in a single frame. I actually have NO complaints about it and, with a $1,300 price tag, I think it's worth it for any small internet-based business. When you're setting it up, you have to test the interface and ensure that the people sitting in on the tests know how to use the little marker flags, but having said that, there are no other particular issues we have had trouble with.

If you're thinking about usability, it means you're thinking about evolving. With all kinds of new applications and site ideas emerging in the next phases of growth, you need to ensure that you are doing everything you can to stamp your footprint into the landscape that is emerging with the new dawn of the internet. Creating even minor obstacles to success in this environment could mean the difference between thriving and extinction. What has always worked for mankind is the propensity for making and using tools. Usability is a tool, much like multivariate testing, statistical analysis, A/B testing, etc. Ensure your survival.....use tools.

If you have any questions or comments about what I've written, please feel free to leave a post. I would be glad to help with any issue which may arise from taking our advice or getting involved with the products discussed here. I am a full-time web analyst for CableOrganizer.com, which means there are demands on my time and obligations I must fulfill. My responses tend to come from spare time on weekends, so, please, if you write, be patient so that I can ensure my continued employment.

12 July 2007

Some Tips and Tricks for Omniture Custom Link Tracking

Like much else with Omniture, it seems that one of its most valuable features, the Custom Link tracking available in the traffic reports, is poorly explained and, consequently, difficult for people to understand or implement appropriately. Here I have compiled a list of ideas and tricks to not only help you install your own custom links, but to give you a hand making them readable, parsable, dynamic, and useful. In some instances, like those related to on-site search, these may provide more insights than your company will know what to do with; in others, the simplicity alone may astound you. Read on, and feel free to post any questions or comments to aid the discussion.

Custom Link tracking is the installation of a small event call on certain actionable areas of your website. (Some readers may not be using Omniture SiteCatalyst; in that case, ask your provider which method to use to gather this particular data.) You create a tag which calls a script based on the event you wish to collect on. By default, the documentation explains that this should happen on the onclick action. This works, but we often find that people click buttons less when hitting the 'Enter' key makes more sense, so onclick under-reports. To combat this, try using onsubmit, or some other event of your choice which can provide a more complete picture of the use of these links or areas of the site.
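To make that concrete, here is a minimal sketch of the idea. `s.tl()` is SiteCatalyst's standard custom-link call, with 'o' marking a generic ("other") link; the wrapper function, the link name, and the way it is wired up are my own illustration, so adapt to your implementation:

```javascript
// Fire a SiteCatalyst custom link. 's' is the SiteCatalyst object the
// page already loads; 'o' = generic ("custom/other") link type.
function trackCustomLink(s, linkName) {
  s.tl(true, 'o', linkName);
  return linkName; // returned only for convenience and testing
}

// Attach it to the form's submit rather than the button's click, so
// Enter-key submissions are counted too, e.g.:
//   <form onsubmit="trackCustomLink(s, 'On-Site Search')">
```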

Using this method, CableOrganizer.com has been able to pull off a few neat stunts.

First, we know how many people are using our on-site search, to compare against what our search provider is telling us. This is important because, of all the features on the site, search is the single thing which ties most directly to our conversion. The better and more visible our search, the more people will use it. The more people use it, the more information is available to draw from, and the more active the learning algorithm tied into it becomes. The better the results, the more likely people are to click and buy.

Second, we are able to gather information about the navigational habits of people using the search. Our IT department built a search-aid solution: a real-time search-suggestion application which pops out from the search box based on user input. By attaching a Custom Link to the application, we can determine when people use the suggestions, what they chose, and how they executed the action (either by a mouse click or by hitting the 'Enter' key). This provides us with information about what people are searching for, how often they are searching, what the habits of the majority are, and so on. It's a wonderful thing. We've used it to rethink how our navigation and buttons should work throughout the site, as well as to groom some of our keyword strategies.

Lastly, and while this may seem complicated, I assure you it's simple enough and valuable: we used custom links in the variations throughout our multivariate testing. When setting up our most recent multivariate test in Google Website Optimizer, I took some extra time and care to tag each of the variation areas with a separate, individual Custom Link. While the test is running, I can not only gain insight into the best possible combination as provided by the Google interface, but also understand how individual elements are performing their tasks. For instance, in this case, I have 6 buttons, each with its own reason and strategy. Each of these buttons is passed into the page by the Website Optimizer, and along with the button, a little Custom Link tag is passed in as well. While the Website Optimizer is using 'Add to Cart' as the action representing conversion, clicks on the individual buttons report back to me on the particular visibility of each version. This gives me a more complete picture of the button's impact and how it relates to the success of the page. I honestly can't wait to answer the question: "Does performing action A correlate to the outcome?"

Even without all the tricks and fancy dressing, Custom Links are valuable by themselves. If for no other reason, they help the analyst understand BEHAVIOR. That's really all that matters. If you can't boil out some idea of what a majority of your users are doing, there is no use being an analyst. Without that piece, you might as well make decisions with a Magic 8 Ball. For me, for us, and for the science, there is no room for speculation.

As is regular and customary, if you feel like you have something to add, or you have a question, you can feel free to contact me or post here. I will get to it and answer when time allows.

21 June 2007

Google Website Optimizer Tips

Having really put the Google Website Optimizer to the test at this point, I have decided to publish some possibly useful tips for people trying to push the limits of this free tool. Here I will list a group of things I had to spend some serious trial-and-error time correcting. It may save some time for those willing to read.

1. The importance of testing is pretty obvious, but how do you really do a full-page A/B test? I know Matt Belkin at Omniture will tell you never to test more than one element as an A/B without considering the possibility that what you are discussing is a multivariate test. However, we have some pages with identical content which we decided to test in a new layout and format. When it was put in front of me, I hastily said, "Sure, I can do that...", and the truth was, I believed it would be as simple as taking the code, tagging it, and letting 'er go. This was not the case.

2. Make the fancy page the original and use the original as the variation. As you may or may not know, when using GWO, you have some limitations with respect to what can be passed in and out of the 'utmx_' sections you define in the setup. The variations cannot contain active scripts for things like drop-downs or tabs. So, in a case like ours, where the new page is the proposed next phase of transition for a number of our pages and the test design doesn't allow for it, you have two choices: make it work, or.......make it work. Instead of taking the original, splicing it up with a tag, and passing the new page in and out of it (our site uses four PHP includes which build a template around the static pages), we moved the new content into the original slot, where all the links point, and made the less complicated original version the variation which could be passed in and out of the tags. BINGO....works like a charm.

3. Split traffic with a redirect. Our pages work on a template, as I stated above. For that reason, when we used a redirect to attempt splitting traffic, it worked, but only after loading the page includes before the redirect occurred. That doesn't mean the idea should be abandoned. If your page does not rely on includes or any server code which has to be called prior to the body, you can create a variation which, when passed into a variable given priority in the code, will call the redirect and split the traffic to a new page on the site. For me, it was more important to maintain fluidity than to make the redirect work, so I saw no value in it, but I'm sure someone can use that tidbit.
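The redirect-split idea in tip 3 boils down to a few lines. A hedged sketch: the `navigate` callback stands in for `window.location.replace` so the logic can be exercised outside a browser, and the variable names are my own, not GWO's:

```javascript
// Decide whether this visitor stays on the original page or is sent
// to the variation URL the test passed in. An empty or missing value
// means "stay put" (the visitor sees the original).
function splitTraffic(variationUrl, navigate) {
  if (variationUrl) {
    navigate(variationUrl); // in the browser: window.location.replace(variationUrl)
    return 'redirected';
  }
  return 'original';
}
```

Remember the caveat above: if your template runs includes before the body, those will still execute before the redirect fires.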

I hope these relatively simple tips are helpful if you are interested in getting the most from the Google Website Optimizer. If you have questions, suggestions, or comments, please feel free to voice them. A note for those interested: I will be migrating the blog to the CableOrganizer.com site over the next week. Any articles you have read here will be available there from that point on. When the migration is complete, I will publish the link here, in the Yahoo! web group, and through my Technorati profile.

16 June 2007

Call to Action: Hierarchical Cruciality in Colors & Their Intensity

Throughout the experiments and tests I have been running recently, a notion occurred to me for which I have found it difficult to uncover any existing, definitive, supported best practices. This notion, which for lack of formal acknowledgement I will call hierarchical cruciality, is that people respond to calls to action according to their position within a path to conversion, with a correlative level of intensity in the calls. More plainly: button colors and their intensity should be chosen based on their linear point within a funnel to conversion. It is my theory that not only are color and intensity important, but that path-relative color and intensity tuned to the right frequency may, at the outcome of my experiment, lead to substantial increases in feature-specific use as well as conversion.

I recently read a blog by Jonathan Mendez, called Optimize and Prophesize (http://www.optimizeandprophesize.com), which I found insightful in my quest to uncover current best practices on this topic. In it, Mendez said that his multivariate tests had shown that using a single button or color for buttons and calls to action across a site was flawed. His testing, he asserted, uncovered the fact that people in different paths react differently to the same stimuli. He listed several factors, with respect to buttons and colors specifically, that lent themselves to the optimization of certain funnels. I thought this was brilliant. Although a little intensive to implement, I think his findings are probably very accurate.

In the case of CableOrganizer.com, pathing reports consistently yield statistical data telling me that paths are as unique as the individuals who click through them. To assume that each of these individual paths and significant funnels reacts to the same cues is naive. However, being able to predict the behaviors and adjust to them with the appropriate cues is inherently very difficult. So the task becomes a matter of testing the most significant and valuable path and optimizing it with the best variations for a maximized outcome.

As a subcomponent of my theory, I would like to state that orange is the 'new red'. Red is a very action-evoking color, don't get me wrong. But I would be willing to assert that while red is valuable in its ability to get the user's attention, it is not a singularly powerful cue. Further, as many have noted, it carries negative connotations which repel as often as they entice. Red is the color of debt; imagine that thought as you plan on using your high-interest credit card. It is also the color of danger and several hundred other apprehension-eliciting sentiments. Orange, a vibrant alternative, has less volatile implications and can be very visible in digital media. Of course, any good analyst at this point should be saying: prove it.

CableOrganizer.com recently implemented an on-site, action-promoted minicart. The packaged button for this cart was a red image with a size 10 font which said "checkout". It was nearly invisible, and somewhat repulsive when the cart came up after adding an item. Immediately, I thought back to our usability tests and the functional invisibility of our shopping cart. Previously we had used a yellow button. That stunk. Then we used this red one, which was also visibility-impaired. So, I created a sizeable orange version. The outcome? Suffice it to say that currently, having collected custom link data for more than a few days, more than 40% of the people exposed to the button click on it. My estimate is that, once significant data can be collected on this particular instance, this change will have positively impacted conversion by as much as 10%. In our context, that means going from 2.7% conversion to 2.97%, but over the course of a year or more, that is significant.
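The arithmetic behind that projection is worth making explicit. A quick sketch, using only the numbers from this post:

```python
def projected_rate(baseline, relative_lift):
    """Apply a relative lift to a baseline conversion rate."""
    return baseline * (1 + relative_lift)

# A 10% relative lift on a 2.7% baseline conversion rate:
new_rate = projected_rate(0.027, 0.10)
print(f"{new_rate:.2%}")  # 2.97%
```

The point being that a "10% improvement in conversion" is relative, not absolute: it moves 2.7% to 2.97%, not to 12.7%.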

My goal now is to uncover a comprehensive color-based mapping for behavior which could be implemented on our site, meeting the de facto policy of CableOrganizer.com with a sophisticated method of presenting action elements which are ideal for their position within the funnel while enhancing user experience. Doing so should yield significant positive gains and a cross-section of business-to-business customer behaviors which could be applied across the industry in best-practices models. Further, it may provide insights into human and user behaviors online which would be applicable across cultures and industries.

I would like to ask anyone with an interest in this particular study to submit any research, comments, observations, or experiences through this blog or via email, to aid in my preparation of elements and to make all possible considerations. While this is work related, it may need a significant amount of dedicated time which my employment may or may not accommodate. If anyone is willing to provide elements or ideas pro bono, please submit those by the same means.

12 June 2007

Web Analytics an Hour a Day: Preliminary Review

On Monday I received my copy of Avinash Kaushik's "Web Analytics: An Hour a Day," published by Sybex. At first glance, it seemed like any other book on analytics you would buy off a shelf in the Geeks and Dweebs section of the local Barnes & Noble. When I opened the book, the first thing that caught my eye was a small box indicating that all the profits and proceeds of the book are to be donated to two separate medical causes. Aside from my firm belief that the author is as dedicated to the practice of web analytics as anyone I know, I am sincerely touched by his strength of character in taking the initiative to provide this much-needed source of revenue to respected medicine. Sláinte, Avinash...

Avinash seems to have taken the time to pin down the key points of the evolving use of analytics as it applies generally to business. He's built a few systems of thought around the rudimentary ideas which nearly anyone can grasp and developed more complex modes to help coax new ideas. One idea which I particularly enjoyed was his explanation of an idea called a Trinity, commencing on page 15. Using this theme, Kaushik was able to tie together integrated analyses based in Behaviors, Experiences, and Outcomes and how these influence each other.

The central foundation of the Trinity which Avinash discusses is the production and extraction of actionable insights and metrics. For us, this is a simple identification of the low-hanging fruit: the items which need the most work to perform their duty on the page, whether for navigation or anything else. This might be as simple as looking at how clearly defined the navigation is, or how visible our "View Cart" button is.

Based on the Trinity, then, we would find a way to quantify the behavior exhibited by people who had access to that element or were exposed to it. This takes on the chore of asking: of everyone who had the need or opportunity to interact through this provided means, who did or did not use it, and why? In the case of our "View Cart" button, we actually built a custom link in our Omniture suite, and compare the stats of the collected link executions against our log files, which tell us how many times the item has been presented. We can even go so far as to monetize this button and then compare it to a proposed improvement over similar timelines. When the outcome of the changes made based on the behavior can be clearly correlated, it gets placed in the "experiences" area of the Trinity. These aren't epistles which remain unchanged until the end of time, but operational relationships which exist for any site in its current and evolving state.
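That comparison of link executions to presentations boils down to a simple ratio. A sketch with made-up numbers (nothing here comes from SiteCatalyst itself):

```python
def engagement_rate(clicks, presentations):
    """Share of element presentations that resulted in a click,
    e.g. custom-link fires vs. times the "View Cart" button appeared."""
    return clicks / presentations if presentations else 0.0

# Say the custom link fired 230 times against 1,000 cart displays:
print(engagement_rate(230, 1000))  # 0.23
```

Tracking this ratio over similar timelines before and after a change is what lets you attribute a lift to the element rather than to traffic noise.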

This is only the first of the sections I've read. I enjoyed it very much and am looking forward to reading and reporting on a great deal more. If you have the means, or the gusto to ask the boss, get out and pick up the book. You can probably read a great deal of similar insight and news on Avinash's blog, Occam's Razor (http://www.kaushik.net/avinash/). I'll continue reading and reporting on areas which I think will benefit the useful content of my blog, and pass along to people in the practice some of what I was able to get out of the publication.

If anyone wants to open discussion on what I've mentioned here or take some time to open up new threads, please feel free to do so here. I will respond when time allows and does not interfere with my ability to perform the duties responsible for my primary source of income.

06 June 2007

Upside Downside of Taguchi Tests Revealed in Progress

Okay, so there are a couple good things and a small number of bad things which I find are happening in the use of Google Website Optimizer for the Taguchi tests.

First: WEBSITE OPTIMIZER NOTE: Your collection code for the goal DOES have to be after the analytics tag; however, it DOES NOT have to be before the 'body' tag.



Tests are progressing very quickly. I actually had to shut down the test, copy it, and restart twice since the original post, because it ran into some tag deletion from an update to our site from a saved version. It made for a frustrating couple of days until I could clear up the data. However, once tests get out there, they run very quickly with limited traffic exposure, and you can begin to see projections, albeit with a wide confidence interval, within a couple of hours.

Picture of Element Value is Much Clearer. When we built the test variations, we purposely built a few 'dogs' which we knew would be terrible indicators of conversion into each of the element variations. These include purposely bad spacing, terrible wording, image problems, and excess text. I mean, we really tried to get an idea of how bad we could make something. In the variations with clearly bad elements, there is corresponding poor performance. This is wonderful as far as I'm concerned, because it validates our hypotheses about conversion-driving elements. Paul Holstein actually put out a post to the Yahoo! group on these, available at: http://tech.groups.yahoo.com/group/webanalytics/message/9383
We're pretty sure the list is worth printing and having on hand prior to testing.


Editability is much more complicated. With the boons of quick results and dynamic testability come the burdens of human development. Errors which were not discovered prior to launching the test create the possibility that the test will not give a clear picture. When things DO go wrong, whatever progress was made with the test is wiped out, and you have to copy, then restart, the test with the corrections. With single-element, full factorial tests, you can at least make changes to elements which are not tied into the HTML directly (say, an image needs to be clearer or its link needs adjustment).

Element Strength is Combination Strength. With Taguchi tests, the combination is the element. This means the test is geared toward optimizing the best possible combination, not the strength of a single varied element. While this falls under the CON section, it's only a conditional con, as it makes for great follow-up testing with your traditional uses of the Website Optimizer.

31 May 2007

Expanding on Taguchi Methods in Google Website Optimizer

I've hypothesized for quite some time that running a Taguchi factorial test in Google Website Optimizer was not only possible, but beneficial in the time it saves when creating and managing several multi-variate tests. Having taken the time to do the research and the implementation, I am very proud that CableOrganizer.com is currently running exactly that type of test with the Google tool, in a fraction of the time of a full factorial test and at a fraction of the cost of the "professional" tests out there.

Here's how: the Google Optimizer uses scripts with identifiers to "split" areas of the pages being tested in their coding. Essentially, that delimits all that is included in the tested element. These areas are completely determined by the user, so there is an opportunity to manipulate and build on what Google has given to improve it for our uses. What we've done is create a single element which includes 7 individual elements. These were based on the series of elements Paul has mentioned in several posts on the Yahoo! group, somewhere in this catalog: http://tech.groups.yahoo.com/group/webanalytics/.

For this test, by the way, we used the MultiVariate setup calculator in SiteCatalyst to develop the Taguchi recipes.

So, yes: 7 elements with two variations each, plus a control run in parallel with a traffic partition. The modeling calculator above lists 16 recipes to be run against the control, instead of the 128 combinations a full factorial methodology would require.
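The saving is easy to verify. Enumerating every combination of 7 two-level elements gives the full factorial count that the 16 Taguchi recipes stand in for (a sketch only; the orthogonal array itself came from the SiteCatalyst calculator):

```python
from itertools import product

ELEMENTS = 7  # elements under test
LEVELS = 2    # variations per element

# Full factorial: every combination of every level of every element.
full = list(product(range(LEVELS), repeat=ELEMENTS))
print(len(full))  # 128 combinations

# A Taguchi (fractional) design covers the main effects in far fewer
# runs: here, 16 recipes plus a control instead of all 128.
taguchi_recipes = 16
print(len(full) / taguchi_recipes)  # 8.0x fewer test cells
```

That 8x reduction in cells is exactly what makes the test finish in a fraction of the time on a moderate-traffic site.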

Thereafter, I went about creating the elements which needed to be developed and laid out for each of the pages. I created 9 images to take the place of the three in the original, plus 6 to test how the number of images impacts performance. It took some fooling around to get it to work right, but I was able to develop 16 unique recipes in Dreamweaver, which allowed me to name each of the variations in accord with the Taguchi output from SiteCatalyst.

One by one, I took each code snippet out of the Dreamweaver code view and placed it in the corresponding named variation element inside the Google Optimizer. When I was finished, there were 17 variations on the same 'area' of the site. I made sure to look at different screen resolutions to see how this would be affected for certain viewers, looked through our two most popularly reported browsers (MSIE and Firefox), and clicked the last things into place. By 4am, I was sick of looking at the page and tired of adjusting tables and alignment, but I got it right and hit the button. Having allotted 50% of the traffic to the test, it was neat to finally see the numbers start to roll in a couple of hours later over coffee at the office.
According to the Google AdWords Testing Calculator (available at https://www.google.com/analytics/siteopt/siteopt/help/calculator.html), a tool brought to my attention by Robbin Steif from LunaMetrics when she was training me on multi-variate testing and conversion science, this test, based on the inputs and the methods of calculating proxies internally, will collect and distribute valid results within a week's time. As a safety measure, and to reduce the level of doubt, we'll let it run all the way through, but it will still take only two weeks at most to produce a final result. This is a great thing for a small or medium business with moderate to light traffic. If for no other reason, it helps level the playing field with companies running tremendous traffic who can afford to have an outside agency perform these experiments.

Maybe you wonder what the value is in something like this, besides the ability to say "we did that," or why it even matters. That's a fair question. For me, it was necessary because we need to get value out of everything we do. To test how product descriptions were presented in the earlier pages of our navigation paths, I had to be able to test and adjust several elements all at the same time without losing customers to a page that was schizophrenic in its presentation. You see, you can't test things like that with a full-factorial analysis on, say, a category page or an intermediary between landing page and goal. It requires control over several repetitive elements simultaneously, which meant the Google tool was a freebie we'd have to pass on unless we could make it work. Having said that, I'm thrilled I gave this a try; the sense of satisfaction is well worth the toil which went into mapping and preparing this highly complicated test.

If you have questions or require more information on this subject, I would be glad to help; just send me an email. I will respond within a reasonable time frame, so long as doing so doesn't place me in a compromised position with how I spend my time on the clock.

Be sure to check out the site, too; CableOrganizer.com is always working on wild experiments and testing new uses for analytics principles. Maybe you'll get some good ideas.

Taguchi Multi-Variate Test in Google Website Optimizer

This morning at 4:11am I completed the final work on, tested for browser compatibility, and launched a Taguchi factorially-based multi-variate test using Google's Website Optimizer.

As I begin to see results and have valid data which can be analyzed, I will share the findings with the realm of practicing analysts. Just a note, while the test itself should take less than 10% of the time for a full factorial analysis, this was very difficult to set up.

In order to perform a Taguchi test, you have to actually parse entire areas of the pages you are testing as single elements and maintain the integrity of their HTML when passing all the variables into GWO. A thorough tutorial and explanation will be provided here and through some work being compiled by Andrew King.

I'm exhausted, but the methodology merits the effort.

27 May 2007

Crystal Xcelsius Rocks

Upon returning from a recent analytics event, I took one valuable piece of information with me which I wish to share. During one presentation, an individual described a way to win internal support for analytics and communicate its value to the company through a visual and dynamic means. Throughout the presentation, though I know the person worked for another company, much ado was made about a piece of software called Crystal Xcelsius.

Crystal Xcelsius is a data-driven application-building module making use of the same principles that power charts inside Microsoft Excel: it uses cell ranges and attributes visual cues to numerical data. Comparing how the charts and cues look in Excel versus what Xcelsius outputs, it seems like alien technology got its start at Business Objects. This is good, helpful, and attractive in itself, and probably worth the $300 price tag on its own. However, it is only the beginning of what an analyst is capable of with the software.

Xcelsius has taken me a couple of days to figure out, and there are certainly things I haven't explored completely, but it's astonishing to uncover the value of the tool. Within a week's time, I will have created a full-scale, completely operable tool to report on all aspects of business at our offices. I've been able to integrate data from numerous sources and measure performance on each of them in a single interface for the stakeholders, driven by subordinate interfaces through which other managers and reporting parties input their data with relative ease.

I'm able to report on traffic and commerce KPIs, SEO and SEM, internal search performance, key page performance, marketing, and other online metrics while simultaneously producing relevant reporting on things like our customer service and non-internet performance, without skipping a beat or opening new software. Further, I can integrate and report on our analytics tools' own performance and compare to determine where collection may be overcounted or not populating correctly. The best part, and I mean this as seriously as sincerely, is that the most annoying part of the reporting, the collection and input, is done a little bit a day by the responsible parties and collected in the tool for analysis. I can see and analyze all my data in one place and push it along with these wild graphics.

I'll post screenshots and 'snags' when I get a chance. If you have any tips or experience with Crystal Xcelsius, or questions on how to perform what I have described, please feel free to email me or leave a comment. When I'm not on the clock and have a moment to respond, I will, with any information I can provide.

26 May 2007

Usability Testing Driving Office Unity

Good Morning...

I know a great number of people tout the benefits of having Usability Testing performed on their site as part of their optimization efforts. Most of these include things like uncovering how real people are using cues to navigate your site, or the web in general, or how your site meets up with expectations. Little is ever mentioned about the really incredible socially fusing properties for diverse disciplines within the workplace.

We perform, on average, 4-6 usability tests per month for the sites I currently analyze; every other week we schedule and perform three tests. To do this, my employer set out a budget and some space where we set up a usability testing lab. This is an office in which we created a sense of comfort with light colors and more home-styled office furniture and ambience. We added some lighting to offset the typical fluorescent setup and put in some plants and artwork for added flavor. Lastly, we installed software called Morae (product review available soon through this blog), and with a Logitech camera/microphone, we're off and running.

Well, first, let me say that in terms of return on investment, usability testing is one of the more immediately actionable tests, with a sudden measurable lift and a long tail. In other words, it provides key insights into all the major areas of the site that contribute to conversion. It can be really exciting, and really scary at times, to see the types of things you may have overlooked. Ultimately, no matter what, performing scientific usability testing is worth its cost many times over. There are also added-value items which I never expected.

As our tests approach every couple of weeks, you can catch the buzz in the office. People know they are coming up and constantly ask me who is testing, what we're looking at, how their ideas are working out... it's really neat. Then the Thursday morning of the test comes, and all the departments can opt to observe. Offices get crammed in every corner of the building to see the test. We script out the first couple of minutes of the test and then, extemporaneously, follow the lead of the subject. After about 30-40 minutes, we shore up the loose ends and close the connection. By the time the Morae manager files are converted, we have a time set up when the observers and the stakeholders can get together in an office, pool our thoughts, and share ideas about improving the experience.

I kid you not, I've suggested creating highlight reels of this and getting the whole of the local employees together just to hang out and watch these videos. I also create parsed videos for analyzing multiple experiences, so specific departments can deal with particular elements of the design or navigation, or further analyze sections provided by outsourced agencies. People in the office have actually come to me wanting to burn usability DVDs so they can take a look at home and jot down some additional thoughts.

I understand that there are investment considerations in the ability to perform usability testing. In all, the cost of the software, the labor to script and develop testing, the PC, and the materials and labor to prepare a proper setting is probably about $5,000.00, or just under it. Getting a steady picture of the problems hindering your conversion is worth that cost in a single test. If the lifetime value of a customer is impacted by a single sale through ease of navigation or the implementation of new, more relevant images or cues, you'll triple your investment, at least, on the first changes you make. (While I say "triple" figuratively, I may be completely underestimating the ROI, only because I'm trying to stay casual and effective in communicating the point.)

If you would like to, please contact me and we can discuss Usability Testing or any analysis issue you may have, at length, so long as it does not inhibit my ability to perform for my employer.

22 May 2007

In Case Anyone At Omniture is Reading

One thing I would really like to see in SiteCatalyst, which would help me save time in generating reports, is the ability to use a 'Manual Update' feature like the one available in Discover 1.5. Think about it.

The site is so full of people generating reports with exhaustive data, day and night, that when I want to get directly to a report I have not yet bookmarked or set into my dashboards, I have to actually go to that report, then adjust the date accordingly, then search and drill down as necessary. Essentially, that creates a series of reports, each based on the first one. What I would like is to set up a request with all my criteria, then hit update and produce a single report with exactly what I want.

By producing that one single-button capability, you could reduce requests on the site for ad hoc reports by at least one third. My guess is that would make the data process much faster and keep more clients working on analysis and reaping the benefits, instead of fighting your software and being frustrated. Granted, this is not an easy thing to win support for, but my guess is it would be the single most useful function for the advanced analyst who has become familiar with the tools.

Keyword Bounce Rate Through Omniture SiteCatalyst

If, like just about everyone else I've spoken to, a large part of your site strategy is built around keywords and some search engine marketing, you may want to take some time to think about measuring your keywords by their bounce rate. Keyword bounce is a valuable measure of engagement which allows for the efficient trimming of fat from your keyword campaigns. Here is how it's done in Omniture's SiteCatalyst suite...

SiteCatalyst allows for 'Custom Events' and eVars in the newly named Conversion third of the reporting suite in version 13.5. You will need three Custom Events (named Total Click Throughs, Click-Past, and Click Through Flag). The first two, on their own, determine the bounce; it takes some inversion of the impression the terms give to understand how. The first event, the click-through (as one part of the Total Click Throughs), is the raw number of entrances collected on a keyword. The second event, the Click-Past (which I think needs to be renamed), produces a trip in the formula showing that, on the occasion within the path that the user or customer came in on this keyword, they clicked beyond this page and deeper into the site. Thus, you have a non-bounce, or an engaged visit. Through a simple operation in a calculated metric, expressed as (1 - [click past] / [total click throughs]), you have a solid formula for bounce which can be used in any keyword report inside the conversion reports (formerly called commerce).

The last event is the Click Through Flag. This is set in place to point out, at a glance, keywords which have produced a number at or above a user-defined threshold of success. In other words, I could say: if a keyword produces at least three non-bounced visits, that shows a certain level of promise which I wish to be informed of. In doing so, you may be able to weed out winners from losers in a quick report.
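Put together, the calculated metric and the flag look like this in plain code. This is a sketch of the formula described above; the event names match the post, but the code itself is mine:

```python
def keyword_bounce_rate(total_click_throughs, click_pasts):
    """Bounce rate as 1 - (Click-Pasts / Total Click Throughs)."""
    if total_click_throughs == 0:
        return 0.0
    return 1 - click_pasts / total_click_throughs

def shows_promise(click_pasts, threshold=3):
    """Sketch of the Click Through Flag idea: enough engaged
    (non-bounced) visits to be worth a closer look."""
    return click_pasts >= threshold

# A keyword with 100 entrances, 40 of which clicked deeper:
print(keyword_bounce_rate(100, 40))  # 0.6
print(shows_promise(40))             # True
```

A 60% bounce on a paid keyword is the kind of number this report is designed to surface before the spend piles up.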

Bonus Materials Packaged with the Click Quality Plug-In
Two additional parts of this operation, one of which I can actually attest to, are called Time Parting and GeoLocation. Time parting is used in an eVar which is attached to the script at the click-through. This gives an indication of which half-hour period the click-throughs occurred in, which is helpful because it creates the ability to more precisely identify whether a keyword has more weight within a certain timeframe. It also helps to flag and rectify click fraud, should that ever occur (jeez, I wonder if that ever happens).
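The half-hour bucketing itself is trivial to reproduce. A sketch of the idea in Python; the label format is my own invention, and the actual plug-in's output may differ:

```python
from datetime import datetime

def half_hour_bucket(dt):
    """Label the half-hour period a click-through occurred in,
    e.g. '14:30-15:00', mimicking a time-parting eVar value."""
    start_minute = 0 if dt.minute < 30 else 30
    end_hour, end_minute = dt.hour, start_minute + 30
    if end_minute == 60:
        end_hour, end_minute = (dt.hour + 1) % 24, 0
    return f"{dt.hour:02d}:{start_minute:02d}-{end_hour:02d}:{end_minute:02d}"

print(half_hour_bucket(datetime(2007, 5, 22, 14, 45)))  # 14:30-15:00
```

Grouping click-throughs by these labels is what lets you see, say, a keyword that only pays off during lunch-hour browsing.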

GeoLocation, I have yet to see work in our conversion suite. When it does work, I'll be happy to share my thoughts on it and what it has provided us with.

If you have an interest in getting this set up in your suites, you should probably contact your rep at Omniture to discuss it. I highly recommend using it, or some similar operation, to get an idea of how engaged incoming searches are. The reasoning is this: if you measure everything by the final conversion result, you become too reactive. It's like going deep-sea fishing and bringing your Ugly Stik and a can of worms. If your scope is too narrow, you'll never be able to adapt.

If you have any comments or questions about what I've discussed, please feel free to contact me. I'll help where I can within a timeframe that allows for my duties to my employer to be fulfilled first.

19 May 2007

Win-Loss Analytics: New or Evolving Practice of Actionable Analytics

Today I took some time to look up what Wikipedia.org had to say about web analytics and where it was drawing its information from. No surprise, the WAA and Mr. Eric T. Peterson were cited on the page. There was some talk about emetrics and the "Hotel Problem," which I thought was interesting but somehow trivial. Then I took some time to read about something called Win/Loss Analytics.

As it turns out, the definition basically states that Win/Loss analytics and analysis is a true single-user path analysis. You look at a single customer experience compared to others and note whether or not the path converted. A conversion is a win; a non-conversion is a loss. I've been interested in this for some time but was unsure how to arrive at a methodology. Using SiteCatalyst, we've been able to look directly at a single path of conversion from the referrer or entry all the way through to the closure of the path, or to the departure post-conversion. It can be a little tricky at times, but the value derived from the information is quite high.

Try it yourself.

Take a solid look at the referring domains into the site through the "Finding Methods" provided in conversion. Then take a look at the same thing in your traffic reports. Match the keywords up to the path, if collected upon loading your site, or available in the URL provided by the traffic report drill-down. Trace the path all the way through to conversion. It will take some time to really refine your technique, and to do some checking to ensure the information you are using is correct, but I have found it very useful for a variety of reasons.

1. It can help improve your keyword campaigns. Granted, it's a little work-intensive, but the value derived over time justifies the effort.
2. You get a picture of where your customers, the ones who buy, come from, how they react to areas within the site, and, now that we have the Forms Analysis, where they bump into trouble along the path.
3. You can compare the conversion structure (I think that's probably a new term) posited by the type of referrer you see producing traffic to the site. For example, you can manually segment the referring domains (which I say only because I haven't found a method yet to automate the process) into a few categories: Primary Search Engines, Sub-Primary Search Engines, Product Feeds, Forums, etc., and measure them against each other. In the end, you can sort these and find out where it makes the most sense to concentrate your efforts.
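The manual segmentation in point 3 is easy to semi-automate with a lookup table. A sketch, where the domain-to-category mapping is entirely hypothetical (your own referrer reports would supply the real list):

```python
from urllib.parse import urlparse

# Hypothetical category mapping; build this from your referrer reports.
CATEGORIES = {
    "google.com": "Primary Search Engines",
    "yahoo.com": "Primary Search Engines",
    "ask.com": "Sub-Primary Search Engines",
    "shopping.com": "Product Feeds",
}

def categorize_referrer(referrer_url):
    """Bucket a referring URL into a campaign category, else 'Other'."""
    host = urlparse(referrer_url).netloc.lower()
    for domain, category in CATEGORIES.items():
        if host == domain or host.endswith("." + domain):
            return category
    return "Other"

print(categorize_referrer("http://www.google.com/search?q=cable+ties"))
```

Tallying wins and losses per category then tells you which referrer classes deserve the effort.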

There's really quite a bit more which can be provided by this. I'm available for discussion as necessary so long as it doesn't interfere with my job functions. Email is best.

Omniture Forms Analysis - Long Road Worth the Journey

As of yesterday evening, CableOrganizer.com is number 441 of the Internet Retailer Top 500 sites. It's pretty exciting for all of us, considering we sell such obscurely niched items. It's taken the gang a lot of work and constant improvement to produce that result, and it's something we are all very proud of.

Now that that's out of my system, I'd like to rant a bit, if I may, about the most exciting recent development in my analytics toolbox. After months of trying to get a handle on the really advanced analysis tools, Sebastien Hoffman, one of our outstanding IT folks, implemented a Forms Analysis tool reporting to our SiteCatalyst on form errors, successes, and abandonment. Actually, in terms of operational flow, it's in the opposite order: abandon, error, success.

What does that mean? you ask....

It means that we can measure the success of our interface with the customer and gauge their experience. We can see where people are consistently having problems in our forms and make changes to reflect the need for ease of use. This starts simply, by looking at the ratio of successes in an area of the form to the number of errors. If for every 10 successes in a portion of the form you have 1 error, that probably isn't significant or having a sincere impact on your conversion. If, on the other hand, your ratio is 4:1 or worse, you may want to run some usability test scripting through that area for the purpose of scientifically replicating the problem. Once you have realized what it may be, whether technical or related to the human experience, you should probably plan on running some A/B or multivariate testing to find out whether other elements could help produce a better ratio of success. (Note: I do not recommend making this your first multivariate test, especially if the form is your billing info form or something of that great an importance to your everyday operations!)
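As a rough illustration of the ratio check above, here is a minimal sketch that flags form fields whose success-to-error ratio falls at or below a threshold. The field names, the counts, and the 4:1 threshold are illustrative assumptions, not output from any actual Forms Analysis report.

```python
def flag_problem_fields(form_stats, threshold=4.0):
    """form_stats: dict of field name -> (successes, errors).
    Flags fields whose success-to-error ratio is at or below threshold."""
    flagged = []
    for field, (successes, errors) in form_stats.items():
        if errors and successes / errors <= threshold:
            flagged.append(field)
    return flagged

# Invented example counts standing in for a Forms Analysis export.
stats = {
    "email": (100, 10),       # 10:1 -- probably fine
    "card_number": (40, 12),  # ~3.3:1 -- worth usability testing
    "zip_code": (80, 25),     # 3.2:1 -- worth usability testing
}
print(flag_problem_fields(stats))  # -> ['card_number', 'zip_code']
```

The flagged fields would be the candidates for usability scripting and, eventually, A/B or multivariate testing.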

17 May 2007

Anil Batra and ZA/AZ Interview Published

This evening, Anil Batra, a blogger and analytics writer from ZA/AZ, published an interview about my job with CO. It's available for reading at http://webanalysis.blogspot.com.

It gives a pretty fair look at the tasks associated with being a web analyst. It also gives a solid explanation of how I ended up turning my career upside down and landing here in the world of statistics. Feel free to contact me with questions or advice on items I have discussed in the blog. I will do my best to help, but cannot advise at the expense of my employer.

Upcoming Buzz in Google Website Optimizer

Good Evening,

As a portion of the people I deal with on a daily basis know, I am a test-happy web analyst for CableOrganizer.com in South Florida, US. I am constantly trying to find new ways to manipulate and blow the gears off existing technology in order to improve it. One of the ways my job allows me to play at this is with Website Optimizer, a free tool offered by Google. It is an analytics tool (which can be attached to all the other Google goodies) for performing multi-variate and split (A/B) testing of page elements.

My initial foray into the multi-variate testing world brought me to this tool and a beta test through Robbin Steif at LunaMetrics in Pittsburgh, PA. Through a series of conversations and e-mail correspondence, Robbin helped me understand the principles and relatively complex math and ideas behind the tool. I couldn't help it: two weeks into the conversations, I started to think about its limitations and what I could do to turn its limitations into our strengths.

With that, I set out with a handful of wrenches and poured them into a high-performance engine (no pun intended). What I came up with is this:

1. Although the tool is built around the idea that tests must run as a full-factorial analysis, when paired with the multi-variable framework available in Omniture's SiteCatalyst, you can develop and test with Taguchi Methods. How, you say? That's a future post, and trust me, it's a scorcher.

2. Proxies can be excellent indicators of conversion. By this I mean using multiple pages tagged as a success metric within the Google Website Optimizer. If you are testing a landing page and you are an eCommerce company, chances are that running a full-factorial test with a single conversion metric page (a success or goal in the Optimizer interface) will limit your ability to test efficiently in one of two ways: A) you can only test limited elements if your page traffic isn't phenomenal, and the test will still take quite some time; B) the effect will not be fully realized unless you can show that your page metric is a true indicator of conversion (so you end up using proxies anyway, so why not stretch the limits and measure the ripple?).

3. While not an ideal tool, it is free, and with a little work it can be manipulated to do nearly anything. You can pass elements of all kinds in and out between the scripts, and the scripts determine the element. Another post on this will be available after we have successfully completed the tests currently slated for CableOrganizer.com this month.

4. Although the test doesn't produce a final result until the clear winner has achieved a certain number of successes and the sample size is statistically valid, if you create a clearly differentiated set of variables, your optimal choice should be clear well before then.
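To make the traffic concern in points 1 and 4 concrete, here is some back-of-the-envelope math on how quickly full-factorial combinations multiply. The elements and variation counts are invented for illustration; the L9 orthogonal array figure mentioned in the comments is standard Taguchi material, not anything specific to the Optimizer.

```python
from itertools import product

# Hypothetical test: 4 page elements, each with 3 variations.
elements = {
    "headline": ["A", "B", "C"],
    "hero_image": ["A", "B", "C"],
    "button_text": ["A", "B", "C"],
    "button_color": ["A", "B", "C"],
}

# A full-factorial design runs every combination of every variation.
full_factorial = list(product(*elements.values()))
print(len(full_factorial))  # 3^4 = 81 recipes to split your traffic across

# By contrast, a Taguchi L9 orthogonal array covers four 3-level factors
# in just 9 runs, which is why pairing the Optimizer with an external
# framework to run a fractional design is attractive on lower-traffic pages.
```

The point is simply that each added element multiplies the recipe count, and the traffic needed for statistical validity grows with it.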

Website Optimizer is great for building a set of internal, personalized best practices you can use to rewrite your important landing pages and make an immediate impact. Using the tool is relatively simple and can certainly benefit anyone who can use it. I would be glad to help anyone interested in making this tool work for their needs, and I can certainly speak as a reference to the competence and skill of Robbin and Taylor at LunaMetrics.

Please feel free to contact me through posting and I can get you information on our methodologies.

15 April 2007

Multi-Variate Testing Info

There are many different methods by which an analytics practitioner can go about making leaps and bounds toward page and site optimization. These include functions at the level of the analyst's own mindset, winning the trust of the design team, and adhering to the principles of science that define our objectives. An analyst must be a business scientist before he or she can claim the title of analyst.

Working with Paul Holstein, I've realized there are a number of things that come in very handy. First, test before you speak. Paul is a man focused on results and high levels of confidence. He does not like to make business decisions based on opinion. So, when you sit down to discuss some solid insights, you'd better know damn well how those insights are valuable, where to apply them, how to measure the results, and what to expect in terms of success. It's my impression that Paul is of the mind that science won't let you down as frequently as opinion will. I can't argue with that. It reminds me of math: no matter how you cut it, math is a subject with no room for debate. You are either right or wrong. There's no gray area, and I like that.

Understanding multi-variate testing in theory is not difficult, but there are a few functions and properties of the testing that must be uncovered and dealt with before making any changes to the site you plan to test. There are a few things you should know.

The first thing to decide is what to test. My suggestion is to create a hierarchy of the pages you feel are most in need of testing, outside of any analysis. The purpose of this is to learn the detachment that must exist. You can list all the pages that really get under your skin, that you hate to look at, and that you feel are just hideously bad pages for your conversion, whatever that might be. (In my case, I've created an algorithmic formula that ranks my pages from best to worst, taking all my most important KPIs into account with a single sort.) After you've done this, run some reports to determine which pages have the highest bounce rates, the most traffic, and the least importance within the conversion path. Sort them in descending order by traffic. Within the first ten or fifteen pages, you'll see where you need to start making changes. Keep in mind, if this is your first attempt at such a test, you will need to avoid making changes to functional pages and pages central to the operation of the site as a whole (i.e. the home page, newsletter forms, the shopping cart or checkout, or even internal search pages), as a single adversely performing test combination there may actually cost you a great deal.
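The report-sorting step above can be sketched roughly as follows. The page records and the exclusion list are made-up examples; in practice the data would come from your bounce-rate and traffic reports.

```python
# Pages central to site operation, excluded from a first test per the advice above.
EXCLUDE = {"/", "/checkout", "/cart", "/search", "/newsletter"}

# Invented report rows: url, visits, bounce rate.
pages = [
    {"url": "/products/cable-ties", "visits": 12000, "bounce_rate": 0.62},
    {"url": "/",                    "visits": 50000, "bounce_rate": 0.30},
    {"url": "/articles/wire-guide", "visits": 8000,  "bounce_rate": 0.71},
    {"url": "/checkout",            "visits": 9000,  "bounce_rate": 0.15},
]

# Drop operationally critical pages, then sort descending by traffic.
candidates = [p for p in pages if p["url"] not in EXCLUDE]
candidates.sort(key=lambda p: p["visits"], reverse=True)

for p in candidates:
    print(p["url"], p["visits"], p["bounce_rate"])
```

The top ten or fifteen rows of a list like this are where the testing effort should start.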

Compare the list you made beforehand to the results of the traffic-conversion analysis. Your results may or may not be very interesting, but if the two lists differ, it can be a real eye-opening experience. Consistently, when I present the list to the boss and the design team, I get interesting reactions. Often they confuse aesthetics with functionality and vice versa. This has to be a very common problem, as it would be in any internal debate between the two hemispheres of the human brain.

After you've chosen a page to examine and test, keep in mind that there are some things you can change and some things you can't. In another blog, or possibly the Yahoo! web analytics forum, Paul produced a list of the elements that will have the greatest impact on conversion. When preparing for a multi-variate test, it's probably a good idea to print it out and keep it close by as you draw up your methodology. Since this is my first official post, I think I'll cut it here and hope that the response shows enough demand for me to continue this blog.