This appears to be an area where I'm finding new and exciting things to discuss with regard to running multivariate tests on page elements within a site. In the spirit of Google, its pursuit of excellence in business and academia, and its willingness to let us push past the limitations we impose on ourselves, I've decided to open a post inviting people to respond with the obstacles to testing they have conceived of, the ones which have prevented them from executing a test in GWO. Here's how it works:
You tell me the situation: You, your boss, or some smart-(insert posterior here) in your office has created a mental block to performing a high-level and very useful multivariate or A/B test which, if run, could produce a multitude of insights, win you office praise, and put the cherry on your bonus. You have Google Website Optimizer and a thorough understanding of the major points which support your ability to test. If only this one thing, this tiny script or tag, or placement, or file type weren't there, you would be all set. What is that thing, that Bucky Dent of an item, standing between you and your intracubicle ticker-tape parade?
We Take a Look: You, me and the boss... my boss. We sit down and examine the test. In doing so, we carefully look into your problem and recreate a version of it outside of your live site. Then we go to work building the solution necessary for you to get exactly what you want out of the test without a serious loss of resources, namely time and money.
We Form A Plan: Once we've come up with some theoretical work-arounds for the problems we plan on eliminating, we'll implement them in a basic operational test. At this point, we'll also choose the appropriate place to collect success (thank-you page, cart addition, proxy page, etc.). Let 'em rip without a serious impact on your live traffic or its variations. This will cost you a spot on your Google Website Optimizer list, which means nothing and costs less. But it will confuse you if you go back and wonder why you have multiple tests with the same name. Be ready.
Launch the Test: After the trial runs produce acceptable results, we'll launch the fully-prepared test. This means presenting your multivariate test to the thousands of visitors who will (or won't) react to it, bringing valuable stats back to your interface. After a few days, we'll check in to make sure you're getting the right stats and collecting what you need.
Allow for Completion: As a rule, we try not to look at the progress of these tests every day. We suggest the same, simply for the preservation of sanity. Watching the way these items interact, change and progress can be like watching a horserace on ultra-high-speed film. By that I mean the suspense is terrible, and it goes on forever. You start to formulate hypotheses about events which aren't really occurring, along with a whole slew of other non-useful behaviors.
Analyze Results: Take a good look at the outcome. Go back and verify your files to ensure that you received all the collection tags as they were executed. Then use your information to design a follow-up experiment. Avinash implies in his book that you can narrow the field earlier than the completion of the test, but for our methodology we will not. Take the top five variations and a 'dog' or two, and put them out again to verify the results.
HERE'S THE CATCH
Something for nothing is a great idea at family retreats and picnics in the park, but it doesn't work that way here. So, for the effort of putting together a test and its strategy, we're going to ask you for two things which, considering the payoff, may be extremely cheap for the return and praise you could receive. We'll discuss it when you have a chance to write me on the blog or in the Yahoo! Group.
22 August 2007
11 August 2007
Update: Call-To-Action on Button Color Outcome
Quite a few weeks back, I wrote about the colors of buttons and their ability to appeal to buyers' impulses on an ecommerce site. Since then, I've placed 8 different buttons in two element areas on a deep-path page of the CableOrganizer site for testing. Using both Omniture and Google Website Optimizer, I was able to produce a positive correlation between our 'View Pricing' button on our Product Detail pages and sales conversion (represented by a proxy: the addition of an item to the shopping cart). The process is as follows:


Having received significant data from the test in both Omniture and Google Website Optimizer, I'm confident that the results I can see now reflect a true correlational effect on our conversion. The really funny thing is, now that I see this occurring with such high specificity to this page, and to the context of the images and colors contained within it, I am met with more perplexing questions. These are the kind that appeal to my interest in how people consume visual cues, much as they would the composition of a painting or image. Shapes and colors play off one another. Does the fact that the color dominating the top button is present elsewhere on the page, with the button in the centermost position of that color, mean I may have to find the individually best color for each button on the page? If so, is there some concentric model which can help explain how shades and intensities of colors create visibility, appeal and action? I'd be very interested to hear any explanations on this topic, or tests which have been performed with regard to it.
As usual, I love to spend time thinking about and discussing advanced principles in testing, analytics, optimization and, well, anything about this field. If anyone has any ideas or suggestions, please feel free to contact me. I have been very busy for the past five weeks, but I think my posting may become more frequent for at least the next couple months. If I don't get back to you right away, it only indicates I am busy trying to keep my job.

- Using whichever metrics you choose to determine a page which can yield good results quickly, choose the page on which you expect testing to have the greatest impact. (We actually use a few calculated metrics to determine these. They include primary content composition (the percentage of total entries for which this page is the landing page within the context of the path), bounce rate, and a highly complicated weighting algorithm (developed out of the idea of proxy measurements) to determine the importance of each page, within its context and in relation to its path, in the sequence of conversion.)
- When the page is chosen, set up the testing scenario in Google Website Optimizer for the purpose of a multivariate test on chosen elements. Ensure that all the tagging is done and there are no overlapping elements, tables, or div tags between the script/noscript tags.
- Create multiple elements for each of these sections. My suggestion is to always include at least one or two 'crummy' variations on the elements. That way, you can verify their relevance rating in the process. It may lengthen the test, but it will also strengthen the output.
- Using Omniture, or whichever provider you have, place a 'Custom Link' inside each of the element tags you hope to verify. Make sure each 'Custom Link' is unique so that when each is clicked, they report independently of one another. For our purposes, we named each with a color combination so as to immediately give us an idea of what was happening. (Also, make sure you place these in the appropriate report suite or ASI segment as necessary. Internal traffic could influence these if you are not careful.)
- Set your tests up completely and hash-out the previews before launching. Look at how they appear in different browser sizes. Know what your users will be seeing.
- Lastly, set your cookies to not expire for an extended period. Make this period equal to the average amount of time your customers take before making a purchase from the page you are testing. Remember that Google expresses this value in seconds (e.g. 3 days equals 259,200 seconds).
Once you've ensured that all of this will work, you can feel comfortable letting the test loose.
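On the cookie-expiry arithmetic in that last step, here is a quick sketch of the conversion from days to the seconds Google expects. The helper name is mine, not part of the GWO setup itself:

```javascript
// Convert a purchase-consideration window in days to the seconds
// value Google Website Optimizer expects for its cookie timeout.
// (daysToSeconds is a hypothetical helper for illustration only.)
function daysToSeconds(days) {
  return days * 24 * 60 * 60;
}

// e.g. if customers typically take 3 days to buy from the tested page:
console.log(daysToSeconds(3)); // 259200, matching the example above
```

Measure your actual average time-to-purchase first; guessing at the window defeats the purpose of the step.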

12 July 2007
Some Tips and Tricks for Omniture Custom Link Tracking
Like anything else with Omniture, it seems that one of its most valuable features, the Custom Link tracking available in the traffic reports, is poorly explained and, consequently, difficult for people to understand or appropriately implement. Here I have compiled a list of ideas and tricks not only to help you install your own custom links, but to give you a hand making them readable, parsable, dynamic and useful. In some instances, like those related to on-site search, these may provide more insights than your company will know what to do with; in other instances, the simplicity alone may astound you. Read on, and feel free to post any questions or comments to aid in the discussion.
Custom Link tracking is the installation of a small event call on certain actionable areas of your website. Some readers may not be using Omniture SiteCatalyst, in which case, ask your provider which method to use to gather this particular data. You can create a tag which calls a script based on the event you wish to collect on. By default, the documentation explains this should take place with the onclick action. This works, but often we find that people aren't clicking the button at all; hitting the 'Enter' key makes more sense to them, so onclick alone under-reports. To combat this, try using onsubmit, or some other action of your choice, which can provide a more complete picture of the use of these links or areas of the site.
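As a sketch of the idea: the `s` object below stands in for the SiteCatalyst tracking object your pages already load, and `s.tl()` is the real Custom Link call, but here it is stubbed to just record its arguments so the wiring is visible. The handler and link names are illustrative, not our production code:

```javascript
// Illustrative sketch: fire the Custom Link on form submit rather than
// on the button's click alone, so 'Enter'-key submits are counted too.
// The stub below records link names instead of sending a real beacon.
var s = {
  links: [],
  tl: function (linkObj, linkType, linkName) {
    // linkType 'o' marks a generic Custom Link in SiteCatalyst
    this.links.push(linkName);
  }
};

// Hooked to the form's submit, this captures both mouse clicks on the
// search button and 'Enter' presses in the search box:
function onSearchSubmit() {
  s.tl(true, 'o', 'Site Search Submit');
  return true; // let the form submission continue
}

// In markup this would hang off <form onsubmit="return onSearchSubmit()">.
onSearchSubmit(); // simulate a user hitting Enter in the search box
console.log(s.links); // ['Site Search Submit']
```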
Using this method, CableOrganizer.com has been able to pull off a few neat stunts.
First, we know how many people are using our on-site search in comparison to what our search provider is telling us. This is important because of all our features on the site, the search is truly the single thing which ties directly to our conversion. The better our search and the more visible, the more people will use it. The more people use it, the more information available to draw from and increase activity of the learning algorithm tied into it. The better the results, the more likely people are to click and buy.
Second, we are able to gather information about the navigational habits of people using the search. Our IT department built a search-aid solution: a real-time search suggestion application which pops out from the search box based on user input. By attaching a Custom Link to the application, we can actually determine when people use the suggestions, what they chose, and how they executed the action (either by a mouse click or by hitting the 'Enter' key). This provides us with information about what people are searching for, how often they are searching, what the habits of the majority are, and so on. It's a wonderful thing. We've used it to rethink how our navigation and our buttons should work throughout the site, as well as to groom some of our keyword strategies.
Lastly, and while this may seem complicated, I assure you it's simple enough and valuable: we used custom links in the variations throughout our multivariate testing. When setting up our most recent multivariate test in Google Website Optimizer, I took some extra time and care to tag each of the variation areas with separate, individual Custom Links. While the test is running, I can not only gain insight into the best possible combination as provided by the Google interface, but I can also understand how individual elements are performing their tasks. For instance, in this case, I have 6 buttons which each have a reason and strategy associated with them. Each of these buttons is passed into the page by the Website Optimizer, and along with the button, a little Custom Link tag is passed in as well. While the Website Optimizer is using 'Add to Cart' as the action representing conversion, clicks on the individual buttons report information to me on the particular visibility of each version. This gives me a more complete picture of what the impact of the button is and how it relates to the success of the page. I honestly can't wait to answer the question: "Does performing action A correlate to the outcome?"
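A minimal sketch of the per-variation naming scheme follows. The variation names, the 'View Pricing' prefix, and the stubbed `s` object are illustrative stand-ins for the real SiteCatalyst setup; the point is only that each button variation reports under its own unique Custom Link name:

```javascript
// Sketch: give each GWO button variation its own unique Custom Link
// name so clicks on each report independently in SiteCatalyst.
// The stub tallies calls by name instead of sending real beacons.
var s = {
  clicks: {},
  tl: function (linkObj, linkType, linkName) {
    this.clicks[linkName] = (this.clicks[linkName] || 0) + 1;
  }
};

// Each variation passed in by the Website Optimizer would carry a
// handler like this; the color-coded names are examples, not ours.
function makeButtonHandler(variationName) {
  return function () {
    s.tl(true, 'o', 'View Pricing - ' + variationName);
  };
}

var orangeClick = makeButtonHandler('Orange');
var redClick = makeButtonHandler('Red');

// Simulate a few clicks on two variations:
orangeClick(); orangeClick(); redClick();
console.log(s.clicks);
// { 'View Pricing - Orange': 2, 'View Pricing - Red': 1 }
```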
By themselves, Custom Links are valuable without all the tricks and fancy dressing. If for no other reason, they help the analyst understand BEHAVIOR. That's really all that matters. If you can't boil out some idea of what the majority of your users are doing, there is no use being an analyst. Without that piece, you might as well make decisions based on the Magic 8 Ball. For me, for us and for the science, there is no room for speculation.
As is regular and customary, if you feel like you have something to add, or you have a question, you can feel free to contact me or post here. I will get to it and answer when time allows.
16 June 2007
Call to Action: Hierarchical Cruciality in Colors & Their Intensity
Throughout the experiments and tests I have been running recently, a notion occurred to me for which I have found it difficult to uncover any existing, definitive, supported best practices. This notion, which for lack of formal acknowledgement I will call hierarchical cruciality, holds that people respond to calls to action based on their position within a path to conversion, with a correlative level of intensity in the calls. To state it more clearly: I mean that button colors and their intensity should be chosen based on their linear point within a funnel to conversion. It is my theory that not only are color and intensity important, but that path-relative color and intensity, tuned at the right frequency, may, at the outcome of my experiment, lead to substantial increases in feature-specific use as well as conversion.
I recently read a blog by Jonathan Mendez, I believe it's called Optimize & Prophesize, located at http://www.optimizeandprophesize.com, which I found somewhat insightful in my quest to uncover current best practices on this topic. In it, Mendez said his multivariate tests had shown that using a single button or color for buttons and calls to action across a site was flawed. His testing, he asserted, uncovered the fact that people in different paths react differently to the same stimuli. He listed several factors, with respect to buttons and colors specifically, that lent themselves to the optimization of certain funnels. I thought this was brilliant. Although a little intensive to implement, I think his findings are probably very accurate.
In the case of CableOrganizer.com, pathing reports consistently yield statistical data telling me that paths are as unique as the individuals who click through them. To assume that each of these individual paths and significant funnels reacts to the same cues is naive. However, being able to predict the behaviors and adjust to them with the appropriate cues is inherently very difficult. So the task becomes a matter of testing the most significant and valuable path and optimizing it with the best variations for a maximized outcome.
As a subcomponent of my theory, I would like to state that orange is the 'new red'. I think red is a very action-evoking color, don't get me wrong. However, I would be willing to assert that the color red, while valuable in its ability to get the user's attention, is not a singularly powerful cue. Further, as many have noted, it carries negative connotations which repel as often as they entice. Red is the color of debt; imagine that thought as you plan on using your high-interest credit card. It is also the color of danger and several hundred other apprehension-eliciting sentiments. Orange, a vibrant alternative, has less volatile implications and can be very visible in digital media. Of course, any good analyst at this point should be saying: prove it.
CableOrganizer.com recently implemented an on-site, action-prompted minicart. The packaged button for this cart was a red image with a size 10 font which said "checkout". It was nearly invisible, and somewhat repulsive, when the cart came up after adding an item. Immediately, I thought back to our usability tests and the functional invisibility of our shopping cart. Previously we had used a yellow button. That stunk. Then we used this red one, which was also visibility-impaired. So I created a sizeable orange version. The outcome? Suffice it to say that, having collected custom link data for more than a few days, more than 40% of the people exposed to the button click on it. My estimation is that, once significant data can be collected on this particular instance, this change will have positively impacted conversion by as much as 10%. In our context that means going from 2.7% conversion to 2.97% conversion, but over the course of a year or more, that is significant.
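To be explicit about that arithmetic, the 10% figure is a relative lift, applied to the existing rate rather than added to it:

```javascript
// A 10% relative lift applied to a 2.7% baseline conversion rate:
var baseline = 2.7;       // conversion rate, in percent
var relativeLift = 0.10;  // 10% relative improvement
var projected = baseline * (1 + relativeLift);
console.log(projected.toFixed(2) + '%'); // 2.97%
```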
My goal now is to uncover a comprehensive color-based mapping of behavior which could be implemented on our site, meeting the de facto policy of CableOrganizer.com with a sophisticated method of presenting action elements which are ideal for their position within the funnel while enhancing the user experience. Doing so should yield significant positive gains, plus a cross-section of business-to-business customer behaviors which could be applied across the industry in best-practices models. Further, it may provide insights into human and user behaviors online which would be applicable across cultures and industries.
I would like to ask anyone with interest in this particular study to submit any research, comments, observations, or experiences through this blog or via email, to aid in my preparation of elements and to make all possible considerations. While this is work related, it may demand a significant amount of dedicated time which my employment may or may not make allowance for. If any person is willing to provide elements or ideas in a pro bono fashion, please also submit those by these means.
19 May 2007
Omniture Forms Analysis - Long Road Worth the Journey
As of yesterday evening, CableOrganizer.com is number 441 on the Internet Retailer Top 500 list. It's pretty exciting for all of us, considering we sell such obscurely niched items. It's taken the gang a lot of work and constant improvement to produce that result, and it's something we are all very proud of.
Now that that's out of my system, I'd like to rant a bit, if I may, about the most exciting recent development in my analytics toolbox. After months of trying to get a handle on the really advanced analysis tools, Sebastien Hoffman, one of our outstanding IT folks, implemented a Forms Analysis tool which reports form errors, successes and abandonment to our SiteCatalyst. Actually, in terms of its operational flow, the order is reversed: abandon, error, success.

What does that mean, you ask?
It means that we can measure the success of our interface with the customer and gauge their experience. We can see where people are consistently having problems in our forms, and make changes to reflect the need for ease of use. This is done, first and most simply, by looking at the ratio of successes in an area to the number of errors. If for every 10 successes in a portion of the form you have 1 error, that probably isn't significant or having a serious impact on your conversion. If, on the other hand, your success-to-error ratio is 4:1 or worse, you may want to run some usability test scripting through that area for the purpose of scientifically replicating the problem. Once you have figured out what it is, whether technical or human-experience related, you should probably plan on running some A/B or multivariate testing to find out whether other elements could help produce a better ratio of success. (Note: I do not recommend making this your first multivariate test, especially if the form is your billing info form or something of that great importance to your everyday operations!)
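Under that 4:1 reading, the triage might look like the sketch below. The field names, counts, and threshold wiring are my own illustration; in practice the numbers would come from your SiteCatalyst form reports:

```javascript
// Illustrative triage of Forms Analysis data: flag any form field
// whose success-to-error ratio is at or below a review threshold.
// Field names and counts are made up for the example.
function fieldsNeedingReview(fields, threshold) {
  return fields
    .filter(function (f) {
      return f.errors > 0 && f.successes / f.errors <= threshold;
    })
    .map(function (f) { return f.name; });
}

var formStats = [
  { name: 'email',       successes: 100, errors: 10 }, // 10:1 - fine
  { name: 'card-number', successes: 40,  errors: 12 }, // ~3.3:1 - review
  { name: 'zip',         successes: 80,  errors: 20 }  // 4:1 - review
];

console.log(fieldsNeedingReview(formStats, 4)); // [ 'card-number', 'zip' ]
```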