31 May 2007

Expanding on Taguchi Methods in Google Website Optimizer

I've hypothesized for quite some time that running a Taguchi fractional-factorial test in Google Website Optimizer was not only possible, but beneficial for the time it saves in creating and managing several multi-variate tests. Having taken the time to do the research and the implementation, I am very proud that CableOrganizer.com is currently running exactly that type of test with the Google tool, in a fraction of the time of a full factorial test and at a fraction of the cost of the "professional" tests out there.

Here's how: Google Website Optimizer uses scripts with identifiers to "split" areas of the pages being tested. Essentially, those scripts define everything that is included in a tested element. Because the boundaries are completely determined by the user, there is an opportunity to build on what Google has given us and bend it to our uses. What we did was create a single tested element that contained 7 individual sub-elements. These were based on the series of elements Paul has mentioned in several posts on the Yahoo! group, somewhere in this catalog: http://tech.groups.yahoo.com/group/webanalytics/.

For this test, by the way, we used the MultiVariate setup calculator in SiteCatalyst to develop the Taguchi recipes.

So, yep: 7 elements with two variations each, plus a control running alongside on a partition of the traffic. The modeling calculator above lists 16 recipes to be run against the control, instead of the 128 combinations (2^7) that the full factorial methodology would require.
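The run-count savings come from using an orthogonal array instead of the full cross-product of variations. As a rough illustration (this is a generic 16-run construction, not the actual output of the SiteCatalyst calculator), a two-level orthogonal array covering 7 factors in 16 runs can be built like this:

```python
from itertools import combinations

def l16_design(n_factors=7):
    """Build a 16-run, two-level orthogonal array for up to 15 factors.

    Each factor is assigned a distinct nonzero 4-bit mask; the level of
    a factor in run r is the parity of the bits shared by r and its mask.
    """
    masks = list(range(1, 16))[:n_factors]
    runs = []
    for r in range(16):
        runs.append([bin(r & m).count("1") % 2 for m in masks])
    return runs

design = l16_design(7)

# Orthogonality check: every pair of columns shows each (level, level)
# combination equally often (16 runs / 4 combinations = 4 times each).
for c1, c2 in combinations(range(7), 2):
    pairs = [(row[c1], row[c2]) for row in design]
    assert all(pairs.count(p) == 4 for p in [(0, 0), (0, 1), (1, 0), (1, 1)])

print(len(design))  # 16 recipes instead of 2**7 = 128
```

Each row is one recipe; the balance check confirms every pair of elements is seen in all four level combinations equally often, which is what lets the main effects be estimated from only 16 runs.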

From there, I went about creating the elements that needed to be developed and laid out for each of the pages. I created 9 images to take the place of the three in the original, plus 6 to test how the number of images impacts performance. It took some fooling around to get it to work right, but I was able to build 16 unique recipes in Dreamweaver, naming each variation in accordance with the Taguchi output from SiteCatalyst.

One by one, I took each code snippet out of the Dreamweaver code view and placed it in the corresponding named variation inside Google Website Optimizer. When I was finished, that made 17 variations of the same 'area' of the site. I made sure to check different screen resolutions to see how the layout would be affected for certain viewers, looked through our two most commonly reported browsers (MSIE and Firefox), and clicked the last things into place. By 4am I was sick of looking at the page and tired of adjusting tables and alignment, but I got it right and hit the button. Having allotted 50% of the traffic to the test, it was neat to finally see the numbers start rolling in a couple hours later over coffee at the office.
According to the Google AdWords Testing Calculator (available at https://www.google.com/analytics/siteopt/help/calculator.html), a tool brought to my attention by Robbin Steif of LunaMetrics when she was training me on multi-variate testing and conversion science, this test will collect and distribute valid results within a week's time, based on the inputs and the way it calculates proxies internally. As a safety measure, and to reduce the level of doubt, we'll let it run all the way through, but it will still take two weeks at most to produce a final result. That is a great thing for a small or medium business with moderate to light traffic. If for no other reason, it helps level the playing field with companies that have tremendous traffic and the luxury of affording an outside agency to perform these experiments.
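For a back-of-envelope sense of why a week is plausible (this is my own rough arithmetic, not the calculator's internal method, and the traffic and conversion figures below are made up for illustration):

```python
def estimate_days(daily_visitors, test_share, n_recipes,
                  baseline_rate, conversions_per_recipe=100):
    """Rough test-duration estimate (not Google's internal formula).

    Assumes traffic is split evenly across recipes and that each recipe
    needs a target number of conversions before a call can be made.
    """
    visitors_per_recipe_per_day = daily_visitors * test_share / n_recipes
    conversions_per_day = visitors_per_recipe_per_day * baseline_rate
    return conversions_per_recipe / conversions_per_day

# e.g. 20,000 daily visits, 50% in the test, 17 recipes, 2% conversion:
days = estimate_days(20000, 0.5, 17, 0.02)  # about 8.5 days
```

Halving the traffic share or the conversion rate doubles the estimate, which is why a fractional design with fewer recipes matters so much on a moderate-traffic site.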
Maybe you wonder what the value is in something like this, besides the ability to say "we did that," or why it even matters. That's a fair question. For me, it was necessary because we need to get value out of everything we do. To test how product descriptions were presented on the earlier pages of our navigation paths, I had to be able to test and adjust several elements at the same time without losing customers to a page that was schizophrenic in its presentation. You see, you can't test things like that with full-factorial analysis on, say, a category page or an intermediary between landing page and goal. It requires control over several repetitive elements simultaneously, which meant the Google tool was a freebie we'd have to pass on unless we could make it work. Having said that, I'm thrilled I gave this a try; the sense of satisfaction is well worth the toil that went into mapping and preparing this highly complicated test.
If you have questions or require more information on this subject, I would be glad to help if you would be so kind as to send me an email. I will respond within a reasonable time frame, so long as doing so doesn't place me in a compromised position with how I spend my time on the clock.
Be sure to check out the site too. CableOrganizer.com is always working on some wild experiments and testing out new uses for analytics principles. Maybe you could get some good ideas.

Taguchi Multi-Variate Test in Google Website Optimizer

This morning at 4:11am I completed the final work on, tested for browser compatibility, and launched a Taguchi-based multi-variate test using Google's Website Optimizer.

As I begin to see results and have valid data to analyze, I will share the findings with the realm of practicing analysts. Just a note: while the test itself should take less than 10% of the time of a full factorial analysis, it was very difficult to set up.

In order to perform a Taguchi test, you have to actually parse entire areas of the pages you are testing as single elements and maintain the integrity of their HTML when passing all the variables into GWO. A thorough tutorial and explanation will be provided here and through some work being compiled by Andrew King.

I'm exhausted, but the methodology merits the effort.

27 May 2007

Crystal Xcelsius Rocks

Upon returning from a recent analytics event, I took one valuable piece of information with me which I wish to share. During one presentation, an individual was describing a way to win internal support for analytics and communicate its value to the company through visual and dynamic means. Throughout the presentation, though I know the presenter worked for another company, much ado was made about a piece of software called Crystal Xcelsius.

Crystal Xcelsius is a data-driven application-building tool that makes use of the same principles that drive charts inside Microsoft Excel: it uses cell ranges and attaches visual cues to numerical data. Comparing how the charts and cues look in Excel with what Xcelsius outputs, it seems like alien technology got its start at Business Objects. That alone is good, helpful, and attractive, and probably worth the $300 price tag on its own. But it is only the beginning of what an analyst is capable of with the software.

Xcelsius has taken me a couple days to figure out, and there are certainly things I haven't explored completely, but it's astonishing to uncover the value of the tool. Within a week's time, I will have created a full-scale, completely operable tool reporting on all aspects of business at our offices. I've been able to integrate data from numerous sources and measure performance on each of them in a single interface for the stakeholders, driven by subordinate interfaces through which other managers and reporting parties can input their data with relative ease.

I'm able to report on traffic and commerce KPIs, SEO and SEM, internal search performance, key page performance, marketing, and other online metrics, while simultaneously producing relevant reporting on things like our customer service and non-internet performance without skipping a beat or opening new software. Further, I can integrate and compare our analytics tools' own performance to determine where collection may be over-counted or not populating correctly. The best part, and I mean this as seriously as sincerely, is that the most annoying part of reporting, the collection and input, is done a little bit each day by the responsible parties and gathered in the tool for analysis. I can see and analyze all my data in one place and push it along with these wild graphics.

I'll post screenshots and 'snags' when I get a chance. If you have any tips or experience with Crystal Xcelsius, or questions on how to perform what I have described, please feel free to email me or leave a comment. When I'm off the clock or have a moment to respond, I will, with any information I can provide.

26 May 2007

Usability Testing Driving Office Unity

Good Morning...

I know a great number of people tout the benefits of having Usability Testing performed on their site as part of their optimization efforts. Most of these include things like uncovering how real people are using cues to navigate your site, or the web in general, or how your site meets up with expectations. Little is ever mentioned about the really incredible socially fusing properties for diverse disciplines within the workplace.

We perform, on average, 4-6 usability tests per month for the sites I am currently analyzing; every other week we schedule and perform three tests. To do this, my employer set out a budget and some space where we took the time to set up a Usability Testing Lab. This is an office in which we created a sense of comfort with some light colors and more home-styled office furniture and ambience. We added lighting to offset the typical fluorescent setup and put some plants and artwork in there for added flavor. Lastly, we installed software called Morae (product review available soon through this blog), and with a Logitech camera/microphone, we're off and running.

Well, first, let me say that in terms of return on the investment, usability testing is one of the more immediately actionable tests, with a sudden measurable lift and a long tail. In other words, it provides key insights into all the major areas of the site that contribute to conversion. It can be really exciting, and really scary at times, to see the types of things you may have overlooked. Ultimately, no matter what, performing scientific usability testing is worth its cost many times over. There are also added-value items I never expected.

As our tests approach every couple weeks, you can catch the buzz in the office. People know they are coming up and are constantly asking me who is testing, what we're looking at, how their ideas are working out... it's really neat. Then the Thursday morning of the test comes, and all of the departments can opt to observe. Offices in every corner of the building get crammed with people watching the test. We script out the first couple minutes and then, extemporaneously, follow the lead of the subject. After about 30-40 minutes, we shore up the loose ends and close the connection. By the time the Manager files are converted, we have a time set up when the observers and the stakeholders can get together in an office to pool our thoughts and share ideas for improving the experience.

I kid you not, I've suggested creating highlight reels and getting the whole of the local staff together just to hang out and watch these videos. I also create parsed videos for analyzing multiple experiences, so that specific departments can deal with particular elements of the design or navigation, or further analyze sections provided by outsourced agencies. People in the office have actually come to me asking to burn usability DVDs so they can take a look at home and jot down some additional thoughts.

I understand that there are investment considerations in the ability to perform usability testing. In all, the cost of the software, the labor to script and develop the tests, the PC, and the materials and labor to prepare a proper setting is probably about $5,000.00 or just under. A steady picture of the problems hindering your conversion is worth that cost in a single test. If the lifetime value of a customer is impacted by a single sale through ease of navigation or the implementation of new, more relevant images or cues, you'll at least triple your investment with the first changes you make. (While I say "triple" figuratively, I may be completely underestimating the ROI, only because I'm trying to stay casual and effective in communicating the point.)

If you would like to, please contact me and we can discuss Usability Testing or any analysis issue you may have, at length, so long as it does not inhibit my ability to perform for my employer.

22 May 2007

In Case Anyone At Omniture is Reading

One thing which I would really like to see in SiteCatalyst which would help me save time in generating reports, is the ability to use a 'Manual Update' feature like that which is available in Discover 1.5. Think about it.

The site is so full of people generating reports with exhaustive data, day and night, that when I want to get directly to a report I have not yet bookmarked or set into my dashboards, I have to actually go to that report, adjust the date accordingly, then search and drill down as necessary. Essentially, that creates a series of reports, each based on the first one. What I would like is to make a request with all my criteria, hit update, and produce a single report with exactly what I want.

So, by producing that single-button capability, you could reduce ad hoc report requests on the site by at least a third. My guess is that would make data processing much faster and keep more clients analyzing and reaping the benefits instead of fighting your software and getting frustrated. Granted, this is not an easy thing to win support for, but my guess is it would be the single most useful function for the advanced analyst who has become familiar with the tools.

Keyword Bounce Rate Through Omniture SiteCatalyst

If, like just about everyone else I've spoken to, a large part of your site strategy is built around keywords and some search engine marketing, you may want to take some time to think about measuring your keywords by their bounce rate. Keyword bounce is a valuable measure of engagement which allows for efficient trimming of the fat from your keyword campaigns. Here is how it's done in Omniture's SiteCatalyst suite...

SiteCatalyst allows for 'Custom Events' and eVars in the newly named Conversion third of the reporting suite in version 13.5. You will need three Custom Events, named Total Click Throughs, Click-Past, and Click Through Flag. These, on their own, determine the bounce, though it takes some inverting of the impression the terms give to understand how. The first event, the click-through (one part of Total Click Throughs), is the raw number of entrances collected on a keyword. The second event, Click-Past (which I think needs renaming), trips when, on the occasion that a user or customer came in on this keyword, they clicked beyond the landing page and deeper into the site: a non-bounce, or an engaged visit. Through a simple calculated metric, expressed as (1 - [Click-Past] / [Total Click Throughs]), you have a solid formula for bounce which can be used in any keyword report inside the Conversion reports (formerly called Commerce).

The last event is the Click Through Flag. This is set in place to point out, at a glance, keywords that have produced a number of engaged visits at or above a user-defined threshold of success. In other words, I could say: if a keyword produces at least three non-bounced visits, that shows a certain level of promise I wish to be informed of. In doing so, you may be able to weed out winners from losers in a quick report.
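Outside of SiteCatalyst, the calculated metric and the flag logic together look something like this sketch (the event names follow the post; the sample keywords, counts, and threshold are illustrative):

```python
def keyword_report(stats, flag_threshold=3):
    """Compute bounce rate per keyword from the two custom events.

    stats maps keyword -> (total_click_throughs, click_pasts); the
    calculated metric is 1 - click_pasts / total_click_throughs.
    A keyword is flagged once its engaged (non-bounced) visits reach
    the user-defined threshold.
    """
    report = {}
    for kw, (total, click_past) in stats.items():
        bounce = 1 - click_past / total if total else 0.0
        report[kw] = {"bounce_rate": bounce,
                      "flagged": click_past >= flag_threshold}
    return report

stats = {"cable ties": (200, 150), "wire loom": (40, 2)}
report = keyword_report(stats)
# "cable ties" bounces 25% of the time and is flagged;
# "wire loom" bounces 95% of the time and is not
```

Sorting that report by bounce rate is exactly the "trimming the fat" view: high-bounce, unflagged keywords are the first candidates to cut from a paid campaign.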

Bonus Materials Packaged with the Click Quality Plug-In
Two additional parts of this operation, one of which I can actually attest to, are called Time Parting and GeoLocation. Time parting is captured in an eVar attached to the script at the click-through, indicating which half-hour period the click-throughs occur in. This is helpful because it lets you more precisely identify whether a keyword has more weight within a certain timeframe. It also helps to flag and rectify click fraud, should that ever occur (jeez, I wonder if that ever happens).
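As a sketch of the time-parting value (the real plug-in's label format may differ), bucketing a hit into its half-hour period is just:

```python
from datetime import datetime

def half_hour_bucket(ts):
    """Label a hit with its half-hour period, e.g. '14:00-14:30'.

    A sketch of the time-parting value the plug-in might store in an
    eVar at click-through time; the format here is an assumption.
    """
    start_min = 0 if ts.minute < 30 else 30
    end_h, end_min = (ts.hour, 30) if start_min == 0 else ((ts.hour + 1) % 24, 0)
    return f"{ts.hour:02d}:{start_min:02d}-{end_h:02d}:{end_min:02d}"

half_hour_bucket(datetime(2007, 5, 22, 14, 45))  # '14:30-15:00'
```

With 48 such buckets per day, a keyword whose click-throughs pile up in a handful of periods stands out quickly, which is also what makes click-fraud bursts visible.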

GeoLocation, I have yet to see work in our conversion suite. When it does work, I'll be happy to share my thoughts on it and what it has provided us with.

If you have an interest in getting this set up in your suites, you should probably contact your rep at Omniture to discuss it. I highly recommend using it, or some similar operation, to get an idea of how engaged incoming searches are. The reasoning is this: if you measure everything by the final conversion result, you become too reactive. It's like going deep-sea fishing with your Ugly Stik and a can of worms. If your scope is too narrow, you'll never be able to adapt.

If you have any comments or questions about what I've discussed, please feel free to contact me. I'll help where I can within a timeframe that allows for my duties to my employer to be fulfilled first.

19 May 2007

Win-Loss Analytics: New or Evolving Practice of Actionable Analytics

Today I took some time to look up what Wikipedia.org had to say about web analytics and where it was drawing its information from. No surprise: the WAA and Mr. Eric T. Peterson were cited on the page. There was some talk about emetrics and the "Hotel Problem," which I thought was interesting but somehow trivial. Then I took some time to read about and look into something called Win/Loss Analytics.

As it turns out, the definition basically states that Win/Loss analysis is a true single-user path analysis. You look at a single customer experience, compared to others, and note whether or not the path converted: a conversion is a win, a non-conversion is a loss. I've been interested in this for some time but was unsure how to arrive at a methodology. Using SiteCatalyst, we've been able to look directly at a single conversion path from the referrer or entry all the way through to the close of the path, or to the departure post-conversion. It can be a little tricky at times, but the value derived from the information is quite high.

Try it yourself.

Take a solid look at the referring domains into the site through the "Finding Methods" provided in Conversion. Then take a look at the same thing in your traffic reports. Match the keywords up to the path, if collected upon loading your site, or if available in the URL provided by the traffic report drill-down. Trace the path all the way through to conversion. It will take some time to refine your technique and to do some checking to ensure the information you are using is correct, but I have found it very useful for a variety of reasons.

1. It can help improve your keyword campaigns. Granted, it's a little work-intensive, but the value derived over time justifies the effort.
2. You get a picture of where your customers, the ones who buy, come from, how they react to areas within the site, and, now that we have Forms Analysis, where they bump into trouble along the path.
3. You can compare the conversion structure (I think that's probably a new term) posited by the type of referrer producing traffic to the site. For example, you can segment the referring domains (manually, only because I haven't found a way to automate the process yet) into a few categories: Primary Search Engines, Sub-Primary Search Engines, Product Feeds, Forums, etc., and measure them against each other. In the end, you can sort these and find out where it makes the most sense to concentrate your efforts.
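The manual segmentation in point 3 amounts to a lookup table plus a tally of wins per category. A minimal sketch (the domain lists and visit data here are illustrative, not a complete taxonomy):

```python
def categorize_referrer(domain):
    """Bucket a referring domain into a campaign category.

    The domain sets are examples only; in practice you would maintain
    these lists by hand from your own referrer reports.
    """
    primary = {"google.com", "yahoo.com", "msn.com"}
    sub_primary = {"ask.com", "aol.com"}
    feeds = {"shopping.com", "shopzilla.com"}
    if domain in primary:
        return "Primary Search Engines"
    if domain in sub_primary:
        return "Sub-Primary Search Engines"
    if domain in feeds:
        return "Product Feeds"
    return "Other"

# Tally wins (conversions) and totals per category from (domain, converted) pairs.
visits = [("google.com", True), ("ask.com", False), ("shopping.com", True)]
tally = {}
for domain, converted in visits:
    cat = categorize_referrer(domain)
    wins, total = tally.get(cat, (0, 0))
    tally[cat] = (wins + int(converted), total + 1)
```

Sorting the tally by win rate is the comparison the post describes: it shows which referrer category deserves more of your effort.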

There's really quite a bit more which can be provided by this. I'm available for discussion as necessary so long as it doesn't interfere with my job functions. Email is best.

Omniture Forms Analysis - Long Road Worth the Journey

As of yesterday evening, CableOrganizer.com is number 441 on the Internet Retailer Top 500. It's pretty exciting for all of us, considering we sell such obscurely niched items. It's taken the gang a lot of work and constant improvement to produce that result, and it's something we are all very proud of.

Now that that's out of my system, I'd like to rave a bit, if I may, about the most exciting recent development in my analytics toolbox. After months of trying to get a handle on the really advanced analysis tools, Sebastien Hoffman, one of our outstanding IT folks, implemented a Forms Analysis tool that reports form errors, successes, and abandonment into our SiteCatalyst. Actually, in terms of operational flow, the order is the opposite: abandon, error, success.

What does that mean, you ask...

It means that we can measure the success of our interface with the customer and gauge their experience. We can see where people are consistently having problems in our forms and make changes to reflect the necessity for ease of use. This starts simply, by looking at the ratio of successes in an area to the number of errors. If for every 10 successes in a portion of the form you have 1 error, that probably isn't significant or having a real impact on your conversion. If, on the other hand, your ratio is down around 4:1 or worse, you may want to run some usability test scripting through that area to scientifically replicate the problem. Once you have figured out what it is, whether technical or human-experience related, you should probably plan on running some A/B or multi-variate testing to find out whether other elements could help produce a better ratio of success. (Note: I do not recommend making this your first multi-variate test, especially if the form is your billing info form or something of similarly great importance to your everyday operations!)
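The success-to-error heuristic is easy to run over exported form data. A minimal sketch (field names and counts are made up; a 4:1 ratio is used as the investigate-this threshold):

```python
def fields_to_investigate(field_stats, ratio_threshold=4.0):
    """Flag form fields whose success-to-error ratio is at or below a
    threshold (e.g. 4 successes per error or worse).

    field_stats maps field name -> (successes, errors). Fields with no
    errors are never flagged.
    """
    flagged = []
    for field, (successes, errors) in field_stats.items():
        if errors and successes / errors <= ratio_threshold:
            flagged.append(field)
    return flagged

stats = {"email": (100, 5), "card_number": (40, 12)}
fields_to_investigate(stats)  # ['card_number'] (40/12 is about 3.3:1)
```

Fields that come back flagged are the ones worth writing a usability test script around before touching them with a multi-variate test.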

17 May 2007

Anil Batra and ZA/AZ Interview Published

This evening Anil Batra, a blogger and analytics writer from ZA/AZ, published an interview about my job with CO. It's available for reading at http://webanalysis.blogspot.com.

It gives a pretty fair look at the tasks associated with being a web analyst. It also gives a solid explanation of how I ended up upside down in jobs and here in the world of statistics. Feel free to contact me with questions or advice on items I have discussed in the blog. I will do my best to help but cannot advise at the expense of my employer.

Upcoming Buzz in Google Website Optimizer

Good Evening,

As a portion of the people I deal with on a daily basis know, I am a test-happy web analyst for CableOrganizer.com in South Florida, US. I am constantly trying to find new ways to manipulate and blow the gears off existing technology for its improvement. One of the ways my job allows me to play at this is the free tool offered by Google called Website Optimizer. It is an analytics tool (which can be attached to all the other Google goodies) for performing multi-variate and split (A/B) testing of page elements.

My initial foray into the multi-variate testing world brought me to this tool and a beta test through Robbin Steif at LunaMetrics in Pittsburgh, PA. Through a series of conversations and e-mail correspondence, Robbin helped me understand the principles and the relatively complex math and ideas behind the tool. I couldn't help it; two weeks into the conversations I started to think about its limitations and what I could do to make those limitations our strengths.

With that, I set out with a handful of wrenches and poured them into a high-performance engine (no pun intended). What I came up with is this:

1. Although the tool is built on the idea that tests must run as a full-factorial analysis, when paired with the multi-variable framework available in Omniture's SiteCatalyst, you can develop and test with Taguchi methods. How, you say? That's a future post, and trust me, it's a scorcher.

2. Proxies can be excellent indicators of conversion. By this I mean using multiple pages tagged as a success metric within Google Website Optimizer. If you are testing a landing page and you are an eCommerce company, chances are a full factorial test with a single conversion metric page (success or goal, in the Optimizer interface) will limit your ability to test efficiently in one of two ways: A, you can only test limited elements if your page traffic isn't phenomenal, and the test will still take quite some time; B, the effect will not be fully realized unless you can show that your page metric is a true indicator of conversion. (So you end up using proxies anyway; why not stretch the limits and measure the ripple?)

3. While not an ideal tool, it is free, and it can be manipulated with a little work to do nearly anything. You can pass elements of all kinds in and out between the scripts, and the scripts determine the element. Another post on this will be available after we have successfully completed the tests we currently have slated for CableOrganizer.com this month.

4. Although the test doesn't produce a final result until a clear winner has achieved a certain number of successes and the sample size is statistically valid, if you create a clearly differentiated set of variables, your optimal choice should be clear well before then.

Website Optimizer is great for creating a series of internal, personalized best practices with which you can rewrite your important landing pages and make an immediate impact. The tool is relatively simple to use and can certainly benefit anyone who picks it up. I would be glad to help anyone with an interest in making this tool work for their needs, and I can certainly speak as a reference to the competence and skill of Robbin and Taylor at LunaMetrics.

Please feel free to contact me through a post, and I can get you information on our methodologies.