Showing posts with label Daniel Shields. Show all posts

28 August 2007

Blackle.com: Energy Saving Screen.....Bahhhhhhhhh!!!!

We love testing. It's what makes an analyst get up in the morning and drive 15 miles through bumper-to-bumper traffic a different way every day to find the optimal path. It's what makes us count our gas efficiency in our Honda. It makes sense. It's who we are. In that spirit, CableOrganizer has taken up some additional testing methodologies with regard to the internet, some of which have very real-world ecological implications. On this occasion, we set out to get the real story behind Blackle.com.

Blackle.com touts a running count of how many watts it has saved. The counter is based on the idea that the energy savings from non-illuminated pixels is significant enough to actually help ebb the wasteful flow of energy from the 17" screens in front of each of our faces. Did we believe it? Hell no, baby...we question everything. At first thought, I said, I'll bet it's true. I admit, I knew nothing about LCD energy consumption. So Paul, myself, and Jason Hernandez, the sales associate at the company who brought this to our attention, slated and performed a test to see what the truth was. Hint: get Snopes.com on this right away.

Our test included one 19" flat-screen LCD (model E197FP), Firefox 2.0, and a Watts Up? EZ electrical consumption meter in our offices. We opened the browser to the Google.com page, and opened a tab to the Blackle.com page. With Google.com in the foreground, the monitor consumed 28.2 watts, give or take one or two tenths of a watt. When we switched over to Blackle.com in the same browser, the same monitor consumed 29.4 watts, give or take the same margin. To help validate the result, we performed the test again with the camera running for everyone to see. Now, keep in mind, we did not use a CRT (cathode ray tube); if CRTs were still the majority of screens in use, the claimed savings might be plausible. However, given the pervasiveness of the LCD, the premise of the site does not justify its existence.
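The arithmetic behind our conclusion is simple enough to sketch. Only the two wattage readings come from our test; the daily usage hours below are a hypothetical assumption for illustration.

```javascript
// Readings from our meter test; usage hours are assumed, not measured.
const googleWatts = 28.2;   // white Google.com page on the 19" LCD
const blackleWatts = 29.4;  // black Blackle.com page, same screen
const hoursPerDay = 8;      // hypothetical daily browsing time

// A positive difference means the black page costs MORE energy on an LCD.
const extraWatts = blackleWatts - googleWatts;
const extraKwhPerYear = (extraWatts * hoursPerDay * 365) / 1000;

console.log(extraWatts.toFixed(1) + ' extra watts');       // 1.2 extra watts
console.log(extraKwhPerYear.toFixed(2) + ' kWh per year'); // 3.50 kWh per year
```

In other words, on our LCD the black page didn't just fail to save energy; it consumed measurably more.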

Now, this has piqued our interest. The meter itself is fun. We can measure volts, amps, or watts, or even set it to record how much money each item will cost. Business and residential users alike have to love the fact that you can see Watt your electrical bill will be before you get it, and how much each appliance is costing. True, at least for me, our utility (FP&L) actually provides this data to help curb excessive energy consumption. But for some, this may actually make a difference in expense, and aid in the battle for awareness of consumption. Imagine that...
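The cost figure the meter displays can be reproduced in a few lines. The utility rate and runtime below are assumptions for illustration, not numbers from our bill.

```javascript
// Turning a meter reading into a monthly dollar figure.
// Rate and runtime are hypothetical assumptions.
const watts = 29.4;          // measured draw of the monitor
const hoursPerMonth = 720;   // running around the clock for 30 days
const ratePerKwh = 0.12;     // assumed utility rate in $/kWh

const kwhPerMonth = (watts / 1000) * hoursPerMonth;
const monthlyCost = kwhPerMonth * ratePerKwh;

console.log('$' + monthlyCost.toFixed(2) + ' per month'); // $2.54 per month
```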

22 August 2007

Putting Limitations in Their Place in Google Website Optimizer

This appears to be an area where I keep finding new and exciting things to discuss with regard to running multivariate tests on page elements within a site. In the spirit of Google's pursuit of excellence in business and academia, and its willingness to let us push past the limitations we impose upon ourselves, I've decided to open a post inviting people to respond with the obstacles they've conceived of that have prevented them from executing a test in GWO. Here's how it works:

You tell me the situation: you, your boss, or some smart-(insert posterior here) in your office has created a mental block to performing a high-level, very useful multivariate or A/B test which, if run, could produce a multitude of insights, win you office praise, and put the cherry on your bonus. You have Google Website Optimizer and a thorough understanding of the major points which support your ability to test. If only this one thing, this tiny script or tag, or placement, or file type weren't there, you would be all set. What is that thing, that Bucky Dent of an item which stands between you and your intracubicle ticker-tape parade?

We Take a Look: You, me and the boss....my boss. We sit down and examine the test. In doing so, we will carefully look into your problem and create a version of the problem outside of your live site. Then, we'll go to work building the solution necessary for you to get exactly what you want out of the test without a serious loss of resources; mainly time and money.

We Form a Plan: When we've come up with some theoretical workarounds for the problems we plan on eliminating, we'll implement them in a basic operational test. At this point, we'll also choose the appropriate place to collect success (thank-you page, cart addition, proxy page, etc.). Let 'em rip without a sincere impact on your live traffic or its variations. This will cost you a spot on your Google Website Optimizer list, which means nothing and costs less. But it will confuse you if you go back and wonder why you have multiple tests with the same name. Be ready.

Launch the Test: After the trial runs produce acceptable results, we'll launch the fully prepared test. This includes serving your multivariate test to the thousands of visitors who will (or won't) react to the output, bringing valuable stats back to your interface. In a few days, we'll check up to ensure that you're getting the right stats and collecting what you need.

Allow for Completion: As a rule, we try not to look at the progress of these tests every day. We suggest the same, simply for the preservation of sanity. Watching the way these items interact, change, and progress can be like watching a horse race on ultra-high-speed film. By that I mean, the suspense is terrible, and it goes on forever. You start to formulate hypotheses about events which aren't really occurring, along with a whole slew of other non-useful behaviors.

Analyze Results: Take a good look at the outcome. Go back and verify your files to ensure that you received all the collection tags as they were executed. Then use your information to build a follow-up experiment. Avinash implies in his book that you can narrow the field before the test completes, but, for our methodology, we will not. Take the top five variations and a 'dog' or two, and run them again to verify the results.

HERE'S THE CATCH
Something for nothing is a great idea at family retreats and picnics in the park, but it doesn't work that way. So, for the effort of putting together a test and its strategy, we're going to ask you for two things which, considering the payoff, may be extremely cheap for the return and praise you may receive. We'll discuss it when you have a chance to write me on the blog or in the Yahoo! Group.

11 August 2007

Update: Call-To-Action on Button Color Outcome

Quite a few weeks back, I wrote about the colors of buttons and their ability to appeal to buyers' impulses on an ecommerce site. Since then, I've placed 8 different buttons in two element areas on a deep-path page of the CableOrganizer site for testing. Using both Omniture and Google Website Optimizer, I was able to establish a positive correlation between our 'View Pricing' button on our Product Detail pages and sales conversion (represented by a proxy: the addition of an item to the shopping cart). The process is as follows:

  1. Using whichever metrics you choose to determine a page which can yield good results quickly, choose the page on which you expect testing to have the greatest impact. (We actually use a few calculated metrics to determine these. They include primary content composition (the percentage of total entries on which this page is the landing page within the context of the path), bounce rate, and a highly complicated weighting algorithm (developed out of the idea of proxy measurements) to determine the importance of each page, within its context and in relation to its path, in the sequence of conversion.)


  2. When the page is chosen, set up the testing scenario in Google Website Optimizer for the purpose of a multivariate test on chosen elements. Ensure that all the tagging is done and there are no overlapping elements, tables, or div tags between the script/noscript tags.


  3. Create multiple elements for each of these sections. My suggestion is to always include at least one or two 'crummy' variations on the elements. That way, you can verify their relevance rating in the process. It may lengthen the test, but it will also strengthen the output.


  4. Using Omniture, or whichever provider you have, place a 'Custom Link' inside each of the element tags you hope to verify. Make sure each 'Custom Link' is unique so that when each is clicked, they report independently of one another. For our purposes, we laid each out with a color combination so as to immediately present us with an idea of what was happening. (Also, make sure you place these in the appropriate report suite or ASI segment as necessary. Internal traffic could influence these if you are not careful.)


  5. Set your tests up completely and hash out the previews before launching. Look at how they appear at different browser sizes. Know what your users will be seeing.


  6. Lastly, set your cookies not to expire for an extended period. Make this period equal to the average amount of time your customers take before making a purchase from the page you are testing. Remember, the Google code expresses this in seconds (e.g., 3 days equals 259,200 seconds).


Once you've ensured that all of this will work, you can feel comfortable letting the test loose.
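Two of the steps above lend themselves to tiny helpers. This is a minimal sketch under assumptions: the element-color naming scheme is our own invention, and `s.tl()` is SiteCatalyst's standard custom-link call.

```javascript
// Step 4: build a unique custom-link name per button variation so each
// click reports independently in the traffic reports.
// The "element-color" naming scheme is hypothetical.
function customLinkName(element, color) {
  return element + '-' + color;
}

// Step 6: Google Website Optimizer expresses cookie lifetime in seconds.
function daysToSeconds(days) {
  return days * 24 * 60 * 60;
}

console.log(customLinkName('ViewPricing', 'green')); // "ViewPricing-green"
console.log(daysToSeconds(3));                       // 259200

// In the page, a variation's tag would call something like:
//   s.tl(this, 'o', customLinkName('ViewPricing', 'green'));
```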



Having received significant data from the test in both Omniture and Google Website Optimizer, I'm sure that the results I can see now reflect a true correlational effect on our conversion. The really funny thing is that, now that I see this occurring with such high specificity to this page, and to the context of the images and colors contained within it, I am met with more perplexing questions. These are the kind that appeal to my interest in how people consume visual cues, much as they would the composition of a painting or image. Shapes and colors play off one another. Does this mean that, because the color dominating the top button is present elsewhere in the page, and the button is the centermost position of that color, I may have to find the individually best color for each button on the page? If so, is there some concentric model which can help explain how shades and variations of color intensity create visibility, appeal, and action? I'd be very interested to hear any explanations on this topic, or tests which have been performed with regard to it.

As usual, I love to spend time thinking about and discussing advanced principles in testing, analytics, optimization and, well, anything about this field. If anyone has any ideas or suggestions, please feel free to contact me. I have been very busy for the past five weeks, but I think my posting may become more frequent for at least the next couple months. If I don't get back to you right away, it only indicates I am busy trying to keep my job.



12 July 2007

Some Tips and Tricks for Omniture Custom Link Tracking

Like anything else with Omniture, it seems that one of its most valuable items, the Custom Link tracking available in the traffic reports, is poorly explained and, consequently, difficult for people to understand or appropriately implement. Here I have compiled a list of ideas and tricks to not only help you install your own custom links, but to give you a hand making them readable, parsable, dynamic, and useful. In some instances, like those related to on-site search, these may provide more insights than your company will know what to do with; in other instances, the simplicity alone may astound you. Read on, and feel free to post any questions or comments to aid in the discussion.

Custom Link tracking is the installation of a small event call on certain actionable areas of your website. Some readers may not be using Omniture SiteCatalyst, in which case, ask your provider which method to use to gather this particular data. You create a tag which calls a script based on the event you wish to collect on. By default, the documentation explains that this should happen via the onclick attribute. This works, but often we find that people click buttons less when hitting the 'Enter' key makes more sense, so the report becomes less accurate. To combat this, try using onsubmit, or some other event of your choice, which can provide a more complete picture of the use of these links or areas of the site.
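A minimal sketch of the onsubmit idea. I've factored the tracking call into a plain function so it can be wired to any form; the form id and link name are hypothetical, and `s` is the object the standard SiteCatalyst include defines.

```javascript
// Fire a SiteCatalyst custom link when a form is submitted, so searches
// triggered by the Enter key are counted along with button clicks.
// 'o' marks this as a generic custom link in the traffic reports.
function trackFormSubmit(s, linkName) {
  s.tl(null, 'o', linkName);
  return true; // let the submit continue as normal
}

// Wiring (hypothetical form id and link name):
//   document.getElementById('site-search').onsubmit = function () {
//     return trackFormSubmit(s, 'OnSiteSearch-Submit');
//   };
```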

Using this method, CableOrganizer.com has been able to pull off a few neat stunts.

First, we know how many people are using our on-site search compared to what our search provider is telling us. This is important because, of all the features on the site, search is the single thing which ties most directly to our conversion. The better and more visible our search, the more people will use it. The more people use it, the more information is available to draw from, and the more active the learning algorithm tied into it becomes. The better the results, the more likely people are to click and buy.

Second, we are able to gather information about the navigational habits of people using the search. Our IT department built a search-aid solution: a real-time search-suggestion application which pops out from the search box based on user input. By attaching a Custom Link to the application, we can actually determine when people use the suggestions, what they choose, and how they executed the action (either by a mouse click or by hitting the 'Enter' key). This provides us with information about what people are searching for, how often they are searching, what the habits of the majority are, and so on. It's a wonderful thing. We've used this to rethink how our navigation and our buttons should work throughout the site, and to groom some of our keyword strategies.
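One way the click-versus-Enter distinction could be encoded is a separate custom-link name per input method. A sketch under assumed naming; the search term and the event shape here are illustrative.

```javascript
// Build one custom-link name for a mouse pick and another for a keyboard
// pick, so the two habits report separately. The naming scheme is
// hypothetical, not SiteCatalyst's.
function suggestionLinkName(term, event) {
  const method = (event && event.key === 'Enter') ? 'Enter' : 'Click';
  return 'SearchSuggest:' + method + ':' + term;
}

console.log(suggestionLinkName('hdmi cable', { key: 'Enter' }));
// "SearchSuggest:Enter:hdmi cable"
console.log(suggestionLinkName('hdmi cable', null));
// "SearchSuggest:Click:hdmi cable"

// Wiring would pass the name to the custom-link call, e.g.:
//   s.tl(null, 'o', suggestionLinkName(input.value, e));
```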

Lastly, and while this may seem complicated, I assure you it's simple enough and valuable: we used custom links in the variations throughout our multivariate testing. When setting up our most recent multivariate test in Google Website Optimizer, I took some extra time and care to tag each of the variation areas with separate, individual Custom Links. While the test is running, I can not only gain insight into the best possible combination as provided by the Google interface, but I can also understand how individual elements are performing their tasks. For instance, in this case, I have 6 buttons which each have a reason and strategy associated with them. Each of these buttons is passed into the page by the Website Optimizer, and along with the button, a little Custom Link tag is passed in as well. While the Website Optimizer is using 'Add to Cart' as the action representing conversion, clicks on the individual buttons report information to me on the particular visibility of each version. This gives me a more complete picture of the button's impact and how it relates to the success of the page. I honestly can't wait to answer the question: "Does performing action A correlate to the outcome?"

By themselves, Custom Links are valuable without all the tricks and fancy dressing. If for no other reason, they help the analyst understand BEHAVIOR. That's really all that matters. If you can't boil out some idea of what a majority of your users are doing, there is no use being an analyst. Without that piece, you might as well make decisions based on the Magic 8 Ball. For me, for us, and for the science, there is no room for speculation.

As is regular and customary, if you feel like you have something to add, or you have a question, you can feel free to contact me or post here. I will get to it and answer when time allows.

15 April 2007

Multi-Variate Testing Info

There are many different methods by which an analytics practitioner can go about making leaps and bounds toward page and site optimization. These involve the mindset of the analyst performing the work, winning the trust of the design team, and adhering to the principles of science which define our objectives. An analyst must be a business scientist before he or she can claim the title of analyst.

Working with Paul Holstein, I've realized there are a certain number of things which come in very handy. First, test before you speak. Paul is a man focused on results and high levels of confidence. He does not like to make business decisions based on opinion. So, when you sit down to discuss some solid insights, you'd better know damn well how those insights are valuable, where to apply them, how to measure the results, and what to expect in terms of success. It's my impression that Paul is of the mind that science won't let you down as frequently as opinion will. I can't argue with that. It kind of reminds me of math. No matter how you cut it, math is a subject where there is no room for debate. You are either right or wrong. There's no gray area. I like that.

Understanding multivariate testing in theory is not difficult. But there are a few functions and properties of the testing which must be uncovered and dealt with prior to making any changes to the site you are planning on testing. There are a few things you should know.

The first thing to decide is what to test. My suggestion is to create a hierarchy of the pages you feel are most in need of testing, outside of any analysis. The purpose of this is to learn the detachment which must exist. You can list all the pages that really get under your skin, that you hate to look at, and which you feel are just hideously bad for your conversion, whatever that might be. (In my case, I've created an algorithmic formula which can list my best pages from top to bottom, taking all my most important KPIs into account with a single sort.) After you've done this, run some reports to determine which pages have the highest bounce rates, the most traffic, and the least importance within the path of conversion. Sort them in descending order by traffic. Within the first ten or fifteen pages, you'll see where you need to start making changes. Keep in mind, if this is your first attempt at a test, you will need to avoid making changes to function pages and pages central to the operation of the site as a whole (i.e. the home page, newsletter forms, shopping cart or checkout, or even internal search pages), as a single adversely performing test combination there may cost you a great deal.
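The selection sort described above can be roughed out in a few lines. The page records, field names, and numbers here are hypothetical stand-ins for a real analytics export.

```javascript
// Candidate pages with hypothetical traffic and bounce figures.
const pages = [
  { url: '/products/cable-ties', visits: 12000, bounceRate: 0.62 },
  { url: '/articles/wire-loom',  visits: 9500,  bounceRate: 0.71 },
  { url: '/checkout',            visits: 4000,  bounceRate: 0.18 },
];

// Exclude function pages (checkout, cart, search) before ranking,
// since a first test should never touch them.
const functionPages = ['/checkout', '/cart', '/search'];
const candidates = pages.filter(
  p => !functionPages.some(f => p.url.startsWith(f))
);

// Descending by traffic: high-traffic, high-bounce pages rise to the top.
candidates.sort((a, b) => b.visits - a.visits);
console.log(candidates[0].url); // "/products/cable-ties"
```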

Compare this list with the one you made before doing the traffic-conversion analysis. Your results may or may not be very interesting, but if the two differ, it can be a real eye-opening experience. Consistently, when I show the list to the boss and the design team, I get interesting reactions. Often they confuse aesthetics for functionality, and vice versa. This HAS to be a very common problem, as it would be with any internal sections devoted to one hemisphere of the human brain.

After you've chosen a page to examine and test, keep in mind that there are some things you can change and some things you can't. In another blog, or possibly the Yahoo! web analytics forum, Paul produced a list of things which will have a greater impact on conversion. When preparing for a multivariate test, it's probably a good idea to print it out and keep it close by as you draw out how you will attack the methodology. Since this is my first official post, I think I'll cut it here and hope that the response shows enough demand for me to continue with this blog.