
A Marketer’s Guide to Agile Development – Does Agile Have Time for Research?

Rapid development calls for rapid decision-making. Agile practitioners sometimes make the mistake of assuming that an Agile environment has no spare time built in for research to inform those decisions. It’s not so. You’re actually more at risk of wasting time when you don’t research.

Research can take many forms, and a lot of insight can be gained very quickly.

Research is for grownups.

A big part of Agile culture is empowerment and self-determination. But developing software costs real money. The hard truth is that not every cool-sounding idea your brain spawns deserves to come to fruition. The ideas that should be developed are ones that research shows will advance your organization’s business goals – you know, the reason they pay you to come to work.

Research should be proactive.

It doesn’t matter what kind of software development environment you’re in. Which is smarter – a survey to find out if the Ultimate Extremely Cool Tool would solve a problem and/or delight users? Or a survey to find out why users won’t use the Ultimate Extremely Cool Tool your company spent four months developing? (If you’re unsure which one of these is smarter, you might want to take a personal day and read a couple of months’ worth of Dilbert comics to gain some clarity.)

Objectivity is a must.

Research should be done by people who don’t have as much skin in the game as the people writing the software. It needn’t – and indeed shouldn’t – be done by the development team itself, but by research professionals on staff or outsourced. Why? The same reason they don’t let judges preside over cases where their kid is a defendant.

Testing is a crucial form of research.

Don’t skip test-driven research – the lack of it could come back to bite you in the…area where you don’t want to be bitten. I wrote more about this recently. But don’t just take my word for it. Alberto Lumbreras has written some great blog posts on the subject. You know the old adage: “Act in haste, repent in Sprints 15 and 16.”

A Marketer’s Guide to Agile Development – Testing…Testing…

I don’t know if this is a universal trait – and I’m not entirely sure why – but in my experience, Agile dev professionals tend to resist testing. I’m not talking about standard software Quality Assurance (incredibly, that’s not always a given either). I’m talking about A/B or multivariate testing, and true UX testing. Here are things I have actually heard from dev groups in my career as a marketing professional.

“We don’t have the resources to do A/B testing – it would double development time.”

The person who said this misunderstood what A/B testing is. In most cases, you’re testing something new against your existing site – which is already developed and in production. So you’re only developing the new thing – and you were doing that anyway. Where’s the extra work?

If you’re introducing multivariate testing, developing multiple versions of something new will create extra work for a dev team used to developing only one version at a time. But that’s the right way to do it for maximum iterative improvement.
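To make the “where’s the extra work?” point concrete, here’s a minimal sketch of how an A/B split can be served. The function and variant names are illustrative, not from any particular tool: the existing site is simply the control, so the only thing to build is the new variant and a few lines of bucketing logic.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "new_feature")) -> str:
    """Deterministically bucket a user into a test variant.

    Hashing the user ID together with the experiment name keeps each
    user in the same bucket across visits. The "control" bucket is the
    site you already have in production – no extra development needed.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment:
variant = assign_variant("user-42", "cta-button-test")
```

A multivariate test is the same idea with a longer `variants` tuple – which is where the real extra development work comes in, since each variant has to be built.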

“We are strapped for time – we don’t have time to develop things that won’t make it into production.”

You’re saying you don’t have time to find out whether the change you’re making to boost click-through rates will ratchet up the bounce rate instead. That’s dangerous to your website’s health. It means the whole enterprise needs to guess right 100% of the time – which just doesn’t happen. You’re also essentially saying you only have time to develop and deliver something once – and that’s it. Not very iterative, is it?

“Our dev team/marketing team/product manager would be demoralized if all their hard work never saw the light of day.”

That’s why they call it work. That’s why they have to pay you to do it. Certainly, we can make it fun. But ultimately, decisions on what does and doesn’t go into the final software release MUST be based on how that software accomplishes business goals. Team members who only feel validated by having their work seen can start a blog. Or therapy.

“Doing UX testing on only six people isn’t statistically significant.”

The same person who told me this also told me that he wanted a study on the top 1% of revenue-generators because that would be enough customers to be statistically significant (but surely you understand that the top 1% isn’t a representative sample for…aw, screw it, where’s my propeller hat?). UX testing is not the same as quantitative testing – five or six subjects are plenty. If you’re not sure why, read Jeff Sauro’s great blog post about user testing and sample sizes.

“We don’t need to test – we have a good feel for what users want.”

Robust, qualitative user testing tells you where you need to tweak to enhance your software’s usability. User testing is necessary, in part, because you may be too much of an insider, too in love with your own work, or too defensive about your work to identify the usability problems that can torpedo software adoption. It is also necessary because it helps remove our own biases and opinions (the dreaded “I think users would want”) as impediments to truly successful software. Skip it and you’ll risk shipping software that works but that users won’t use.

“I had my frat buddy/Luddite office mate/mother test it, and they said it was great!”

User testing can’t be successful with insiders. Or insiders’ friends. Or insiders’ mothers. Real users won’t forgive you if it worked on your machine but doesn’t work on theirs. Users won’t let you explain the jargon that makes no sense to them and perfect sense to you. UX is a professional discipline – and no, you can’t do it just as well.