First things first: software regression testing should always mean automated testing. Manual regression testing is a wasteful practice that will make you loathe change. If you go the manual testing path, the process will take more time every time you add a feature, and soon you will find that a simple change that takes a programmer a few minutes to implement takes weeks to deliver to the customer. It's best to leave manual testing to beta customers, internal or external. When they find a problem, add a test case for it to the automated test suite to prevent regression. Automated testing ensures that no regression occurred in the last build. Just like continuous integration, it is a must, and ideally you should have it right from the start of the project.
Continuous deployment that triggers tests is a bit more difficult to implement with limited resources - you would need a fresh test environment for every check-in. But at a minimum, every build should generate all installers, and you should have an automated nightly deployment and test run.
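A minimal sketch of such a nightly job, just to make the idea concrete - the script names (build_installers.cmd, deploy.py) and paths are made-up placeholders, not from our actual setup; Robot Framework's command-line runner is shown as one possible way to kick off the suite:

```python
# Hypothetical nightly build-deploy-test driver. All script and
# directory names here are placeholders for illustration.
import subprocess
import sys
from datetime import date

def run(cmd):
    """Run one step, echo it, and abort the night run on failure."""
    print("nightly>", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)

if __name__ == "__main__":
    stamp = date.today().isoformat()
    run(["build_installers.cmd"])                     # produce all installers
    run(["python", "deploy.py", "--env", "nightly"])  # refresh the test environment
    # Run the GUI suite; Robot Framework's runner writes logs and
    # reports into a dated results directory.
    run(["robot", "--outputdir", "results/" + stamp, "tests/"])
```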
Automated testing of some systems is difficult. A system consisting of web applications, desktop applications, and services, all interacting across different operating systems, cannot be functionally tested the same way you test a class library with unit tests. You need something that will emulate one or more users and interact with all parts of the system running on multiple computers at the same time. Mocks won't do. You need a program that will click buttons, read text, close windows, launch programs, and remote desktop (yes, it's a verb).
We chose Sikuli, developed at MIT and maintained by people from around the world. Sikuli (“God’s eye” in the language of Mexico’s Huichol Indians) is still quite new, and some features don't work very well, but it is the best tool out there. Image recognition (IR) in particular works great if you remember that it uses fuzzy matching and that your images need to capture the essence of what you are looking for. For example, if you have icons on the screen and you need Sikuli to find a specific one, your images should include only the parts of the icons that differ, which may mean not including their borders. Turning off the mouse cursor is tempting because it speeds things up a lot, but it makes troubleshooting harder and may cause weird errors when a program is surprised by a mouse click that arrives without any preceding events.
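A minimal Sikuli (Jython) sketch of that idea - the image file names are hypothetical placeholders; Pattern.similar() and Settings.MoveMouseDelay are part of Sikuli's scripting API:

```python
# Sikuli script (Jython). Image names are made-up examples.
from sikuli import *  # Pattern, wait, click, Settings

# Speed up runs by removing the animated mouse travel; as noted above,
# this can confuse applications that expect move events before a click.
Settings.MoveMouseDelay = 0

# Capture only the distinctive center of the icon, not its border,
# and tell the fuzzy matcher how strict to be.
save_icon = Pattern("save_icon_center.png").similar(0.85)

wait(save_icon, 10)   # give the app up to 10 seconds to show the icon
click(save_icon)      # click wherever the best match was found
```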
What doesn't work very well in Sikuli?
- OCR (Optical Character Recognition) works best when text is 12-14 pixels tall. Use smaller or bigger text and you get funny results.
- Switching between applications on Windows is not reliable; the operating system sometimes doesn't honour your requests. You need to resort to switching by clicking on things, like Windows taskbar -> right click -> Show desktop -> click on the app icon (see the sketch after this list). On Macs it may work better.
- There is no serious IDE and no unit test framework, but that problem is easily mitigated with Robot Framework and Eclipse. For a simpler environment, try Notepad++ with a workspace file and Python and Robot Framework language style templates.
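Here is a hedged sketch of the click-based fallback for the app-switching problem mentioned above - the window title and image names are hypothetical placeholders; switchApp(), exists(), wait(), and click() are standard Sikuli calls:

```python
# Sikuli script (Jython). The title and image names are placeholders.
from sikuli import *

def focus_app(window_title, title_bar_png, taskbar_icon_png):
    # First ask the OS nicely; on Windows this request is
    # sometimes silently ignored.
    switchApp(window_title)
    if not exists(title_bar_png, 2):
        # The window did not come to the front, so fall back to
        # clicking its taskbar button instead.
        click(taskbar_icon_png)
        wait(title_bar_png, 5)

focus_app("Order Entry", "order_entry_title.png", "order_entry_taskbar.png")
```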
When you have tests running overnight and something goes wrong, you need to know what happened. Logs are good, but not enough: you need to see what happened. We record all test runs with Screenpresso. Just yesterday I saw a recording of an error that I would have found very hard to believe if it had been reported by a tester. And that's what I like about programming - I see miracles every day. :-)