
Testing WebGUI

Most likely, you're reading this book because you're a developer - a hacker. You are gifted in logical thinking, problem solving, caffeine consumption, computer programming, handling sleep deprivation, system administration, typing fast and drooling over the latest in hardware. Meetings, dealing with users (or managers) and writing documentation are probably not high on your list of priorities. Testing your software falls into the same category, but past experience has shown that not testing can be severely limiting to your career. The good news is that there is a way to make software testing easier, and even fun.


Testing has usually been done by hand and involves installing the software, resetting the database, restarting Apache, firing up a browser, logging in, turning on admin mode, adding and configuring the asset, committing the asset, adding users and groups for permissions, checking all the screens and options... are you frustrated just thinking about it? And if that weren't enough, with each change to your code, you have to restart that whole process again. It is also a process fraught with problems, because if you forget a step, you might miss a bug.


Automated software testing makes the process easier by allowing you to do all those things with code. That's right: you write code to test your code. This is actually a good thing, because you're good at writing code. You write a test once, then run it again and again, and it will do the same thing every time. No more clicks, no more wondering whether you did things in the right order or dropped a step.


Your tests can also function as a kind of documentation, since other developers can read your tests and see how you expect your code to be used. If bugs are found, then you can write tests to duplicate the bug and make sure that it gets fixed, and perhaps more importantly, that it stays fixed in the future as libraries change, or other people maintain your code. You can write the tests before you write your code. When all the tests pass, you're done!


Hopefully, this has whetted your appetite and raised your interest in automated testing. This chapter is devoted to showing you how it's done.


Writing Tests in Perl

WebGUI is written in Perl, and all of its tests are written in Perl. There is a whole set of testing modules in Perl which provide methods for simple, scalar tests, testing data structures, simplified browser testing and even automating the writing of tests. These different modules all emit output using a standard called TAP (Test Anything Protocol).
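
Here is a small sample of TAP output (the test script and test names are invented for illustration):

1..3
ok 1 - Family object was created
not ok 2 - First son added to family
#   Failed test 'First son added to family'
#   at foo.t line 10.
ok 3 - Second son added to family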



The example above shows a sample TAP output. TAP is a very simple format with five main parts:


  1. Test Plan: At the beginning is the number of tests that are expected to be run. This is set in the test script, and helps any program that post-processes the output to know whether the test died before all tests were run.

  2. Test Status: Each test outputs a simple ok or not ok message, depending on whether the test passed or failed.

  3. Test Number: The testing module automatically takes care of numbering the tests. Since comments in tests are not required by the format, this can help to identify tests that are failing.

  4. Test Specific Comments: Optional test specific comments can give details about each test, such as what is being tested. This helps a lot in debugging tests that fail, and I strongly encourage ("strongly encourage" were the words my Dad used to say when he meant, "Do this or you'll get whupped.") you to write descriptive comments for each and every test that you write.

  5. Diagnostics: Diagnostics can be created by the test to show why it failed: "Expected to get an Asset, but got back a User object instead," or added by the test script to show sections, debugging output, or anything else.

TAP is a very human readable format, but going through the output of hundreds of test scripts, where each script has 50 or so tests, is neither easy nor fun. So, in addition to testing modules to help you generate TAP, there are test harnesses which take care of running sets of test scripts and summarizing their output.


yourTest...... Failed 1/3 subtests

Test Summary Report
-------------------
foo.t (Wstat: 0 Tests: 3 Failed: 1)
  Failed test number(s): 2
Files=1, Tests=3, 0 wallclock secs ( 0.02 usr + 0.01 sys = 0.03 CPU)
Result: FAIL


The above example shows the output of a test harness called prove. Depending on the version of the Test::Harness module installed with Perl on your system, this output can be different. Now that you've seen TAP, let's see how to generate it with Test::More.



Test::More is the workhorse of Perl testing, providing a whole slew of testing subroutines. However, before you begin using them, you need to generate the test plan. This can be done either when you use the Test::More module, or later using the plan subroutine.


use Test::More tests => 5;


use Test::More;

my @testData = generatedTests();
plan tests => 15 + scalar @testData;


Using the plan subroutine gives you the freedom to calculate the number of tests. For example, you may need to test each file in a directory, or you may have data driven tests as shown above. Regardless of which method you choose, you're now ready to start writing tests.


The most basic test method is called ok.


ok( $test, 'Test passed');


If $test is true (as Perl defines true), then the test prints "ok". Otherwise it prints "not ok". In both cases, the test output is automatically numbered for you, and the comment 'Test passed' is appended to the output.


You can test everything with ok. All that you need to do is perform the comparison yourself and then test the result of the comparison. To test for falseness, just invert the variable.


my @sons = $family->getChildren;
my $hobbyCheck = $sons[1]->getHobby eq 'Trains';
ok($hobbyCheck, 'Second son still likes trains');

my $match = ($home =~ /kids/);
ok($match, 'The kids are in $home');

ok($sons[0]->bouncy, 'My first son is bouncy');

my $ageDifference = $sons[0]->getAge - $sons[1]->getAge - 2;
ok(!$ageDifference, 'Sons are two years apart in age');


That works, but it has a few problems:


  1. It's awkward. The numeric check for the difference in ages had to be coerced into a boolean check, and then logically inverted to make the test pass.

  2. It uses a lot of temporary variables. Now, strictly, they did not have to be created, but if the test fails, it is a lot easier to print them out when they are in separate variables, rather than "flattened" into the first argument of ok.

  3. When the test fails, it just prints 'not ok', and not why, since ok is just a boolean test.


Fortunately, Test::More provides several more testing subroutines that fix those problems.


my @sons = $family->getChildren;
is($sons[1]->getHobby, 'Trains',
    'Second son still likes trains');
like($home->status, qr/kids/, 'The kids are in $home');
ok($sons[0]->bouncy, 'My first son is bouncy');
cmp_ok(
    $sons[0]->getAge - $sons[1]->getAge,
    '==', 2,
    'Sons are two years apart in age'
);



Most of your tests will use is and like:


is($test, $expected, $comment);
like($test, $regexp, $comment);


is does a string comparison between $test and $expected, almost the same as doing


ok( $test eq $expected, $comment );


except that the diagnostics are better if the test fails.


is($youngSon->wantsToEat, $youngSon->askedFor,
    'Young son asked for a hot dog');

not ok 1 - Young son asked for a hot dog
#   Failed test 'Young son asked for a hot dog'
#   at foo.t line 8.
#          got: 'hot dog'
#     expected: 'pizza'
# Looks like you failed 1 test of 1.

If you had used ok instead of is, $youngSon would still be having a fit, but now you understand why.


There is also an inverted form of is, called isn't (for those of you who are developers and students of grammar, isn't will Do What You Mean). isn't passes if the first two arguments are not equal, as strings.
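
For instance (a contrived example; in code the usual spelling is isnt):

isnt($youngSon->willEat, 'broccoli', 'Young son will eat almost anything');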


For checking parts of a string or for doing anything more complex than a simple string comparison, you can use regular expressions with like.


like($olderSon, qr/$father/,
    'Just a hack off the old block');

not ok 1 - Just a hack off the old block
#   Failed test 'Just a hack off the old block'
#   at foo.t line 10.
#                   'blue-eyed'
#     doesn't match '(?-xism:short-tempered)'
# Looks like you failed 2 tests of 2.


The first argument to like is a literal or variable. The second argument is a regular expression that will be matched against the first. If the test fails, as shown above, then the diagnostics will show both the actual text and regular expression that were used in the test, in case they were dynamic, as in the example. As with is, there is a negated version of like called unlike, which passes if the text does not match the regular expression.


cmp_ok is a more general version of is. is always does a string comparison on the two values, checking them for equality. cmp_ok allows you to choose how the comparison takes place, either as strings or numbers.


cmp_ok($test, $operator, $expected, $comment);


cmp_ok allows you to get around problems with comparing floats versus integers.


is('2.00', '2', 'Float vs integer equality in strings');
cmp_ok('2.00', '==', '2', 'Float vs integer equality as numbers');

not ok 3 - Float vs integer equality in strings
#   Failed test 'Float vs integer equality in strings'
#   at foo.t line 12.
#          got: '2.00'
#     expected: '2'
ok 4 - Float vs integer equality as numbers


As strings, '2.00' and '2' are quite different, but numerically they are the same. Any of the standard relational operators can be passed to cmp_ok allowing you to say things like "five or more", or "fewer than 2".


cmp_ok($wife->trips_taken, '>=', '5', 'Wife is happy');
cmp_ok($youngSon->timeouts, '<', '2', 'Young son is happy');
cmp_ok($olderSon->trains_ridden, '>=', '3', 'Older son is happy');


Finally, it's still okay to use ok, but only if you're checking truth or falsehood, and not for specific values. After all, if the code you're testing is documented to return false, it could return 0, the empty string '' or undef, and it might change which one it returns between versions of the code.
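
For example (a contrived sketch; isAsleep is an invented method), test the boolean result rather than a specific false value:

ok( !$olderSon->isAsleep, 'Older son is awake' );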


Up to this point, all of the tests shown have been for scalar variables. Lest you think that hashes, arrays and objects have been neglected, it's time to point out Test::More's is_deeply. It's good for quick, surface checks of data structures. However, the Test::Deep module provides methods for object method checking, order insensitive array checks, and many, many more tests with better diagnostics than is_deeply provides.
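
For example, here is a quick is_deeply check (the profile method is invented for illustration):

is_deeply(
    $family->profile,
    { lastName => 'Smith', children => 2 },
    'family profile contains the expected data'
);

And here is a sketch of an order insensitive check with Test::Deep's bag function:

use Test::Deep;

cmp_deeply(
    [ $family->getChildren ],
    bag( $olderSon, $youngSon ),
    'got both sons, in any order'
);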


Test::More provides several other methods for tests; you should spend time reading the POD for the module. More importantly, Test::More provides ways to manage sets of tests.


As you begin writing tests, you'll find code that causes your test scripts to crash, which prevents any tests from running after that point. For example, the code in the earlier example actually throws an exception a few seconds after returning "pizza". You know that $youngSon should not throw an exception after being given what he asked for, but you don't have time to go and diagnose what's going on right now, so you decide to skip the test:



skip "Avoiding food conflicts", 1 if $menu_has_pizza;

is($youngSon->willEat, $youngSon->askedFor,

'Young son asked for a hot dog');


ok 1 # skip Avoiding food conflicts


Skipping a test, or a set of tests, consists of two parts. First, you mark the tests that you want to skip by placing them in a named SKIP: block. Inside the SKIP: block, you use the skip subroutine to give a reason why the tests will be skipped, and to define how many tests will be skipped. Skipping can be made conditional by adding a condition to the skip statement. In our case, if there's no pizza on the menu, it's safe to run the test, since $youngSon won't ask for pizza after saying he wants a hot dog.


The skipped tests will not be run, thus avoiding any potential problems (so long as $youngSon doesn't decide he'll eat a hamburger instead). Test::More will generate the requested number of lines of TAP output, with the tag skip and the reason for skipping, rather than the comments for each test.


You may also want to handle tests that are failing, and that you know will keep failing for a while. For example, consider the first test in the cmp_ok example above. In September, with the fifth trip of the year set for beautiful Madison, Wisconsin in October, I don't want the test to continue to fail. In retrospect, I should have used a different method in the Wife class rather than trips_taken. Something like trips_planned would have been much better, but the Wife class doesn't have that method yet. Until then, the test can be marked as passing, even if it actually fails.


First, you place the test inside a named TODO: block. Then, inside that block, you localize the $TODO variable and assign it a string explaining why the tests are marked as TODO. When the test runs, the contents of $TODO will be appended to the test comment and the test will be counted as passing.



TODO: {
    local $TODO = 'Need to add a trips_planned method to $wife';
    cmp_ok($wife->trips_taken, '>=', '5', 'Wife is happy');
}

not ok 1 - Wife is happy # TODO Need to add a trips_planned method to $wife
#   Failed (TODO) test 'Wife is happy'
#   at foo.t line 26.
#     '4'
#         >=
#     '5'

Testing in WebGUI

Now that you've learned methods for writing tests, you need to know how to handle the WebGUI specific parts of testing. The most important thing, as you've seen from the earlier chapters in this book, is that you absolutely must have a session variable. You can't do anything in WebGUI without a session variable (that's not strictly true, since you can call some functions out of WebGUI::Utility, but that's not all that interesting).


A lot of work has been put into making WebGUI testing as easy as possible, and that work is encapsulated into WebGUI::Test. It will create and destroy a session variable for your tests, tell Perl how to find the WebGUI library, and provide convenience methods for getting to several files and directories, such as the WebGUI library and root directories, the testing collateral directory and the WebGUI configuration file that was used to create the session.


To get access to all of that, you need two things:


  1. Use the WebGUI::Test module in your tests. If you build your tests starting with the test skeleton, /data/WebGUI/t/_test.skeleton, you'll be set!

  2. Set the WEBGUI_CONFIG environment variable to the absolute path of the WebGUI config file that you want your tests to use. A minimal test script using these pieces is sketched below.
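
Here is a minimal sketch of a test script built on those pieces (the two tests are illustrative):

use strict;
use warnings;

use WebGUI::Test;    # finds the WebGUI libraries and creates a session
use Test::More tests => 2;

my $session = WebGUI::Test->session;
ok( defined $session, 'WebGUI::Test created a session' );
isa_ok( $session, 'WebGUI::Session' );

Run it with the config file set in the environment (myTest.t stands in for your test script):

> env WEBGUI_CONFIG=/data/WebGUI/etc/mywebgui.conf prove myTest.t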


WebGUI Testing Modules

Included here are a number of WebGUI Testing Modules and some examples to help illustrate them.



WebGUI::Test::Maker::Permission

This module is designed to make it easy to test user level permissions for web access methods, such as WebGUI::Asset::www_view and WebGUI::Operation::www_addGroupsToGroupSave. Simply create an object, configure it with a session variable, the subroutine or method to call for testing, and lists of users that you expect to pass and to fail the test. The object does all the hard work for you.


In the example below, a WebGUI::Test::Maker::Permission object is built, then set up to test WebGUI::Asset's canAdd method. Admin (userId 3) and a test user in a group will be able to call the method; Visitor (userId 1) and a different test user will not. The object will run eight tests, two for each user. The first test sets the requested user as the default user in the session object; the second passes that user's userId as a parameter to the method.


my $canAddMaker = WebGUI::Test::Maker::Permission->new();

# Configure and run the permission tests.  The prepare/run calls are
# reconstructed from context; check the module's POD for the exact API.
$canAddMaker->prepare({
    'className' => 'WebGUI::Asset',
    'session'   => $session,
    'method'    => 'canAdd',
    'pass'      => [3, $testUsers{'canAdd group user'} ],
    'fail'      => [1, $testUsers{'regular user'}, ],
});
$canAddMaker->run();





WebGUI::PseudoRequest

This handles almost everything that a real Apache2::Request object does in WebGUI, including managing headers, setting and retrieving the request status, form processing and file uploads, except that you don't have to have an httpd server process running to make it work. A WebGUI::PseudoRequest object is put into your session variable automatically when you use WebGUI::Test. This is more convenient than creating one when you need it, and allows you to easily test most of WebGUI.


WebGUI::PseudoRequest has one known shortcoming. If the session variable has a valid request object, then some of the WebGUI core code will try to load mod_perl code for handling cookies, or to actually send the HTTP header information. However, this is only encountered infrequently, and having a valid request object outweighs the occasional inconvenience. The way to handle those cases is to set the request object inside the session to undef.




For example, here's a sketch of testing a file upload (the uploadFiles call is an assumption about the PseudoRequest API):

# uploadFiles is assumed here; it mimics a form-based file upload
$session->request->uploadFiles(
    'oneFile',
    [ WebGUI::Test->getTestCollateralPath('') ],
);

my $fileStore = WebGUI::Storage->create($session);
is($fileStore->addFileFromFormPost('oneFile'), '', 'Return the name of the uploaded file');


How Good are Your Tests?

Eventually, you'd like to know how effective your tests are. Have you wasted effort writing too many tests? Where do new tests need to be written? These questions are answered by analyzing code coverage. In code coverage, a "third-party" tool keeps track of every line of code that gets run during the tests, and in the end it tells you:


  1. Which subroutines were called and which were not.

  2. If all conditionals walked down both the true and the false branches.

  3. Whether all pieces of a complex conditional statement were exercised. For example, if your code uses $a || $b, did the tests exercise $a true, $a false with $b true, and both false?

In Perl, the "third-party" tool is called Devel::Cover. It analyzes all of the metrics above, as well as telling you if your code has enough POD, and gives you an aggregate overall score for your code. Devel::Cover has good documentation, and you can refer to it for details on filtering which code is analyzed and which isn't.


For a quick reference, here's how to run a coverage test across the whole WebGUI test suite.


> cover -delete     ## delete old data in the coverage database
> env WEBGUI_CONFIG=/data/WebGUI/etc/mywebgui.conf PERL5OPT='-MDevel::Cover' prove /data/WebGUI/t
> cover             ## generate the coverage report in HTML format


Use the coverage report in the coverage directory, cover_db/coverage.html, to see which areas still need tests.


However, even if the coverage report shows that a module has 100% coverage, that doesn't mean the code has no bugs. First of all, remember that the quality of coverage is determined by the quality of your tests. If your test has a hole in it, the code may be covered but still hide a bug. Regular expressions are another place to watch out for. As long as you pass one piece of data through the regexp, Devel::Cover will consider it covered, even though the regexp may not be fully exercised. If you have code that divides two numbers, Devel::Cover will report it as covered too, but it won't check whether you've handled divide-by-zero errors.
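
For example (a contrived sketch; the splitBetween method is invented for illustration), a division that Devel::Cover reports as covered still deserves an edge case test:

my $perChild = eval { $budget->splitBetween( scalar @sons ) };
is( $@, '', 'splitBetween works for a family with children' );

eval { $budget->splitBetween(0) };
like( $@, qr/division by zero/i, 'splitBetween handles a childless family' );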


Use Devel::Cover to determine what code needs to be covered more thoroughly, but never forget to write edge and corner case tests, too. Your goal should be not only 100% coverage, but strong, robust tests as well.


General Advice for Testing

  • Never, ever run a test on a production website, unless you have a full back-up of:

      • the database

      • the uploads area

  • Never run a test with a config file without having a full back-up of the config file. The tests will modify the config file, and in the process, any comments will be stripped from the file.

  • Always start a new test with the test skeleton file, _test.skeleton.

  • Always clean up after your tests. Delete any users, groups, assets, version tags, workflow activities, database tables or entries, ads, ad spaces, products, macros or storage locations that you create for tests. If you change a config file or a setting, put it back into its "normal" state at the end of the test in an END block; see the sketch after this list. This always resets the environment to a known state for the next test.

  • Some assets need to be committed before you can add children to them. Get in the habit of creating an asset, then committing it, then adding children to it. The children should be contained in their own version tag. Don't forget to rollback all the version tags at the end of your test to clean them all up.

  • Every test should have a plan, so that if it dies it can be detected in the test run.

  • Use good programming techniques. Don't cop out on code quality just because you're writing a test.

  • Choose the right test for the right job. My examples in the beginning of the chapter were corny and contrived, but they showed how using the right test method can make your test more robust and provide verbose diagnostics in the event of a failure.

  • Remember that in the Scientific Method, success does not prove your theory. When developing new tests, make sure that they fail when you expect them to. If they don't, then the theory that your tests protect against failure is false!
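
As promised above, here is a sketch of cleaning up in an END block (the particular objects are examples; rollback and delete are the usual WebGUI cleanup methods):

END {
    $versionTag->rollback if $versionTag;    # remove test assets
    $testUser->delete     if $testUser;      # remove the test user
}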


Other Testing Resources

Test::Tutorial
by Michael Schwern and the Perl QA Dancers
An excellent introduction to basic testing.

Perl Testing - A Developer's Notebook
Ian Langworth and chromatic
O'Reilly Publishers
The canonical book on Perl testing, covering many different modules and approaches to testing with a very, "How do I get this done?" approach.


Perl Best Practices
Damian Conway
O'Reilly Publishers
Chapter 18 goes much more in depth as to why you should test, and covers some good strategies for testing.


Devel::Cover
Paul Johnson

Test::Deep
Fergal Daly

