Changes
Testing in Perl
Preface
Why Test?
- Do you deserve to be confident?
- Can you sleep at night?
- Testing is Fun!
What are automated tests?
Just some more code that uses your functions/modules/application/etc.
Manual testing
Before we dig into the world of automated testing we should make sure we are speaking about the same thing.
- Who tests their application manually?
- What kind of application?
Why Automate?
- Why do we need automated tests when we can test manually?
- Isn't that just a waste of time/money?
- We could add more features instead.
Mostly for regression testing, but in some cases it is very hard or even impossible to test manually.
Writing tests can be a huge expense. Running the tests the 2nd time is where you start to see the benefit. After 10 or 100 runs you'll really see the profit.
Functional Testing
What kind of tests are we going to talk about here?
Not scalability, nor performance, nor security, ...
In this course, we are mostly going to talk about functional testing though one can handle security the same way.
Unit/Integration/Acceptance Test
What's the difference between them?
Mostly just the scope and who writes them.
The advantage of unit tests (that is, code checking small parts of the application) is the ability to focus on a small part of the code. It is easier to test and it is easier to understand, debug, etc.
Separation of concerns in small units. Every module should do only one thing.
E.g. if you are building a log analyzer that analyzes Apache log files and spits out HTML reports. Most of us would just write some code to read in the file line by line, process each line and at the end create the HTML report. All in one script.
A better approach is probably to create a module for reading the file, a separate module to parse each line and build a data structure from the data, then a third module that computes the accumulated data, and a fourth module to create the HTML report. This allows testing the individual components separately and it allows the reuse of those components. For example, when you want to create a PDF version of the same report, you only need to replace the final module in the chain.
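The chain described above can be sketched as a single self-contained script. The package and function names (LineParser, Aggregator, HtmlReport) and the log line format are invented for illustration; the point is that each stage can be tested on its own:

```perl
use strict;
use warnings;

package LineParser {
    # turn one (simplified) log line into a data structure
    sub parse {
        my ($line) = @_;
        my ($ip, $status) = split ' ', $line;
        return { ip => $ip, status => $status };
    }
}

package Aggregator {
    # count the number of requests per status code
    sub aggregate {
        my %count;
        $count{ $_->{status} }++ for @_;
        return \%count;
    }
}

package HtmlReport {
    # render the accumulated data; a PdfReport could replace this stage
    sub render {
        my ($stats) = @_;
        return join '', map { "<li>$_: $stats->{$_}</li>\n" } sort keys %$stats;
    }
}

my @lines   = ('1.2.3.4 200', '1.2.3.4 404', '5.6.7.8 200');
my @records = map { LineParser::parse($_) } @lines;
print HtmlReport::render( Aggregator::aggregate(@records) );
```

Each package could live in its own .pm file; here they are inlined to keep the sketch runnable.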
White box and Black box testing
- Black box testing is when we only use the external interface (e.g. GUI) of the application.
- White box testing is when we also check the internals. E.g. access the database directly.
Objectives of the course
- Why and when to test?
- Learn about the tools Perl provides for automated testing.
- Use the test framework provided by Perl.
Plan of the seminar
Part 1 - Perl Testing framework
- Introduce the Perl modules for basic testing framework.
- Testing Perl function and modules.
- TAP - the Test Anything Protocol
Part 2 - Test Anything else using Perl
- Command line applications, devices with CLI access.
- Browser based web applications
- Databases.
- X Windows.
- Microsoft Windows GUI Applications.
Process of testing
- Create fixture (set up environment).
- Run test.
- Analyze results.
- Tear down (clean up environment).
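As a sketch of these four steps, assume the code under test simply writes a greeting to a file (the file name and content here are made up for illustration):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# 1. Create fixture: a temporary directory as a clean environment.
my $dir  = tempdir( CLEANUP => 1 );
my $file = "$dir/out.txt";

# 2. Run the code under test.
open my $out, '>', $file or die "Could not open '$file': $!";
print $out "hello\n";
close $out;

# 3. Analyze the results.
open my $in, '<', $file or die "Could not open '$file': $!";
my $content = <$in>;
close $in;
my $result = ($content eq "hello\n") ? "ok" : "not ok";
print "$result\n";

# 4. Tear down: the CLEANUP flag removes the directory when the script exits.
```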
Basic Testing Framework in Perl
Things we are going to test
- Test a module
- Test complex Perl applications
- Test any application
As an introduction to testing with Perl, first we are going to write unit tests for a simple Perl module.
From there, we'll be able to move on to testing more complex applications written in Perl. Then to testing any application, no matter what language it is written in.
Testing a simple Perl module
We have a module called MySimpleCalc.pm with a single function called sum(). It is supposed to return the sum of numbers passed to it.
package MySimpleCalc;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(sum);
sub sum {
return $_[0] + $_[1];
}
1;
- sum(1, 3)
Calculator test
How would you make sure this function works correctly? You'd probably write a small, temporary script to call the function with various sets of input and you'd then check if the results are as expected.
Let's do that. Write a script using the sum() function of the module and printing the results.
use strict;
use warnings;
use FindBin qw($Bin);
use lib "$Bin/../lib";
use MySimpleCalc qw(sum);
print sum(1, 1), "\n";
print sum(2, 2), "\n";
print sum(2, 2, 2), "\n";
In this script we use the $Bin variable of the FindBin module to let perl find the MySimpleCalc.pm file.
If you run this script, the output will look like this:
2
4
4
There was an error on the last line: 2 + 2 + 2 should be 6, not 4.
After some further experimenting we find out that the problem seems to be, that sum() does not handle more than 2 parameters.
Even though this is a very simple example, it is easy to overlook such a detail and not notice that one of the results was incorrect.
Of course this is a simple computation and anyone should know what is the expected result, but what if we are testing something more complex?
Do you know what the result should be? Will you compute it manually every time to check whether the answer is correct?
We could fix the code, but we will use this case to show how to write tests.
At this point we are not interested in fixing the bug in MySimpleCalc.pm. We are interested in a robust way to write tests for it.
We'll use this example but we have not solved our biggest problems yet:
We cannot expect the test engineers looking at the output to know the correct result for each line.
Test with expected results
We should write the expected value next to the actual result:
In order to make the expected values obvious, first we have to compute them ourselves (or bring in an expert, or the client, who knows what she expects the application to display) and make sure the expected values are always written next to the actual results.
That way it will be obvious to any tester what values need to be compared.
use strict;
use warnings;
use FindBin qw($Bin);
use lib "$Bin/../lib";
use MySimpleCalc qw(sum);
print sum(1, 1), " 2\n";
print sum(2, 2), " 4\n";
print sum(2, 2, 2), " 6\n";
Output:
2 2
4 4
4 6
Now it is better.
More test cases, more output
Still, if the output is more complex than a single, short number, or if there are many results, it will be difficult for the person comparing them to notice the differences.
What if we have 100s of test cases?
use strict;
use warnings;
use FindBin qw($Bin);
use lib "$Bin/../lib";
use MySimpleCalc qw(sum);
print sum(1, 1), " 2\n";
print sum(2, 2), " 4\n";
print sum(2, 2, 2), " 6\n";
print sum(3, 3), " 6\n";
print sum(4, 4), " 8\n";
print sum(5, 5), " 10\n";
print sum(6, 6), " 12\n";
print sum(7, 7), " 14\n";
Output:
2 2
4 4
4 6
6 6
8 8
10 10
12 12
14 14
Complex output
qwertyuiopasdfghjklmnopqrs qwertyuiopasdfghjk1mnopqrs
Can you spot the difference between the two strings? (Hint: compare the letter l and the digit 1.)
It would be much better if our testing program compared the expected values with the actual results itself and only printed "ok" or "not ok" depending on success or failure.
Like this:
ok, not ok
ok
ok
not ok
ok
ok
ok
ok
ok
Print only ok/not ok
Good, so we are going to implement that now. For every test unit we create an if statement that prints either "ok" or "not ok" depending on the result.
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
if (sum(1, 1) == 2) {
print "ok\n";
} else {
print "not ok\n";
}
if (sum(2, 2) == 4) {
print "ok\n";
} else {
print "not ok\n";
}
if (sum(2, 2, 2) == 6) {
print "ok\n";
} else {
print "not ok\n";
}
Output:
ok
ok
not ok
The output is as we expected. I mean, we already know there is a bug somewhere. We are supposed to report it to the developers, but right now we are focusing on improving our test suite and its reporting capabilities.
Refactor - Write the ok function
- ok
As we are not only testers but also developers, we quickly notice the repeating pattern and decide to move it to a function so we write less code. As we like to keep things short, we call the function ok(). As we'll see, we are not the only ones who want to type as little as possible.
This ok() function gets a "true" or "false" value (that is, the result of a comparison such as == in our examples).
Reminder: In Perl undef, 0, "" and "0" count as false and all other values as true.
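A quick way to convince yourself which values count as false (this snippet is only a demonstration, separate from the test script):

```perl
use strict;
use warnings;

# undef, 0, "" and "0" are the false values; everything else is true.
for my $val (undef, 0, '', '0') {
    my $shown = defined $val ? "'$val'" : 'undef';
    print "$shown is false\n" unless $val;
}
print "'0.0' is true\n" if '0.0';   # a non-empty string other than "0"
print "'00' is true\n"  if '00';    # true, even though it is numerically 0
```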
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
ok( sum(1, 1) == 2 );
ok( sum(2, 2) == 4 );
ok( sum(2, 2, 2) == 6 );
sub ok {
my ($true) = @_;
if ($true) {
print "ok\n";
} else {
print "not ok\n";
}
}
Output:
ok
ok
not ok
But why reinvent the wheel?
Introducing Test::Simple
- Test::Simple
The Perl community has already created several implementations of the above mentioned ok() function. We'll go with the one in the module called Test::Simple. Not only will that print "ok" or "not ok" but it will also include a counter.
In order to use it, first we'll need to declare how many test units we are planning to run, that is, how many times we are going to call the ok() function.
In return we get extra features such as printing the line numbers of the ok() calls that failed and getting a final report on the number of failed tests out of the planned tests.
Use Test::Simple, tell it your plan (that is, the number of times you are going to call ok()) and use its built-in ok() function.
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
use Test::Simple tests => 3;
ok( sum(1, 1) == 2 );
ok( sum(2, 2) == 4 );
ok( sum(2, 2, 2) == 6 );
Output:
1..3
ok 1
ok 2
not ok 3
# Failed test at examples/perl/tests/t10.pl line 12.
# Looks like you failed 1 test of 3.
It is more verbose, but it has a couple of additional useful pieces of information: 1..3 says how many tests we were planning, then we get the tests numbered, and we even get a small explanation when a test fails.
$ echo $?
1
> echo %ERRORLEVEL%
1
Test::Simple when everything is ok
For the following example we've replaced the failing test with one that is successful. This way you can see what it looks like when everything is ok.
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
use Test::Simple tests => 3;
ok( sum(1, 1) == 2 );
ok( sum(2, 2) == 4 );
ok( sum(3, 3) == 6 );
Output:
1..3
ok 1
ok 2
ok 3
$ echo $?
0
> echo %ERRORLEVEL%
0
Test::Simple - missing test
- count
So why are those numbers necessary? Imagine you managed to write 200 unit tests. Someone who does not know about the number runs the test suite and sees "ok" printed 17 times. It looks like everything is ok. He won't notice that instead of 200, only 17 tests ran before the test script exited. Everything was OK up to that point, but there is a serious problem somewhere, either in the application or in the test itself. This can be found only if the person executing the tests knows how many tests you have planned, and checks that the correct number of tests were actually executed.
This is exactly what Test::Simple provides. In the following example we deliberately exit() before the last test, simulating a test script that aborts early.
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
use Test::Simple tests => 3;
ok( sum(1, 1) == 2 );
ok( sum(2, 2) == 4 );
exit(0); # remove this after previous test is fixed
ok( sum(3, 3) == 6 );
Output:
1..3
ok 1
ok 2
# Looks like you planned 3 tests but ran 2.
$ echo $?
255
> echo %ERRORLEVEL%
255
Test::Simple - too many tests
When there are more OKs than planned the script will also print a comment about it.
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
use Test::Simple tests => 2;
ok( sum(1, 1) == 2 );
ok( sum(2, 2) == 4 );
ok( sum(3, 3) == 6 );
Output:
1..2
ok 1
ok 2
ok 3
# Looks like you planned 2 tests but ran 3.
$ echo $?
255
> echo %ERRORLEVEL%
255
Add names to the tests
- names
More advantages of Test::Simple - names of the tests.
So the Test::Simple module makes our life a bit simpler in that we don't have to write our own testing function. In addition, this new "ok" function can actually do some more: it can get two arguments. The first one indicates success or failure of the test, as explained earlier. The second one is a string which is the name of the test. When running a test, these names get printed on the same line as the counter and the "ok" or "not ok". If the names were written carefully, they can provide an immediate hint about what went wrong. Sometimes you won't even need to look at the test script itself; right from this output you'll know where to look for the bug.
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
use Test::Simple tests => 3;
ok(sum(1, 1)==2, '1+1');
ok(sum(2, 2)==4, '2+2');
ok(sum(2, 2, 2)==6, '2+2+2');
Output:
1..3
ok 1 - 1+1
ok 2 - 2+2
not ok 3 - 2+2+2
# Failed test '2+2+2'
# at examples/perl/tests/t14.pl line 12.
# Looks like you failed 1 test of 3.
Exercise: Write tests for fibo
Given the following module, write tests for the fibo()
function that returns values of the Fibonacci series.
package MyFibo;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(fibo);
sub fibo {
my ($n) = @_;
return $n if $n == 0 or $n == 1;
my @fib = (0, 1);
for (2..$n) {
push @fib, $fib[-1] + shift @fib;
}
return $fib[-1];
}
1;
Exercise: Write a test to test the Anagram checker
- Anagrams are words with the exact same letters, but in a different order.
- silent - listen
package MyAnagram;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(is_anagram);
sub is_anagram {
my ($x, $y) = @_;
my $xx = join('', sort(split(//, $x)));
my $yy = join('', sort(split(//, $y)));
return $xx eq $yy;
}
1;
Exercise: Enlarge the test
Take the test file from the last example (examples/test-simple/tests/t14.pl) and add a few more tests.
Solution: Write tests for fibo
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MyFibo qw(fibo);
use Test::Simple tests => 9;
ok( fibo(0) == 0 );
ok( fibo(1) == 1 );
ok( fibo(2) == 1 );
ok( fibo(3) == 2 );
ok( fibo(4) == 3 );
ok( fibo(5) == 5 );
ok( fibo(6) == 8 );
# what should these really be?
ok( fibo(7.5) == 13);
ok( fibo(-8) == 1);
1..9
ok 1
ok 2
ok 3
ok 4
ok 5
ok 6
ok 7
ok 8
ok 9
Solution: Write a test to test the Anagram checker
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MyAnagram qw(is_anagram);
use Test::Simple tests => 5;
ok( is_anagram("abc", "abc") );
ok( is_anagram("silent", "listen") );
ok( not is_anagram("abc", "abd") );
ok( is_anagram("anagram", "nag a gram") );
ok( is_anagram("ABC", "abc") );
1..5
ok 1
ok 2
ok 3
not ok 4
not ok 5
Solution: Enlarge our test suite
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
use Test::Simple tests => 6;
ok( sum(1, 1) == 2, '1+1');
ok( sum(2, 2) == 4, '2+2');
ok( sum(2, 2, 2) == 6, '2+2+2');
ok( sum(3, 3) == 6, '3+3');
# negative numbers
ok( sum(-1, -1) == -2, '-1 + -1');
# edge cases:
ok( sum(1, -1) == 0, '1 + -1');
Output:
1..6
ok 1 - 1+1
ok 2 - 2+2
not ok 3 - 2+2+2
# Failed test '2+2+2'
# at examples/perl/tests/t20.pl line 12.
ok 4 - 3+3
ok 5 - -1 + -1
ok 6 - 1 + -1
# Looks like you failed 1 test of 6.
Refactor larger test suite
Now that we have this tool in place, it is time to start enlarging our test suite. After all, three tests are not enough. As we add more and more tests we recognize again that there is a data part that changes and a code part that repeats. This is a good time to refactor our code again: we take the data and move it into a data structure. The code can then go over this data structure and execute each unit on its own.
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
my @tests = (
[ 1, 1, 2 ],
[ 2, 2, 4 ],
[ 2, 2, 2, 6 ],
[ 3, 3, 6 ],
[-1, -1, -2 ],
[ 1, -1, 0 ],
);
use Test::Simple tests => 6;
foreach my $t (@tests) {
my $expected = pop @$t;
my $real = sum(@$t);
my $name = join " + ", @$t;
ok( $real == $expected, $name );
}
Output:
1..6
ok 1 - 1 + 1
ok 2 - 2 + 2
not ok 3 - 2 + 2 + 2
# Failed test '2 + 2 + 2'
# at examples/perl/tests/t21.pl line 24.
ok 4 - 3 + 3
ok 5 - -1 + -1
ok 6 - 1 + -1
# Looks like you failed 1 test of 6.
For every unit we created an array reference where the last element is the expected output. Anything before that is the list of parameters. We then created an array (@tests) of all these units.
In the code we loop over all the units, $t holding the current unit, and then extract the expected output using pop. The remaining values are the parameters to the test. We also generate the name of the test from the input values.
There is a small problem though. When you add a new test to the array, you also have to remember to update the tests => 6 line.
There are a number of solutions to this problem.
Forget about your "plan", use "no_plan"
- plan
- no_plan
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
my @tests = (
[ 1, 1, 2 ],
[ 2, 2, 4 ],
[ 2, 2, 2, 6 ],
[ 3, 3, 6 ],
[-1, -1, -2 ],
[ 1, -1, 0 ],
);
use Test::Simple 'no_plan';
foreach my $t (@tests) {
my $expected = pop @$t;
my $real = sum(@$t);
my $name = join " + ", @$t;
ok( $real == $expected, $name );
}
Output:
ok 1 - 1 + 1
ok 2 - 2 + 2
not ok 3 - 2 + 2 + 2
# Failed test '2 + 2 + 2'
# at examples/perl/tests/t22.pl line 24.
ok 4 - 3 + 3
ok 5 - -1 + -1
ok 6 - 1 + -1
1..6
# Looks like you failed 1 test of 6.
The 1..6 is now at the end.
This is one of the solutions, and in some cases it is hard to avoid, but it is not a really good one. Those who advocate never putting 'no_plan' in your tests say that checking whether the exact number of test units was executed is an additional control over our test suite. Without a 'plan' you can never be sure whether, after a successful execution, the OKs you saw were all the units there were, or whether the test script aborted in the middle and not all of the units were executed.
There are also people who say it is not that important to have a plan, but personally I am in the first camp: I think the plan is important.
There is 'done_testing' we'll cover later on.
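As a brief preview (done_testing() actually comes from Test::More, which we will meet soon), a test script using it might look like this:

```perl
use strict;
use warnings;
use Test::More;   # done_testing() is provided by Test::More

ok( 1 + 1 == 2, '1+1' );
ok( 2 + 2 == 4, '2+2' );

# Instead of declaring the plan up front, declare it once all tests ran.
# The plan line (1..2) is printed at the end, as with 'no_plan', but an
# accidental early exit before this call is still reported as a failure.
done_testing();
```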
use BEGIN block with Test::Simple
- BEGIN
Another solution is the use of BEGIN blocks. In case you don't know, code placed in a BEGIN block is executed as soon as it has been compiled, even before the rest of the code gets compiled.
So in the next example the array @tests will already have content when perl tries to compile the "use Test::Simple ..." statement. This way "scalar @tests" will already return the number of elements in the array.
Please note, we have to declare "my @tests" outside the BEGIN block or it will be scoped inside that block.
This is a good solution, though it requires the use of BEGIN, which might be considered a somewhat advanced feature of Perl.
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
my @tests;
BEGIN {
@tests = (
[ 1, 1, 2 ],
[ 2, 2, 4 ],
[ 2, 2, 2, 6 ],
[ 3, 3, 6 ],
[-1, -1, -2 ],
[ 1, -1, 0 ],
);
}
use Test::Simple tests => scalar @tests;
foreach my $t (@tests) {
my $expected = pop @$t;
my $real = sum(@$t);
my $name = join " + ", @$t;
ok( $real == $expected, $name );
}
Output:
1..6
ok 1 - 1 + 1
ok 2 - 2 + 2
not ok 3 - 2 + 2 + 2
# Failed test '2 + 2 + 2'
# at examples/perl/tests/t23.pl line 27.
ok 4 - 3 + 3
ok 5 - -1 + -1
ok 6 - 1 + -1
# Looks like you failed 1 test of 6.
Put the test cases in an external file
By now we have almost completely separated the test data in the array from the code that executes the tests, but we can go a bit further.
We can move the test data to some external file. Let's create a text file that looks like the following file:
Here each line is a test-case. On the left side of the = sign are the parameters of the sum() function, on the right hand side of the = is the expected result.
We even allow for empty rows and comments: rows that start with a # character will be disregarded.
{% embed include file="src/examples/test-simple/tests/sum.txt" %}
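The embedded file is not shown here, but based on the format described above and the six test cases that appear in the output below, sum.txt presumably contains something like this:

```text
# parameters = expected result

1, 1 = 2
2, 2 = 4
2, 2, 2 = 6
3, 3 = 6
-1, -1 = -2
1, -1 = 0
```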
Instead of having the data in the BEGIN block, we put code in there that will read the data file line-by-line. It skips the lines that are either empty or contain only comments. Lines that contain data are split at every comma and the = sign. Spaces around the signs are removed. The array we get (@data) contains the information for one test-case. We push the reference to that array on the @tests array. This way, by the end of the BEGIN block the @tests array will look exactly as it looked in the previous example.
The code outside the BEGIN block stays the same.
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MySimpleCalc qw(sum);
my @tests;
BEGIN {
my $file = "$FindBin::Bin/sum.txt";
open my $fh, '<', $file or die "Could not open '$file': $!";
while (my $line = <$fh>) {
chomp $line;
next if $line =~ /^\s*(#.*)?$/;
my @data = split /\s*[,=]\s*/, $line;
push @tests, \@data;
}
}
use Test::Simple tests => scalar @tests;
foreach my $t (@tests) {
my $expected = pop @$t;
my $real = sum(@$t);
my $name = join " + ", @$t;
ok( $real == $expected, $name );
}
Output:
1..6
ok 1 - 1 + 1
ok 2 - 2 + 2
not ok 3 - 2 + 2 + 2
# Failed test '2 + 2 + 2'
# at examples/test-perl/tests/t24.pl line 29.
ok 4 - 3 + 3
ok 5 - -1 + -1
ok 6 - 1 + -1
# Looks like you failed 1 test of 6.
If for some reason the sum.txt file cannot be opened, we'll get an error message like this:
Could not open '.../sum.txt': No such file or directory at t24.pl line 11.
BEGIN failed--compilation aborted at t24.pl line 18.
Large test suite
{% embed include file="src/examples/test-simple/tests/large_sum.txt" %}
$ cp tests/large_sum.txt tests/sum.txt
$ perl tests/t24.pl
Harness
- harness
- Test::Harness
This is a module that can analyze the ok / not ok printouts with the numbers. In particular, it can analyze the output of Test::Simple and a whole class of other modules in the Test::* namespace on CPAN we are going to see later.
The harness.pl script is just a sample usage of the Test::Harness module. It accepts one or more test files, runs them and analyzes the output they generated.
#!/usr/bin/perl
use strict;
use warnings;
use Test::Harness qw(runtests);
runtests @ARGV;
Run the previous test file using Test::Harness
$ perl ../test-simple/harness.pl tests/t24.pl
If the original test script had very few test units then this output won't make much sense, but if the original test script had hundreds of OKs, we would not really be interested in all those OKs. We would mainly be interested in a summary, and in the (hopefully) small number of "not ok" printouts. This is how the output of Test::Harness looks:
tests/t24.......
# Failed test '2 + 2 + 2'
# at tests/t24.pl line 28.
# Looks like you failed 1 test of 45.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/45 subtests
Test Summary Report
-------------------
tests/t24.pl (Wstat: 256 Tests: 45 Failed: 1)
Failed test: 3
Non-zero exit status: 1
Files=1, Tests=45, 0 wallclock secs
( 0.01 usr 0.00 sys + 0.02 cusr 0.00 csys = 0.03 CPU)
Result: FAIL
Failed 1/1 test programs. 1/45 subtests failed.
Harness on success
When all the OKs are successful, the output is much shorter:
$ perl ../root/harness.pl tests/t11.pl
tests/t11.......ok
All tests successful.
Files=1, Tests=3, 0 wallclock secs
( 0.01 usr 0.00 sys + 0.01 cusr 0.01 csys = 0.03 CPU)
Result: PASS
Harness on too few tests
$ perl ../root/harness.pl tests/t12.pl
tests/t12.......# Looks like you planned 3 tests but only ran 2.
Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 1/3 subtests
Test Summary Report
-------------------
tests/t12.pl (Wstat: 65280 Tests: 2 Failed: 0)
Non-zero exit status: 255
Parse errors: Bad plan. You planned 3 tests but ran 2.
Files=1, Tests=2, 0 wallclock secs
( 0.01 usr 0.00 sys + 0.02 cusr 0.00 csys = 0.03 CPU)
Result: FAIL
Failed 1/1 test programs. 0/2 subtests failed.
prove
prove tests/t24.pl
prove tests/t11.pl
prove tests/t12.pl
Packaging as people do for CPAN
CPAN
Now let's take a small journey into how people package modules that are on CPAN. I have been using this method for all my code, whether it was open source and ended up on CPAN, a web application that I was developing only for myself, or code for a client.
If you package your application in the same way all the CPAN modules are packaged, you'll immediately get all kinds of nice features other Perl developers have built for themselves. So let's see how they do it. There are three major ways to package a module for CPAN. We could call them "standards", but it is quite hard to talk about standards in the Perl world.
The three tools are three Perl modules: ExtUtils::MakeMaker, Module::Build and Module::Install. Of these, ExtUtils::MakeMaker has been the standard for ages. Module::Build has been part of the standard distribution since 5.10, and Module::Install builds on ExtUtils::MakeMaker and packages itself with your distribution so it does not have to be installed on the target system.
The major advantage of Module::Build is that if you are writing pure perl modules you only need to know about Perl. If you are writing some code that is partially written in C and requires compilation then you'll probably have to know about Makefiles anyway so there might not be any advantage to using Module::Build.
When using Module::Build you are going to create a file called Build.PL that describes the installation process while for ExtUtils::MakeMaker and Module::Install you need to prepare a file called Makefile.PL.
A CPAN distribution has the following directory layout. In the root directory of the distribution there is the Makefile.PL or Build.PL or sometimes both. The README file is not a requirement but it is nice to have a short explanation of what the module is and how to install it. Especially if the installation is not fully automated or if there are special prerequisites.
The CHANGES or Changes file is another nice-to-have. People usually list the major changes between versions in that file, so the user can easily see what is in the new version or review the history of releases.
For testing purposes you don't need MANIFEST, but if you plan to distribute your code, even internally in your company, it is an important and required file. The MANIFEST file contains a list of all the files included in the distribution. On one hand the standard tools use this list to know what to include; on the other hand, when opening the distributed zip file, this is the file that helps check whether all the necessary files have arrived. I think it is important to manually update this file as we add and remove files from our code, but the Perl community is divided on the issue. Some people, like me, keep the file in version control and manually update it when necessary, using it as a control mechanism. Others keep a file called MANIFEST.SKIP that lists all the files that are not to be included in the MANIFEST, and then autogenerate the MANIFEST file. Some don't even have a MANIFEST.SKIP; they just make sure there is no extra file in the directory when they release a new version, so they can include everything in the MANIFEST and thus in the distribution.
META.yml is a file in YAML format that contains machine-readable meta-data about the project. This meta-data includes the name and the version of the module, the list of prerequisites, license information and a lot of other data. Most people autogenerate this file.
In addition to these files the modules that are provided by this package can be found in the lib subdirectory. In case there are also scripts to be installed, they are either in a directory called bin, or in a directory called scripts.
The t/ directory holds all the test files with a .t extension.
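For illustration, a MANIFEST.SKIP file is a list of regular expressions matching files that should not go into the distribution. A typical one might contain patterns like these (the exact patterns vary per project):

```text
# version control and build left-overs
\B\.git\b
^blib/
^Makefile$
^pm_to_blib$
^MYMETA\.
~$
\.bak$
```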
Directory layout
Makefile.PL
Build.PL
dist.ini
README
CHANGES
MANIFEST.SKIP
MANIFEST (generated)
META.yml (generated)
META.json (generated)
MYMETA.yml (generated)
MYMETA.json (generated)
lib/ Modules
bin/ scripts
t/ test scripts with .t extension
- ExtUtils::MakeMaker Makefile.PL
- Module::Install Makefile.PL
- Module::Build Build.PL
- Dist::Zilla dist.ini
Makefile.PL for ExtUtils::MakeMaker
Makefile.PL ExtUtils::MakeMaker
use strict;
use warnings;
use ExtUtils::MakeMaker;
WriteMakefile
(
NAME => 'MyTools',
VERSION_FROM => 'lib/MyTools.pm',
LICENSE => 'perl',
PREREQ_PM => {
'Exporter' => '0',
},
TEST_REQUIRES => {
'Test::Simple' => '1.00',
},
);
In this example Makefile.PL we have to say what the name of the module is, and in which file the version number can be found; this value will be part of the generated filename. The LICENSE field is a relatively new addition. In this example I am using the "perl" license, but if you are writing this for a company then you'll probably use the word "restrictive".
Once you have setup the directory structure and created a simple Makefile.PL you can type the following:
$ perl Makefile.PL
$ make
$ make test
In Windows you'll probably have nmake or dmake instead of make.
In the above three steps "perl Makefile.PL" checks if all the prerequisites are met and creates a Makefile.
make compiles your code if you have some C code in your application, and also copies all the files to a new "blib" directory inside the project directory.
The most interesting for us is the third step: make test runs all your .t files in the t/ directory using Test::Harness. So you don't have to deal with that yourself, and anyone who is familiar with the standard layout of Perl modules will immediately know what to do.
In addition there are several other things you can do. Most notably, you can execute the following commands to create the distribution you are supposed to upload to CPAN or to give to your users.
$ make manifest
$ make dist
Makefile.PL for Module::Install
Makefile.PL Module::Install
use strict;
use warnings;
use inc::Module::Install;
name 'MyTools';
license 'perl';
author 'Foo Bar';
all_from 'lib/MyTools.pm';
requires 'Exporter' => '0';
test_requires 'Test::Simple' => '1.00';
WriteAll;
In Module::Install the declaration is cleaner and it does not need to be installed on the target machine. When running perl Makefile.PL it creates an inc subdirectory and copies itself there. One should distribute this directory as well.
On the target system Module::Install is loaded from this subdirectory.
$ perl Makefile.PL
$ make
$ make test
$ make manifest
$ make dist
Build.PL
use strict;
use warnings;
use Module::Build;
my $build = Module::Build->new(
module_name => 'MyTools',
dist_version_from => 'lib/MyTools.pm',
license => 'perl',
create_makefile_pl => 0,
requires => {
'Exporter' => '0',
},
build_requires => {
'Test::Simple' => '1.00',
},
);
$build->create_build_script;
$ perl Build.PL
$ perl Build
$ perl Build test
$ perl Build manifest
$ perl Build dist
Directories under t and prove
t/
other.t
something.t
win32/
ui.t
prove t/*.t
prove -r recursively
prove t/something.t t/other.t
prove t/win32 t/win32/*.t
If our test suite was set up as outlined above, even without a real Perl module, we can also keep the test files in a directory hierarchy under t/; in that case we have to indicate this in Makefile.PL or Build.PL. We can also run the test scripts one by one, or per directory, using the "prove" utility.
Let's see the layout of the sample CPAN packages we have, and run their tests:
By default "make test" or "perl Build test" will run all the t/*.t files. Sometimes we want to run them one by one. We could run perl t/something.t, but that would try to use the installed versions of the modules and not those you are about to install. So it is better to use perl -Ilib t/something.t for that.
It is even better to use the prove t/something.t command that comes with Test::Harness. It too by default would attempt to use the already installed modules, so you can run prove -b t/something.t to include the files from blib/lib. prove has another frequently used flag, -v, which puts it in verbose mode.
Simple CPAN-like module
.
├── Changes
├── lib
│ └── MyTools.pm
├── Makefile.PL
├── MANIFEST.SKIP
├── README
└── t
└── 01-add.t
v0.02 2021.01.17
Test the add function.
v0.01 2021.01.16
First version released
use strict;
use warnings;
use ExtUtils::MakeMaker;
WriteMakefile
(
NAME => 'MyTools',
VERSION_FROM => 'lib/MyTools.pm',
LICENSE => 'perl',
PREREQ_PM => {
'Exporter' => '0',
},
TEST_REQUIRES => {
'Test::Simple' => '1.00',
},
);
{% embed include file="src/examples/distribution/project_with_extutils_makemaker/MANIFEST.SKIP" %}
A few words about the module
Maybe installation instructions.
package MyTools;
use strict;
use warnings;
use 5.010;
use Exporter qw(import);
our @EXPORT_OK = qw(add);
our $VERSION = '0.02';
=head1 NAME
MyTools - some tools to show packaging
=head1 SYNOPSIS
Short example
=cut
sub add {
return $_[0] + $_[1];
}
sub multiply {
return $_[0] * $_[1];
}
1;
use strict;
use warnings;
use Test::Simple tests => 1;
use MyTools qw(add);
ok(add(2, 3) == 5, 'adding two numbers');
Commands for CPAN release
perl Makefile.PL
make
make test
make manifest
make dist
Exercise: Add tests
- Add some tests to the 01-add.t file.
- Create a new file called 02-multiply.t and add tests to verify the multiply() function.
- See how it works when running directly with perl and using the harness.
- Increase the version number in the module.
- Add a new entry to the Changes file describing your changes.
- Create the release of the new CPAN distribution.
Test::Simple
This is all very nice and Simple.
What if you want More?
Test::More
Moving over to Test::More
- Test::More
Test::Simple is really a very simple module. Its sole exported function is the "ok" function.
Test::More has the same "ok" function - so it is a drop-in replacement - but it also has lots of other functions and tools:
ok
is
isnt
diag
note
like
cmp_ok
is_deeply
SKIP
TODO
done_testing
subtest
Better error reporting.
In the end every test can be based on the single ok() function. The additional functions mainly serve as convenience methods to allow better error reporting.
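To illustrate the idea (a toy sketch, not the real Test::More source), a helper like is() can be thought of as ok() plus diag():

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Toy re-implementation only to show the idea; the real is() in
# Test::More is more careful (undef handling, overloading, etc.).
sub my_is {
    my ($got, $expected, $name) = @_;
    ok( $got eq $expected, $name )
        or diag "got: '$got'\nexpected: '$expected'";
}

my_is( 2 + 2, 4, 'my_is reports like is' );
```
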
Test::Simple ok( trueness, name);
-
ok|Test::Simple
-
The previous examples using Test::Simple.
use strict;
use warnings;
use lib 'lib';
use MySimpleCalc qw(sum);
use Test::Simple tests => 3;
ok( sum(1, 1) == 2, '1+1');
ok( sum(2, 2) == 4, '2+2');
ok( sum(2, 2, 2) == 6, '2+2+2');
perl t/30.t
1..3
ok 1 - 1+1
ok 2 - 2+2
not ok 3 - 2+2+2
# Failed test '2+2+2'
# at t/30.t line 11.
# Looks like you failed 1 test of 3.
Test::More ok( trueness, name);
-
ok|Test::More
-
Test::More is a drop-in replacement of Test::Simple.
use strict;
use warnings;
use lib 'lib';
use MySimpleCalc qw(sum);
use Test::More tests => 3;
ok( sum(1, 1) == 2, '1+1');
ok( sum(2, 2) == 4, '2+2');
ok( sum(2, 2, 2) == 6, '2+2+2');
perl t/31.t
1..3
ok 1 - 1+1
ok 2 - 2+2
not ok 3 - 2+2+2
# Failed test '2+2+2'
# at t/31.t line 12.
# Looks like you failed 1 test of 3.
Test::More is( value, expected_value, name);
- is
It would be much better to see the expected value and the actually received value. This usually helps in locating the problem.
use strict;
use warnings;
use lib 'lib';
use MySimpleCalc qw(sum);
use Test::More tests => 3;
is(sum(1, 1), 2, '1+1' );
is(sum(2, 2), 4, '2+2' );
is(sum(2, 2, 2), 6, '2+2+2' );
perl t/32.t
1..3
ok 1 - 1+1
ok 2 - 2+2
not ok 3 - 2+2+2
# Failed test '2+2+2'
# at t/32.t line 11.
# got: '4'
# expected: '6'
# Looks like you failed 1 test of 3.
See, in this case we can already guess that it cannot add 3 values.
is compares using eq
Test::More isnt( value, not_expected_value, name);
- isnt
Sometimes you are expecting to get a value but you don't really know what. You just know one specific value that you want to make sure you have not received.
use strict;
use warnings;
use lib 'lib';
use MyTools;
use Test::More tests => 3;
foreach my $str ('Deep Purple', 'Beatles', 'ABBA') {
isnt(scramble($str), $str, $str);
}
perl t/isnt.t
1..3
ok 1 - Deep Purple
ok 2 - Beatles
not ok 3 - ABBA
# Failed test 'ABBA'
# at t/isnt.t line 10.
# got: 'ABBA'
# expected: anything else
# Looks like you failed 1 test of 3.
This isn't a good example though.
Test::More isnt undef
- undef
use strict;
use warnings;
use Test::More tests => 2;
my $x;
isnt($x, undef);
$x = 1;
isnt($x, undef);
perl t/isnt_undef.t
Output:
1..2
not ok 1
# Failed test at t/isnt_undef.t line 7.
# got: undef
# expected: anything else
ok 2
# Looks like you failed 1 test of 2.
note( message ) or diag( message );
- diag
- note
- diag prints out a message along with the rest of the output.
- note() does the same, but when running under prove it does not show up.

Use these for any extra output to ensure that your printouts will not interfere with future changes in the test environment modules (such as prove or Test::Harness).
use strict;
use warnings;
use Test::More tests => 1;
ok 1;
diag "This is a diag message";
note "This is a note message";
$ perl t/messages.t
1..1
ok 1
# This is a diag message
# This is a note message
prove t/messages.t
# This is a diag message
t/messages.t .. ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.01 usr 0.01 sys + 0.03 cusr 0.00 csys = 0.05 CPU)
Result: PASS
prove -v t/messages.t
# This is a diag message
t/messages.t ..
1..1
ok 1
# This is a note message
ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.03 cusr 0.00 csys = 0.04 CPU)
Result: PASS
(note or diag) explain( a_variable );
- explain

explain() will recognize whether its parameter is a simple scalar or a reference to a more complex data structure. Its result must be passed to either note() or diag().
use strict;
use warnings;
use Test::More tests => 1;
ok 1;
my $x = "String data";
my @y = ( 1, 2, 3 );
my %h = (
foo => 'bar',
numbers => [ 42, 17 ],
);
diag $x;
diag \@y;
diag \%h;
diag explain $x;
diag explain \@y;
note explain \%h;
perl t/explain.t
1..1
ok 1
# String data
# ARRAY(0x5573692156f8)
# HASH(0x5573691b5b80)
# String data
# [
# 1,
# 2,
# 3
# ]
# {
# 'foo' => 'bar',
# 'numbers' => [
# 42,
# 17
# ]
# }
TODO
- TODO
When you don't want to see the failing tests any more
use strict;
use warnings;
use MySimpleCalc qw(sum);
use Test::More tests => 3;
diag "Add two numbers";
is(sum(1, 1), 2, '1+1');
is(sum(2, 2), 4, '2+2');
diag "Add 3 numbers";
TODO: {
local $TODO = "fix bug summing more than 2 values #173";
is(sum(2, 2, 2), 6, '2+2+2');
}
$ prove -l t/34.t
# Add two numbers
# Add 3 numbers
t/34.t .. ok
All tests successful.
Files=1, Tests=3, 1 wallclock secs ( 0.01 usr 0.00 sys + 0.02 cusr 0.01 csys = 0.04 CPU)
Result: PASS
TODO Verbose output
- TODO
$ prove -lv t/34.t
# Add two numbers
# Add 3 numbers
t/34.t ..
1..3
ok 1 - 1+1
ok 2 - 2+2
not ok 3 - 2+2+2 # TODO fix bug summing more than 2 values #173
# Failed (TODO) test '2+2+2'
# at t/34.t line 16.
# got: '4'
# expected: '6'
ok
All tests successful.
Files=1, Tests=3, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.03 cusr 0.00 csys = 0.04 CPU)
Result: PASS
In the eXtreme Programming paradigm the following two key aspects are somewhat in contradiction:
- Write your test before you write your code.
- Make sure your test suite always passes at 100%.
Of course after you have already written the tests for a new feature, but before you write the actual code, there is a short period when your test suite will not pass 100%.
Worse than that, it is also recommended that as soon as you get a bug report you write a test case that reproduces the bug. Obviously this test will fail before you fix the bug and will hopefully pass once you have fixed it.
In order to keep the test suite happy there is a way to tell the harness that a test is expected to fail. That is, we know it will fail. The way to achieve this is to put one or more tests in a TODO block.
TODO: unexpected success
What if the bug gets fixed - accidentally?
# Add two numbers
# Add 3 numbers
t/35.t .. ok
All tests successful.
Test Summary Report
-------------------
t/35.t (Wstat: 0 Tests: 3 Failed: 0)
TODO passed: 3
Files=1, Tests=3, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.03 cusr 0.00 csys = 0.05 CPU)
Result: PASS
# Add two numbers
# Add 3 numbers
t/35.t ..
1..3
ok 1 - 1+1
ok 2 - 2+2
ok 3 - 2+2+2 # TODO fix bug summing more than 2 values
ok
All tests successful.
Test Summary Report
-------------------
t/35.t (Wstat: 0 Tests: 3 Failed: 0)
TODO passed: 3
Files=1, Tests=3, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.03 cusr 0.00 csys = 0.04 CPU)
Result: PASS
TODO: unexpected success (the code)
package MySimpleCalcFixed;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(sum);
sub sum {
my $sum = 0;
$sum += $_ for @_;
return $sum;
}
1;
use strict;
use warnings;
use lib 'lib';
use MySimpleCalcFixed qw(sum);
use Test::More tests => 3;
diag "Add two numbers";
is(sum(1, 1), 2, '1+1');
is(sum(2, 2), 4, '2+2');
diag "Add 3 numbers";
TODO: {
local $TODO = "fix bug summing more than 2 values";
is(sum(2, 2, 2), 6, '2+2+2');
}
MyTools with various functions
package MyTools;
use strict;
use warnings;
use DateTime;
our $VERSION = '0.01';
use Exporter qw(import);
our @EXPORT_OK = qw(
last_update
get_copyright
get_copyright_broken
fibo
fibonacci
wait_for_input_with_timeout
);
sub fibo {
my @f = _fibonacci(@_);
return $f[-1];
}
sub fibonacci {
return [ _fibonacci(@_) ];
}
sub _fibonacci {
my ($n) = @_;
die "Need to get a number\n" if not defined $n or $n !~ /^[0-9]+$/;
if ($n < 0) {
warn "Given number must be > 0";
return 0;
}
return (0) if $n == 0;
return (0, 1) if $n == 1;
return (0, 1, 1, 4, 3) if $n == 4;
my @fib = (0, 1);
for (2..$n) {
push @fib, $fib[-1]+$fib[-2];
}
return @fib;
}
sub get_copyright {
my $year = (localtime)[5]+1900;
return "Copyright 2000-$year Gabor Szabo, all rights reserved.";
}
sub get_copyright_broken {
my $year = "19" . (localtime)[5];
return "Copyright 2000-$year Gabor Szabo, all rights reserved.";
}
sub last_update {
return "This page was last updated at " . DateTime->now;
}
sub wait_for_input_with_timeout {
sleep rand shift;
}
1;
like(value, qr/expected regex/, name);
What if you don't want to, or can't realistically, expect an exact match with the result?
You can use like, which compares with a regex using =~.
use strict;
use warnings;
use 5.010;
use lib 'lib';
use MyTools qw(last_update);
say (last_update());
This page was last updated at 2020-11-10T09:19:38
use strict;
use warnings;
use MyTools qw(last_update);
use Test::More tests => 3;
my $resp = last_update();
diag $resp;
ok( $resp =~ /^This page was last updated at/, 'last_update =~');
like( $resp, qr/^This page was last updated at/, 'last_update like');
like( $resp,
qr/^This page was last updated at \d\d\d\d-\d\d-\d\dT\d\d:\d\d:\d\d$/, 'last_update full match');
prove t/last_update.t
# This page was last updated at 2020-11-10T09:20:48
t/last_update.t .. ok
All tests successful.
Files=1, Tests=3, 0 wallclock secs ( 0.01 usr 0.01 sys + 0.13 cusr 0.00 csys = 0.15 CPU)
Result: PASS
like(value, qr/expected regex/, name);
- like
use strict;
use warnings;
use 5.010;
use lib 'lib';
use MyTools qw(get_copyright get_copyright_broken);
say (get_copyright());
say (get_copyright_broken());
Copyright 2000-2020 Gabor Szabo, all rights reserved.
Copyright 2000-19120 Gabor Szabo, all rights reserved.
use strict;
use warnings;
use MyTools qw(get_copyright get_copyright_broken);
use Test::More tests => 3;
like( get_copyright(),
qr/Copyright 2000-\d{4} Gabor Szabo, all rights reserved./, 'copyright');
my $copyright = get_copyright_broken();
ok( $copyright =~ /Copyright 2000-\d{4} Gabor Szabo, all rights reserved./, 'use =~' );
like( $copyright,
qr/Copyright 2000-\d{4} Gabor Szabo, all rights reserved./, 'use like');
1..2
ok 1 - copyright
not ok 2 - copyright
# Failed test 'copyright'
# at t/copyright.t line 12.
# 'Copyright 2000-19120 Gabor Szabo, all rights reserved.'
# doesn't match '(?^:Copyright 2000-\d{4} Gabor Szabo, all rights reserved.)'
# Looks like you failed 1 test of 2.
Another example with like
use strict;
use warnings;
use Test::More tests => 2;
like( foo(), qr/\d+/, "there are some digits in the result" );
like( bar(), qr/\d+/, "there are some digits in the result" );
sub foo {
return "This is a long text with a number 42 in it";
}
sub bar {
return "This is another string with no number in it";
}
1..2
ok 1 - there are some digits in the result
not ok 2 - there are some digits in the result
# Failed test 'there are some digits in the result'
# at t/like.t line 8.
# 'This is another string with no number in it'
# doesn't match '(?^:\d+)'
# Looks like you failed 1 test of 2.
cmp_ok( this, op, that, name);
- cmp_ok
Sometimes an eq comparison with is() or a regular expression match with like() just isn't good enough. For example, what if you would like to check the rand() function of perl? Its result must be between 0 (inclusive) and 1 (exclusive).
In another case you might have a function call that should finish within a certain period of time. You don't have an exact expectation but you know the elapsed time must be between a lower and an upper limit.
cmp_ok
compares with any operator you like.
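For the rand() case mentioned above, a sketch might look like this:

```perl
use strict;
use warnings;
use Test::More tests => 20;

# rand() must return a value in the range [0, 1)
for my $i (1 .. 10) {
    my $r = rand();
    cmp_ok $r, '>=', 0, "call $i: at least 0";
    cmp_ok $r, '<',  1, "call $i: below 1";
}
```
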
use strict;
use warnings;
use Test::More tests => 2;
use MyTools qw(wait_for_input_with_timeout);
my $start = time;
wait_for_input_with_timeout(3);
my $end = time;
cmp_ok $end - $start, ">=", 2, "process was waiting at least 2 secs";
cmp_ok $end - $start, "<=", 3, "process was waiting at most 3 secs";
1..2
ok 1 - process was waiting at least 2 secs
ok 2 - process was waiting at most 3 secs
1..2
not ok 1 - process was waiting at least 2 secs
# Failed test 'process was waiting at least 2 secs'
# at t/cmp_ok.t line 11.
# '0'
# >=
# '2'
ok 2 - process was waiting at most 3 secs
# Looks like you failed 1 test of 2.
- Actually this is a really bad test as it can fail randomly
is_deeply( complex_structure, expected_complex_structure, name);
- is_deeply
Compare two Perl data structures:
use strict;
use warnings;
use Test::More tests => 11;
use MyTools qw(fibo fibonacci);
is fibo(0), 0;
is fibo(1), 1;
is fibo(2), 1;
is fibo(3), 2;
is fibo(4), 3;
is fibo(5), 5;
is_deeply fibonacci(1), [0, 1], 'fibs 1';
is_deeply fibonacci(2), [0, 1, 1], 'fibs 2';
is_deeply fibonacci(3), [0, 1, 1, 2], 'fibs 3';
is_deeply fibonacci(4), [0, 1, 1, 2, 3], 'fibs 4';
is_deeply fibonacci(5), [0, 1, 1, 2, 3, 5], 'fibs 5';
# Failed test 'fibs 4'
# at t/is_deeply.t line 20.
# Structures begin differing at:
# $got->[3] = '4'
# $expected->[3] = '2'
# Looks like you failed 1 test of 11.
t/is_deeply.t ..
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/11 subtests
Test Summary Report
-------------------
t/is_deeply.t (Wstat: 256 Tests: 11 Failed: 1)
Failed test: 10
Non-zero exit status: 1
Files=1, Tests=11, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.13 cusr 0.01 csys = 0.15 CPU)
Result: FAIL
Function returning data from bug-tracker
If you look at the code that generates the bug reports you'll see that testing the 4th return value
- which is quite complex already - is hard. We cannot test against a fixed hash as some of the values are totally dynamic (e.g. a timestamp).
package MyBugs;
use strict;
use warnings FATAL => 'all';
our $VERSION = '0.01';
use base 'Exporter';
our @EXPORT = qw(fetch_data_from_bug_tracking_system);
=head2 fetch_data_from_bug_tracking_system
fake the behavior of a bug tracking system by returning various constructs
=cut
sub fetch_data_from_bug_tracking_system {
my @sets = (
{ bugs => 3,
errors => 6,
failures => 8,
warnings => 1,
},
{ bugs => 3,
errors => 9,
failures => 8,
warnings => 1,
},
{ bogs => 3,
erors => 9,
failures => 8,
warnings => 1,
},
{ bugs => 'many',
errors => 6,
failures => 8,
warnings => 1,
},
{
bugs => [
{
ts => time,
name => "System bug",
severity => 3,
},
{
ts => time - int rand(100),
name => "Incorrect severity bug",
severity => "extreme",
},
{
ts => time - int rand(200),
name => "Missing severity bug",
},
],
},
);
my $h = $sets[shift];
return %$h;
}
1;
is_deeply on a hash
Another example with is_deeply
checking the returned hash from a bug tracking system.
use strict;
use warnings;
use MyBugs;
use Test::More tests => 3;
my %expected = (
bugs => 3,
errors => 6,
failures => 8,
warnings => 1,
);
my %a = fetch_data_from_bug_tracking_system(0);
is_deeply( \%a, \%expected, "Query 0" );
my %b = fetch_data_from_bug_tracking_system(1);
is_deeply( \%b, \%expected, "Query 1" );
my %c = fetch_data_from_bug_tracking_system(2);
is_deeply( \%c, \%expected, "Query 2" );
# Failed test 'Query 1'
# at t/is_deeply_bugs.t line 21.
# Structures begin differing at:
# $got->{errors} = '9'
# $expected->{errors} = '6'
# Failed test 'Query 2'
# at t/is_deeply_bugs.t line 24.
# Structures begin differing at:
# $got->{bugs} = Does not exist
# $expected->{bugs} = '3'
# Looks like you failed 2 tests of 3.
t/is_deeply_bugs.t ..
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/3 subtests
Test Summary Report
-------------------
t/is_deeply_bugs.t (Wstat: 512 Tests: 3 Failed: 2)
Failed tests: 2-3
Non-zero exit status: 2
Files=1, Tests=3, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.03 cusr 0.00 csys = 0.04 CPU)
Result: FAIL
What if we are testing a live system where we are not interested in the exact values, just in the keys and in the fact that the values are numbers?
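One possible sketch (not from the bug-tracker example itself): compare only the sorted list of keys with is_deeply() and check each value with like():

```perl
use strict;
use warnings;
use Test::More tests => 5;

# hypothetical data from a live system: the keys are known, the values are not
my %report = ( bugs => 3, errors => 9, failures => 8, warnings => 1 );

is_deeply( [ sort keys %report ],
    [qw(bugs errors failures warnings)], 'expected keys' );

# every value should be a non-negative whole number
like( $report{$_}, qr/^\d+$/, "$_ is numeric" ) for sort keys %report;
```
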
Platform dependent tests
use strict;
use warnings;
use Test::More 'no_plan';
like( `/sbin/ifconfig`, qr/eth0/ );
like( `ipconfig`, qr/Windows IP Configuration/ );
ok 1
Can't exec "ipconfig": No such file or directory at without_skip.t line 8.
not ok 2
# Failed test at examples/intro/without_skip.t line 8.
# undef
# doesn't match '(?-xism:Windows IP Configuration)'
1..2
# Looks like you failed 1 test of 2.
Sometimes, you know that part of your test suite isn't relevant. Running it - if at all possible - would report false results. Maybe some features of your system are platform dependent and you don't want to test them on an unsupported platform. Sometimes the failure of previous tests makes a test irrelevant.
In all such cases what you actually want is to skip the tests. Surprisingly the way to do that is to enclose the tests in a SKIP block.
SKIP some tests
- SKIP
use strict;
use warnings;
use Test::More tests => 2;
SKIP: {
skip "Linux related tests", 1 if $^O ne 'linux';
like( `/sbin/ifconfig`, qr/eth0|enp0s31f6/ );
}
SKIP: {
skip "Windows related tests", 1 if $^O !~ /MSWin/i;
like( `ipconfig`, qr/Windows IP Configuration/ );
}
Output:
1..2
ok 1
ok 2 # skip Windows related tests
locale
- locale
use strict;
use warnings;
use Test::More tests => 1;
use POSIX qw(locale_h);
my $locale = setlocale(LC_CTYPE);
diag $locale;
# OSX/English: en_US.UTF-8
# Windows:
# Hebrew_Israel.1255
my $ipconfig = 'Windows IP Configuration';
if ($locale eq 'German_Germany.1252') {
$ipconfig = 'Windows-IP-Konfiguration';
}
SKIP: {
skip "Windows related tests", 1 if $^O !~ /MSWin/i;
like( `ipconfig`, qr/$ipconfig/ );
}
Stop running current test script
When running a test script sometimes we reach a failure that is so problematic that we cannot go on testing. This can be in the scope of a single test script, in which case you would call exit() to abort the current test script, or it can be so bad that all testing should stop. In that case you should call BAIL_OUT(). That will indicate to the harness that it should not run any other test script.
use strict;
use warnings;
use Test::More tests => 3;
my $x = 0;
ok(1, "first");
ok($x, "second") or exit;
ok(1, "third");
prove t/exit.t t/other.t
# Failed test 'second'
# at t/exit.t line 9.
# Looks like you planned 3 tests but ran 2.
# Looks like you failed 1 test of 2 run.
t/exit.t ...
1..3
ok 1 - first
not ok 2 - second
Dubious, test returned 1 (wstat 256, 0x100)
Failed 2/3 subtests
t/other.t ..
1..1
ok 1 - Other test
ok
Test Summary Report
-------------------
t/exit.t (Wstat: 256 Tests: 2 Failed: 1)
Failed test: 2
Non-zero exit status: 1
Parse errors: Bad plan. You planned 3 tests but ran 2.
Files=2, Tests=3, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.09 cusr 0.01 csys = 0.11 CPU)
Result: FAIL
Stop all the test scripts
- BAIL_OUT
use strict;
use warnings;
use Test::More tests => 3;
my $x = 0;
ok(1, "first");
ok($x, "second") or BAIL_OUT("no way");
ok(1, "third");
prove t/bail_out.t t/other.t
t/bail_out.t ..
1..3
ok 1 - first
not ok 2 - second
# Failed test 'second'
# at t/bail_out.t line 9.
Bailout called. Further testing stopped: no way
Bail out! no way
FAILED--Further testing stopped: no way
Exercises
- Take a local copy of the Math::RPN module located in (examples/Math-RPN) and add 30 test cases. See what Reverse Polish Notation is.
- You can also look for a module on CPAN - maybe one that you are using a lot. Maybe one that you found a problem with.
- Download the tar.gz file from CPAN, unzip it (don't install the module), and write at least 20 tests.
Test coverage using Devel::Cover
- Devel::Cover
cover --test
Once we know that our tests are passing we can check which lines of the code are exercised during the test execution. For this we can use Devel::Cover by Paul Johnson. First we need to run the tests again, this time instrumented by Devel::Cover. This will be much slower than the regular execution, but at the end we get a text report and we can build a nice HTML report with drill-down about all the code we ran.
{% embed include file="src/coverage_summary.png" %}
All tests successful.
Files=11, Tests=2078, 50 wallclock secs
Test coverage report example
package MyMath;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(add div fibo abs);
our $VERSION = '0.02';
=head1 NAME
MyMath - some tools to show test coverage
=head1 SYNOPSIS
Short example
=head2 add
Some docs
=cut
sub add {
return $_[0] + $_[1];
}
sub abs {
my ($num) = @_;
if ($num < 0) {
return -$num;
}
return $num;
}
sub fibo {
my ($n) = @_;
return $n if $n == 0 or $n == 1;
my @fib = (0, 1);
for (2..$n) {
push @fib, $fib[-1] + shift @fib;
}
return $fib[-1];
}
sub div {
return $_[0] / $_[1];
}
1;
use strict;
use warnings;
use Test::More 'no_plan';
use MyMath qw(add div fibo abs);
is(add(2, 3), 5);
#is(abs(-2), 2);
#is(abs(2), 2);
#is(fibo(0), 0);
#is(fibo(1), 1);
#is(fibo(2), 1);
#is(div(6, 3), 2);
#is(div(6, 0), 0);
use strict;
use warnings;
use ExtUtils::MakeMaker;
WriteMakefile
(
NAME => 'MyMath',
VERSION_FROM => 'lib/MyMath.pm',
LICENSE => 'perl',
PREREQ_PM => {
'Exporter' => '0',
},
TEST_REQUIRES => {
'Test::Simple' => '1.00',
},
);
Declare your plan at execution time
- plan
use Test::More tests => 6;
or
use Test::More;
...
plan tests => 6;
No need to tell your plan at load time.
Earlier when we were talking about Test::Simple we had a case when the test data was placed in an array and the test script looped over the array executing the function to be tested for each input.
Later we moved the data to an external file which made it even more difficult to declare the plan so we used lengthy code in the BEGIN block in order to have the expected number of tests before Test::Simple is loaded into memory.
With Test::More we have a much better solution. We don't have to declare the plan on the use Test::More line. We can do that later, in the run time of the Perl script.
use strict;
use warnings;
use MySimpleCalc qw(sum);
use Test::More;
my @tests = (
[ 1, 1, 2 ],
[ 2, 2, 4 ],
[ 2, 2, 2, 6],
[3, 3, 6],
[-1, -1, -2],
[1, -1, 0],
);
plan tests => scalar @tests;
foreach my $t (@tests) {
my $expected = pop @$t;
my $real = sum(@$t);
my $name = join " + ", @$t;
ok( $real == $expected, $name );
}
done_testing
- done_testing
I am not a fan of it, but in rare cases it is useful to know that done_testing can be used to signal that all the tests have been done. This way we don't need to have a "plan".
use strict;
use warnings;
use MySimpleCalc qw(sum);
use Test::More;
is sum(1, 1), 2, '1+1';
is sum(2, 2), 4, '2+2';
done_testing;
plan tests, no_plan, done_testing
package Fibonacci;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(fibo);
sub fibo {
my ($n) = @_;
return $n if $n == 0 or $n == 1;
#exit if $n == 4;
my @fib = (0, 1);
for (2..$n) {
push @fib, $fib[-1] + shift @fib;
}
return $fib[-1];
}
1;
use strict;
use warnings;
use Fibonacci qw(fibo);
use Test::More;
plan tests => 6;
is fibo(1), 1;
is fibo(2), 1;
is fibo(3), 2;
is fibo(4), 3;
is fibo(5), 5;
is fibo(6), 8;
use strict;
use warnings;
use Fibonacci qw(fibo);
use Test::More 'no_plan';
is fibo(1), 1;
is fibo(2), 1;
is fibo(3), 2;
is fibo(4), 3;
is fibo(5), 5;
is fibo(6), 8;
use strict;
use warnings;
use Fibonacci qw(fibo);
use Test::More;
is fibo(1), 1;
is fibo(2), 1;
is fibo(3), 2;
is fibo(4), 3;
is fibo(5), 5;
is fibo(6), 8;
done_testing;
Compute test plan
use strict;
use warnings;
use Test::More;
my @cases = (1, 2, 7);
plan tests => 2 * scalar @cases;
for my $case (@cases) {
my $value = rand($case);
cmp_ok $value, '>=', 0, 'bigger than 0';
cmp_ok $value, '<', $case, "smaller than $case";
}
subtest with plan
use strict;
use warnings;
use MySimpleCalc qw(sum);
use Test::More;
plan tests => 2;
subtest positive => sub {
plan tests => 2;
is sum(1, 1), 2, '1+1';
is sum(2, 2), 4, '2+2';
};
subtest negatives => sub {
plan tests => 2;
is sum(-1, -1), -2, '-1, -1';
is sum(-1, -1, -1), -3, '-1, -1, -1';
};
prove -l t/planned_subtest.t
# Failed test '-1, -1, -1'
# at t/planned_subtest.t line 21.
# got: '-2'
# expected: '-3'
# Looks like you failed 1 test of 2.
# Failed test 'negatives'
# at t/planned_subtest.t line 22.
# Looks like you failed 1 test of 2.
t/planned_subtest.t ..
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/2 subtests
Test Summary Report
-------------------
t/planned_subtest.t (Wstat: 256 Tests: 2 Failed: 1)
Failed test: 2
Non-zero exit status: 1
Files=1, Tests=2, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.13 cusr 0.00 csys = 0.14 CPU)
Result: FAIL
subtest with implicit done_testing
- done_testing
use strict;
use warnings;
use MySimpleCalc qw(sum);
use Test::More;
subtest positives => sub {
is sum(1, 1), 2, '1+1';
is sum(2, 2), 4, '2+2';
};
subtest negatives => sub {
is sum(-1, -1), -2, '-1, -1';
is sum(-1, -1, -1), -3, '-1, -1, -1';
};
done_testing;
# Failed test '-1, -1, -1'
# at t/subtest.t line 15.
# got: '-2'
# expected: '-3'
# Looks like you failed 1 test of 2.
# Failed test 'negatives'
# at t/subtest.t line 16.
# Looks like you failed 1 test of 2.
t/subtest.t ..
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/2 subtests
Test Summary Report
-------------------
t/subtest.t (Wstat: 256 Tests: 2 Failed: 1)
Failed test: 2
Non-zero exit status: 1
Files=1, Tests=2, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.13 cusr 0.00 csys = 0.15 CPU)
Result: FAIL
There is an implicit call to done_testing inside each subtest. Subtests are also skip-able, etc.
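For example, a whole subtest can be skipped with plan skip_all inside it (a sketch reusing the earlier Windows check):

```perl
use strict;
use warnings;
use Test::More;

subtest windows_only => sub {
    # skipping the whole subtest still counts it as passing
    plan skip_all => 'only relevant on Windows' if $^O !~ /MSWin/i;
    like( `ipconfig`, qr/Windows IP Configuration/ );
};

done_testing;
```
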
subtest to restrict scope
use strict;
use warnings;
use HTTP::CookieJar::LWP ();
use LWP::UserAgent ();
use Test::More;
subtest first => sub {
my $jar = HTTP::CookieJar::LWP->new;
my $ua = LWP::UserAgent->new(
cookie_jar => $jar,
protocols_allowed => ['http', 'https'],
timeout => 10,
);
my $response = $ua->get('https://google.com/');
ok $response->is_success;
diag $jar->dump_cookies;
};
subtest second => sub {
my $jar = HTTP::CookieJar::LWP->new;
my $ua = LWP::UserAgent->new(
cookie_jar => $jar,
protocols_allowed => ['http', 'https'],
timeout => 10,
);
my $response = $ua->get('https://google.com/');
ok $response->is_success;
diag $jar->dump_cookies;
};
done_testing;
skip all
- skip_all
use strict;
use warnings;
use Test::More;
eval 'use Test::Perl::Critic 1.02';
plan skip_all => 'Test::Perl::Critic 1.02 required' if $@;
ok 1;
done_testing;
echo $?
0
Exercise: skip test
- Write a test that will be skipped if it is run as root. (Administrator on Windows?)
- And another test that will be executed only when not run as root.
- Write a test script called long.t that takes a long time to run. Execute it only if the RUN_LONG environment variable is true.
- Other ideas: Skip tests that need database access and/or tests that need network access.
Exercise: use coverage
Generate a test coverage report for Math::RPN or the module you are testing and look for holes in the coverage. Add more tests. Another suggestion:
Test blocks (use subtest instead)
- {}
Create small blocks of tests
When writing a test script you often write similar pieces of code that do unrelated tests. You can reuse the same variables throughout the test script but that means that in case of a bug in the test script the various parts might have effects on each other.
You can also invent new names for the variables but there are only so many names you can reasonably use for the same kind of data.
The best solution probably is to put the individual pieces into anonymous blocks. That serves several purposes. First of all it makes clear to both the writer of the code and the reader that the blocks are mostly independent. It also ensures that the variables used in one block won't interfere with the variables in the other block. You'll even have to define these variables in both blocks.
use strict;
use warnings;
use MyTools;
use Test::More tests => 2;
{
my $result = sum(1, 1);
is $result, 2, '1+1';
}
{
my $result = sum(2, 2);
is $result, 4, '2+2';
}
Counting tests in the small blocks (use subtest instead)
- BEGIN
When you are writing many tests in one file you'll quickly face the problem of keeping the "plan" up to date. You will add a test and forget to update the number. Worse, you'll add many tests and when you suddenly remember you did not update the number it is already too late. Will you switch to "no_plan"? Will you count the ok(), is() and similar calls? Will you run the test and update your expectation accordingly?
There is a trick I learned on the perl-qa mailing list.
You declare a variable called $tests at the beginning of the script. Then at the end of each section you update the number in a BEGIN block, which runs at compile time, before the plan is set.
use strict;
use warnings;
use MyTools;
use Test::More;
my $tests;
plan tests => $tests;
{
my $result = sum(1, 1);
is $result, 2, '1+1';
BEGIN { $tests += 1; }
}
{
my $result = sum(2, 2);
is $result, 4, '2+2';
BEGIN { $tests += 1; }
}
See also: Test::Block
Test libraries
Multiple expected values
dice()
returns a whole number between 1-6.
In the application we have a function that can return any one of a list of possible values. In our example we have a dice() function that throws the dice. It should return a whole number between 1-6.
use strict;
use warnings;
use MyTools;
use Test::More tests => 2 * 4;
for (1..4) {
my $value = dice();
cmp_ok $value, '>', 0, 'bigger than 0';
cmp_ok $value, '<', 7, 'smaller than 7';
}
perl examples/test-perl/t/dice_cmp_ok.t
1..8
ok 1 - bigger than 0
ok 2 - smaller than 7
ok 3 - bigger than 0
ok 4 - smaller than 7
ok 5 - bigger than 0
ok 6 - smaller than 7
ok 7 - bigger than 0
ok 8 - smaller than 7
It seems to be ok but we are actually not testing the correct thing. We should check whether the result is one of the following values: 1, 2, 3, 4, 5, 6.
Multiple expected values revised
- any
- List::MoreUtils
We are going to use the "any" function of List::MoreUtils.
use strict;
use warnings;
use MyTools;
use List::MoreUtils qw(any);
use Test::More tests => 4;
for (1..4) {
my $value = dice();
ok( (any {$_ eq $value} (1, 2, 3, 4, 5, 6)), 'correct number');
}
Output:
1..4
not ok 1 - correct number
# Failed test 'correct number'
# at t/dice_any.t line 14.
ok 2 - correct number
not ok 3 - correct number
# Failed test 'correct number'
# at t/dice_any.t line 14.
ok 4 - correct number
# Looks like you failed 2 tests of 4.
This shows that there is some problem but we still don't know what exactly the problem is. Especially think of the case when this is part of a larger test suite and one of the tests fails. We would like to see the actual value and maybe even the expected values.
Adding information with diag
- diag
All the ok() and related functions return true or false depending on whether they report success or failure. One can use this to print extra information using diag().
use strict;
use warnings;
use MyTools;
use List::MoreUtils qw(any);
use Test::More tests => 4;
my @expected = (1, 2, 3, 4, 5, 6);
for (1..4) {
my $value = dice();
ok( (any {$_ eq $value} @expected), 'correct number')
or diag "Received: $value\nExpected:\n" .
join "", map {" $_\n"} @expected;
}
Output:
1..4
not ok 1 - correct number
# Failed test 'correct number'
# at t/dice_any_diag.t line 16.
# Received: 1.5
# Expected:
# 1
# 2
# 3
# 4
# 5
# 6
ok 2 - correct number
ok 3 - correct number
ok 4 - correct number
# Looks like you failed 1 test of 4.
My own test functions
After writing lots of tests, you'll see that you need the above code (with the extra diag) in several places in your test scripts, so you'll want to refactor it and create a function wrapping it.
The story behind this is that the dice() function can actually take a number ($n) and should then produce a random whole number between 1 and $n. The default is 6. So we are testing dice() with several possible parameters.
use strict;
use warnings;
use MyTools;
use List::MoreUtils qw(any);
use Test::More tests => 8;
for (1..4) {
my $n = 6;
my @expected = (1..$n);
my $value = dice($n);
is_any($value, \@expected, 'correct number');
}
for (1..4) {
my $n = 4;
my @expected = (1..$n);
my $value = dice($n);
is_any($value, \@expected, 'correct number');
}
sub is_any {
my ($actual, $expected, $name) = @_;
$name ||= '';
ok( (any {$_ eq $actual} @$expected), $name)
or diag "Received: $actual\nExpected:\n" .
join "", map {" $_\n"} @$expected;
}
We moved the ok() call into a function called is_any and we call it with the actual value, a reference to an array holding the expected values, and the name of the test. We had to slightly change the ok() call, as now we have a reference to the expected values and not the array itself.
Output:
1..8
not ok 1 - correct number
# Failed test 'correct number'
# at t/dice_is_any.t line 33.
# Received: 5.5
# Expected:
# 1
# 2
# 3
# 4
# 5
# 6
ok 2 - correct number
ok 3 - correct number
ok 4 - correct number
ok 5 - correct number
not ok 6 - correct number
# Failed test 'correct number'
# at t/dice_is_any.t line 33.
# Received: 1.5
# Expected:
# 1
# 2
# 3
# 4
ok 7 - correct number
ok 8 - correct number
# Looks like you failed 2 tests of 8.
This seems to be ok but we have a slight problem here. The line number displayed in the error report is the line where we actually call ok() and not where we call is_any().
My own test functions with Test Builder level
- Test::Builder
- $Test::Builder::Level
Behind the scenes both Test::Simple and Test::More use a module called Test::Builder. Actually, Test::Builder is doing the hard work of counting tests and displaying error messages. Test::More is just the user-friendly front end.
Adding the local $Test::Builder::Level = $Test::Builder::Level + 1; to our own test function will tell Test::Builder to go one level further back in the call stack to find the location where the function was called and where the error occurred.
use strict;
use warnings;
use MyTools;
use List::MoreUtils qw(any);
use Test::More tests => 8;
for (1..4) {
my $n = 6;
my @expected = (1..$n);
my $value = dice($n);
is_any($value, \@expected, 'correct number');
}
for (1..4) {
my $n = 4;
my @expected = (1..$n);
my $value = dice($n);
is_any($value, \@expected, 'correct number');
}
sub is_any {
my ($actual, $expected, $name) = @_;
$name ||= '';
local $Test::Builder::Level = $Test::Builder::Level + 1;
ok( (any {$_ eq $actual} @$expected), $name)
or diag "Received: $actual\nExpected:\n" .
join "", map {" $_\n"} @$expected;
}
Output:
1..8
not ok 1 - correct number
# Failed test 'correct number'
# at t/dice_is_any.t line 16.
# Received: 5.5
# Expected:
# 1
# 2
# 3
# 4
# 5
# 6
ok 2 - correct number
ok 3 - correct number
ok 4 - correct number
ok 5 - correct number
not ok 6 - correct number
# Failed test 'correct number'
# at t/dice_is_any.t line 24.
# Received: 1.5
# Expected:
# 1
# 2
# 3
# 4
ok 7 - correct number
ok 8 - correct number
# Looks like you failed 2 tests of 8.
Create a test module
Now that we have created the above is_any function we might want to use it in other test scripts as well. We might even want to distribute it on CPAN. In order to do that we'll need to move it to a module. The accepted namespace for such modules is Test::*, so we are going to use that too. Of course, if you are building this for a specific project and not for general use, you are probably better off using the Project::Test::* namespace, and if it is intended for in-house use in a company, it might be better to use Company::Test::*, so the chances of your module clashing with another module on CPAN stay small.
If written correctly, the only extra thing we need to do is to load the module and import the is_any function. Usually private test modules are placed in the t/lib directory, so we have to add this to our @INC by calling use lib.
use strict;
use warnings;
use MyTools;
use Test::More tests => 8;
use lib 't/lib';
use Test::MyTools qw(is_any);
for (1..4) {
my $n = 6;
my @expected = (1..$n);
my $value = dice($n);
is_any($value, \@expected, 'correct number');
}
for (1..4) {
my $n = 4;
my @expected = (1..$n);
my $value = dice($n);
is_any($value, \@expected, 'correct number');
}
The problematic part is the module. We need the ok and diag functions from the Test::More package, but we cannot load Test::More inside the module as that would confuse the testing system. So instead we use the Test::Builder backend and the ok and diag methods it provides.
package Test::MyTools;
use strict;
use warnings;
our $VERSION = '0.01';
use Exporter qw(import);
our @EXPORT_OK = qw(is_any);
use List::MoreUtils qw(any);
use Test::Builder::Module;
my $Test = Test::Builder::Module->builder;
sub is_any {
my ($actual, $expected, $name) = @_;
$name ||= '';
$Test->ok( (any {$_ eq $actual} @$expected), $name)
or $Test->diag("Received: $actual\nExpected:\n"
. join "", map {" $_\n"} @$expected);
}
1;
Output:
1..8
ok 1 - correct number
not ok 2 - correct number
# Failed test 'correct number'
# at t/dice.t line 17.
# Received: 1.5
# Expected:
# 1
# 2
# 3
# 4
# 5
# 6
ok 3 - correct number
ok 4 - correct number
not ok 5 - correct number
# Failed test 'correct number'
# at t/dice.t line 25.
# Received: 3.5
# Expected:
# 1
# 2
# 3
# 4
ok 6 - correct number
ok 7 - correct number
ok 8 - correct number
# Looks like you failed 2 tests of 8.
Test::Builder
- Test::Builder
Test modules created using Test::Builder all work nicely together. Among other things, they don't get confused with the counting of the tests.
There are many Test::Builder based modules already available from CPAN. Not only Test::Simple and Test::More.
We'll see some of them here.
Test::Builder object is a singleton
use strict;
use warnings;
use Test::More tests => 1;
use Test::Builder;
use Test::Builder::Module;
my $TMb = Test::More->builder;
diag $TMb;
my $TBM = Test::Builder::Module->builder;
diag $TBM;
my $TBn = Test::Builder->new;
diag $TBn;
ok 1;
perl examples/test-perl/t/builder.t
1..1
# Test::Builder=HASH(0x7ff234044ab8)
# Test::Builder=HASH(0x7ff234044ab8)
# Test::Builder=HASH(0x7ff234044ab8)
ok 1
Skip on the fly
- skip
There are cases when you cannot easily decide up front which tests you'll need to skip. In such cases you can rely on the skip method of Test::Builder that you can access from Test::More as well.
use strict;
use warnings;
use FindBin;
use Test::More;
my @files = glob "$FindBin::Bin/[0-9]*.t";
plan tests => scalar @files;
my $T = Test::More->builder;
foreach my $file (@files) {
if ($file =~ /explain/) {
$T->skip("Not this one");
next;
}
ok(-e $file, $file);
}
Skip on the fly based on earlier tests
- skip
use strict;
use warnings;
use LWP::UserAgent ();
use Test::More;
my $T = Test::More->builder;
my @cases = ('https://perlmaven.com/', 'https://perlmaven.com/qqrq');
for my $case (@cases) {
my $ua = LWP::UserAgent->new;
my $response = $ua->get($case);
my $ok = ok $response->is_success, $case;
if (not $ok) {
$T->skip("Previous failed");
next;
}
unlike $response->content, qr{No such article}, "title of $case";
}
done_testing;
Code emitting warnings
use strict;
use warnings;
use Test::More;
use MyTools qw(add);
plan tests => 4;
is(add(1, 2), 3, 'first');
is(add(2), 2, 'second');
is(add(3), 3, 'third');
is(add(-1, 1), 0, 'fourth');
prove -lv t/test_add.t
Use of uninitialized value $y in addition (+) at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.
Use of uninitialized value $y in addition (+) at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.
t/test_add.t ..
1..4
ok 1 - first
ok 2 - second
ok 3 - third
ok 4 - fourth
ok
All tests successful.
Files=1, Tests=4, 0 wallclock secs ( 0.01 usr 0.01 sys + 0.16 cusr 0.02 csys = 0.20 CPU)
Result: PASS
Test::NoWarnings
- Test::NoWarnings
use strict;
use warnings;
use Test::More;
use Test::NoWarnings;
use MyTools qw(add);
plan tests => 4 + 1;
is(add(1, 2), 3, 'first');
is(add(2), 2, 'second');
is(add(3), 3, 'third');
is(add(-1, 1), 0, 'fourth');
prove -v t/test_nowarnings.t
# Failed test 'no warnings'
# at /home/gabor/perl5/lib/perl5/Test/Builder.pm line 193.
# There were 2 warning(s)
# Previous test 1 'first'
# Use of uninitialized value $y in addition (+) at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.
# at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.
# MyTools::add(2) called at t/test_nowarnings.t line 12
#
# ----------
# Previous test 2 'second'
# Use of uninitialized value $y in addition (+) at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.
# at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.
# MyTools::add(3) called at t/test_nowarnings.t line 13
#
# Looks like you failed 1 test of 5.
t/test_nowarnings.t ..
1..5
ok 1 - first
ok 2 - second
ok 3 - third
ok 4 - fourth
not ok 5 - no warnings
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/5 subtests
Test Summary Report
-------------------
t/test_nowarnings.t (Wstat: 256 Tests: 5 Failed: 1)
Failed test: 5
Non-zero exit status: 1
Files=1, Tests=5, 1 wallclock secs ( 0.01 usr 0.00 sys + 0.18 cusr 0.00 csys = 0.19 CPU)
Result: FAIL
Test::FailWarnings
- Test::FailWarnings
Test::NoWarnings does not play well with done_testing, but Test::FailWarnings does.
use strict;
use warnings;
use Test::More;
use Test::FailWarnings;
use MyTools qw(add);
is(add(1, 2), 3, 'first');
is(add(2), 2, 'second');
is(add(3), 3, 'third');
is(add(-1, 1), 0, 'fourth');
done_testing();
prove -v t/test_failwarnings.t
# Failed test 'Test::FailWarnings should catch no warnings'
# at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.
# Warning was 'Use of uninitialized value $y in addition (+) at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.'
# Failed test 'Test::FailWarnings should catch no warnings'
# at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.
# Warning was 'Use of uninitialized value $y in addition (+) at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 31.'
# Looks like you failed 2 tests of 6.
t/test_failwarnings.t ..
ok 1 - first
not ok 2 - Test::FailWarnings should catch no warnings
ok 3 - second
not ok 4 - Test::FailWarnings should catch no warnings
ok 5 - third
ok 6 - fourth
1..6
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/6 subtests
Test Summary Report
-------------------
t/test_failwarnings.t (Wstat: 512 Tests: 6 Failed: 2)
Failed tests: 2, 4
Non-zero exit status: 2
Files=1, Tests=6, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.17 cusr 0.02 csys = 0.21 CPU)
Result: FAIL
Test with warnings
First we'll check if the fibonacci function works correctly even when called with negative numbers.
use strict;
use warnings;
use Test::More;
use MyTools qw(fibo);
subtest negative => sub {
my $result = fibo(-1);
is($result, undef, 'fibonacci on -1 returns undef');
};
done_testing;
prove -lv t/fibonacci_negative.t
Given number must be > 0 at /home/gabor/work/slides/perl/examples/test-warn/lib/MyTools.pm line 23.
t/fibonacci_negative.t ..
# Subtest: negative
ok 1 - fibonacci on -1 returns undef
1..1
ok 1 - negative
1..1
ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.16 cusr 0.01 csys = 0.18 CPU)
Result: PASS
In the above code the tests are passing but there is a warning as well. This is an expected warning so we don't need to worry about it. But then again people or code using our module might start to rely on this warning. We would like to make sure it won't disappear or change by mistake.
Testing for warnings
- Test::Warn
- warnings
- exception
Test code that should give a warning, and check if that is the correct warning.
So once we have tested our nicely behaving code we can also test our warnings and errors. For this we are going to use several additional modules from CPAN. As they all use the Test::Builder backend we can use them along with our standard Test::More setup.
use strict;
use warnings;
use Test::More;
use Test::Warn;
use MyTools qw(fibo);
subtest negative => sub {
my $result = 'something else';
warning_is {$result = fibo(-1)} "Given number must be > 0",
'warning when called with -1';
is($result, undef, 'fibonacci on -1 returns undef');
};
done_testing;
prove -lv t/fibonacci_negative_tested.t
t/fibonacci_negative_tested.t ..
# Subtest: negative
ok 1 - warning when called with -1
ok 2 - fibonacci on -1 returns undef
1..2
ok 1 - negative
1..1
ok
All tests successful.
Files=1, Tests=1, 1 wallclock secs ( 0.02 usr 0.00 sys + 0.16 cusr 0.02 csys = 0.20 CPU)
Result: PASS
Testing for warnings - different warning
- What if we change the text of the warning?
Testing for warnings - missing warning
- What if we comment out the warning?
Test::Warn
- Test::Warn
Test::Warn can be used to test for both warnings and carp calls. It can be used to check if there was a warning or if there was not.
warning_is
warnings_are
warning_like
warnings_like
warning_is {code} undef
- to check there was no warning
- ...
Test for no warnings - the hard way
- warnings
If we can test our code for specific warnings we should also test that in other places there are no warnings.
use strict;
use warnings;
use Test::More;
use Test::Warn;
use MyTools qw(fibo);
subtest negative => sub {
my $result;
warning_is {$result = fibo(-1)} "Given number must be > 0",
'warning when called with -1';
is($result, undef, 'fibonacci on -1 returns undef');
};
subtest positive_4 => sub {
my $result;
warning_is {$result = fibo(4)} undef, 'no warning here';
is($result, 3, 'fibonacci on 4 returns 3');
};
subtest positive_6 => sub {
my $result;
warning_is {$result = fibo(6)} undef, 'no warning here';
is($result, 8, 'fibonacci on 6 returns 8');
};
done_testing;
prove -lv t/fibonacci_test_warn.t
t/fibonacci_test_warn.t ..
# Subtest: negative
ok 1 - warning when called with -1
ok 2 - fibonacci on -1 returns undef
1..2
ok 1 - negative
# Subtest: positive_4
ok 1 - no warning here
ok 2 - fibonacci on 4 returns 3
1..2
ok 2 - positive_4
# Subtest: positive_6
ok 1 - no warning here
ok 2 - fibonacci on 6 returns 8
1..2
ok 3 - positive_6
1..3
ok
All tests successful.
Files=1, Tests=3, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.15 cusr 0.02 csys = 0.18 CPU)
Result: PASS
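Test::Warn handles this nicely, but the underlying mechanism is simple: a handler in %SIG that intercepts warnings. As an illustration, here is a core-only sketch that collects warnings with $SIG{__WARN__}; the buggy_sum() function is made up for this example:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# hypothetical function that warns when called without arguments
sub buggy_sum {
    my $total = 0;
    warn "no arguments given\n" if not @_;
    $total += $_ for @_;
    return $total;
}

# collect every warning emitted while the code under test runs
my @warnings;
{
    local $SIG{__WARN__} = sub { push @warnings, $_[0] };
    buggy_sum(2, 3);
}
is scalar(@warnings), 0, 'no warning for sum of two numbers';

@warnings = ();
{
    local $SIG{__WARN__} = sub { push @warnings, $_[0] };
    buggy_sum();
}
is scalar(@warnings), 1, 'exactly one warning for empty sum';
```

Collecting the warnings into an array also makes it easy to assert on both their number and their content.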
No other warnings using Test::NoWarnings
- Test::NoWarnings
use strict;
use warnings;
use Test::More tests => 4 + 1;
use Test::NoWarnings;
use Test::Warn;
use MyTools qw(fibo);
subtest positive_2 => sub {
my $result = fibo(2);
is($result, 1, 'fibonacci on 2');
};
subtest negative => sub {
my $result;
warning_is {$result = fibo(-1)} "Given number must be > 0",
'warning when called with -1';
is($result, undef, 'fibonacci on -1 returns undef');
};
subtest positive_4 => sub {
my $result = fibo(4);
is($result, 3, 'fibonacci on 4');
};
subtest positive_6 => sub {
my $result = fibo(6);
is($result, 8, 'fibonacci on 6');
};
prove -lv t/fibonacci_no_warnings.t
t/fibonacci_no_warnings.t ..
1..5
# Subtest: positive_2
ok 1 - fibonacci on 2
1..1
ok 1 - positive_2
# Subtest: negative
ok 1 - warning when called with -1
ok 2 - fibonacci on -1 returns undef
1..2
ok 2 - negative
# Subtest: positive_4
ok 1 - fibonacci on 4
1..1
ok 3 - positive_4
# Subtest: positive_6
ok 1 - fibonacci on 6
1..1
ok 4 - positive_6
ok 5 - no warnings
ok
All tests successful.
Files=1, Tests=5, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.17 cusr 0.01 csys = 0.20 CPU)
Result: PASS
No other warnings Test::FailWarnings
- Test::FailWarnings
use strict;
use warnings;
use Test::More;
use Test::FailWarnings;
use Test::Warn;
use MyTools qw(fibo);
subtest positive_2 => sub {
my $result = fibo(2);
is($result, 1, 'fibonacci on 2');
};
subtest negative => sub {
my $result;
warning_is {$result = fibo(-1)} "Given number must be > 0",
'warning when called with -1';
is($result, undef, 'fibonacci on -1 returns undef');
};
subtest positive_4 => sub {
my $result = fibo(4);
is($result, 3, 'fibonacci on 4');
};
subtest positive_6 => sub {
my $result = fibo(6);
is($result, 8, 'fibonacci on 6');
};
done_testing;
prove -lv t/fibonacci_failwarnings.t
t/fibonacci_failwarnings.t ..
# Subtest: positive_2
ok 1 - fibonacci on 2
1..1
ok 1 - positive_2
# Subtest: negative
ok 1 - warning when called with -1
ok 2 - fibonacci on -1 returns undef
1..2
ok 2 - negative
# Subtest: positive_4
ok 1 - fibonacci on 4
1..1
ok 3 - positive_4
# Subtest: positive_6
ok 1 - fibonacci on 6
1..1
ok 4 - positive_6
1..4
ok
All tests successful.
Files=1, Tests=4, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.17 cusr 0.01 csys = 0.20 CPU)
Result: PASS
Unexpected warnings with Test::NoWarnings and Test::FailWarnings
- Edit lib/MyTools.pm and enable the line with "Some other warning"
prove -lv t/fibonacci_no_warnings.t
It shows that there were warnings generated during the tests. It even tells us at which test. The biggest problem with this module is that it does not work together with done_testing() and so it requires that you know how many tests you are going to run.
Test::Exception
- Test::Exception
use strict;
use warnings;
use Test::More;
use Test::Exception;
use MyTools qw(fibo);
is fibo(6), 8, 'fibo(6)';
throws_ok { fibo() } qr/Need to get a parameter/, 'missing parameter';
throws_ok { fibo('name') } qr/Need to get a number/, 'not a number';
done_testing;
ok 1 - fibo(6)
ok 2 - missing parameter
ok 3 - not a number
1..3
throws_ok { $foo->method } 'Error::Simple', 'simple error thrown';
Where Error::Simple is the class of the exception that has been thrown, e.g. one created by Exception::Class.
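As an aside, the underlying mechanism can be sketched with core Perl alone: die with a blessed object and inspect $@ afterwards. The My::Error class and the risky() function below are made up for the illustration:

```perl
use strict;
use warnings;
use Test::More tests => 3;

# minimal hand-rolled exception class (illustration only)
package My::Error;
sub new     { my ($class, %args) = @_; return bless { %args }, $class }
sub throw   { my $class = shift; die $class->new(@_) }
sub message { return $_[0]->{message} }

package main;

sub risky {
    My::Error->throw(message => 'something went wrong');
}

# run the code in eval and capture the exception object from $@
my $err = do {
    local $@;
    eval { risky() };
    $@;
};
isa_ok $err, 'My::Error';
can_ok $err, 'message';
is $err->message, 'something went wrong', 'message preserved';
```

Modules such as Test::Exception wrap exactly this eval-and-inspect dance in friendlier assertion functions.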
Exercise: improve test module
Extend Test::MyTools to include a test function called my_any_num that works like is_any but compares the values as numbers. Write a test script that uses this function.
Exercise: add is_max
Add another function called is_max that gets a number and a reference to an array and reports ok if the number is really the maximum. (See List::Util for a max function.) What should it print on error? Write a test script that uses this function.
Exercise: is_between
Add a function to Test::MyTools called is_between that will check if the received value is between two given values: is_between($lower_limit, $real_value, $upper_limit, $name);
Exercise: test sum
Write a test for the sum() function without parameters. Normally it should return 0 and should not give any warnings. So try to write a test that checks there is no warning in this call, but mark it as TODO, as it is currently failing and we have decided to postpone its fix until after we finish more urgent tasks.
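As a reminder, Test::More implements this with a TODO block: tests inside it still run, but their failures are reported as expected and do not fail the suite. A minimal sketch:

```perl
use strict;
use warnings;
use Test::More tests => 2;

is 1 + 1, 2, 'regular test';

# failures inside a TODO block are reported but do not
# make the whole test script fail
TODO: {
    local $TODO = 'known bug, fix postponed';
    is 2 + 2, 5, 'test that is expected to fail for now';
}
```

Once the bug is fixed the test starts passing and the harness reports it as an unexpectedly succeeding TODO, reminding you to remove the block.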
Exercise: Test::Exception
- Test::Exception
Similar to Test::Warn, there is also a module called Test::Exception to test for calls to die.
Try calling fibonacci(); and writing a test for it.
use strict;
use warnings;
use Test::More tests => 3;
use lib 'lib';
use MyTools;
{
my $result = fibonacci(3);
is($result, 2, 'fibonacci on 3');
}
{
my $result = fibonacci();
is($result, 1, 'fibonacci()');
}
{
my $result = fibonacci(4);
is($result, 3, 'fibonacci on 4');
}
1..3
ok 1 - fibonacci on 3
Need to get a number
# Looks like you planned 3 tests but only ran 1.
# Looks like your test died just after 1.
Solution: is_between
Maybe this is better:
is_between($lower_limit, $oper1, $real_value, $oper2, $upper_limit, $name)
use strict;
use warnings;
use lib 'lib';
use MyTools;
use Test::More tests => 4;
use lib 't/lib';
use Test::Range qw(is_between);
for (1..4) {
my $value = dice();
is_between(1, '<=', $value, '<=', 6, 'in range');
}
package Test::Range;
use strict;
use warnings;
our $VERSION = '0.01';
use base 'Exporter';
our @EXPORT = qw(is_between);
use Test::Builder::Module;
use Carp qw(croak);
my $Test = Test::Builder::Module->builder;
=head2 is_between
is_between($lower_limit, '<', $real_value, '<=', $upper_limit, $name);
=cut
sub is_between {
my ($lower_limit, $op1, $real_value, $op2, $upper_limit, $name) = @_;
croak "The comparison operators ($op1) must all be either < or <="
if $op1 ne '<' and $op1 ne '<=';
croak "The comparison operators ($op2) must all be either < or <="
if $op2 ne '<' and $op2 ne '<=';
croak "The lower limit must be lower than the upper limit"
if $lower_limit >= $upper_limit;
$name ||= '';
my @errors;
if (($op1 eq '<' and not $lower_limit < $real_value)
or ($op1 eq '<=' and not $lower_limit <= $real_value)) {
push @errors, "Lower limit: $lower_limit $op1 $real_value failed";
}
if (($op2 eq '<' and not $real_value < $upper_limit)
or ($op2 eq '<=' and not $real_value <= $upper_limit)) {
push @errors, "$real_value $op2 $upper_limit upper limit failed";
}
$Test->ok(! scalar @errors, $name)
or $Test->diag(join "", map {" $_\n"} @errors);
}
1;
Solution: test sum
use strict;
use warnings;
use Test::More tests => 2;
use Test::Warn;
use lib 'lib';
use MyTools;
{
my $result;
TODO: {
local $TODO = 'fix warnings';
warning_is {$result = sum()} undef, 'no warning in empty sum';
}
is($result, 0, 'result is ok');
}
use strict;
use warnings;
use Test::More tests => 3;
use Test::Exception;
use lib 'lib';
use MyTools;
{
my $result = fibonacci(3);
is($result, 2, 'fibonacci on 3');
}
{
dies_ok {fibonacci()} 'expecting to die';
}
{
my $result = fibonacci(4);
is($result, 3, 'fibonacci on 4');
}
Perl Best Practices - Perl::Critic
- PBP
- Perl::Critic
The book Perl Best Practices by Damian Conway provides a reasonable set of guidelines on how to write a Perl program. While you might not want to follow every guideline exactly as written in the book, it can be a very good starting point. It is certainly much better than the common practice of letting everyone write however they want. That freedom may be fine in general and for individual projects, but within a project (or within a company) it makes a lot of sense to stick to a common set of guidelines.
So reading the book is good. In addition, there is a module called Perl::Critic, originally by Jeffrey Ryan Thalhammer and currently maintained by Elliot Shank, that can check each one of the practices Damian suggests, and more.
There is also a module called Test::Perl::Critic that takes the functions of Perl::Critic and turns them into Test::Builder based functions, so you can get ok/not ok output from them.
Using them in Perl projects can help improve the code base very quickly.
You can also configure the module to check each of the "practices" according to the style accepted in your company.
use strict;
use warnings;
use Test::More;
eval {
require Test::Perl::Critic;
import Test::Perl::Critic;
};
plan skip_all => 'Test::Perl::Critic required to criticise code' if $@;
all_critic_ok('blib');
#all_critic_ok('blib', 't');
Verify code layout
- Test::Code::TidyAll
- Code::TidyAll
use strict;
use warnings;
use Test::Most;
eval 'use Test::Code::TidyAll 0.20';
plan skip_all => "Test::Code::TidyAll 0.20 required to check if the code is clean."
if $@;
tidyall_ok();
Why number the test files?
By default prove will run the test scripts in alphabetical (ASCII) order. One common way to make sure the test scripts run in a certain order is to number the files:
00-basic.t
Test::Differences
- Test::Differences
- eq_or_diff
Test::Differences provides UNIX-like diff output when strings are not matching.
use strict;
use warnings;
use Test::More tests => 1;
use Test::Differences;
my @expected = (
'This is a string',
'Another string',
);
my @actual = @expected;
$actual[0] .= 'x';
eq_or_diff \@actual, \@expected;
perl examples/test-perl/t/test_differences.t
1..1
not ok 1
# Failed test at t/test_differences.t line 15.
# +----+------------------------+-----------------------+
# | Elt|Got |Expected |
# +----+------------------------+-----------------------+
# | 0|[ |[ |
# * 1| 'This is a stringx', | 'This is a string', *
# | 2| 'Another string' | 'Another string' |
# | 3|] |] |
# +----+------------------------+-----------------------+
# Looks like you failed 1 test of 1.
Test::Deep
Test::Deep by Fergal Daly provides various functions for testing data structures in a much better way than the is_deeply of Test::More. We return to the example examples/test-perl/lib/MyBugs.pm
- cmp_deeply
- cmp_bag
- cmp_set
- cmp_methods
use strict;
use warnings;
use Test::More tests => 3;
use Test::Deep;
use lib 'lib';
use MyBugs;
use Data::Dumper;
my $NUMBER = re('^\d+$');
my %expected = (
bugs => $NUMBER,
errors => $NUMBER,
failures => $NUMBER,
warnings => $NUMBER,
);
#diag Dumper \%a;
for my $i (0..3) {
my %a = fetch_data_from_bug_tracking_system($i);
cmp_deeply(\%a, \%expected, 'hash is ok');
}
1..3
ok 1 - hash is ok
ok 2 - hash is ok
not ok 3 - hash is ok
# Failed test 'hash is ok'
# at t/test_deep.t line 24.
# Comparing hash keys of $data
# Missing: 'bugs', 'errors'
# Extra: 'bogs', 'erors'
not ok 4 - hash is ok
# Failed test 'hash is ok'
# at t/test_deep.t line 24.
# Using Regexp on $data->{"bugs"}
# got : 'many'
# expect : (?-xism:^\d+$)
# Looks like you planned 3 tests but ran 1 extra.
# Looks like you failed 2 tests of 4 run.
use strict;
use warnings;
use Test::More tests => 2;
use Test::Deep;
{
my @expected = (1, 2, 3);
my @received = (3, 1, 2);
cmp_bag(\@received, \@expected);
}
{
my @expected = ([1, 'a'], [2, 'b'], [3, 'c']);
my @received = ([3, 'c'], [1, 'a'], [2, 'b']);
cmp_bag(\@received, \@expected);
}
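If Test::Deep is not available, a bag-like comparison of flat lists of numbers or strings can be approximated with core Test::More by sorting both sides first, though the diagnostics on failure are much weaker than what cmp_bag gives:

```perl
use strict;
use warnings;
use Test::More tests => 1;

my @expected = (1, 2, 3);
my @received = (3, 1, 2);

# compare as multisets: same elements, order ignored
is_deeply [ sort { $a <=> $b } @received ],
          [ sort { $a <=> $b } @expected ],
          'same elements regardless of order';
```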
Test::File
- Test::File
- file_exists_ok
- file_empty_ok
- file_size_ok
Test::File by brian d foy provides functions for testing meta-information about files.
- file_exists_ok( FILENAME [, NAME ] )
- file_empty_ok( FILENAME [, NAME ] )
- file_size_ok( FILENAME, SIZE [, NAME ] )
- ...
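These helpers are essentially Perl's file test operators with better diagnostics. A core-only sketch of similar checks, using File::Temp to create a throwaway file:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);
use Test::More tests => 3;

# create a temporary file with known content; it is removed at exit
my ($fh, $filename) = tempfile(UNLINK => 1);
print {$fh} "hello\n";
close $fh or die "could not close $filename: $!";

ok -e $filename,    "$filename exists";       # like file_exists_ok
ok !-z $filename,   "$filename is not empty"; # opposite of file_empty_ok
is -s $filename, 6, "$filename is 6 bytes";   # like file_size_ok
```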
Test::LongString
- Test::LongString
- like_string
- is_string
Test::LongString by Rafael Garcia-Suarez provides better error reporting when comparing strings, especially long strings.
use strict;
use warnings;
use Test::More;
use Test::LongString;
plan tests => 2;
my ($expected, $actual) = generate(200, 170);
is $actual, $expected;
is_string $actual, $expected;
sub generate {
my ($cnt, $loc) = @_;
my @chars = ('a' .. 'z', 'A' .. 'Z');
my $str = '';
$str .= $chars[ rand scalar @chars ] for 1..$cnt;
my $actual = $str;
substr($actual, $loc, 1, ' ');
return ($str, $actual);
}
1..2
not ok 1
# Failed test at t/test_longstring.t line 10.
# got: 'qgjVLTjmhSKWoIZIAiCNSBkJQOOrLUYVihtrBehKKtUEozKzZvJDwClTCgHayvbrPsZojPiNGrzwXNDtuEpHuBcSDepTsoNlzPWMezBhuCJJZPbsLVMDWvuMZfXjtDhSrAsfrciXuJuZVCEXTBxkLXwklcgrqXMPoPloZhdoxu iTGjygbCuiSpNTgrdrkpONiEVfAEa'
# expected: 'qgjVLTjmhSKWoIZIAiCNSBkJQOOrLUYVihtrBehKKtUEozKzZvJDwClTCgHayvbrPsZojPiNGrzwXNDtuEpHuBcSDepTsoNlzPWMezBhuCJJZPbsLVMDWvuMZfXjtDhSrAsfrciXuJuZVCEXTBxkLXwklcgrqXMPoPloZhdoxuBiTGjygbCuiSpNTgrdrkpONiEVfAEa'
not ok 2
# Failed test at t/test_longstring.t line 12.
# got: ..."oPloZhdoxu iTGjygbCuiSpNTgrdrkpONiEVfAEa"...
# length: 200
# expected: ..."oPloZhdoxuBiTGjygbCuiSpNTgrdrkpONiEVfAEa"...
# length: 200
# strings begin to differ at char 171 (line 1 column 171)
# Looks like you failed 2 tests of 2.
Test::Most
- Test::Most
- die_on_fail
Test::Most by Curtis "Ovid" Poe is a replacement for Test::More with even more features. It exports the functions of the following test modules, making it a bit more convenient to use them.
- Test::More
- Test::Exception
- Test::Differences
- Test::Deep
It also provides a nice set of extra features, such as the die_on_fail and bail_on_fail calls.
use strict;
use warnings;
use Test::Most tests => 3;
ok 1;
ok 0;
ok 1;
prove examples/test-perl/t/test_most.t
DIE_ON_FAIL=1 prove examples/test-perl/t/test_most.t
BAIL_ON_FAIL=1 prove examples/test-perl/t/test_most.t
Test::Trap
- Test::Trap
Trap exit codes, exceptions, output.
Or use Capture::Tiny and then interrogate the returned values.
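If Capture::Tiny is not available either, STDOUT alone can be captured with core Perl by temporarily redirecting it to an in-memory filehandle. The greet() function is made up for this example:

```perl
use strict;
use warnings;

sub greet { print "hello, $_[0]\n" }

# redirect STDOUT into a scalar, run the code, restore STDOUT
my $out = '';
{
    open my $capture, '>', \$out or die "cannot open in-memory file: $!";
    local *STDOUT = $capture;
    greet('world');
}
print "captured: $out";  # STDOUT is restored outside the block
```

Capture::Tiny does the same job more robustly, also catching STDERR and output produced by XS code or external programs.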
Test::Fatal
- Test::Fatal
For testing code with exceptions (instead of Test::Exception) see Test::Fatal.
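A sketch of how a Test::Fatal based script might look; the half() function and its error message are made up for the example, and the script skips itself if Test::Fatal is not installed:

```perl
use strict;
use warnings;
use Test::More;
BEGIN {
    # skip the whole script when the module is missing
    eval { require Test::Fatal; Test::Fatal->import('exception') };
    plan skip_all => 'Test::Fatal is required for this example' if $@;
}

# hypothetical function under test
sub half {
    my ($n) = @_;
    die "Need to get a number\n" if not defined $n;
    return $n / 2;
}

# exception {} returns the exception thrown by the block, or undef
is exception { half(6) }, undef, 'no exception for a valid call';
like exception { half() }, qr/Need to get a number/,
    'exception when the parameter is missing';
done_testing;
```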
Test::XPath
- Test::XPath
- XML
Sample script for testing Client-Server
use strict;
use warnings;
my $pid = fork();
die "Could not fork()\n" if not defined $pid;
if (not $pid) {
# call the external implementation of the server
# exec("bin/server.pl");
# or implement the server inline here and then call exit();
sleep(1000);
exit(0);
}
# give the server a chance to start
sleep(1);
require Test::More;
import Test::More;
plan(tests => 1);
# start up the client code here
# and call the testing functions
ok(1);
END {
# make sure the server gets killed even if the
# test finishes abnormally
kill 9, $pid if $pid;
}
Sample script for testing Client-Server Win32
package lib::Test;
use strict;
use warnings;
use File::Spec;
my $process;
sub start {
my $dir  = '.';  # base directory of the project (assumed here; not shown in the original)
my $root = File::Spec->catdir( $dir, 'dwimmer' );
if ( $^O =~ /win32/i ) {
require Win32::Process;
#import Win32::Process;
Win32::Process::Create( $process, $^X,
"perl -Ilib -It\\lib $root\\bin\\app.pl",
0, Win32::Process::NORMAL_PRIORITY_CLASS(), "." )
|| die ErrorReport();
} else {
$process = fork();
die "Could not fork() while running on $^O" if not defined $process;
if ($process) { # parent
sleep 1;
return $process;
}
my $cmd = "$^X -Ilib -It/lib $root/bin/app.pl";
exec $cmd;
}
return 1;
}
sub stop {
return if not $process;
if ( $^O =~ /win32/i ) {
$process->Kill(0);
} else {
kill 9, $process;
}
}
END {
stop();
}
1;
Exercise for Test::Builder
Given the following convenience function (exported by MyTools.pm), please test if it works properly.
my $fh = get_fh(MODE, FILENAME)
my $in = get_fh('<', 'data.txt');
Exercise: Math::RPN
Take the Math::RPN exercise and add further tests:
- Test at least some of the warnings.
- Make sure nothing else generates warnings
Exercise: exceptions
Take the earlier exercise where we used Test::Exception and see how to rewrite it using Test::Fatal and/or Test::Trap.
Solution
use strict;
use warnings;
use lib 'lib';
use MyTools;
use Test::More tests => 4;
use Test::Exception;
{
my $file = time . ".txt";
if (open my $fh, '>', $file) {
print {$fh} "$file\n";
close $fh;
} else {
die;
}
my $fh = get_fh('<', $file);
is ref($fh), 'GLOB';
my @content = <$fh>;
is scalar(@content), 1;
chomp @content;
is $content[0], $file;
unlink $file;
}
{
my $file = time . ".txt";
unlink $file;
dies_ok {get_fh('<', $file)} 'expected to die';
}
Devel::Cover script
use strict;
use warnings;
system('cover -delete');
my $perl = $^X; # the current perl
my $make = ($^O =~ /MSWin/i ? 'dmake' : 'make');
$ENV{DEVEL_COVER_OPTIONS} = "-ignore,perl5lib";
if (-e 'Makefile.PL') {
system("$perl Makefile.PL");
if (-d 't/') {
$ENV{HARNESS_PERL_SWITCHES} = "-MDevel::Cover";
} else {
$ENV{PERL5OPT} = "-MDevel::Cover";
}
system("$make test");
} elsif (-e "Build.PL") { # Build.PL exists
system($perl, "Build.PL");
system($perl, "Build", "test");
} else {
die "Unable to locate 'Makefile.PL' or 'Build.PL'.\n";
}
system('cover -report html');
Can module be loaded? use_ok and require_ok
- use_ok
- require_ok
use_ok and require_ok are not recommended any more. Just use or require the modules as necessary and let perl provide the appropriate failure message if either of those fails.
use strict;
use warnings;
use Test::More tests => 1;
use lib 'lib';
BEGIN { use_ok('MyTools'); }
# the rest of the tests...
1..1
ok 1 - use MyTools;
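The recommended style today is to load the module with a plain use statement; if loading fails, perl aborts the test script with a descriptive error message of its own. List::Util stands in here for the module under test:

```perl
use strict;
use warnings;
use Test::More tests => 1;

# a plain `use` replaces use_ok(): a load failure aborts the
# script with perl's own descriptive error message
use List::Util qw(sum);

is sum(2, 3, 4), 9, 'List::Util loaded and working';
```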
can_ok('Class', qw(method_a method_b));
- can_ok
can_ok($object, qw(method_a method_b));
In order to nicely test if a module has certain methods you can use the can_ok() function of Test::More. It can be used both on modules and on objects.
use strict;
use warnings;
use Test::More tests => 2;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MyTools;
can_ok('MyTools', 'fibonacci');
can_ok('MyTools', 'make_tea');
Output:
1..2
ok 1 - MyTools->can('fibonacci')
not ok 2 - MyTools->can('make_tea')
# Failed test 'MyTools->can('make_tea')'
# at examples/test-perl/t/can_ok.t line 11.
# MyTools->can('make_tea') failed
# Looks like you failed 1 test of 2.
All the tests
Just to show you, here are all the tests of the MyTools module we used:
use strict;
use warnings;
use Test::More;
my $tests;
plan tests => $tests;
use FindBin;
use lib "$FindBin::Bin/../lib";
use MyTools;
{
is sum(2, 2), 4, '2 + 2 = 4';
TODO: {
local $TODO = "teach it to add more than 2 numbers";
is sum(2, 2, 2), 6, '2 + 2 + 2 = 6';
}
BEGIN { $tests += 2; }
}
{
is fibonacci(1), 1, 'fib 1';
is fibonacci(2), 1, 'fib 2';
is fibonacci(3), 2, 'fib 3';
is fibonacci(4), 3, 'fib 4';
is fibonacci(5), 5, 'fib 5';
is_deeply [ fibonacci(1) ], [1], 'fibs 1';
is_deeply [ fibonacci(2) ], [1, 1], 'fibs 2';
is_deeply [ fibonacci(3) ], [1, 1, 2], 'fibs 3';
is_deeply [ fibonacci(4) ], [1, 1, 2, 3], 'fibs 4';
is_deeply [ fibonacci(5) ], [1, 1, 4, 3, 5], 'fibs 5'; # bug added on purpose
BEGIN { $tests += 10; }
}
{
TODO: {
local $TODO = 'fix multiply';
is multiply(), 0, 'nothing should be 0';
}
is multiply(1), 1, 'one';
is multiply(1, 1), 1, '1 * 1';
is multiply(1, -1), -1, '1 * -1';
is multiply(-1, -1), 1, '-1 * -1';
BEGIN { $tests += 5; }
}
Smolder
- Smolder
Smolder is a web-based application that collects and displays TAP streams. It is available on CPAN. Once installed, it can be launched with the smolder command. By default it provides its own web server and uses SQLite, so you don't have to worry about further configuration. Once you have seen it and decide that you'd like to use it in production, with many test systems reporting to it, you will probably invest more in the installation, using Apache as the front-end server and MySQL as the database, but at the beginning the simple installation will do.
Once the system is set up you have a web-based administrative interface to add projects and users and to associate them with each other. Every user can then upload archived TAP results as generated by the -a flag of prove. The system then provides various views of the test results. You can view the details of a single test run, and you can already see some historical data as well. As of version 1.35 you have to do the following to start the server:
Create the ~/.smolder directory and in it create ~/.smolder/smolder.conf with one line:
HostName smolder.local
In addition you have to change the /etc/hosts file so smolder.local will resolve to 127.0.0.1. (You can of course use any name with at least one . in it.)
Then you can launch Smolder using the following command:
smolder -c ~/.smolder/smolder.conf
Then you can use your browser to access it.
The password for the "admin" user is "qa_rocks".
Smolder by Michael Peters.
smolder_smoke_signal --server smolder.foo.com:port --username myself --password s3cr3t --file tap.tar.gz --project MyProject
Exercise: Smolder
Check if Smolder is already installed and start it. Add one or more users to it and at least one project and create some associations. Then run the prove command with the archiving option on the examples/tap files and upload the resulting file manually using your browser.
Change the test scripts a bit, adding more tests. Change tests from success to failure and from failure to success. After every change run the tests again, generating the archive file, and upload it to Smolder. Upload the first few results manually, but later switch to using smolder_smoke_signal.
Mocking
What is Mocking?
- It is a term we often use for various types of test doubles....
Test Doubles
- Mocks
- Spies
- Stubs
- Fakes
- Dummies
Test Doubles explained
Dummy objects are passed around but never actually used.
Fakes - A working implementation, but much simpler than the original.
- An in-memory list of username/password pairs that provides the authentication.
- A database interface where the data is stored in memory only, maybe in a hash or an in-memory SQLite DB.
Mocks - Objects that register the calls they receive, but do not execute the real system behind them.
Stubs - A stub is an object that holds predefined data and uses it to answer calls during tests.
- A list of "random values".
- Responses given to a prompt.
Spies usually record some information about how they were called and then call the real method (or not).
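A stub does not need a framework; it can be hand-rolled in a few lines of plain Perl. In this sketch both the Stub::Prompt package and the first_answer function are invented for the example:

```perl
use strict;
use warnings;
use Test::More;

# A stub: an object that holds predefined data and uses it
# to answer calls during the test. (Stub::Prompt is invented
# for this example.)
package Stub::Prompt;
sub new {
    my ($class, @answers) = @_;
    return bless { answers => [@answers] }, $class;
}
sub ask {
    my ($self) = @_;
    return shift @{ $self->{answers} };
}

package main;

# The code under test: keep asking until we get a non-empty reply.
sub first_answer {
    my ($prompter) = @_;
    while (defined(my $reply = $prompter->ask)) {
        return $reply if length $reply;
    }
    return;
}

my $stub = Stub::Prompt->new('', '', 'yes');
is first_answer($stub), 'yes', 'the stub fed us the predefined answers';
done_testing;
```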
What is Monkey Patching?
- It is just another name for mocking...
- ... or for test doubles.
- Replace some internal part of a module or class for the sake of testing.
- Monkey Patching is probably a subset of Mocking, but who knows?
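A minimal sketch of the idea, using an invented Greeter package: in Perl, monkey patching is just an assignment to the symbol table, usually with local so the patch is undone at the end of the scope. (A fuller example appears in the Monkey Patching section later.)

```perl
use strict;
use warnings;
use Test::More;

# A tiny class to patch. (Greeter is invented for this example.)
package Greeter;
sub hello { return 'hello from the real code' }

package main;

{
    # Monkey patching: assign a new sub to the symbol table entry.
    # local restores the original when the block ends.
    no warnings 'redefine';
    local *Greeter::hello = sub { return 'hello from the patch' };
    is Greeter::hello(), 'hello from the patch', 'patched inside the block';
}
is Greeter::hello(), 'hello from the real code', 'restored after the block';
done_testing;
```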
When is mocking useful?
- TDD - Test Driven Development
- Write the application against an API that is not ready yet or not controlled by you.
- Replace a complex object with a simpler one.
- Isolate parts of the system to test them on their own.
- Speed up tests (e.g. eliminate remote calls, eliminate database calls).
- Simulate cases that are hard to replicate. (What if the other system fails?)
- Unit tests.
Mocking in various situations
- Random
- Time
- IO (print)
- External calls.
- Method calls.
- A whole class.
Application using random
package MyRandomApp;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(dice);
sub dice {
my ($n) = @_;
return 1 + int(rand() * $n);
}
1;
use strict;
use warnings;
use 5.010;
use MyRandomApp qw(dice);
say dice(6);
say dice(6);
say dice(6);
perl -Ilib bin/dice.pl
Mock random generator in BEGIN
use strict;
use warnings;
use 5.010;
BEGIN {
my @values = (0.03, 0.72);
*CORE::GLOBAL::rand = sub {
return shift @values;
};
}
use Test::More;
use MyRandomApp qw(dice);
is dice(10), 1;
is dice(10), 8;
my $x = rand;
is $x, undef, 'We have replaced rand here too';
done_testing;
prove -lv t/test-begin.t
Mock random generator Mock::Sub
use strict;
use warnings;
use Test::More;
use Mock::Sub no_warnings => 1;
my $mock;
my $rand;
BEGIN {
$mock = Mock::Sub->new;
$rand = $mock->mock('MyRandomApp::rand');
};
use MyRandomApp qw(dice);
$rand->return_value(0.023);
is dice(10), 1;
$rand->return_value(0.72);
is dice(10), 8;
my $x = rand;
isnt $x, undef, 'The local rand() is not mocked';
cmp_ok $x, '<', 1;
cmp_ok $x, '>=', 0;
diag $x;
is $rand->called_count, 2, 'How many times rand() was called in MyRandomApp';
done_testing;
prove -lv t/test-mock-sub.t
Function that (also) writes to STDOUT or STDERR
There are many cases when we encounter a function that does more than one thing. For example, in the following simplified example we have a function that both makes a (simple) mathematical calculation and prints to the screen. For added fun it also prints to the standard error channel.
We will probably want to refactor it to separate the concerns of calculating and IO, but first we'd like to write a unit-test.
In this example we use the capture function of the Capture::Tiny module that will capture and return as strings everything that is printed to STDOUT and STDERR. (It could also capture the exit value of an external call, but this is not relevant in our case.)
The whole code is wrapped in a subtest so the external $result variable will be lexically scoped.
package CalcOutput;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(calc_and_print);
sub calc_and_print {
my ($x, $y) = @_;
my $z = $x + $y;
print "The result on STDOUT is $z\n";
print STDERR "Some messages sent to STDERR\n";
return $z;
}
1;
use strict;
use warnings;
use CalcOutput qw(calc_and_print);
die "Usage: $0 NUM NUM\n" if @ARGV != 2;
my $result = calc_and_print(@ARGV);
print "The result is $result.\n";
perl -Ilib bin/calc.pl 7 8
Capture STDOUT and STDERR in function calls
- Capture::Tiny
- capture
use strict;
use warnings;
use Capture::Tiny qw(capture);
use Test::More;
use CalcOutput qw(calc_and_print);
subtest calc => sub {
my $result;
my ($out, $err, $exit) = capture {
$result = calc_and_print(2, 3);
};
is $result, 5, 'the result';
is $out, "The result on STDOUT is 5\n", 'STDOUT';
is $err, "Some messages sent to STDERR\n", 'STDERR';
is $exit, 5, 'The value of the last statement';
};
done_testing;
prove -lv t/test.t
Code with STDIN
package MyEcho;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(echo);
sub echo {
my $name = <STDIN>;
return scalar reverse $name;
}
use strict;
use warnings;
use 5.010;
use MyEcho qw(echo);
say echo();
perl -Ilib bin/echo.pl
Mock STDIN
use strict;
use warnings;
use Test::More;
use MyEcho qw(echo);
subtest test_echo => sub {
my $input = "Hello";
open my $stdin, '<', \$input or die "Cannot open STDIN to read from string: $!";
local *STDIN = $stdin;
is echo(), 'olleH', 'echo works';
};
done_testing;
Simple game to test
use strict;
use warnings;
my $hidden = 1 + int(100 * rand);
#print "Debug '$hidden'\n";
print "Guess: ";
my $guess = <STDIN>;
chomp $guess;
if ($guess < $hidden) {
print "Too small ($guess)\n";
} elsif ($guess > $hidden) {
print "Too big ($guess)\n";
} else {
print "Found\n";
}
Test simple game (small)
use strict;
use warnings;
use Test::More;
use Capture::Tiny qw(capture);
my $random;
no warnings 'once';
*CORE::GLOBAL::rand = sub {
return $random;
};
subtest test_game_small => sub {
my $input = "20\n";
$random = 0.3;
open my $stdin, '<', \$input or die "Cannot open STDIN to read from string: $!";
local *STDIN = $stdin;
my ($stdout, $stderr, $exit) = capture {
do './game_one.pl';
};
is $stderr, '';
is $stdout, "Guess: Too small (20)\n";
};
done_testing;
Test simple game (big)
use strict;
use warnings;
use Test::More;
use Capture::Tiny qw(capture);
my $random;
no warnings 'once';
*CORE::GLOBAL::rand = sub {
return $random;
};
subtest test_game_big => sub {
my $input = "20\n";
$random = 0.04;
open my $stdin, '<', \$input or die "Cannot open STDIN to read from string: $!";
local *STDIN = $stdin;
my ($stdout, $stderr, $exit) = capture {
do './game_one.pl';
};
is $stderr, '';
is $stdout, "Guess: Too big (20)\n";
};
done_testing;
Test simple game (exact)
use strict;
use warnings;
use Test::More;
use Capture::Tiny qw(capture);
my $random;
no warnings 'once';
*CORE::GLOBAL::rand = sub {
return $random;
};
subtest test_game_big => sub {
my $input = "20\n";
$random = ($input-1)/100;
open my $stdin, '<', \$input or die "Cannot open STDIN to read from string: $!";
local *STDIN = $stdin;
my ($stdout, $stderr, $exit) = capture {
do './game_one.pl';
};
is $stderr, '';
is $stdout, "Guess: Found\n";
};
done_testing;
Exercise: test game
use strict;
use warnings;
my $hidden = 1 + int(100 * rand);
while (1) {
print "Guess: ";
my $guess = <STDIN>;
if ($guess < $hidden) {
print "Too small\n";
next;
}
if ($guess > $hidden) {
print "Too big\n";
next;
}
print "Found\n";
last;
}
Testing time-dependent module
Module that must behave differently on a certain day:
- every day do A, on Sunday do B.
- On January 1 do something special.
- First workday of every month pay salary.
- On April 1 do something special.
Code that maintains a session
- Test if the timeout works well.
Module with operation based on date
package MyDaily;
use strict;
use warnings;
use 5.010;
use DateTime;
use Exporter qw(import);
our @EXPORT_OK = qw(message);
sub message {
my $now = DateTime->now;
if ($now->month == 4 and $now->day == 1) {
return 'Welcome to Python';
}
return 'Welcome to Perl';
}
1;
use strict;
use warnings;
use 5.010;
use MyDaily qw(message);
say message();
Mocking fixed absolute time
use strict;
use warnings;
use Test::MockTime qw(set_absolute_time restore_time);
use Test::More;
use MyDaily qw(message);
diag time;
set_absolute_time('1970-03-01T03:00:00Z');
is message(), 'Welcome to Perl';
set_absolute_time('1970-04-01T03:00:00Z');
is message(), 'Welcome to Python';
diag time;
sleep(2);
diag time;
restore_time();
diag time;
done_testing;
Module with session timeout
package MySession;
use strict;
use warnings;
my %SESSION;
my $TIMEOUT = 60;
sub new {
return bless {}, shift;
}
sub login {
my ($self, $username, $pw) = @_;
# ...
$SESSION{$username} = time;
return;
}
sub logged_in {
my ($self, $username) = @_;
if ($SESSION{$username} and time - $SESSION{$username} < $TIMEOUT) {
return 1;
}
return 0;
}
1;
use strict;
use warnings;
use 5.010;
use MySession;
my $session = MySession->new;
$session->login('foo', 'secret');
say $session->logged_in('foo');
say $session->logged_in('bar');
my $time = 61;
say "Waiting $time seconds to see what happens....";
sleep $time;
say $session->logged_in('foo');
say $session->logged_in('bar');
perl -Ilib bin/timer.pl
Mocking relative Time
- Test::MockTime
- time
use strict;
use warnings;
use Test::MockTime qw(set_relative_time);
use Test::More;
use MySession;
my $session = MySession->new;
$session->login('foo', 'secret');
ok $session->logged_in('foo'), 'foo logged in';
ok !$session->logged_in('bar'), 'bar not logged in';
#sleep 61;
set_relative_time(61);
ok !$session->logged_in('foo'), 'foo not logged in - timeout';
ok !$session->logged_in('bar'), 'bar not logged in';
done_testing;
Make sure you load Test::MockTime before you load the module under test. Otherwise the time function in that module won't be mocked.
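The load-order requirement can be seen with a hand-rolled version of the same idea: overriding the built-in time() in a BEGIN block, so the override is installed before any code that calls time() is compiled. This sketch uses only core Perl, no Test::MockTime.

```perl
use strict;
use warnings;
use Test::More;

my $offset;
BEGIN {
    # Must be in place before any code calling time() is compiled,
    # otherwise those calls are bound to the real built-in.
    $offset = 0;
    *CORE::GLOBAL::time = sub { return CORE::time() + $offset };
}

my $start = time;
$offset = 61;    # pretend 61 seconds have passed
cmp_ok time - $start, '>=', 61, 'time jumped forward without sleeping';
done_testing;
```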
Mocking class
- We have a class (our own code, or some 3rd party code)
package BaseSalary;
use strict;
use warnings;
sub new {
my ($class) = @_;
print "new\n";
my $self = bless {}, $class;
return $self;
}
# plain getter/setter for name (with an extra print statement)
sub name {
my ($self, $value) = @_;
print "name\n";
if (@_ == 2) {
$self->{name} = $value;
}
return $self->{name};
}
sub get_base_salary {
my ($self) = @_;
print "get_base_salary\n";
return 1370;
}
1;
Application using the class
- This is the application that uses the class
package MySalary;
use strict;
use warnings;
use BaseSalary;
sub get_salary {
my ($name) = @_;
my $bonus = 100;
my $obj = BaseSalary->new;
$obj->name($name);
my $base_salary = $obj->get_base_salary();
return $base_salary + $bonus;
}
1;
- How can we test that our application will report a correct error message if the 3rd party application breaks (instead of raising an exception)?
Testing the 3rd party class
- This is probably not very interesting, just testing an object-oriented module.
use strict;
use warnings;
use Test::More;
use SomeClass;
my $obj = SomeClass->new;
isa_ok $obj, 'SomeClass';
is $obj->name, undef;
is $obj->name('Apple'), 'Apple';
is $obj->name, 'Apple';
is $obj->double(3), 6;
done_testing;
use strict;
use warnings;
use Test::More;
use Mock::Quick qw(qclass);
my $control = qclass(
-implement => 'SomeClass',
-with_new => 1,
-attributes => [ qw(name) ],
double => 7,
);
my $obj = SomeClass->new;
isa_ok $obj, 'SomeClass';
is $obj->name, undef;
is $obj->name('Apple'), 'Apple';
is $obj->name, 'Apple';
is $obj->double(3), 7;
$control->undefine();
done_testing;
Testing our app using the 3rd party class
use strict;
use warnings;
use Test::More;
use MySalary;
is MySalary::get_salary('Foo'), 100;
done_testing;
Testing our app mocking the 3rd party class
use strict;
use warnings;
use Test::More;
use Mock::Quick qw(qclass);
my $control;
BEGIN {
$control = qclass(
-implement => 'SomeClass',
-with_new => 1,
-attributes => [ qw(name) ],
get_salary => undef,
);
}
use MyApp;
is MyApp::give_name('Foo'), 'Foo';
is MyApp::get_my_salary, 22;
$control->undefine();
done_testing;
Mocking function of external web call
- Test::Mock::Simple
- We have an application that uses LWP::Simple.
- It gets a list of strings and tells us how many times each string appears on that web page.
- We'll talk about the commented-out code a bit later.
package MyWebAPI;
use strict;
use warnings;
use LWP::Simple qw(get);
sub new {
my ($class, $url) = @_;
my $self = bless {}, $class;
$self->{url} = $url;
return $self;
}
sub count_strings {
my ($self, @strings) = @_;
my $content = get $self->{url};
#my $content = LWP::Simple::get $self->{url};
my %data;
foreach my $str (@strings) {
$data{$str} = () = $content =~ /$str/ig;
}
return \%data;
}
1;
use strict;
use warnings;
use MyWebAPI;
my ($url, @names) = @ARGV;
die "Usage: URL 1-or-more-EXPRESSION\n" if not @names;
my $myapi = MyWebAPI->new($url);
my $res = $myapi->count_strings(@names);
for my $name (sort keys %$res) {
print "$name $res->{$name}\n";
}
perl -Ilib bin/count.pl https://code-maven.com/ perl python Java
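The line doing the counting in count_strings uses the so-called countof idiom: a list assignment evaluated in scalar context returns the number of elements that matched. A standalone sketch of the idiom:

```perl
use strict;
use warnings;
use Test::More;

my $content = 'perl Perl PERL python';

# The countof idiom: a list assignment evaluated in scalar
# context returns the number of elements that matched.
my $count = () = $content =~ /perl/ig;

is $count, 3, 'three case-insensitive matches';
done_testing;
```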
Test the application end-to-end
- This is a very old test, and even then it did not work
- As it assumes a given content of that page
use strict;
use warnings;
use Test::More;
use MyWebAPI;
my $w = MyWebAPI->new('http://www.dailymail.co.uk/');
diag explain $w->count_strings('Beyonce', 'Miley Cyrus');
is_deeply $w->count_strings('Beyonce', 'Miley Cyrus'),
{
'Beyonce' => 26,
'Miley Cyrus' => 3,
};
done_testing;
Mocked web test
- Here we need to mock the get function in the MyWebAPI namespace, as it has already been imported into that module.
- If we mocked the one inside LWP::Simple that would not impact the one we already have in the MyWebAPI module.
use strict;
use warnings;
use Test::More;
use Test::Mock::Simple;
use MyWebAPI;
my $w = MyWebAPI->new('http://www.dailymail.co.uk/');
my $mock = Test::Mock::Simple->new(module => 'MyWebAPI');
#my $mock = Test::Mock::Simple->new(module => 'LWP::Simple');
$mock->add(get => sub {
return 'Beyonce Beyonce Miley Cyrus';
});
is_deeply $w->count_strings('Beyonce', 'Miley Cyrus'),
{
'Beyonce' => 2,
'Miley Cyrus' => 1,
};
$mock->add(get => sub {
return 'Beyonce';
});
is_deeply $w->count_strings('Beyonce', 'Miley Cyrus'),
{
Beyonce => 1,
'Miley Cyrus' => 0,
};
$mock->add(get => sub {
return '';
});
is_deeply $w->count_strings('Beyonce', 'Miley Cyrus'),
{
Beyonce => 0,
'Miley Cyrus' => 0,
};
done_testing;
Mocking to reproduce error in our function
- Someone reported that in certain cases the count does not work properly.
- For example when the multi-word name is spread to multiple lines (so there is a newline).
- This is how we can test
use strict;
use warnings;
use Test::More;
use Test::Mock::Simple;
my $mock = Test::Mock::Simple->new(module => 'MyWebAPI');
my $w = MyWebAPI->new('http://www.dailymail.co.uk/');
$mock->add(get => sub {
return 'Beyonce Miley Cyrus Miley
Cyrus';
});
is_deeply $w->count_strings('Beyonce', 'Miley Cyrus'),
{
Beyonce => 1,
'Miley Cyrus' => 2,
};
done_testing;
Mocking exception
- What if the get function raises an exception? How will our code handle it? Hint: it does not, so this test will break.
- But this is how we could test what happens in that case, without trying to figure out how to reliably create such a failure using the real LWP::Simple.
use strict;
use warnings;
use Test::More;
use Test::Mock::Simple;
my $mock = Test::Mock::Simple->new(module => 'MyWebAPI');
my $w = MyWebAPI->new('http://www.dailymail.co.uk/');
$mock->add(get => sub {
die 'Something went wrong';
});
is_deeply $w->count_strings('Beyonce', 'Miley Cyrus'),
{
Beyonce => 0,
'Miley Cyrus' => 0,
};
done_testing;
Using MetaCPAN::Client
package MyMetaCPAN;
use strict;
use warnings;
use MetaCPAN::Client;
sub get_releases_by_author {
my ($pauseid, $limit) = @_;
my $mcpan = MetaCPAN::Client->new();
print "client=$mcpan\n";
my $author = $mcpan->author($pauseid);
print "author=$author\n";
my $rset = $author->releases();
print "releases=$rset\n";
my $releases = _get_names($rset);
my @sorted = reverse sort {$a->{date} cmp $b->{date} } @$releases;
if (@sorted > $limit) {
@sorted = @sorted[0..$limit-1];
}
return \@sorted;
}
sub get_recent_releases {
my ($limit) = @_;
my $mcpan = MetaCPAN::Client->new();
print "client=$mcpan\n";
my $rset = $mcpan->recent($limit);
print "recent=$rset\n";
return _get_names($rset);
}
sub _get_names {
my ($rset) = @_;
my @dists;
while ( my $item = $rset->next ) {
print "item=$item\n";
push @dists, {
distribution => $item->distribution,
version => $item->version,
date => $item->date,
};
}
return \@dists;
}
42;
use strict;
use warnings;
use lib 'lib';
use Data::Dumper qw(Dumper);
use MyMetaCPAN ();
my $recent_releases = MyMetaCPAN::get_recent_releases(3);
print Dumper $recent_releases;
my $author_releases = MyMetaCPAN::get_releases_by_author('SZABGAB', 2);
print Dumper $author_releases;
Mocking MetaCPAN::Client
use strict;
use warnings;
use Test::More;
use Mock::Quick qw(qclass);
use Storable qw(dclone);
use MetaCPAN::Client::Release;
my @results_recent = (
{
'date' => '2020-11-04T12:01:11',
'distribution' => 'Robin-Hood',
'version' => '1.01',
},
{
'date' => '2020-11-04T10:31:20',
'distribution' => 'Princess Fiona',
'version' => '2.03',
},
{
'date' => '2020-11-04T09:51:50',
'distribution' => 'Zorg',
'version' => '3.21',
},
);
my @results_author = (
{
'date' => '2020-11-04T12:01:11',
'distribution' => 'Mars-Base',
'version' => '1.11',
},
{
'date' => '2020-11-04T10:31:20',
'distribution' => 'Moon-Base',
'version' => '2.22',
},
{
'date' => '2020-11-04T09:51:50',
'distribution' => 'Earth',
'version' => '3.33',
}
);
sub my_next {
my ($self) = @_;
my $res = shift @{$self->{results}};
return if not $res;
my $obj = MetaCPAN::Client::Release->new(%$res);
return $obj;
}
sub recent {
my ($self, $limit) = @_;
return _result_set(@results_recent);
}
sub releases {
my ($self) = @_;
return _result_set(@results_author);
}
sub author {
return MetaCPAN::Client::Author->new;
}
sub _result_set {
my (@results) = @_;
my $rs = MetaCPAN::Client::ResultSet->new;
$rs->{results} = dclone(\@results);
return $rs;
}
my $client;
my $resultset;
my $author;
BEGIN {
$client = qclass(
-implement => 'MetaCPAN::Client',
-with_new => 1,
recent => \&recent,
author => \&author,
);
$resultset = qclass(
-implement => 'MetaCPAN::Client::ResultSet',
-with_new => 1,
next => \&my_next,
);
$author = qclass(
-implement => 'MetaCPAN::Client::Author',
-with_new => 1,
releases => \&releases,
);
}
use MyMetaCPAN;
# in /etc/hosts add
# 127.0.0.1 fastapi.metacpan.org
my $recent_releases = MyMetaCPAN::get_recent_releases(3);
is_deeply $recent_releases, \@results_recent;
my $author_releases = MyMetaCPAN::get_releases_by_author('FOOBAR', 2);
is_deeply $author_releases, [ @results_author[0..1] ];
done_testing;
Override printing functions (mocking)
- redefine
Sometimes there are functions that print directly to the screen.
The program could be tested as an external application, or we could redirect STDOUT to a scalar variable in memory, but it might be cleaner to replace the display function, capture the data in a variable and then check that variable.
use strict;
use warnings;
use lib 'lib';
use MyTools;
use Test::More tests => 1;
my @data;
{
no warnings 'redefine';
sub MyTools::display {
push @data, \@_;
}
}
{
@data = ();
print_copyright();
like( $data[0][0],
qr/Copyright 2000-\d{4} Gabor Szabo, all rights reserved./,
'copyright');
}
Monkey Patching
package Monkey;
use strict;
use warnings;
sub new {
my ($class, $count) = @_;
return bless { bananas => $count }, $class;
}
sub is_hungry {
my ($self) = @_;
my $hungry = 1; # ... check if I am hungry.
if ($hungry) {
$self->eat();
}
return $hungry;
}
sub eat {
my ($self) = @_;
$self->{bananas}--;
}
sub bananas {
my ($self) = @_;
return $self->{bananas};
}
1;
use strict;
use warnings;
use FindBin qw($Bin);
use lib $Bin;
use Monkey;
use Test::More tests => 4;
my $m = Monkey->new(10);
is $m->bananas, 10, 'bananas';
ok $m->is_hungry, 'is_hungry';
is $m->bananas, 10, 'bananas';
$m->eat;
is $m->bananas, 9, 'bananas';
1..4
ok 1 - bananas
ok 2 - is_hungry
not ok 3 - bananas
# Failed test 'bananas'
# at check_monkey.t line 13.
# got: '9'
# expected: '10'
not ok 4 - bananas
# Failed test 'bananas'
# at check_monkey.t line 15.
# got: '8'
# expected: '9'
# Looks like you failed 2 tests of 4.
use strict;
use warnings;
use FindBin qw($Bin);
use lib $Bin;
use Monkey;
use Test::More tests => 5;
my $m = Monkey->new(10);
is $m->bananas, 10, 'bananas';
{
my $eat;
no warnings 'redefine';
local *Monkey::eat = sub { $eat = 1;};
ok $m->is_hungry, 'is_hungry';
ok $eat, 'eat called';
}
is $m->bananas, 10, 'bananas';
$m->eat;
is $m->bananas, 9, 'bananas';
1..5
ok 1 - bananas
ok 2 - is_hungry
ok 3 - eat called
ok 4 - bananas
ok 5 - bananas
Test STDIN
use strict;
use warnings;
print "Width: ";
my $width = <STDIN>;
print "Height: ";
my $height = <STDIN>;
my $area = $width * $height;
print "Area: $area\n";
use strict;
use warnings;
use Test::More;
use Capture::Tiny qw(capture);
my $input = "20\n30";
open my $stdin, '<', \$input or die "Cannot open STDIN to read from string: $!";
local *STDIN = $stdin;
my ($out, $err, $exit) = capture {
do './rectangle.pl';
};
is $out, "Width: Height: Area: 600\n";
is $err, '';
done_testing;
Mocking get in LWP::Simple
use strict;
use warnings;
use FindBin qw($Bin);
use lib $Bin;
use Test::More;
plan tests => 3;
use Test::Mock::Simple;
my $mock;
BEGIN {
$mock = Test::Mock::Simple->new(module => 'LWP::Simple');
$mock->add(get => sub {
return 'Beyonce Beyonce Miley Cyrus';
});
}
use MyWebAPI;
my $w = MyWebAPI->new;
$mock->add(get => sub {
return 'Beyonce Beyonce Miley Cyrus';
});
is_deeply $w->count_strings('Beyonce', 'Miley Cyrus'),
{
'Beyonce' => 2,
'Miley Cyrus' => 1,
};
Test::Class
Simple module to test
package MyApp;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(add div);
sub add {
my ($x, $y) = @_;
return $x + $y;
}
sub div {
my ($x, $y) = @_;
return $x / $y;
}
1;
Test::Class simple example
- Test::Class
- use the Test attribute to mark a function as a test
package t::lib::MyApp;
use strict;
use warnings;
use base 'Test::Class';
use Test::More;
use MyApp qw(add);
sub test_add : Test {
is add(2, 3), 5;
}
1;
use strict;
use warnings;
use lib '.';
use t::lib::MyApp;
Test::Class->runtests;
Test::Class with fixtures
package t::lib::MyAppMore;
use strict;
use warnings;
use base 'Test::Class';
use Test::More;
use Time::HiRes qw(time);
use MyApp qw(add div);
sub test_add : Test {
my ($self) = @_;
diag "st add: $self->{start_time}";
is add(2, 3), 5, 'more add';
}
sub test_div : Test {
my ($self) = @_;
diag "st div: $self->{start_time}";
is div(6, 3), 2, 'more div';
}
sub setup_fixture : Test(setup) {
my ($self) = @_; # t::lib::MyAppMore instance
$self->{start_time} = time;
diag "setup $self->{start_time}";
}
sub teardown_fixture : Test(teardown) {
my ($self) = @_;
my $end_time = time;
diag "teardown elapsed time: " . ($end_time - $self->{start_time});
}
sub startup_fixture : Test(startup) {
my ($self) = @_;
$self->{class_start_time} = time;
diag "startup $self->{class_start_time}";
}
sub shutdown_fixture : Test(shutdown) {
my ($self) = @_;
my $end_time = time;
diag "shutdown elapsed time: " . ($end_time - $self->{class_start_time});
}
1;
use strict;
use warnings;
use lib '.';
use t::lib::MyAppMore;
Test::Class->runtests;
Test::Class run both test classes
use strict;
use warnings;
use lib '.';
use t::lib::MyApp;
use t::lib::MyAppMore;
Test::Class->runtests;
Test::Class inline
package Test::MyApp;
use strict;
use warnings;
use base 'Test::Class';
use Test::More;
use MyApp qw(add div);
sub test_add : Test {
is add(2, 3), 5;
}
sub test_div : Test {
is div(8, 2), 4;
}
Test::Class->runtests;
Test::Class Load
#!/usr/bin/perl -T
use strict;
use warnings;
use Test::Class::Load 't/lib';
Test::Class->runtests;
Running and reporting tests
TAP - Test Anything Protocol
- TAP
- IETF
1..3
ok 1
ok 2
not ok 3
All the printing of "ok" and "not ok" is part of TAP, the Test Anything Protocol.
Work is under way to turn it into an IETF standard.
We run our test scripts either using plain perl, or via prove or make test. They all generate textual output; some very verbose, others a more concise aggregate report of the results. None of them is really pleasing to the eye.
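Since TAP is just text, any program that prints the protocol can be consumed by prove or another TAP harness. A minimal sketch that emits a two-test TAP stream without using any testing module:

```perl
use strict;
use warnings;

# A TAP stream is just lines of text: a plan followed by
# one ok / not ok line per test.
my $tap = "1..2\n";
$tap .= 2 + 2 == 4 ? "ok 1 - addition works\n" : "not ok 1 - addition works\n";
$tap .= "ok 2 - always passes\n";
print $tap;
```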
prove
- prove
prove
prove -h Help
perldoc prove
prove -r Recursive
prove -v Verbose
prove -l -Ilib
prove -b -Iblib/lib -Iblib/arch
prove -s shuffle (random order)
prove --timer Show elapsed time for each script
prove --exec '/usr/bin/ruby -w' t/
Parallel testing
prove -j4 Run test scripts in parallel
~/.proverc Can have values like -j4 in it
then use -j1 to run sequentially
- Shared resources? (database, temp files, sockets, etc.)
- Are the test script independent? (setup fixture, teardown in every file.)
prove --state
- prove --state=save will save the status of the meta data of the test run in .prove
- prove --state=failed will run the test scripts that failed last time (based on .prove)
- prove --state=failed,save run the failed tests and update .prove
- prove --shuffle --state=save Run in random order and save it in .prove
- prove --state=last Run the same order as last time (the result of shuffle)
- prove --state=todo Run the test scripts with TODO entries
- prove --state=slow -j4 Run the slowest test first (but then 4 in parallel)
Parse TAP from a file
Prove tricks from Michael G Schwern on the perl-qa list.
Parse TAP from a file, rather than program output. Handy for doing experiments without having to mock up a program.
{% embed include file="src/examples/root/foo.tap" %}
prove --exec 'cat' examples/root/foo.tap
examples/root/foo.tap .. ok
All tests successful.
Files=1, Tests=2, 0 wallclock secs ( 0.04 usr + 0.01 sys = 0.05 CPU)
Result: PASS
prove - run other executables
Make prove run tests as executables with no interpreter. Useful for mixed-language environments and tests written in compiled C. Just make sure your tests have the executable bit set and that you're using an unambiguous path to the test (i.e. not "test.t" but "./test.t") so prove doesn't search your $PATH.
$ ls -l
total 12K
-rwxrwxr-x 1 schwern schwern 53 2009-02-25 17:30 test.perl
-rwxrwxr-x 1 schwern schwern 53 2009-02-25 17:31 test.ruby
-rwxrwxr-x 1 schwern schwern 42 2009-02-25 17:31 test.sh
$ prove --exec '' t/test.*
t/test.perl....ok
t/test.ruby....ok
t/test.sh......ok
All tests successful.
Files=3, Tests=6, 0 wallclock secs
( 0.04 usr 0.02 sys + 0.00 cusr 0.01 csys = 0.07 CPU)
Result: PASS
TAP::Formatter::HTML by Steve Purkis
- TAP::Formatter::HTML
- HTML
- prove
The simplest way to generate nice reports is to use TAP::Formatter::HTML. Instead of running prove alone, you can pass it a formatter class and it will use that instead of the default textual output.
prove -b -m -Q --formatter=TAP::Formatter::HTML examples/tap > output.html
Collecting Test reports
Collect the report with the following command:
prove -b -a tap.tar.gz examples/tap
and save it on a centralized server.
You don't always want to install TAP::Formatter::HTML on every system where you run your tests. After all, you might not even use Perl for generating the TAP stream, so we should have a way to collect the TAP streams, move them to a central machine and generate the nice reports there.
This will run the tests and generate a tarball from the resulting TAP streams, along with a meta.yml file that contains some metadata about the execution. You can take this file and move it to another server. (A warning though: the TAP stream of each test file is saved in a file with the exact same name as the test file. So if you create the archive and then untar it, you will overwrite your test scripts with the TAP streams. Better to open it in another directory.)
Generating HTML reports from Archives
Once you have the tar.gz file on the central machine you should be able to create the HTML report. Unfortunately I could not find a nice way to do it, but with the help of Ash Berlin and Steve Purkis we came up with a workaround:
First unzip the file using
tar xzf tap.tar.gz
Then you can run the following command:
prove --exec 'cat' -Q --formatter=TAP::Formatter::HTML t/ > output.html
This will only work on Unix; maybe on Windows one can replace 'cat' with 'type', but I have not tried it. In any case I hope there will soon be a better solution to this.
BDD - Behavior Driven Development
BDD Hello World
examples/bdd/
└── basic
├── features
│ ├── basic.feature
│ └── step_definitions
│ └── some_steps.pl
└── lib
└── HelloWorld.pm
{% embed include file="src/examples/bdd/basic/features/basic.feature" %}
#!perl
use strict;
use warnings;
use Test::More;
use Test::BDD::Cucumber::StepFile;
use lib 'examples/basic/lib/';
use HelloWorld;
Given 'the HelloWorld module', sub {
};
When qr/^calling hello_world function/, sub {
S->{'result'} = HelloWorld::hello_world();
};
Then qr/return is "(.+)"/, sub {
my $expected = C->matches->[0];
is S->{'result'}, $expected;
};
package HelloWorld;
use strict;
use warnings;
sub hello_world {
return 'Hello World!'
}
1;
pherkin examples/bdd/basic/
prove -v -r --source Feature --ext=.feature examples/bdd/basic/
BDD Demo
{% embed include file="src/examples/bdd/demo/features/basic.feature" %}
#!perl
use strict;
use warnings;
use Test2::Bundle::More;
use Test::BDD::Cucumber::StepFile;
Given qr/a Fixture called (\S+)/, sub {
my $name = $1;
diag "The fixture is $name";
ok 1, "OK $name"; # really $1 as in the tutorial?
};
Given qr/a (\S+) object/, sub {
my $name = C->matches->[0];
ok 1, "OK $name";
};
When qr/I've added "(.+)" to the object/, sub {
push @{ S->{'object'} }, C->matches->[0];
};
Before sub {
diag 'before';
};
After sub {
my $c = shift;
diag 'after';
# $c->stash->{'scenario'}->{'Calculator'};
};
Then qr/^the output is "(.+)"/, sub {
my ($value) = @{ C->matches };
ok 1, "expected output is $value";
};
Then qr/^the error is "(.*)"/, sub {
my ($value) = @{ C->matches };
ok 1, "expected error is $value";
#is "expected", "received", "demo";
};
# Transform qr/^(__THE_NUMBER_\w+__)$/, sub { map_word_to_number($1) };
# Transform qr/^table:number as word$/, sub {
# S->{'object'}->add( C->data );
#
Test Command line application
bc - An arbitrary precision calculator language
It is much more than a calculator; it is a language. Luckily we don't need to learn the whole language in order to do simple calculations. Normally you execute 'bc' on the command line and then you type in your calculations. Pressing ENTER will do the calculation.
Normal operation
$ bc
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
23+7
30
quit
$
Try it ....
Expect.pm
- expect
- Expect.pm was written by Austin Schutz and is maintained by Roland Giersig.
- Provides a way to describe user behavior in a command line environment.
- Can send information as if it was typed on the keyboard.
- Can wait for some Expect-ed value and, based on this value, do something.
- Originally an extension of Tcl.
- Ported to Perl.
- Works on Linux/Unix/OSX.
- Does NOT work on MS Windows.
- Can be used in environments such as:
- A command line application like bc.
- Telnet to another box and type in things.
- Anything a person would usually do on the command line.
Simple computation - adding two values
#!/usr/bin/perl
use strict;
use warnings;
use Expect;
my $e = Expect->new;
#$e->raw_pty(1);
$e->spawn("bc") or die "Cannot run bc\n";
$e->expect(1, -re => "warranty'\.") or die "no warranty\n";
$e->send("23+7\n");
$e->expect(1, -re => '\d+\+\d+') or die "no echo\n";
print $e->match, "\n";
$e->expect(1, -re => '\d+') or die "no sum\n";
my $match = $e->match;
if ($match eq "30") {
print "Success\n";
} else {
print "Failure. Received $match\n";
}
$e->send("quit\n");
- raw_pty turns off echo
- spawn starts the external program
- expect(timeout, regex) returns undef if it failed
- timeout is in seconds, 0 means check once, undef means wait forever
- send - as if the user was typing at the keyboard
Results
$ perl examples/bc/bc1.pl
{% embed include file="src/examples/bc/bc1.pl.out" %}
Reduce output - turn it into a test script
We don't want to see all the output bc generates and then try to look for the correct responses or the error messages. We'd prefer to just see ok or not ok.
#!/usr/bin/perl
use strict;
use warnings;
use Expect;
use Test::More tests => 5;
my $e = Expect->new;
$e->log_stdout(0);
#$e->raw_pty(1);
$e->spawn("bc") or die "Cannot run bc\n";
my $warranty;
$e->expect(1, ["warranty'\." => sub { $warranty = 1; }]);
ok $warranty, 'warranty';
$e->send("23+7\n");
ok($e->expect(1, -re => '\d+\+\d+'), 'echo expression');
is($e->match, '23+7', 'input');
ok($e->expect(1, -re => '\d+'), 'data received');
is($e->match, 30, 'correct response');
$e->send("quit\n");
- $e->log_stdout(0); - turn off the printing to the screen
Output
$ perl examples/bc/bc4.pl
{% embed include file="src/examples/bc/bc4.pl.out" %}
Expect and BAIL_OUT
$ perl examples/bc/bc41.pl
#!/usr/bin/perl
use strict;
use warnings;
use Expect;
use Test::More tests => 5;
my $e = Expect->new;
$e->log_stdout(0);
#$e->raw_pty(1);
$e->spawn("bc") or BAIL_OUT("Cannot run bc");
my $warranty;
$e->expect(1, ["warranty'\." => sub { $warranty = 1; }]);
ok $warranty, 'warranty';
$e->send("23+7\n");
ok($e->expect(1, -re => '\d+\+\d+'), 'echo expression');
is($e->match, '23+7', 'input');
ok($e->expect(1, -re => '\d+'), 'data received');
is($e->match, 30, 'correct response');
$e->send("quit\n");
More than one test
We can then set up lots of tests and run them through one invocation of bc.
#!/usr/bin/perl
use strict;
use warnings;
use Expect;
use Test::More;
my @sets = (
['23+7', 30],
['11+1', 10],
['2*21', 42],
);
plan tests => 1 + 3 * scalar @sets;
my $e = Expect->new;
$e->log_stdout(0);
$e->raw_pty(1);
$e->spawn("bc") or BAIL_OUT("Could not start bc");
my $warranty;
$e->expect(1, ["warranty'\." => sub { $warranty = 1; }]);
ok $warranty, 'warranty';
foreach my $set (@sets) {
$e->send("$set->[0]\n");
ok($e->expect(1, $set->[0]), 'echo');
ok($e->expect(1, -re => '\d+'), 'numbers received');
is($e->match, $set->[1], "expected value of " . $set->[0]);
}
$e->send("quit\n");
Output
$ perl examples/bc/bc5.pl
{% embed include file="src/examples/bc/bc5.pl.out" %}
External test file
Separating the test cases from the code.
#!/usr/bin/perl
use strict;
use warnings;
use FindBin;
use Expect;
use Test::More;
$Expect::Log_Stdout = 0;
my @sets = read_file();
plan tests => 2 * scalar @sets;
my $e = Expect->new;
$e->raw_pty(1);
$e->spawn("bc") or die "Could not start bc\n";
$e->expect(1, [qr/warranty'./]) or die "no warranty\n";
foreach my $set (@sets) {
$e->send("$set->[0]\n");
ok($e->expect(1, [qr/\d+/]), 'numbers received');
is($e->match, $set->[1], "expected value of " . $set->[0]);
}
$e->send("quit\n");
sub read_file {
open my $fh, "<", "$FindBin::Bin/bc_input.txt"
or die "Could not open bc_input.txt";
my @data;
while (my $line = <$fh>) {
chomp $line;
push @data, [split /\s*,\s*/, $line];
}
return @data;
}
{% embed include file="src/examples/bc/bc5a.pl.out" %}
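The read_file sub above splits each line on a comma, so bc_input.txt is a plain text file with one expression and its expected result per line. A hypothetical example file (the actual one in the course material may differ):

```
23+7, 30
2*21, 42
100/4, 25
```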
Random regression tests
The idea is that we don't have time to manually set up hundreds of tests and calculate our expectations, so instead we compare some random tests to the results of a previous run.
We can log the results of each operation in a file and compare the resulting files to some previous execution.
- Create a set of random operations
- Because we don't have time to check all the results, we only check that there were no error messages; in general we don't care about the correctness of the results
- Record the tests and the results
- Run the tests again with the new version (now they are not random any more) and check if any of the results have changed. If something changed, it indicates that either earlier or now we have a problem
- Investigate the differences and include the problematic tests in the manual test suite
- Either save the new results as the new expectation, or discard them and discard the current version of the application
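The last two steps boil down to comparing the log file of the current run with the saved log of a previous run. Here is a minimal, self-contained sketch of just that comparison step, using the core File::Compare module (the file names and contents are made up for the demonstration; in a real run the two logs would be written by the random and regression passes):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Compare qw(compare);
use File::Temp qw(tempdir);

# Fake two identical log files just to show the comparison step.
my $dir = tempdir( CLEANUP => 1 );
for my $name (qw(random.log regress.log)) {
    open my $fh, '>', "$dir/$name" or die $!;
    print {$fh} "0.5 + 0.25=.75\n";
    close $fh;
}

# compare() returns 0 when the two files are identical
my $result = compare("$dir/random.log", "$dir/regress.log") == 0
    ? 'Regression successful'
    : 'Regression failed';
print "$result\n";
```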
Random and regression testing
#!/usr/bin/perl
use strict;
use warnings;
use Expect;
$Expect::Log_Stdout = 0;
if (not @ARGV or $ARGV[0] ne "random" and $ARGV[0] ne "regress") {
die "Usage: $0 [random count|regress]\n";
}
if ($ARGV[0] eq "regress" and not -e "tests.txt") {
die "Cannot run regression before running random tests!\n";
}
my $REGEX = qr/-?(\d*\.\d+|\d+)/;
if ($ARGV[0] eq 'random') {
my $cnt = $ARGV[1] or die "Need to get number of cases\n";
my $e = Expect->new;
$e->raw_pty(1);
$e->spawn("bc") or die "Could not start bc\n";
$e->expect(1, [qr/warranty'./]) or die "no warranty\n";
open my $test_file, ">", "tests.txt"
or die "Cannot open tests file for writing\n";
foreach (1..$cnt) {
my ($x, $y) = (rand, rand);
my $op = qw(+ * - /)[int rand 4];
my $line = "$x $op $y";
print {$test_file} "$line=";
$e->send("$line\n");
$e->expect(1, [$REGEX]);
# TODO also check that the system did not crash...
print {$test_file} $e->match, "\n";
look_around($e, $line);
}
$e->send("quit\n");
} elsif ($ARGV[0] eq 'regress') {
my $e = Expect->new;
$e->raw_pty(1);
$e->spawn("bc") or die "Could not start bc\n";
$e->expect(1, [qr/warranty'./]) or die "no warranty\n";
my @sets = read_file();
foreach my $t (@sets) {
$e->send("$t->[0]\n");
$e->expect(1, [$REGEX]);
if ($e->match ne $t->[1]) {
die "Failed when trying $t->[0]. Expected $t->[1]. Received " .
$e->match . "\n";
}
look_around($e, $t->[0]);
}
$e->send("quit\n");
} else {
die "Invalid argument $ARGV[0]\n";
}
sub read_file {
open my $fh, "<", "tests.txt" or die "Could not open tests.txt";
my @data;
while (my $line = <$fh>) {
chomp $line;
push @data, [split /=/, $line];
}
return @data;
}
sub look_around {
my ($e, $line) = @_;
if ($e->before =~ /\S/ or $e->after =~ /\S/) {
my $str = "Error when trying '$line'\n";
$str .= sprintf("Error before: '%s'\n", $e->before);
$str .= sprintf("Match: '%s'\n", $e->match);
$str .= sprintf("Error after: '%s'\n", $e->after);
die $str;
}
}
# Two parts
# - random tests
# - regression tests
#
#
# run random tests
# save test cases and result in a file
#
# run all the tests from the test cases file and check if the
# results are the same as in the previous run.
Random and regression testing - slight improvement
#!/usr/bin/perl
use strict;
use warnings;
use Text::Diff; # instead of File::Compare
# and diff instead of compare
if (my $diff = diff ("random.log", "regress.log")) {
print "Regression failed\n\n";
print $diff;
} else {
print "Regression successful\n";
}
Results
~/work/training/testing/examples/bc>perl bc7.pl regress
{% embed include file="src/examples/bc/bc7.pl.out" %}
Expect multiple values
use strict;
use warnings;
my $x = rand();
if ($x < 0.3) {
print "abc\n";
} elsif ($x < 0.6) {
print "def\n";
} else {
print "xyz\n";
}
{% embed include file="src/examples/expect/random.t" %}
Test::Expect
- Test::Expect
- Expect::Simple
- Test::Expect by Leon Brocard uses Expect::Simple by Diab Jerius, which is a wrapper around Expect.
Capturing both STDOUT and STDERR
- Write out the expected STDIN to a file called "in"
- Run the app: system "$app < in > out 2> err";
- Read in the out and err files and examine them
Capturing both STDOUT and STDERR manually
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;
use File::Temp qw(tempdir);
my $dir = tempdir( CLEANUP => 1 );
diag $dir;
my $app = "./examples/io/application.pl";
my @in = ('10', '21', 'hello', '3x');
my $in = join "\n", @in;
my @expected_out = ('20', '42');
my @expected_err = (
"The input 'hello' contains no numeric values",
"The input '3x' contains no numeric values",
);
{
open my $fh, ">", "$dir/in" or die $!;
print $fh $in;
}
system "$app < $dir/in > $dir/out 2> $dir/err";
{
open my $fh, "<", "$dir/out" or die $!;
my @out = <$fh>;
chomp @out;
is_deeply(\@out, \@expected_out, "Output");
}
{
open my $fh, "<", "$dir/err" or die $!;
my @err = <$fh>;
chomp @err;
is_deeply(\@err, \@expected_err, "Error");
}
Capturing both STDOUT and STDERR using IPC::Run3
- IPC::Run3
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;
use IPC::Run3;
my $app = "./examples/io/application.pl";
my @in = ('10', '21', 'hello', '3x');
my $in = join "\n", @in;
my @expected_out = ('20', '42');
my @expected_err = (
"The input 'hello' contains no numeric values",
"The input '3x' contains no numeric values",
);
{
my $out;
my $err;
run3 [$app], \$in, \$out, \$err;
my $expected_out = join("\n", @expected_out) . "\n";
is($out, $expected_out, "IPC Output");
my $expected_err = join("\n", @expected_err) . "\n";
is($err, $expected_err, "IPC Error");
}
Capture::Tiny
- Capture::Tiny
use strict;
use warnings;
use Capture::Tiny qw(capture);
use Test::More;
plan tests => 3;
my @cmd = ($^X, '-e', q{print 42; print STDERR 35});
my ($stdout, $stderr, $exit) = capture {
system(@cmd);
};
is $stdout, 42, 'stdout';
is $stderr, 35, 'stderr';
is $exit, 0, 'exit';
Test::Snapshots
- Test::Snapshots
Test::Snapshots tests several command-line-ish executables by saving the INPUT, command line ARGUMENTs, expected OUTPUT, ERROR and EXIT code in external files.
bin/abc.exe
bin/abc.exe.in
bin/abc.exe.argv
bin/abc.exe.out
bin/abc.exe.err
bin/abc.exe.exit
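The convention can be mimicked without the module: run the executable with the saved input redirected to it, then compare what came back with the saved expected output. A hand-rolled, self-contained sketch (it uses perl itself as the "executable" so nothing external is needed; Test::Snapshots does considerably more, handling arguments, STDERR and exit codes):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir = tempdir( CLEANUP => 1 );

# A stand-in for bin/abc.exe: doubles every number it reads
my $app = "$dir/abc.pl";
write_file($app, 'while (<STDIN>) { print 2 * $_, "\n" }');

# The companion files Test::Snapshots would look for
write_file("$app.in",  "10\n21\n");
write_file("$app.out", "20\n42\n");

# Run the application with the saved input, capture its output
system "$^X $app < $app.in > $dir/got.out";
my $got      = read_file("$dir/got.out");
my $expected = read_file("$app.out");
my $ok = $got eq $expected;
print $ok ? "snapshot matches\n" : "snapshot differs\n";

sub write_file {
    my ($name, $content) = @_;
    open my $fh, '>', $name or die "$name: $!";
    print {$fh} $content;
    close $fh;
}

sub read_file {
    my ($name) = @_;
    open my $fh, '<', $name or die "$name: $!";
    local $/;    # slurp mode
    return scalar <$fh>;
}
```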
Exercise: Expect
Take the bc6.pl example and similarly to bc7_diff.pl replace the way we compare data to use Test::Differences.
Networking devices
Introduction - pick the right abstraction level
When trying to connect to some network device using Perl you have a number of choices.
See the full stack of HTTP connections:
- Socket, and the socket function.
- IO::Socket::INET using IO::Socket
- Net::Telnet,
- Net::FTP,
- Net::SSH (wrapping ssh),
- Net::SSH::Perl,
- Net::*
- LWP::Simple, LWP
- WWW::Mechanize
- WWW::Gittip (scraping or talking to an API)
At the lowest level you can use the built-in socket function.
Using the Socket library provides several extra functions
and constants that will make your code cleaner and more portable.
See also perlipc
plackup examples/server.psgi
Socket level programming using Socket.pm
- Socket
Using the built-in "socket" function with various helper variables and functions from the standard Socket.pm module.
#!/usr/bin/perl
use strict;
use warnings;
use Socket qw(:DEFAULT :crlf);
# get the protocol id (on Linux from /etc/protocols)
my $protocol_id = getprotobyname('tcp');
socket(my $socket, PF_INET, SOCK_STREAM, $protocol_id) or die $!;
# build C structure in_addr from hostip
# if hostname is given it tries to resolve hostname to ip first
# (and returns undef if not successful)
my $host = 'localhost';
my $port = 5000;
my $host_struct = inet_aton($host);
my $sockaddr_in = pack_sockaddr_in($port, $host_struct);
connect($socket, $sockaddr_in) or die $!;
# turn off buffering on the socket
{
my $old = select($socket);
$| = 1;
select($old);
}
print $socket "GET / HTTP/1.0$CRLF$CRLF";
while (my $line = <$socket>) {
print $line;
}
print "\n";
HTTP/1.0 200 OK
Date: Sun, 03 Aug 2014 11:59:18 GMT
Server: HTTP::Server::PSGI
Content-Type: text/html
Content-Length: 2
42
Newline
\n is a newline on our current system (it is NOT always ASCII LF)
\r is a carriage return on our current system (it is NOT always ASCII CR)
use \015\012 to say CR+LF in networking applications
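A quick check makes the point concrete: the $CRLF exported by the Socket module's :crlf tag (used in the example above) is always the two bytes octal 015 and 012 (ASCII 13 and 10), regardless of what "\n" means on the current platform:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Socket qw(:crlf);    # exports $CRLF, $CR and $LF

my $manual = "\015\012";
print length($manual), "\n";                       # always 2 bytes
print $CRLF eq $manual ? "same\n" : "different\n"; # same
printf "%vd\n", $manual;                           # 13.10 - ASCII codes of CR and LF
```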
Socket level programming using IO::Socket
- IO::Socket
IO::Socket is a higher level abstraction. It hides many of the ugly parts we had to know about when using the socket() function, and it provides an OOP interface.
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket;
my $host = 'localhost';
my $port = 5000;
my $CRLF = "\015\012";
my $socket = IO::Socket::INET->new(
PeerAddr => $host,
PeerPort => $port,
Proto => 'tcp',
) or die $!;
$socket->send("GET / HTTP/1.0$CRLF$CRLF") or die $!;
my $SIZE = 100;
my $data = '';
while ($socket->read($data, $SIZE, length $data) == $SIZE) {};
print $data;
print "\n";
Net::Telnet
- Net::Telnet
#!/usr/bin/perl
use strict;
use warnings;
use Net::Telnet ();
my $t = Net::Telnet->new();
$t->open('localhost');
$t->login('smoke', '123456');
my @lines = $t->cmd("who");
print @lines;
print "\n";
print "Who am i: ", $t->cmd("whoami"), "\n\n";
Net::Telnet for HTTP
#!/usr/bin/perl
use strict;
use warnings;
my $CRLF = "\015\012";
use Net::Telnet ();
my $t = Net::Telnet->new(
Binmode => 1,
Timeout => 10,
Host => 'localhost',
Port => 5000,
);
$t->print("GET / HTTP/1.0$CRLF$CRLF") or die $!;
while (my $line = $t->getline) {
print $line;
}
print "\n";
Net::Telnet configure VLAN
#!/usr/bin/perl
use strict;
use warnings;
use Net::Telnet;
open my $out, ">>", "out.log" or die $!;
my $t = Net::Telnet->new(
Timeout => 2,
#Prompt => '/>/',
input_log => "input.log",
);
$t->open("172.30.40.146");
$t->waitfor('/User:.*$/');
$t->print("admin");
$t->waitfor('/Password:/');
$t->print("");
$t->waitfor('/>/');
$t->prompt('/\(Switching\) >/');
my @lines = $t->cmd("show vlan 5");
if (grep /VLAN ID: 5/, @lines) {
print "VLAN is already configured\n";
print "Please remove it manually and rerun the program\n";
exit;
#$t->cmd("logout");
}
$t->print("enable");
$t->waitfor('/Password:/');
$t->prompt('/\(Switching\) #/');
$t->print("");
$t->prompt('/\(Switching\) \(Vlan\) #/');
@lines = $t->cmd("vlan database");
@lines = $t->cmd("vlan 5");
#print @lines;
#if (grep /VLAN already/, @lines) {
# print "QQ 1\n";
#}
@lines = $t->cmd("vlan 5000");
#print @lines;
@lines = $t->cmd("vlan 5");
#print @lines;
#if (grep /VLAN already/, @lines) {
# print "QQ 2\n";
#}
#@lines = $t->cmd("no vlan 5");
$t->prompt('/\(Switching\) #/');
$t->cmd("exit");
$t->prompt('/--More-- or \(q\)uit/');
@lines = $t->cmd("show ?");
$t->output_record_separator("");
push @lines, $t->cmd(" ");
$t->prompt('/\(Switching\) #show/');
push @lines, $t->cmd(" ");
#print @lines;
$t->output_record_separator("\n");
$t->prompt('/\(Switching\) #/');
@lines = $t->cmd(" vlan 5"); # show was left on the prompt line !
#print @lines;
@lines = $t->cmd("show vlan 7");
#print @lines;
@lines = $t->cmd("show slot");
#print @lines;
print $out @lines;
$t->prompt('/\(Switching\) \(Vlan\) #/');
@lines = $t->cmd("vlan database");
@lines = $t->cmd("no vlan 5");
$t->prompt('/\(Switching\) #/');
$t->cmd("exit");
print "done\n";
print $out "done\n";
ftp using Net::FTP
- Net::FTP
#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;
use File::Basename qw(dirname);
use File::Spec;
my $DEBUG = 1;
if (not @ARGV) {
print "Usage:\n";
print " $0 FILE [FILES]\n";
exit;
}
my $ftp = Net::FTP->new('192.168.1.100') or die $!;
$ftp->login('gabor', 'the password of gabor') or die $!;
my $pwd = $ftp->pwd;
foreach my $file (@ARGV) {
my $dir = dirname $file;
$ftp->cwd($pwd);
$ftp->cwd($dir);
$ftp->put($file);
}
ssh using Net::SSH
- Net::SSH
Wrapping the external ssh command, therefore it works only on UNIX/Linux. See also Net::SSH::Perl.
#!/usr/bin/perl
use strict;
use warnings;
use Net::SSH qw(sshopen2);
use IO::File;
my $output = IO::File->new;
my $input = IO::File->new;
sshopen2("localhost", $output, $input) or die $!;
print $input "set\n";
print $input "echo DONE\n";
print $input "who\n";
print $input "echo DONE\n";
print $input "date\n";
print $input "echo DONE\n";
print $input "cat README\n";
print $input "exit\n";
my @out = <$output>;
my $c=0;
my @section;
while (my $line = shift @out) {
if ($line =~ /^DONE$/) {
$c++;
next;
}
push @{$section[$c]}, $line;
}
foreach my $sect (@section) {
print @$sect;
print "--------------------\n";
}
LWP::Simple
- LWP::Simple
#!/usr/bin/perl
use strict;
use warnings;
my $url = 'http://localhost:5000/';
if (defined $ARGV[0]) {
$url = $ARGV[0];
}
use LWP::Simple qw(get);
my $page = get($url);
if (defined $page) {
print $page;
} else {
print "Could not fetch $url\n";
}
print "\n";
42
LWP::UserAgent
- LWP::UserAgent
#!/usr/bin/perl
use strict;
use warnings;
my $url = 'http://localhost:5000/';
if (defined $ARGV[0]) {
$url = $ARGV[0];
}
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
$ua->agent("Internet Explorer/17.1");
my $req = HTTP::Request->new(GET => $url);
my $res = $ua->request($req);
if ($res->is_success) {
print $res->content;
} else {
print $res->status_line, "\n";
}
print "\n";
WWW::Mechanize
- WWW::Mechanize
#!/usr/bin/perl
use strict;
use warnings;
use 5.010;
use WWW::Mechanize;
my $w = WWW::Mechanize->new();
$w->get('http://localhost:5000/');
say $w->content;
42
WWW::Mechanize for Google
#!/usr/bin/perl
use strict;
use warnings;
use 5.010;
use WWW::Mechanize;
my $w = WWW::Mechanize->new();
$w->get('http://www.google.com/');
$w->submit_form(
fields => {
q => 'test automation using perl'
},
);
$w->follow_link( n => 5 );
say $w->title;
Exercise: Search on Financial Times
Go to the Financial Times and search for the word "perl". Print out how many items match.
Exercise: Compare exchange rates
Go to various sites and check the AAA/BBB exchange rate on each one of them. Print out the current exchange rate and also send an e-mail to yourself with the information. Also print an ARBITRAGE warning if there is a difference between the rates.
AAA and BBB can be any two currencies you are interested in.
Try the following sites:
http://www.xe.com/
http://www.oanda.com/
http://www.bankofcanada.ca/en/rates/exchform.html
Exercise: Telnet or SSH to Unix machines
Pick a Unix/Linux machine you are using and write a script that will telnet or ssh to it and report what network cards it has (ifconfig on Linux) and what its routing table is (route -n on Linux).
Introduction
We have a device that has a Command Line Interface (CLI). Normally you would telnet to it and type in commands.
Let's see what we can do with Net::Telnet.
In order to do that first we need to see how the device behaves when we access it manually.
Use the local telnet command to access the device and try some basic commands (e.g. type "help").
We supply an example system that shows a partially faulty system. In order to run the daemon by yourself you need to install Net::Server and Class::Accessor and type perl examples/cli-perl/bin/cli_daemon.pl
Then you can access it using telnet localhost 8000
When accessing it using a telnet client you can use the built-in username: admin and password: nimda.
Connect to the device
Setting both
Dump_log => 'dump.log',
Input_log => 'input.log',
in the constructor of Net::Telnet will allow us to see what is really going on on the connection.
We also add a call to wait for something that likely won't show up
in the output. Depending on where the demo application (the daemon)
is running you might need to change the $hostname variable.
#!/usr/bin/perl
use strict;
use warnings;
use Net::Telnet;
my $port = 8000;
my $hostname = 'localhost';
my $telnet = Net::Telnet->new(
Port => $port,
Host => $hostname,
);
print "opened\n";
{
my ($prematch, $match) = $telnet->waitfor('/not likely to show up/');
}
print "after wait\n";
Running the script we notice that after printing "opened" it waits quite a long time and never prints "after wait".
This happened because waitfor was waiting for a string that never
showed up. Hence it gave up waiting after the built-in timeout
period. Once it reached the timeout it called the default errmode()
function which is the "die" function. So the script never reached
the second print() and did not have a chance to print anything.
Reduce timeout
- Reduce the timeout
- Wait for a string we know will show up
- After seeing Username: we should type in 'admin', the username
#!/usr/bin/perl
use strict;
use warnings;
use Net::Telnet;
my $port = 8000;
my $hostname = 'localhost';
my $telnet = Net::Telnet->new(
Port => $port,
Host => $hostname,
Timeout => 1,
);
print "opened\n";
{
my ($prematch, $match) = $telnet->waitfor('/Username:.*$/');
if ($prematch =~ /Welcome/) {
print "welcome printed\n";
}
$telnet->print('admin');
}
print "after wait\n";
- Manually check out what this server does.
- Turn the two example client scripts into tests using Test::More.
- Continue and write more tests for this telnet server.
- Write a test that makes sure that when someone types in 'help' the system does not print "invalid command".
- Make sure you can write a test that can handle cases such as the '??'.
Our test script
#!/usr/bin/perl
use strict;
use warnings;
# we should make sure no other test runs on the same port
my $port = 8000;
# fork and start server in the child process
my $pid = start_server();
sleep 1;
# load the Testing tools in the main process
require Test::Most;
import Test::More;
require Net::Telnet;
our $TODO;
plan(tests => 22);
Test::Most::bail_on_fail();
diag("Server started (pid: $pid)");
# shut down the server (even if the test stopped in the middle)
END {
stop_server($pid);
}
my $telnet_invalid = _new();
ok(1, 'opened (telnet_invalid)');
{
my ($prematch, $match) = $telnet_invalid->waitfor('/Username:.*$/');
like($prematch, qr/^Welcome$/, "welcome printed (telnet_invalid)");
$telnet_invalid->print('admin');
}
{
my ($prematch, $match) = $telnet_invalid->waitfor('/Password:.*$/');
is($prematch, '', 'empty prematch (telnet_invalid)');
$telnet_invalid->print('bad password');
}
$telnet_invalid->errmode('return');
TODO: {
local $TODO = "System allows login with incorrect password";
my ($prematch, $match) = $telnet_invalid->waitfor('/Invalid login/');
is($match, "Invalid login", "Invalid login message");
}
my $telnet = _new(); # get the telnet client
ok($telnet, "opened (telnet)");
{
my ($prematch, $match) = $telnet->waitfor('/Username:.*$/');
like($prematch, qr/^Welcome$/, 'welcome printed (telnet)');
$telnet->print('admin');
}
{
my ($prematch, $match) = $telnet->waitfor('/Password:.*$/');
is($prematch, '', 'empty prematch (telnet)');
$telnet->print('nimda');
}
{
my ($prematch, $match) = $telnet->waitfor('/\w+>/');
is($prematch, '', 'empty prematch (telnet)');
is($match, 'cli>', 'prompt is correct (telnet)');
}
{
my @resp = $telnet->cmd('');
is(scalar(@resp), 1, '1 line in response to "" (telnet)');
is($resp[0], '', 'ENTER (telnet)');
}
{
my @resp = $telnet->cmd('working?');
is(scalar(@resp), 1, "one line in response (telnet)");
like($resp[0], qr/Invalid command: 'working\?'/, 'invalid command (telnet)');
}
{
my @resp = $telnet->cmd('help');
is(scalar(@resp), 7, '7 lines in response to "help" (telnet)');
like($resp[0], qr/help\s+-\s+this help/, 'invalid command (telnet)');
# TODO: test more lines of the help?
}
TODO: {
my @resp;
eval {
@resp = $telnet->cmd('?');
};
local $TODO = "? does not work: $@" if $@;
is(scalar(@resp), 7, '7 lines in response to "?" (telnet)');
push @resp, '' if $@; # to avoid warning on undef;
like($resp[0], qr/help\s+-\s+this help/, 'invalid command (telnet)');
# TODO: test more lines of the help?
$telnet->buffer_empty;
}
{
my @resp = $telnet->cmd('');
is(scalar(@resp), 1, '1 line in response to "" (telnet)');
is($resp[0], '', 'ENTER (telnet)');
}
# TODO: how to catch the final Goodbye?
{
my ($prematch, $match) = $telnet->waitfor('/.*$/');
$telnet->print('exit');
is($prematch, '', 'prematch is empty of "exit" (telnet)');
is($match, '', 'match is empty "exit" (telnet)');
# is $telnet->lastline, '';
ok(1, 'done (telnet)');
#my @resp = $telnet->cmd('exit');
#is @resp, 1, "one line in response";
#like($resp[0], qr/Good bye/, 'Goodbye');
}
exit;
# print enable
# wait for Password:
##########################################
sub stop_server {
my ($pid) = @_;
if ($pid) {
diag("killing $pid");
kill 3, $pid;
}
}
sub _new {
# TODO catch error connecting to server and report nicely
my $t = Net::Telnet->new(
Port => $port,
Prompt => '/^.*>\s*$/m',
Host => 'localhost',
Dump_log => "dump.log",
Timeout => 1,
);
return $t;
}
sub start_server {
my $pid = fork();
if (not defined $pid) {
die "Cannot fork\n";
}
if ($pid) { # parent
return $pid;
} else { # child
require FindBin;
no warnings;
exec "$^X $FindBin::Bin/../bin/cli_daemon.pl --port $port --stderr";
}
}
# TODO:
# enable mode, change password of regular user,
# change password of enabled user
# BUG: cannot set password longer than 5 characters
# show config (in regular mode)
# set config (in enabled mode)
Testing Web Applications
What can be tested?
- Fetching pages: type in a URL.
- Check if the HTML is correct.
- Check if elements of a page are in place.
- Follow links.
- Click on the link to the registration form,
- Fill in the fields (you'll have to play with this and fill in the fields with good values, bad values, find the edge cases etc.)
- White box: Check if the changes also took place in the backend. (e.g. in the database.)
- Check if you get back the correct response page.
- ...
Extreme web site testing
- Send data out of band, on lower level protocols, beneath the application.
It is not enough to test the web application as it is. Users can actually send any request to your web server. Even if the client-side part of your web application (written in HTML/CSS/JavaScript) behaves well, some people will try to send requests that would not be generated by your application. Sometimes that will happen by mistake - copy-pasting a URL incorrectly. Sometimes on purpose, when trying to attack your application.
If you want to seriously test your web application you'll have to do the same. On the higher level protocols you can send various HTTP requests similar to the valid ones, but with invalid data. You can also send invalid fields, and you can try to attack your own application on low level protocols and send invalid HTTP headers.
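As an illustration, nothing stops a test from hand-crafting a request no browser would ever produce. The sketch below only builds such a raw request string (sending it would use the Socket or IO::Socket code shown earlier); the method name and header values are deliberately bogus:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $CRLF = "\015\012";

# An intentionally invalid request: unknown method, an absurdly long
# header value, and a Content-Length that lies about the body size.
my $request = join $CRLF,
    'FROBNICATE /login HTTP/1.0',
    'Host: localhost',
    'X-Padding: ' . ('A' x 10_000),
    'Content-Length: 999999',
    '',
    'short body';

print length($request), " bytes\n";
```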
Tools
- LWP::Simple
- LWP::UserAgent
- WWW::Mechanize - based on the LWP library
- Test::WWW::Mechanize
- HTML::Lint
- Test::HTML::Lint
- Test::HTML::Tidy
- Selenium
Small test HTTP server
We are using a small portable HTTP server built using HTTP::Daemon which is part of libwww-perl for the examples.
You can also run it by typing $ perl examples/www/server/server.pl
or type the following to get help $ perl examples/www/server/server.pl --help
Fetching a static page
- LWP::Simple
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;
use LWP::Simple qw(get);
my $home = get 'http://localhost:8080/';
ok $home, 'There is a response';
Fetch a page and check if there is response at all.
$ perl static.t
1..1
ok 1 - There is a response
Fetching a not-existing static page
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;
use LWP::Simple;
my $home = get 'http://localhost:8080/xx';
ok $home, 'There is a response';
Fetch a page and check if there is response.
$ perl static_bad.t
1..1
not ok 1 - There is a response
# Failed test (static_bad.t at line 10)
# Looks like you failed 1 tests of 1.
Checking good HTML
- HTML::Lint
- Test::HTML::Lint
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;
use Test::HTML::Lint;
use LWP::Simple qw(get);
my $home = get 'http://localhost:8080/';
ok $home, 'There is a response';
html_ok $home, 'HTML OK';
$ perl static_lint.t
1..2
ok 1 - There is a response
ok 2 - HTML OK
Checking bad HTML
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;
use Test::HTML::Lint;
use LWP::Simple qw(get);
my $html = get 'http://localhost:8080/bad.html';
ok $html, 'There is a response';
html_ok $html, 'HTML OK';
1..2
ok 1 - There is a response
not ok 2 - HTML OK
# Failed test 'HTML OK'
# at examples/www/static_lint_bad.t line 11.
# Errors: HTML OK
# (3:13) </a> with no opening <a>
# (8:1) <h1> at (3:1) is never closed
# Looks like you failed 1 test of 2.
What is this bad HTML ?
{% embed include file="src/examples/www/server/html/bad.html" %}
HTML::Tidy and Test::HTML::Tidy
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;
use Test::HTML::Tidy;
use LWP::Simple qw(get);
my $home = get 'http://localhost:8080/';
ok $home, 'There is a response';
html_tidy_ok $home, 'HTML OK';
$ perl examples/www/static_tidy.t
1..2
ok 1 - There is a response
not ok 2 - HTML OK
# Failed test 'HTML OK'
# in examples/www/static_tidy.t at line 11.
# Messages: HTML OK
# examples/www/static_tidy.t (1:1) Warning: missing <!DOCTYPE> declaration
# Looks like you failed 1 test of 2.
Test using W3C validator
- Module to access that web site.
- Module to access the same service installed on a local web server.
- Module to access the validating code without a web server.
Use a local copy of the W3C validator
On Ubuntu install the following packages using sudo aptitude install w3c-dtd-xhtml w3c-linkchecker w3c-markup-validator
dpkg -L w3c-markup-validator
shows that the sample apache configuration file is at
/etc/w3c/w3c-markup-validator-apache-perl.conf
and the executable is at /usr/lib/cgi-bin/check
Change /etc/hosts so that w3c.local resolves to 127.0.0.1.
Copy the Apache configuration file and wrap it with a virtual host configuration.
Then access the page via http://w3c.local/w3c-markup-validator/
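The virtual host wrapper might look roughly like this (the vhost file name is hypothetical; the Include path is the one reported by dpkg above):

```apache
# e.g. /etc/apache2/sites-available/w3c.local.conf (hypothetical path)
<VirtualHost *:80>
    ServerName w3c.local
    # pull in the sample configuration shipped with the package
    Include /etc/w3c/w3c-markup-validator-apache-perl.conf
</VirtualHost>
```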
{% embed include file="src/examples/www/w3c.conf" %}
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use WebService::Validator::HTML::W3C;
my $v = WebService::Validator::HTML::W3C->new(
detailed => 1,
validator_uri => 'http://w3c.local/w3c-markup-validator/check',
);
if ( $v->validate_file('index.html') ) {
if ( $v->is_valid ) {
print "OK\n";
printf ("%s is valid\n", $v->uri);
} else {
print "Failed\n";
printf ("%s is not valid\n", Dumper $v->uri);
printf("Num errors %s\n", $v->num_errors);
#print $v->errors;
print $v->_content;
#foreach my $error ( @{$v->errors} ) {
# printf("%s at line %d\n", $error->msg, $error->line);
#}
}
} else {
printf ("Failed to validate the website: %s\n", $v->validator_error);
}
LWP::Simple and LWP
LWP::Simple is, well, simple.
LWP on the other hand enables you to do a lot of things
- Setting the User Agent
- Support for cookies
- Authentication
- Proxy Servers
- Parse HTML
- Write robots
But it is not simple.
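As a taste of those extra capabilities, here is a small sketch using LWP::UserAgent with a custom User-Agent string and cookie support. The URL is the local test server used elsewhere in this chapter; adjust it to your setup.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Cookies;

my $ua = LWP::UserAgent->new(
    agent      => 'MyTester/1.0',       # custom User-Agent string
    cookie_jar => HTTP::Cookies->new,   # in-memory cookie support
);

my $response = $ua->get('http://localhost:8080/');
if ( $response->is_success ) {
    print $response->decoded_content;
} else {
    die $response->status_line, "\n";
}
```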
WWW::Mechanize
Is simple, and very powerful (but does not support JavaScript).
Web based Calculator with WWW::Mechanize
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
my $url = 'http://localhost:8080';
my $mech = WWW::Mechanize->new;
$mech->get($url);
$mech->follow_link( text => 'calculator' );
$mech->submit_form(
fields => {
a => 23,
b => 19,
},
);
print $mech->content;
Output:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01
Transitional//EN""http://www.w3.org/TR/html4/loose.dtd">
<html>
<head><title>Result</title></head>
<body>
<h1 align="center">42</h1>
</body>
</html>
Testing with WWW::Mechanize
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 14;
use WWW::Mechanize;
#use Test::HTML::Tidy;
my $SERVER = 'http://localhost:8080';
my $url = $SERVER;
my $mech = WWW::Mechanize->new;
$mech->get($url);
is $mech->status, 200, 'main page fetched';
like $mech->content, qr{Our languages}, 'content';
$mech->follow_link( text => 'calculator' );
is $mech->status, 200, 'calculator page fetched';
like $mech->content, qr{Calculator}, 'start page ok';
#html_tidy_ok $mech->content, "html is tidy";
my @forms = $mech->forms;
is @forms, 1, 'there is one form on this page';
# Shall we check if all the parts of the form are there?
is $forms[0]->action, "$SERVER/cgi/cgi_sum.pl", "action URL is correct";
my @inputs = $forms[0]->inputs;
is @inputs, 3, "there are 3 input fields on this form";
{
my $a = $forms[0]->find_input('a');
isa_ok $a, 'HTML::Form::TextInput';
my $b = $forms[0]->find_input('b');
isa_ok $b, 'HTML::Form::TextInput';
my $s = $forms[0]->find_input('submit');
isa_ok $s, 'HTML::Form::SubmitInput';
}
# Shall we check the name of the form ?
$mech->submit_form(
fields => {
a => 23,
b => 19,
},
);
like $mech->content, qr{<h1 align="center">42</h1>}, 'get 42';
#html_tidy_ok $mech->content, "result html is tidy";
$mech->back;
my @comps = (
[23, 19, 42],
[1, 2, 3],
[1, -1, 0],
);
foreach my $c (@comps) {
$mech->submit_form(
fields => {
a => $c->[0],
b => $c->[1],
},
);
like $mech->content,
qr{<h1 align="center">$c->[2]</h1>},
"$c->[0]+$c->[1]=$c->[2]";
$mech->back;
}
Output:
1..14
ok 1 - main page fetched
ok 2 - content
ok 3 - calculator page fetched
ok 4 - start page ok
ok 5 - there is one form on this page
ok 6 - action URL is correct
ok 7 - there are 3 input fields on this form
ok 8 - The object isa HTML::Form::TextInput
ok 9 - The object isa HTML::Form::TextInput
ok 10 - The object isa HTML::Form::SubmitInput
ok 11 - get 42
ok 12 - 23+19=42
ok 13 - 1+2=3
ok 14 - 1+-1=0
Test::WWW::Mechanize
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 8;
use Test::WWW::Mechanize;
#use Test::HTML::Tidy;
my $SERVER = 'http://localhost:8080';
my $url = $SERVER;
my $mech = Test::WWW::Mechanize->new;
$mech->get_ok($url);
$mech->content_like( qr{Our languages}, 'content' );
$mech->follow_link_ok({ text => 'calculator' });
$mech->content_like( qr{Calculator}, 'start page ok' );
#html_tidy_ok( $mech->content, "html is tidy" );
$mech->submit_form(
fields => {
a => 23,
b => 19,
},
);
$mech->content_like( qr{<h1 align="center">42</h1>}, 'get 42' );
#html_tidy_ok( $mech->content, "result html is tidy" );
$mech->back;
my @comps = (
[23, 19, 42],
[1, 2, 3],
[1, -1, 0],
);
foreach my $c (@comps) {
$mech->submit_form(
fields => {
a => $c->[0],
b => $c->[1],
},
);
$mech->content_like(
qr{<h1 align="center">$c->[2]</h1>},
"$c->[0]+$c->[1]=$c->[2]");
$mech->back;
}
Output:
1..8
ok 1 - GET http://localhost:8080
ok 2 - content
ok 3 - Followed link with "text=calculator"
ok 4 - start page ok
ok 5 - get 42
ok 6 - 23+19=42
ok 7 - 1+2=3
ok 8 - 1+-1=0
Login to Act using Mechanize
use strict;
use warnings;
use Test::More;
use WWW::Mechanize;
plan tests => 3;
my $url = 'http://www.yapcna.org/yn2013';
my $w = WWW::Mechanize->new;
$w->get("$url/login");
unlike $w->content, qr{logout};
#diag explain $w->cookie_jar;
#print $w->current_form->dump;
$w->submit_form(
form_number => 1,
fields => {
'credential_0' => 'szabgab',
'credential_1' => $ARGV[0],
},
);
like $w->content, qr{logout};
my $other = WWW::Mechanize->new;
$other->get("$url/login");
unlike $other->content, qr{logout};
More things to test
- Check if you can access restricted pages without logging in ...
- or after logging out ...
- or after timeout expired
- logging-in, check good/bad passwords
- Upon unsuccessful login, see if the return page does NOT contain the password
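A sketch of the first two items on this list, assuming a hypothetical /private page and made-up form field names; adapt them to the real application:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;
use WWW::Mechanize;

my $url = 'http://localhost:8080';

# autocheck => 0 so Mechanize does not die on HTTP error responses
my $w = WWW::Mechanize->new( autocheck => 0 );

# Hypothetical restricted page: its content must be hidden before login.
$w->get("$url/private");
unlike $w->content, qr{Secret report}, 'restricted content hidden before login';

# Log in (made-up field names), log out, and check the page again.
$w->get("$url/login");
$w->submit_form( fields => { user => 'foo', password => 'bar' } );
$w->follow_link( text => 'logout' );
$w->get("$url/private");
unlike $w->content, qr{Secret report}, 'restricted content hidden after logout';
```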
Test without a server: Test::WWW::Mechanize::PSGI
PSGI-based applications can be tested without even launching a server.
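A minimal sketch, assuming Plack and Test::WWW::Mechanize::PSGI are installed; the in-line application here stands in for the real one:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;
use Test::WWW::Mechanize::PSGI;

# A tiny in-process PSGI application standing in for the real one.
my $app = sub {
    my $env = shift;
    return [ 200,
        [ 'Content-Type' => 'text/html' ],
        ['<html><head><title>Hi</title></head><body>Our languages</body></html>'],
    ];
};

# Same API as Test::WWW::Mechanize, but no server or sockets are involved.
my $mech = Test::WWW::Mechanize::PSGI->new( app => $app );
$mech->get_ok('/');
$mech->content_like( qr{Our languages}, 'content' );
```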
Test page with JavaScript
- WWW::Mechanize::Firefox
- There is no single JavaScript engine, and the engines used by the various versions of IE are certainly not available as stand-alone components.
- There are several Open Source implementations
- Test only the data as it was sent
- Use a real browser (e.g. driven by WWW::Mechanize::Firefox that needs MozRepl)
- Use Selenium
Selenium
Selenium documentation
- Selenium
- Selenium tutorial by Barbie
- Selenium tutorial on PerlMonks
- Using Firefox, install the Selenium IDE
Selenium IDE
- Selenium IDE
- Download the xpi file of version 2.5.0 using Firefox
- The company web site
- YAPC::EU (select one of the talks)
Reset the web site to the point where we need to start testing.
We can install the Selenium IDE in Firefox, record a session interacting with a web site, and then replay it with Firefox.
We start the recording by clicking on Tools/Selenium IDE. A new window shows up; that is the Selenium IDE. We can stop the recording by pressing the red button. Then we can export the recorded script in several languages.
Launch Selenium server
Download the jar file from Selenium HQ.
java -jar /home/gabor/Downloads/selenium-server-standalone-2.42.2.jar
Selenium DuckDuckGo
- Selenium::Remote::Driver
- get
- get_title
- quit
use strict;
use warnings;
use 5.010;
use Selenium::Remote::Driver;
my $driver = Selenium::Remote::Driver->new(
auto_close => 0,
);
$driver->get('https://duckduckgo.com/');
say $driver->get_title();
#sleep 5;
#$driver->quit();
Selenium using Chrome
- Chrome
- browser_name
Follow the instructions in the wiki
java -Dwebdriver.chrome.driver="/home/gabor/Downloads/chromedriver" -jar /home/gabor/Downloads/selenium-server-standalone-2.42.2.jar
use strict;
use warnings;
use 5.010;
use Selenium::Remote::Driver;
my $driver = Selenium::Remote::Driver->new(
browser_name => 'chrome',
);
$driver->get('https://duckduckgo.com/');
sleep 5;
say $driver->get_title();
$driver->quit();
Selenium DuckDuckGo Search
- find_element
- Selenium::Remote::WDKeys
- KEYS
- send_keys
use strict;
use warnings;
use 5.010;
use Selenium::Remote::Driver;
use Selenium::Remote::WDKeys qw(KEYS);
my $driver = Selenium::Remote::Driver->new;
$driver->get('https://duckduckgo.com/');
my $input = $driver->find_element('search_form_input_homepage', 'id');
$input->send_keys('perl');
sleep 5;
$input->send_keys(KEYS->{'enter'});
sleep 5;
say $driver->get_title();
$driver->quit();
Selenium DuckDuckGo Test
- Test::Selenium::Remote::Driver
- get_ok
- title_is
- body_text_contains
Test::Selenium::Remote::Driver
use strict;
use warnings;
use 5.010;
use Test::More tests => 4;
use Test::Selenium::Remote::Driver;
use Selenium::Remote::WDKeys qw(KEYS);
my $driver = Test::Selenium::Remote::Driver->new;
$driver->get_ok('https://duckduckgo.com/');
$driver->title_is('DuckDuckGo', 'title');
my $input = $driver->find_element('search_form_input_homepage', 'id');
$input->send_keys('perl');
#sleep 5;
$input->send_keys(KEYS->{'enter'});
#sleep 5;
$driver->title_is('perl at DuckDuckGo', 'title');
$driver->body_text_contains('Official Site');
$driver->quit();
Selenium locator
- Selenium::Remote::WebElement
- find_element
- default_finder
find_element(SEARCH_STRING, [MODE])
return a Selenium::Remote::WebElement object of the first matched element.
MODE:
class
class_name
css
id
link
link_text
partial_link_text
tag_name
name
xpath
MODE defaults to xpath
The default mode can be set via the 'default_finder' parameter of the constructor.
Selenium locator example
Run perl examples/www/server/server.pl for the sample web application.
use strict;
use warnings;
use 5.010;
use Test::More;
use Test::Selenium::Remote::Driver;
use Selenium::Remote::WDKeys qw(KEYS);
if (not Test::Selenium::Remote::Driver->server_is_running()) {
plan skip_all => 'The Selenium server must be running for this test';
}
plan tests => 1;
my $url = 'http://localhost:8080/';
my $driver = Test::Selenium::Remote::Driver->new;
subtest plain => sub {
plan tests => 13;
$driver->get_ok($url);
my $h1 = $driver->find_element('h1', 'tag_name');
is $h1->get_text, 'Our languages';
my $h2;
eval {
$h2 = $driver->find_element('h2', 'tag_name');
};
ok !$@, 'found h2';
is $driver->find_element('calculator', 'id')->get_text, 'calculator', 'find by id';
my $js = $driver->find_element('js', 'class');
is $js->get_text, 'JavaScript', 'js located';
my @jses = $driver->find_elements('js', 'class');
is scalar @jses, 3, 'count of js-es';
is $jses[0]->get_text, 'JavaScript', 'link 1';
is $jses[1]->get_text, 'js calculator', 'link 2';
is $jses[2]->get_text, 'jquery calculator', 'link 3';
#my $link = $driver->find_element('a[class="js"]', 'css');
my $link = $driver->find_element('a.js', 'css');
is $link->get_text, 'js calculator';
my $div = $driver->find_element('//div', 'xpath');
is $div->get_attribute('id'), 'links', 'links div';
my $calc_link = $driver->find_child_element($div, 'js calculator', 'link_text');
is $calc_link->get_attribute('href'), "${url}js_calc.html", 'href';
my $jq_link = $driver->find_child_element($div, 'jquery', 'partial_link_text');
is $jq_link->get_attribute('href'), "${url}jquery_calc.html", 'href';
};
Selenium content
- server_is_running
- body_text_contains
- content_contains
- get_text
use strict;
use warnings;
use 5.010;
use Test::More;
use Test::Selenium::Remote::Driver;
use Selenium::Remote::WDKeys qw(KEYS);
if (not Test::Selenium::Remote::Driver->server_is_running()) {
plan skip_all => 'The Selenium server must be running for this test';
}
plan tests => 1;
my $url = 'http://localhost:8080/';
my $driver = Test::Selenium::Remote::Driver->new;
subtest plain => sub {
plan tests => 4;
$driver->get_ok($url);
$driver->title_is('Hello world', 'title');
$driver->body_text_contains('Our languages');
$driver->content_contains('<h1>Our languages</h1>');
};
- body_text_contains - disregard HTML element
- content_contains - check in the raw HTML
- find_element will throw exception if cannot find element
- find_element_ok of the test module can only use the default locator
content_* methods
- $s->content_like($regex [,$desc])
- $s->content_unlike($regex [,$desc])
- $s->content_contains( $str [, $desc ] )
- $s->content_lacks( $str [, $desc ] )
body_text_*
- $s->body_text_like( $regex [, $desc ] )
- $s->body_text_unlike( $regex [, $desc ] )
- $s->body_text_contains( $str [, $desc ] )
- $s->body_text_lacks( $str [, $desc ] )
Selenium in the calc example
use strict;
use warnings;
use 5.010;
use Test::More;
use Test::Selenium::Remote::Driver;
use Selenium::Remote::WDKeys qw(KEYS);
if (not Test::Selenium::Remote::Driver->server_is_running()) {
plan skip_all => 'The Selenium server must be running for this test';
}
plan tests => 2;
my $url = 'http://localhost:8080/';
my $s = Test::Selenium::Remote::Driver->new(
default_finder => 'css',
);
subtest plain => sub {
plan tests => 10;
$s->get_ok($url);
$s->click_element_ok('#calculator');
$s->title_is('Calculator Test page');
$s->type_element_ok('input[name=a]', 19);
$s->type_element_ok('input[name=b]', 23);
sleep 3;
$s->click_element_ok('input[name=submit]');
sleep 3;
$s->title_is('Result');
$s->element_text_is('h1', 42);
$s->go_back_ok;
$s->go_back_ok;
sleep 1;
};
subtest js => sub {
plan tests => 16;
$s->click_element_ok('#jscal');
$s->title_is('JS Calculator');
$s->type_element_ok('input[name=a]', 19);
$s->type_element_ok('input[name=b]', 23);
$s->content_unlike(qr/42/);
$s->content_unlike(qr/1923/);
$s->content_like(qr/Foo Bar written by javascript/);
sleep 3;
$s->click_element_ok('#addstr');
$s->alert_text_is('1923');
$s->accept_alert_ok;
$s->content_unlike(qr/1923/);
sleep 3;
$s->click_element_ok('#addnum');
$s->alert_text_is('42');
$s->accept_alert_ok;
$s->content_unlike(qr/42/);
$s->go_back_ok;
};
$s->quit;
Selenium examples with JavaScript
use strict;
use warnings;
use 5.010;
use Test::More;
use Test::Selenium::Remote::Driver;
use Selenium::Remote::WDKeys qw(KEYS);
if (not Test::Selenium::Remote::Driver->server_is_running()) {
plan skip_all => 'The Selenium server must be running for this test';
}
plan tests => 1;
my $url = 'http://localhost:8080/';
my $s = Test::Selenium::Remote::Driver->new(
default_finder => 'css',
);
subtest jquery => sub {
plan tests => 10;
$s->get_ok($url);
$s->click_element_ok('#jquerycalc');
$s->title_is('JQuery based Calculator', 'title');
$s->type_element_ok('input[name=a]', 19);
$s->type_element_ok('input[name=b]', 23);
$s->element_text_is('#result', 'Result:', 'result');
sleep 3;
$s->click_element_ok('#addstr');
$s->element_text_is('#result', 'Result: 1923');
sleep 3;
$s->click_element_ok('#addnum');
$s->element_text_is('#result', 'Result: 42');
sleep 3;
};
$s->quit;
Selenium examples with Ajax
use strict;
use warnings;
use 5.010;
use Test::More;
use Test::Selenium::Remote::Driver;
use Selenium::Remote::WDKeys qw(KEYS);
if (not Test::Selenium::Remote::Driver->server_is_running()) {
plan skip_all => 'The Selenium server must be running for this test';
}
plan tests => 1;
my $url = 'http://localhost:8080/';
my $s = Test::Selenium::Remote::Driver->new(
default_finder => 'css',
);
subtest jquery => sub {
plan tests => 15;
$s->get_ok($url);
$s->click_element_ok('#ajaxcalc');
$s->title_is('Ajax Calculator', 'title');
$s->type_element_ok('input[name=a]', 19);
$s->type_element_ok('input[name=b]', 23);
$s->element_text_is('#result', '', 'result');
sleep 2;
$s->click_element_ok('#add');
$s->element_text_is('#result', '42');
sleep 2;
$s->clear_element_ok('#a');
$s->clear_element_ok('#b');
$s->type_element_ok('#a', 2);
$s->type_element_ok('#b', 5);
$s->type_element_ok('#sleep', 3);
$s->click_element_ok('#add');
sleep 4; # this one is really needed
$s->element_text_is('#result', '7');
sleep 2;
};
$s->quit;
WWW::Mechanize::PhantomJS
- WWW::Mechanize::PhantomJS
Speed up Selenium with PhantomJS which is a headless Webkit browser using GhostDriver and WWW::Mechanize::PhantomJS.
use strict;
use warnings;
use 5.010;
use WWW::Mechanize::PhantomJS;
my $mech = WWW::Mechanize::PhantomJS->new();
$mech->get('http://google.com');
say $mech->content;
use strict;
use warnings;
use 5.010;
use WWW::Mechanize::PhantomJS;
my $mech = WWW::Mechanize::PhantomJS->new();
$mech->get('http://localhost:8080/');
say $mech->content;
Testing Dancer
use strict;
use warnings;
use t::lib::Test qw(start);
my $password = shift @ARGV;
my $run = start($password);
eval "use Test::More";
eval "use Test::Deep";
require Test::WWW::Mechanize;
plan( skip_all => 'Unsupported OS' ) if not $run;
my $url = "http://localhost:$ENV{APP_PORT}";
plan( tests => 2 );
my $w = Test::WWW::Mechanize->new;
$w->get_ok($url);
$w->content_like( qr{Welcome to your application}, 'content ok' );
...
package t::lib::Test;
use strict;
use warnings;
use base 'Exporter';
our @EXPORT = qw(start stop);
use File::Basename qw(basename);
use File::Spec;
use File::Temp qw(tempdir);
my $process;
sub start {
my ($password) = @_;
#return if $^O !~ /win32/i; # this test is for windows only now
my $dir = tempdir( CLEANUP => 1 );
# print STDERR "# $dir\n";
my ($cnt) = split /_/, basename $0;
$ENV{APP_PORT} = 3000+$cnt;
my $root = File::Spec->catdir( $dir, 'application' );
system
"$^X -Ilib script/setup.pl --root $root" and die $!;
if ( $^O =~ /win32/i ) {
require Win32::Process;
#import Win32::Process;
Win32::Process::Create( $process, $^X,
"perl -Ilib -It\\lib $root\\bin\\app.pl",
0, Win32::Process::NORMAL_PRIORITY_CLASS(), "." )
|| die ErrorReport();
} else {
$process = fork();
die "Could not fork() while running on $^O" if not defined $process;
if ($process) { # parent
sleep 1;
return $process;
}
my $cmd = "$^X -Ilib -It/lib $root/bin/app.pl";
exec $cmd;
}
return 1;
}
sub stop {
return if not $process;
if ( $^O =~ /win32/i ) {
$process->Kill(0);
} else {
kill 9, $process;
}
}
END {
stop();
}
1;
Exercise: MetaCPAN
- Visit MetaCPAN, search for something and observe the links. Has the small logo appeared?
- Visit Expect, click on 'Jump to version' and select 1.25. It should go here: [Expect](https://metacpan.org/pod/release/SZABGAB/Expect-1.25/lib/Expect.pm)
- Visit MetaCPAN, can you automate logging in?
Exercise: blogs.perl.org
Test blogs. You can also contribute test cases to my tests.
Exercise: Testing Smolder
In an earlier chapter we used Smolder to collect the results of the test executions. Smolder itself is a web application using JavaScript. Let's test it.
Let's start by assuming smolder is running and try to access its front page and login with an existing user.
Then we should try to create a new user and log in with that.
After that, as we cannot really know which user is still available let's create a new .smolder directory in some temporary place (use File::Temp for this). Create a configuration file, launch Smolder and then access it using your test script.
Exercise: Act
Take the script logging in to Act and change it to use Test::WWW::Mechanize.
Servers
Net::Server
- Net::Server
We are going to use the Net::Server module to create various server processes.
Skeleton Server
First we create a skeleton server that does not do anything.
#!/usr/bin/perl
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/lib";
use SkeletonServer;
SkeletonServer->run(port => 8000);
package SkeletonServer;
use warnings;
use strict;
use base 'Net::Server';
sub process_request {
# do your stuff
}
1;
Simple Echo Server
The Simple Echo Server lets you telnet to it and echoes back every line you type.
#!/usr/bin/perl
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/lib";
use SimpleEchoServer;
SimpleEchoServer->run(port => 8000);
package SimpleEchoServer;
use warnings;
use strict;
use base 'Net::Server';
my $EOL = "\015\012";
sub process_request {
my $self = shift;
while( my $line = <STDIN> ) {
$line =~ s/\r?\n$//;
print qq(You said "$line"$EOL);
last if $line eq "bye";
}
}
1;
Echo Server
The Echo Server lets you telnet to it and echoes back every line you type, just like the Simple Echo Server, but once connected you have 5 seconds between any two lines you type, or it prints Timeout and closes the connection. It prints a message both to the client and to the console (STDERR) of the server.
#!/usr/bin/perl
use strict;
use warnings;
use FindBin;
use lib "$FindBin::Bin/lib";
use EchoServer;
EchoServer->run(port => 8000);
package EchoServer;
use warnings;
use strict;
use base 'Net::Server';
use English qw( -no_match_vars ) ;
my $timeout = 5; # give the user 5 seconds to type a line
my $EOL = "\015\012";
sub process_request {
my $self = shift;
print "Welcome to the echo server$EOL";
print "Type 'bye' to disconnect.$EOL";
eval {
local $SIG{ALRM} = sub { die "Timeout\n" };
alarm($timeout);
while( my $line = <STDIN> ) {
alarm($timeout);
$line =~ s/\r?\n$//;
print qq(You said "$line"$EOL);
last if $line eq "bye";
}
};
alarm(0);
if ( $EVAL_ERROR ) {
if ( $EVAL_ERROR eq "Timeout\n" ) {
print "Timed Out. Disconnecting...$EOL";
print STDERR "Client timed Out.\n";
} else {
print "Unknown internal error. Disconnecting...$EOL";
print STDERR "Unknown internal error: $EVAL_ERROR\n";
}
} else {
print STDERR "User said bye\n";
}
return;
}
1;
Complex network servers
There are many other options to build a network server. Besides providing more complex interaction with a single client, one can configure the server to handle multiple clients at the same time.
Just replace "use base 'Net::Server';" with "use base 'Net::Server::PreFork';" and you have a preforking server.
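A minimal sketch of that one-line change, reusing the echo logic of the Simple Echo Server above:

```perl
package PreForkEchoServer;
use strict;
use warnings;
use base 'Net::Server::PreFork';   # the only change compared to Net::Server

my $EOL = "\015\012";

sub process_request {
    my $self = shift;
    while ( my $line = <STDIN> ) {
        $line =~ s/\r?\n$//;
        print qq(You said "$line"$EOL);
        last if $line eq 'bye';
    }
}

# Several pre-forked child processes will now serve clients in parallel.
PreForkEchoServer->run( port => 8000 );

1;
```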
Testing networking devices
Elements
- Do some hardware setup, connect some wires
- Access the administrative interface to configure the device
- Configure devices on all sides of our box
- Run test
- Check results
Network testing
There are many kinds of networking appliances and applications that need testing.
Firewalls will normally need 3 computers: the firewall itself and one machine on each side of it. One would configure the firewall and then send packets from one side to the other. Then check whether the packets got through or not.
Routers or switches might need more computers connected to the device under testing.
Proxies can be tested in a way similar to firewalls.
Hardware setup
We cannot yet fully automate this part.
Access the administrative interface
- CLI - Command Line Interface (telnet)
- SNMP
- Web server with web GUI
- Proprietary protocol with a Java Applet loaded from the box
- Proprietary protocol with some locally installed GUI
CLI testing
Here you have a device, such as a router or some other box, connected to the network. Normally you would telnet to it and then interactively test the various commands to see if they work. In addition you might be able to fetch the raw configuration information and validate that the configuration values were written correctly.
Going even further, after you have configured the device you can test whether its new behavior can really be observed: you connect other devices and ping this box, or try to send packets and see if they get to the correct location.
- Telnet to device
- Use SNMP to monitor/configure the device
- Prepare external entities on 2 or more sides of the device
- Send packets
- Check if the packets were received correctly
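The telnet and configuration-check steps above can be sketched with Net::Telnet; the address, prompt, credentials, and command below are made-up placeholders for your own device:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Net::Telnet;

# Hypothetical device address, prompt, and credentials - adjust to your setup.
my $t = Net::Telnet->new(
    Host    => '192.168.1.1',
    Timeout => 10,
    Prompt  => '/router[>#] ?$/',
);
$t->login('admin', 'secret');

# Issue a command and inspect the raw configuration it returns.
my @config = $t->cmd('show running-config');
if ( grep { /ip address 10\.0\.0\.1/ } @config ) {
    print "configuration value found\n";
}
$t->close;
```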
Configure devices on all sides of our box
- Traffic generators (e.g. SmartBits can be configured using Tcl)
- Web/ftp/... servers
- Use Telnet/SSH
Run tests
Still requires the same telnet connection to the various elements in your test setup.
Check results
- Parse log files
- Compute throughput
- Compare files copied
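The file comparison step can be done with the core File::Compare module. This self-contained sketch writes a reference file and a "copied" file and checks that they are identical:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Compare qw(compare);
use File::Temp qw(tempdir);

# Write a reference file and a "copied" file in a temporary directory.
my $dir = tempdir( CLEANUP => 1 );
my ($orig, $copy) = ("$dir/orig.txt", "$dir/copy.txt");
for my $file ($orig, $copy) {
    open my $fh, '>', $file or die "Cannot write $file: $!";
    print {$fh} "some payload\n";
    close $fh;
}

# compare() returns 0 when the files are equal, 1 when they differ.
print compare($orig, $copy) == 0 ? "files match\n" : "files differ\n";
```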
Expect.pm
- expect
As we saw earlier, Expect.pm together with some low-level networking protocol can be used to access any device that can be connected via some cable.
Or without a cable.
But you might not want to implement the whole protocol, or you might not have a command line tool that can access the device remotely. Or you don't want to use it, as you'd like to test that separately.
You can use the built-in telnet/ssh/ftp/tftp clients of your Unix/Linux machine.
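A sketch of driving the system's ssh client with Expect.pm; the host, credentials, prompt pattern, and command are made-up placeholders:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Expect;

# Spawn the local ssh client; host and credentials are placeholders.
my $exp = Expect->spawn('ssh', 'admin@192.168.1.1')
    or die "Cannot spawn ssh: $!";

$exp->expect(10,
    [ qr/password:/i => sub { my $self = shift;
                              $self->send("secret\n");
                              exp_continue; } ],
    [ qr/\$ ?$/      => sub { } ],   # assumed shell prompt
);

# Run a command on the remote device and wait for its output.
$exp->send("uptime\n");
$exp->expect(5, [ qr/load average/ ]);
$exp->soft_close;
```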
Networking
- Net::*
- Net::Telnet
- Net::FTP
- Net::SSH::Perl
- Net::SNMP
- SNMP::*
- TFTP
- IO::* low level I/O modules
Network devices
- Cisco::*
- Net::Telnet::Cisco
Devices connected to Serial or Parallel port
- Device::*
- Device::SerialPort
- Win32::SerialPort
- Device::Modem
- Device::Gsm
- Device::ParallelPort
X10 protocol
- ControlX10::CM11 (AC power line)
- ControlX10::CM17 Firecracker (RF)
Database Testing
Database testing
This can be considered part of testing any application, as some kind of a database is used by almost every application. In the simplest case the 'database' might be a flat file, but it can also be a CSV file, an XML file, or an RDBMS that you can access using SQL. In this case you would like to test what the consequences of certain operations of the application are on the database.
- Prepare a database
- Execute some code
- Check if the database was updated correctly
A couple of tools
- DBI - Database independent interface
- DBD::* Database driver(s)
- DBIx::Class
- Test::DatabaseRow - simple database tests
Test::More and DBI
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 4;
use DBI;
system "$^X examples/dbi/create_sample.pl";
END { unlink 'sample.db' }
my $email = 'foo@bar.com';
my $pw = 'secret';
my $dbh = DBI->connect("dbi:SQLite:dbname=sample.db");
my $sth = $dbh->prepare('SELECT * FROM people WHERE email=? AND pw=?');
$sth->execute($email, $pw);
my $h = $sth->fetchrow_hashref('NAME_lc');
ok($h, "row received");
is($h->{fname}, 'Foo', 'fname');
is($h->{lname}, 'Bar', 'lname');
$h = $sth->fetchrow_hashref('NAME_lc');
ok(!$h, "no more rows");
$dbh->disconnect;
Results:
1..4
ok 1 - row received
ok 2 - fname
ok 3 - lname
ok 4 - no more rows
Test::DatabaseRow
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;
use Test::DatabaseRow;
use DBI;
system "$^X examples/dbi/create_sample.pl";
END { unlink 'sample.db' }
my ($email, $pw) = ('foo@bar.com', 'secret');
my $dbh = DBI->connect("dbi:SQLite:dbname=sample.db");
local $Test::DatabaseRow::dbh = $dbh;
row_ok(
sql => ['SELECT * FROM people WHERE email=? AND pw=?', $email, $pw],
tests => [ fname => 'Foo', lname => 'Bar'],
label => 'Foo Bar',
);
$dbh->disconnect;
Results:
1..1
ok 1 - Foo Bar
Test::DatabaseRow fail
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;
use Test::DatabaseRow;
use DBI;
system "$^X examples/dbi/create_sample.pl";
END { unlink 'sample.db' }
my ($email, $pw) = ('foo@bar.com', 'secret');
my $dbh = DBI->connect("dbi:SQLite:dbname=sample.db");
local $Test::DatabaseRow::dbh = $dbh;
row_ok(
sql => ['SELECT * FROM people WHERE email=? AND pw=?', $email, $pw],
tests => [ fname => 'Foo', lname => 'Zorg'],
label => "Foo Zorg",
);
$dbh->disconnect;
The only difference in this test is that we are expecting 'Zorg' instead of 'Bar' as the lname.
Results:
1..1
not ok 1 - Foo Zorg
# Failed test 'Foo Zorg'
# at examples/db/dbrow_fail.t line 15.
# While checking column 'lname' on 1st row
# got: 'Bar'
# expected: 'Zorg'
# Looks like you failed 1 test of 1.
Test::DatabaseRow tests more than one row
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;
use Test::DatabaseRow;
use DBI;
system "$^X examples/dbi/create_sample.pl";
END { unlink 'sample.db' }
my $dbh = DBI->connect("dbi:SQLite:dbname=sample.db");
local $Test::DatabaseRow::dbh = $dbh;
row_ok(
sql => ['SELECT * FROM accounts'],
tests => { '>=' => { 'amount' => 0 }},
label => 'accounts',
results => 3,
check_all_rows => 1,
);
$dbh->disconnect;
Results:
1..1
ok 1 - accounts
Test::DatabaseRow tests more than one row - failure
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;
use Test::DatabaseRow;
use DBI;
system "$^X examples/dbi/create_sample.pl";
END { unlink 'sample.db' }
my $dbh = DBI->connect("dbi:SQLite:dbname=sample.db");
local $Test::DatabaseRow::dbh = $dbh;
row_ok(
sql => ['SELECT * FROM accounts'],
tests => { '>' => { 'amount' => 0 }},
label => 'accounts',
results => 3,
check_all_rows => 1,
);
$dbh->disconnect;
Results:
{% embed include file="src/examples/db/dbrow_more_fail.err" %}
Test::DatabaseRow without SQL
Of course, writing SQL is not fun, especially if you don't know SQL. You might prefer to write the logic in Perl and not care about the SQL stuff.
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;
use Test::DatabaseRow;
use DBI;
system "$^X examples/dbi/create_sample.pl";
END { unlink 'sample.db' }
my ($email, $pw) = ('foo@bar.com', 'secret');
my $dbh = DBI->connect("dbi:SQLite:dbname=sample.db");
local $Test::DatabaseRow::dbh = $dbh;
row_ok(
table => 'people',
where => { '=' => {
email => $email,
pw => $pw,
}
},
tests => { 'eq' => {
fname => 'Foo',
lname => 'Bar',
},
},
label => 'Foo Bar',
);
$dbh->disconnect;
Results:
1..1
ok 1 - Foo Bar
Microsoft Windows GUI Testing
GUI testing
- Launch the application
- Find various controls
- Type in values/check if they show up correctly
- Push buttons
- Check if results are correct
We cannot separate testing from development. We cannot say that after the developers have written something we can test everything as black box, no matter how they developed it.
Keyboard and mouse movements
- Send keystrokes
- Move the mouse and click on "controls".
Testing command line or networking applications on Microsoft Windows is the same as testing those applications on Unix or Linux. That does not need a separate chapter. Especially because Perl will run on MS Windows as well and nearly all the functionality we have on Unix we also have on MS Windows.
What is different is the GUI and the way applications interact with each other. For GUI testing we need to be able to send keystrokes as if someone was typing at the keyboard and we also need to be able to move the mouse and click on various objects. In Microsoft terminology the objects are called controls.
Luckily we have the Win32::GuiTest module at our disposal.
Win32::GuiTest
- Written and maintained by Ernesto Guisado, Jarek Jurasz, and others.
- Now maintained by Dmitry Karasik.
- Available since 1998.
Win32::GUIRobot
- Written by Dmitry Karasik.
- Available since 2007.
Calculator
calc.exe
perl calc.pl
Testing a calculator might be simple. This is the reason we use this example.
Launch the application
- start
- :ALL
- FindWindowLike
Launch the application and find the id of the new window
Make sure there is exactly one such application running
Run the code twice without closing the window and see the error message.
Use the start command in order to detach the calc.exe from our Perl
script and both will be able to run simultaneously.
# launch the application and find the id of the new window
# Make sure there is exactly one such application running
# Run the code twice without closing the window and see the error message
# using the current public release from CPAN / installed by ppm
# There is a much newer version on the yahoo group web site that can be also used
# and has a lot of other functions
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my @windows = FindWindowLike(undef, "Calculator");
print join ":", @windows, "\n";
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
Locale
- setlocale
- LC_CTYPE
The current locale is what tells the application in what language it should display its strings. We use this information in an external file to find out what strings to expect.
This script gets a parameter --app with a value 'calculator' or 'notepad' or 'notepad_menu' just
to be able to return the appropriate strings in the current language. For example:
perl locale.pl --app calculator
TODO: include: examples/Win32GUI/locale.pl
Close the application
- Alt-F4
- SendKeys
- %{F4}
Close the application by Alt-F4
% means Alt
{F4} means the F4 key.
# close the application by Alt-F4
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
print join ":", @windows, "\n";
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
SendKeys("%{F4}"); # Alt-F4 to exit
Using the Keyboard
- PushButton
# type in the calculation and see as the calculater reaches the result
#
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
print join ":", @windows, "\n";
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
PushButton '7';
sleep(1);
PushButton '\*';
sleep(1);
PushButton '5';
sleep(1);
PushButton '=';
sleep(2);
SendKeys("%{F4}"); # Alt-F4 to exit
Type in the calculation and see as the calculator reaches the result.
Nice. It would be even better if after we typed in the values and pressed =
we would be able to read the result automatically and then compare it to
our expected value.
Check the children
- GetChildWindows
- GetWindowText
# Check out the children, Try to find the window where the results should be
# It does not have a text and there are 3 such controls - without a name
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
printf "Window: %8s\n", @windows;
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
my @children = GetChildWindows($windows[0]);
foreach my $child (@children) {
printf "Child: %8s %s\n", $child, GetWindowText($child);
}
SendKeys("%{F4}"); # Alt-F4 to exit
Check out the children and try to find the window where the result should be. It does not have a text, and there are 3 such controls without a name.
Fetch content of a control
- WMGetText
# Catch the content of the first child.
# At this point we can only hope that this is the child that holds the result,
# as it does not have a title; maybe it has a type that we can check?
#
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
print join ":", @windows, "\n";
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
PushButton '7';
sleep(1);
PushButton '\*';
sleep(1);
PushButton '5';
sleep(1);
PushButton '=';
sleep(2);
my @children = GetChildWindows($windows[0]);
printf "Result: %s\n", WMGetText($children[0]);
SendKeys("%{F4}"); # Alt-F4 to exit
Catch the content of the first child. At this point we can only hope that this is the child that holds the result, as it does not have a title. Maybe it has a type that we can check? Or maybe it is time to ask the engineers to give a title to this control?
Window location
- GetWindowRect
# check the location of the windows.
# run this script several times and see the data
# As we can see all 4 values change so while the first
# two are x and y coordinates of the top left corner
# the other two values are not width and height.
# Looking at the data more closely we can see that
# the two other values are the coordinates of the
# bottom right corner
#
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
print join ":", @windows, "\n";
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
my ($left, $top, $right, $bottom) = GetWindowRect($windows[0]);
print join ":", GetWindowRect($windows[0]), "\n";
SendKeys("%{F4}"); # Alt-F4 to exit
As we would like to start using the mouse now, first we should find out where we are and where the window is.
Check the location of the window. Run this script several times and look at the data. All 4 values change, so while the first two are the x and y coordinates of the top left corner, the other two values are not width and height. Looking at the data more closely we can see that they are the coordinates of the bottom right corner.
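Since GetWindowRect returns two corners rather than a position plus a size, the width and height have to be derived. A plain-Perl sketch of the arithmetic (no Win32 module needed):

```perl
use strict;
use warnings;

# GetWindowRect returns (left, top, right, bottom), i.e. the two
# corners of the window, not a position plus a size.
# Width and height are simple differences:
sub rect_size {
    my ($left, $top, $right, $bottom) = @_;
    return ($right - $left, $bottom - $top);
}

# e.g. a window stretching from (100, 50) to (420, 290):
my ($width, $height) = rect_size(100, 50, 420, 290);
print "width: $width height: $height\n";    # width: 320 height: 240
```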
Move the mouse
- MouseMoveAbsPix
# Let's try to move the cursor to the top left corner
#
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
print join ":", @windows, "\n";
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
my ($left, $top, $right, $bottom) = GetWindowRect($windows[0]);
print join ":", GetWindowRect($windows[0]), "\n";
MouseMoveAbsPix($left,$top);
sleep(2);
SendKeys("%{F4}"); # Alt-F4 to exit
Let's try to move the mouse to the top left corner. Here we can see that the cursor ends up on the outer corner of the window frame, just outside the title bar.
Move the cursor around the edges
# And now we would like to see that we can draw the outline of the window
# with our mouse
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
print join ":", @windows, "\n";
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
my ($left, $top, $right, $bottom) = GetWindowRect($windows[0]);
print join ":", GetWindowRect($windows[0]), "\n";
foreach my $x ($left..$right) {
MouseMoveAbsPix($x,$top);
}
foreach my $y ($top..$bottom) {
MouseMoveAbsPix($right,$y);
}
foreach my $x (reverse ($left..$right)) {
MouseMoveAbsPix($x,$bottom);
}
foreach my $y (reverse ($top..$bottom)) {
MouseMoveAbsPix($left,$y);
}
SendKeys("%{F4}"); # Alt-F4 to exit
And now we would like to see that we can draw the outline of the window with our mouse.
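The four loops above can be folded into one function that returns the perimeter coordinates in clockwise order; a Win32 script would then feed each pair to MouseMoveAbsPix. The `perimeter` helper name is made up for this sketch:

```perl
use strict;
use warnings;

# Generate the perimeter points of a rectangle in clockwise order,
# mirroring the four MouseMoveAbsPix loops of the outline script.
sub perimeter {
    my ($left, $top, $right, $bottom) = @_;
    my @points;
    push @points, [$_, $top]    for $left .. $right;            # top edge
    push @points, [$right, $_]  for $top .. $bottom;            # right edge
    push @points, [$_, $bottom] for reverse $left .. $right;    # bottom edge
    push @points, [$left, $_]   for reverse $top .. $bottom;    # left edge
    return @points;
}

my @outline = perimeter(0, 0, 2, 2);
print scalar @outline, " points\n";    # 12 points
```

In the real script each corner is visited twice, which is harmless for moving the mouse; deduplicating would only matter if each point triggered a click.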
Close the window by a mouse click
- SendMouse
- {LeftClick}
# now that we know how to move the mouse we can even use it to click on the
# x in the top right corner exiting the application with a mouse click.
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
print join ":", @windows, "\n";
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
my ($left, $top, $right, $bottom) = GetWindowRect($windows[0]);
print join ":", GetWindowRect($windows[0]), "\n";
MouseMoveAbsPix($right-10,$top+10); # this probably depends on the resolution
sleep(2);
SendMouse("{LeftClick}");
Now that we know how to move the mouse, we can even use it to click on the x in the top right corner, exiting the application with a mouse click.
Calculate using the mouse
# Now we could calculate the location of the various buttons based
# on the size of the window and some guesswork, but
# we remember that most of the children of the main Calculator window
# had a name, so it will be quite easy to find them.
use strict;
use warnings;
use Win32::GuiTest qw(:ALL);
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
print join ":", @windows, "\n";
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
my ($left, $top, $right, $bottom) = GetWindowRect($windows[0]);
print join ":", GetWindowRect($windows[0]), "\n";
# find the appropriate child window and click on it
my @children = GetChildWindows($windows[0]);
foreach my $title (qw(7 * 5 =)) {
my ($c) = grep {$title eq GetWindowText($_)} @children;
my ($left, $top, $right, $bottom) = GetWindowRect($c);
MouseMoveAbsPix(($right+$left)/2,($top+$bottom)/2);
SendMouse("{LeftClick}");
sleep(1);
}
printf "Result: %s\n", WMGetText($children[0]);
MouseMoveAbsPix($right-10,$top+10); # this probably depends on the resolution
sleep(2);
SendMouse("{LeftClick}");
Now we could calculate the location of the various buttons based on the size of the window and some guesswork, but we remember that most of the children of the main Calculator window had a name, so it is quite easy to find them.
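Once the button's rectangle is known, the click goes to its middle. The midpoint arithmetic used in the script above can be isolated as a plain-Perl helper (the name `rect_center` is made up here):

```perl
use strict;
use warnings;

# Clicking a button means moving the mouse to the middle of its
# rectangle. int() keeps the coordinates whole when a sum is odd,
# since pixel coordinates must be integers.
sub rect_center {
    my ($left, $top, $right, $bottom) = @_;
    return (int(($left + $right) / 2), int(($top + $bottom) / 2));
}

# e.g. a button from (10, 20) to (30, 60):
my ($x, $y) = rect_center(10, 20, 30, 60);
print "$x,$y\n";    # 20,40
```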
The full calc.pl example
#!/usr/bin/perl -w
use strict;
use Win32::GuiTest qw(:ALL);
if (not @ARGV or
($ARGV[0] ne "keyboard" and $ARGV[0] ne "mouse")) {
die "Usage: $0 [keyboard|mouse]\n"
}
system "start calc.exe";
sleep(1);
my $calculator_title = `$^X locale.pl --app calculator`;
my @windows = FindWindowLike(undef, $calculator_title);
if (not @windows) {
die "Could not find Calculator\n";
}
if (@windows > 1) {
die "There might be more than one Calculator running\n";
}
if ($ARGV[0] eq "keyboard") {
PushButton '7';
sleep(1);
PushButton '\*';
sleep(1);
PushButton '5';
sleep(1);
PushButton '=';
sleep(2);
# Catch the content of the first child.
# At this point we can only hope that this is the child that holds the result,
# as it does not have a title; maybe it has a type that we can check?
my @children = GetChildWindows($windows[0]);
printf "Result: %s\n", WMGetText($children[0]);
SendKeys("%{F4}"); # Alt-F4 to exit
}
if ($ARGV[0] eq "mouse") {
my ($left, $top, $right, $bottom) = GetWindowRect($windows[0]);
# find the appropriate child window and click on it
my @children = GetChildWindows($windows[0]);
foreach my $title (qw(7 * 5 =)) {
my ($c) = grep {$title eq GetWindowText($_)} @children;
my ($left, $top, $right, $bottom) = GetWindowRect($c);
MouseMoveAbsPix(($right+$left)/2,($top+$bottom)/2);
SendMouse("{LeftClick}");
sleep(1);
}
printf "Result: %s\n", WMGetText($children[0]);
# this probably depends on the resolution
MouseMoveAbsPix($right-10,$top+10);
sleep(2);
SendMouse("{LeftClick}");
}
Installing Win32::GuiTest
The latest version is on CPAN.
Tools
- Windows spy (probably defunct)
- Winspector (?)
- Spy++ that comes with Visual Studio
- WinExplorer
- Win32::GuiTest
- Win32::GUIRobot
- spy.pl that comes with Win32::GuiTest, but is also included here
#!perl -w
use strict;
# Based on the spy--.pl within the distribution
# Parse a subtree of the whole windowing system and print as much
# information as possible about each window and each object.
# This software is in a very early stage. Its options and output
# format will change a lot.
# Your input is welcome !
# Written by Gabor Szabo <gabor@szabgab.com>
my $VERSION = "0.02";
use Getopt::Long;
use Win32::GuiTest qw(:ALL);
my %opts;
GetOptions(\%opts, "help", "title=s", "all", "id=i", "class=s");
usage() if $opts{help} or not %opts;
my %seen;
my $desktop = GetDesktopWindow();
my $root = 0;
my $start;
$start = 0 if $opts{all};
$start = $opts{id} if $opts{id};
if ($opts{title} or $opts{class}) {
my @windows = FindWindowLike(0, $opts{title}, $opts{class});
#my @windows = FindWindowLike(0, $opts{title}) if $opts{title};
#@windows = FindWindowLike(0, '', $opts{class}) if $opts{class};
if (@windows > 1) {
print "There is more than one window that fits:\n";
foreach my $w (@windows) {
printf "%s | %s | %s\n", $w, GetClassName($w), GetWindowText($w);
}
exit;
}
die "Did not find such a window." if not @windows;
$start = $windows[0];
}
usage() if not defined $start;
my $format = "%-10s %-10s, '%-25s', %-10s, Rect:%-3s,%-3s,%-3s,%-3s '%s'\n";
printf $format,
"Depth",
"WindowID",
"ClassName",
"ParentID",
"WindowRect","","","",
"WindowText";
parse_tree($start);
sub GetImmediateChildWindows {
my $WinID = shift;
grep {GetParent($_) eq $WinID} GetChildWindows $WinID;
}
sub parse_tree {
my $w = shift;
if ($seen{$w}++) {
print "loop $w\n";
return;
}
prt($w);
#foreach my $child (GetChildWindows($w)) {
# parse_tree($child);
#}
foreach my $child (GetImmediateChildWindows($w)) {
print "------------------\n" if $w == 0;
parse_tree($child);
}
}
# GetChildDepth is broken so here is another version; this might work better.
# returns the real distance between two windows
# returns 0 if the same window was provided twice
# returns -1 if one of the values is not a valid window
# returns -2 if the given "ancestor" is not really an ancestor
# of the given "descendant"
sub MyGetChildDepth {
my ($ancestor, $descendant) = @_;
return -1
if $ancestor
and (not IsWindow($ancestor) or not IsWindow($descendant));
return 0 if $ancestor == $descendant;
my $depth = 0;
while ($descendant = GetParent($descendant)) {
$depth++;
return $depth if $ancestor == $descendant;
}
# reached the desktop (handle 0) without meeting $ancestor
return $depth + 1 if $ancestor == 0;
return -2;
}
sub prt {
my $w = shift;
my $depth = MyGetChildDepth($root, $w);
printf $format,
(0 <= $depth ? "+" x $depth : $depth),
$w,
($w ? GetClassName($w) : ""),
($w ? GetParent($w) : "n/a"),
($w ? GetWindowRect($w) : ("n/a", "", "", "")),
($w ? GetWindowText($w) : "");
}
sub usage {
print "Version: v$VERSION\n";
print "Usage:\n";
print " $0 --help\n";
print " $0 --all\n";
print " $0 --title TITLE\n";
print "\n";
print "As the output is quite verbose, probably you'll want to redirect \n";
print "the output to a file: $0 options > out.txt\n";
print "\n";
exit;
}
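The depth computation of MyGetChildDepth can be exercised without live window handles. This sketch replaces GetParent with a made-up `%parent` table (window 30 inside 20, inside 10, inside the desktop, handle 0) and reimplements the same walk:

```perl
use strict;
use warnings;

# Mock parent table standing in for GetParent: maps a window handle
# to its parent handle, where 0 is the desktop.
my %parent = (10 => 0, 20 => 10, 30 => 20);

# Same contract as MyGetChildDepth: 0 for identical windows,
# the distance if $ancestor is on the parent chain,
# -2 if it is not an ancestor at all.
sub depth_between {
    my ($ancestor, $descendant) = @_;
    return 0 if $ancestor == $descendant;
    my $depth = 0;
    while (defined($descendant = $parent{$descendant}) && $descendant != 0) {
        $depth++;
        return $depth if $ancestor == $descendant;
    }
    $depth++;    # one more step up to the desktop (handle 0)
    return $ancestor == 0 ? $depth : -2;
}

print depth_between(10, 30), "\n";    # 2
print depth_between(0, 30), "\n";     # 3
print depth_between(20, 10), "\n";    # -2
```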
Appendixes
Test related CPAN Modules
Task::Test is a collection of testing related modules with some explanation on each one of them.
- Test - this is obsolete
- Test::Tutorial
- Test::Simple
- Test::More
- Test::Harness
- Test::DatabaseRow
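As a small taste of the modules listed above, a minimal Test::More script looks like this (the `multiply` function is a made-up example):

```perl
use strict;
use warnings;
use Test::More;

# A minimal Test::More script: is() compares a computed value to an
# expected one, ok() checks a boolean condition, and done_testing()
# prints the test plan at the end.
sub multiply { return $_[0] * $_[1] }

is( multiply(7, 5), 35, '7 * 5 is 35' );
ok( multiply(2, 2) > 0, 'product of positives is positive' );
done_testing();
```

Running it produces TAP (Test Anything Protocol) output that Test::Harness, or the prove command, can collect and summarize.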
Generic modules
- File::Find
- File::Find::Rule
Web testing
- Test::HTML::Lint
- HTML::Lint
- LWP
- LWP::Simple
- WWW::Mechanize
- HTTP::Recorder
- HTTP::Proxy
GUI testing
- Win32::GuiTest
- Win32::GUIRobot
- X11::GUITest
Other sources of information
- Slides about automated testing by Andy Lester.
- An overview of the test modules.
- Perl Quality Assurance Projects.
- Whirlwind Tour of Test::Class by Ovid.
- Testing presentation via the Perl TV.
- Open Source Testing tools.
- Perl Test Refcard by Ian Langworth.
Test Realistically
- Use the application according to its documentation.
- Use it according to common sense.
Test Unrealistically
- Test for edge cases:
  - negative values, 0, 1, -1, very big numbers.
  - characters instead of numbers, floating point instead of whole numbers.
  - empty string, very long string, strange characters.
- Test randomly.
- Test using invalid input.
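The edge cases above translate directly into test cases. A sketch with Test::More, using a made-up validator `is_positive_int` as the code under test:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical code under test: accept only positive whole numbers.
# The regex check runs first, so non-numeric input never reaches
# the numeric comparison.
sub is_positive_int {
    my ($value) = @_;
    return defined $value && $value =~ /\A[0-9]+\z/ && $value > 0 ? 1 : 0;
}

# The cases come straight from the edge-case list above.
ok(  is_positive_int(1),                '1 is accepted' );
ok( !is_positive_int(0),                '0 is rejected' );
ok( !is_positive_int(-1),               '-1 is rejected' );
ok(  is_positive_int('99999999999999'), 'very big numbers are accepted' );
ok( !is_positive_int('abc'),            'characters are rejected' );
ok( !is_positive_int(3.14),             'floating point is rejected' );
ok( !is_positive_int(''),               'empty string is rejected' );
done_testing();
```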
File System testing
When testing a file system one wants to make sure that the system operates reliably under various conditions. In no case should it lose data, not even in extreme cases such as many small files, or a few large files that fill an entire disk.
Applications should be prepared for testing. For example, in a Windows GUI application every control should have a unique and persistent name so we can use that name to find the handle. Testing that every control has a name is already one element in our test suite.
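Such a check can be sketched in plain Perl. In a real script the handle-to-text map would come from GetChildWindows() and GetWindowText(); here the data and the `unnamed_controls` helper are made up for illustration:

```perl
use strict;
use warnings;

# Return the handles of all controls whose text is empty,
# i.e. the controls a naming test should complain about.
sub unnamed_controls {
    my (%text_of) = @_;
    return grep { $text_of{$_} eq '' } sort keys %text_of;
}

# Mock data: handle => window text, as GetWindowText would report it.
my %controls = (
    101 => '7',
    102 => '*',
    103 => '',     # the result field has no name - the test catches this
);
my @unnamed = unnamed_controls(%controls);
print "unnamed: @unnamed\n";    # unnamed: 103
```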