NAME
    Test::Usage - A different approach to testing: selective, quieter,
    colorful.

SYNOPSIS
    Let's say we are building module Foo.pm. To exercise it, we write a
    usage examples module, Foo_T.pm, which may eventually look something
    like this:

      package Foo_T;
      use Test::Usage;
      use strict;
      use warnings;
      use Foo;

      example('e1', sub { ...  ok(...); ...  die "Uh oh"; ... });
      example('a1', sub { ...  ok(...) or diag(...); ... });
      example('a2', sub { ...  ok(...); ... });
      example('a3', sub {
        my $f = Foo->new();
        my $got_foo = $f->foo();
        my $exp_foo = 'FOO';
        ok(
          $got_foo eq $exp_foo,
          "Expecting foo() to return '$exp_foo'.",
          "But got '$got_foo'."
        );
      });

    Here are a few ways to test its examples:

        # Run example 'a3' only.
      perl -MFoo_T -e 'test(a => "a3")'

        # Run all examples whose label matches glob 'a*': a1, a2, a3.
      perl -MFoo_T -e 'test(a => "a*")'

        # Run all examples found in the test module.
      perl -MFoo_T -e test

        # Run example 'a3', reporting successes also, but without color.
      perl -MFoo_T -e 'test(a => "a3", v => 2, c => 0)'

        # Run and summarize all examples in all "*_T.pm" files found under
        # current directory.
      perl -MTest::Usage -e files

DESCRIPTION
    This module approaches testing differently from the standard Perl way.
    It is selective because it makes it possible to run only selected tests
    from a test file that may contain many more. It is quieter because by
    default only failing tests are reported. It is colorful because, if
    possible, results are displayed using color (with Term::ANSIColor or
    Win32::Console).

    I usually have a test file named *_T.pm for each ordinary *.pm file in
    my projects. For example, the test file for Foo.pm would be named
    Foo_T.pm. I place Foo_T.pm in the same directory as Foo.pm. Foo_T.pm has
    a conventional structure, like the one shown in the SYNOPSIS. Basically,
    it just names the module, loads Test::Usage and defines a bunch of
    examples. Each example(), identified by its label, adds to the tests
    that the module can run, upon request.

    The module exports some of its methods to the calling package and some
    to main, to make them easier to use, usually from the shell. When the
    developer wishes to run a test, he invokes it as shown in the synopsis
    (perhaps with a coating of shell syntactic sugar).

METHODS AND FUNCTIONS
    All methods apply to a single instance of Test::Usage, named $t,
    initialized by import().

    The module defines the following methods and functions.

  import ($pkg)
    Sets $t to an empty hash ref, blessed into the Test::Usage class.

    Resets $t's counters to 0:

      Number of 'ok' that failed.
      Number of 'ok' that succeeded.
      Number of examples that died.
      Number of examples that had warnings.

    Sets $t's default label to '-'.

    Resets $t's options to default values. Here are the as-shipped values:

      For the test() method:

        a => '*'             # Accept tests whose label matches this glob.
        e => '__*'           # Exclude tests whose label matches this glob.
        s => 1               # Print a summary line if true.
        v => REPORT_FAILURES # Verbosity level.
        fail => 0            # Fail tests systematically if true.

      For the files() method:

        d => '.'      # Directory in which to look for files.
        g => '*_T.pm' # Test files whose name matches this glob.
        r => 1        # Look for files recursively through dir if true.
        i => ''       # Add to Perl @INC path.
        t => {}       # Option values to pass to test() for each file.

      For both test() and files():

        c => 1  # Use color if possible.

    Exports these methods to the calling package:

      t
      example
      ok
      ok_labeled
      diag

    Exports these methods to main:

      t
      test
      files
      labels

  $pkg::t (), ::t ()
    Both return $t, effectively giving access to all Test::Usage methods.
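    For instance, the instance's methods (documented below) can be reached
    through t(). A minimal sketch, assuming only Test::Usage itself:

    ```perl
    use Test::Usage;

      # t() returns the single Test::Usage instance, so any of its
      # methods can be called through it.
    my $opts = t()->options();   # Ref to the current option settings.
    t()->reset_options();        # Back to the as-shipped defaults.
    ```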

  $pkg::example ($label, $sub_ref)
    Add a test example labeled $label and implemented by $sub_ref to the
    tests that can be run by $t->test().

    $label is an arbitrary string that is used to identify the example. The
    label will be displayed when reporting test results. Labels can be
    chosen to make it easy to run selected subsets; for example, you may
    want to label a bunch of examples that you usually run together with a
    common prefix.

    The $sub_ref is a reference to the subroutine implementing the test. It
    often calls a number of ok(), wrapped in setup/tear-down scaffolding to
    express the intended usage.

    Here's a full example:

      example('t1', sub {
        my $f = Foo->new();
        my $exp = 1;
        my $got = $f->get_val();
        ok(
          $got == $exp,
          "Expected get_val() to return $exp for a new Foo object.",
          "But got $got.",
        );
      });

  $pkg::ok ($bool, $exp_msg, $got_msg)
    $bool is an expression that will be evaluated as true or false, and thus
    determine the return value of the method. Also, if $bool is true, $t
    will increment the number of successful tests it has seen, else the
    number of failed tests.

    Note that $bool will be evaluated in list context, so if you want to use
    a bind operator here, make sure you wrap it with 'scalar'. For example:

      ok(scalar($x =~ /abc/),
        "Expected \$x to match /abc/.",
        "But its value was '$x'."
      );

    In that example, if 'scalar' is not used, the bind operator is evaluated
    in list context, and if there is no match, an empty list is returned,
    which results in ok() receiving only the last two arguments.

    If the test() flags call for the result of the ok() to be printed, the
    output will look like one of the following:

      ok a1
        # Expected $x to match /abc/.

      not ok a1
        # Expected $x to match /abc/.
        # But its value was 'def'.

    ok() is most useful when it describes the expected result and helps
    debugging when it fails. The $exp_msg should tell the user what the test
    expects, and the $got_msg, what it got instead. Formulate them in terms
    that help development, maintenance, and debugging. Compare the following
    examples:

    Useful
          ok(! defined($got = foo()),
            'foo() should return undef if no arguments are given.',
            "But returned '$got' instead."
          );

        Whether it succeeds or fails, the following messages are helpful:

          ok a1
            # foo() should return undef if no arguments are given.

          not ok a1
            # foo() should return undef if no arguments are given.
            # But returned '' instead.

    Useless
          ok(! defined(my $got = foo()),
            'Result is undefined.',
            'Didn\'t work.'
          );

        Whether it succeeds or fails, we don't really know what exactly
        went right or wrong:

          ok a1
            # Result is undefined.

          not ok a1
            # Result is undefined.
            # Didn't work.

  $pkg::ok_labeled ($sub_label, $bool, $exp_msg, $got_msg)
    Same as ok(), except that ".$sub_label" is appended to the label in the
    printed output. This is useful for examples containing many ok() calls
    whose labels we want to distinguish.
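    For example, the following sketch (assuming the Foo class from the
    SYNOPSIS, plus a hypothetical bar() method) reports its results under
    the labels 'a4.foo' and 'a4.bar':

    ```perl
    package Foo_T;
    use Test::Usage;
    use Foo;    # The class under test, as in the SYNOPSIS.

    example('a4', sub {
      my $f = Foo->new();
        # Reported as 'ok a4.foo' or 'not ok a4.foo'.
      ok_labeled('foo', $f->foo() eq 'FOO',
        "Expecting foo() to return 'FOO'.");
        # Reported as 'ok a4.bar' or 'not ok a4.bar'.
      ok_labeled('bar', $f->bar() eq 'BAR',
        "Expecting bar() to return 'BAR'.");
    });
    ```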

  $pkg::diag (@msgs)
    Prefixes each line of each string in @msgs with ' # ' and displays them
    using the 'diag' color tag. Returns true (contrary to Test::Builder,
    Test::More, et al.).
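    Because diag() returns true, it can be chained after a failing ok() to
    print extra context without affecting the example's control flow. A
    sketch, in the Foo_T setting of the SYNOPSIS; process() and the input
    value are hypothetical:

    ```perl
    package Foo_T;
    use Test::Usage;

    example('a5', sub {
      my $input = 'some input';     # Hypothetical test data.
      my $got   = process($input);  # Hypothetical function under test.
        # On failure, also show what was fed in.
      ok(defined $got, 'Expecting process() to return a defined value.')
        or diag("Input was: '$input'");
    });
    ```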

  ::labels ()
    Returns a ref to an array holding the labels of all the examples, in the
    order they were defined.
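    For instance, to list every example a test module defines without
    running any of them (a sketch; the labels shown are made up):

    ```perl
    package Foo_T;
    use Test::Usage;

    example('a1', sub { ok(1, 'Always passes.') });
    example('a2', sub { ok(1, 'Always passes.') });

      # labels() is exported to main; print one label per line,
      # in definition order.
    print "$_\n" for @{main::labels()};
    ```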

  ::test (%options)
    Clears counters and runs all the examples in the module, subject to the
    constraints indicated by %options. If %options is empty or some of its
    keys are missing, default values apply.

    Returns a list containing the following values:

      Name of the module being tested.
      Number of seconds it took to run the examples.
      Number of 'ok' that succeeded.
      Number of 'ok' that failed.
      Number of examples that died.
      Number of examples that had warnings.

    Here is the meaning and default value of the elements of %options:

    a => '*' # Accept.
        The value is a glob. Tests whose label matches this glob will be
        run. All tests are run when the value is the default.

    e => '__*' # Exclude.
        The value is a glob. Tests whose label matches this glob will not be
        run. I use this when I want to keep a test in the test module but
        don't want to run it for some reason. With the default value,
        prepending the string '__' to a test label effectively deactivates
        it. When you are ready to run those tests, remove the '__' prefix
        from the label, or pass the 'e => ""' argument.

    v => 1 # Verbosity.
        Determines the verbosity of the testing mechanism:

          0: Display no individual results.
          1: Display individual results for failing tests only.
          2: Display individual results for all tests.

    s => 1 # Summary.
        If true, two lines like the following will wrap the test output:

          # module_name
            ...
            # +3 -1 -d +w (00h:00m:02s) module_name

        This means that of the ok*() calls that were made, 3 succeeded and
        1 failed, that no dies but some warnings occurred, and that the run
        took about 2 seconds.

    fail => 0 # Fail.
        If true, any ok*() invoked will act as though it failed. Combined
        with a verbosity of 1 or 2 (to display failures), this lets you see
        all the messages that would be printed if the failures were real.
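    As an example of the 'e' option's default behavior, a test can be
    parked by prefixing its label with '__' (a sketch; slow_check() is a
    hypothetical function):

    ```perl
    package Foo_T;
    use Test::Usage;

      # Skipped under the default e => '__*' exclusion glob; run it
      # anyway with: perl -MFoo_T -e 'test(e => "")'
    example('__a9', sub {
      ok(slow_check(), 'Expecting the slow check to succeed.');
    });
    ```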

  ::files (%options)
    Finds all the files matching the criteria defined in %options (for
    example, the directory to look in) and, for each file, calls perl in a
    subshell to run something like this:

        perl -M$file -e 'test()'

    The results of each run are collected, examined and tallied, and a
    summary line and a '1..n' line are displayed, something like this:

      # Total +7 -5 0d 1w (00h:00m:00s) in 4 modules
      1..12

    Returns a list of:

      Number of seconds it took to run the examples.
      Number of 'ok' that succeeded.
      Number of 'ok' that failed.
      Number of examples that died.
      Number of examples that had warnings.
      Number of modules that were run.

    All values in %options are optional. Their meaning and default value are
    as follows:

    d* => '.' # Directories to search.
        All options starting with the letter 'd' designate directories in
        which to look for files matching the glob specified by option 'g'.
        These directories should be in perl's current module search path;
        otherwise, add them to the path using the 'i' option.

    g => '*_T.pm' # Glob for files to test.
        Only files matching this glob will be tested.

    r => 1 # Search for files recursively.
        If set to true, files matching the 'g' glob will be searched for
        recursively in all subdirs starting from (and including) those
        specified by the 'd' options. FIXME: Currently, it's always true.

    i* => '' # Directories to add to perl @INC paths.
        All options starting with the letter 'i' designate directories that
        you want to add to the @INC path for finding modules. They will be
        added in the order of the sorted 'i*' keys.

    t => {} # test() options.
        These options will be passed to the test() method, invoked for each
        tested file.

    follow => 1 # Follow symlinks when looking for files.
        This is hard-coded for now; it cannot be changed. FIXME

  $t->reset_options ()
    Resets all options to their default values.

  $t->options ()
    Returns a ref to the hash representing current option settings.
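    Since options() returns a reference to the live hash, settings can be
    adjusted in place between runs (a minimal sketch):

    ```perl
    use Test::Usage;

    t()->options()->{v} = 2;   # Also report successes from now on.
    t()->options()->{c} = 0;   # No color.
    t()->reset_options();      # Restore the as-shipped defaults.
    ```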

Using Test::Usage in a standard Perl module distribution.
    If you want to distribute your module in the standard Perl fashion, and
    want to make your Test::Usage tests the ones to be run, you need to make
    Test::Usage a prerequisite in your Makefile.PL, and use a t/test.t file
    whose contents are like this:

      use strict;
      use FindBin qw($Bin);
      use lib "$Bin/lib";
      use Test::Usage;

      files(
        c => 0,
        d => "$Bin/../lib",
        i => "$Bin/../lib",
        t => {
          c => 0,
          v => 2,
        },
      );

    (Note that this will be evaluated in the 'main' package, where files()
    is visible.)

BUGS
    No checks are made for duplicated labels.

    We want our test module to have as little influence as possible on what
    is being tested, but this can be problematic. For example, suppose the
    module being tested needs package Foo, but forgot to 'use' it. If our
    testing module uses Foo, the test will not reveal its absence from the
    main program.

    If a module we are testing has an END block, it won't be invoked in time
    for testing.

AUTHOR
    Luc St-Louis, <lucs@cpan.org>

COPYRIGHT AND LICENSE
    Copyright (C) 2005 by Luc St-Louis

    This library is free software; you can redistribute it and/or modify it
    under the same terms as Perl itself, either Perl version 5.8.3 or, at
    your option, any later version of Perl 5 you may have available.