Monday, 12 December 2016

Test log lines

Sometimes the only thing we are interested in testing is whether a log line gets logged correctly. This is not the ideal kind of test, but sometimes it is required... for example when log lines are monitored by a monitoring agent and we want to make sure the correct format gets logged.

The trick is to mock the log appender and then capture the logs on that appender in a thread-safe way. Once these log lines are captured you can verify them to your heart's content...

A fully working example of this is given at: https://github.com/njaiswal/logLineTester

The following snippet shows how to achieve this:

// Fully working test at: https://github.com/njaiswal/logLineTester/blob/master/src/test/java/com/nj/Utils/UtilsTest.java
@Test
public void testUtilsLog() throws InterruptedException {
    Logger utilsLogger = (Logger) LoggerFactory.getLogger("com.nj.utils");

    final Appender mockAppender = mock(Appender.class);
    when(mockAppender.getName()).thenReturn("MOCK");
    utilsLogger.addAppender(mockAppender);

    final List<String> capturedLogs = Collections.synchronizedList(new ArrayList<>());
    final CountDownLatch latch = new CountDownLatch(3);

    // Capture logs
    doAnswer((invocation) -> {
        LoggingEvent loggingEvent = invocation.getArgumentAt(0, LoggingEvent.class);
        capturedLogs.add(loggingEvent.getFormattedMessage());
        latch.countDown();
        return null;
    }).when(mockAppender).doAppend(any());

    // Call method which will do logging to be tested
    Application.main(null);

    // Wait 5 seconds for latch to be true. That means 3 log lines were logged
    assertThat(latch.await(5L, TimeUnit.SECONDS), is(true));

    // Now assert the captured logs
    assertThat(capturedLogs, hasItem(containsString("One")));
    assertThat(capturedLogs, hasItem(containsString("Two")));
    assertThat(capturedLogs, hasItem(containsString("Three")));
}

Thursday, 11 February 2016

Run background server during integration tests

Most of the time during integration testing we have to run the built Java code in server mode and run client JUnit/TestNG tests against it. Usually this also has to work on the CI (Jenkins) server. The following post goes through some techniques to achieve this.

Assuming the project is built with Maven, run the integration test server (your application) in the pre-integration-test phase using the maven-antrun-plugin.

<plugin>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.8</version>
    <executions>
        <execution>
            <id>start-test-server</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <target>
                    <!-- To debug this forked process, set spawn=false; only then are the stdout and stderr of the process visible -->
                    <!-- Tests will not run if spawn=false, but it helps debug test server runs -->
                    <java spawn="true" fork="true" classname="your.example.com.MainClass">
                        <jvmarg value="-Dproperties.file=file:/path/to/app.properties"/>
                        <classpath refid="maven.test.classpath"/> <!-- This basically means we will run the current build -->
                        <env key="LD_LIBRARY_PATH" value="${yourNativeLibs}:${env:LD_LIBRARY_PATH}"/>
                    </java>
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>


The above Maven plugin will run your main class in a forked JVM process in the background, and you can then run your JUnit/TestNG integration tests against this server. Bind the maven-failsafe-plugin so that these tests run in the integration-test phase:

<project>
    [...]
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>2.19.1</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>integration-test</goal>
                            <goal>verify</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
    [...]
</project>
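The failsafe plugin picks up test classes that follow its naming conventions (for example names ending in IT) and runs them in the integration-test phase, i.e. after the background server has been started. A minimal sketch of such a client-side test is below; the class name, port and /health path are assumptions for illustration, not part of the original setup.

// Hypothetical integration test that the failsafe plugin will run against the
// background server started in the pre-integration-test phase.
import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class HealthCheckIT {

    @Test
    public void serverRespondsToHealthCheck() throws Exception {
        // Port and path are assumptions; point this at whatever your.example.com.MainClass actually serves
        URL url = new URL("http://localhost:8080/health");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        assertEquals(200, connection.getResponseCode());
    }
}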


One of the issues you will face is stopping the server once the integration tests are complete.
For this you can expose a method call to exit the test server if possible, or have the test server exit after a time interval. However, this needs to be implemented on the server side (a sketch of the time-interval approach is shown below). If you only run integration tests on Jenkins, Jenkins will make sure to kill all the pids that were created during an integration test run and keep the integration test environment clean.
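A minimal sketch of the time-interval approach, assuming you can add a small helper to the server code and call it from the main class; the property name test.server.ttl.seconds is made up for illustration:

// Hypothetical helper inside the test server: exits the JVM after a configurable
// time-to-live so the pre-integration-test server does not outlive the build.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class TestServerTtl {

    public static void installIfConfigured() {
        // The property name is an assumption; normal (non-test) runs that do not set it are unaffected
        String ttl = System.getProperty("test.server.ttl.seconds");
        if (ttl == null) {
            return;
        }
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.schedule(() -> System.exit(0), Long.parseLong(ttl), TimeUnit.SECONDS);
    }
}

Call TestServerTtl.installIfConfigured() from your main class and pass the property via an extra <jvmarg> in the antrun configuration above, so that only test runs get the self-destruct behaviour.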

Happy testing...

Friday, 15 August 2014

Skip TestNG test based on a condition

Sometimes you might want to skip tests based on a condition. The following snippet shows a simple way to do it by throwing a TestNG SkipException. This integrates seamlessly with test reports, and skipped tests are counted and displayed correctly in them.

//....code snippet.....
// Filter out bad tests
for (String c : conditions) {
    if (c.matches("some condition 1") || c.matches("some other condition")) {
        throw new org.testng.SkipException(c);
    }
}
//....code snippet.....

Custom logger for TestNG

Another small post, on a TestNG custom logger. I have a TestNG test suite that runs for hours doing thorough integration tests. At the end of the run I can look at the TestNG reports and see what passed and what failed, but during the run I get no feedback on whether all tests are failing due to some configuration issue or whether it is a normal test run. This happens because it is basically only one test, with a DataProvider supplying different test parameters. If you have separate tests, then you do get a message for each test.

Anyhow, I wanted a custom TestNG logger that logs a message I understand at the end of each test, and here is a way to do so.


Extend the TestListenerAdapter class and override a few methods:
package com.clearqa.utils;

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class TestNgCustomLogger extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult tr) {
        logToStdOut(tr, "FAILED");
    }

    @Override
    public void onTestSkipped(ITestResult tr) {
        logToStdOut(tr, "SKIPPED");
    }

    @Override
    public void onTestSuccess(ITestResult tr) {
        logToStdOut(tr, "PASS");
    }

    private void logToStdOut(ITestResult tr, String result) {
        Object[] parameters = tr.getParameters();
        System.out.println("Test with parameters " + result);
        for (Object o : parameters) {
            System.out.println("\t -" + o.toString());
        }
    }
}


Add the custom listener to your testng.xml config:
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name="REST api Integration Tests" verbose="1" data-provider-thread-count="10">
    <listeners>
        <listener class-name="com.clearqa.utils.TestNgCustomLogger" />
    </listeners>
    <test name="Rest API - Json schema validation Tests" >
        <classes>
            <class name="com.clearqa.restapi.test.RestApiITCase" />
        </classes>
    </test>
</suite>


and voilà, I get the much-needed indication on stdout:

$ mvn test-compile failsafe:integration-test
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building XXXX Webapp 1.0.5-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
....blah blah blah....
....blah blah blah....
[INFO] Failsafe report directory: XXXX\target\failsafe-reports
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running TestSuite
Test with parameters PASS
-XXXX
Test with parameters FAILED
-YYYY
...
...


Wednesday, 13 August 2014

Increase JBehave story timeout

A small post on how to increase the default story timeout in JBehave. This might be required if you have a long-running story, as I do, with many steps/examples etc., and you see an error message similar to the following:

Story my.story duration of 301 seconds has exceeded timeout of 300 seconds

or

STORY CANCELLED (DURATION 301 s)

The solution is to increase the story timeout in your Maven config (the default is 300 s). Note this will only take effect when you run integration tests via the mvn command, not when you run the stories via Eclipse/IntelliJ, which bypasses your pom.xml and other Maven config (see the sketch after the pom snippet below).

Full pom.xml with timeout setting can be found here: https://github.com/jbehave/jbehave-core/blob/master/examples/threads/pom.xml


Relevant portion below:
<build>
    <plugins>
        <plugin>
            <groupId>org.jbehave</groupId>
            <artifactId>jbehave-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>embeddable-stories</id>
                    <phase>integration-test</phase>
                    <configuration>
                        <includes>
                            <include>${embeddables}</include>
                        </includes>
                        <excludes />
                        <skip>${skip}</skip>
                        <batch>false</batch>
                        <threads>${threads}</threads>
                        <storyTimeoutInSecs>600</storyTimeoutInSecs> <!-- Overrides the default story timeout of 300 secs -->
                        <generateViewAfterStories>true</generateViewAfterStories>
                        <ignoreFailureInStories>${ignore.failure.in.stories}</ignoreFailureInStories>
                        <ignoreFailureInView>true</ignoreFailureInView>
                        <metaFilters>
                            <metaFilter>${meta.filter}</metaFilter>
                        </metaFilters>
                    </configuration>
                    <goals>
                        <goal>run-stories-as-embeddables</goal>
                    </goals>
                </execution>
            </executions>
            <dependencies>
                <!-- Only needed if groovy-based MetaFilters are used -->
                <dependency>
                    <groupId>org.codehaus.groovy</groupId>
                    <artifactId>groovy-all</artifactId>
                    <version>1.8.4</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>
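If you do run stories from the IDE through a JBehave Embedder, roughly the same setting can be applied programmatically. A minimal sketch, assuming the EmbedderControls API and a placeholder story path:

import java.util.Arrays;

import org.jbehave.core.embedder.Embedder;

public class RunLongStory {

    public static void main(String[] args) {
        Embedder embedder = new Embedder();
        // Raise the default 300 s story timeout to 600 s, mirroring storyTimeoutInSecs above
        embedder.embedderControls().useStoryTimeoutInSecs(600L);
        // Placeholder story path; resolved via the embedder's configured story loader
        embedder.runStoriesAsPaths(Arrays.asList("my.story"));
    }
}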

Thursday, 10 April 2014

restIT: Testing Framework for REST based web services

I would like to call this testing framework restIT (REST Integration Test).

This test framework gives a simple way to verify that a REST web service responds with the right JSON. All you need to provide is a couple of YAML config files declaring the REST URLs, plus JSON schema files for the expected JSON responses.

Adding new tests should not take more than 10 minutes of your time. The framework is flexible enough for you to add more advanced features such as authentication headers and complex JSON response validation that cannot be done via JSON schema files.

Project is hosted on github at: https://github.com/njaiswal/restITframework

Steps to make use of this framework.

  • Git clone the project.
git clone git@github.com:njaiswal/restITframework.git restIT

  • Import the project into Eclipse or your favourite IDE as an existing Maven project.
  • Replace the src/test/resources/testcases.yml file with your own.
    • Make sure you change baseurl to the base URL of your REST service.
    • Add the services you want to test, with or without parameters; this depends on your REST service. See the examples in the file.
---
baseurl: 'http://yourWebSite/AppName'
services:
- /customers
- /customer/One
- /customer/Two

  • For each of the test cases you added in the above step, you need to add a schema map. Replace the file src/main/resources/schemaMap.yml with your own.
---
schemas:
- id: Customers List
  file: "/schemas/customers_list_schema.json"
  regex: "^.*/customers$"
- id: Any particular customer
  file: "/schemas/customer_schema.json"
  regex: "^.*/customer/.+$"

Each schema map in this yml file has 3 fields:
    • id: This is just a name. It appears in logs/reports.
    • file: This is the path of the JSON schema file.
    • regex: If the URL matches this regex, the response of that URL will be validated against the above JSON schema file (see the sketch after this list).
  • In the src/test/resources/schemas directory, add all the JSON schema files defined in the schemaMap.yml file.
    • See the examples in the git project.
    • Find more details on how to write schema files at: http://json-schema.org/
    • You can add as much or as little JSON response validation as you want, depending on what json-schema provides out of the box. Think of it like XML schema for XML responses.
    • If you want to do more complicated response validation, extend the response validation code, or if you have specific requests let me know and I might accommodate them in my free time.
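To make the regex field concrete: the idea is to take the URL under test, find the first schema map entry whose regex matches it, and validate the JSON response against that entry's schema file. Below is a rough, illustrative sketch of that lookup in Java; it is not the framework's actual code, and the class/method names are made up.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Illustrative only: maps a URL to a schema file using regex entries like those
// in schemaMap.yml. The real framework loads these entries from the YAML file.
public class SchemaLookup {

    private final Map<Pattern, String> schemaFilesByRegex = new LinkedHashMap<>();

    public SchemaLookup() {
        schemaFilesByRegex.put(Pattern.compile("^.*/customers$"), "/schemas/customers_list_schema.json");
        schemaFilesByRegex.put(Pattern.compile("^.*/customer/.+$"), "/schemas/customer_schema.json");
    }

    // Returns the schema file for the first regex that matches the URL, or null if none match.
    public String schemaFileFor(String url) {
        for (Map.Entry<Pattern, String> entry : schemaFilesByRegex.entrySet()) {
            if (entry.getKey().matcher(url).matches()) {
                return entry.getValue();
            }
        }
        return null;
    }
}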

Steps to run the test framework and view reports.

Easy: 
mvn test

View reports at: target/surefire-reports/index.html

Make sure to check the 'Reporter Output' tab for any JSON schema validation error reports. It shows all JSON schema validation errors pretty-printed.

Enjoy. Let me know if you have any questions/suggestions.


Friday, 4 April 2014

Small but useful utilities

Many times when working inside a corporate environment you cannot use online utilities like URL decoders, base64 decoders, etc., where you copy-paste the encoded string into a webpage and voilà, it gets decoded in the browser. Though these websites say all decoding is done on the client side, which is true most of the time, it is always smart not to copy-paste company data onto random websites.

I have a set of small utilities which I have shared here; they might prove useful to you.


  • Decode an encoded URL string. Useful for me to read encoded URLs and make sense of them at times.
    $ cat url_decoder
    #!/usr/bin/perl
    use strict;
    use warnings;
    use URI::Encode qw(uri_encode uri_decode);
    while (1) {
        my $url = <STDIN>;
        print uri_decode($url) . "\n\n";
    }

    ./url_decoder
    https://www.google.co.uk/search?q=encoded+urls&oq=encoded+urls&aqs=chrome.0.69i57j0l3j69i62l2.2861j0&sourceid=chrome&ie=UTF-8#q=GBP+%C2%A3+%2F+USD+%24
    https://www.google.co.uk/search?q=encoded+urls&oq=encoded+urls&aqs=chrome.0.69i57j0l3j69i62l2.2861j0&sourceid=chrome&ie=UTF-8#q=GBP+£+/+USD+$
  • Base64 decoder. Useful when you have quick decoding requirements.
    #!/usr/bin/perl
    use strict;
    use warnings;
    use MIME::Base64 qw/decode_base64/;
    while (1) {
        my $text = <STDIN>;
        print decode_base64($text) . "\n\n";
    }

    $ ./base64_decoder
    cGxhaW5UZXh0wqM=
    plainText£
  • json to yml: A useful tool to quickly convert JSON to YAML.
    #!/usr/bin/perl
    use strict;
    use warnings;
    use JSON qw/decode_json/;
    use Data::Dumper;
    use YAML;
    my $json_file = $ARGV[0];
    my $json_text;
    open(IN, "<", $json_file) or die("Cannot open file: $json_file");
    {
        local $/;    # slurp mode: read the whole file in one go
        $json_text = <IN>;
    }
    close(IN);
    my $hash = decode_json($json_text);
    print YAML::Dump $hash;

These are very simple but useful utilities; obviously you will need the required Perl modules installed and your setup ready to run them.