
Old School vs/Incl. New School

Updated: Jun 5, 2022

“QoT Solutions’ Senior Consultant, Glen Dean, shares his recent experiences establishing and managing test automation solutions in the contemporary era”


In the ever-changing landscape of information technology application development, many processes and methods have evolved and, depending on their relative success (or not), have either been ingrained into modern-day methodologies or have fallen by the wayside.


www.qotsolutions.co.za
Glen Dean | Senior Consultant at QoT Solutions

We frequently hear terms like ‘old school’ and ‘new school’, not only in the I.T. industry but in many areas of life. The old school invariably gives way to new school thinking, and in fact the new school would most often not exist without the pioneering efforts of the old school.

Do we, though, ever encounter instances where the old and the new co-exist in a constructive and beneficial way? Where the tried and proven has been retained and used alongside newer, more efficient ways of doing whatever it is we apply our time to? I’m sure that if we looked long and hard enough, we would find many instances of such harmonious co-existence, but in the I.T. industry the old is most often ditched for the new; at least in my eighteen years of testing experience this has been the case.


One such instance of this harmonious symbiosis of old and new, one I have recently experienced first-hand, is the hybrid software test-automation approach my teams and I have taken: using good old front-end Selenium (GUI) scripting alongside scripts that test via the application’s API.


Why on earth would anyone want to hang onto GUI scripting when an API test run can be achieved in a tenth of the time? Well, for several reasons which I will elaborate on further along in this paper.


The general requirement and purpose of an automated test pack has always been to support and improve regression testing turnaround times, as well as to improve test accuracy. This is all well and good when execution follows the traditional waterfall model (yes, waterfall still exists on various projects), but when more agile methodologies are being followed, with practices such as TDD (Test Driven Development) and its offshoot BDD (Behavior Driven Development), test execution times often come under heavy scrutiny, and sometimes criticism, due to the bulky and voluminous nature of this beast. The more complex and critical the application, the larger the regression pack can be. This is where automation most often steps in to cut back on execution time.

While traditional GUI testing tools such as Selenium revolutionized the way we test, the need for speed eventually caught up with the testing world. This is where API testing tools can step in and take care of the generally repetitive donkey work, so to speak. Testing a mature application via the API involves testing between the business layer and the database layer of the application, and focuses on the data, responses and error handling of the application.
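To make this concrete, here is a minimal sketch of what an API-level check can cover: response codes, returned data, and error handling. The endpoint, deal IDs and field names are hypothetical placeholders rather than anything from a real application.

```java
// A minimal sketch of an API-level check: status codes, returned data and error handling.
// The base URL, paths and fields are hypothetical placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DealApiSmokeCheck {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final String BASE_URL = "https://test-env.example.com/api"; // hypothetical

    public static void main(String[] args) throws Exception {
        // Happy path: fetch an existing deal and confirm the payload carries the expected data.
        HttpResponse<String> ok = CLIENT.send(
                HttpRequest.newBuilder(URI.create(BASE_URL + "/deals/12345")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        check(ok.statusCode() == 200, "expected 200 for a known deal");
        check(ok.body().contains("\"status\""), "expected a status field in the response");

        // Error handling: an unknown deal should fail cleanly, not with a server error.
        HttpResponse<String> missing = CLIENT.send(
                HttpRequest.newBuilder(URI.create(BASE_URL + "/deals/does-not-exist")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        check(missing.statusCode() == 404, "expected 404 for an unknown deal");
    }

    private static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }
}
```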


What does this mean for GUI testing, though? Where traditional methodologies are still being followed (or even hybrids of new and old), we cannot exclude front-end tests, so how do we improve on functionally regression-testing the user interface? Three streams of testing generally happen on this layer: the initial functional test of whatever change has been made, the regression checks and, of course, the User Acceptance Test most often still required for production implementation sign-off.

Two approaches could be to continue with manual front-end verification, or to update and execute the existing scripts where they exist. Newer functionality and newly introduced screens will generally not have ready-to-edit scripts in place and will add to the burden of manual verification.


Another approach could be to add newer testing tools into the mix, introducing an API testing tool either to take over the regression testing as a whole, or to complement what your team already has and has painstakingly been building up and maintaining over time. The latter, of course, is the approach my team and I have taken during our latest initiative.


Our journey began as follows:

We had at our disposal a newly created GUI regression test pack, built using Selenium and a risk-based approach that focused on the most commonly used modules of the application under test, the ones that took the greatest amount of time to execute manually. Once this foundation was built, we expanded further by adding to the base code the various scenarios that could be tested within these modules. Fortunately, these scenarios are largely dictated by the data sets used, which can be stored in purpose-built databases, so this largely eliminated the need to write individual lines of code per scenario.
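To illustrate the data-driven idea, here is a minimal sketch of one scripted Selenium flow being reused across multiple scenarios. The URL, locators and the loadScenarios() helper are hypothetical stand-ins; in practice the scenarios would come from the purpose-built databases mentioned above.

```java
// A minimal sketch of a data-driven Selenium flow: one block of script code
// services every scenario; only the data changes. All names are hypothetical.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.List;
import java.util.Map;

public class DataDrivenDealCapture {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            for (Map<String, String> scenario : loadScenarios()) {
                // The same capture flow runs for each data row.
                driver.get("https://test-env.example.com/deals/new"); // hypothetical URL
                driver.findElement(By.id("dealType")).sendKeys(scenario.get("dealType"));
                driver.findElement(By.id("amount")).sendKeys(scenario.get("amount"));
                driver.findElement(By.id("currency")).sendKeys(scenario.get("currency"));
                driver.findElement(By.id("submit")).click();

                String status = driver.findElement(By.id("dealStatus")).getText();
                if (!scenario.get("expectedStatus").equals(status)) {
                    throw new AssertionError("Scenario " + scenario + " expected "
                            + scenario.get("expectedStatus") + " but saw " + status);
                }
            }
        } finally {
            driver.quit();
        }
    }

    // Stand-in for the data source; in practice this would query the scenario database.
    private static List<Map<String, String>> loadScenarios() {
        return List.of(
                Map.of("dealType", "Import LC", "amount", "100000", "currency", "USD",
                        "expectedStatus", "Issued"),
                Map.of("dealType", "Export LC", "amount", "250000", "currency", "EUR",
                        "expectedStatus", "Issued"));
    }
}
```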

Something was missing, though: even though this ever-growing GUI regression pack has been extremely beneficial to the team over the last couple of months (by cutting down on the need for extensive manual test runs), it was still taking time to execute. This is by no means a poor reflection on the scripts themselves, but rather on the complex nature of the application itself. If you know anything about global trade, you will know that trade applications are not simple one- or two-page affairs. There are complex business rules as well as volumes of data computed through the various trade modules, not to mention the importance of calculating and passing correct financial data to back-end systems for further processing.


A conversation with a colleague sent me in search of an API testing tool that could better our testing performance, but I was a bit hesitant to just throw out the months of hard work we had put into creating what we had. This led me to the idea of maintaining the Selenium-based test pack and using it to assist the team with SIT (system integration testing) of newly introduced changes. This not only saves them test execution time, but also makes gathering test evidence easier, using screen grabs taken via the script instead of manually. As the tests are data driven, the team can customize their tests by adding the required data to a data sheet prior to a run.
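As an illustration of the evidence-gathering point, here is a minimal sketch of a screenshot helper built on Selenium's TakesScreenshot interface; the evidence folder and naming convention are hypothetical.

```java
// A minimal sketch of capturing test evidence via scripted screenshots instead of
// manual screen grabs. File locations and naming are hypothetical.
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class EvidenceCapture {

    // Saves a screenshot of the current browser state under an evidence folder,
    // named after the test step so it can be attached to the test record later.
    public static Path capture(WebDriver driver, String stepName) throws Exception {
        File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        Path target = Path.of("evidence", stepName + "_" + System.currentTimeMillis() + ".png");
        Files.createDirectories(target.getParent());
        return Files.copy(shot.toPath(), target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```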

As we already had a comfortably sized, scripted regression test pack, we started our API investigations using the REST Assured tool. We decided on this tool because of its ability to integrate with Selenium (and our existing framework), and after a few initial challenges we successfully ran our first API test script. The performance of the test was phenomenal, just what we had been looking for. We are currently in the exciting build stage of writing new test scripts using this tool, and there are plans to integrate with Selenium to complement and speed up what we already have. The API scripts will service the bulk of the regression, while the GUI scripts (along with the API) will assist functional testers with the integration checks.
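By way of illustration, here is a minimal sketch of what a REST Assured check might look like alongside the Selenium code base. The endpoint, payload and field names are hypothetical; the point is simply that one Java framework can drive both the GUI and API tests.

```java
// A minimal sketch of a REST Assured check living alongside the Selenium scripts.
// Endpoint, payload and field names are hypothetical.
import io.restassured.RestAssured;
import io.restassured.response.Response;
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.notNullValue;

public class DealApiTest {

    public static void main(String[] args) {
        RestAssured.baseURI = "https://test-env.example.com/api"; // hypothetical

        // Create a deal via the API and verify the response data in one fluent chain.
        Response created = given()
                .contentType("application/json")
                .body("{\"dealType\":\"Import LC\",\"amount\":100000,\"currency\":\"USD\"}")
            .when()
                .post("/deals")
            .then()
                .statusCode(201)
                .body("dealId", notNullValue())
                .body("status", equalTo("Issued"))
            .extract().response();

        System.out.println("Created deal " + created.path("dealId"));
    }
}
```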


The reason for this approach lies in the complexities within a trade product such as a Letter of Credit, for example: there can be many steps within a single deal. Once a deal is issued/created it can be amended in numerous ways (each way a test condition in itself), and it can be cancelled, accepted, partially paid, or paid in full. The beauty of our approach is that it gives us the flexibility to mix the different step types across the two testing tools. Let’s assume a fix was made to the PAY module which now requires testing, but before a pay step can be tested a deal needs to be created and possibly also accepted. We can create and accept the deal via an API script in a tenth of the time it would take a user or a GUI script to capture it, and the deal can then be handed over to the user to focus on the actual change under test.
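To illustrate that hybrid flow, here is a minimal sketch in which the API creates and accepts a deal, after which only the PAY step is exercised through the browser. Endpoints, locators and field names are hypothetical.

```java
// A minimal sketch of the hybrid flow: fast setup via the API, then GUI focus on the
// change under test (the PAY step). All endpoints and locators are hypothetical.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static io.restassured.RestAssured.given;

public class HybridPayModuleCheck {

    private static final String API = "https://test-env.example.com/api"; // hypothetical

    public static void main(String[] args) {
        // 1. Fast setup via the API: create and accept the deal in seconds rather than
        //    capturing every screen through the GUI.
        String dealId = given()
                .baseUri(API)
                .contentType("application/json")
                .body("{\"dealType\":\"Import LC\",\"amount\":100000,\"currency\":\"USD\"}")
                .post("/deals")
                .then().statusCode(201)
                .extract().path("dealId");

        given().baseUri(API)
                .post("/deals/" + dealId + "/accept")
                .then().statusCode(200);

        // 2. Only the PAY step, the actual change under test, is exercised in the browser.
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://test-env.example.com/deals/" + dealId + "/pay");
            driver.findElement(By.id("payAmount")).sendKeys("100000");
            driver.findElement(By.id("confirmPay")).click();
        } finally {
            driver.quit();
        }
    }
}
```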

Future plans also include automated verification of our output data on the various downstream applications, eliminating the need for manual checking. While a manual verification is always welcome at the end, it becomes time consuming when tests of this nature must be repeated over and over until the end result passes. This will of course form part of our inter-application test vision and strategy, which will evolve over time.
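As a rough illustration of that downstream idea, here is a minimal sketch that pulls a posting from a hypothetical downstream system's API and compares the financial figures, instead of checking them by eye.

```java
// A minimal sketch of automated downstream verification. The downstream API, its paths
// and the assumption that it returns the amount as a string are all hypothetical.
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class DownstreamVerification {

    // Verifies that the figures passed to the downstream system match the source deal.
    public static void verifyPosting(String dealId, String expectedAmount, String expectedCurrency) {
        given()
                .baseUri("https://downstream-test.example.com/api") // hypothetical
            .when()
                .get("/postings/" + dealId)
            .then()
                .statusCode(200)
                .body("amount", equalTo(expectedAmount))
                .body("currency", equalTo(expectedCurrency));
    }
}
```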

It might seem like a lot of overhead and unnecessary maintenance to run with two testing tools, but when you are testing every single module (and the various steps within those modules) of an application of this complexity, with a full regression test added on top, the picture can become extremely daunting very quickly. Offhand, I recall approximately 1,800 variations that can be tested across the entire application, as well as a number of instances of the application. So the ability to safely and accurately cut back on testing time not only adds immense value to the project team as a whole, but is a requirement if the development team is to build maturity and agility.


So, in summary: evolution is a requirement if you are part of an IT team. However, it’s important not to abandon everything we historically had a reason for. If it worked in the past, we need to assess whether it is still important now and whether it should be integrated into our newer processes and frameworks. If I, as a Test Manager, can add value for my client without sacrificing quality, and they are happy with my solution, new school or old school… or both, then I feel I have achieved some measure of success in my service to them.
