Kerstin Lehmann, Partner

Testing - a quality assurance or acceptance process? Or how to improve project quality through testing.

After having covered multiple topics over the last months, let's now come to one of the famous reasons why projects fail or are delayed: testing. I am sure everyone working in IT has a clear opinion on why testing is such an issue and what efficient testing should look like. So please bear in mind that I am only giving you my point of view here, which I hope might be of interest to you. I am very much aware that there are many other ways to look at this.

By definition, testing is an investigation conducted to provide stakeholders with information about the quality of a software product or service. In that sense, testing is the quality assurance process of every IT project.

Additionally, testing is also an independent view of the software to allow the business to appreciate and understand the risks of software implementation. As a result, testing is also an acceptance process. By successfully testing a solution, the business representatives agree that the solution matches their requirements.  

This raises the question of whether there is a conflict between testing as an acceptance process and testing as a quality assurance process. Ideally, it can be both, but only if it is well planned, prepared and executed. In this ideal case, both targets can and should be achieved: first an indication of the software quality, followed by the acceptance.

I believe that today testing mostly serves the purpose of business acceptance and is used less as an indicator of software quality. This is a key issue.

I would like to share my observations from recent years concerning testing in large-scale, international IT projects:

  • In the classic custom-build IT implementation following a "waterfall" methodology, testing has been organized according to the V-Model: specific test phases are defined based on the different design and implementation phases of the project. This approach has many issues, for example: (a) test phases are not followed, resulting in too many errors surfacing in later test phases; (b) test cases are not written and execution is done freestyle, resulting in unclear test results; or (c) the test scope is not defined and the end of test execution is unknown, resulting in endless test phases and potentially delays to the project. The idea behind correct testing is that each test phase can be measured with KPIs, and only when the entry criteria are met can the next test phase begin. The user acceptance test would then be the last phase. In this case, the KPIs give an indication of the software quality during the test phases, and the acceptance process starts once the KPIs show that the quality is acceptable.
  • Implementation projects for software packages often bring different challenges, as they usually follow a delta implementation approach: there are no designs explaining the overall functionality of the software, only designs created for the new implementation. The IT test teams need deep knowledge of the implemented software plus good business knowledge to prepare a meaningful test. Unfortunately, I have seen teams with neither, whose work resulted in useless tests. Cases like these usually end with high costs for new tests with experienced people, with business users having to do a meaningful test themselves, or, last but not least, with the project simply going into production and fixing the issues once everything is in place, with lots of side effects. Here the problems start with the question of how to measure software quality for a package implementation when the first overall test is the user acceptance test, in which all business processes are tested.
  • Currently, test automation is key in an Agile implementation methodology. Unfortunately, a high percentage of test automation is no guarantee of good quality: if the test cases are of poor quality, the results will be too. Additionally, my observation is that with an Agile implementation, "old rules" are ignored, including following a test methodology with distinct test phases, creating test cases, and producing a test execution plan. As a result, I have rarely seen meaningful KPIs measuring the software quality of Agile implementation projects. The only conclusion in this case is that testing mainly serves the purpose of business acceptance but does not give an independent assessment of software quality.
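The KPI gating described above can be sketched in a few lines of code. This is a minimal illustration, not a recommendation of specific thresholds: the pass-rate and open-defect limits below are invented for the example, and a real project would define its own entry criteria per test phase.

```python
# Minimal sketch of KPI-based entry criteria between test phases.
# All threshold values are illustrative assumptions, not recommendations.

from dataclasses import dataclass

@dataclass
class PhaseResult:
    executed: int        # test cases executed in this phase
    passed: int          # test cases passed
    open_critical: int   # open defects with critical severity
    open_major: int      # open defects with major severity

def may_enter_next_phase(result: PhaseResult,
                         min_pass_rate: float = 0.95,
                         max_critical: int = 0,
                         max_major: int = 3) -> bool:
    """Entry criteria for the next test phase: a minimum pass rate
    and upper bounds on open defects by severity."""
    if result.executed == 0:
        return False  # nothing executed, no evidence of quality
    pass_rate = result.passed / result.executed
    return (pass_rate >= min_pass_rate
            and result.open_critical <= max_critical
            and result.open_major <= max_major)

# Example: system test results evaluated before starting user acceptance.
system_test = PhaseResult(executed=200, passed=192, open_critical=0, open_major=2)
print(may_enter_next_phase(system_test))  # 192/200 = 0.96 pass rate -> True
```

The point of such a gate is exactly the one made above: the KPIs indicate software quality during the test phases, and acceptance only starts once they show the quality is acceptable.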

In summary, I have seen many issues in testing due to a lack of methodology, a lack of scope definition, a lack of meaningful test cases, and a lack of business knowledge and skills. However, the real issue for me is when testing is not a quality assurance process providing clear indications of the software quality. In that case a project might go live because the business accepts the software, but no one knows what to expect in production.

What can a project manager do to handle these challenges?

  • The project manager needs to clarify the purpose of testing at the beginning of the project. If testing is to serve as an independent assessment of the software, the project manager needs to clarify how this can be ensured and which KPIs IT needs to provide as evidence of its quality.
  • The quality of each test is based on the quality of the test preparation. You need to define the right test cases, with the right coverage, the right level of detail, and the right data. You then have a baseline for a successful test execution.
  • Business knowledge is key to ensure high quality in test preparation – writing a good test case requires deep business knowledge and skills, documented in designs or acquired through very knowledgeable test team members. Ideally, test cases are written in parallel with the requirements gathering and design creation.
  • I am a big fan of the V-Model where different test levels are required and follow the different development steps: from business requirements, to functional and technical designs, to unit, integration, functional and acceptance tests. I believe that the V-Model is valid for Agile implementation projects as well, but is often not respected due to time pressure.   
  • And most importantly, do not forget the lessons learned of the past. They are still valid. Just because everybody is “Agile” does not mean testing is not required anymore or should be done differently.
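The preparation points above, in particular "the right test cases with the right coverage", can be made measurable with a simple traceability check between requirements and test cases. This is a minimal sketch; the requirement IDs, test case IDs, and their mapping are invented purely for illustration:

```python
# Minimal sketch of a requirements-to-test-case traceability check.
# All IDs and mappings are invented for illustration.

requirements = {"REQ-01", "REQ-02", "REQ-03", "REQ-04"}

# Each prepared test case lists the requirements it verifies.
test_cases = {
    "TC-001": {"REQ-01", "REQ-02"},
    "TC-002": {"REQ-02"},
    "TC-003": {"REQ-04"},
}

covered = set().union(*test_cases.values())
uncovered = requirements - covered
coverage = len(covered & requirements) / len(requirements)

print(f"coverage: {coverage:.0%}")                 # coverage: 75%
print(f"missing test cases for: {sorted(uncovered)}")  # ['REQ-03']
```

Writing test cases in parallel with requirements gathering, as suggested above, is what makes such a check possible early: gaps like the uncovered requirement show up while the design work is still ongoing, not during the user acceptance test.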

 

A contribution by
Kerstin Lehmann