By Mario Lopez, VP Professional Services System Integration,
Optiva • March 24, 2022
Mario has over 20 years of experience in the telecom enterprise industry. He has held management positions at Siemens Mobile Networks, Nokia Siemens Networks, Redknee, and Optiva. Currently, Mario is the VP of Professional Services for System Integration. He drives Optiva’s journey to introduce test automation to Optiva projects.
I was recently preparing for a new delivery and aligning on timeline expectations with a customer, a Middle East telecom operator. Its team leader for IN, charging, and policy control and I had an interesting discussion about our test methodologies and respective efforts. The conversation went like this:
Mario: Are you using any test automation tools?
Operator: Well, yes and no. Our marketing department is setting such short go-live deadlines for new campaigns that we cannot afford to create automated test cases. So, we do manual tests only. We convert the manual test cases into automated test cases only after launch. Still, in reality, the next campaign starts before we can complete the conversion of all manual test cases. So our automated regression test suite becomes more and more incomplete.
Mario: I assume that you document all manual test cases in a test manual?
Operator: Yes, of course, this serves as input for the later conversion into automated test cases.
Mario: So, you double the effort for each test case? First you document it for manual execution, and later you convert the description into the language of your test automation tool? And a manual test case is described as a sequence of actions, with the expected result listed at the end? Is my assumption correct?
Mario: Do you think that describing a test case in a natural-language syntax like Gherkin (the format executed by tools such as Cucumber) takes more time than designing and documenting a test case in a document?
Operator: I do not know Gherkin or Cucumber. We are using a Java-based test tool developed by our own development department, and my QA team does not have enough programming skills.
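For readers who, like the operator, have not seen Gherkin: a test case reads much like the manual test description discussed above, as a plain-English sequence of actions followed by an expected result. The feature, steps, and figures below are invented for illustration and are not from any Optiva project:

```gherkin
# Hypothetical example — feature and step wording are invented for illustration.
Feature: Prepaid voice charging

  Scenario: Balance is decremented after a metered call
    Given a prepaid subscriber with a balance of 10.00 USD
    And a voice tariff of 0.10 USD per minute
    When the subscriber completes a 5-minute voice call
    Then the subscriber balance should be 9.50 USD
```

Because the steps are ordinary sentences, the same text can serve as both the manual test documentation and the automated test script, which is exactly the duplicated effort discussed above.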
Mario: Understood. Another question: how often, on average, do you run each test case?
Operator: Hmmm … at least twice. Complex test cases even three times or more.
Mario: How much time do you spend executing and validating such a manual test case?
Operator: Difficult to answer, but between two minutes and, for the complex ones, up to 15 minutes. That is the case when a GUI has to be populated and the result checked. I have to look up several log files that are usually stored in different locations, and then merge the information in them according to the timestamps to reconstruct the correct sequence of messages.
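The manual merge step the operator describes is easy to automate. A minimal sketch, assuming each log line starts with an ISO-8601 timestamp (the log contents and component names below are invented for illustration):

```python
# Sketch of the log-merge step: interleave lines from several log files by
# timestamp so the message sequence can be validated in one pass.
# Assumes every line begins with an ISO-8601 timestamp; log contents are invented.
import heapq
from datetime import datetime

def parse_ts(line: str) -> datetime:
    """Extract the leading ISO-8601 timestamp from a log line."""
    return datetime.fromisoformat(line.split(" ", 1)[0])

def merge_logs(*logs):
    """Merge already timestamp-sorted log streams into one ordered sequence."""
    return list(heapq.merge(*logs, key=parse_ts))

# Example: a charging log and a policy-control log from different locations.
charging = [
    "2022-03-01T10:00:01 CHARGING session start",
    "2022-03-01T10:00:05 CHARGING quota granted",
]
policy = [
    "2022-03-01T10:00:03 POLICY rule installed",
]

merged = merge_logs(charging, policy)
```

`heapq.merge` streams the inputs without loading whole files into memory, so the same approach scales to large log files fetched from multiple hosts.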
Mario: Next question. If your supplier provides you with a defect correction, how and what do you test?
Operator: We create a couple of test cases to verify that the error is corrected. We cannot afford to run an extensive test campaign because of time constraints, and our automated regression suite is incomplete.
Mario: So, you might not detect the side effects of the correction?
Operator: Yes, this might happen, unfortunately.
Mario: Do you know whether your testing covers all the possible tariff combinations of your subscribers?
Operator: No, and these can be millions, so it is impossible to test them all!
Mario: Last question. Do you have an inventory of all test cases, so that you can reuse existing ones when creating new test cases?
Operator: My team remembers most of the test cases, but we could overlook existing ones and create duplicates. We have more than 1,000 test cases in our inventory, so nobody remembers them all in detail.
It’s not the first time I have had this conversation with a customer, and it’s probably not the last. When Optiva started its journey to migrate to the cloud and productize its solutions, one of the first areas we identified as needing a different, automated approach was the testing domain. So, we invested in dedicated test framework capabilities, and the results below show what they have enabled.
For one of our first public cloud implementations on Google Cloud Platform (GCP), our customer in Europe had frequent campaign releases: more than nine rollouts in the last 12 months, 360 regression test cases, and seven million possible tariff combinations and tariff scenarios derived from call detail records (CDRs), resulting in test coverage of more than 99%.
Each automated rollout, including all 360 online test cases, took six hours to complete. This was followed by a subset of 500,000 to one million offline tariff test cases. The full regression test took one to two days instead of weeks. Since then, we have not had a single revenue- or customer-experience-impacting issue.
For another customer, on private cloud, we developed 177 end-to-end test cases plus an additional 65 variants. Using the Optiva Testing Framework inventory list, we identified 140 test cases similar to ones already developed and released; these could be reused and adapted to the customer’s specifications. The reuse saved approximately 600 hours, plus the time to release the test suite. As a result, time to market and the payback period on the initial launch and all subsequent launches were reduced by more than 50%.
Thanks to our growing partnership and collaboration with Google Cloud, our test automation framework capabilities allow operators to fast-track the launch of their new services. With Optiva solutions on public or private cloud, operators can launch in 90 days. Further, they have the assurance that the quality of their services will remain high while new offerings are rolled out in hours.
Are you interested in discovering more about our delivery approach? Request a presentation.
Have feedback or questions for the author? Contact Mario Lopez, VP Global Function Lead Solution Integration, Optiva.
Discover more! Read Redefining Telecom BSS Automation, Rollout, Delivery — Join the Revolution.