By Beatriz Tamashiro, Program Director, Americas
Optiva • November 30, 2022
Beatriz began her telecom career at Siemens, delivering charging and billing projects in South America. She also worked for a major CSP in South America in a B2B management role, hands-on with customers. She is a champion of technology evolution and service delivery transformation who thrives on a collaborative, global, multifaceted team. Beatriz holds a Master of Science in communications technology, a PMP certification, a finance diploma, and a bachelor’s degree in systems engineering.
After 16 years in telecommunications, I have seen how cloud technology has profoundly changed critical projects. In the past, a project's critical path was defined by the time required to deploy the infrastructure: performing the site survey, preparing the bill of materials and logistics invoices for the migration, mounting power supplies, cabling the racks, and even installing anti-earthquake mounting systems.
I recall weeks of project delivery delays in 2010 for a customer in South America when Eyjafjallajökull erupted and airports closed across Europe. Today, thanks to cloud computing, not even a global pandemic stops us from delivering infrastructure, and the dynamics of project delivery have been completely revolutionized.
Today, when delivering a cloud-based project, most of the focus is on integration, testing, and keeping an eye on optimizing computing costs. Cloud computing is a great way for telecom operators to save on infrastructure costs, reducing their Capex in exchange for an increase in Opex. As the hyperscalers promote, “With a public cloud provider, you pay for what you use,” scaling up or down as needed without purchasing equipment or investing in expensive IT staff.
However, there are underlying costs that can eat into your budget if not analyzed in time. These costs can relate to the service tiers of storage disks, read/write speed, the number of database query requests, predefined virtual machine types, and operating systems that are free versus licensed. All of these variables can add up to a big difference over time.
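To illustrate how these variables compound, here is a minimal sketch of a per-environment cost model. Every rate below is a hypothetical placeholder, not any provider's actual pricing; the point is only that each line item multiplies out over time.

```python
# Illustrative sketch: a small cost model for the "hidden" variables named
# above. All prices are made-up placeholders, not real provider rates.

def monthly_cost(storage_gb, storage_rate, db_requests_m, request_rate,
                 vm_hours, vm_rate, os_license=0.0):
    """Sum the recurring monthly cost components for one environment."""
    return (storage_gb * storage_rate          # storage tier (HDD vs. SSD)
            + db_requests_m * request_rate     # per million database queries
            + vm_hours * vm_rate               # predefined VM instance hours
            + os_license)                      # licensed vs. free OS

# The same workload on two hypothetical tiers:
basic = monthly_cost(500, 0.04, 10, 0.20, 730, 0.10)
premium = monthly_cost(500, 0.17, 10, 0.20, 730, 0.19, os_license=50)
print(f"basic:   ${basic:,.2f}/month")
print(f"premium: ${premium:,.2f}/month")
```

Even with identical workloads, the tier choices alone roughly triple the monthly figure in this toy example, which is why reviewing them early matters.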
Pre-cloud, when starting a project, the design and building of solution specifications were done in parallel with deploying the hardware, which took months. Standing up the production and lab environments and the disaster recovery systems together helped save on costs and on the painful, time-consuming administrative process.
With public cloud implementations, and in some cases private, too, all of these environments and systems are ready to use. So time is dramatically reduced compared to traditional implementation. Additionally, if it is a greenfield project, requiring a full production system, you would first focus on the definition and configuration of the use cases around the new business model and integration to the core network.
In this scenario, the project would require only a low-cost test environment. After IoT and business configurations were complete, the order would be issued to deploy, integrate, and configure the production environment based on the pre-tested configuration of the lab system. Upon completion, the next steps would be performance testing and customer acceptance.
Right-sizing your project and aligning resources with its progress directly impacts the size and number of instances required to run your application, which can translate into cost savings. For example, during a greenfield deployment for a North American MVNO customer, the application was initially deployed to the cloud inefficiently sized, so the customer could not take full advantage of the resources available on each instance, resulting in underutilization. By right-sizing it, we significantly increased efficiency and improved overall performance.
We looked at the service tier of the filestore, which served a lab environment with minimal project team interaction, so a basic HDD was enough to meet the customer’s needs. Had it instead been a training platform accessed by the customer care team, or a proof-of-concept demonstration, an SSD with greater performance would have been the better investment. In alignment with the customer’s needs, we therefore adjusted the minimum VM and instance configuration during the integration phase. As a result, we maximized the use of all available resources, increased efficiency, and improved overall performance.
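An underutilization check of the kind described can be sketched as follows. The instance names, sizes, and the 40% threshold are illustrative assumptions for the sketch, not our actual tooling or the customer's real figures.

```python
# Hypothetical right-sizing check: compare observed peak usage against each
# instance's allocated resources and flag underutilized instances, i.e. the
# situation described above. Sizes and threshold are illustrative.

def utilization(peak_used, allocated):
    return peak_used / allocated

def right_size(instances, threshold=0.4):
    """Flag instances whose peak CPU *and* memory use both sit below threshold."""
    flagged = []
    for name, spec in instances.items():
        cpu_u = utilization(spec["peak_vcpu"], spec["vcpu"])
        mem_u = utilization(spec["peak_gb"], spec["gb"])
        if cpu_u < threshold and mem_u < threshold:
            flagged.append((name, round(cpu_u, 2), round(mem_u, 2)))
    return flagged

lab = {
    "charging-lab-1": {"vcpu": 16, "peak_vcpu": 4, "gb": 64, "peak_gb": 16},
    "charging-lab-2": {"vcpu": 8,  "peak_vcpu": 6, "gb": 32, "peak_gb": 24},
}
print(right_size(lab))  # only charging-lab-1 is underutilized
```

A flagged instance is a candidate for a smaller machine type, which is exactly the adjustment made during the integration phase above.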
Optiva’s pricing model is defined during the pre-sales and commercial phases, and it may vary between implementations. It is therefore vital that the project team knows and understands the implementation to properly account for it in its plan and execution. The pricing model’s features and characteristics also vary depending on the public cloud used.
For the implementation of another greenfield customer in Oceania, the team selected a set of predefined instances to set up a lab environment and begin the integration tasks. Nearing the end of the implementation period, setup of the initial production environment commenced. At commercial launch, a committed use discount will be applied with a three-year engagement, providing discounts of 57-70% off on-demand resource rates. Further, the production environment is designed to scale up as the subscriber base grows, ensuring growth capacity for our customer while avoiding the expense of a 24/7, full-scale production environment.
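As a back-of-the-envelope check of what a discount in that range means over the term, consider the sketch below. The monthly on-demand figure is a made-up placeholder; only the 57-70% range comes from the case above.

```python
# Rough arithmetic on the committed use discount mentioned above.
# The on-demand spend is a hypothetical placeholder figure.

on_demand_monthly = 10_000          # hypothetical on-demand spend (USD)
term_months = 36                    # three-year engagement

on_demand_total = on_demand_monthly * term_months
committed_low  = on_demand_total * (1 - 0.57)   # 57% discount
committed_high = on_demand_total * (1 - 0.70)   # 70% discount

print(f"on-demand, 3 years:  ${on_demand_total:,}")
print(f"committed (57% off): ${committed_low:,.0f}")
print(f"committed (70% off): ${committed_high:,.0f}")
```

The committed commitment only pays off if the environment actually runs for most of the term, which is why it is applied at commercial launch rather than during integration.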
Cloud-native applications should support automatic scaling up and down. This is the most effective way to save on costs, activating resources only as needed. It is also worth considering, especially if reserved discounts are not yet applied, shutting down Kubernetes and other services during off hours. Doing so saved our Oceania customer up to 31% on costs during its project implementation.
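The off-hours tactic can be quantified with a quick sketch. The 12-hour, five-day running window below is an illustrative assumption, and the model counts compute hours only, not total cost (storage and other fixed charges keep accruing while instances are down).

```python
# Sketch: fraction of weekly compute hours saved if non-production
# environments only run during a working window. Window is an assumption.

HOURS_PER_WEEK = 24 * 7

def compute_savings(window_hours_per_day, days_per_week):
    """Return the fraction of weekly compute hours avoided by shutting down."""
    running = window_hours_per_day * days_per_week
    return 1 - running / HOURS_PER_WEEK

saving = compute_savings(12, 5)        # 12-hour window, weekdays only
print(f"compute-hour reduction: {saving:.0%}")
```

Because only part of the bill is hourly compute, a large compute-hour reduction translates into a smaller overall cost saving, consistent with the up-to-31% figure observed above.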
Whether this is convenient and cost efficient depends on your specific situation, and as the implementation progresses, the risk of shutting down and restarting the application increases. As more components, customizations, and configurations are added, the time required to shut down and restart also grows. There comes a point where the risk outweighs the benefit, especially as you approach the customer acceptance phase. Also, if the team is global and works across time zones, the benefit of a follow-the-sun working model outweighs the cloud compute cost savings.
As I continue learning and experiencing more cloud implementation deliveries, I am grateful to our team of Optiva engineers, product and R&D teams, InfraOps, cloud specialists, and solution architects who work behind the scenes. They define and prepare the templates, environments, documentation, and even decide which pricing model will apply depending on the characteristics of each requirement.
There is so much experience and knowledge involved, and the learning is continuously increasing with each implementation as we add new technologies and applications to meet market demand. Improving the cost efficiency of cloud implementations, without impacting our quality of delivery, is a company goal that allows us to offer our customers more significant benefits at affordable prices.
Want to know more about Optiva’s 5G-ready proven BSS that powers brands globally? Request a presentation.
Discover and read more → Is 5G the Key to the Metaverse?