Our recent post discussed using the Infrastructure Engineering Lifecycle (IELC) to enable organizations to build a modern, efficient and robust technology infrastructure. One of the key artifacts that both leverages an IELC approach and helps an infrastructure team properly plan and navigate the cycles is the technology plan. Normally, a technology plan is constructed for each major infrastructure ‘component’ (e.g. network, servers, client environment, etc). A well-constructed technology plan creates both the pull – outlining how the platform will meet key business requirements and technology objectives – and the push – reinforcing proper upkeep and use of the IELC practice.
Digitalization continues to sweep almost every industry, and a firm’s ability to actually deliver digital interfaces and services requires a robust, modern and efficient infrastructure. To deliver an optimal technology infrastructure, one must take an ‘evergreen’ approach and maintain a technology pace that matches the industry. Like a dolphin riding the bow wave of a ship, a company can optimize the features and capabilities of its infrastructure and minimize its cost and risk by staying consistently just off the leading pace of the industry. Companies often make the mistake of either surging ahead and expending large resources to acquire fully leading-edge technology, or eking out and extending the life of technology assets to avoid investment and resource requirements. Neither strategy actually saves money ‘through the cycle’, and both add significant risk for little additional benefit.
Companies that choose to minimize their infrastructure investment and reduce costs by overextending asset lives typically incur greater costs through higher maintenance, more fix resources, and lower system performance (and staff productivity). Obviously, extending your desktop PC refresh cycle from 2 years to 4 years is workable and reasonable, but extend the cycle much beyond this and you quickly run into:
- Integration issues – both internal and external compatibility problems, as your clients and partners have newer versions of office tools that are incompatible with yours
- Higher maintenance costs – much hardware carries no maintenance cost for the first 2 or 3 years, with increasing costs in subsequent years
- Greater environmental costs – the power and cooling savings from newer generation equipment are not realized
- Longer security patch cycles – older software is patched more slowly (though there is some benefit, as it is also more stable)
- Greater complexity and cost – you must integrate 3 or 4 generations of equipment and software rather than 2 or 3
- Longer incident times – the usual first vendor response to an issue is ‘you need to upgrade to the latest version of the software before we can really fix this defect’
And if you push the envelope further and extend infrastructure life to the end of the vendor’s lifecycle or beyond, expect significantly higher failure rates, unsupported or expensively supported software, and much higher repair costs. In my experience, across the multiple occasions where we modernized an overextended infrastructure, we were able to reduce total costs by 20 to 30%, and this included the costs of the modernization. In essence, you can run the workload of 4 servers from 3 generations ago on 1 current server, and having modern PCs and laptops means far fewer service issues, fewer service desk calls, far less breakage (people take care of newer equipment) and more productive staff.
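To make the ‘through the cycle’ arithmetic concrete, here is a minimal sketch in Python. The server counts, run costs and migration cost are purely illustrative assumptions, not figures from any actual estate; note that server consolidation on its own can show a larger saving than the blended 20 to 30% across a whole estate.

```python
# Illustrative only: assumed numbers showing how consolidation savings can
# exceed the one-time cost of modernization 'through the cycle'.

old_servers = 4                # legacy servers, 3 generations old (assumed)
old_run_cost = 6000            # annual run cost per legacy server: power,
                               # maintenance, support (assumed)
new_servers = 1                # one current server carries the same workload
new_run_cost = 5500            # annual run cost of the current server (assumed)
modernization_cost = 8000      # one-time migration and hardware cost (assumed)
years = 3                      # evaluation window, i.e. 'through the cycle'

legacy_total = old_servers * old_run_cost * years
modern_total = new_servers * new_run_cost * years + modernization_cost
savings = legacy_total - modern_total

print(f"Legacy: ${legacy_total:,}  Modern: ${modern_total:,}  "
      f"Savings: ${savings:,} ({savings / legacy_total:.0%})")
```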
Companies that have surged to the leading edge on infrastructure are typically paying a premium for nominal benefit. For the privilege of being first, frontrunners encounter an array of issues including:
- More defects – trying out the latest server, cloud product or engineered appliance means you will find far more of its defects
- A price premium – being first with new technology typically means paying a premium, well before volumes and competition kick in to drive better pricing
- Integration issues – third-party utilities and extensions often have not yet released versions that work properly with the latest software
- Greater security exposure – the backdoors and gaps have not yet been uncovered because there are not enough users, so hackers have a greater opportunity to find ‘zero day’ flaws and exploit them to attack you
Typically, the groups I have inherited that were on the leading edge were there because they either had an excess of resources or were focused solely on technology products (and not business needs). There was inadequate dialogue with the business to ensure the focus was on business priorities rather than technology priorities. Thus, the company was often expending 10 to 30% more for little tangible business benefit other than being able to state they were ‘leading edge’. In today’s software world, the latest infrastructure seldom provides compelling business benefit over and above that of a well-run, modern utility infrastructure. Nearly all of the time, the business benefit is derived from compelling services and features enabled by the application software running on that utility. Thus, shops that are tinkering with leading edge hardware or are always first onto the latest version are typically pursuing hobbies disconnected from the business imperatives. Only where organizations operate at massive scale or actually provide infrastructure services as a business does leading edge positioning make business sense.
So, given that our objective is to be in the sweet spot riding the industry bow wave, a good practice for ensuring a proper, consistent pace and connection to the business is a technology plan for each of the major infrastructure components that incorporates the infrastructure engineering lifecycle. A technology plan includes the infrastructure vision and strategy for a component area, defines the key services provided in business terms, and maps out an appropriate trajectory and performance targets for a 2 or 3 year cycle. The technology plan then becomes the roadmap for that particular component and enables management both to plan and track performance against key metrics and to ensure the component evolves with the industry and business needs.
The key components of the technology plan are listed below (a simple structural sketch in code follows the list):
- Mission and vision for the component area
- Key requirements/strategy
- Services (described in business terms)
- Key metrics (definition, explanation)
- Current starting point – explanation (SWOT) – as needed by service
- Current starting point – configuration – as needed by service
- Target – explanation (of approach) and configuration – also defined by service
- Metrics trajectory and target (2 to 3 years)
- Gantt chart showing key initiatives, platform refresh or releases, milestones (can be by service)
- Configuration snapshots at 6 months (for 2 to 3 years, can be by service)
- Investment and resource description
- Summary
- Appendices
  - Platform schedule (2-3 years, as projected)
  - Platform release schedule (next 1-2 years, as projected)
  - Patch cycle (next 6-12 months, as planned)
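As a structural sketch only, the outline above could be captured in a simple data model like the one below. The field names and types are my own assumptions for illustration, not a prescribed template.

```python
# A minimal sketch of the technology plan structure; field names mirror the
# outline above and are illustrative assumptions, not a mandated schema.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    definition: str          # how the metric is measured and why it matters
    current: float
    target: float            # level expected at the end of the plan horizon

@dataclass
class Service:
    name: str                            # described in business terms
    requirements: list[str]
    swot: dict[str, list[str]]           # strengths, weaknesses, opportunities, threats
    current_configuration: str
    target_configuration: str            # high-to-medium level of definition
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class TechnologyPlan:
    component: str                       # e.g. "Network"
    mission: str
    vision: str
    services: list[Service]
    initiatives: list[str]               # feeds the Gantt chart and milestones
    horizon_years: int = 3               # 2 to 3 year cycle
    snapshot_interval_months: int = 6    # configuration snapshots
    investment_summary: str = ""
```

Structuring the plan by service keeps it traceable from business terms down to the metric trajectory for each service area.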
The mission and vision should be derived and cascaded from the overall technology vision and corporate strategy. They should emphasize key tenets of the corporate vision and their implications for the component area. For example, if the corporate strategy is to be ‘easy to do business with’, then the network and server components must support a highly reliable, secure and accessible internet interface. Such reliability and security aspirations then have direct implications for the component’s requirements, objectives and plans.
The services portion of the plan should translate the overall component into the key services provided to the business. For example, network would be translated into data services, general voice services, call center services, internet and data connection services, local branch and office connectivity, wireless and mobile connectivity, and core network and data center connectivity. Each service area should be described in business terms with key requirements specified. Further, each service area should then identify the key metrics used to gauge its performance and effectiveness. These could be quality, cost, performance, usability, productivity or other metrics.
For each service area of a component, the plan is then constructed. Taking the call center service as an example, the current technology configuration and the specific services available define the current starting point. A SWOT analysis should accompany the current configuration, explaining both strengths and where the service falls short of business needs. Then the target is constructed, describing both the overall architecture and approach as well as the target configuration (at a high to medium level of definition), e.g. where the technology configuration for that area will be in 2 or 3 years.
Then, given the target, the key metrics are mapped from their current to their future levels, and a trajectory is established that sets the goals for the service over time. This is subsequently filled out with a more detailed plan (Gantt chart) that shows the key initiatives and changes that must be implemented to achieve the target. Snapshots of the service configuration, typically at 6 month intervals, are added to demonstrate a detailed understanding of how the transformation is accomplished and to enable effective planning and migration. Finally, the investment and resource needs and adjustments are described to accompany the technology plans.
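For the metrics trajectory, one simple approach (an assumption on my part, not the only option) is a linear glide path from the current level to the target across the 6-month snapshots. The sketch below is illustrative only, and the availability figures in the example are made up.

```python
# A small sketch, assuming a simple linear glide path, of laying out a metric's
# trajectory at 6-month snapshots between its current level and its target.
def metric_trajectory(current, target, horizon_years=3, step_months=6):
    """Return (month, goal) pairs from now to the end of the plan horizon."""
    steps = horizon_years * 12 // step_months
    return [
        (i * step_months, round(current + (target - current) * i / steps, 2))
        for i in range(steps + 1)
    ]

# Example: raise call center service availability from 99.5% to 99.95%
for month, goal in metric_trajectory(current=99.5, target=99.95):
    print(f"Month {month:2d}: {goal}%")
```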
If well done, the technology plan provides an effective roadmap for the entire technology component team: they understand how what they do delivers for the business, where they need to be, and how they will get there. It can be an enormous assist to productivity and practicality.
I will post some good examples of technology plans in the coming months.
Have you leveraged plans like this previously? If so, did they help? Would love to hear from you.
All the best, Jim Ditmore
Thanks for the thorough and thoughtful insights Jim. I have experienced the same coming into shops where the focus was on the next new, not-yet-mature software, hardware or cloud product, without a synergistic and integrated view of the hardware, the application architecture and the roadmap. Often the situation is driven by technology acquired on the basis of 3rd party promises, discounts, silver-bullet searches or high aspirations. Applying many of the insights you laid out helped bring the environment and the external and internal customer experience to a better place, but it does take time. The other thought I could offer is a financial one, on transparency with our key stakeholders. It is incumbent upon us to lean out our cost structure, label contingency spend clearly and define a balanced, year-over-year investment spend spanning ten years that incorporates a 3-4 year refresh per technology (i.e. network, server, desktop) on offsetting years. And of course, an investment should be set aside annually to R&D new technology 6 months to 2 years prior to implementation, including the development of an integration strategy well in advance, before any new shiny bauble is bought and installed.
Dear Martin,
Thanks for the note. I fully agree that it is important to be transparent and communicate effectively the full lifecycle of the infrastructure utility to your business partners. It is key to bring them along on the journey and show how you get to an optimal infrastructure utility.
Thanks for the perspective!
Jim
Jim,
One interesting aspect of this is how you measure and quantify the status of your current IELC state without becoming overly complex and detailed. In other words, how one defines the relevant KPIs which can and will show both the current state and the business drivers for moving to the defined future state.
It is a delicate balance between providing enough insight to take the necessary decisions and not digging into irrelevant details.
Appreciate the blog, since not that many industry leaders share their thoughts at this level.
/Martin
Dear Martin,
I would agree with you, and would add that it is worthwhile to structure your KPIs or metrics into operational, performance and verification metrics. There is a good explanation of this approach in the Metrics Best Practices section.
I am pleased you are enjoying the blog, and I am hoping to help others in the industry achieve better success and best practices 🙂
Best, Jim Ditmore
Dear Jim,
This is great stuff in relation to the focus on long term efficiency and quality in whatever process you need to set up. As you both noted, this will always be a journey that starts with a plan. In my business, and in relation to clients, I have seen the need for building a matrix not only for systems and systems requirements, but also for the compliance that these systems need to support. Would it work to set up a planning matrix for the long term solution (possibly also short term) and then integrate that with the compliance matrix that the system matrix is set to support? Does that make sense?
Jan
Dear Jan,
Thanks for the observations. I agree fully that you should set up a planning matrix for the long term solution, but integrate it with the compliance requirements by treating them as quality requirements. The compliance needs are best served by having the operational team deliver quality and control as an inherent structure rather than an add-on structure (there is further discussion of ‘verification metrics’ in the best practices for metrics section of the blog). To rephrase: set up the technology plan for the long term solution including quality, then integrate any standalone compliance requirements. This will minimize the ‘patchwork effect’ that compliance measures can sometimes have.
Hope that makes sense, best,
Jim Ditmore