Overcoming the Inefficient Technology Marketplace

The typical IT shop spends 60% or more of its budget on external vendors – buying hardware, software, and services. Globally, the $2 trillion IT marketplace (a 2013 Forrester estimate) is quite inefficient: prices and discounts vary widely between purchasers, and often not for reasons of volume or relationship. As a result, many IT organizations fail to effectively optimize their spend, often overpaying by 10%, 20%, or even much more.

Considering that IT budgets continue to be very tight, overspending your external vendor budget by 20% (a total budget overrun of 12%) means that you must reduce the remaining 40% of your budget (which is primarily for staff) by almost one third! What better way to get more productivity and results from your IT team than to spend only what is needed on external vendors and plow these savings back into IT staff and investments or into the corporate bottom line?
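To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (in Python, purely for illustration) using the 60/40 budget split discussed above; the total budget figure is hypothetical.

```python
# Back-of-the-envelope sketch of the vendor overspend math above.
# The total budget figure is hypothetical; the 60/40 split and the 20%
# overspend follow the percentages discussed in the text.
total_budget = 100_000_000          # hypothetical total IT budget ($)
vendor_share = 0.60                 # portion spent on external vendors
staff_share = 1 - vendor_share      # remaining 40%, primarily staff

vendor_overspend_rate = 0.20        # paying 20% more than needed to vendors
overrun = total_budget * vendor_share * vendor_overspend_rate

# Overrun as a share of the total budget (the "12%" above)
print(f"Total budget overrun: {overrun / total_budget:.0%}")

# To absorb the overrun, the staff budget must shrink by this fraction
# (the "almost one third" above)
staff_cut = overrun / (total_budget * staff_share)
print(f"Required cut to staff budget: {staff_cut:.0%}")
```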

IT expenditures are easily one of the most inefficient areas of corporate spending due to opaque product prices and uneven vendor discounts. The inefficiency occurs across the entire spectrum of technology purchases – not just highly complex software purchases or service procurements. I learned from my experience in several large IT shops that there is rarely a clear rationale for the pricing achieved by different firms other than that they received what they competitively arranged and negotiated. To overcome this inefficient marketplace, the key prerequisite is to set up strong competitive playing fields for your purchases. With competitive tension, your negotiations will be much stronger, and your vendors will work to provide the best value. In several instances, when comparing prices and discounts between firms where I have worked that subsequently merged, it became clear that many IT vendors had no consistent pricing structures, and in too many cases, the firm with greater volume had a worse discount rate than the smaller firm. The primary difference? The firm that competitively arranged and rigorously negotiated always had the better discount. The firms that based their purchases on relationships, or that had embedded technologies limiting their choices, typically ended up with technology pricing well over optimum market rates.

As an IT leader, to recapture the 6 to 12% of your total budget lost to vendor overspend, you need to address inadequate technology acquisition knowledge and processes in your firm – particularly with the senior managers and engineers who participate in or make the purchase decisions. To achieve best practice in this area, the basics of a strong technology acquisition approach are covered here, and I will post on the reference pages the relevant templates that IT leaders can use to seed their own best practice acquisition processes. The acquisition processes will only work if you are committed to creating and maintaining competitive playing fields and not making decisions based on relationships. As a leader, you will need to set the tone with a value culture and focus on your company’s return on value and objectives – not the vendors’.

Of course, the technology acquisition process outlined here is a subset of the procurement lifecycle applied to technology. The technology acquisition process provides additional details on how to apply the lifecycle to technology purchases, leveraging the teams, and accommodating the complexities of the technology world. As outlined in the lifecycle, technology acquisition should then be complemented by a vendor management approach that repairs or sustains vendor performance and quality levels – this I will cover in a later post.

Before we dive into the steps of the technology acquisition process, what are the fundamentals that must be in place for it to work well? First, a robust ‘value’ culture must be in place. A ‘value’ culture is one where IT management (at all levels) is committed to optimizing its company’s spending to make sure the company gets the most for its money. It should be part of the core values of the group (and even better, a derivative of corporate values). IT management and senior engineers should understand that delivering strong value requires constructing competitive playing fields for their primary areas of spending. If IT leadership instead allows relationships to drive acquisitions, this quickly robs the organization of negotiating leverage, and cost increases will quickly seep into acquisitions. IT vendors will rapidly adapt to how the IT team selects purchases – if it is relationship oriented, they will have lots of marketing events and will try to monopolize the decision makers’ time. If they must be competitive and deliver outstanding results, they will instead focus on getting things done and will try to demonstrate value. For your company, one barometer of how you conduct your purchases is the type of treatment you receive from your vendors. Commit to break out of the mold of most IT shops by replacing the cycle of relationship purchases and locked-in technologies with a ‘value’ culture and competitive playing fields.

Second, your procurement team should have thoughtful category strategies for each key area of IT spending (e.g., storage, networking equipment, telecommunications services). Generally, your best acquisition strategy for a category is to establish 2 or 3 strong competitors in a supply sector such as storage hardware. Because you will have leveled most of the technical hurdles that prevent substitution, your next significant acquisition could easily go to any of the vendors. In such a situation, you can drive all vendors to compete strongly and lower their pricing to win. Of course, such a strong negotiating position is not always possible due to your legacy systems, new investments, or limited actual competitors. For these situations, the procurement team should seek to understand what the best pricing is on the market and what critical factors the vendor seeks (e.g., market share, long term commitment, marketing publicity, end of quarter revenue), and then the team should use these to trade for more value for their company (e.g., price reductions, better service, lower long term cost). This work should be done upfront, well before a transaction is initiated, so that the conditions favoring the customer in negotiations are in place.

Third, your technology decision makers and your procurement team should be on the same page with a technology acquisition process (TAP). Your technology leads who are making purchase decisions should work arm in arm with the procurement team in each step of the TAP. Below is a diagram outlining the steps of the process. A team can do very well simply by executing each of the steps as outlined. Even better results are achieved by understanding the nuances of negotiations, maintaining competitive tension, and driving value.


Here are further details on each TAP step:

A. Identify Need – Your source for new purchasing can come from the business or from IT. Generally, you would start at this step only if it is a new product or significant upgrade or if you are looking to introduce a new vendor (or vendors) to a demand area. The need should be well documented in business terms and you should avoid specifying the need in terms of a product — otherwise, you have just directed the purchase to a specific product and vendor and you will very likely overpay.

B. Define Requirements – Specify your needs and ensure they mesh with the overall technology roadmap that the architects have defined. Look to bundle or gather up needs so that you can attain greater volumes in one acquisition and possibly gain better pricing. Avoid specifying requirements in terms of products to prevent ‘directing’ the purchase to a particular vendor. Try to gather requirements in a rapid process (some ideas here) and avoid stretching this task out. If necessary, subsequent steps (including an RFI) can be used to refine requirements.

C. Analyze Options – Utilize industry research and high level alternatives analysis to down-select to the appropriate vendor/product pool. Ensure you maintain a strong competitive field. At the same time, do not waste time or resources on options that are unlikely.

D, E, F, G. Execute these four steps concurrently. First, ensure the options will all meet critical governance requirements (risk, legal, security, architectural), then drive the procurement selection process as appropriate based on the category strategy. As you narrow or extend options, conduct the appropriate financial analysis. If you do wish to leverage proofs of concept or other trials, ensure you have pricing well established before the trial. Otherwise, you will have far less leverage in vendor negotiations after a successful trial.

H. Create the Contract – Leverage robust terms and conditions via well-thought-out contract templates to minimize the work and ensure higher quality contracts. At the same time, don’t forgo the business objectives of price, quality, and capability by trading them away for some unlikely liability term. The contract should be robust and fair with highly competitive pricing.

I. Acquire the Product – This is the final step of the procurement transaction, and it should be as accurate and automated as possible. Ensure proper receiving and sign-off as well as prompt payment. Often a further 1% discount can be achieved with prompt payment.

J & K. These steps move into lifecycle work to maintain good vendor performance and manage the assets. Vendor management, an important activity that corrects or sustains vendor performance at high levels, will be covered in a subsequent post.

By following this process and ensuring your key decision makers set a competitive landscape and hold your vendors to high standards, you should be able to achieve better quality, better services, and significant cost savings. You can then plow these savings back into strategic investments (including more staff) or reduce IT costs for your company. And at these levels, that can make a big difference.

What are some of your experiences with technology acquisition and suppliers? How have you tackled or optimized the IT marketplace to get the best deals?

I look forward to hearing your views. Best, Jim Ditmore

Moving from Offshoring to Global Shared Service Centers

My apologies for the delay in my post. It has been a busy few months and it has taken an extended time since there is quite a bit I wish to cover in the global shared service center model. Since my NCAA bracket has completely tanked, I am out of excuses to not complete the writing, so here is the first post with at least one to follow. 

Since the mid-90s, companies have used offshoring to achieve cost and capacity advantages in IT. Offshoring was a favored option to address Y2K issues and has continued to expand at a steady rate throughout the past twenty years. But many companies still approach offshoring as ‘out-tasking’ and fail to leverage the many advantages of a truly global and high performance workforce.

With out-tasking, companies take a limited set of functions or ‘tasks’ and move these to the offshore team. They often achieve initial economic advantage through labor arbitrage and perhaps some improvement in quality as the tasks are documented and standardized to make it easier to transition the work to the new location. This constitutes the first level of a global team: offshore service provider. But larger benefits are often lost, typically including:

  • further ongoing process improvement,
  • better time to market,
  • wider service times or ‘follow the sun’,
  • and leverage of critical innovation or leadership capabilities of the offshore team.

In fact, the work often stagnates at whatever state it was in when it was transitioned, with little impetus for further improvement. And because lower level tasks are often the work that is shifted offshore while higher level design work remains in the home country, key decisions on design or direction can often take an extended period – actually lengthening time to market. In fact, design or direction decisions often become arbitrary or disconnected because the groups – one in the home office, the other in the offshore location – retain significant divides (time of day, perspective, knowledge of the work, understanding of the corporate strategy, etc.). At its extreme, the home office becomes the ivory tower and the offshore teams become serf task executors and administrators. Ownership, engagement, initiative, and improvement energies are usually lost in these arrangements. And it can be further exacerbated by having contractors at the offshore location, who have a commercial interest in maintaining the status quo (and thus revenue) and who are viewed with less regard by the home country staff. Any changes required are used to increase contractor revenues and margins. These shortcomings erase many of the economic advantages of offshoring over time and further impact the competitiveness of the company in areas such as agility, quality, and leadership development.

A far better way to approach your workforce is to leverage a ‘global footprint and a global team’. This approach is absolutely key for competitive advantage and essential for competitive parity if you are an international company. There are multiple elements of the ‘global footprint and team’ approach that, when effectively orchestrated by IT leadership, can achieve far better results than any other structure. By leveraging a high performance global approach, you can move from an offshore service provider to a shared service excellence center and, ultimately, to a global service leadership center.

The key elements of a global team approach can be grouped into two areas: high performance global footprint and high performance team. The global footprint elements are:

  • well-selected strategic sites, each with adequate critical mass, strong labor pools and higher education sources
  • proper positioning to meet time-of-day and improved skill and cost mix
  • knowledge and leverage of distinct regional advantages to obtain better customer interface, diverse inputs and designs, or unique skills
  • proper consolidation and segmentation of functions across sites to achieve optimum cost and capability mixes

Global team elements include:

  • consistent global goals and vision across global sites with commensurate rewards and recognition by site
  • a team structure that enables both integrated processes and local and global controls
  • the opportunity for growth globally from a junior position to a senior leader
  • close partnership with local universities and key suppliers at each strategic location
  • opportunity for leadership at all locations

Let’s tackle global footprint today; in a follow-on post I will cover global team. First and foremost is selecting the right sites for your company. Your current staff size and locations will obviously factor heavily into your ultimate site mix. Assess your current sites using the following criteria:

  • Do they have critical mass (typically at least 300 engineers or operations personnel, preferably 500+) that will make the site efficient, productive and enable staff growth?
  • Is the site located where IT talent can be easily sourced? Are there good universities nearby to partner with? Are there business units co-located or customers nearby?
  • Is the site in a low, medium, or high cost location?
  • What is the shift (time zone) of the location?

Once you have classified your current sites with these criteria, you can then assess the gaps. Do you have sites in low-cost locations with strong engineering talent (e.g., India, Eastern Europe)? Do you have medium cost locations (e.g., Ireland or 2nd tier cities in the US Midwest)? Do you have too many small sites (e.g., under 100 personnel)? Do you have sites close to key business units or customers? Are any sites located in 3rd shift time zones? Remember that your sites are more about the cities they are located in than the countries. A second tier city in India or a first or second tier city in Eastern Europe can often be your best site location because of improved talent acquisition and lower attrition compared to 1st tier locations in your country or in India.

It is often best to locate your service center where there are strong engineering and business universities nearby that will provide an influx of entry level staff eager to learn and develop. Given that staff will be the primary cost factor in your service, ensure you locate in lower cost areas that have good language skills, access to the engineering universities, and appropriate time zones. For example, if you are in Europe, you should look to have one or two consolidated sites located just outside 2nd tier cities with strong universities: do not locate in Paris or London; instead, base your service desk in or just outside Manchester, Budapest, or Vilnius. This will enable you to tap into a lower cost yet high quality labor market that is also likely to provide more part-time workers who can help you cover peak call periods. You can use a similar approach in the US or Asia.

A highly competitive site structure also enables you to achieve an optimal global cost and capability mix. At the most mature global teams in very large companies, we drove for a 20/40/40 cost mix (20% high cost, 40% medium, and 40% low cost) where each site is in a strong engineering location. Where possible, we also co-located with key business units. Drive to the optimal mix by selecting 3, 4, or 5 strategic sites that meet the mix target and that will also give you the greatest spread of shift coverage. Once you have located your sites correctly, you must then of course drive effective recruiting, training, and management of each site to achieve outstanding service. Remember also that you must properly consolidate functions to these strategic sites. Your key functions must be consolidated to 2 or 3 of the sites – you cannot run a successful function where there are multiple small units scattered around your corporate footprint. You will be unable to invest in the needed technology and provide an adequate career path to attract the right staff if it is highly dispersed.

You can easily construct a matrix and assess your current sites against these criteria (a simple sketch follows below). Remember, these sites are likely among the most important investments your company will make. If you have a poor portfolio of sites, with inadequate labor resources, ineffective talent pipelines, or other issues, it will impact your company’s ability to attract and retain its most important asset and to achieve competitive success. It may take substantial investment and an extended period of time, but achieving an optimal global site mix and global team will provide lasting competitive advantage.
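As an illustration of such a matrix, here is a minimal sketch (in Python, purely for illustration); the site names, headcounts, cost tiers, and time zone shifts are hypothetical, and the 20/40/40 target is the mix discussed above.

```python
# Minimal sketch of a site assessment matrix. Sites, headcounts, and cost
# tiers are hypothetical; the 20/40/40 cost mix target is from the text above.
sites = [
    {"name": "Site A (HQ city)",  "staff": 600, "cost": "high",   "universities": True,  "shift": 1},
    {"name": "Site B (2nd tier)", "staff": 450, "cost": "medium", "universities": True,  "shift": 1},
    {"name": "Site C (offshore)", "staff": 500, "cost": "low",    "universities": True,  "shift": 3},
    {"name": "Site D (legacy)",   "staff": 80,  "cost": "high",   "universities": False, "shift": 1},
]
target_mix = {"high": 0.20, "medium": 0.40, "low": 0.40}

total_staff = sum(s["staff"] for s in sites)

# Current cost mix versus the 20/40/40 target
for tier, target in target_mix.items():
    share = sum(s["staff"] for s in sites if s["cost"] == tier) / total_staff
    print(f"{tier:>6} cost share: {share:.0%} (target {target:.0%})")

# Flag gaps against the site criteria discussed above
for s in sites:
    issues = []
    if s["staff"] < 300:
        issues.append("below critical mass")
    if not s["universities"]:
        issues.append("no university pipeline")
    print(f'{s["name"]}: {", ".join(issues) if issues else "meets basic criteria"}')
```

A simple scoring like this quickly surfaces the gaps (sub-scale sites, an over-weighted high cost tier, missing 3rd shift coverage) that should drive your consolidation and investment decisions.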

I will cover the global team aspects in my next post, along with the key factors in moving from an offshore service provider to shared service excellence to shared service leadership.

It would be great to hear your perspectives and any feedback on how you or your company have been either successful (or unsuccessful) at achieving a global team.

Best, Jim Ditmore

How Did Technology End Up on the Sunday Morning Talk Shows?

It has been two months since the Healthcare.gov launch, and by now nearly every American has heard of or witnessed the poor performance of the website. Early on, only one of every five users was able to actually sign in to Healthcare.gov, while poor performance and unavailable systems continued to plague the federal and some state exchanges. Performance was still problematic several weeks into the launch, and even as of Friday, November 30, the site was down for 11 hours for maintenance. As of today, December 1, the promised ‘relaunch day’, it appears the site is ‘markedly improved’, but there are plenty more issues to fix.

What a sad state of affairs for IT. So, what do the Healthcare.gov website issues teach us about large project management and execution? Or further, about quality engineering and defect removal?

Soon after the launch, former federal CTO Aneesh Chopra, in an Aspen Institute interview with The New York Times‘ Thomas Friedman, shrugged off the website problems, saying that “glitches happen.” Chopra compared the Healthcare.gov downtime to the frequent appearances of Twitter’s “fail whale” as heavy traffic overwhelmed that site during the 2010 soccer World Cup.

But given that the size of the signup audience was well known and that website technology is mature and well understood, how could the government create such an IT mess? Especially given how much lead time the government had (more than three years) and how much it spent on building the site (estimated between $300 million and $500 million).

Perhaps this is not quite so unusual. Industry research suggests that large IT projects are at far greater risk of failure than smaller efforts. A 2012 McKinsey study revealed that 17% of IT projects budgeted at $15 million or higher go so badly as to threaten the company’s existence, and more than 40% of them fail. As bad as the U.S. healthcare website debut is, there are dozens of examples, both government-run and private, of similar debacles.

In a landmark 1995 study, the Standish Group established that only about 17% of IT projects could be considered “fully successful,” another 52% were “challenged” (they didn’t meet budget, quality or time goals) and 30% were “impaired or failed.” In a recent update of that study conducted for ComputerWorld, Standish examined 3,555 IT projects between 2003 and 2012 that had labor costs of at least $10 million and found that only 6.4% of them were successful.

Combining the inherent problems associated with very large IT projects with outdated government practices greatly increases the risk factors. Enterprises of all types can track large IT project failures to several key reasons:

  • Poor or ambiguous sponsorship
  • Confusing or changing requirements
  • Inadequate skills or resources
  • Poor design or inappropriate use of new technology

Unfortunately, strong sponsorship and solid requirements are difficult to come by in a political environment (read: Obamacare), where too many individual and group stakeholders have reason to argue with one another and change the project. Applying the political process of lengthy debates, consensus-building and multiple agendas to defining project requirements is a recipe for disaster.

Furthermore, based on my experience, I suspect the contractors doing the government work encouraged changes, as they saw an opportunity to grow the scope of the project with much higher-margin work (change orders are always much more profitable than the original bid). Inadequate sponsorship and weak requirements were undoubtedly combined with a waterfall development methodology and overall big bang approach usually specified by government procurement methods. In fact, early testimony by the contractors ‘cited a lack of testing on the full system and last-minute changes by the federal agency’.

Why didn’t the project use an iterative delivery approach to hone requirements and interfaces early? Why not start with healthcare site pilots and betas months or even years before the October 1 launch date? The project was underway for three years, yet nothing was made available until October 1. And why did the effort leverage only an already occupied pool of virtualized servers that had little spare capacity for a major new site? For less than 10% of the project costs, a massive dedicated server farm could have been built. Further, there was no backup site, nor were any monitoring tools implemented. And where was the horizontal scaling design within the application to enable easy addition of capacity for unexpected demand? It is disappointing to see such basic misses in non-functional requirements and design in a major program for a system that is not that difficult or unique.

These basic deliverables and approaches appear to have been missed entirely in the implementation of the website. Further, the website code appears to have been quite sloppy, not even using common caching techniques to improve performance. Thus, in addition to suffering from weak sponsorship and ambiguous requirements, this program failed to leverage well-known best practices for the technology and design.
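As an aside, here is a minimal illustration of one such common caching technique: setting Cache-Control headers so browsers and CDNs can reuse slowly-changing responses rather than repeatedly hitting the origin servers. This is purely a hypothetical sketch (assuming Flask for illustration), not the actual Healthcare.gov code.

```python
# Illustrative only -- not the Healthcare.gov code. A minimal example of one
# common caching technique: Cache-Control headers that let browsers and CDNs
# reuse slowly-changing responses instead of hitting the origin servers.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/reference/plan-tiers")
def plan_tiers():
    # Hypothetical reference data that changes rarely.
    response = jsonify({"tiers": ["bronze", "silver", "gold", "platinum"]})
    # Any shared cache may serve this response for up to an hour,
    # sparing the application and database from repeated identical requests.
    response.headers["Cache-Control"] = "public, max-age=3600"
    return response

if __name__ == "__main__":
    app.run()
```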

One would have thought that, given the scale of and expenditure on the program, top technical resources would have been allocated and would have ensured these practices were used. The feds are now scrambling with a “surge” of tech resources for the site. And while the new resources and leadership have made improvements so far, the surge will bring its own problems. It is very difficult to effectively add resources to an already large program. New ideas introduced by the ‘surge’ resources may not be accepted or easily integrated. And if the issues are deeply embedded in the system, it will be difficult for the new team to fully fix the defects. For every 100 defects identified in the first few weeks, my experience with quality suggests there are 2 or 3 times more defects buried in the system. Furthermore, if the project couldn’t handle the “easy” technical work – sound website design and horizontal scalability – how will it handle the more difficult challenges of data quality and security?

These issues will become more apparent in the coming months when the complex integration with backend systems from other agencies and insurance companies becomes stressed. And already the fraudsters are jumping into the fray.

So, what should be done and what are the takeaways for an IT leader? Clear sponsorship and proper governance are table stakes for any big IT project, but in this case more radical changes are in order. Why have all 36 states and the federal government roll out their healthcare exchanges in one waterfall or big bang approach? The sites that are working reasonably well (such as the District of Columbia’s) developed them independently. Divide the work up where possible, and move to an iterative or spiral methodology. Deliver early and often.

Perhaps even use competitive tension by having two contractors compete against each other for each such cycle. Pick the one that worked the best and then start over on the next cycle. But make them sprints, not marathons. Three- or six-month cycles should do it. The team that meets the requirements, on time, will have an opportunity to bid on the next cycle. Any contractor that doesn’t clear the bar gets barred from the next round. Now there’s no payoff for a contractor encouraging endless changes. And you have broken up the work into more doable components that can then be improved in the next implementation.

Finally, use only proven technologies. And why not ask the CIOs or chief technology architects of a few large-scale Web companies to spend a few days reviewing the program and designs at appropriate points? It’s the kind of industry-government partnership we would all like to see.

If you want to learn more about how to manage (and not to manage) large IT programs, I recommend “Software Runaways,” by Robert L. Glass, which documents some spectacular failures. Reading the book is like watching a traffic accident unfold: it’s awful but you can’t tear yourself away. Also, I expand on the root causes of and remedies for IT project failures in my post on project management best practices.

And how about some projects that went well? Here is a great link to the 10 best government IT projects in 2012!

What project management best practices would you add? Please weigh in with a comment below.

Best, Jim Ditmore

This post was first published in late October in InformationWeek and has been updated for this site.

Looking to Improve IT Production? How to Start

Production issues, as Microsoft and Google can tell you, impact even cloud email apps. A few weeks ago, Microsoft took an entire weekend to fully recover its cloud Outlook service. Perhaps you noted the issues earlier this year in financial services, where Bank of America experienced internet site availability issues. Unfortunately for Bank of America, that was its second outage in six months, though it is not alone in having problems: Chase suffered a similar production outage on its internet services the following week. And these are regular production issues, not the unavailability of websites and services due to a series of DDoS attacks.

Perhaps 10 or certainly 15 years ago, such outages with production systems would have drawn far less notice from customers, as front office personnel would have worked alternate systems and manual procedures until the systems were restored. But with customers now accessing the heart of most companies’ systems through internet and mobile applications, typically on a 7×24 basis, it is very difficult to avoid direct and widespread impact to customers in the event of a system failure. Your production performance becomes very evident to your customers. And your customers’ expectations have continued to increase such that they expect your company and your services to be available pretty much whenever they want to use them. And while being available is not the only attribute that customers value (usability, features, service, and pricing factor in importantly as well), companies that consistently meet or exceed consumer availability expectations gain a key edge in the market.

So how do you deliver to current and future rising expectations around availability of your online and mobile services? And if both BofA and Chase, which are large organizations that offer dozens of services online and have massive IT departments, have issues delivering consistently high availability, how can smaller organizations deliver compelling reliability?

And often, the demand for high availability must be achieved in an environment where ongoing efficiency measures have eroded the production base and a tight IT labor market has further complicated obtaining adequate expertise. If your organization is struggling with availability, or you are looking to achieve top quartile performance and competitive service advantage, here’s where to start:

First, understand that availability, at its root, is a quality issue. And quality issues can only be resolved if you address all aspects. You must set quality and availability as a priority, as a critical and primary goal for the organization. And you will need to ensure that incentives and rewards are aligned to your team’s availability goal.

Second, you will need to address the IT change processes. You should look to implement an ITSM change process based on ITIL. But don’t wait for a fully defined process to be implemented. You can start by limiting changes to appropriate windows. Establish release dates for major systems and accompanying subsystems. Avoid changes during key business hours or just before the start of the day. I still remember the ‘night programmers’ at Ameritrade at the beginning of our transformation there. Staying late one night as CIO in my first month, I noticed two guys come in at 10:30 PM. When I asked what they did, they said, ‘We are the night programmers. When something breaks with the nightly batch run, we go in and fix it.’ And this was done with no change records, minimal testing, and minimal documentation. Of course, my hair stood on end hearing this. We quickly discontinued that practice and instead made changes as a team, after they were fully engineered and tested. I would note that combining this action with a number of other measures mentioned here enabled us to quickly reach a stable platform that had the best track record for availability of all online brokerages.

Importantly, you should ensure that adequate change review and documentation is being done by your teams for their changes. Ensure they take accountability for their work and their quality. Drive to an improved change process with templates for reviews, proper documentation, back-out plans, and validation. Most failed changes are due to issues with the basics: a lack of adequate review and planning, poor documentation of deployment steps, missing or ineffective validation, or one person doing an implementation in the middle of the night when you should have at least two people doing it together (one to do, and one to check).

Also, you should measure the proportion of incidents due to change. If you experience mediocre or poor availability and failed changes contribute to more than 30% of the incidents, you should recognize change quality is a major contributor to your issues. You will need to zero in on the areas with chronic change issues. Measure the change success rate (percentage of changes executed successfully without production incident) of your teams. Publish the results by team (this will help drive more rapid improvement). Often, you can quickly find which of your teams has inadequate quality because their change success rate ranges from a very poor mid-80s percentage to a mediocre mid-90s percentage. Good shops deliver above 98% and a first quartile shop consistently has a change success rate of 99% or better.
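Here is a minimal sketch of the change success rate calculation by team; the team names and change counts are hypothetical, and the 98%/99% benchmarks are those noted above.

```python
# Minimal sketch of change success rate by team. Team names and counts are
# hypothetical; the 98%/99% benchmarks are from the text above.
teams = {
    # team: (changes executed, changes causing a production incident)
    "Network":     (240, 2),
    "Midrange":    (310, 14),
    "Database":    (180, 3),
    "Application": (520, 38),
}

for team, (changes, failed) in sorted(teams.items()):
    success_rate = (changes - failed) / changes
    if success_rate >= 0.99:
        rating = "first quartile"
    elif success_rate >= 0.98:
        rating = "good"
    else:
        rating = "needs attention"
    print(f"{team:<12} {success_rate:6.1%}  ({rating})")
```

Publishing a simple table like this by team each month is usually enough to make the laggards visible and drive rapid improvement.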

Third, ensure all customer impacting problems are routed through an enterprise command center via an effective incident management process. An Enterprise Command Center (ECC) is basically an enterprise version of a Network Operations Center or NOC, where all of your systems and infrastructure are monitored (not just networks). The ECC also has the capability to facilitate and coordinate triage and resolution efforts for production issues. An effective ECC can bring together the right resources from across the enterprise and supporting vendors to diagnose and fix production issues while providing communication and updates to the rest of the enterprise. Delivering highly available systems requires an investment in an ECC and the supporting diagnostic and monitoring systems. Many companies have partially constructed the diagnostics or have siloed war rooms for some applications or infrastructure components. To fully and properly handle production issues requires consolidating these capabilities and extending their reach. If you have an ECC in place, ensure that all customer impacting issues are fully reported and handled. Underreporting of issues that impact a segment of your customer base, or the siphoning off of a problem to be handled by a local team, is akin to trying to handle a house fire with a garden hose and not calling the fire department. Call the fire department first, and then get the garden hose out while the fire trucks are on their way.

Fourth, you must execute strong root cause analysis and follow-up. These efforts must be at the individual issue or incident level as well as at a summary or higher level. It is important not just to focus on fixing the individual incident and getting to root cause for that one incident, but also to look for the overall trends and patterns in your issues. Do they cluster around one application or infrastructure component? Are they caused primarily by change? Does a supplier contribute far too many issues? Is inadequate testing a common thread among incidents? Are your designs too complex? Are you using the products in a mainstream or unique manner – especially if you are seeing many OS or product defects? Use these patterns and analysis to identify the systemic issues your organization must fix. They may be process issues (e.g., poor testing), application or infrastructure issues (e.g., obsolete hardware), or other issues (e.g., lack of documentation, incompetent staff). Track both the fixes for individual issues and the efforts to address systemic issues. The systemic efforts will begin to yield improvements that eliminate future issues.

These four efforts will set you on a solid course to improved availability. If you couple these efforts with diligent engagement by senior management and disciplined execution, the improvements will come slowly at first, but then will yield substantial gains that can be sustained.

You can achieve further momentum with work in several areas:

  • Document configurations for all key systems. If you are doing discovery during incidents, it is a clear indicator that your documentation and knowledge base are highly inadequate.
  • Review how incidents are reported. Are they user reported or did your monitoring identify the issue first? At least 70% of the issues should be identified first by you, and eventually you will want to drive this to a 90% level. If you are lower, then you need to look to invest in improving your monitoring and diagnostic capabilities.
  • Do you report availability in technical measures or business measures? If you report via time-based system availability measures or number of incidents by severity, these are technical measures. You should look to implement business-oriented measures, such as customer impact availability, to drive greater transparency and more accurate metrics.
  • In addition to eliminating issues, reduce your customer impacts by reducing the time to restore service (Microsoft can certainly stand to consider this area given their latest outage was three days!). For mean time to restore (MTTR – note this is not mean time to repair but mean time to restore service), there are three components: time to detect (MTTD), time to diagnose or correlate (MTTC), and time to fix, i.e., restore service (MTTF). An IT shop that is effective at resolution normally sees MTTR at 2 hours or less for its priority issues, with the three components each taking about 1/3 of the time (see the sketch after this list). If your MTTD is high, look to invest in better monitoring. If your MTTC is high, look to improve correlation tools, systems documentation, or engineering knowledge. And if your MTTF is high, again look to improve documentation or engineering knowledge, or automate recovery procedures.
  • Consider investing in greater resiliency for key systems. It may be that customer expectations of availability exceed current architecture capabilities. Thus, you may want to invest in greater resiliency and redundancy or build a more highly available platform.
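Here is a minimal sketch of the MTTR decomposition described in the list above; the incident durations are hypothetical.

```python
# Minimal sketch of the MTTR decomposition (detect + diagnose/correlate + fix).
# Incident durations (in minutes) are hypothetical.
incidents = [
    # (time to detect, time to diagnose/correlate, time to fix/restore)
    (25, 40, 35),
    (60, 55, 50),
    (35, 40, 55),
]

n = len(incidents)
mttd = sum(i[0] for i in incidents) / n
mttc = sum(i[1] for i in incidents) / n
mttf = sum(i[2] for i in incidents) / n
mttr = mttd + mttc + mttf

print(f"MTTD: {mttd:.0f} min, MTTC: {mttc:.0f} min, MTTF: {mttf:.0f} min")
print(f"MTTR (to restore service): {mttr:.0f} min")
# Per the text, an effective shop sees MTTR of ~2 hours or less for priority
# incidents, with each component taking roughly a third of the time.
if mttr > 120:
    print("Above the 2-hour benchmark -- examine the largest component first.")
```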

As you can see, providing robust availability for your customers is a complex endeavor. By implementing these steps, you can enable sustainable and substantial progress to top quartile performance and achieve business advantage in today’s 7×24 world.

What would you add to these steps? What were the key factors in your shop’s journey to high availability?

Best, Jim Ditmore

Getting Things Done: A Key Leadership Skill

It is a bit ironic that this post has taken me twice as long to do as my average post. But while it is an important topic, it is difficult to pinpoint, of all the practices you can leverage, which ones really help you or your team or organization get the right things done. So, just before the Memorial Day holiday, here is a post to help you execute better for the rest of the year and meet those goals.

Have a great holiday weekend.  Jim

Getting things done is a hallmark of effective teams. Unfortunately, the focus and flow of large business organizations, combined with the influences of the modern world, erode our ability to get the right things done. To raise productivity to high performance levels, as a senior leader you should impart an ability to get the right things done at the divisional and team levels within your organization. And while there are myriad reasons that conspire to reduce our focus or effectiveness, there are a number of techniques and practices that can greatly improve selection and capacity at all levels: for the overall organization or division, for the working team, and for the individual.

Realize that the same positive forces that ensure a focus on business goals, drive consensus within an organization, or require risk and control to be addressed can also be mis- or over-applied and result in organizational imbalance or gridlock. Couple this with too many waterfall or ‘big bang’ approaches and you can get not just ineffectiveness but spectacular failures of large efforts. At the organizational level, you should set the right agenda and framework so the productivity and capacity of your IT shop can be improved at the same time you are delivering to the business agenda. To set the right agenda, look to the following practices:

  • provide a clear vision with robust goals that include clear delivery milestones and that are aligned to the business objectives. The vision should also be compelling — your team will only outperform for a worthwhile aspiration.
  • avoid too many big bets (an unbalanced portfolio) – your portfolio should be a mix of large, medium, and small deliveries. This enables you to deliver a regular stream of benefits across a broader set of functions and constituents with less risk. Often a nice balancing investment area is to drive several small efforts in HR and Finance that streamline and automate common processes used by much of the corporation (thus a good, broad, positive impact on corporate productivity).
  • aggregate your delivery – often IT efforts can be so tightly tied to immediate delivery for the business that the IT processes are substantially penalized, including:
    • where a continuous stream of applications and updates is introduced into production without a release schedule (causing a large amount of duplicative or inadequate design, testing, and implementation)
    • where a highly siloed delivery approach means every minor business unit has its own set of business systems, resulting in redundant feature build and maintenance work.
  • address poor quality standards and ineffective build capability, including:
    • defects corrected late in the build process – defects corrected at their source (design or implementation) are far less costly to fix than those corrected once in production
    • lower build productivity due to a lack of investment in the underlying ‘build factory’ (tools, training, and processes) or a failure to leverage modern incremental or agile methods
    • delivery of the full stack by the internal team, where packaged software is not leveraged (recently I have encountered shops trying to build their own software distribution tools or databases)

So, in sum, at the organizational level, provide clarity of vision, review your portfolio for balance, make room for investments in your factory and look to simplify and consolidate.

At the team level, employ clarity, accountability, and simplicity to get the right things done. Whether it is a project or an ongoing function:

  • are the goals or deliverables clear?
  • are the efforts broken into incremental tasks or steps?
  • are the roles clear?
  • are the tasks assigned?
  • are there due dates? or good operational metrics?
  • is the solution or approach straightforward?
  • is there follow up to ensure that the important work takes priority and the work is done?

And then, most important, are you recognizing and rewarding those who get things done with quality? There are many other factors that you may need to address to enable the team to achieve results, from providing specific direction, to coaching, to adding resources or removing poor performers. But frequently, well-resourced teams can spin their wheels working on the wrong things, delivering with poor quality, or just not focusing on getting results. This is where clarity, accountability, and simplicity make the difference and enable your team to get the right things done.

Most importantly, getting the right things done as an individual is a critical skill that enables outperformance. Look to hone your abilities with some of the following suggestions:

  • recognize we tend to do what is urgent rather than what is important. Shed the unimportant but urgent tasks and spend more time on important tasks. In particular, use the time to be prepared, improve your skills, or do the planning work that is often neglected.
  • hold yourself accountable, make your commitments. As a leader you must demonstrate holding yourself to the same (or higher) standards as those for your team.
  • Make clear, fact-based decisions and don’t over-analyze. But seek inputs where possible from your team and experts. And leverage a low PDI style so you can avoid major mistakes.
  • and finally, a positive approach can make a world of difference. Do your job with high purpose and in high spirit. Your team will see it and it will lighten their step as well.

So, those are the practices from my experience that have been enablers to getting things done. What would you add? or change? Do let me know.

Best, Jim Ditmore



Key Steps to Building a High Performance Team: Prune and Improve

Today I revisit a core topic of Recipes for IT: High Performance IT Teams. Before I provide background on this series of posts, I thought it was about time for a quick blog update. Recipes for IT continues to attract new readers and has a substantial ongoing readership. It is quite heartening to see the level of interest, and I really appreciate your visits and comments. I will strive to regularly add thoughtful and relevant material for IT leaders and hope that you continue to find the site useful. I do recommend that new readers check out the introduction page and the various topic areas, as you should find useful material with strong depth and actionability that can help you be more successful. This site also continues to do well in Google page rankings on a number of topic areas, particularly service desk queries and IT metrics and reporting. If there are topics you would like me to tackle, please do not hesitate to send me a comment.

Now back to some background on Building High Performance Teams. This post is now the fifth on this topic, and there will be one further post to complete the steps of building a high performance team. I hope you find the material to be both enlightening and actionable. One key point for IT leaders is to consider the tasks required to build a high performance team as some of your most important activities. At nearly every poor performing organization that I have been responsible for turning around, I have found that the primary reason for inadequate talent and poor performing teams is inadequate manager attention to and focus on these activities. So, work hard to make the time, even though you would much rather be doing other activities. And now for the post. Best, Jim

Building High Performance Teams: As I have mentioned previously, I have a positive outlook on the competence of today’s managers and leaders. I see more material and approaches available for managers than ever before, and more effort and study applied by the managers as well. Much of the material, though, covers either a very narrow area or a single technique and does not address the full spectrum of practices and knowledge that must be brought to bear to build and sustain a high performance IT team. So, in this series of posts I have assembled a set of practices that I have leveraged, or have seen peers and other senior IT leaders use, to build high performance IT teams, so that managers have a broad source of practices at their disposal.

Senior IT leaders, working with their senior management teams, can use these practices to build a high performing team through a series of steps.

Today’s post covers how to prune and improve as required. The previous steps are covered in prior posts, and I have also constructed reference pages (linked above) for the first four steps. Subsequent posts will cover the remaining steps as well as a summary.

I think the aspiration of building a high performing team is a lofty, worthwhile, and achievable vision. If you have ever participated in a high performance team at the top of their game, in other words: a championship team, then you know the level of professional reward and sense of accomplishment that accompanies such membership. And for most companies that rely significantly on IT, if their IT team is a high performing team, it can make a very large difference in their products, their customer experience, and their bottom line. Building such a championship team is not only about attracting or retaining top talent, it is also necessarily about identifying those team members who do not have the capabilities, behaviors, or performance to remain part of the team and addressing their future role constructively but firmly.

Let’s first revisit some key truths that underlie how to build a high performance team:

– top performing engineers, typically paid similarly to their mediocre peers, are not 10% better but 2x to 10x better

– having primarily senior engineers, rather than a good mix of interns, graduates, and junior, mid, and senior level engineers, will result in stagnation and overpaid senior engineers doing low level work

– having a dozen small sites with little interaction is far less synergistic and productive than having a few strategic sites with critical mass

– relying on contractors to do most of the critical or transformational work is a huge penalty to retaining or growing top engineers

– line and mid-level managers must be very good people managers, not necessarily great engineers; otherwise you are likely to have difficulty retaining good talent and you will not develop your talent

– engineers do not want to work in an expensive in-city location like the financial district of London (that is for investment bankers)

– enabling an environment where mistakes can be made, lessons learned, and quality and innovation and initiative are prized means you will get a staff that behaves and performs like that.

With these truths in mind (and these are the same ones you used to set about building the team), and having executed the first four steps, you should have adequate capacity to begin thoughtful pruning and improvement of your organization. While there are circumstances when a poor performing manager or senior engineer causes so many issues that it is a benefit to remove them immediately, in many cases you must have adequate resource capacity to meet demands so that, once you begin pruning, your team is not overtaxed and penalized as a result.

Pruning should begin at the top and work down from there. Start with your directs and the next level below. Consider the span of control of your organization and the number of levels. High performing organizations are generally flatter with greater spans of control. In considering your team, I recommend leveraging a talent calibration approach, either the typical 9-box or a top-grading variant. The key to calibration is to essentially formulate three sets of results: the top performers on your staff whom you will need to further develop and challenge; the ‘well-placed experts’ and solid performers who will need support and attention but will execute reliably; and those whose performance and potential are lacking and who must step up to continue in their role. With these three groupings of your management team identified, ensure you lay out crisp plans for all three groups and execute against them. (Remember, it will be very difficult for you to subsequently demand that your line managers address their staff issues if you have not shown a capability to execute such accountability with your own team.)

One area to particularly focus on is time-boxing the development plans for poor performers. As these are senior managers, the time to address performance issues should be shorter, not longer. I recommend you start the development plan with a succinct, clear conversation on the high expectations and the shortcomings of their performance, with examples where possible. You should provide a writeup covering this discussion at its conclusion. Jointly lay out key deliverables, milestones, expected behavior changes, and results with the affected leader. Be open to the possibility that the employee may know they are in over their head and may be looking for an alternative. While I am not advocating moving problem performers around, there may be a role within the company or elsewhere outside the company that is a much better fit. Look to assist with such a transition if it is beneficial for the company and the employee. If the employee insists this is the role they want and they are willing to step up and adjust, then you should provide support under a tight timeline for them to achieve it. Monitor the plan regularly with HR. If you follow up diligently, it will become evident quite quickly whether the employee can rise to the new level or not. Generally, in my experience, a surprisingly large percentage of poor performing employees will drop out of their own accord once you have provided clear expectations and no escape routes other than the hard work to get there – assuming, of course, that there is a modest but respectable exit plan for them. It is also key to treat the employee with respect and fairness throughout the process and focus on the results and outcomes.

Equally though, I have seen more than a handful of senior leaders and managers express surprise when confronted with poor performance because no one had previously communicated their performance issues clearly and firmly. Once they understood the higher goals and expectations, many of these individuals (and others as well) definitively stepped up and improved significantly. Thus, until you communicate the higher goals and expectations clearly AND communicate where they must improve (constructively, with specifics), the likelihood of improvement is minimal. So, allocate the time to hold the tough but fair conversations and provide this information. Once the conversations are held, over the next 2 to 3 months you should take action based on the results. Either poor performing managers will be exited (or moved to a much better fitting role) or poor performers will become good performers. One of the interesting results from such actions is that the remaining team, upon seeing poor performers exited, will view the results positively. In fact, I have experienced some very strong reactions from other team members who felt a dead weight was off their shoulders as they no longer had to make up for the defects and negative performance of the just-exited team member. Further, I have received multiple (back-handed) compliments along the lines of ‘Wow, we are glad management finally figured out what to do and took action!’. So do not be persuaded that the team will view performance actions solely in a negative light.

Once you have initiated the performance management process and are well into pruning your team, you can work with your managers and HR department to address areas lower in the organization. Remember it is key to first set expectations and goals that cascade from and match your overall goals. Then ensure you hold managers and senior engineers to a higher bar than the mid-level and junior staff. For senior staff, you are not looking just for technical competence; they must also meet the standard for such behaviors as problem solving/solution orientation, teamwork, initiative and drive, and quality and focus on doing things right. And they should exhibit the right leadership and communication skills.

Driving such pruning and development work through your organization is important but also a delicate task. Generally, with little exception, management in an IT organization can improve how it handles performance management. Because most of the managers are engineers, their ability to interact firmly with another person in a highly constructive manner is typically under-developed. Thus, some managers may not be up to this pruning task, or their calibration of talent could be well off the mark. So, leverage your HR resources to guide management and personally check in to ensure proper calibration of talent by your lower level managers. Provide classes and interactive sessions on how to coach and provide feedback to employees. Even better, insist that performance reviews and development plans be read and signed off by the manager’s manager before being given, to improve their quality. This is a key element to focus on because a poorly executed resource improvement plan could backfire. Remember that the line manager’s interaction with an employee is the largest factor in undesired attrition and employee engagement. Of course, this is all the more reason to replace poor performing managers with good leaders, but do so effectively and firmly. Use the workforce plans that you developed in the Build step to ensure your pruning and development also help you move toward your strategic site goals, contractor/staff mix targets, and junior/mid/senior profiles.

Pruning and improvement is the tough but necessary step in building a high performance team. If done well, it will provide additional substantial lift to the team and, more importantly, enable ongoing sustainment. It requires discipline and focus to execute the steps we would all prefer to avoid, but they are necessary for reaching the final high performance stages.

What has been your experience, either as a leader or a participant, in such efforts? What have you seen go very well, or terribly wrong? I look forward to your perspective.

Best, Jim Ditmore

Using Performance Metric Trajectories to Achieve 1st Quartile Performance

I hope you enjoyed the Easter weekend. I have teamed up today with Chris Collins, a senior IT Finance manager and former colleague. Our final post on metrics is on unit costing, where Chris's expertise has been invaluable. For those just joining our discussion on IT metrics, we have had 6 previous posts on various aspects of metrics. I recommend reading the Metrics Roundup and A Scientific Approach to Metrics to catch up on the discussion.

As I outlined previously, unit costing is one of the critical performance metrics (as opposed to operational or verification metrics) that a mature IT shop should leverage particularly for its utility functions like infrastructure (please see the Hybrid model for more information on IT utilities). With proper leverage, you can use unit cost and the other performance metrics to map a trajectory that will enable your teams to drive to world-class performance as well as provide greater transparency to your users.

For those just starting the metrics journey, realize that in order to develop reliable, sustainable unit cost metrics, significant foundational work must be done first, including:

  • IT service definition should be completed and in place for those areas to be unit costed
  • an accurate and ongoing asset inventory must be in place
  • a clean and understandable set of financials must be available, organized by account, so that the business service cost can be easily derived

If you have these foundation elements in place, then you can quickly derive the unit costing for your function. I recommend partnering with your Finance team to accomplish unit costing, and this should be an effort that you and your infrastructure function leaders champion. You should look to apply a unit cost approach to the 20 to 30 functions within the utility space (from storage to mainframes to security to middleware, etc.). It usually works best to start with one or two of the most mature component functions and develop the practices and templates. The IT finance team should progress the effort as follows:

  • Ensure costs can be easily segregated based on the service listing for that function
  • Refine and segregate costs further if needed (e.g., are there tiers of services that should be created because of substantial cost differences?)
  • Identify a volume driver to use as the basis of the unit cost (for example, for storage it could be terabytes of allocated storage)
  • Parallel to the service identification/cost segregation work, begin development of a unit cost database that allows you to easily manipulate and report on unit cost (a minimal sketch of the core calculation follows this list). Specifically, the database should provide:
    • Ability to accept RC and account level assignments
    • Ability to capture expense/plan from the general ledger
    • Ability to capture monthly volume feeds from source systems, including detailed volume data (like the user name for an email account or the application name tied to a server)
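To make these capabilities concrete, here is a minimal sketch in Python of the core calculation such a database supports: expense by service and month, divided by the monthly volume feed. It is illustrative only; the class names, fields, and figures are assumptions, and a real implementation would sit on a proper database fed by the general ledger and the volume source systems.

```python
# Minimal, hypothetical sketch of the unit cost calculation.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ExpenseEntry:
    service: str     # e.g., "Tier 1 Storage"
    rc_account: str  # RC / account-level assignment (per the capability list above)
    month: str       # e.g., "2013-04"
    amount: float    # monthly expense in dollars

@dataclass
class VolumeEntry:
    service: str
    month: str
    units: float     # e.g., terabytes allocated
    detail: str      # e.g., application name tied to the storage

def unit_costs(expenses, volumes):
    """Return {(service, month): unit cost} as total expense / total volume."""
    cost, vol = defaultdict(float), defaultdict(float)
    for e in expenses:
        cost[(e.service, e.month)] += e.amount
    for v in volumes:
        vol[(v.service, v.month)] += v.units
    return {k: cost[k] / vol[k] for k in cost if vol.get(k)}

# Example: $2.05M of April expense over 500 TB works out to $4,100/TB
expenses = [ExpenseEntry("Tier 1 Storage", "RC-4410", "2013-04", 2_050_000.0)]
volumes = [VolumeEntry("Tier 1 Storage", "2013-04", 500.0, "core-banking")]
print(unit_costs(expenses, volumes))  # {('Tier 1 Storage', '2013-04'): 4100.0}
```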

For the function team, they should support the IT Finance team in ensuring the costs are properly segregated into the services they have defined. Reasonable precision of the cost segregation is required since later analysis will be for naught if the segregations are inaccurate. Once the initial unit costs are reported, the function's technology team can begin its analysis and work. First and foremost should be an industry benchmark exercise. This will enable you to understand quickly how your performance ranks against competitors and similar firms. Please reference the Leveraging Benchmarks page for best practices in this step. In addition, you should further leverage performance metrics like unit cost to develop a projected trajectory for your function's performance. For example, if your unit cost for storage is currently $4,100/TB for tier 1 storage, then the storage team should map out what their unit cost will be 12, 24, and even 36 months out given their current plans, initiatives, and storage demand. And if your target is for them to achieve top quartile cost, or the cost median, then they can now understand whether their actions and efforts will enable them to deliver to that future target. And if they will not achieve it, they can add measures to address their gaps.
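As a simple illustration of this trajectory exercise, the sketch below projects the $4,100/TB example forward 12, 24, and 36 months under an assumed 12% annual improvement and checks it against a hypothetical top-quartile figure. The improvement rate and the $2,800/TB target are placeholders, not benchmark data.

```python
# Illustrative only: project a unit cost trajectory from planned improvements
# and compare it to a (hypothetical) top-quartile benchmark target.
def project_unit_cost(current, annual_improvement, years):
    """Unit cost after `years`, assuming a constant annual percentage improvement."""
    return current * (1 - annual_improvement) ** years

current_cost = 4100.0        # $/TB today (the example above)
planned_improvement = 0.12   # assumed 12% per year from current plans and initiatives
target = 2800.0              # hypothetical top-quartile figure from a benchmark

for months in (12, 24, 36):
    projected = project_unit_cost(current_cost, planned_improvement, months / 12)
    status = "meets" if projected <= target else "misses"
    print(f"{months} months: ${projected:,.0f}/TB ({status} the ${target:,.0f}/TB target)")
```

Run this way, the plan misses the target at 12 and 24 months but reaches it by month 36, which is exactly the kind of gap analysis the team can then act on.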

Further, you can now measure and hold them accountable on a regular basis to achieve the proper progress towards their projected target. This can be done not just for unit cost but for all of your critical performance measures (e.g., productivity, time to market, etc).  Setting goals and performance targets in this manner will achieve far better results because a clear mechanism for understanding cause and effect between their work and initiatives and the target metrics has been established.

A broader approach to also consider is to establish a unit cost progress chart for all of your utility functions. On this chart, where the y axis is cost as a percentage of current cost and the x axis is future years, you should establish a minimum improvement line of 5% per year. The rationale is that improving hardware (e.g., servers, storage) and improving productivity yield an improving unit cost tide of at least 5% a year. Thus, to truly progress and improve, your utility functions should well exceed a 5% per year improvement if they are below 1st quartile. This approach also conveys the necessity and urgency of not resting on our laurels in the technology space. Often, with this set of performance metrics practices employed along with CPI and other best practices, you can achieve 1st quartile performance within 18 to 24 months for your utility function.
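A short sketch of how that chart's baseline can be computed follows; the planned trajectory values are invented for illustration.

```python
# Illustrative only: the 5% per year minimum improvement line (cost as a
# percentage of current cost) versus one function's planned trajectory.
MIN_IMPROVEMENT = 0.05  # the baseline "tide" from hardware and productivity gains

def baseline_pct(year):
    """Cost as a percent of today's cost if a function only rides the 5% tide."""
    return (1 - MIN_IMPROVEMENT) ** year * 100

planned_pct = {1: 90.0, 2: 82.0, 3: 76.0}  # hypothetical plan for one utility function

for year in (1, 2, 3):
    line = baseline_pct(year)
    verdict = "ahead of" if planned_pct[year] < line else "behind"
    print(f"Year {year}: minimum line {line:.1f}%, plan {planned_pct[year]:.1f}% ({verdict} the line)")
```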

What has been your experience with unit cost or other performance measures? Were you able to achieve sustained advantage with these metrics?

Best,

Jim Ditmore and Chris Collins

 

Tying Consumption to Cost: Allocation Best Practices

In 1968, Garrett Hardin wrote about the over-exploitation of common resources in an essay titled “The Tragedy of the Commons”. While Hardin wrote about common pastureland that individual herders overused and diminished, there can be a very similar effect with IT resources within a large corporation. If there is no cost associated with the usage of IT resources by the different business units, then each unit will utilize the IT resources to maximize its own potential benefit, to the detriment of the corporation as a whole. Thus, to ensure effective use of IT resources, there must be some association of cost, or allocation, between the internal demand and the consumption by each business unit. A best practice allocation approach enables business transparency of IT costs and the business drivers of IT usage so that thoughtful decisions for the company as a whole can be made with a minimum of allocation overhead and effort.

A well-designed allocations framework will ensure this effective association as well as:

  • provide transparency to IT costs and to each business unit's costs and profitability
  • avoid wasteful demand and alter overconsumption behaviors
  • minimize pet projects and technology ‘hobbies’

To implement an effective allocations framework there are several foundation steps. First, you must ensure you have the corporate and business unit CFOs' support and the finance team resources to implement and run the allocations process. Generally, CFOs look for greater clarity on what drives costs within the corporation. Allocations provide significant clarity on IT costs, which are usually a good-sized chunk of the corporation's costs, so CFOs are usually highly supportive of a well-thought-out allocations approach. So, first garner CFO support along with adequate finance resources.

Second, you must have a reasonably well-defined set of services and an adequately accurate IT asset inventory. If these are not in place, you must first set about defining your services (e.g., an end-user laptop service that includes the laptop, OS, productivity software, and remote access, or a storage service of high-performance Tier 1 storage by terabyte) and ensuring your inventory of IT assets is at least minimally accurate (70 to 80%). If there are some gaps, they can be addressed by leveraging a trial allocation period where numbers and assets are published and no monies are actually charged, but every business unit reviews its allocated assets with IT and ensures they are correctly aligned. Once you have the services defined and the assets inventoried, your finance team must then set about identifying which costs are associated with which services. They should work closely with your management team to identify a 'cost pool' for each service or asset component. Again, these cost pools should be at least reasonably accurate, but they do not need to be perfect to begin a successful allocation process.

The IT services defined should be as readily understandable as possible. The descriptions and missions should not be esoteric except where absolutely necessary. They should be easily associated with business drivers and volumes (such as number of employees or branches) wherever possible. In essence, all major categories of IT expenditure should have an associated service or set of services, and the services should be granular enough that each service or component can be easily understood and each one's drivers easily distinguished and identified. The target should be somewhere between 50 and 150 services for the typical large corporation. More than 150 services will likely lead to effort being spent on very small services and result in too much overhead. Significantly fewer than 50 services could result in clumping of services that are hard to distinguish or control. Remember, the goal is to provide adequate allocations data with the minimum effort required for effectiveness.

The allocations framework must have an overall IT owner and a senior Finance sponsor (preferably the CFO). CFOs want to implement systems that encourage effective corporate use of resources, so they are a natural advocate for a sensible allocation framework. There should also be a council to oversee the allocation effort and provide feedback and direction, where the major users and the CFO or a designate sit on the council. This will ensure both adequate feedback and the buy-in and support needed for successful implementation and appropriate methodology revisions as the program grows. As the allocations process and systems mature, ensure that any significant methodology changes are reviewed and approved by the allocation council, with sufficient advance notice to the business unit CFOs. My experience has been that everyone agrees to a methodology change if it is in their favor and reduces their bill, but everyone is resistant if it impacts their business unit's finances, regardless of how logical the change may be. Further, the allocation process will bring out tensions between business units, especially between those whose allocations increase and those whose decrease, if the process is not handled with plenty of communication and clear rationale.

Once you start the allocations, even if during a pilot or trial period, make sure you are doing transparent reporting. You or your leads should have a monthly meeting with each business area with good, clear reports. Include your finance lead and the business unit finance lead in the meeting to ensure everyone is on the same financial page. Remember, a key outcome is to enable your users to understand their overall costs, what the cost is for each service, and which business drivers impact which services and thus what costs they will bear. By establishing this linkage clearly, the business users will then look to modify business demand so as to optimize their costs. Further, most business leaders will also use this allocations data and new-found linkage to correct poor over-consumption behavior (such as users with two or three PCs or phones) within their organizations. But for them to do this, you must provide usable reporting with accurate inventories. The best option is to enable managers to peruse their costs through an intranet interface for end-user services such as mobile phones and PCs. There should be readily accessible usage and cost reports to enable them to understand their team's demand and how much each unit costs. They should have the option, right on the same screens, to discontinue, update, or start services. In my experience, it is always amazing that once leaders understand their costs, they will want to manage them down, and if they have the right tools and reports, managing down poor consumption happens faster than a snowman melting in July, which is exactly the effect you were seeking.
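To show the consumption-to-cost linkage such reporting should provide, here is a minimal sketch with invented services, rates, and volumes; a real implementation would pull volumes from the asset inventory and rates from the finance team's cost pools.

```python
# Illustrative only: a monthly allocation report tying each business unit's
# consumption (volumes by service) to its IT charge (volume x unit rate).
unit_rates = {                       # $ per unit of each defined service, per month
    "End-user laptop bundle": 95.0,  # per device
    "Tier 1 storage (TB)": 342.0,    # per allocated terabyte
    "Mobile phone": 60.0,            # per active device
}

consumption = {                      # volumes fed from inventory and source systems
    "Retail Banking": {"End-user laptop bundle": 5200, "Tier 1 storage (TB)": 120, "Mobile phone": 800},
    "Cards":          {"End-user laptop bundle": 1900, "Tier 1 storage (TB)": 260, "Mobile phone": 350},
}

for business_unit, usage in consumption.items():
    charges = {svc: qty * unit_rates[svc] for svc, qty in usage.items()}
    print(f"{business_unit}: total ${sum(charges.values()):,.0f}/month")
    for svc, charge in charges.items():
        print(f"  {svc}: {usage[svc]:,} x ${unit_rates[svc]:,.2f} = ${charge:,.0f}")
```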

There are a few additional caveats and guides to keep in mind:

  • In your reporting, don't just show this month's costs; show the cost trend over time and provide a projection of future unit costs and business demand
  • Ensure you include budget overheads in the cost allocation; otherwise you will have a budget shortfall and neglect the key investment in the infrastructure needed to maintain it
  • Similarly, make sure you account for the full lifecycle costs of a service in the allocation, and be conservative in your initial allocation pricing; upward revisions later due to missed costs will be painful
  • For 'build' or 'project' costs, do not use exact resource pricing. Instead, use an average price to avoid the situation where every business unit demands only the lowest-cost IT resources for its projects, resulting in a race to the bottom and no ability to expand capacity to meet demand, since the added capacity would be high-cost resources on the margin (a simple rate-setting sketch follows this list)
  • Use allocations also to avoid first-in issues for new technologies (set the rate at the projected volume rate, not the initial low-volume rate) and to encourage transition off of expensive legacy technologies (last-out increases)
  • And lastly, ensure your team knows and understands their services and allocations and can articulate why things cost what they cost
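The sketch below illustrates two of the caveats above, the blended average project rate and the first-in rate set at projected rather than initial volume, using invented figures.

```python
# Illustrative only: two rate-setting caveats from the list above.

# 1) Blended average rate for project resources, so no business unit can
#    cherry-pick only the lowest-cost resources.
hourly_cost = {"offshore developer": 45.0, "onshore developer": 95.0, "senior architect": 140.0}
headcount = {"offshore developer": 60, "onshore developer": 30, "senior architect": 10}
blended_rate = sum(hourly_cost[r] * n for r, n in headcount.items()) / sum(headcount.values())
print(f"Blended project rate charged to all projects: ${blended_rate:.2f}/hour")

# 2) First-in pricing: set the rate for a new technology at projected volume,
#    not the initial low volume, so early adopters are not penalized.
platform_annual_cost = 600_000.0
initial_volume, projected_volume = 40, 400   # e.g., workloads expected on the platform
print(f"Rate at initial volume:   ${platform_annual_cost / initial_volume:,.0f} per workload per year")
print(f"Rate at projected volume: ${platform_annual_cost / projected_volume:,.0f} per workload per year")
```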

With this framework and approach, you should be able to build and deliver an effective allocation mechanism that enables the corporation to avoid the overconsumption of free, common resources and properly direct the IT resources to where the best return for the corporation will be. Remember, though, that in the end this is an internal finance mechanism, so the CFO should dictate the depth, level, and allocation approach, and you should ensure that the allocations mechanism does not become burdensome beyond its value.

What have been your experiences with allocations frameworks? What changes or additions to these best practices would you add?

Best, Jim Ditmore

 

Why you want an Australian Pilot: Lessons for Outstanding IT Leadership

Perhaps you are wondering what nationality or culture has to do with piloting an airplane? And how could piloting an airplane be similar to making decisions in an IT organization?

For those of you who have read Outliers, which I heartily recommend, you will be familiar with the well-supported conclusions that Malcolm Gladwell makes:

  • that incredible success often has strong parallels and patterns among those high achievers, often factors you would not expect or easily discern
  • and no one ever makes it alone

A very interesting chapter in Outliers is based on the NTSB analysis of what occurred in the cockpit during several crashes, as well as the research work done by Dutch psychologist Geert Hofstede. What Hofstede found in his studies for IBM's HR department in the 70s and 80s is that people from different countries or cultures behave differently in their work relationships. Not surprising, of course, and Hofstede did not rank countries as right or wrong but used the data as a way to measure differences in cultures. A very interesting measure of culture is the Power Distance Index (PDI). Countries with a high PDI have cultures where those in authority are treated with great respect and deference. In countries with a low PDI, those in authority go to great lengths to downplay their stature, and members feel comfortable challenging authority.

Now back to having an Australian pilot your plane: commercial aircraft, while highly automated and extremely reliable, are complex machines that, in difficult circumstances, require all of the crew to do their jobs well and in concert. But for multiple crashes in the 1990s and early 2000s, the NTSB found that crew communication and coordination were significant factors. And those airlines with crews from countries with high PDI scores had the worst records. Why? As Malcolm Gladwell lays out so well, it is because of the repeated deference of lower-status crew members to a captain who is piloting the plane. When the captain makes repeated mistakes, these crew members defer and do not vigorously call out the issues when it is their responsibility to do so, even to fatal effect. So, if you were flying a plane in the 1990s, you would want your pilot to be from Australia, New Zealand, Ireland, South Africa, or the US, as these have the lowest PDI cultural scores. It is worth noting that, since that time, most airlines with high PDI ratings have incorporated crew responsibility training to overcome these effects, and all airlines have added further crew training on communications and interaction, resulting in the continued improvement in safety we have witnessed this past decade.

But this experience yields insight into how teams operate effectively in complex environments. Simply put, the highest performance teams are those with a low PDI that enables team members to provide appropriate input into a decision. Further, once the leader decides, with this input, the team rotates quickly and with confidence to take on the new tack. Elite teams in our armed forces operate on very similar principles.

I would suggest that high performance IT teams operate in a low PDI manner as well. Delivering an IT system in today's large corporations requires integrating a dozen or more technologies to deliver features that require multiple experts to fully comprehend. In contrast, if you have the project or organization driven by a leader whose authoritative style imposes high deference on all team members and prevents alternative views from being expressed, then it is simply a matter of time before poor performance sets in. Strong team members and experts will look elsewhere for employment as their voices are not heard, and since one person cannot be an expert in everything required to succeed, delivery failure will occur at some point. High PDI leaders will not produce sustainable high performance teams.

Now a low PDI culture does not mean there is no structure or authority. Nor is the team a democracy. Instead, each team member knows their area of responsibility and understands that in difficult and complex situations, all must work together with flexibility to come up with ideas and options for the group to consider for the solution. Each member views their area as a critical responsibility and strives to be the best at their competency in a disciplined approach. Leaders solicit data, recommended courses, and ideas from team members, and consider them fully. Discussion and constructive debate, where possible given the time and the urgency, are encouraged. Leaders then make clear decisions, and once decided, everyone falls in line and provides full support and commitment.

In many respects, this is a similar profile to the Level 5 leader that Jim Collins wrote about, one that mixes 'a paradoxical blend of personal humility and professional will.' These leaders feature a lack of pretense (low PDI) but fierce resolve to get things done for the benefit of their company. Their modesty allows them to be approachable and ensures that the facts and expert opinions are heard. Their focus and resolve enable them to make clear decisions. And their dedication to the company and the organization ensures the company's goals are foremost. (Of course, they also have all the other personal and management strengths and qualities: intelligence, integrity, work ethic, and so on.)

Low PDI or Level 5 leaders set in place three key approaches for their organizations:

  • they set in place a clear vision and build momentum with sustained focus and energy, motivating and leveraging the entire team
  • they do not lurch from initiative to initiative or jump on the latest technology bandwagon; instead, they judiciously invest in key technologies and capabilities that are core to their company's competence and value and provide sustainable advantage
  • because they drive a fact-based, disciplined approach to decision-making as leaders, excessive hierarchy and bureaucracy are not required. Further, quality and forethought are built into processes, freeing the organization from excessive controls and verification.

To achieve a high performance IT organization, these are the same leadership qualities required. Someone who storms around and makes all the key decisions without input from the team will not achieve a high performance organization, nor will someone who focuses only on technology baubles and not on the underlying capabilities and disciplines. And someone who shrinks from key decisions and turf battles and does not sponsor his or her team will fail as well. We have all worked for bosses who reflected these qualities, so we understand what happens and why there is a lack of enthusiasm in those organizations.

So, instead, resolve to be a Level 5 leader, and look to lower your PDI. Set a compelling vision, and every day seek out the facts, press your team to know their areas of expertise as the best in the industry, sponsor the dialogue that enables the best decisions, and then make them.

Best, Jim

Your Start of the Year Leadership Checklist

Just as I published a quick checklist for you to use as the year was closing, here is a checklist for your first week back to help you get off to a great start in the new year. Plus, this is a lot more fun than taking down those outdoor Christmas decorations or returning the unwanted gifts. So before the office gets busy, use your first few weeks to get a jump on outstanding results in 2012 with this list:

1. Remember to get done the things we planned in December. You have booked time in January with your team to do the detailed planning to ensure you have the IT goals for 2012 clearly defined, with the key steps to get there. Knock it out with your team.

2. Set your 1st and 2nd quarter virtualization goals for your server and storage teams, then sit down with them and ensure they are mapping out how to get it done. Get them off to a quick start.

3. Pick one or two major contracts to renegotiate in your favor this quarter. A quick hint: Oracle missed expectations last quarter, so you may have an opportunity. Remember to hold tight, put something new on the table to get the most out of a deal, and insist on your terms and conditions (and if your company does not have an up-to-date contract template, put that on the plate with your Chief Procurement Officer to get it done).

4. Take those new insights that you gained from your holiday vacation (remember, you were going to spend part of your vacation time reading a good management or IT book) and ensure you bring that view to your planning meeting.

5. Review your January schedule and ensure you have time with your customers fully scheduled. Invite one of them to kick off your planning meeting.

6. Also review the planning meeting agenda with your boss and ensure you capture any ‘messages’ he/she wants to make sure come across.

7. Sit down with the intranet team and ensure they are adding 1 or 2 helpful 'widgets' a quarter to your intranet site. Start with a 'How do I …' button, an improved search tool, or a wiki for corporate terms and abbreviations. The little helpful things mean a lot to the productivity of your company's employees.

8. If you don’t have BYOD yet, sit down with your client device team and review the plans to pilot and then implement it this year.

9. Schedule a visit for you and several of your team to review either a customer facing site (call center or retail store) or a key operations facility for your company. Ask questions and see how IT is working where the rubber meets the road in your firm. I am certain you will learn plenty.

10. Review and report on your performance for the past year – do it with thought and be provocative. Challenge yourself and your team where you have not delivered well. Then follow up with a high level and positive note to your entire team talking in broad strokes about the goals for the year. Strong communication at the start of the year will help ensure you and your team are lined up for success throughout the year.

Many of these items are reconnecting activities: with your business, with your customers, with your boss, and with your team. Before you start off on any major endeavor, it is critical to recheck the plan and the communication lines — that is in essence what we are doing. And with it you will be much more likely to have a successful and rewarding 2012.

All the best, and roger on those plans! Jim