Turning the Corner on Data Centers

Recently I covered the ‘green shift’ of servers, where each new server generation not only drives major improvements in compute power but also requires about the same or even less in environmentals (power, cooling, space) than the previous generation. Thus compute efficiency, or compute performance per watt, is improving exponentially. And this trend in servers, which started around 2005, is now being repeated in storage. We have seen a similar improvement in power per terabyte over the past three generations (since 2007), and the current storage product pipeline suggests this efficiency trend will continue for the next several years. Below is a chart showing representative improvements in storage efficiency (power per terabyte) across storage product generations from a leading vendor.

Power (VA) per Terabyte

With current technology advances, a terabyte of storage on today’s devices requires approximately one fifth of the power of a device from five years ago. And these power requirements could drop even more precipitously with the advent of flash technology. By some estimates, the switch to flash products cuts power and space requirements by 70% or more. In addition to being far more power efficient, flash will offer huge performance advantages for applications, with corresponding reductions in the time to complete workloads. So expect flash storage to quickly capture the market once mainstream product introductions occur. IBM sees this as just around the corner, while other vendors see the flash conversion as three or more years out. In either scenario, major improvements in storage efficiency remain in the pipeline, delivering far lower power demands even as storage requirements increase.
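As a rough back-of-envelope sketch of the compounding described above (a ~1/5 reduction over five years, plus a further ~70% drop with flash), the arithmetic works out as follows. All figures here are illustrative assumptions derived from the estimates cited, not vendor-measured data:

```python
# Illustrative sketch of the power-per-terabyte trend described above.
# The baseline value and rates are assumptions, not vendor data.

def power_per_tb(base_va_per_tb: float, years: int, annual_improvement: float) -> float:
    """Power (VA) per TB after `years` of compounding efficiency gains."""
    return base_va_per_tb * (1 - annual_improvement) ** years

# A ~1/5 reduction over 5 years implies roughly 27.5% improvement per year,
# since (1 - 0.275)^5 is approximately 0.2.
baseline = 100.0  # hypothetical VA per TB five years ago
today = power_per_tb(baseline, 5, 0.275)
print(f"Today: ~{today:.0f} VA/TB (vs {baseline:.0f} five years ago)")

# Flash could cut power a further ~70%, per the estimates above:
with_flash = today * 0.30
print(f"With flash: ~{with_flash:.0f} VA/TB")
```

The point of the sketch is simply that two compounding reductions multiply: a one-fifth baseline combined with a 70% flash cut leaves only about 6% of the original power per terabyte.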

Ultimately, with the combined efficiency improvements of both storage and server environments over the next three to five years, most firms will see a net reduction in data center requirements. Typical corporate data center power requirements are approximately one half servers, one third storage, and the remainder network and other devices. With the two biggest components experiencing dramatic, ongoing power efficiency gains, net power and space demand should decline in the coming years for all but the fastest-growing firms. Add in the effects of virtualization, engineered stacks, and SaaS, and the data centers in place today should suffice for most firms, provided they maintain a healthy replacement pace for older technology and embrace virtualization.
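A simple model can illustrate this thesis. Using the rough power mix just described, and hypothetical growth and efficiency rates (the figures below are assumptions for illustration only, not measurements), net demand can decline even while capacity grows:

```python
# Back-of-envelope model of net data center power demand, using the rough
# mix cited above (half servers, a third storage, the rest network/other).
# Growth and efficiency rates are illustrative assumptions.

def net_power(years: int,
              capacity_growth: float = 0.20,    # assumed annual capacity growth
              server_eff_gain: float = 0.25,    # assumed annual perf/watt gain
              storage_eff_gain: float = 0.25) -> float:
    """Relative total power vs. today (1.0 = today's footprint)."""
    servers = 0.50 * ((1 + capacity_growth) * (1 - server_eff_gain)) ** years
    storage = 0.33 * ((1 + capacity_growth) * (1 - storage_eff_gain)) ** years
    # Network and other gear: assume slower growth, no major efficiency gains.
    other = 0.17 * (1 + capacity_growth * 0.5) ** years
    return servers + storage + other

print(f"Relative power in 5 years: {net_power(5):.2f}x today")
```

Under these assumed rates, efficiency gains outpace 20% annual capacity growth in the server and storage components, so total power demand falls below today's footprint, which is the crux of the argument above.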

Despite such improvements in efficiency, we could still see a major addition to total data center space, because cloud and consumer firms like Facebook are investing major sums in new data centers. This consumer data center boom also shows the effects of growing consumerization in the technology marketplace. Consumerization, which started with PCs and PC software and then moved to smartphones, has impacted the underlying technologies dramatically. The most advanced compute chips are now those developed for smartphones and video games. Storage technology demand and advances are driven heavily by smartphones and products like the MacBook Air, which already relies solely on flash storage.

The biggest and best data centers? No longer the domain of corporate demand; instead, consumer demand (e.g. Gmail, Facebook) drives the bigger and more advanced centers. The proportion of data center space dedicated to direct consumer compute needs versus enterprise compute needs (even for companies that provide consumer services directly) will shift markedly from enterprise to consumer over the next decade. This follows the shifts in chips and storage, which were once driven by the enterprise space (and before that, the government) and are now driven by the consumer segment. It is highly likely that there will be a surplus of enterprise-class data centers (50K–200K square feet of raised floor) in the next five years. These centers are too small and inefficient to serve as consumer data centers (500K–2M square feet or larger), and with declining demand and consolidation effects, plenty of enterprise data center space will be on the market.

As an IT leader, you should ensure your firm is riding the effects of the compute and storage efficiency trends. Further multiply these demand reduction effects by leveraging virtualization, engineered stacks and SaaS (where appropriate). If you have a healthy buffer of data center space now, you could avoid major investments and costs in data centers in the next 5 to 10 years by taking these measures. Those monies can instead be spent on functional investments that drive more direct business value or drop to the bottom line of your firm. If you have excess data centers, I recommend consolidating quickly and disposing of the space as soon as possible. These assets will be worth far less in the coming years with the likely oversupply. Perhaps you can partner with a cloud firm looking for data center space if your asset is strategic enough for them. Conversely, if you have minimal buffer and see continued higher business growth, it may be possible to acquire good data center assets for far less unit cost than in the past.

For 40 years, technology has ridden Moore’s Law to yield ever-more-powerful processors at lower cost. Its compounding effects have been astounding — and we are now seeing nearly 10 years of similar compounding on the power efficiency side of the equation (below is a chart for processor compute power advances and compute power efficiency advances).

Trend Change for Power Efficiency

The chart above shows how the compute efficiency (performance per watt — green line) has shifted dramatically from its historical trend (blue lines). And it’s improving about as fast as compute performance is improving (red lines), perhaps even faster.

These server and storage advances have resulted in fundamental changes in data centers and their demand trends for corporations. Top IT leaders will take advantage of these trends to direct more IT investment into business functionality and less into the supporting base utility costs of the data center, while still growing compute and storage capacities to meet business needs.

What trends are you seeing in your data center environment? Can you turn the corner on data center demand? Are you able to meet your current and future business needs and growth within your current data center footprint and avoid adding data center capacity?

Best, Jim Ditmore

Riding with the Technology Peloton

One of the most important decisions technology leaders make is when to strike out and leverage new and unique technologies for competitive advantage, and when to stay with the rest of the industry on a common technology platform. Nearly every project and component contains a micro-decision between the custom and the common path. And while it is often easy to have great confidence in our ability and capacity to build and integrate new technologies, striking out on new technologies ahead of the crowd is often much harder and pays back less than we realize. In fact, I would suggest the payback is similar to what occurs during cycling’s Tour de France: many, many riders strike out in small groups to beat the majority of cyclists (the peloton), only to be caught by the peloton and, with enormous energy expended, fall even further behind the pack.

In the peloton, everyone does some of the work. The leaders of the peloton take on the most wind resistance but rotate with others in the pack so that the work is balanced. In this way the peloton can move as quickly as any individual cyclist but with 20 to 30% less energy, due to much lower wind resistance. Thus, with energy conserved, the peloton can move much faster than individual cyclists later in the race. Similarly, in developing a new technology or advancing an existing one, with enough industry mass and customers (a peloton), the technology can be advanced as quickly as, or more quickly than, an individual firm or small group can manage, and at much lower individual cost. Striking out on your own to develop highly customized capabilities (or doing so in concert with a vendor) could leave you with a high-cost capability that provides a brief competitive lead, only to be quickly passed by the technology mainstream, or peloton.

If you have ever watched one of the stages of the Tour de France, what can be most thrilling is to see a small breakaway group of riders trying to build or preserve their lead over the peloton. As the race progresses closer to the finish, the peloton relentlessly (usually) reels in and then passes the early leaders because of its far greater efficiency. Of course, those riders who time it correctly and have the capacity and determination to maintain their lead can reap huge time gains to their advantage.

Similarly, I think, in technology and business, you need to choose your breakaways wisely. You must identify where you can reap gains commensurate with the potential costs. For example, breaking away on commodity infrastructure technology is typically not wise. Plowing ahead and being the first to incorporate the latest in infrastructure or cloud or data center technology where there is little competitive advantage is not where you should invest your energy (unless that is your business). Instead, your focus should be on those areas where an early lead can be driven to business advantage and then sustained. Getting closer to your customer, being able to better cross-sell to them, significantly improving cycle time or quality or usability or convenience, or being first to market with a new product — these are all things that will win in the marketplace and customers will value. That is where you should make your breakaway. And when you do look to customize or lead the pack, understand that it will require extra effort and investment and be prepared to make and sustain it.

And while I caution against choosing the breakaway course, particularly in this technology environment where industry change is already on an accelerated cycle, I also caution against riding at the back of the peloton. There, just as in the Tour de France, when you are lagging at the back it is too easy to be dropped by the group. And once you drop from the peloton, you must work even harder, on your own, just to get back in. Similarly, once an IT shop falls significantly behind the advance of technology and loses pace with its peers, further consequences accrue. It becomes harder to recruit and retain talent because the technology is dated and the reputation is stodgy. Extra engineering and repair work must be done to patch older systems that don’t work well with newer components. And extra investment must be justified with the business to ‘catch’ technology back up. So you must keep pace with the peloton, and better yet, be a leader among your peers in technology areas of potential competitive advantage. That way, when you do see a breakaway opportunity for competitive advantage, you are positioned to take it.

The number of breakaways you can attempt depends, of course, on the size of your shop and the intensity of IT investment in your industry. The larger you are, and the greater the investment, the more breakaways you can afford. But make sure they are truly competitive investments with strong potential to yield benefits. Otherwise you are far better off staying at the front of the peloton, leveraging best-in-class practices and common but leading technology approaches. Or as an outstanding CEO I worked for once said, ‘There should be no hobbies’. A cool lab environment without rigorous business purpose and ongoing returns (plenty of failures are fine as long as there are successful projects as well) is a breakaway with no purpose.

I am sure there are some experienced cyclists among our readers — how does this resonate? What ‘breakaways’ worked for you or your company? Which ones got reeled in by the industry peloton?

I look forward to hearing from you.

Best, Jim Ditmore


Outsourcing and Out-tasking Best Practices

I recently published this post first at InformationWeek, and it generated quite a few comments, both published and sent directly via e-mail. A strong theme is the frustration of talented staff dealing with senior leadership that does not understand how IT works well, or does not appear focused on the long-term interests of the company. It is a key responsibility of leadership to keep these interests at the core of their approach, especially when executing complex efforts like outsourcing or offshoring, so that they achieve benefits and do not harm their company. The national debate now occurring between Romney and Obama only serves to show how complex executing these efforts is. As part of a team, we were able to adjust and effectively resolve many different situations, and I have extracted much of that knowledge here. If you are looking to outsource, or are dealing with an inherited situation, this post should help you improve your approach and execution.

While the general trend of more IT outsourcing via smaller, more focused deals continues, it remains an area that is difficult for IT management to navigate successfully. In my experience, every large shop I have turned around had significant problems caused or made worse by its outsourcing arrangements, particularly large deals. While these shops performed poorly primarily for other reasons (leadership, process failures, talent issues), achieving better performance required substantially revamping or reversing the outsourcing arrangements. And various industries continue to be littered with examples of failed outsourcing, many involving leading outsourcing firms (IBM, Accenture, etc.) and reputable clients. While formal statistics are hard to come by (in part because companies are loath to report failure publicly), my estimate is that at least 25%, and possibly more than 50%, of deals fail or perform very poorly. Why do the failures occur? And what should you do when engaging in outsourcing to improve the probability of success?

Much of the success – or failure – depends on what you choose to outsource, followed by how effectively you manage the vendor and service. You should be highly selective about both the extent of outsourcing and the activities you choose for it. A frequent mistake is the assumption that any activity that is not ‘core’ to a company can and should be outsourced to enable focus on the ‘core’ competencies. I think this perspective originates from principles first proposed in The Discipline of Market Leaders by Michael Treacy and Fred Wiersema. In essence, Treacy and Wiersema state that companies that are market leaders do not try to be all things to all customers. Instead, market leaders recognize their competency in product and innovation leadership, customer service and intimacy, or operational excellence. Good corporate examples of each would be 3M for product, Nordstrom for service, and FedEx for operational excellence. Thus business strategy should not attempt to excel in all three areas, but instead leverage an area of strength and extend it further while maintaining acceptable performance elsewhere. And by focusing on corporate competency, the company can improve market position and success. But IT is generally absolutely critical to improving customer intimacy and thus customer service. Similarly, achieving outstanding operational competency requires highly reliable and effective IT systems backing your operational processes. And even in product innovation, IT plays a larger and larger role as products become more digital and smarter.

Because of this intrinsic linkage to company products and services, IT is not like a security guard force, nor like legal staff — two areas that are commonly fully or highly outsourced (and generally, quite successfully). By outsourcing intrinsic capabilities, companies put their core competency at risk. In a recent University of Utah business school article, the authors found significantly higher rates of failure among firms that had outsourced. They concluded that “companies need to retain adequate control over specialized components that differentiate their products or have unique interdependencies, or they are more likely to fail to survive.” My IT best practice rule is: you must control your critical IP (intellectual property). If you use an outsourcer to develop and deliver the key features or services that differentiate your products and define your company’s success, then you likely have someone doing the work with different goals and interests than yours, someone who can easily turn around and sell advances to your competitors. Why would you turn over your company’s fate to someone else? Be wary of approaches that recommend outsourcing because IT is not a ‘core’ competency when, with every year that passes, there is greater IT content in products in nearly every industry. Choose instead to outsource those activities where you do not have scale (or cost advantage), capacity, or competence, but ensure that you either retain or build the key design, integration, and management capabilities in-house.

Another frequent rationale for outsourcing is cost savings. While most small and mid-sized companies do not have the scale to achieve cost parity with a large outsourcer, nearly all large companies, and many mid-sized ones, do have the scale. Further, nearly every outsourcing deal that I have reversed in the past 20 years yielded savings of at least 30%, and often much more. For a large firm, an outsourcer can only deliver cost savings across a broad set of services if the current shop is mediocre. If you have a well-run shop, your all-in costs will be similar to the better outsourcing firms’ costs. If you are world-class, you can beat the outsourcer by 20–40%.

Even more, the outsourcer’s cost advantage typically degrades over time. Note that the goals of the outsourcer are to increase revenue and margin (in other words, to increase your costs while spending fewer resources on your work). Invariably, the outsourcer will find ways to charge you more, usually for changes to services, and to minimize the work being done. And where you previously used your ‘run’ resources to complete minor fixes and upgrades, you may find you are charged extra for those very same resources once outsourced. I have often seen ‘run’ functions hollowed out and minimized while the customer pays a premium for every change or increase in volume. And while the usual response to such a situation is that the customer can put terms in the contract to avoid this, I have yet to see terms that ensure the outsourcer works in your best interest to do the ‘right’ thing throughout the life of the contract. One interesting example that I reversed a few years back was an outsourced desktop provisioning and field support function for a major bank (a $55M/year contract). When an initial (surprise) review of the function was done, there were warehouses full of both obsolete equipment that should have been disposed of and new equipment that should have been deployed. Why? Because the outsourcer was paid to maintain all equipment, whether in use in the offices or in a warehouse, and it had full control of the logistics function (here, the critical IP). In effect, the outsourcer had ordered up its own revenue. Further, the service had degraded over the years as the initial workforce was hollowed out and replaced with less qualified individuals. The solution? We immediately in-sourced the logistics function to a rebuilt in-house team with cost and quality goals established. Then we split the field support geography and conducted a competitive auction to select two firms to handle the work. Every six months each firm’s performance was evaluated for quality, timeliness, and cost; the higher-performing firm gained further territory, while the lower-performing firm lost territory or risked replacement. And we maintained a small but important pool of field support experts to ensure training and capabilities were kept up to par, service routines were updated, and chronic issues were resolved. The end result was far better quality and service, and the cost of the services was slashed by over 40% (from $55M/year to less than $30M/year). These results — better quality at lower cost — from effective management of the functions and keeping key IP and staff in-house are typical of similar actions across a wide range of services, organizations, and locales.

When I was at BankOne, working under Jamie Dimon and his COO Austin Adams, they provided the support for us to tackle bringing back in-house what had been, at its consummation in 1998, the largest outsourcing deal ever. Three years after the outsourcing started, it had become a millstone around BankOne’s neck. Costs had gone up every year, and quality had eroded to the point where systems availability and customer complaints were the worst in the industry. In sum, it was a burning platform. In 2001 we cut the deal short (it was scheduled to run another four years). Over the next 18 months, after hiring 2,200 infrastructure staff (via best practice talent acquisition) and revamping the processes and infrastructure, we reduced defects (and downtime) to 1/20th of 2001 levels and reduced our ongoing expenses by over $200M per year. This significantly supported the bank’s turnaround and enabled the merger with JP Morgan a few years later. As for having in-house staff do critical work, Jamie Dimon said it best: ‘Who do you want doing your key work? Patriots or mercenaries?’

Delivering cost comparable to an outsourcer’s is not that difficult for mid-sized to large IT shops. Note that the outsourcer must include a roughly 20% margin in its long-term costs (though it may opt to reduce profits in the first year or two of the contract) as well as the cost of an account team. And in Europe, 15 to 20% VAT must be added. Further, outsourcers will typically avoid making the small investments required for continuous improvement over time. Thus, three to five years out, nearly all outsourcing arrangements cost 25% to 50% more than a well-run in-house service (which has the further benefit of higher quality). You should set the bar that your in-house services deliver comparable or better value than typical outsourced alternatives. But ensure you have the leadership in place, and provide the support, for them to reach such a capability.
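The arithmetic behind that 25% to 50% figure can be sketched quickly. The margin, overhead, and drift rates below are illustrative assumptions based on the rough factors just noted, not actual contract numbers:

```python
# Illustrative only: compares a well-run in-house shop's run rate with an
# outsourced equivalent, using the rough factors noted above. All figures
# are assumptions for the sketch, not actual contract data.

def outsourced_vs_inhouse(years: int = 5,
                          margin: float = 0.20,        # outsourcer's long-term margin
                          account_team: float = 0.05,  # assumed account-team overhead
                          vat: float = 0.0,            # use 0.15-0.20 if in Europe
                          annual_drift: float = 0.03) -> float:
    """Ratio of outsourced cost to in-house cost after `years`."""
    # Starting premium: margin plus account team, grossed up by any VAT.
    start_ratio = (1 + margin + account_team) * (1 + vat)
    # The gap then widens each year as the outsourcer skips the small
    # continuous-improvement investments a well-run in-house shop makes.
    return start_ratio * (1 + annual_drift) ** years

ratio = outsourced_vs_inhouse(years=5)
print(f"Outsourced run rate after 5 years: ~{ratio:.0%} of in-house cost")
```

With these assumed inputs the premium lands in the 25–50% range cited above, and adding European VAT pushes it toward the top of that range.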

But like any tool or management approach, used properly and in the right circumstances, outsourcing is a benefit to the company. As a leader you cannot focus on all company priorities at once, nor would you have the staff to deliver on them even if you could. And in some areas, such as field support, there are natural economies of scale that benefit a third party doing the same work for many companies. So consider outsourcing in these areas, but weigh the extent of the outsourcing carefully. Ensure that you still retain critical IP and control. Or use outsourcing to augment and increase your capacity, or where you can leverage best-in-class specialized services to your company’s benefit. Then, once the vendor is selected and the deal effectively negotiated, manage the outsourcing vendor well. Since effective management of large deals is complex and nearly impossible, it is far better to do small outsourcing deals or selective out-tasking. The outsourced service should be managed like any significant in-house function: SLAs are established, proper operational metrics are gathered, performance is regularly reviewed with management, and actions are noted and tracked to address issues or improve service. Properly constructed contracts that accommodate potential failure are key if things do not go well. Senior management should jointly review the service every three to six months, and consequences must be in place for performance (good or bad).

Well-selected and well-managed outsourcing will then complement your in-house team, alongside more traditional approaches that leverage contractors for peak workloads or projects, or the modern alternative of using cloud services to out-task some functions and applications. With these best practices in place and a selective hand, your IT shop and company can benefit from outsourcing and avoid the failures.

What experiences have you had with outsourcing? Do you see improvement in how companies leverage such services? I look forward to your comments.

Best, Jim Ditmore