Whither Virtual Desktops?

The enterprise popularity of tablets and smartphones at the expense of PCs is sinking more than desktop sales. Beyond the clear evidence that tablets and smartphones are cannibalizing PC sales, mobility and changing device economics are also undermining corporate desktop virtualization, or VDI.

The heyday of virtual desktop infrastructure came around 2008 to 2010, as companies sought to cut their desktop computing costs — VDI promised savings of 10% to as much as 40%, even after the additional engineering and server investments required to implement the VDI stack. Some companies anticipated replacing up to 90% of their PCs with VDI alternatives, both to reduce desktop costs and to address specific situations not well served by local PCs (e.g., smaller overseas sites with local software licensing and security complexities).

But something happened on the way to VDI dominance: the market changed faster than VDI could mature. Employee demand for mobile devices, in line with the BYOD phenomenon, has refocused IT shops on delivering mobile device management capabilities so employees can securely use their smartphones for work, not on rolling out VDI. On-the-go employees are gravitating toward lightweight laptops, a variety of tablets and other non-desktop devices that aren’t VDI-friendly; they want to use multiple devices, not be tied down to a single VDI-based interface. And because the VDI interface is at best cumbersome on a touch device running an OS other than Windows, there will be less and less demand for VDI as the way to interconnect. The dominance of these highly mobile smartphones and tablets will only increase in the next few years as the client device war among Apple, Android, and Microsoft (Nokia) heats up further and yields better, cheaper products, so VDI’s appeal will fall even farther.

Meantime, PC prices, both desktop and laptop, have declined steadily over the past four years, dropping 30-40% (other than Apple’s products, of course), and that decline will accelerate. With shipments falling these past 18 months, the entire industry has overcapacity, and the only way out is to spur demand and consumer interest in PCs through further price cuts. (Note that the answer is not that Windows 8 will spur demand.) Already Dell and Lenovo are using lower prices to try to hold their volumes steady. And with other devices entering the market (e.g., smart TVs, smart game stations), it will become a very bloody marketplace. The end result for IT shops will be slick $300 laptops that come fully equipped with Windows (perhaps even Office). At those prices, VDI will have minimal or no cost advantage, especially once the back-end VDI engineering costs are taken into account. And if employees prefer a fully equipped $300 laptop or tablet, IT shops will be hard pressed to pass that up and impose VDI instead. In fact, by late 2014, corporate IT shops could find their VDI solutions costing more than traditional client devices (e.g., that $300 laptop), because the major components of VDI cost (servers, engineering work and support) will not drop nearly as quickly as prices in a distressed PC market.

There is no escaping the additional engineering time and attention VDI requires. The complex stack (whether Citrix or VMware) still demands more engineering than a traditional solution. And with this complexity come bugs between the client, VDI and server layers that impact the user experience. Recent implementations still show far too many defects between the layers. At Allstate, we have had more than our share of defects in our recent rollout between the virtualization layer, Windows, and third-party products. And this is for what should be, by now, a mature technology.

Faced with greater costs, demands on scarce engineering resources and employee appetite for the latest mobile client devices, organizations will begin to throw in the towel on VDI. Some companies now deploying will reduce the scope of their VDI rollouts. Some now evaluating VDI will jump instead to mobile-only alternatives focused on tablets and smartphones. And those with extensive deployments will allow significant erosion of their VDI footprint as internal teams opt for other solutions, employee demand moves to smartphones and tablets, or lifecycle events occur. This is a long fall from the lofty goals of 90% deployment a few years ago. IT shops do not want to support VDI for an employee who also has a tablet, laptop or desktop, because doing so essentially doubles the cost of the client technology environment. In an era of very tight IT budgets, excess VDI deployments will be shed.

One of the more interesting phenomena in the rapidly changing world of technology is when a technology wave gets overtaken well before it peaks. This has occurred many times before (think optical disk storage in the data center), but perhaps most recently with netbooks, whose primary advantages of cost and simplicity were overwhelmed by smartphones from below and ultrabooks from above. Carving out a sustainable market niche on cost alone is a very difficult task in the technology world, especially when you are trying to reverse long-term industry trends.

Over the past 50 years of computing history, intelligence and capability have been drawn either to the center or to the very edge. In the ’60s, mainframes were the ‘smart’ center and 3270 terminals were the ‘dumb’ edge devices. In the ’90s, client computing took hold and the edge became much smarter with PCs, though there was a bulging middle tier in the three-tier client/server structure. That middle tier disappeared as hybrid data centers and cloud computing re-centralized computing, and the ‘smart’ edge moved out even farther with smartphones and tablets. While VDI has a ‘smart’ center, it assumes a ‘dumb’ edge, which goes against the grain of long-term compute trends. Thus the VDI wave, a viable alternative for a time, will dissipate in the next few years as those long-term trends overtake it fully.

I am sure there will still be niche applications, like offshore centers (especially where VDI also enables better control of software licensing), and there will still be small segments of the user population who swear by the ability to access their desktop from anywhere they can log in without carrying anything. But these are long-term niches: VDI solutions will hold a smaller and smaller portion of the device share, perhaps 10%, maybe even 20%, but not more.

What is your company’s experience with VDI? Where do you see its future?

Best, Jim Ditmore

 This post was first published in InformationWeek on September 13, 2013 and has been slightly revised and updated.

Getting to Private Cloud: Key Steps to Build Your Cloud

Now that I am back from summer break, I want to continue the discussion on cloud and map out how medium and large enterprises can build their own private cloud. As we’ve discussed previously, software-as-a-service, engineered stacks and private cloud will be the biggest IT winners in the next five to ten years. Private clouds hold the most potential — in fact, early adopters such as JP Morgan Chase and Fidelity are seeing larger savings and greater benefits than initially anticipated.

While savings is a key driver for moving to private cloud, faster development cycles and better time to market are turning out to be even more significant and more valuable to early adopter firms than initially estimated. And it is not just a speed improvement but a qualitative one: smaller projects and riskier pilots can be executed with far greater speed and at nominal cost. This enables a ‘fail fast’ approach to corporate innovation that greatly accelerates the selection process, avoids extensive wasted investment in lengthier traditional pilots (that would have failed anyway), and greatly improves time to market for those ideas that succeed.

As for the larger savings, early implementations at scale are seeing savings well in excess of 50%. This is well beyond my estimate of 30% and is occurring in large part because of the vastly reduced labor requirements to build and administer a private cloud versus traditional infrastructure.

So with greater potential benefits, how should an IT department go about building a private cloud? The fundamental building block is a virtualized server base built on commodity hardware and open systems, plus the server engineering and administration expertise to support the platform. There’s also a strong early trend toward leveraging open source software for private clouds, from the Linux operating system to OpenNebula and Eucalyptus for infrastructure management. But just having a virtualized server platform does not make a private cloud; several additional elements are required.

First, establish a set of standardized images that constitute most of the stack. Preferably, each image will span from the hardware layer through the operating system to the application server layer, and will include systems management, security, middleware and database. Ideally, go with a dozen or fewer server images, and certainly no more than 20. Treat everything else as custom, separate and distinct from the cloud.

Once you have established your target set of private cloud images, build a catalogue and ordering process that is easy, rapid, and transparent. The costs should be clear, and the server units should be processor-months or processor-weeks. Couple the catalogue with highly automated provisioning and de-provisioning. Your objective should be to deliver servers quickly, certainly within hours, preferably within minutes (once the costs are authorized by the customer). De-provisioning should be just as rapid and routine; in fact, you should offer automated ‘sunset’ servers in test and development environments (e.g., 90 days after the servers are allocated, they are automatically returned to the pool). I strongly recommend well-published, clear cost and allocation reporting to drive the right behaviors among your users. It will encourage quicker adoption, better and more efficient usage, and rapid turn-in when servers are no longer needed. With these four prerequisites in place (standard images; a catalogue and easy ordering process; clear costs and allocations; and automated provisioning and de-provisioning) you are ready to start your private cloud.
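As an illustration of these mechanics, below is a minimal sketch in Python of catalogue-driven provisioning with automatic sunset. The image names, costs and the 90-day default are assumptions invented for the example, not a reference to any particular cloud product.

    # Hypothetical sketch: catalogue-driven provisioning with automatic sunset.
    from dataclasses import dataclass
    from datetime import date, timedelta

    CATALOGUE = {  # a dozen or fewer standard images (names/costs are made up)
        "web-small":  {"vcpus": 2, "cost_per_month": 40},
        "app-medium": {"vcpus": 4, "cost_per_month": 75},
        "db-large":   {"vcpus": 8, "cost_per_month": 140},
    }

    @dataclass
    class Allocation:
        image: str
        owner: str
        start: date
        sunset: date  # after this date, the server returns to the pool

    active: list[Allocation] = []

    def provision(image: str, owner: str, days: int = 90) -> Allocation:
        """Order a standard image; non-catalogue requests are custom exceptions."""
        if image not in CATALOGUE:
            raise ValueError(f"{image} is custom - route through the exception process")
        alloc = Allocation(image, owner, date.today(), date.today() + timedelta(days=days))
        active.append(alloc)
        return alloc

    def reclaim(today: date) -> list[Allocation]:
        """De-provision test/dev servers whose sunset date has passed."""
        expired = [a for a in active if a.sunset < today]
        for a in expired:
            active.remove(a)  # capacity is freed for the next requester
        return expired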

Look to build your private cloud in parallel with your traditional data center platforms. There should be both a development and test private cloud as well as a production private cloud. Seed the cloud with an initial investment of servers of each standard type, then transition demand into the private cloud as new projects initiate, growing it project by project.

You can begin by routing small and medium-size projects to the private cloud environment; as it builds scale and the provisioning kinks are ironed out, migrate more and more server requests until nearly all are routed through your private cloud path. As you achieve scale and prove out your ordering, provisioning and de-provisioning processes, you can tighten the criteria for projects to proceed with traditional custom servers. Within six months, custom, traditional servers should be the rare exception and should be charged fully for the excess costs they generate.

Once the private cloud is established, you can verify the cost savings and advantages. There will be additional benefits, such as improved time to market, because server deployment is no longer the long pole in the tent for your development efforts. Well armed with this data, you can circle back and tackle existing environments and legacy custom servers. While a platform transition on its own is often a poor investment, a transition to private cloud during another event (e.g., a major application release or a server end-of-life migration) should easily become a winning one. A few early adopters (such as JPMC and Fidelity) are seeing outsized benefits and a strong developer push into these private cloud environments. So, if you build it well, you should be able to reap the same advantages.

How is your cloud journey proceeding? Are there other key steps necessary to be successful? I look forward to hearing your perspective.

Best, Jim Ditmore

 

Beyond Big Data

Today’s post on Big Data is authored by Anthony Watson, CIO of Europe, Middle East Retail & Business Banking at Barclays Bank. It is a thought-provoking take on ‘Big Data’ and how best to use it effectively. Please look past the atrocious British spelling :). We look forward to your comments and perspective. Best, Jim Ditmore

In March 2013, I read with great interest the results of the University of Cambridge analysis of some 58,000 Facebook profiles. The results predicted unpublished information such as the gender, sexual orientation, and religious and political leanings of the profile owners. In one of the biggest studies of its kind, scientists from the university’s psychometrics team developed algorithms that were 88% accurate in predicting male sexual orientation, 95% for race and 80% for religion and political leanings. Personality types and emotional stability were also predicted with accuracy ranging from 62% to 75%. The experiment was conducted over the course of several years through their MyPersonality website and Facebook application. You can sample a limited version of the method for yourself at http://www.YouAreWhatYouLike.com.
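For readers curious about the mechanics, the general technique is not exotic: reduce the sparse users-by-likes matrix to a modest number of dimensions, then fit a classifier against the known trait. Below is a minimal sketch of that general approach on synthetic data; the component count and model choice are illustrative assumptions, not the Cambridge team’s exact pipeline (and random data yields only chance-level accuracy, of course).

    # Sketch of the general approach: dimensionality reduction over a
    # users-by-likes matrix, then a classifier for a binary trait.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    likes = rng.integers(0, 2, size=(5000, 800))   # 5,000 users x 800 pages (synthetic)
    trait = rng.integers(0, 2, size=5000)          # binary trait to predict (synthetic)

    components = TruncatedSVD(n_components=50).fit_transform(likes)
    X_train, X_test, y_train, y_test = train_test_split(components, trait, test_size=0.2)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")  # ~0.50 on random data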

Not surprisingly, Facebook declined to comment on the analysis, but I guarantee you none of this information is news to anyone at Facebook. In fact it’s just the tip of the iceberg. Without a doubt the good people of Facebook have far more complex algorithms trawling, interrogating and manipulating its vast and disparate data warehouses, striving to give its demanding user base ever richer, more unique and distinctly customised experiences.

As an IT leader, I’d have to be living under a rock to have missed the “Big Data” buzz. Vendors, analysts, well-intentioned executives and even my own staff – everyone seems to have an opinion lately, and most of those opinions imply that I should spend more money on Big Data.

It’s been clear to me for some time that we are no longer in the age of “what’s possible” when it comes to Big Data. Big Data is “big business” and the companies that can unlock, manipulate and utilise data and information to create compelling products and services for their consumers are going to win big in their respective industries.

Data flow around the world and through organisations is increasing exponentially and becoming highly complex; we’re dealing with ever greater demands for storing, transmitting, and processing it. But in my opinion, all that is secondary. What’s truly compelling is what’s being done with the data: enabling better customer service and bespoke consumer interactions that significantly increase value along all our service lines, in a way that was simply not possible just a few years ago. Big Data is just a means to that end, and I question whether we’re losing sight of this in the midst of all the hype.

Why do we want bigger or better data? What is our goal? What does success look like? How will we know if we have attained it? These are the important questions and I sometimes get concerned that – like so often before in IT – we’re rushing (or being pushed by vendors, both consultants and solution providers alike) to solutions, tools and products before we really understand the broader value proposition. Let’s not be a solution in search of a problem. We’ve been down that supply-centric road too many times before.

For me it’s simple: innovation starts with demand. Demand is the force that drives innovation. However, this should not be confused with the axiom “necessity is the mother of invention”. When it comes to technology, we live in a world where invention and innovation are defining the necessity and the demand. It all starts with a value experience for our customers. Only through a deep understanding of what “value” means to the customer can we truly be effective in searching out solutions. This understanding requires an open mind and the innovative resolve to challenge the conventions of “how we’ve always done it.”

Candidly, I hate the term “Big Data”. It is marketing verbiage, popularised by Gartner, that covers a broad ecosystem of problems, tools, techniques, products, and solutions. If someone suggests you have a Big Data problem, that doesn’t say much, as arguably any company operating at scale, in any industry, has some sort of challenge with data. But beyond tagging all these challenges with the term Big Data, you’ll find little in common across diverse industries, products or services.

Given this diversity across industries and within organisations, how do we construct anything resembling a Big Data strategy? We have to stop thinking about the “supply” of Big Data tools, techniques, and products peddled by armies of overeager consultants and solution providers. For me, technology simply enables a business proposition. We need to look upstream, to the demand. Demand presents itself in business terms. For example, in financial services you might ask:

  • Who are our most profitable customers and, most importantly, why?
  • How do we increase customer satisfaction and drive brand loyalty?
  • How do we take excess and overbearing processes out of our supply chain and speed up time to market/service?
  • How do we reduce our losses to fraud without increasing compliance & control costs?

Importantly, asking these questions may or may not lead us down a Big Data road. But we have to start there. And the next set of questions is not about the solutions but framing the demand and potential solutions:

  • How do we understand the problem today? How is it measured? What would improvement look like?
  • What works in our current approach, in terms of the business results? What doesn’t? Why? What needs to improve?
  • Finally, what are the technical limitations in our current platforms? Have new techniques and tools emerged that directly address our current shortcomings?
  • Can we develop a hypothesis and an experimental approach to test whether these new techniques truly deliver an improvement?
  • Having conducted the experiment, what did we learn? What should we abandon, and what should we move forward with?

There’s a system to this. Once we go through the above process, we start the cycle over. In a nutshell, it’s the process of continuous improvement. Some of you will recognise the well-known cycle of Plan, Do, Check, Act (“PDCA”) in the above.

Continuous improvement and PDCA are interesting in that they are essentially the scientific method applied to business. Fittingly, one of the notable components of the Big Data movement is the emerging role of the Data Scientist.

So, who can help you assess this? Who is qualified to walk you through the process of defining your business problems and solving them through innovative analytics? I think it is the Data Scientist.

What’s a Data Scientist? It’s not a well-defined position, but here would be an ideal candidate:

  • Hands-on experience with building and using large and complex databases, relational and non-relational, and in the fields of data architecture and information management more broadly
  • Solid applied statistical training, grounded in a broader context of mathematical modeling.
  • Exposure to continuous improvement disciplines and industrial theory.
  • Most importantly: a functional understanding of whatever industry is paying their salary, i.e., real-world operational experience – theory is valuable; “scar tissue” is essential.

This person should be able to model data, translate that model into a physical schema, load that schema from sources, and write queries against it, but that’s just the start. One semester of introductory stats isn’t enough. They need to know what tools to use and when, and the limits and trade-offs of those tools. They need to be rigorous in their understanding and communication of confidence levels in their models and findings, and cautious of the inferences they draw.
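To make that end-to-end span concrete (model, schema, load, query), here is a trivial, hypothetical sketch; the incident table and figures are invented for illustration.

    # Hypothetical end-to-end sketch: define a schema, load it, query it.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE incident (
        id INTEGER PRIMARY KEY, service TEXT, severity INTEGER, minutes_down REAL)""")
    db.executemany(
        "INSERT INTO incident (service, severity, minutes_down) VALUES (?, ?, ?)",
        [("payments", 1, 42.0), ("payments", 2, 5.5), ("portal", 1, 130.0)],
    )
    # Which services cost us the most downtime at the highest severity?
    for row in db.execute("""SELECT service, SUM(minutes_down) AS total
                             FROM incident WHERE severity = 1
                             GROUP BY service ORDER BY total DESC"""):
        print(row)  # ('portal', 130.0) then ('payments', 42.0)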

Some of the Data Scientist’s core skills are transferable, especially at the entry level. But at higher levels, they need to specialise. Vertical industry problems are rich, challenging, and deep. For example, an expert in call centre analytics would most certainly struggle to develop comparable skills in supply chain optimisation or workforce management.

And ultimately, they need to be experimentalists – true scientists on a quest for knowledge on behalf of their company or organisation, with an insatiable sense of curiosity, engaged in a continuous cycle of:

  • examining the current reality,
  • developing and testing hypotheses, and
  • delivering positive results for broad implementation so that the cycle can begin again.

There are many sectors we can apply Big Data techniques to: financial services, manufacturing, retail, energy, and so forth. There are also common functional domains across the sectors: human resources, customer service, corporate finance, and even IT itself.

IT is particularly interesting. It’s the largest consumer of capital in most enterprises. IT represents a set of complex concerns that are not well understood in many enterprises: projects, vendors, assets, skilled staff, and intricate computing environments. All these come together to (hopefully) deliver critical and continuous value in the form of agile, stable and available IT services for internal business stakeholders, and most importantly external customers.

Given the criticality of IT, it’s often surprising how poorly managed IT is in terms of data and measurement. Does IT represent a Big Data domain? Yes, absolutely. From the variety of IT deliverables and artefacts and inventories, to the velocity of IT events feeding management consoles, to the volume of archived IT logs, IT itself is challenged by Big Data. IT is a microcosm of many business models. We in IT don’t do ourselves any favours starting from a supply perspective here, either. IT’s legitimate business questions include:

  • Are we getting the IT we’re paying for? Do we have unintentional redundancy in what we’re buying? Are we paying for services not delivered?
  • Why did that high severity incident occur and can we begin to predict incidents?
  • How agile are our systems? How stable? How available?
  • Is there a trade-off among agility, stability, and availability? How can we increase all three?

With the money spent on IT, and its operational criticality, Data Scientists can deliver value here as well. The method is the same: understand the current situation, develop and test new ideas, implement the ones that work, and watch results over time as input into the next round.

For example, the IT organisation might be challenged by a business problem of poor stakeholder trust, due to real or perceived inaccuracies in IT cost recovery. Suppose it is then determined that these inaccuracies stem from poor data quality for the IT assets on which cost recovery is based.

Data Scientists can explain that without an understanding of data quality, one does not know what confidence a model merits. If quality cannot be improved, the model remains more uncertain. But often, the quality can be improved. Asking “why” – perhaps repeatedly – may uncover key information that assists in turn with developing working and testable hypotheses for how to improve. Perhaps adopting master data management techniques pioneered for customer and product data will assist. Perhaps measuring IT asset data quality trends over time is essential to improvement – people tend to focus on what is measured and called out in a consistent way. Ultimately, this line of inquiry might result in the acquisition of a toolset like Blazent, which provides IT analytics and data quality solutions enabling a true end-to-end view of the IT ecosystem. Blazent is a toolset we’ve deployed at Barclays to great effect.
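As a hedged sketch of what measuring asset data quality over time might look like (the required fields, records and scoring below are illustrative assumptions, not Blazent’s method):

    # Illustrative sketch: score completeness of IT asset records so the
    # trend can be tracked over time. Fields and records are invented.
    REQUIRED_FIELDS = ["hostname", "owner", "cost_centre", "location"]

    assets = [
        {"hostname": "srv01", "owner": "payments", "cost_centre": "CC42", "location": "DC1"},
        {"hostname": "srv02", "owner": None, "cost_centre": "CC42", "location": "DC1"},
        {"hostname": "srv03", "owner": "portal", "cost_centre": None, "location": None},
    ]

    def completeness(records: list[dict]) -> dict[str, float]:
        """Fraction of records with each required field populated."""
        n = len(records)
        return {f: sum(1 for r in records if r.get(f)) / n for f in REQUIRED_FIELDS}

    print(completeness(assets))  # e.g. 'owner' populated in only 2 of 3 records
    # Records below the quality bar feed the 'ask why' loop before cost recovery runs.
    incomplete = [r["hostname"] for r in assets
                  if any(not r.get(f) for f in REQUIRED_FIELDS)]
    print(incomplete)  # ['srv02', 'srv03']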

Similarly, a Data Scientist schooled in data management techniques, and with an experimental, continuous improvement orientation, might look at an organisation’s recurring problems in diagnosing and fixing major incidents, and recommend that analytics be deployed against the terabytes of logs accumulating every day, both to improve root cause analysis and ultimately to proactively predict outage scenarios based on previous outage patterns. Vendors like Splunk and Prelert might be brought in to assist with this problem at the systems management level. SAS has applied text analytics across incident reports in safety-critical industries to identify recurring patterns of issues.
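Below is a minimal sketch of the kind of pattern-spotting involved, flagging log signatures whose frequency spikes against their own history; the signatures, counts and threshold are invented, and far simpler than what such vendors actually do.

    # Sketch: flag log message signatures whose hourly count spikes well above
    # their historical mean - a crude precursor to outage prediction.
    from statistics import mean

    history = {  # hourly counts per message signature (illustrative)
        "db connection timeout": [2, 3, 1, 2, 4, 2],
        "disk latency warning":  [5, 4, 6, 5, 5, 4],
    }
    current_hour = {"db connection timeout": 19, "disk latency warning": 6}

    THRESHOLD = 3.0  # flag anything running at 3x its historical mean

    for signature, counts in history.items():
        baseline = mean(counts)
        now = current_hour.get(signature, 0)
        if now > THRESHOLD * baseline:
            print(f"ANOMALY: '{signature}' at {now}/hr vs baseline {baseline:.1f}/hr")
    # -> ANOMALY: 'db connection timeout' at 19/hr vs baseline 2.3/hr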

It all starts with business benefit and value. The Big Data journey must begin with the end in mind, and not rush to purchase vehicles before the terrain and destination are known. A Data Scientist, or at least someone operating with a continuous improvement mindset who will champion this cause, is an essential component. So, rather than just talking about “Big Data,” let’s talk about “demand-driven data science.” If we take that as our rallying cry and driving vision, we’ll go much further in delivering compelling, demonstrable and sustainable value in the end.

Best, Anthony Watson

Massive Mobile Shifts and Keeping Score

With the first quarter of 2013 at a close, we see a technology industry moving at an accelerated pace. Consumerization is driving a faster rate of change, with consequent impacts on the technology ecosystem and the companies occupying its different perches. From rapidly growing BYOD demand to the projected demise of the PC, consumers are shifting their computing choices much faster than corporations, and some suppliers are struggling to keep up. These rapid shifts require corporate IT groups to adapt their services more quickly. From implementing MDM (mobile device management), to increasing the bandwidth of wireless networks, to adopting tablets and smartphones as the primary customer interfaces for future development, IT teams must adjust to ensure effective services and competitive parity or advantage.

Let’s start with mobile. Consumers today use their smartphones to conduct much of their everyday business; if they don’t complete a transaction on the device, they often use it to research or initiate it. The lifeblood of most retail commerce has shifted heavily to the mobile channel. Thus, companies must have a significant and effective mobile presence to achieve competitive advantage (or even survive). Mobile has become the first order of delivery for company services, followed by the internet and then internal systems for call centers and staff. And since the vast majority of mobile devices (smartphone or tablet) are not Windows-based (nor is the internet), application development shops need to build or augment their current Windows-oriented skills to enable native mobile development. Back-end systems must be re-engineered to more easily support mobile apps. And given that your company’s competitive edge may be determined by its mobile apps, be cautious about fully outsourcing this critical work.

Internally, such devices are becoming a pervasive feature of the corporate landscape. It is important to accommodate many of your staff’s device choices while still securing and managing the client environment. Thus, implementations of MDM to manage these devices, and to enable corporate security on the portion of the device that contains company data, are increasing at a rapid pace. Further, while relatively few companies currently have a corporate app store, it will become a prevalent feature within a few years, and companies will shift from a ‘push’ model of software deployment to a ‘pull’ model. Other consequences of the rapid adoption of mobile devices by staff include needing to implement wireless at company sites, adding visitor wireless capabilities (like a Starbucks wifi), or simply increasing capacity to handle the additional load (a 50% increase in internal wifi demand in January is not unheard of as everyone returns to the office with their Christmas gifts).

A further consequence of the massive shift to smartphones and tablets is the diminishing reach and impact of Microsoft, based on Gartner’s latest analysis and projections. The shift away from PCs and toward tablets in the consumer market erodes Microsoft’s largest revenue sources. It is stunning to realize that Microsoft, with its long consumer market history, could become ever more dependent on the enterprise rather than the consumer market. Yet, because consumer choices are rapidly making inroads into the corporate device market, even this will be a safe harbor for only a limited time. With Windows 8, Microsoft tried to address both markets with one OS platform, perhaps not succeeding well in either. A potential outcome for Microsoft is to introduce the reported ‘Blue’ OS version, which will be a complete touch interface (versus a hybrid of touch and traditional). Yet Microsoft has struggled to gain traction against Android and iOS tablets and smartphones, so it is hard to see how this will yield significant share improvement. And with new Chrome devices and a reputed cheap iPhone coming, perhaps even Gartner’s projections for Microsoft are optimistic. The last overwhelming consumer OS competitive success Microsoft had was against OS/2 and IBM — Apple iOS and Google Android are far different competitors! With the consumer space exceedingly difficult to make headway in, my top prediction for 2013 is that Microsoft will introduce a new Windows ‘classic’ to satisfy the millions of corporate desktops where touch interfaces are inadequate or applications have not been redesigned. Otherwise, enterprises may stand pat on current versions for an extended period, depriving Microsoft of critical revenue streams. Subsequent to the first version of this post, there were reports of Microsoft introducing Windows 8 stripped of the ‘Metro’ touch interface! Corporate IT shops need to monitor these outcomes, because once a shift occurs, there could be a rapid transition not just in the OS but in the productivity suites and email as well.

There is also upheaval in the PC supplier base as a result of the worldwide sales decline of 13.9% (year over year in Q1). As also predicted here in January, HP struggled the most among the top three of HP, Lenovo and Dell. HP was down almost 24%, barely retaining the title of top volume manufacturer. Lenovo was flat, delivering the best performance in a declining market, with 11.7 million units in the quarter, just below HP’s 12 million. Dell suffered a 10.9% drop, which, given the company is up for sale, is remarkable. Acer and other smaller firms saw major sales drops as well (more than 31% for Acer). The ongoing market decline will hit the smaller participants hardest, with consolidation and fallout likely late this year and early in 2014. The real question is whether HP can turn around its rapid decline. It will be a difficult task, because the smartphone, tablet and Chromebook onslaught is occurring just as HP faces a rejuvenated Lenovo and a very aggressive Dell. Ultrabooks will provide some margin and volume improvement, but not enough to make up for the declines. The current course suggests that early 2014 will see a declining market with Lenovo comfortably leading, followed by a lagging HP fighting tooth and nail with Dell for second place. HP must pull off a major product refresh, supply chain tightening, and aggressive sales to turn it around. It will be a tall order.

Perhaps the next consumerization influence will be greater use of desktop video. Many employees have experienced the pretty good video of Skype or FaceTime and will potentially expect similar experiences in corporate conversations. Current internal networks often do not have the bandwidth for such casual, frequent video interactions, especially at smaller campuses or remote offices. It will be important for IT shops to manage the introduction of these capabilities so that more critical workload is not impacted.

How is your company’s progress on mobile? Do you have an app store? Have you implemented desktop video? I look forward to hearing from you.

Best, Jim Ditmore

Cloud Trends: Turning the Tide on Data Centers

A recent study by Intel shows that a compute load that required 184 single-core processors in 2005 can now be handled with just 21 processors: roughly nine servers replaced by one.

Moore’s Law

For 40 years, technology rode Moore’s Law to yield ever-more-powerful processors at lower cost. Its compounding effect was astounding: one of the best analogies is that we now have more processing power in a smartphone than the Apollo astronauts had when they landed on the moon. At the same time, though, the electrical power requirements for those processors continued to increase at a rate similar to the increase in transistor count. While new technologies (CMOS, for example) provided a one-time step-down in power requirements, each turn-up in processor frequency and density resulted in a corresponding power increase.

As a result, by the 2000-2005 timeframe there were industry concerns regarding the amount of power and cooling required for each rack in the data center. And with the enormous increase in servers spurred by Internet commerce, most IT shops have labored for the past decade to supply adequate data center power and cooling.

Meantime, most IT shops have experienced compute and storage growth rates of 20% to 50% a year, requiring either additional data centers or major increases in power and cooling capacity at existing centers. Since 2008, there has been some alleviation due to both slower business growth and the benefits of virtualization, which has let companies reduce their number of servers by as much as 10 to 1 for 30% to 70% of their footprint. But IT shops can deploy virtualization only once, suggesting that they’ll be staring at a data center build or major upgrade in the next few years.
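To put rough numbers on that one-time virtualization benefit, here is a back-of-the-envelope calculation; the 1,000-server estate and 50% virtualized share are assumptions chosen from within the ranges above.

    # Back-of-the-envelope: one-time virtualization benefit.
    estate = 1000                 # illustrative starting server estate
    virtualized_share = 0.5       # between the 30% and 70% cited above
    consolidation = 10            # 10-to-1 reduction on the virtualized portion

    remaining = estate * (1 - virtualized_share) + estate * virtualized_share / consolidation
    print(remaining)                         # 550.0 servers
    print(f"{1 - remaining / estate:.0%}")   # 45% one-time reduction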

But an interesting thing has happened to server power efficiency. Before 2006, such efficiency improvements were nominal, represented by the solid blue line below. Even if your data center kept the number of servers steady but just migrated to the latest model, it would need significant increases in power and cooling. You’d experience greater compute performance, of course, but your power and cooling would increase in a corresponding fashion. Since 2006, however, compute efficiency (green line) has improved dramatically, even outpacing the improvement in processor performance (red lines).

Trend Change for Power Efficiency

The chart above shows how the compute efficiency (performance per watt — green line) has shifted dramatically from its historical trend (blue lines). And it’s improving about as fast as compute performance is improving (red lines), perhaps even faster. The chart above is for the HP DL 380 server line over the past decade, but most servers are showing a similar shift.

This stunning shift is likely to continue for several reasons. Power and cooling costs remain a significant proportion of overall server operating costs. Most companies now assess power efficiency when evaluating which server to buy. Server manufacturers can differentiate themselves by improving power efficiency. Furthermore, there’s a proliferation of appliances or “engineered stacks” that eke out significantly better performance from conventional technology within a given power footprint.

A key underlying reason for future increases in compute efficiency is that chipset technologies are increasingly driven by the requirements of consumer mobile devices. One of the most important requirements of the consumer market is improved battery life, which places a premium on energy-efficient processors. Chip and power efficiency advances and designs in the consumer market will flow back into the corporate (server) market. An excellent example is HP’s Moonshot program, which leverages ARM chips (previously typical only in consumer devices) for a purported 80%+ reduction in power consumption. Expect this power efficiency trend to continue for the next five and possibly ten years.

So how does this propitious trend impact the typical IT shop? For one thing, it reduces the need to build another data center. If you have some buffer room now in your data center and you can move most of your server estate to a private cloud (virtualized, heavily standardized, automated), then you will deliver more compute power yet also see a leveling and then a reduction in the number of servers (blue line) and a similar trend in the power consumed (green line).

Traditional versus Optimized Server Trends

This analysis assumes 5% to 10% business growth (translating to a need for a 15% to 20% increase in server performance/capacity). You’ll have to employ best practices in capacity and performance management to get the most from your server and storage pools, but the long-term payoff is big. If you don’t leverage these technologies and approaches, your future is the red and purple lines on the chart: ever-rising compute and data center costs over the coming years.
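A simple projection shows why the optimized lines bend down; the growth and efficiency rates below are illustrative assumptions consistent with the ranges discussed above.

    # Sketch: if per-server efficiency improves faster than demand grows,
    # the server count (and with it the power draw) levels off and falls.
    servers = 1000.0
    demand_growth = 0.18      # assumed capacity need growth per year
    efficiency_gain = 0.25    # assumed performance-per-server gain per year

    for year in range(1, 6):
        servers *= (1 + demand_growth) / (1 + efficiency_gain)
        print(f"year {year}: ~{servers:.0f} servers")
    # year 1: ~944 ... year 5: ~750 - more compute from fewer, cooler servers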

By applying these approaches, you can do more than stem the compute cost tide; you can turn it. Have you started this journey? Have you been able to reduce the total number of servers in your environment? Are you able to meet your current and future business needs and growth within your current data center footprint?

What changes or additions to this approach would you make? I look forward to your thoughts and perspective.

Best, Jim Ditmore

Note this post was first published on January 23rd in InformationWeek. It has been updated since then. 

First Quarter Technology Trends to Note

For those looking for the early signs of spring, crocuses and flowering quince are excellent harbingers. For those looking for signs of technology trends and shifts, I thought it would be worthwhile to point out some new ones and provide further emphasis or confirmation of a few recent ones:

1. Enterprise server needs have flattened, and the cumulative effect of cloud, virtualization, SaaS, and appliances means the corporate server market has fully matured. The 1Q13 numbers bear out that this trend is continuing (as mentioned here last month). Some big vendors are even seeing revenue declines in this space, and unit declines are possible in the near future. The result will be consolidation in the enterprise server and software industry. VMware, BMC and CA have already seen their share prices fall as investors worry the growth years are behind them. Make sure your contracts consider potential acquisitions or change of control.

2. Can dual SIM smartphones be just around the corner? Actually, they are now officially here. Samsung just launched a dual SIM Galaxy in China, so perhaps it will not be long before other device makers follow suit. Dual SIM enables excellent flexibility – just what the carriers do not want. When you travel overseas, you can insert a local SIM into your phone and handle all local or regional calls at low rates while still receiving your ‘home’ number calls. And for everyone who carries two devices, one for business and one for personal use, you can now keep your business and personal numbers separate on a single device.

3. Further evidence has appeared of the massive compromises enterprises are experiencing due to Advanced Persistent Threats (APTs). Most recently, Mandiant published a report that ties the Chinese government and the PLA to a broad set of compromises of US corporations and entities over many years. If you have not begun to move your enterprise security from a traditional perimeter model to a post-perimeter design, make the investment. You can likely bet you are already compromised; you need to not only lock the doors but also thwart those who have breached your perimeter. A post here late last year covers many of the measures you need to take as an IT leader.

4. Big data and decision sciences could drive major change in both software development and business analytics. It may not be change on the level that computers brought to, say, payroll departments and finance accountants in the 1980s, but it could be wide-ranging. Consider that perhaps one-third to one-half of all business logic now encoded in systems (by analysts and software developers) could instead be handled by data models and analytics making business rules and decisions in real time. Perhaps the business analysts and software developers will then move to developing the models and proving them out, or we could see a fundamental shift in the skills demanded in the workplace. We still have accountants, of course; they just no longer do the bulk of the administrative tasks. Now, perhaps applying this to legal work….

5. The explosion in mobile continues apace. Wireless data traffic is growing at 60% to 70% per year and is projected to continue at this pace. The mobile phone is likely to become the primary commerce device for most purchases in the next five years, so businesses are under enormous pressure to adapt and innovate in this space. Apps that can gracefully handle poor data connections (not everywhere is 4G) and hurried consumers will be critical for businesses. Unfortunately, there are not enough of these.

Any additions you would make? Please send me a note.

Best, Jim Ditmore

 

A Cloudy Future: Hard Truths and How to Best Leverage Cloud

We are long into the marketing hype cycle on cloud, which makes clear criteria to assess and evaluate the different cloud options critical. While cloud computing is often presented as homogenous, there are many different types, from infrastructure as a service (IaaS) to software as a service (SaaS) and many flavors in between. Perhaps the best examples are Amazon’s infrastructure services (IaaS), Google’s email and office productivity services (SaaS), and Salesforce.com’s customer relationship management (CRM) services (SaaS). Typically, the cloud is envisioned as an accessible, low-cost compute utility in the sky that is always available. Despite this lofty promise, companies need to select and build their cloud environment carefully, or they risk fracturing their computing capabilities, locking themselves into a single, higher-cost environment, or impairing their ability to differentiate and gain competitive advantage – or all three. So given these complexities, what approach should the medium to large enterprise take to best leverage cloud and optimize its data center?

The chart below provides an overview of the different types of cloud computing:

Cloud Computing and Variants

 

Note the positioning of the two dominant types of cloud computing:

  • the specialized Software-as-a-Service (SaaS), where the entire stack from server to application (even version) is provided, with minimal variation
  • the very generic IaaS or PaaS, where a set of server and OS version(s) is available with various types of storage, and any compatible database, middleware, or application can be installed and run

Other types of cloud computing include private cloud – essentially IaaS that an enterprise builds for itself. The private cloud variant is the evolution of the current corporate virtualized server and storage farm into a more mature instance with clearly defined service configurations, offerings and billing, as well as highly automated provisioning and management.

Another technology impacting the data center is the engineered stack, a further evolution of the computer appliances that have been available for decades. Engineered stacks are tightly specified, designed and engineered components integrated to provide superior performance and cost. Such devices have typically appeared in the network, security, database and specialized compute spaces. Firewalls and other security devices have long leveraged this approach, where generic technology (CPU, storage, OS) is closely integrated with special-purpose software and sold and serviced as a packaged solution. There has been a steady increase in the number of appliance or engineered stack offerings, moving further into data analytics, application servers, and middleware.

With the landscape set, it is important to understand the technology industry market forces and customer economics that will drive the data center landscape over the next five years. First, technology vendors will continue to invest in and expand their SaaS and engineered stack offerings, because these offer significantly better margins and more certain long-term revenue. A SaaS offering gets a far higher Wall Street multiple than traditional software licenses — and for good reason — it can be viewed as a consistent, ongoing revenue stream in which the customer is heavily locked in. Similarly for engineered stacks: traditional hardware vendors are racing to integrate as far up the stack as possible, both to create additional value and, more importantly, to create a locked-in advantage where upgrades, support and maintenance are more assured and carry higher margins than commodity servers or storage. It is a higher hurdle to replace an engineered stack than commodity equipment.

The industry investment will be accelerated by customer spend. Both SaaS and engineered stacks provide appealing business value that will justify their selection. For SaaS, it is speed and ease of implementation as well as potentially variable cost. For engineered stacks, it is a performance uplift at potentially lower cost that often makes the sale. Select SaaS and engineered stacks where the business case makes sense, but with these cautions:

  • for SaaS:
    • be very careful if it involves core business functionality or processes: you could be locking away your differentiation and ultimate competitiveness
    • know, before you sign, how you will get your data back should you stop using the SaaS
    • ensure the integrity and security of your data in your vendor’s hands
  • for engineered stacks:
    • understand where the product is in its lifecycle before selecting
    • anticipate the eventual migration path as the product fades at the end of its cycle

For both, avoid integrating key business logic into the vendor’s product. Otherwise you will face high migration costs at the end of the product’s life or when a more compelling product appears. There are multiple ways to ensure that your key functionality and business rules remain independent and modular, outside the vendor’s service package; one is sketched below.
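One common way, illustrated here with hypothetical names, is an internal interface layer that wraps the vendor’s API so that your business rules never live inside the vendor’s product.

    # Sketch of an internal interface ("anti-corruption layer") around a SaaS
    # product. All names here are hypothetical; the point is that the business
    # rule depends only on our own contract, never on vendor specifics.
    from abc import ABC, abstractmethod

    class CrmGateway(ABC):
        """Internal contract; application code only ever depends on this."""
        @abstractmethod
        def customer_value(self, customer_id: str) -> float: ...

    class VendorACrm(CrmGateway):
        """Adapter for today's vendor; switching vendors means one new adapter."""
        def customer_value(self, customer_id: str) -> float:
            raw = {"lifetime_spend": 1200.0}  # stand-in for the vendor API call
            return raw["lifetime_spend"]

    def discount_rate(crm: CrmGateway, customer_id: str) -> float:
        """A key business rule, kept in our code rather than the vendor package."""
        return 0.10 if crm.customer_value(customer_id) > 1000 else 0.02

    print(discount_rate(VendorACrm(), "c-42"))  # 0.1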

With these caveats in mind, and a critical eye on your contract to avoid onerous terms and lock-ins, you will be successful with the project-level decisions. But you should drive optimization at the portfolio level as well. If you are a medium to large enterprise, you should be driving your internal infrastructure to mature its offering into an internal private cloud. Virtualization, already widespread in the industry, is just the first step. You should move to eliminate or minimize custom configurations (preferably to less than 20% of your server population). Next, invest in the tools, processes and engineering to heavily automate the provisioning and management of the data center. Doing this will also improve the quality of service.

Make sure that you do not shift so much of your processing to SaaS that you ‘balkanize’ your own utility, leaving your data center operating subscale and inefficiently. Should you overreach, expect to incur heavy integration costs on subsequent initiatives (because your functionality will be spread across multiple SaaS vendors in many data centers), to experience performance issues as your systems operate at WAN speeds rather than LAN speeds across those centers, and to lose negotiating position with SaaS providers because you have lost your ‘in-source’ strength.

I would venture that over the next five years a well-managed IT shop will see:

  • the most growth in its SaaS and engineered stack portfolio,
  • a conversion from custom infrastructure to a robust private cloud, with a small sliver of custom remaining for unconverted legacy systems, and
  • minimal growth in PaaS and IaaS (growth here is actually driven by small to medium firms).
This transition is represented symbolically in the chart below:
Data Center Transition Over the Next 5 Years

So, on the road to a cloud future, SaaS and engineered stacks will be a part of nearly every company’s portfolio. Vendor lock-in could be around every corner, but good IT shops will leverage these capabilities judiciously, develop their own private cloud capabilities, retain critical IP and avoid the lock-ins. We will see far greater efficiency in the data center as custom configurations are heavily reduced. While the prospects are indeed ‘cloudy’, the future is potentially bright for the thoughtful IT shop.

What changes or guidelines would you apply when considering cloud computing and the many offerings? I look forward to your perspective.

This post appeared in its original version at InformationWeek on January 4. I have extended and revised it since then.

Best, Jim Ditmore

At the Start of 2013: Topics, Updates, and Predictions

Given it is the start of the year, I thought I would map out some of the topics I plan to cover in my posts this coming year, along with some planned improvements to the reference page areas. As you know, the focus of Recipe for IT is practical, workable techniques and advice that work in the real world and enable IT managers to be more successful. 2012 was a very successful year, with over 34,000 views from over 100 countries, though most are from the US, UK, and Canada. I wish to thank the many who have contributed comments and feedback — it has really helped me craft a better product. So with that in mind, please provide your perspective on the upcoming topics, especially if there are areas you would like to see covered that are not listed.

As you know, I have structured the site into two main areas: posts, which are short, timely essays on a particular topic, and pages, which often take a post and provide a more structured and possibly deeper view of the topic. The pages are intended to be an ongoing reference of best practice for you to leverage.

For posts, I will continue the discussion on cloud and data centers. I will also delve more into production practices and how to achieve high availability. Some of you may have noticed that some posts appear first on InformationWeek and then subsequently here. This helps increase the exposure of Recipe for IT and also ensures good editing (!).

For the reference pages, I have recently refined, and will continue to improve, the project delivery and project management sections. Look also for updates and improvements to the IT efficiency and cost reduction material as well as the service desk section.

What other topics would you like to see explored? Please comment and provide your feedback and input.

And now to the fun part, six predictions for Technology in 2013:

6. 2013 is the year of the ‘connected house’ as standards and ‘hub’ products achieve critical mass.

5. The IT job market will continue to tighten, requiring companies to invest in growing talent as well as in higher IT compensation.

4. Fragmentation will multiply in the mobile market, leaving Apple and Samsung as the only companies commanding premiums for their products.

3. HP will suffer further distress in the PC market, both from tablet cannibalization and from aggressive performance by Lenovo and Dell.

2. The corporate server market will continue to experience minimal increases in volume and flat or downward pressure on revenue.

1. Microsoft will do a Coke Classic on Windows 8.

As IT managers, we benefit from strong, robust competition – so perhaps both Microsoft and HP will sort out their issues in the consumer device/OS space and come back stronger than ever.

What would you add or change on the predictions? I look forward to your take on 2013 and technology!

Best, Jim Ditmore

 

Smartphones and 2013: What we really want

With CES 2013 starting this week, we will see a number of new features and product introductions, particularly in the Android space. Some of the new features are questionable (do we really want our smartphones to be projectors?). But the further fragmentation (not just within Android but also with the advent of Tizen, the Linux-based OS) will drive feature innovation and differentiation faster. And to help with that differentiation, here is an updated list of features I’d like to see in 2013!

1. Multiple Personalities: It would be great to be able to use one device for personal and for business – but seamlessly. Consider a phone where your office or business mobile number rings to your phone as well as your personal number. And when your boss calls, the appropriate ring, screen background, contacts, and everything else align with the number being called. You can switch back and forth between your business world and your personal world on one device, not two (or, if you are using one device today, you get both numbers with no mixing). Some versions of these phones are available today, but not on the best smartphones, and not in a fully finished form.

2. Multiple SIMs: Multiple SIM phones, where two or more SIM cards are active at the same time, have been available in some form since 2000. These enable you to leverage two networks at once (or have a business phone on one network and a personal phone on another), or to more easily handle different networks when traveling (e.g., one network for domestic use, one for Europe and one for Asia). When you land in a new country, you can keep your primary SIM in your phone, purchase a low-cost local SIM and voila! You still receive calls on your primary number but can make local calls inexpensively on the local SIM. Today these features are available on low-end phones in developing markets and China, so why not on the high-end smartphones of the developed world? Samsung may be cracking this barrier: there are reports of a dual SIM Samsung high-end smartphone. Perhaps Apple will follow? This would be much to the dismay of the carriers, as it then becomes easy to switch carriers (call by call) and lower your costs.

3. Better, perhaps seamless voice: Siri can be good in some situations, but like all other voice apps currently on the market, its limitations are real. And those limitations are particularly evident when you most need to be hands-free, like when you are driving. With continued improvement in processor speed and voice recognition software, we should see next-generation voice recognition that makes using voice an ease rather than a chore.

4. The nano-smartphone companion: How many times have you been exercising, on a fishing trip, or out for a night on the town or an elegant evening, and the last thing you want to do is bring along a large, potentially bulky smartphone that you might lose (or drop in the lake)? Why can’t you have a nano iPod with the same number and contacts as your iPhone that works as a passable phone? Then you can leave the iPhone at home, bring your music and a nano cell phone, and not worry all evening about losing it! Again, the manufacturers must work with the carriers to enable two devices with the same number to be on the network, and to let you choose to which one the calls ring. But think of the convenience and possibilities of having multiple orchestrated devices, each tuned precisely to what and when you want to use them. Isn’t this what Apple does best?

5. Better power management: Even with the continued advances in battery life, nearly everyone encounters times each month when their usage or their apps have completely drained the battery. Today's data-intensive apps can chew up battery life quickly without the user being aware. Why not alert the user to high usage (rather than waiting until the battery is almost dead), and offer the option of a power-saving mode? When this mode is selected, the phone OS switches apps to low power unless the user overrides it. This would keep power-hog apps from draining the battery on unimportant tasks, and it would avoid the late afternoon or evening travail of discovering your phone is dead just when you need it to make a call.
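A minimal sketch of the policy I am describing (the thresholds and app data are all hypothetical; this is not any phone OS's actual API):

```python
# Hypothetical power-saving policy: alert on high drain early, then
# switch background apps to low power unless the user overrides.
# Thresholds and app names are invented for illustration.

HIGH_DRAIN_PCT_PER_HOUR = 15   # alert if battery drains faster than this
LOW_POWER_THRESHOLD = 40       # throttle background apps below this level

def power_check(battery_level, drain_rate, apps, user_overrides):
    """apps: list of (name, is_background, est_drain_pct_per_hour)."""
    actions = []
    if drain_rate > HIGH_DRAIN_PCT_PER_HOUR:
        # Tell the user now, while there is still battery left to save
        culprits = [name for name, bg, drain in apps if drain >= 5]
        actions.append("alert: high drain from " + ", ".join(culprits))
    if battery_level < LOW_POWER_THRESHOLD:
        for name, is_background, drain in apps:
            if is_background and name not in user_overrides:
                actions.append("switch " + name + " to low-power mode")
    return actions

# Example: 35% battery, draining 20% per hour
print(power_check(35, 20,
                  [("maps", True, 8), ("mail", True, 2), ("music", False, 4)],
                  user_overrides={"music"}))
```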

6. Socially and physically aware: There are plenty of apps that create social networks and provide some physical awareness, and some phone plans let you know where a family member is by their device location. But today you still need a precise combination of device, app, and selected options, which minimizes the possibility of casual interaction with your acquaintances. Consider your LinkedIn network: when you are traveling for business, it would be excellent to choose to let your connections know you are walking through O'Hare, and for those associates who choose similarly, you would know that your colleague John is at gate B5, which you happen to be walking by, and you could stop and chat before you catch your flight. You could choose to be anonymous, aware just to your friends or connections, or, for extroverts, publicly aware. Unfortunately, this would require a common 'awareness' standard and security model across devices and social sites, and at this stage of the social media 'Oklahoma land rush' it is doubtful that the required cooperation would occur.
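To show how small such a standard could be, here is a sketch of a presence record with the visibility levels described above (the field names and levels are my invention, not an existing protocol):

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    ANONYMOUS = 0   # share nothing
    FRIENDS = 1     # visible to confirmed connections only
    PUBLIC = 2      # visible to anyone nearby

@dataclass
class PresenceRecord:
    user_id: str
    place: str               # coarse location, e.g. "ORD gate B5"
    visibility: Visibility

def visible_to(record, viewer_id, connections):
    """Return True if the viewer may see this presence record."""
    if record.visibility is Visibility.PUBLIC:
        return True
    if record.visibility is Visibility.FRIENDS:
        return viewer_id in connections.get(record.user_id, set())
    return False

# John shares his gate with his connections only
john = PresenceRecord("john", "ORD gate B5", Visibility.FRIENDS)
connections = {"john": {"jim"}}
print(visible_to(john, "jim", connections))       # True: stop and chat
print(visible_to(john, "stranger", connections))  # False: stays private
```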

7. Better 'offline' capabilities: Far too many apps today still require a constant internet connection to work, even apps whose likely use is offline; translation apps and London tube apps come to mind. Why can't you download 90% of the translation data to your app while on your home wi-fi, and then, when in Paris, bring up the app for fast offline translation instead of paying international data rates? (At those rates, a paid translator would be cheaper and much faster.) Again, I wonder how much collusion (or lack of common sense) goes into encouraging nonsensical data usage versus designing 'data-lite' apps.
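Here is the 'data-lite' pattern in sketch form: pre-fetch the bulk of the data on wi-fi, then serve lookups from the local copy when roaming (the cache file and phrase list are invented for illustration):

```python
import json, os

CACHE_FILE = "translations_fr.json"  # hypothetical local cache

def prefetch_on_wifi(fetch_remote):
    """On home wi-fi: download the common phrase set once and store it."""
    phrases = fetch_remote()          # e.g. the 90% most-used phrases
    with open(CACHE_FILE, "w") as f:
        json.dump(phrases, f)

def translate(text, online, fetch_remote_one):
    """Offline-first lookup: try the local cache, and only hit the
    network if we are online and the phrase is missing."""
    cache = {}
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            cache = json.load(f)
    if text in cache:
        return cache[text]
    if online:
        return fetch_remote_one(text)  # rare cache miss
    return None                        # offline and uncached: degrade gracefully

# Usage: prefetch at home, then translate in Paris with data roaming off
prefetch_on_wifi(lambda: {"hello": "bonjour", "thank you": "merci"})
print(translate("hello", online=False, fetch_remote_one=None))  # "bonjour"
```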

These are the seven features I would like to see in 2013. And while I am sure there are phones or apps that deliver some of these features, it would be a real advance to have them mainstreamed on the latest and best smartphones. (Though I am still looking for a great translation app with good 'offline' capability; if you know of one, please recommend it!) What features would you like to see in the next generation of smartphones in 2013?

Best, Jim Ditmore

A Cloudy Future: The Rise of Appliances and SaaS

As I mentioned in my previous post, I will be exploring infrastructure trends, and in particular, cloud computing. But while cloud computing is getting most of the marketing press, there are two additional phenomena that are capturing as much if not more of the market: computer appliances and SaaS. So, before we dive deep into cloud, let’s explore these other two trends and then set the stage for a comprehensive cloud discussion that will yield effective strategies for IT leaders.

Computer appliances have been available for decades, typically in the network, security, database and specialized compute spaces. Firewalls and other security devices have long leveraged an appliance approach, where generic technology (CPU, storage, OS) is closely integrated with additional special purpose software and sold and serviced as a packaged solution. Specialized database appliances for data warehousing were quite successful starting in the early 1990s (remember Teradata?).

The tighter integration of appliances gives them a significant advantage over traditional approaches built on generic systems. First, the integrator of the package is often also the supplier of the software, and thus can better tune the software's performance and capacity to a specific OS and hardware set. Further, this integrated stack requires much less installation and implementation effort by the customer. The end result can be impressive performance for a cost similar to a traditional generic stack, without the implementation effort or difficulties. Thus appliances can have a compelling performance and business case for the typical medium or large enterprise. And they are compelling for the technology supplier as well, because they command higher prices and much higher margins than the individual components.

It is important to recognize that appliances are part of a normal tug and pull between generic and specialized solutions. In essence, throughout the past 40 years of computing, there has been constant improvement in generic technologies under the march of Moore's Law. And with each advance there are two paths to take: leverage generic technologies and keep your stack loosely coupled, so you can continue to ride the advance of generic components; or closely integrate your stack with the then-current components and drive much better performance from that integration.

By their very nature, though, appliances become rooted in a particular generation of technology. The initial iteration can be built with the latest technology, but the integration will likely result in tight links to the OS, hardware and other underlying layers in order to wring out every available performance improvement. These tight links yield both the performance improvement and the chains to a particular generation of technology. Once an appliance is developed and marketed successfully, ongoing evolutionary improvements will continue to be made, layering in further links to the original base technology. And the margins themselves are addictive, with suppliers doing everything possible to maintain them (thus low-cost evolutionary advances will occur, but revolutionary, next-generation advances will likely require too high an investment to sustain the margins). This spells the eventual fading and demise of the appliance, as generic technologies continue their relentless advance and typically surpass the appliance within 2 or 3 generations. This is represented in the chart below and can be seen in the evolution of data warehousing.

The Leapfrog of Appliances and Generics

The first instances of data warehousing were built on the primary generic platform of the time (the mainframe) and mainstream databases. But with the rise of another generic technology, proprietary chipsets from the midrange and high-end workstation sector, Teradata and others combined these chipsets with specialized hardware and database software to develop much more powerful data warehouse appliances. From the late 1980s through the 1990s, the Teradata appliance maintained a significant performance and value edge over generic alternatives. That edge began to fray around 2000, as mainstream databases and server chipsets, combined with low-cost operating systems and storage, could match the performance of Teradata at much lower cost. In this instance, the Teradata appliance held a significant performance advantage for about 10 years before falling back into or below mainstream generic performance; the value advantage diminished much sooner, of course. Typically, the appliance performance advantage lasts 4 to 6 years at most. Thus, early in the cycle (typically 3 to 4 generic generations, or 4 to 5 years), an appliance offering will present material performance and possibly cost advantages over traditional, generic solutions.
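To illustrate the leapfrog dynamic, here is a back-of-the-envelope model (a sketch only; the starting advantage, improvement rates, and generation length are assumptions, not measured figures):

```python
# Toy model of the appliance-vs-generic leapfrog. All rates are invented
# for illustration: the appliance launches with a 3x performance edge,
# generics double every 2-year generation (a Moore's Law pace), and the
# appliance, chained to its base technology, improves only 30% per
# generation through evolutionary tuning.

APPLIANCE_START = 3.0    # relative performance at launch
GENERIC_GROWTH = 2.0     # generic multiplier per generation
APPLIANCE_GROWTH = 1.3   # appliance multiplier per generation
YEARS_PER_GEN = 2

generic, appliance = 1.0, APPLIANCE_START
for gen in range(6):
    lead = appliance / generic
    print(f"year {gen * YEARS_PER_GEN:2d}: appliance lead = {lead:.2f}x")
    if lead < 1.0:
        break   # generics have caught up
    generic *= GENERIC_GROWTH
    appliance *= APPLIANCE_GROWTH
```

Under these invented rates, the generic platform catches up in about three generations (roughly six years), consistent with the 4-to-6 year advantage window described above.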

As a technology leader, I recommend the following considerations when looking at appliances:

  • If you have real business needs that will drive significant benefit from such performance, then investigate the appliance solution.
  • Keep in mind that in the mid-term the appliance solution will steadily lose its advantage and subsequently cost more than the generic solution. Understand where the appliance solution is in its evolution; this will determine its effective life and the likely length of your advantage over generic systems.
  • Factor in the hurdle, or 'switchback', costs at the end of its life. (The appliance will likely require a hefty investment to transition back to generic solutions that have steadily marched forward.) A simple cost sketch follows this list.
  • The switchback costs will be much higher where business logic is layered in (e.g., for middleware, database or business software appliances) than for network or security appliances, where there is minimal custom business logic layered in.
  • Include the level of integration effort and cost required. Often a few appliances within a generic infrastructure will integrate smoothly and at less cost. On the other hand, weaving multiple appliances into one service stack can drive much higher integration costs and fail to yield the desired results. Remember that you have limited flexibility with an appliance due to its integrated nature, and this can cause issues when appliances are strung together (e.g., a security appliance with a load balancer appliance with a middleware appliance with a business application appliance and a data warehouse appliance!).
  • Note that for certain areas, security and network in particular, the follow-on to an appliance will often be a next-generation appliance from the same or a different vendor. This is because there is minimal business logic incorporated in the system (yes, there are lots of parameter settings, like firewall rules customized for a business, but the firewall operates essentially the same regardless of the business that uses it).
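As for the switchback math mentioned above, here is a minimal sketch of how I would weigh it (every figure is a placeholder to show the arithmetic, not a benchmark):

```python
# Hypothetical appliance-vs-generic cost comparison over the appliance's
# useful life. Every number below is a placeholder for illustration.

def appliance_case(appliance_cost, generic_cost, annual_benefit,
                   advantage_years, switchback_cost):
    """Net value of choosing the appliance: the performance benefit it
    delivers during its advantage window, minus its price premium and
    the cost of transitioning back to generics at end of life."""
    premium = appliance_cost - generic_cost
    benefit = annual_benefit * advantage_years
    return benefit - premium - switchback_cost

# Example: a $2.5M appliance vs a $1.5M generic build, $600K/year of
# business benefit from the performance edge, a 5-year advantage
# window, and a $1.2M switchback project afterwards.
net = appliance_case(2_500_000, 1_500_000, 600_000, 5, 1_200_000)
print(f"net value of appliance: ${net:,.0f}")  # $800,000 -> worth doing
```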

With these guidelines, you should be able to make better decisions about when to use an appliance and how much of a premium you should pay.

In my next post, I will cover SaaS and I will then bring these views together with a perspective on cloud in a final post.

What changes or additions would you make when considering appliances? I look forward to your perspective.

Best, Jim Ditmore