Expect More Casualties

Smart phones, tablets, and their digital ecosystems have had a stunning impact on a range of industries in just a few short years. Those platforms changed how we work, how we shop, and how we interact with each other. And their disruption of traditional product companies has only just begun.

The first casualties were the entrenched smart phone vendors themselves, as iOS and Android devices and their platforms rose to prominence. It is remarkable that BlackBerry, which owned half of the US smart phone market at the start of 2009, saw its share collapse to barely 10% by the end of 2010 and to less than 1% in 2014, even as it responded with comparable devices. It is proving nearly impossible for BlackBerry to re-establish a foothold in a market where your ‘platform’ (your OS software and its features, the number of apps in your store, your additional cloud services, and the momentum in your user or social community) is as important as the device itself.

A bit further afield is the movie rental business. Unable to compete with electronic delivery to a range of consumer devices, Blockbuster filed for bankruptcy protection in September 2010, just six years after its market peak. Over in another content business, Borders, the slowest of the big bookstore chains, filed for bankruptcy shortly after, while the other big chain, Barnes & Noble, has hung on with its Nook tablet and better store execution — a “partial” platform play. But the likes of Apple, Google, and Amazon have already won this race, with their vibrant communities, rich content channels, value-added transactions (Geniuses and automated recommendations), and constantly evolving software and devices. Liberty Media recently voted on the likely outcome of this industry with its divestment from Barnes & Noble.

What’s common to these early casualties? They failed to anticipate and respond to fundamental business model shifts brought on by advances in mobile, cloud computing, application portfolios and social communities. Combined, these technologies have evolved into lethal platforms that can completely overwhelm established markets and industries. They failed to recognize that their new competitors were operating on a far more comprehensive level than their traditional product competitors. Competing on product versus platform is like a catapult going up against a precision-guided missile.

Sony provides another excellent example of a superior product company (remember the Walkman?) getting mauled by platform companies. Or consider the camera industry: IDC says shipments of what it calls “interchangeable-lens cameras”, or high-end digital cameras, peaked in 2012 and predicts they will decline 9.1% this year compared with last year as the likes of Apple, HTC, Microsoft, and Samsung build high-quality cameras into their mobile devices. By some estimates, the high-end camera market in 2017 will be half what it was in 2012 as those product companies try to compete against the platform juggernauts.

The casualties will spread throughout other industries, from environmental controls to security systems to appliances. Market leadership will go to those players using Android or iOS as the primary control platform.

Over in the gaming world, while the producers of content (Call of Duty, Assassin’s Creed, Madden NFL, etc.) are doing well, the console makers are having a tougher time. The market has already split permanently into games on mobile devices and those for specialized consoles, making the business much more turbulent for the console makers. Wii console maker Nintendo, for example, is expected to report a loss this fiscal year. If not for some dedicated content (e.g., Mario), the game might already be over for the company. In contrast, however, Sony’s PS4 and Microsoft’s Xbox One had successful launches in late 2013, with improved sales and community growth bolstering both “partial” platforms for the long term.

In fact, the retail marketplace for all manner of goods and services is changing to the point where almost all transactions start with the mobile device, leaving little room for traditional stores that can’t compete on price. Those stores must either add physical value (touch and feel, in-person expertise), experience (malls with ice skating rinks, climbing walls, aquariums), or exclusivity and service (Nordstrom) to thrive.

It is difficult for successful product companies to move in the platform direction, even as they start to see the platforms eating their lunch. Even for technology companies, this recognition is difficult. Only earlier this year did Microsoft drop the license fee for its ‘small screen’ operating systems. After several years, Microsoft finally realized that it can’t win against mobile platform behemoths that give away their OS while it charges steep licensing fees for its own mobile platform.

It will be interesting to see if Microsoft’s hugely successful Office product suite can compete over the long term with a slew of competing ecosystem plays. By extending Office to the iPad, Microsoft may be able to graft onto that platform and maintain its strong performance. While it’s still early to predict who will ultimately win that battle, I can only reference the battle of consumer iPhone and Android versus corporate BlackBerry — and we all know who won that one.

Over the next few years, expect more battles and casualties in a range of industries, as players leveraging Android, iOS, and other cloud/community platforms take on entrenched companies. Even icons such as Sony and Microsoft are at risk, should they cling to traditional product strategies.

Meantime, the likes of Google, Apple, Amazon, and Facebook are investing in future platforms — for homes, smart cars, robotics and drones, etc. As the disruption from the smart phone platforms continues, these new platforms will extend it further, so expect more casualties among traditional product companies, even in seemingly unrelated industries.

This post first appeared in InformationWeek in February. It has been updated. Let me know your thoughts about platform futures. Best, Jim Ditmore.

Moving from Offshoring to Global Service Centers II

As we covered in our first post on this topic, since the mid-90s, companies have used offshoring to achieve cost and capacity advantages in IT. Offshoring was a favored option to address Y2K issues and has continued to expand at a steady rate throughout the past twenty years. But many companies still approach offshoring as ‘out-tasking’ and fail to leverage the many advantages of a truly global and high performance work force.

With out-tasking, companies take a limited set of functions or ‘tasks’ and move these to the offshore team. They often achieve initial economic advantage through labor arbitrage and perhaps some improvement in quality as the tasks are documented and standardized in order to make it easier to transition the work to the new location. This constitutes the first level of a global team: offshore service provider. But larger benefits are often lost, and only select organizations have matured the model to its highest performance level as ‘global service centers’.

So, how do you achieve high performance global service centers instead of suboptimal offshore service providers? As discussed previously, you must establish the right ‘global footprint’ for your organization. Here we will cover the second half of getting to global service centers: implementing a ‘global team’ model. Combined with the right footprint, you will be able to achieve global service centers and enable competitive advantage.

Global team elements include:

  • consistent global goals and vision across global sites with commensurate rewards and recognition by site
  • a matrix team structure that enables both integrated processes and local and global leadership and controls
  • clarity on roles based on functional responsibility and strategic competence rather than geographic location
  • the opportunity for growth globally from a junior position to a senior leader
  • close partnership with local universities and key suppliers at each strategic location

To understand the variation in performance for the different structures, first consider the effectiveness of your entire team – across the globe – on several dimensions:

  • level of competence (skill, experience)
  • productivity, ability to improve current work
  • ownership and engagement
  • customization and innovation contributions
  • source of future leaders

For an offshore service provider, where work has been out-tasked to a particular site, the team can provide similar or, in some cases, better levels of competence. Because of the lower cost in the offshore location, if there is adequate skilled labor, the offshore service provider can more easily acquire such skill and experience within a given budget. A recognizable global brand helps with this talent acquisition. But since only tasks are sent to the center, productivity and continuous improvement can only be applied to the portions of the process within the center. Requirements, design, and other early stage activities are often left primarily to the ‘home office’, with little ability for the offshore center to influence them. Further, process standards and ownership typically remain at the home office as well, even though most implementation may be done at the offshore service provider. This creates a further gap where the implications of new standards or home office process ‘improvements’ must be borne by the offshore service provider even if the theory does not work well in actual practice. And since implementation and customer interfaces are often limited as well, the offshore service provider receives little real feedback, further constraining the improvement cycle.

For the offshore service provider, the ability to improve processes and productivity is limited to local optimization only, and capabilities are often at the whim of poor decisions from a distant home office. More comprehensive productivity and process improvements can be achieved by devolving competency authority to the primary team executing the work. So, if most testing is done in India, then testing process ownership and testing best practices responsibility should reside in India. By shifting process ownership closer to the primary team, there will be a natural interchange and flow of ideas and feedback that will result in better improvements, better ownership of the process, and better results. The process can and should still be consistent globally; the primary competency ownership just resides at its primary practice location. This will result in a highly competent team striving to be among the best in the world. Even better, the best test administrators can now aspire to become test best practice experts and see a longer career path at the offshore location. Their productivity and knowledge levels will improve significantly. These improvements will reduce attrition and increase employee engagement in the test team, not just in India but globally. In essence, by moving from proper task placement to proper competency placement, you enable both the offshore site and the home sites to perform better on team skill and experience, as well as on team productivity and process improvement.

Proper competency placement begins the movement of your sites from offshore service providers to global service excellence. Couple competency placement with transparent reporting on the key metrics for the selected competencies (e.g., all test teams, across the globe, should report based on best in class operational metrics) and drive improvement cycles (local and global) based on findings from the metrics. Full execution of these three adjustments will enable you to achieve sustained productivity improvements of 10 to 30% and lower attrition rates (of your best staff) by 20 to 40%.

It is important to understand that pairing competency leadership with primary execution is required in IT disciplines much more so than in other fields due to the rapid evolution of technology practices, the frequent need to engage multiple levels of the same expertise to resource and complete projects, and the ambiguity and lack of clear industry standards in many IT engineering areas. In many other industries (manufacturing, chemicals, petroleum), stratification between engineering design and implementation is far more rigorous and feasible given the standardization of roles and slower pace of change. Thus, organizations in those industries can operate far closer to optimum even with task offshoring, in a way that is just not possible in the IT space over any sustained time frame.

To move beyond global competency excellence, the structures around functions (the entire processes, teams and leadership that deliver a service) must be optimized and aligned. First and foremost, goals and agenda must be set consistently across the globe for all sites. There can be no sub-agendas where offshore sites focus only on meeting their SLAs or capturing a profit; instead, the goals must be the appropriate IT goals globally. (Obviously, for tax purposes, certain revenue and profit overheads will be achieved, but that is an administrative process, not an IT goal.)

Functional optimization is achieved by integrating the functional management across the globe where it becomes the primary management structure. Site and resource leadership is secondary to the functional management structure. It is important to maintain such site leadership to meet regulatory and corporate requirements as well as provide local guidance, but the goals, plans, initiatives, and even day-to-day activities flow through a natural functional leadership structure. There is of course a matrix management approach where often the direct line for reporting and legal purposes is the site management, but the core work is directed via the functional leadership. Most large international companies have mastered this matrix management approach and staff and management understand how to properly work within such a setup.

It is worth noting that within any large services corporation, ‘functional’ management will reign supreme over ‘site’ management. For example, in a debate over which critical projects the IT development team should tackle, it is the functional leaders, working closely with the global business units, who will define the priorities and make the decisions. And if the organization has a site-led offshore development shop, that shop will find out about the resources required long after the decisions are made (and be required to simply fulfill the tasks). Site management is simply viewed as not having the knowledge or authority to participate in any major debate. Thus, if your offshore centers are singularly aligned to site leadership all the way up the corporate chain, their ability to influence or participate in corporate decisions is minimal. However, if you have matrixed the structure to include a primary functional reporting mechanism, then the offshore team will have some level of representation. This increases particularly as managers and senior managers populate the offshore site and exercise functional control back into the home office and other sites.

Thus the testing team discussed earlier, if it is primarily located in India, would have not just responsibility for the competency and process direction and goals, but would also have the global test senior leader at its site, with test teams reporting to that leader back at the home office and other sites. This structure enables functional guidance and leadership from a position of strength. Priorities, goals, initiatives, and direction can now flow smoothly from around the globe to best inform the function. Staff in offshore locations feel committed to the function, resulting in far more energy and innovation arising from these sites. The corporation benefits from having a much broader pool of strong candidates for leadership positions. And not just more diverse candidates, but candidates who understand a global operating model and are comfortable reaching across time zones and cultures. Just what is needed to compete globally in the business. The chart below represents this transition from task to competency to function optimization.

Global Team Progression

If you combine functional optimization with a highly competitive site structure, you can typically organize each key function in 2 or 3 locations where its global functional leadership will reside. This then adds time-of-day and business continuity advantages. By having the same function at a minimum of two sites, even if one site is down the other can operate. Or IT work can be started at one site and handed off at the end of the day to the next site, which is just beginning its day (in fact, most world class IT command centers operate this way). Thus no one ever works the night shift. And time to market can be greatly improved by leveraging such time zone advantages.

While it is understandably complex to optimize across many variables (site location, contractor and skill mix, location cost, functional placement, competency placement, talent and skill availability), IT teams that achieve a global team model and put in place global service centers reap substantial benefits in cost, quality, innovation, and time to market.

To properly weigh these factors I recommend a workforce plan approach where each function or sub-function maps out its staff and leaders across site, contractor/staff mix, and seniority mix. Lay out the target to optimize across all key variables (cost, capability, quality, business continuity and so on) and then construct a quarterly trajectory of the function composition from the current state until it can achieve the target. Balance for critical mass, leadership, and likely talent sources. Now you have the draft plan of what moves and transactions must be made to meet your target. Every staff transaction (hires, rotations, training, layoffs, etc.) going forward should be weighed against whether it meshes with the workforce plan trajectory or not. Substantial progress to an optimized global team can then be made by leveraging a rising tide of accumulated transactions executed in a strategic manner. These plans must be accompanied or even introduced by an overall vision of the global team and reinforcement of the goals and principles required to enable such an operating model. But once these plans are laid, you and your organization can expect to achieve far better capabilities and results than just dispersing tasks and activities around the world.
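
To make the workforce plan tangible, here is a minimal sketch in Python of one way to represent a plan trajectory and test a staffing transaction against it; the function, sites, quarters, and headcounts are hypothetical illustrations, not a prescribed template.

    from dataclasses import dataclass

    # One row of a workforce plan: the planned composition of a function
    # at a given site for a given quarter (all figures are illustrative).
    @dataclass
    class PlanRow:
        function: str      # e.g., "Testing"
        site: str          # e.g., "Pune" or "Chicago" (hypothetical sites)
        quarter: str       # e.g., "Q1"
        staff: int         # permanent staff planned
        contractors: int   # contractor headcount planned
        seniors: int       # senior engineers / leaders planned

    # A draft trajectory for one function: current state moving toward the target.
    current = PlanRow("Testing", "Chicago", "Q1", staff=40, contractors=20, seniors=10)
    target  = PlanRow("Testing", "Chicago", "Q8", staff=25, contractors=5,  seniors=8)

    def fits_plan(now: PlanRow, goal: PlanRow, hires: int, contractor_adds: int) -> bool:
        """Weigh a proposed staffing transaction against the trajectory:
        does it move the site toward the target composition or away from it?"""
        staff_ok = hires == 0 or (goal.staff - now.staff) * hires > 0
        contractors_ok = contractor_adds == 0 or (goal.contractors - now.contractors) * contractor_adds > 0
        return staff_ok and contractors_ok

    # Adding 5 contractors in Chicago moves away from the target mix, so it fails the check.
    print(fits_plan(current, target, hires=0, contractor_adds=5))  # False

The point is simply that every proposed hire, rotation, or contractor change can be tested mechanically against the trajectory before it is approved.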

In today’s global competition, this global team approach is absolutely key for competitive advantage and essential for competitive parity if you are or aspire to be a top international company. It would be great to hear your perspectives and any feedback on how you or your company have been either successful (or unsuccessful) at achieving a global team.

I will add a subsequent reference page with Workforce Plan templates that can be leveraged by teams wishing to start this journey.

Best, Jim Ditmore

Moving from Offshoring to Global Shared Service Centers

My apologies for the delay in my post. It has been a busy few months, and it has taken an extended time since there is quite a bit I wish to cover in the global shared service center model. Since my NCAA bracket has completely tanked, I am out of excuses not to complete the writing, so here is the first post, with at least one to follow.

Since the mid-90s, companies have used offshoring to achieve cost and capacity advantages in IT. Offshoring was a favored option to address Y2K issues and has continued to expand at a steady rate throughout the past twenty years. But many companies still approach offshoring as ‘out-tasking’ and fail to leverage the many advantages of a truly global and high performance work force.

With out-tasking, companies take a limited set of functions or ‘tasks’ and move these to the offshore team. They often achieve initial economic advantage through labor arbitrage and perhaps some improvement in quality as the tasks are documented and standardized in order to make it easier to transition the work to the new location. This constitutes the first level of a global team: offshore service provider. But larger benefits are often lost; these typically include:

  • further ongoing process improvement,
  • better time to market,
  • wider service times or ‘follow the sun’,
  • and leverage of critical innovation or leadership capabilities of the offshore team.

In fact, the work often stagnates at whatever state it was in when it was transitioned, with little impetus for further improvement. And because lower level tasks are often the work that is shifted offshore while higher level design work remains in the home country, key decisions on design or direction can often take an extended period – actually lengthening time to market. In fact, design or direction decisions often become arbitrary or disconnected because the groups – one in the home office, the other in the offshore location – retain significant divides (time of day, perspective, knowledge of the work, understanding of the corporate strategy, etc). At its extreme, the home office becomes the ivory tower and the offshore teams become serf task executors and administrators. Ownership, engagement, initiative and improvement energies are usually lost in these arrangements. And it can be further exacerbated by having contractors at the offshore location, who have a commercial interest in maintaining the status quo (and thus revenue) and who are viewed with less regard by the home country staff. Any changes required are used to increase contractor revenues and margins. These shortcomings erase many of the economic advantages of offshoring over time and further impact the competitiveness of the company in areas such as agility, quality, and leadership development.

A far better way to approach your workforce is to leverage a ‘global footprint and a global team’. This approach is absolutely key for competitive advantage and essential for competitive parity if you are an international company. There are multiple elements of the ‘global footprint and team’ approach that, when effectively orchestrated by IT leadership, can achieve far better results than any other structure. By leveraging a high performance global approach, you can move from an offshore service provider to a shared service excellence center and, ultimately, to a global service leadership center.

The key elements of a global team approach can be grouped into two areas: high performance global footprint and high performance team. The global footprint elements are:

  • well-selected strategic sites, each with adequate critical mass, strong labor pools and higher education sources
  • proper positioning to meet time-of-day and improved skill and cost mix
  • knowledge and leverage of distinct regional advantages to obtain better customer interface, diverse inputs and designs, or unique skills
  • proper consolidation and segmentation of functions across sites to achieve optimum cost and capability mixes

Global team elements include:

  • consistent global goals and vision across global sites with commensurate rewards and recognition by site
  • a team structure that enables both integrated processes and local and global controls
  • the opportunity for growth globally from a junior position to a senior leader
  • close partnership with local universities and key suppliers at each strategic location
  • opportunity for leadership at all locations

Let’s tackle global footprint today; in a follow-on post I will cover global team. First and foremost is selecting the right sites for your company. Your current staff total size and locations will obviously factor heavily into your ultimate site mix. Assess your current sites using the following criteria:

  • Do they have critical mass (typically at least 300 engineers or operations personnel, preferably 500+) that will make the site efficient, productive and enable staff growth?
  • Is the site located where IT talent can be easily sourced? Are there good universities nearby to partner with? Are there business units co-located or customers nearby?
  • Is the site in a low, medium, or high cost location?
  • What is the shift (time zone) of the location?

Once you have classified your current sites with these criteria, you can then assess the gaps. Do you have sites in low-cost locations with strong engineering talent (e.g., India, Eastern Europe)? Do you have medium cost locations (e.g., Ireland or 2nd tier cities in the US midwest)? Do you have too many small sites (e.g., under 100 personnel)? Do you have sites close to key business units or customers? Are any sites located in 3rd shift time zones? Remember that your sites are more about the cities they are located in than the countries. A 2nd tier city in India or a 1st or 2nd tier city in Eastern Europe can often be your best site location because of improved talent acquisition and lower attrition compared with 1st tier locations in your home country or in India.

It is often best to locate your service center where there are strong engineering and business universities nearby that will provide an influx of entry level staff eager to learn and develop. Given that staff will be the primary cost factor in your service, ensure you locate in lower cost areas that have good language skills, access to the engineering universities, and appropriate time zones. For example, if you are in Europe, you should look to have one or two consolidated sites located just outside 2nd tier cities with strong universities: do not locate in Paris or London; instead, base your service desk in or just outside Manchester, Budapest, or Vilnius. This will enable you to tap into a lower cost yet high quality labor market that is also likely to provide more part-time workers to help you cover peak call periods. You can use a similar approach in the US or Asia.

A highly competitive site structure enables you to meet a global optimal cost and capability mix as well. At the most mature global teams in very large companies, we drove for a 20/40/40 cost mix (20% high cost, 40% medium and 40% low cost) where each site is in a strong engineering location. Where possible, we also co-located with key business units. Drive to the optimal mix by selecting 3, 4, or 5 strategic sites that meet the mix target and that will also give you the greatest spread of shift coverage. Once you have located your sites correctly, you must then of course drive effective recruiting, training, and management of each site to achieve outstanding service. Remember also that you must properly consolidate functions to these strategic sites. Each key function must be consolidated to 2 or 3 of the sites – you cannot run a successful function where there are multiple small units scattered around your corporate footprint. You will be unable to invest in the needed technology and provide an adequate career path to attract the right staff if the function is highly dispersed.
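
As a simple illustration of the cost side of that mix, the Python calculation below compares the blended cost per engineer for a heavily onshore mix against the 20/40/40 target; the fully loaded rates and the starting mix are invented for the example, not benchmarks.

    # Illustrative fully loaded annual cost per engineer by location tier (assumed figures).
    rates = {"high": 160_000, "medium": 95_000, "low": 45_000}

    mix_today  = {"high": 0.70, "medium": 0.20, "low": 0.10}  # hypothetical current footprint
    mix_target = {"high": 0.20, "medium": 0.40, "low": 0.40}  # the 20/40/40 target mix

    def blended_rate(mix: dict) -> float:
        """Weighted average cost per engineer for a given location mix."""
        return sum(rates[tier] * share for tier, share in mix.items())

    print(f"Current blended rate: ${blended_rate(mix_today):,.0f}")   # $135,500
    print(f"Target blended rate:  ${blended_rate(mix_target):,.0f}")  # $88,000
    # With these assumed rates, the 20/40/40 mix cuts the average cost per engineer
    # by roughly a third while keeping a high-cost core near the business.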

You can easily construct a matrix and assess your current sites against these criteria. Remember that these sites are likely among the most important investments your company will make. If you have a poor portfolio of sites, with inadequate labor resources, ineffective talent pipelines, or other issues, it will impact your company’s ability to attract and retain its most important asset and to achieve competitive success. It may take substantial investment and an extended period of time, but achieving an optimal global site structure and global team will provide lasting competitive advantage.
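
A minimal sketch of such a matrix, again in Python and with entirely hypothetical sites, weights, and scores, might look like the following; the criteria mirror the list above (critical mass, talent supply, cost position, shift coverage).

    # Hypothetical 1 (poor) to 5 (strong) scores for each current site against the criteria above.
    criteria_weights = {
        "critical_mass": 0.30,   # 300+ engineers, preferably 500+
        "talent_supply": 0.30,   # nearby universities, sourceable IT talent
        "cost_position": 0.25,   # low, medium, or high cost location
        "shift_coverage": 0.15,  # time zone relative to the rest of the footprint
    }

    sites = {
        "Home office (1st tier city)": {"critical_mass": 5, "talent_supply": 3, "cost_position": 1, "shift_coverage": 2},
        "2nd tier India city":         {"critical_mass": 4, "talent_supply": 5, "cost_position": 5, "shift_coverage": 5},
        "Eastern Europe city":         {"critical_mass": 3, "talent_supply": 4, "cost_position": 4, "shift_coverage": 3},
        "Small legacy site (<100)":    {"critical_mass": 1, "talent_supply": 2, "cost_position": 3, "shift_coverage": 2},
    }

    def site_score(scores: dict) -> float:
        """Weighted score for one site across all criteria."""
        return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

    for name, scores in sorted(sites.items(), key=lambda kv: -site_score(kv[1])):
        print(f"{name:30s} {site_score(scores):.2f}")

The weights are a judgment call; the value is in forcing an explicit, comparable view of every site against the same criteria before deciding where to consolidate.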

I will cover the global team aspects in my next post, along with the key factors in moving from an offshore service provider to shared service excellence to shared service leadership.

It would be great to hear your perspectives and any feedback on how you or your company have been either successful (or unsuccessful) at achieving a global team.

Best, Jim Ditmore

Celebrate 2013 Technology or Look to 2014?

The year is quickly winding down and 2013 will not be remembered as a stellar year for technology. Between the NSA leaks and Orwellian revelations, the Healthcare.gov mishaps, the cloud email outages (and Yahoo’s is still lingering) and now the 40 million credit identities stolen from Target, 2013 actually was a pretty tough year for the promise of technology to better society.

While the breakneck progress of technology continued, we witnessed so many shortcomings in its implementation. Fundamental gaps in large project delivery and in availability design and implementation continue to plague large and widely used systems. It is as if the primary design lessons of ‘Galloping Gertie’ regarding resonance were never absorbed by bridge builders. The costs of such major flaws in these large systems are certainly similar to those of a failed bridge. And as it turns out, if there is a security flaw or loophole, either the bad guys or the NSA will exploit it. I particularly like the NSA’s use of ‘smiley faces’ on internal presentations when they find a major gap in someone else’s system.

So, given 2013 has shown the world we live in all too clearly, as IT leaders let’s look to 2014 and resolve to do things better. Let’s continue to up the investment in security within our walls and be more demanding of our vendors to improve their security. Better security is the number 2 focus item (behind data analytics) for most firms and the US government. And security spend will increase an out-sized amount even as total spend goes up by 5%. This is good news, but let’s ensure the money is spent well and we make greater progress in 2014. Of course, one key step is to get XP out of your environment by March since it will no longer be patched by Microsoft. For a checklist on security, here is a good start at my best practices security reference page.

As for availability, remember that quality provides the foundation of availability. Whether in design, implementation or change, quality must be woven throughout these processes to enable robust availability and meet the demands of today’s 7×24 mobile consumers. Resolve to move your shop from craft to science in 2014, and make a world of difference for your company’s interface to its customers. Again, if you are wondering how best to start this journey and make real progress, check out this primer on availability.

Now, what should you look for in 2014? As with last January, where I made 6 predictions for 2013, I will make 6 technology predictions for 2014. Here we go!

6. There will be consolidation in the public cloud market as smaller companies fail to gather enough long term revenue to survive and compete in a market with rapidly falling prices. Nirvanix was the first of many.

5. NSA will get real governance, though it will be secret governance. There is too much of a firestorm for this to continue in current form.

4. Dual SIM phones become available in major markets. This is my personal favorite wish list item and it should come true in the Android space by 4Q.

3. Microsoft’s ‘messy’ OS versions will be reduced, but Microsoft will not deliver on the ‘one’ platform. Expect Microsoft to drop RT and continue to incrementally improve Pro and Enterprise to be more like Windows 7. As for Windows Phone OS, it is a question of sustained market share and the jury is out. It should hang on for a few more years though.

2. With a new CEO, a Microsoft breakup or spinoffs are in the cards. The activist shareholders are holding fire while waiting for the new CEO, but will be applying the flame once again. Effects? How about Office on the iPad? Everyone is giving away software and charging for hardware and services, forcing an eventual change in the Microsoft business model.

1. Flash revolution in the enterprise. What looked at the start of 2013 to be 3 or more years out now looks like this year. Flash storage priced (with de-duplication) comparably to traditional storage, and delivering 90% reductions in environmentals, will turn the shift into a stampede, with the next generation of flash costing significantly less than disk storage.

What are your top predictions? Anything to change or add?

I look forward to your feedback and next week I will assess how my predictions from January 2013 did — we will keep score!

Best, and have a great holiday,

Jim Ditmore

How Did Technology End Up on the Sunday Morning Talk Shows?

It has been two months since the Healthcare.gov launch and by now nearly every American has heard about or witnessed the poor performance of the websites. Early on, only one of every five users was able to actually sign in to Healthcare.gov, while poor performance and unavailable systems continued to plague the federal and some state exchanges. Performance was still problematic several weeks into the launch, and even as of Friday, November 30, the site was down for 11 hours for maintenance. As of today, December 1, the promised ‘relaunch day’, it appears the site is ‘markedly improved’ but there are plenty more issues to fix.

What a sad state of affairs for IT. So, what do the Healthcare.gov issues teach us about large project management and execution? Or further, about quality engineering and defect removal?

Soon after the launch, former federal CTO Aneesh Chopra, in an Aspen Institute interview with The New York Times‘ Thomas Friedman, shrugged off the website problems, saying that “glitches happen.” Chopra compared the Healthcare.gov downtime to the frequent appearances of Twitter’s “fail whale” as heavy traffic overwhelmed that site during the 2010 soccer World Cup.

But given that the size of the signup audience was well known and that website technology is mature and well understood, how could the government create such an IT mess? Especially given how much lead time the government had (more than three years) and how much it spent on building the site (estimated between $300 million and $500 million).

Perhaps this is not quite so unusual. Industry research suggests that large IT projects are at far greater risk of failure than smaller efforts. A 2012 McKinsey study revealed that 17% of IT projects budgeted at $15 million or higher go so badly as to threaten the company’s existence, and more than 40% of them fail. As bad as the U.S. healthcare website debut is, there are dozens of examples, both government-run and private, of similar debacles.

In a landmark 1995 study, the Standish Group established that only about 17% of IT projects could be considered “fully successful,” another 52% were “challenged” (they didn’t meet budget, quality or time goals) and 30% were “impaired or failed.” In a recent update of that study conducted for ComputerWorld, Standish examined 3,555 IT projects between 2003 and 2012 that had labor costs of at least $10 million and found that only 6.4% of them were successful.

Combining the inherent problems associated with very large IT projects with outdated government practices greatly increases the risk factors. Enterprises of all types can track large IT project failures to several key reasons:

  • Poor or ambiguous sponsorship
  • Confusing or changing requirements
  • Inadequate skills or resources
  • Poor design or inappropriate use of new technology

Unfortunately, strong sponsorship and solid requirements are difficult to come by in a political environment (read: Obamacare), where too many individual and group stakeholders have reason to argue with one another and change the project. Applying the political process of lengthy debates, consensus-building and multiple agendas to defining project requirements is a recipe for disaster.

Furthermore, based on my experience, I suspect the contractors doing the government work encouraged changes, as they saw an opportunity to grow the scope of the project with much higher-margin work (change orders are always much more profitable than the original bid). Inadequate sponsorship and weak requirements were undoubtedly combined with a waterfall development methodology and overall big bang approach usually specified by government procurement methods. In fact, early testimony by the contractors ‘cited a lack of testing on the full system and last-minute changes by the federal agency’.

Why didn’t the project use an iterative delivery approach to hone requirements and interfaces early? Why not start with healthcare site pilots and betas months or even years before the October 1 launch date? The project was underway for three years, yet nothing was made available until October 1. And why did the effort leverage only an already occupied pool of virtualized servers that had little spare capacity for a major new site? For less than 10% of the project costs, a massive dedicated server farm could have been built. Further, there was no backup site, nor were any monitoring tools implemented. And where was the horizontal scaling design within the application to enable easy addition of capacity for unexpected demand? It is disappointing to see such basic misses in non-functional requirements and design in a major program for a system that is not that difficult or unique.

These basic deliverables and approaches appear to have been entirely missed in the implementation of the website. Further, the website code appears to have been quite sloppy, not even using common caching techniques to improve performance. Thus, in addition to suffering from weak sponsorship and ambiguous requirements, this program failed to leverage well-known best practices for the technology and design.
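
To illustrate what ‘common caching techniques’ can mean in practice, here is a minimal sketch of response and lookup caching in a hypothetical Python/Flask service; the route and data are invented for illustration and are not drawn from the actual Healthcare.gov code.

    from functools import lru_cache
    from flask import Flask, jsonify

    app = Flask(__name__)

    @lru_cache(maxsize=1024)
    def plan_catalog(state: str) -> dict:
        # Expensive lookup (e.g., a backend or database call) cached in memory,
        # since reference data changes rarely relative to request volume.
        return {"state": state, "plans": ["bronze", "silver", "gold"]}  # placeholder data

    @app.route("/plans/<state>")
    def plans(state):
        response = jsonify(plan_catalog(state))
        # Let browsers and intermediaries reuse the response for an hour
        # instead of hitting the application servers on every page view.
        response.headers["Cache-Control"] = "public, max-age=3600"
        return response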

One would have thought that, given the scale and expenditure of the program, top technical resources would have been allocated to ensure these practices were used. The feds are scrambling with a “surge” of tech resources for the site. And while the new resources and leadership have made improvements so far, the surge will bring its own problems. It is very difficult to effectively add resources to an already large program. New ideas introduced by the ‘surge’ resources may not be accepted or easily integrated. And if the issues are deeply embedded in the system, it will be difficult for the new team to fully fix the defects. For every 100 defects identified in the first few weeks, my experience with quality suggests there are 2 or 3 times more defects buried in the system. Furthermore, if the project couldn’t handle the “easy” technical work (sound website design and horizontal scalability), how will it handle the more difficult challenges of data quality and security?

These issues will become more apparent in the coming months when the complex integration with backend systems from other agencies and insurance companies becomes stressed. And already the fraudsters are jumping into the fray.

So, what should be done and what are the takeaways for an IT leader? Clear sponsorship and proper governance are table stakes for any big IT project, but in this case more radical changes are in order. Why have all 36 states and the federal government roll out their healthcare exchanges in one waterfall or big bang approach? The exchanges that are working reasonably well (such as the District of Columbia’s) were developed independently. Divide the work up where possible, and move to an iterative or spiral methodology. Deliver early and often.

Perhaps even use competitive tension by having two contractors compete against each other for each such cycle. Pick the one that worked the best and then start over on the next cycle. But make them sprints, not marathons. Three- or six-month cycles should do it. The team that meets the requirements, on time, will have an opportunity to bid on the next cycle. Any contractor that doesn’t clear the bar gets barred from the next round. Now there’s no payoff for a contractor encouraging endless changes. And you have broken up the work into more doable components that can then be improved in the next implementation.

Finally, use only proven technologies. And why not ask the CIOs or chief technology architects of a few large-scale Web companies to spend a few days reviewing the program and designs at appropriate points? It’s the kind of industry-government partnership we would all like to see.

If you want to learn more about how to manage (and not to manage) large IT programs, I recommend “Software Runaways,” by Robert L. Glass, which documents some spectacular failures. Reading the book is like watching a traffic accident unfold: It’s awful but you can’t tear yourself away. Also, I expand on the root causes of and remedies for IT project failures in my post on project management best practices.

And how about some projects that went well? Here is a great link to the 10 best government IT projects in 2012!

What project management best practices would you add? Please weigh in with a comment below.

Best, Jim Ditmore

This post was first published in late October in InformationWeek and has been updated for this site.

Using Organizational Best Practices to Handle Cloud and New Technologies

I have extended and updated this post which was first published in InformationWeek in March, 2013. I think it is a very salient and pragmatic organizational method for IT success. I look forward to your feedback! Best, Jim

IT organizations are challenged to keep up with the latest wave of cloud, mobile and big data technologies, which are outside the traditional areas of staff expertise. Some industry pundits recommend bringing on more technology “generalists,” since cloud services in particular can call on multiple areas of expertise (storage, server, networking). Or they recommend employing IT “service managers” to bundle up infrastructure components and provide service offerings.

But such organizational changes can reduce your team’s expertise and accountability and make it more difficult to deliver services. So how do you grow your organization’s expertise to handle new technologies? At the same time, how do you organize to deliver business demands for more product innovation and faster delivery yet still ensure efficiency, high quality and security?

Rather than acquire generalists and add another layer of cost and decision making to your infrastructure team, consider the following:

Cloud computing. Assign architects or lead engineers to focus on software-as-a-service and infrastructure-as-a-service, ensuring that you have robust estimating and costing models and solid implementation and operational templates. Establish a cloud roadmap that leverages SaaS and IaaS, ensuring that you don’t overreach and end up balkanizing your data center.

For appliances and private cloud, given their multiple component technologies, let your best component engineers learn adjacent fields. Build multi-disciplinary teams to design and implement these offerings. Above all, though, don’t water down the engineering capacity of your team by selecting generalists who lack depth in a component field. For decades, IT has built complex systems with multiple components by leveraging multi-faceted teams of experts, and cloud is no different.

Where to use ‘service managers’. A frequent flaw in organizations is to employ ‘service managers’ who group multiple infrastructure components (e.g., storage, servers, data centers) into a ‘product’ (e.g., a ‘hosting service’) and provide direction and an interface for this product. This is an entirely artificial layer that removes accountability from the component teams and often makes poor ‘product’ decisions because of limited knowledge and depth. In the end IT does not deliver ‘hosting services’; IT delivers systems that meet business functions (e.g., for banking, teller or branch functions and ATMs; or for insurance, claims reporting or policy quote and issue). These business functions are the true IT services and are where you should apply a service manager role. Here, a service manager can ensure end-to-end integration and quality, drive better overall transaction performance and reliability, and provide deep expertise on system connections, SLAs and business needs back across the application and infrastructure component teams. And because the role is directly attached to the business functions being delivered, it will yield high value. These service managers will be invaluable for both new development and enhancement work as well as assisting during production issues.

Mobile. If mobile isn’t already the most critical interface for your company, it will be in three to five years. So don’t treat mobile as an afterthought, to be adapted from traditional interfaces. And don’t outsource this capability, as mobile will be pervasive in everything you build.

Build a mobile competency center that includes development, user experience and standards expertise. Then fan out that expertise to all of your development teams, while maintaining the core mobile group to assist with the most difficult efforts. And of course, continue with a central architecture and control of the overall user experience. A consistent mobile look, feel and flow is essentially your company’s brand, invaluable in interacting with customers.

Big data. There are two key aspects of this technology wave: the data (and traditional analytic uses) and real-time data “decisioning,” similar to IBM’s Watson. You can handle the data analytics as an extension of your traditional data warehousing (though on steroids). However, real-time decisioning has the potential to dramatically alter how your organization specifies and encodes business rules.

Consider the possibility that 30% to 50% of all business logic traditionally encoded in third- or fourth-generation programming languages instead becomes decisioned in real time. This capability will require new development and business analyst skills. For now, cultivate a central team with these skills. As you pilot and determine how to more broadly leverage real-time data decisioning, decide how to seed your broader development teams with these capabilities. In the longer run, I believe it will be critical to have these skills as an inherent part of each development team.
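
A minimal sketch of that shift, using an invented credit-line example: the first function is the traditional hand-coded rule; the second delegates the same decision to a scoring model evaluated in real time (the model interface shown follows the common scikit-learn style, and the threshold is hypothetical).

    # Traditional approach: the business rule is hand-coded by a developer.
    def approve_credit_line_rule(income: float, utilization: float) -> bool:
        return income > 50_000 and utilization < 0.6

    # Decisioned approach: the hard-coded rule is replaced by a model scored in real time.
    # 'model' stands in for any trained scorer (logistic regression, gradient boosting,
    # or a vendor decision engine) exposing a scikit-learn style predict_proba method.
    def approve_credit_line_model(model, features: list[float]) -> bool:
        score = model.predict_proba([features])[0][1]   # probability of a good outcome
        return score >= 0.7                             # threshold tuned by analysts, not coded logic

The business analyst’s job shifts from specifying branching logic to curating the features, training data, and thresholds behind the model.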

Competing Demands. Overall, IT organizations must meet several competing demands: Work with business partners to deliver competitive advantage; do so quickly in order to respond to (and anticipate) market demands; and provide efficient, consistent quality while protecting the company’s intellectual property, data and customers. In essence, there are business and market drivers that value speed, business knowledge and closeness at a reasonable cost, and risk drivers that value efficiency, quality, security and consistency.

Therefore, we must design an IT organization and systems approach that meets both sets of drivers and accommodates business organizational change. As opposed to organizing around one set of drivers or the other, the best solution is to organize IT as a hybrid organization to deliver both sets of capabilities.

Typically, the functions that should be consolidated and organized centrally to deliver scale, efficiency and quality are infrastructure (especially networks, data centers, servers and storage), IT operations, information security, service desks and anything else that should be run as a utility for the company. The functions to be aligned and organized along business lines to promote agility and innovation are application development (including Web and mature mobile development), data marts and business intelligence.

Some functions, such as database, middleware, testing and project management, can be organized in either mode. But if they aren’t centralized, they’ll require a council to ensure consistent processes, tools, measures and templates.

For services becoming a commodity, or where there’s a critical advantage to having one solution (e.g., one view of the customer for the entire company), it’s best to have a single team or utility that’s responsible (along with a corresponding single senior business sponsor). Where you’re looking to improve speed to market or market knowledge, organize into smaller IT teams closer to the business. The diagram below gives a graphical view of the hybrid organization.

The IT Hybrid Model diagram

With this approach, your IT shop will be able to deliver the best of both worlds. And you can then weave in the new skills and teams required to deliver the latest technologies such as cloud and mobile. You can read more about this hybrid model in our best practice reference page.

Which IT organizational approaches or variations have you seen work best? How are you accommodating new technologies and skills within your teams? Please weigh in with a comment below.

Best, Jim Ditmore

Massive Mobile Shifts and Keeping Score

As the first quarter of 2013 has come to a close, we see a technology industry moving at an accelerated pace. Consumerization is driving a faster level of change, with consequent impacts on the technology ecosystem and the companies occupying its different perches. From rapidly growing BYOD demand to the projected demise of the PC, we are seeing consumers shift their computing choices much faster than corporations, and some suppliers struggling to keep up. These rapid shifts require corporate IT groups to follow more quickly with their services. From implementing MDM (mobile device management), to increasing the bandwidth of wireless networks, to adopting tablets and smartphones as the primary customer interfaces for future development, IT teams must adjust to ensure effective services and competitive parity or advantage.

Let’s start with mobile. Consumers today use their smartphones to conduct much of their everyday business. And if they do not use the devices for the entire transaction, they often use them to research or initiate it. The lifeblood of most retail commerce has heavily shifted to the mobile channel. Thus, companies must have a significant and effective mobile presence to achieve competitive advantage (or even survive). Mobile has become the first order of delivery for company services; next in importance is the internet and then internal systems for call centers and staff. And since the vast majority of mobile devices (smartphone or tablet) are not Windows-based (nor is the internet), application development shops need to build or augment current Windows-oriented skills to enable native mobile development. Back end systems must be re-engineered to more easily support mobile apps. And given that your company’s competitive edge may be determined by its mobile apps, you need to be cautious about fully outsourcing this critical work.

Internally, such devices are becoming a pervasive feature of the corporate landscape. It is important to be able to accommodate many of the choices of your company’s staff and yet still secure and manage the client device environment. Thus, implementations of MDM to manage these devices and enable corporate security on the portion of the device that contains company data are increasing at a rapid pace. Further, while relatively few companies currently have a corporate app store, this will become a prevalent feature within a few years, and companies will shift from a ‘push’ model of software deployment to a ‘pull’ model. Further consequences of the rapid adoption of mobile devices by staff include needing to implement wireless at your company sites, adding visitor wireless capabilities (like a Starbucks wifi), or just increasing the capacity to handle the additional load (a 50% increase in internal wifi demand in January is not unheard of as everyone returns to the office with their Christmas gifts).

A further consequence of the massive shift to smartphones and tablets is the diminishing reach and impact of Microsoft, based on Gartner’s latest analysis and projections. The shift away from PCs and towards tablets in the consumer market reduces Microsoft’s largest revenue sources. It is stunning to realize that Microsoft, with its long consumer market history, could become ever more dependent on the enterprise rather than the consumer market. Yet, because consumer choices are rapidly making inroads into the corporate device market, even this will be a safe harbor for only a limited time.

With Windows 8, Microsoft tried to address both markets with one OS platform, perhaps not succeeding well in either. A potential outcome for Microsoft is to introduce the reported ‘Blue’ OS version, which will be a complete touch interface (versus a hybrid of touch and traditional). Yet Microsoft has struggled to gain traction against Android and iOS tablets and smartphones, so it is hard to see how this will yield significant share improvement. And with new Chrome devices and a reputed cheap iPhone coming, perhaps even Gartner’s projections for Microsoft are optimistic. The last overwhelming consumer OS competitive success Microsoft had was against OS/2 and IBM — Apple iOS and Google Android are far different competitors! With the consumer space so difficult to make headway in, my top prediction for 2013 is that Microsoft will subsequently introduce a new Windows ‘classic’ to satisfy the millions of corporate desktops where touch interfaces are inadequate or applications have not been redesigned. Otherwise, enterprises may sit pat on their current versions for an extended period, depriving Microsoft of critical revenue streams. Subsequent to the first version of this post, there were reports of Microsoft introducing Windows 8 stripped of the ‘Metro’ or touch interface! Corporate IT shops need to monitor these outcomes because once a shift occurs, there could be a rapid transition not just in the OS, but in the productivity suites and email as well.

There is also upheaval in the PC supplier base as a result of the worldwide sales decline of 13.9% (year over year in Q1). As also predicted here in January, HP struggled the most among the top 3 of HP, Lenovo and Dell. HP was down almost 24%, barely retaining the title of top volume manufacturer. Lenovo was flat, delivering the best performance in a declining market; it shipped 11.7 million units in the quarter, just below HP’s 12 million units. Dell suffered a 10.9% drop, which, given the company is up for sale, is remarkable. Acer and other smaller firms saw major drops in sales as well (more than 31% for Acer). The ongoing decline of the market will have a massive impact on the smaller participants, with consolidation and fallout likely occurring late this year and early in 2014. The real question is whether HP can turn around its rapid decline. It will be a difficult task because the smartphone, tablet and Chromebook onslaught is occurring while HP is facing a rejuvenated Lenovo and a very aggressive Dell. Ultrabooks will provide some margin and volume improvement, but not enough to make up for the declines. The current course suggests that early 2014 will see a declining market with Lenovo comfortably leading, followed by a lagging HP fighting tooth and nail with Dell for 2nd place. HP must pull off a major product refresh, supply chain tightening, and aggressive sales to turn it around. It will be a tall order.

Perhaps the next consumerization influence will be the greater use of desktop video. Many of our employees have experienced the pretty good video of Skype or FaceTime and will potentially expect similar experiences in their corporate conversations. Current internal networks often do not have the bandwidth for such casual and common video interactions, especially at smaller campuses or remote offices. It will be important for IT shops to manage the introduction of these capabilities so that more critical workloads are not impacted.

How is your company’s progress on mobile? Do you have an app store? Have you implemented desktop video? I look forward to hearing from you.

Best, Jim Ditmore

First Quarter Technology Trends to Note

For those looking for the early signs of spring, crocuses and flowering quince are excellent harbingers. For those looking for signs of technology trends and shifts, I thought it would be worthwhile to point out some new ones and provide further emphasis or confirmation of a few recent ones:

1. Enterprise server needs have flattened, and the cumulative effect of cloud, virtualization, SaaS, and appliances will mean the corporate server market has fully matured. The 1Q13 numbers bear out that this trend is continuing (as mentioned here last month). Some big vendors are even seeing revenue declines in this space. Unit declines are possible in the near future. The result will be consolidation in the enterprise server and software industry. VMware, BMC and CA have already seen their share prices fall as investors are concerned the growth years are behind them. Make sure your contracts consider potential acquisitions or change of control.

2. Could dual SIM smartphones be just around the corner? Actually, they are now officially here. Samsung just launched a dual SIM Galaxy in China, so it should not be long before other device makers follow suit. Dual SIM enables excellent flexibility – just what the carriers do not want. Consider that when you travel overseas, you will be able to insert a local SIM into your phone and handle all local or regional calls at low rates, while still receiving calls to your 'home' number. Or, for everyone who carries two devices, one for business and one personal: you can keep your business and personal numbers separate but carry only one device.

3. Further evidence has appeared of the massive compromises enterprises are experiencing due to Advanced Persistent Threats (APTs). Most recently, Mandiant published a report that ties the Chinese government and the PLA to a broad set of compromises of US corporations and entities over many years. If you have not begun to move your enterprise security from a traditional perimeter model to a post-perimeter design, make the investment. You can likely bet you are already compromised; you need to not only lock the doors but also thwart those who have breached your perimeter (a brief illustration follows after this list). A post here late last year covers many of the measures you need to take as an IT leader.

4. Big data and decision sciences could drive major change in both software development and business analytics. It may not match the level of change that computers brought to, say, payroll departments and finance accountants in the 1980s, but it could be wide-ranging. Consider that perhaps one third to one half of all business logic now encoded in systems (by analysts and software developers) could instead be handled by data models and analytics that make business rules and decisions in real time (see the sketch after this list). Perhaps all of the business analysts and software developers will then move to developing the models and proving them out, or we could see a fundamental shift in the skills demanded in the workplace. We still have accountants, of course; they just no longer do the large volume of administrative tasks. Now, perhaps applying this to legal work….

5. The explosion in mobile continues apace. Wireless data traffic is growing at 60 to 70% per year and is projected to continue at this pace. The mobile phone is likely to become the primary commerce device for most purchases in the next five years, so businesses are under enormous pressure to adapt and innovate in this space. Apps that can gracefully handle poor data connections (not everywhere is 4G) and hurried consumers will be critical for businesses (a sketch follows this list); unfortunately, there are not enough of them.
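For trend 3, here is a minimal sketch of one post-perimeter control: rather than relying on the firewall alone, monitor outbound flows and flag internal hosts whose egress volume jumps far above their own baseline. The flow-log columns (host, day, bytes_out) and the threshold multiplier are assumptions for illustration only.

```python
# Flag hosts whose latest daily outbound volume far exceeds their own historical
# baseline, a common sign of data exfiltration. Log format and threshold are assumed.
import csv
from collections import defaultdict

def flag_unusual_egress(flow_log_path: str, multiplier: float = 5.0):
    daily = defaultdict(lambda: defaultdict(int))        # host -> day -> bytes out
    with open(flow_log_path) as f:
        for row in csv.DictReader(f):
            daily[row["host"]][row["day"]] += int(row["bytes_out"])

    flagged = []
    for host, days in daily.items():
        if len(days) < 2:
            continue                                     # need history to form a baseline
        *history, (latest_day, latest_bytes) = sorted(days.items())
        baseline = sum(b for _, b in history) / len(history)
        if baseline and latest_bytes > multiplier * baseline:
            flagged.append({"host": host, "day": latest_day,
                            "bytes_out": latest_bytes, "baseline": baseline})
    return flagged
```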
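For trend 4, here is a minimal sketch of the shift being described: a decision that would traditionally be hand-coded as nested business rules is instead made by a model trained on historical outcomes. The file name, feature columns, and approval threshold are hypothetical, and scikit-learn is used purely as an illustration.

```python
# Hypothetical example: an approval decision driven by a trained model rather than
# hand-coded if/else rules. All names and the threshold are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_ratio", "tenure_months"]

history = pd.read_csv("past_decisions.csv")        # hypothetical historical outcomes
model = LogisticRegression().fit(history[FEATURES], history["repaid"])

def approve(applicant: dict) -> bool:
    """Real-time decision: the 'business rule' is now a model score plus one threshold."""
    score = model.predict_proba(pd.DataFrame([applicant])[FEATURES])[0][1]
    return score >= 0.80                           # threshold chosen by the business
```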
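And for trend 5, a small sketch of what gracefully handling poor data connections can look like in practice: short timeouts, exponential backoff with jitter, and a cached fallback when the network never cooperates. The in-memory cache is a placeholder, and Python's requests library simply stands in for whatever HTTP client your app platform provides.

```python
# Connection-tolerant fetch: short timeout, exponential backoff with jitter, and a
# stale-but-usable cached result as the fallback when retries are exhausted.
import random
import time
import requests

_cache = {}                                        # placeholder for real on-device storage

def resilient_get(url: str, retries: int = 4, timeout: float = 3.0):
    for attempt in range(retries):
        try:
            data = requests.get(url, timeout=timeout).json()
            _cache[url] = data                     # refresh cache on success
            return data
        except (requests.Timeout, requests.ConnectionError):
            time.sleep((2 ** attempt) + random.random())   # back off, then retry
    return _cache.get(url)                         # fall back to last known good data
```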

Any additions you would make? Please send me a note.

Best, Jim Ditmore

 

A Cloudy Future: Hard Truths and How to Best Leverage Cloud

We are long into the marketing hype cycle on cloud, which means clear criteria to assess and evaluate the different cloud options are critical. While cloud computing is often presented as homogeneous, there are many different types, from infrastructure as a service (IaaS) to software as a service (SaaS) and many flavors in between. Perhaps the best examples are Amazon's infrastructure services (IaaS), Google's email and office productivity services (SaaS), and Salesforce.com's customer relationship management (CRM) services (SaaS). Given these complexities, what approach should the medium to large enterprise take to best leverage cloud and optimize its data center? And what are the pitfalls? Typically, the cloud is envisioned as an accessible, low-cost compute utility in the sky that is always available. Despite this lofty promise, companies will need to select and build their cloud environment carefully to avoid fracturing their computing capabilities, locking themselves into a single, higher-cost environment, or impairing their ability to differentiate and gain competitive advantage – or all three.

The chart below provides an overview of the different types of cloud computing:

Cloud Computing and Variants

 

Note the positioning of the two dominant types of cloud computing:

  • there is the specialized Software-as-a-Service (SaaS), where the entire stack from server to application (even the version) is provided — with minimal variation
  • there is the very generic IaaS or PaaS, where a set of servers and OS version(s) is available along with various types of storage; any compatible database, middleware, or application can then be installed and run (a minimal provisioning sketch follows below)
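To make the IaaS end of that spectrum concrete, here is a minimal provisioning sketch using Amazon EC2 via the boto3 SDK: you request a bare virtual server and OS image, and everything above that layer is yours to install and manage. The AMI ID, instance type, and region below are placeholders.

```python
# Minimal IaaS sketch: request a bare virtual server; the database, middleware, and
# application layers above the OS are then yours to install and operate.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")           # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                          # placeholder OS image
    InstanceType="m5.large",                                  # placeholder size
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "app-server-01"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```

Contrast this with SaaS, where there is no server to request at all: you configure the vendor's application and consume it over the network.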

Other types of cloud computing include the private cloud – essentially IaaS that an enterprise builds for itself. The private cloud variant is the evolution of today's corporate virtualized server and storage farm into a more mature service with clearly defined service configurations, offerings, and billing, as well as highly automated provisioning and management.

Another technology impacting the data center is the engineered stack. Engineered stacks are a further evolution of the computer appliances that have been available for decades: tightly specified, designed, and engineered components integrated to provide superior performance and cost. These devices have typically appeared in the network, security, database, and specialized compute spaces. Firewalls and other security devices have long leveraged this approach, where generic technology (CPU, storage, OS) is closely integrated with additional special-purpose software and sold and serviced as a packaged solution. There has been a steady increase in the number of appliance or engineered stack offerings, moving further into data analytics, application servers, and middleware.

With the landscape set, it is important to understand the technology industry market forces and the customer economics that will drive the data center landscape over the next five years. First, technology vendors will continue to invest in and expand their SaaS and engineered stack offerings because these offer significantly better margins and more certain long-term revenue. A SaaS offering gets a far higher Wall Street multiple than traditional software licenses — and for good reason — it can be viewed as a consistent, ongoing revenue stream where the customer is heavily locked in. Similarly for engineered stacks: traditional hardware vendors are racing to integrate as far up the stack as possible, both to create additional value and, more importantly, to lock in advantage, since upgrades, support, and maintenance can be more assured and at higher margin than with traditional commodity servers or storage. It is a higher hurdle to replace an engineered stack than commodity equipment.

The industry investment will be accelerated by customer spend. Both SaaS and engineered stacks provide appealing business value that will justify their selection. For SaaS, it is speed and ease of implementation as well as potentially variable cost. For engineered stacks, it is a performance uplift at potentially lower cost that often makes the sale. Both should be selected where the business case makes sense, but with the following cautions:

  • for SaaS:
    • be very careful if it involves core business functionality or processes; you could be locking away your differentiation and ultimate competitiveness
    • know, before you sign, how you will get your data back should you stop using the SaaS
    • ensure the integrity and security of your data in your vendor's hands
  • for engineered stacks:
    • understand where the product is in its lifecycle before selecting
    • anticipate the eventual migration path as the product fades at the end of its cycle

For both, ensure you avoid integrating key business logic into the vendor's product. Otherwise you will face high migration costs at the end of the product's life or when a more compelling product emerges. There are multiple ways to ensure that your key functionality and business rules remain independent and modular, outside of the vendor's service package.
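One common way to keep that separation, sketched below, is a thin adapter over the vendor's API so that your business rules are written against your own data model and only the adapter changes if you switch providers. The vendor client, field names, and discount rule here are all hypothetical.

```python
# Keep the business rule in-house; isolate the vendor behind a single thin adapter.
from abc import ABC, abstractmethod

class CustomerSource(ABC):
    """Your own interface: business logic depends on this, never on the vendor SDK."""
    @abstractmethod
    def get_customer(self, customer_id: str) -> dict: ...

class VendorCrmAdapter(CustomerSource):
    """The only module touching the vendor's API; its field names are hypothetical."""
    def __init__(self, vendor_client):
        self._client = vendor_client

    def get_customer(self, customer_id: str) -> dict:
        raw = self._client.fetch(customer_id)                # hypothetical SDK call
        return {"id": raw["Id"], "segment": raw["Segment"], "tenure_years": raw["Tenure"]}

def loyalty_discount(source: CustomerSource, customer_id: str) -> float:
    """Business rule owned by you, expressed against your own data shape."""
    customer = source.get_customer(customer_id)
    if customer["segment"] == "enterprise" and customer["tenure_years"] >= 5:
        return 0.10
    return 0.0
```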

With these caveats in mind, and a critical eye on your contract to avoid onerous terms and lock-ins, you will be successful with the project-level decisions. But you should drive optimization at the portfolio level as well. If you are a medium to large enterprise, you should be driving your internal infrastructure teams to mature their offering into an internal private cloud. Virtualization, already widespread in the industry, is just the first step. You should move to eliminate or minimize custom configurations (preferably to less than 20% of your server population). Next, invest in the tools, processes, and engineering so you can heavily automate the provisioning and management of the data center. Doing this will also improve the quality of service.
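As a small illustration of minimizing custom configurations, the sketch below validates provisioning requests against a short catalog of standard offerings; anything outside the catalog is routed to an exception review rather than hand-built. The sizes and the orchestration hand-off are placeholders.

```python
# Enforce a small catalog of standard server configurations in a private cloud.
# Sizes are illustrative; provision() stands in for your orchestration tooling.
STANDARD_CONFIGS = {
    "web-small":  {"vcpus": 2, "ram_gb": 8,  "storage_gb": 100},
    "app-medium": {"vcpus": 4, "ram_gb": 16, "storage_gb": 250},
    "db-large":   {"vcpus": 8, "ram_gb": 64, "storage_gb": 1000},
}

def request_server(name: str, config_name: str) -> dict:
    if config_name not in STANDARD_CONFIGS:
        raise ValueError(f"'{config_name}' is not a standard offering; route to exception review")
    spec = {"name": name, **STANDARD_CONFIGS[config_name]}
    # provision(spec)   # hand off to automated provisioning tooling (placeholder)
    return spec
```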

Make sure that you do not shift so much of your processing to SaaS that you 'balkanize' your own utility; your data center utility would then operate subscale and inefficiently. Should you overreach, expect to incur heavy integration costs on subsequent initiatives (because your functionality will be spread across multiple SaaS vendors in many data centers). Expect performance issues as your systems operate at WAN speeds rather than LAN speeds across these centers. And expect to lose negotiating position with SaaS providers because you have lost your 'in-source' strength.

I would venture that over the next five years a well-managed IT shop will see:

  • the most growth in its SaaS and engineered stack portfolio
  • a conversion from custom infrastructure to a robust private cloud, with a small sliver of custom configuration remaining for unconverted legacy systems
  • minimal growth in PaaS and IaaS (growth here is actually driven by small to medium firms)

This transition is represented symbolically in the chart below:

Data Center Transition Over the Next 5 Years

So, on the road to a cloud future, SaaS and engineered stacks will be part of nearly every company's portfolio. Vendor lock-in could be around every corner, but good IT shops will leverage these capabilities judiciously, develop their own private cloud capabilities, retain critical IP, and avoid the lock-ins. We will see far greater efficiency in the data center as custom configurations are heavily reduced. So while the prospects are indeed 'cloudy', the future is potentially bright for the thoughtful IT shop.

What changes or guidelines would you apply when considering cloud computing and the many offerings? I look forward to your perspective.

This post appeared in its original version at InformationWeek on January 4. I have extended and revised it since then.

Best, Jim Ditmore

At the Start of 2013: Topics, Updates, and Predictions

Given it is the start of the year, I thought I would map out some of the topics I plan to cover in my posts this coming year. I also wish to relay some of the improvements planned for the reference page areas. As you know, the focus of Recipe for IT is practical, workable techniques and advice that work in the real world and enable IT managers to be more successful. In 2012, we had a very successful year with over 34,000 views from over 100 countries, though most were from the US, UK, and Canada. And I wish to thank the many who have contributed comments and feedback — it has really helped me craft a better product. So with that in mind, please provide your perspective on the upcoming topics, especially if there are areas you would like to see covered that are not.

As you know, I have structured the site into two main areas: posts, which are short, timely essays on a particular topic, and pages, which often take a post and provide a more structured and possibly deeper view of the topic. The pages are intended to be an ongoing reference of best practice for you to leverage.

For posts, I will continue the discussion on cloud and data centers. I will also delve more into production practices and how to achieve high availability. Some of you may have noticed that some posts appear first on InformationWeek and then subsequently here. This helps increase the exposure of Recipe for IT and also ensures good editing (!).

For the reference pages, I have recently refined, and will continue to improve, the project delivery and project management sections. Look also for updates and improvements to the IT efficiency and cost reduction material as well as the service desk pages.

What other topics would you like to see explored? Please comment and provide your feedback and input.

And now to the fun part: six predictions for technology in 2013:

6. 2013 is the year of the ‘connected house’ as standards and ‘hub’ products achieve critical mass.

5. The IT job market will continue to tighten, requiring companies to invest in growing talent as well as in higher IT compensation.

4. Fragmentation will multiply in the mobile market, leaving Apple and Samsung as the only companies commanding premiums for their products.

3. HP will suffer further distress in the PC market both from tablet cannibalization and aggressive performance from Lenovo and Dell.

2. The corporate server market will continue to experience minimal increases in volume and flat or downward pressure on revenue.

1. Microsoft will do a Coke Classic on Windows 8.

For IT managers, strong, robust competition is important – so perhaps both Microsoft and HP will sort out their issues in the consumer device/OS space and come back stronger than ever.

What would you add or change on the predictions? I look forward to your take on 2013 and technology!

Best, Jim Ditmore