Building Advanced Technology Capabilities

Robotics, AI, Advanced Analytics, BPM, Agile, DevOps, Cloud… Technology and business leaders are inundated with marketing pitches and consulting proposals on the latest technologies and how, by applying them, they can win against the competition. Unfortunately, far too often the implementations of advanced technology in their organizations fall well short of the promises: they’re expensive, they require enormous organizational change that doesn’t happen, or they simply don’t produce nearly the expected results. Often, applying advanced technology achieves modest success in only a few areas, for a slim portion of the enterprise, while broader impact and benefits never seem to materialize. Frequently, the hyped efforts achieve outsized acclaim as a pilot only to peter out well before the promised returns are delivered. Organizations and teams then quietly go about their business as they always have, now with added complexity and perhaps a bit more cynicism and disengagement.

The far too frequent inability to broadly digitalize and leverage advanced technologies means most organizations remain anchored to legacy systems and processes, with only digital window dressing on interfaces and minimal true commercial digitalization success. Some of this lack of success is due to the shortcomings of the technologies and tools themselves – some advanced technologies are truly overhyped and not ready for primetime – but more often, the issue lies in the organization’s adoption approach and in the leadership and discipline necessary to fully implement new technologies at scale.

Advanced technology implementations are not silver bullets that can be readily adopted with benefits delivered in a snap. As with any significant transformation in a large organization, advanced technology initiatives require senior sponsorship and change management. Further, because of the specialty skills, new tools and methods, new personnel, and critically, new ways of getting the work done, there is additional complexity and more potential for issues and failures with advanced technology implementations. Thus, the transformation program must plan and anticipate accordingly to ensure these factors are properly addressed and the implementation succeeds. Having helped successfully implement a number of advanced technologies at large firms, I have outlined the key steps to successful advanced technology adoption as well as the major pitfalls to be avoided.

Foremost, leadership and sponsorship must be fully in place before embarking on a broad implementation of an advanced technology. Sponsorship is particularly crucial at integration points, where advanced technologies require different processes and cycle times than those of the legacy organization. For example, traditional waterfall financial planning processes and normal but typically undisciplined business decision processes can cause great friction when they are used to drive agile technology projects. The result of such an unmanaged integration is failing or greatly underperforming agile projects, accompanied by frustration on the technology side and misunderstanding and missed expectations on the business side.

Success is also far more likely to come if the ventures into advanced technologies are sober and iterative. An iterative process builds success by starting small and growing in scope, using a strong feedback loop at each step to improve the approach and address weaknesses. Further, robust change management should accompany the effort given the level of transformation. Such change management should encompass all of the ‘human’ and organizational aspects, from communications to adjusting incentives and goals, to defining new roles properly, to training and coaching, and to ensuring the structures and responsibilities support the new ways of working.

Let’s start with Robotics and Business Process Management, two automation and workflow alternatives to traditional IT and software programming. Robotics, or better put, Robotic Process Automation (RPA), has been a rapidly growing technology in the past 5 years, and the forecasts are for even more rapid growth over the next 5 years. For those not familiar, here is a reference page for a quick primer on RPA. Briefly, RPA is the use of software robots to do repetitive tasks that a human typically does when interfacing with a software application. RPA tools allow organizations to quickly set up robots to handle basic tasks, thus freeing staff from repetitive typing work. At Danske Bank, since our initial implementation in 2014 (yes, 2014), we have implemented well over 300 robots leveraging the Blue Prism toolset. Each robot was typically completed in a 2 to 6 week cycle, where the automation suggestion was initially analyzed, reviewed for applicability and business return, and then prioritized. We had set up multiple ‘robotic teams’ to handle the development and implementation. Once a robotic team freed up, it would go to work on the next best idea, taking the roughly drafted idea, analyzing it further, and then building and delivering it into production. Each robot implemented could save anywhere from a third of an FTE to 30 FTEs (or even more). Additionally, and usually of greater value, the automation typically increased process quality (no typos) and improved cycle time.

Because the cycle time and the actual analyze, build, and implement process for robots were greatly different from those of traditional IT projects, it was necessary to build a different discovery, review, and approval process as well. Traditional IT (unfortunately) often operates on an annual planning cycle with lengthy input and decision cycles and plenty of debate and tradeoffs considered by management. The effort just to approve a traditional mid-sized or large IT project would dwarf the total effort required to implement a robot (!), which would be impractical and wasteful. Thus, a very different review and approval process is required to match the advanced technology implementation. Here, a far more streamlined and ‘pipelined’ approach was used for robotic automation projects. Instead of funding each project separately, a ‘bucket’ of funding was set up annually for Robotics, with hurdle criteria against which each robotic project was prioritized. A backlog of automation ideas was generated by business and operations teams, and then, based on a quick analysis of ease of implementation, FTE savings, and core functional capability, the ideas were prioritized. Typical hurdle rates were a 6 month or less ROI (yes, the implementation would save more money within 6 months than the full cost of implementation) and at least 0.5 FTE of savings. Further, implementations that required completing critical utility functionality (e.g., interfacing with the email system or financial approval system) were prioritized early in our Robotics implementation to enable reuse of these capabilities by later automation efforts.
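To make the hurdle logic concrete, here is a minimal sketch of the prioritization pass over a backlog. The field names, FTE cost, and figures are hypothetical illustrations, not our actual tooling:

```python
from dataclasses import dataclass

FTE_ANNUAL_COST = 100_000  # illustrative annual cost of one FTE; use your own figure

@dataclass
class RobotIdea:
    name: str
    build_cost: float      # estimated cost to analyze, build, and implement
    fte_savings: float     # estimated annual FTE savings once live
    is_core_utility: bool  # e.g., an email or financial-approval interface, reusable later

def payback_months(idea: RobotIdea) -> float:
    """Months until cumulative savings cover the build cost."""
    monthly_savings = idea.fte_savings * FTE_ANNUAL_COST / 12
    return float("inf") if monthly_savings == 0 else idea.build_cost / monthly_savings

def prioritize(backlog: list[RobotIdea]) -> list[RobotIdea]:
    """Apply the hurdle criteria (6-month-or-less ROI, at least 0.5 FTE saved),
    then rank: core utility modules first, fastest payback next."""
    qualified = [i for i in backlog if payback_months(i) <= 6 and i.fte_savings >= 0.5]
    return sorted(qualified, key=lambda i: (not i.is_core_utility, payback_months(i)))

backlog = [
    RobotIdea("Address-change rekeying", build_cost=40_000, fte_savings=2.0, is_core_utility=False),
    RobotIdea("Email system interface", build_cost=60_000, fte_savings=1.5, is_core_utility=True),
    RobotIdea("Report collation", build_cost=55_000, fte_savings=0.3, is_core_utility=False),  # fails hurdle
]
for idea in prioritize(backlog):
    print(f"{idea.name}: payback in {payback_months(idea):.1f} months")
```

Note how the utility-first ranking encodes the reuse principle: the email interface robot sorts ahead of a faster-payback idea because later automations will build on it.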

The end result was a strong pipeline of worthwhile ideas that could be easily prioritized and approved. This steady stream of ideas was then fed into multiple independent robotics development teams, each composed of business or operations analysts, process analysts, and technology developers (skilled in the RPA tool), that could take the next best idea out of the pipeline and work it as soon as the team was ready. This pipeline and independent ‘factory line’ development approach greatly improved time to market and productivity. So, not only can you leverage the new capabilities and speed of the advanced technology, you also eliminate the stop-go and wait-time inefficiencies of traditional projects and approval processes.

Effectively scaling advanced technology in a large firm requires proper structure and sponsorship. Scaling approaches range from a broad, decentralized approach, where each business unit or IT team experiments with and implements the technology, to a fully centralized and controlled structure, where one program team is tasked with implementing and rolling it out across the enterprise. While constraints (scarce resources, desire for control (local or central), lack of senior sponsorship) often play a role in dictating the structure, technology leaders should recognize that a Center of Excellence (COE) approach is far more likely to succeed at scale. I strongly recommend the COE approach, as it addresses the fundamental weaknesses that hamper both the completely centralized and the decentralized approaches.

When rolling out an advanced technology, the first challenge to overcome is attracting and retaining advanced technology talent. Just because your firm has decided to adopt an advanced technology does not mean you will easily attract the critical talent to design and implement it. Nearly every organization is looking for this talent, so you need a compelling proposition to attract the top engineers and leaders. In fact, few strong engineers would want to join a decentralized structure where it’s not clear who decides on toolset and architecture, and where projects have only local sponsors without clear enterprise charters, mandates, or impact. Similarly, they will be turned off by top-heavy, centralized programs that will likely plod along for years and not necessarily complete the most important work. By leveraging a COE, where the most knowledgeable talent is concentrated and where demand for services is driven by prioritizing the areas of highest need, your firm will be able to attract talent as well as establish an effective utility and deliver the most commercial value. The best experts and experienced engineers want to work in a structure where they can both drive the most value and set the example for how to get things done. Even better, with a COE construct, each project leverages the knowledge of prior projects, improving productivity and reuse with each implementation. As you scale and increase volume, you get better at doing the work. With a decentralized approach, you often end up with discord, where multiple toolsets, multiple groups of experts, and inexperienced users lead to teams in conflict with each other or duplicating work.

When the COE is initially set up, the senior analysts, process engineers, and development engineers in the COE should ensure proper RPA toolset selection and architecture. Further, they lead the definition of the analysis and prioritization methodology. Once projects begin to be implemented, they maintain the libraries of modules and encourage reuse, and they ensure the toolsets and systems are properly updated and supported for production, including backups, updates, and adequate capacity. Thus, as the use of RPA grows, your productivity improves with scale, ensuring more and broader commercial successes. By assigning responsibility for the service to your best advanced technology staff, they will plan for and avoid the pitfalls of immature, disparate implementations that often fail 12 or 18 months after the initial pilots.

Importantly though, in a COE model, demand is not determined by the central team; rather, it is developed by the COE team consulting with each business unit to determine the appetite and capability to tackle automation projects. This consulting results in a rough portfolio being drafted, which is then used as the basis to fund that level of advanced technology implementation for that business unit. Once the draft portfolio is developed and approved, it is jointly and tightly managed by the business unit with the COE to ensure the greatest return. With such an arrangement, the business unit feels in control, the planned work is in the areas the business feels are most appropriate, and the business unit can line up the necessary resources, direction, and adoption to ensure the automation succeeds commercially (since it is their ambition). Allowing the business unit to drive the demand avoids the typical flaws of a completely centralized model, where an organization separate from the unit where the implementation will occur makes the decisions on where and what to implement. Such centralized structures usually result in discord and dissatisfaction between the units doing ‘real business work’ and an ‘ivory tower’ central team that doesn’t listen well. By using a COE with a demand-driven portfolio, you get the advantages of a single high performance team yet avoid the pitfalls of ‘central planning’, which often turns into ‘central diktat’.

As it turns out, the COE approach is also valuable for BPM rollouts. In fact, it can be synergistic to run both RPA and BPM from ‘sister’ COEs. Yes, BPM requires more setup and has a longer development cycle of 6 to 12 or even 18 weeks. Experts in BPM are not necessarily experts in RPA tools, but they share process engineering skills and documentation. Further, problems or automations too complex for RPA can be perfectly suited for BPM, enabling a broader level of automation of your processes. In fact, some automation or digitalization solutions may turn out to work best using a mix of RPA and BPM. Treating them as siblings – each with its own COE structure, its own methodology, and its own business demand stream, but leveraging common process knowledge and working together on more complex solutions – will yield optimal results and progress.

A COE approach can also work well for advanced analytics. In my experience, it is very difficult for a business unit to attract and retain critical data analytics talent, but by establishing a COE you can more easily attract enough senior and mid-level talent for the entire enterprise. Next, you can establish a junior pipeline as part of the COE that works alongside the senior talent and is trained and coached to advance as you experience the inevitable attrition for these skills. Further, I recommend establishing an Analytics COE for each ‘data cluster’ so that models and approaches can be shared within a COE that is driven by the appropriate business units. In Financial Services, we found success with a ‘Customer and Product’ analytics team, a ‘Fraud and Security’ team, and a ‘Financing and Risk’ team. Of course, organize the COEs along the data clusters that make sense for your business. This allows greater focus by each COE and impressive development and improvement of its data models, business knowledge, and thus results. Again, the COE must be supplemented by full senior sponsorship and a comprehensive change management program.

The race is on to digitalize and take advantage of the latest advanced technologies. Leveraging these practices and approaches will enable your shop to move forward more quickly with advanced technology. What alternatives have you seen or implemented that were successful?

I look forward to hearing your comments and wish you the best on attaining outstanding advanced technology capabilities for your organization. Best, Jim

Digitalization and Its Broader Impacts

We have discussed several times in the past few years the impacts of digitalization on the corporate landscape, with a particular focus on the technology industry. We have also explored the key constraints organizations face in keeping pace and innovating. But the recent pace of digitalization is causing rapid change in corporations and has implications for broader and more radical change at a societal level. These are certainly important topics for the IT leader, and the implications for broader society are more significant still.

It is perhaps relatively easy to consider the latest technology innovations as additional steps or an extension of what we have seen since the internet era began in the mid-90s. And it was nearly 20 years ago when Deep Blue beat Garry Kasparov in chess, so the advances in machine intelligence could be viewed as incremental and measured in decades. But the cumulative effect of technology advances in multiple realms over the past 40 years has now turned into not just a quantitative acceleration but a qualitative leap as well. This leap is well explained in The Second Machine Age by Erik Brynjolfsson and Andrew McAfee, where they discuss the cumulative effects of exponential growth over time, even from a small base. This is the essence of the power of digitalization. But where Brynjolfsson and McAfee portray a ‘brilliant’ future with man and advanced machine working together, Martin Ford, in his book Rise of the Robots, sees a very troubled world with massive unemployment and inequality.

As an IT leader in a corporation, you must ensure the competitiveness of your firm by leveraging the new technologies available at a pace that keeps you abreast of, or better, ahead of your competitors. It is a relentless pace across industries, driven not only by traditional competition but also by new digital competitors that did not even exist a few years prior. Every old-line firm is driven by the fear of becoming another Kodak, while firms on top of their industry worry they will be another Nokia. Firms able to keep pace and leverage the technologies are seeing substantially reduced costs of production along with significant qualitative advances. These advantages will occur across industries, as digitalization is revolutionizing even ‘physical’ industries like logging and sawmills. But where does this leave communities and society in general? Is it a brilliant or troubled future?

Let’s explore some digitalization scenarios in different industries to shed light on the longer term. Shipping and logistics is an excellent example where, near term, there continue to be significant improvements due to digitalization in shipment tracking, route management, navigation, and optimization of loads. Leveraging sensors and interconnected planning, scheduling, and communication software can greatly improve shipment fill rates while increasing shipment visibility. In the next 5 years, the most advanced shipping firms will completely eliminate paper from the shipment chain and have integrated end-to-end digital processes. These fully digital processes will enable more timely shipment and distribution while reducing errors and enabling greater knowledge and thus flexibility to meet emerging demands. They will also reduce the manual labor of administration – and with embedded sensors, reduce the need for intermediate checkpoints.

The introduction of robotics in distribution warehouses (such as Amazon’s) currently extends the productivity of workers greatly by having the robots run the floor and pick the product, bringing it back to the worker. The current generation of robots provides a 30% productivity gain. The next one – within 5 years – could deliver perhaps a further 30% or even 50%. Amazon certainly made the investment by buying its own robotics company (Kiva), not just for its warehouses, but perhaps to build a relentlessly productive distribution chain able to deliver for everything (not very dissimilar to its cloud foray).

While the distribution center is being automated with robot assistants, within 15 years we will see commercial trucking move to highly autonomous trucks, not unlike how commercial pilots today work with the autopilot. This could be good news: in the US alone, trucks are involved in over 300,000 accidents and cause more than 4,500 deaths each year. It would be a tremendous benefit to society to dramatically reduce such negative impacts through autonomous or mostly autonomous trucks. Robots do not fall asleep at the wheel and do not drive aggressively in rush hour traffic. Drivers will become mostly escorts and guardians for their shipments while robots handle nearly all of the monotonous driving chores. Convoying and 24-hour driving will become possible, all the while enabling greater efficiency and safety.

And within 10 years, expect the shipment from the warehouse to the customer for small package goods to change dramatically as well. Amazon unveiled its latest delivery drone, and while it will take another 2 generations of work to make it fully viable (and of course FAA approval), when ready it will make a huge impact on how goods are delivered to customers and enable Amazon to compete fully with retail stores, fulfilling same day if not same hour delivery.

In the US, the trucking industry overall employs about 8 million people, with 3.5 million of those being truck drivers. So whether a scheduler, distribution clerk, or truck driver, it is likely these positions will be both greatly changed and fewer in 10 to 12 years. These impacts alone would likely reduce labor requirements by 20 or 30% in 10 years and possibly 50% in 15 years. There is a countervailing volume effect, where digitalization drives more rapid and smaller shipments as customers order goods online for rapid home delivery, potentially increasing demand (and required labor) over the longer term. Yet these effects will not overcome the reductions – expect a reduction of 20% of shipping and logistics labor, with humans partnering with robot assistants and autonomous vehicles as the normal operating mode.

And increased demand in direct-to-customer shipments will come at a cost to the retail industry. Online sales growth has already begun to outpace in-store sales, and this trend will continue, resulting in much lower retail employment as more and more commerce moves online and stores that do not offer an ‘experience’ lose out. It is reasonable to expect retail employment to peak around 5.3M (from the current 4.8M) in the next 5 years and then slowly decline over the following 10 years.

Manufacturing, which has leveraged robotics for four decades or more, is seeing ever greater investment in machines, even in lower wage countries like China and India. Once only the domain of large companies and precisely designed assembly lines, the relentless reduction in the cost of robotics with each generation, and their increasing ease of use, is making it economical for smaller firms to leverage such technology in more flexible ways. The pace of progress in robotics has become remarkable. In another two generations of robotics, it will be unthinkable to be a manufacturer and NOT leverage robotics. And if you combine robotics with the capabilities of 3D printing, the changes become even greater. So the familiar pattern of moving plants to wherever wages are lowest will no longer hold. Indeed, this pattern, which has repeated since the first industrial revolution started in Britain, is already being broken. Factories in China and India are being automated, not expanded or moved to lower cost countries or regions. And some manufacturing is being greatly automated and moved back to high cost regions to be closer to demand, enabling greater flexibility, better time to market, and control. The low cost manufacturing ladder, which has lifted so much of society out of poverty in the past two centuries, is being pulled away, with great implications for those countries either not yet on the development curve or just starting. This ‘premature de-industrialization’ may forever remove the manufacturing ladder to prosperity for much of India and for many African and Asian countries still in great poverty. And while these trends will drive demand for more designers and better creative services, overall manufacturing employment will continue its long term decline. Perhaps it will be partly offset by an explosion of small, creative firms able to compete against larger, traditional firms, but this will occur only in the most developed regions of the globe. For the 12 million manufacturing jobs in the US, expect to see a very slight uptick as factories are brought back in automated form through re-shoring and as smaller, highly automated factories leveraging 3D printing grow. But globally, one should expect a major decline in manufacturing jobs as robots take over the factories of the developing world.

And whither the restaurant business? McDonald’s is experimenting with self-service kiosks and robots making hamburgers, and new chains from Paris to San Francisco are re-inventing the automat, a staple of the mid-1900s. While per-store labor has declined by 30 to 50% in the past 50 years, there is potential for acceleration given the new skills of robots and the increasing demand for higher wages for entry-level employees. These moves, combined with easy-to-use mobile apps to order your food ahead of time, likely mean fewer jobs in 10 years, even with more meals and sales. One should expect the return of the 1950s ‘automat’ in the next 5 or 10 years as restaurants leverage a new generation of robots far more capable than their predecessors.

Just a quick review of a handful of major employment industries shows at best a mixed forecast for jobs, and possibly a strongly negative picture. Developed nations will fare better, with new jobs appearing in robotics and technology as well as the return of some manufacturing work and perhaps volume increases in other areas. But globally, one can expect a significant downward trend over the next 10 to 15 years. And the spread between the nations that have industrialized and those that haven’t will surely widen.

What jobs will increase? Obviously, technology-related jobs will continue to increase, but these are a very small portion of the total pool. More significantly, any profession that produces content, from football players to musicians to filmmakers, will see continued demand for its products as digitalization drives greater consumption through ever more channels. But we have also seen that for content producers this is a ‘winner take all’ world, where only the very best reap most of the rewards and the rest earn very low wages.

Certainly as IT leaders, we must leverage technology wherever possible to enable our firms to compete effectively in this digital race. As leaders, we are all familiar with the challenges of rapid change. Especially at this pace, change is hard – for individuals and for organizations. We will also need to be advocates for smarter change: helping our communities understand the coming impacts, enabling our staff to upskill to better compete and achieve a better livelihood, and advocating for better government policy and legislation. If all taxes and social structures make the employee far more expensive than the robot, then shouldn’t we logically expect the use of robots to accelerate? Increasing the costs of labor (e.g., the ‘living wage’ movement in the US) is actually more likely to hasten the demise of jobs! Perhaps it would be far better to tax the robots. Or even better, in twenty years, every citizen will get their own robot – or perhaps two: one to send in to do work and one to help out at home. The future is coming quickly; let’s strive to adjust fast enough for it.

What is your view of the trends in robotics? What do you see as the challenges ahead for you? for your company? for your community?

Best, Jim Ditmore

 

The Recent Quarter Results Confirm Tech Industry Trends

Some surprising and not so surprising results for the tech industry this past quarter (2Q15) confirm longer term industry trends and also show the high volatility that comes with mismatches between expectations and performance.

First, Apple delivered strong growth in revenue and profits again (38% growth in profits to $10.8B), and yet, because it was slightly below expectations, lost $60 billion in market value. While Apple sold a record 47.5 million iPhones and saw Mac sales of 4.8 million units (up 9%), investors were apparently disappointed in both the number of iPhones sold and the lack of clear information on the Apple Watch. Even though the Apple Watch appears more successful at this point in its sales cycle than the iPad or iPhone were in theirs, investors were apparently expecting a leadoff home run and sent the stock down 7% on the results.

And the reverse occurred for both Google and Amazon. Google delivered solid growth with an 11% increase in revenue to $17.7B and net income of $3.9B, which sent shares up 12%. Investors were surprised by the breadth of growth, particularly in mobile, and that management showed some cost control. Amazon actually delivered some profit, $214M on revenue of $29.33B, and showed continued robust growth of 15%. Investors sent Amazon’s stock up on the profit results, a rarity given Amazon’s typical long term vision focus and willingness to spend for reach and scale even in areas well beyond its core.

What the quarterly results also reveal is that the tech platform companies (Amazon, Apple, Google) continue to be viewed as dominant, but investors are uneasy about the long term stability of their platforms and thus have a quick trigger finger if they see any cracks in their future dominance. So with Apple’s potential over-reliance on the iPhone, when there are fewer shipments than expected, or no clear evidence of a new platform extension (e.g., the Apple Watch), investors react sharply. On the reverse, when Google appears to be overcoming the mobile threat to its core search platform, it is well-rewarded by investors.

What do the quarter’s results say about the tech product companies? Unless they have a strong portfolio of winning products, it appears they will continue to struggle to regain form. IBM, AMD, HP and others all posted disappointing results as they grapple with the technology revolutions in each of their industry sectors. AMD saw a 35% loss in revenue, dipping below $1B in quarterly revenue for the first time in years to $942M, with a loss of $181M. Of course, slow to declining PC sales worldwide are the primary cause, and only console sales were healthy for AMD. Expect further difficult quarters as AMD adapts to the changing component industry (driven by impacts from the platform companies). HP continues a listless journey, its 2nd quarter reflecting a 7% slide in revenue from $27.3B to $25.5B, a 50% drop in operating margin, and a 10% drop in PC market shipments. While HP will split into two entities in November of this year, which has some analysts upbeat, prospects look tough across all product segments, with slow or declining growth except possibly in enterprise software and 3D printing. IBM had mixed results, with better than expected profit on $20.81B in sales, yet saw continued revenue decline, which left investors nervous and sent the stock down. IBM did see strong growth in cloud services and analytics, but lackluster products and results in other core segments (e.g., hardware), which make up the vast bulk of IBM revenue, yielded disappointing revenue and profit showings. IBM recently sold off its low end server business as it views that sector becoming increasingly commoditized. Yet IBM will continue to find that selling services when you have limited ‘winning’ products is a tough, lower margin business. And cloud services are a far lower margin business than its traditional hardware business – and one where Amazon and Google are first movers with volume edges. IBM can certainly leverage its enterprise relationships and experience, but that is far easier to do when you have products that provide real advantage to the customer. Other than analytics (Watson) and some software areas, IBM lacks these winning products, having neglected its product pipeline (focusing instead on services) for many years. While the alliance with Apple offers some possibility of developing modern, vertical industry applications that will be compelling, there is far more IBM must do to get back on track, and part of that innovation must be in hardware.

EMC and Oracle are the exceptions: large technology product companies that have been able to navigate the turbulent waters of their industry over the past few years. Oracle did have weaker results this quarter, primarily due to currency fluctuations but also slowing software sales. Only EMC beat expectations, with new products overcoming slowing demand in core areas. Winning products for EMC like VMware and Pivotal, as well as high demand for services and products in its information security division (RSA) and analytics, more than overcame issues in the core storage division (which showed some recovery from 1Q). One could argue that with the VMware franchise and leading engineered systems, EMC has established the strongest cloud platform, and thus has a more assured place, with growth and margin, in this rapidly changing sector.

The bottom line? Product companies will continue to struggle with revenue growth and margin pressure as technology advances undercut volumes and platform companies offer lower cost alternatives (e.g., public cloud options instead of proprietary server hardware and services, or smartphones instead of PCs). Unless technology product companies stay on the leading edge of innovating (or acquiring) compelling products, generating additional high margin revenues through services or software will be tough sledding. As we have mentioned here before, digitalization and the emergence of platform companies will result in more casualties among product companies – both in the tech space and outside it.

And of course, there is Microsoft. Microsoft is in a unique spot: it still has a strong productivity platform (e.g., Office, Exchange) but a diminishing OS platform. And with only low margin businesses growing rapidly (e.g., cloud), the road back to dominance looks very tough. Further, its forays into other tech sectors have been middling at best and disastrous at worst. The second quarter results included an $8B write down of the Nokia acquisition, made two years ago. The ‘devices and services’ strategy has been shown to be a ‘phenomenal error’ by some accounts. PC sales continue to decline, and Microsoft was unable to effectively crack the smartphone market. The past quarter revealed declining phone revenue even with 10% more unit volume, as the only segment where Microsoft gained traction was phone models at lower price points. And it is hard to see Samsung or other handset makers adding the Windows OS to their product mix. Further, traditional Windows OS revenue (from OEMs) dropped 22%. The bright spots for Microsoft were gaming (Xbox) and of course enterprise software and cloud services. Major concerns remain even in the enterprise area, where the rapidly growing cloud services carry far lower margins than the traditional software business. Microsoft should continue to worry that dominance in the consumer space often translates later into winning the business space – thus, the Google and Apple productivity platforms could be the long term Trojan horses that blow up the enterprise cash cow for Microsoft. Microsoft may lose the war by trying to maintain its OS platform while limiting the reach of its productivity platform to consumers on their device of choice. Already, Google and Apple have changed the game by offering such software on their platforms for free, with free upgrades. Some assessments already show Microsoft lagging in features, without even considering its far higher cost. Windows 10 should be a solid hit for Microsoft, reversing some of the ground lost with Windows 8, but it will not dent the momentum of the Apple and Android platforms – especially when Microsoft introduces such new ways to monetize as the formerly free Solitaire’s lengthy advertisements or $9.99 annual subscription fee. They continue to misread the consumer market. Despite these continual missteps, or as a recent New York Times article called it, their ‘feet of clay’, Microsoft has a strong enterprise business, a well-positioned productivity platform, and plenty of money. Can they figure out how to win in the consumer world while growing their productivity ecosystem with compelling extensions?

There remain multiple gaps that Microsoft, IBM, HP, or even Oracle could exploit to win the next platform or obtain strong enterprise market share. While Apple and Google are pursuing the car and home platforms of the future, the internet of things is still an open race. And there is opportunity, given that most of the gazillion apps in the Android and Apple ecosystems are games or other rudimentary (first generation) apps oriented toward consumers. There could be tremendous demand for myriad vertical industry applications that can easily link to a company’s legacy systems. IBM has started down this road with Apple, but plenty of opportunity remains for enterprise software players to truly leverage the dominant platforms for their own gain. Let’s hope the tech product companies can rekindle their growth by bringing out great products again.

Best, Jim Ditmore

The Key Growth Constraints on Your Business

We have seen the accelerating impact of technology on a wide variety of industries in the past decade. We have witnessed the effect of internet shopping on the retail industry, and seen the impact of digital content and downloads on the full spectrum of traditional media companies, across books and newspapers as well as movies and games. Traditional enterprises are struggling to anticipate and stay abreast of the advances and changes. Even in industries far from the digital world, ones that seem very physical, it is critical to leverage ‘smart’ technologies to improve output and products.

Let’s take logging and sawmills as an example. Certainly there have been physical advances, from improved hydraulic systems to better synthetic ropes, but playing an increasingly prominent role is the use of digital technology to assist operators in driving complex integrated machines that optimize the entire logging and sawing process. The latest purpose-built forestry machines operate at the roadside or nearby off-road, cutting logs from whole trees, combining steps and eliminating manual labor. These integrated machines are guided by on-board computers and electronic controls, enabling the operator to optimize log products, which are machine cut by skillfully delimbing and ‘bucking’ the whole trees into the best log lengths and loading them onto trucks. Subsequently, the logs are taken to modern sawmills, where new scanning technologies and computers analyze each log and determine how to optimize the dimensional lumber cut from it. Not only does this dramatically reduce manual labor and waste, it also improves safety and increases log product value by 20 or 30% over previous methods. And it is not just digital machinery leveraging computers to analyze and cut; mobile apps with mapping and image analysis also drive better decisions on when and where to log in the field. When digitalization is revolutionizing even ‘physical’ industries like logging and sawmills, it is clear that the pace and potential to disrupt industries by applying information technology has increased dramatically. Below is a chart that represents the pace of disruption or ‘gain’ possible by digitalization over the mid-term horizon (7 to 10 years).

[Chart: potential digitalization gain by industry over a 7 to 10 year horizon]
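To make the sawmill optimization above concrete: deciding how to buck a log into the most valuable set of cut lengths is, in simplified form, the classic ‘rod cutting’ dynamic program. Here is a minimal sketch with entirely hypothetical lengths and prices; real scanners also account for taper, defects, and curvature:

```python
# Hypothetical value ($) of a cut piece by length (feet)
PRICE = {8: 30, 10: 42, 12: 55, 16: 80}

def best_cuts(log_length: int) -> tuple[int, list[int]]:
    """Return (max value, cut lengths) for one log.
    best[n] is the optimal value obtainable from a log of length n."""
    best = [0] * (log_length + 1)
    choice = [0] * (log_length + 1)  # first cut that achieves best[n]
    for n in range(1, log_length + 1):
        for piece, price in PRICE.items():
            if piece <= n and price + best[n - piece] > best[n]:
                best[n] = price + best[n - piece]
                choice[n] = piece
    cuts, n = [], log_length
    while n > 0 and choice[n]:
        cuts.append(choice[n])
        n -= choice[n]
    return best[log_length], cuts

value, cuts = best_cuts(34)
print(f"Best value ${value} with cuts {cuts}")  # $160 with two 16 ft pieces (2 ft waste)
```

The same kind of recurrence, run per log against current prices, illustrates how scanning-driven optimization can recover the extra 20 or 30% of value described above.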

It is clear that digitalization has dramatically changed the travel and media industries already. Digital disruption has been moving down into other industries as either their components move from physical to digital (e.g., cameras, film) or industry leaders apply digital techniques to take advantage (e.g., Amazon, Ameritrade, Uber). Unfortunately, many companies do not have in place the key components necessary to apply and leverage technology to digitalize in rapid or disruptive ways. The two most important ingredients to successfully digitalize are software development capacity and business process engineering skill. Even for large companies with sizable IT budgets there are typically major constraints on both software development and business process engineering. And ample quantities of both are required for significant and rapid progress in digitalization.

Starting with software development, common constraints on this capability are:

  • a large proportion of legacy systems that consume an outsized portion of resources to maintain
  • inadequate development toolsets and test environments
  • overworked teams with a focus on schedule delivery
  • problematic architectures that limit digital interfaces and delivery speed
  • software projects that are heavily oriented to incremental product improvement versus disruptive customer-focused efforts

And even if there are adequate resources, there must be a persistent corporate focus on the discipline, productivity and speed needed for breakout efforts.

Perhaps even more lacking in many companies are the necessary business process engineering skills. Here the issue is often not capacity or productivity but inadequate skill and improper focus. Most corporate investment agendas are controlled by ‘product’ teams whose primary focus is on incrementally improving their product’s features and capabilities rather than on the end-to-end service or process views that truly impact the customer. Further, process engineering skills are not a hallmark of service industry product teams. Most senior product leaders ‘grew up’ in a product-focused environment and, unless they have a manufacturing background, usually do not have process improvement experience or knowledge. Typically, product team expertise lies primarily in the current product and its previous generations, not in the end-to-end process supporting the actual product. Too often the focus is on a next quarter product release with incremental features, as opposed to fully reworking the customer interface from the customer’s point of view and reworking the supporting business process end-to-end to take full advantage of digitalization and new customer interfaces. There is far too much product tinkering versus customer experience design and business process engineering. Yet it is the end-to-end process, not the product features, that actually drives the digital customer experience. Service firms that excel at the customer experience design the end-to-end process from the customer viewpoint while taking full advantage of the digital opportunities. This yields a far better customer experience that is relatively seamless and easy. Further, the design normally incorporates a comprehensive interface approach that empowers each of the customer interaction points with the required knowledge about the customer and their next step. The end result is a compelling digital platform that enables them to win in the market.

As an IT leader certainly you must identify and sponsor the key digitalization projects for your company, but you must also build and develop the two critical capabilities to sustain digitalization. It is paramount that you build a software development factory that leverages modern methods on top of discipline and maturity so you have predictable and high quality software deliverables. And ensure you are building on an architecture that is both flexible and scalable so precious effort is not wasted on arcane internal interfaces or siloed features that must be replicated across your estate.

Work with your business partners to establish continuous process improvement and process engineering as desired and highly valued skills in both IT and the business team. Establish customer experience and user experience design as important competencies for product managers. Then take the most critical processes serving customers and revisit them from an end-to-end process view and a customer view. Use the data and analysis to drive the better holistic process and customer experience decisions, and you will develop far more powerful digital products and services.

Where is your team or your company on the digital journey? Do you have an abundance of software development or business process engineering skills and resources? Please share your perspective and experience in these key areas in our digital age.

Best, Jim Ditmore

 

IT Resolutions for 2015

While there is still one bowl game left to be played and confetti to clean up, 2014 is now done and leaves a mixed IT legacy. After 2013’s issues with the NSA leaks, the Healthcare.gov mishaps, and the 40 million credit identities stolen from Target, 2014 did not turn out much better on security and availability. Home Depot, eBay, and JPMC all had major incidents in the ‘year of the hacks‘. Add to that the celebrity photo leaks from the Apple iCloud hacks, and of course the Sony uber-hack and its PlayStation service failure at Christmas. All in all, 2014 was quite a dismal year for IT security. On the positive side, we saw continued advances in smart technology, from phones to cars. Robots and drones are seeing major reductions in price while leapfrogging in usability and capability. So technology’s potential seems brighter than ever, yet we still underachieve in our ability to prevent its misuse. Now 2015 is upon us, and I have compiled some IT resolutions that should contribute to greater success for IT shops in the coming year!

The first IT resolution is… security, security, security. While corporate IT security has improved in the past several years, we are still well behind the hackers, as the many breaches of 2014 demonstrate. Security is one of the fastest growing portions of IT (the number 2 focus item behind data analytics), but much more needs to be done, though most of the crucial work is just basic, diligent execution of proper security practices. Many of the breaches took advantage of well-known vulnerabilities, either at the company breached or at one of its suppliers. For example, lack of current server patching was a frequent primary root cause of hacks in 2014. And given the major economic hits of the Sony and Target breaches, these events are no longer speed bumps but instead threaten a company’s reputation and viability. Make the case now to your senior business management to double down on your information security investment and not show up on the 2015 list of hacks. Not sure where to start? Here’s a good checklist of security best practices that is still current and, if fully applied, would have prevented the majority of the public breaches in 2014.

Next is to explore and begin to leverage real-time decisioning. It’s more than big data – it is using all the information about the customer and relevant trends to make the best decision for them (and your company) while they are transacting. It takes the logic behind ‘recommendations of what other people bought’ and applies data analytics to many other kinds of business rules and choices. For example, use all the data and hidden patterns to better and more easily qualify a customer for a home loan, rather than asking them for a surfeit of documents and proofs. And offer them optimal pricing on the loan most suited for them – again determined by the data analytics. In the end, business policies will move from being almost static, where change occurs slowly, to being determined in real time by the data patterns. It is critical in almost every industry to understand and begin mastery of this technology.
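A minimal sketch of the idea follows. The weights, thresholds, and pricing tiers here are hypothetical; in practice the score would come from a model trained on your own customer data and evaluated in milliseconds during the interaction:

```python
def decide_loan(applicant: dict) -> dict:
    """Real-time decision: combine a model-derived risk score with business
    rules to approve and price a loan while the customer is transacting,
    instead of routing them into a slow document-gathering process."""
    # Stand-in for a trained model scoring payment history, debt load, tenure, etc.
    score = (0.40 * applicant["payment_history"]
             + 0.35 * applicant["debt_to_income_inverse"]
             + 0.25 * applicant["relationship_tenure"])
    if score >= 0.80:    # strong pattern: approve instantly at the best rate
        return {"decision": "approve", "rate": 3.9, "docs_required": []}
    if score >= 0.60:    # decent pattern: approve, but verify income
        return {"decision": "approve", "rate": 4.7, "docs_required": ["income proof"]}
    return {"decision": "refer", "rate": None, "docs_required": ["full application"]}

print(decide_loan({"payment_history": 0.9,
                   "debt_to_income_inverse": 0.8,
                   "relationship_tenure": 0.7}))
# -> {'decision': 'approve', 'rate': 3.9, 'docs_required': []}
```

The key shift is that the weights and thresholds themselves can be recalibrated continuously from the data, which is what moves business policy from static documents to real-time patterns.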

Be on the front edge of the flash revolution in the enterprise. 2015 will be the year of flash. Many IT shops are already using hybrid flash-disk technologies. With the many offerings on the market and second generation releases by mainstream storage vendors like EMC, IT shops should look to leverage flash for their most performance-bound workloads. The performance improvements with flash can be remarkable. And the 90% savings on environmentals in your data center is icing on the cake. Flash, factoring in de-duplication, is comparable in cost to disk storage today. By late 2015, it could be significantly less.
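The cost-parity claim is easy to sanity-check with back-of-the-envelope arithmetic. The prices and de-duplication ratios below are illustrative assumptions, not vendor quotes; plug in your own figures:

```python
# Illustrative 2015-era figures ($ per raw GB) and data-reduction ratios
flash_raw_per_gb = 2.00   # all-flash array
disk_raw_per_gb  = 0.45   # enterprise disk array
flash_dedup      = 5.0    # inline dedup/compression typical on flash
disk_dedup       = 1.2    # disk arrays usually reduce data far less effectively

flash_effective = flash_raw_per_gb / flash_dedup  # $/usable GB
disk_effective  = disk_raw_per_gb / disk_dedup    # $/usable GB
print(f"Flash: ${flash_effective:.3f}/GB usable vs disk: ${disk_effective:.3f}/GB usable")
# -> Flash: $0.400/GB usable vs disk: $0.375/GB usable; roughly at parity
```

Under these assumptions the effective costs land within pennies of each other, before counting the power, cooling, and floor-space savings.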

If you haven’t already, go mobile, from the ground up. Mobile is the primary way most consumers interface with companies today, and with better phones and data networks, this will only increase. But don’t rely on a ‘mobilized’ version of your internet site. Make sure you are tuning your customer interface to their mode of interaction. Nothing is more cumbersome to a consumer than trying to enter data from a phone into an internet form designed for a PC. Yes, it’s doable, but nowhere near the experience you can deliver with a native app. Go mobile, go native.

Bring new talent into the IT labor force. By 2020, the Bureau of Labor Statistics estimates, there will be another 1.4 million IT jobs in the US – and not nearly enough computer science graduates to fill them. Companies big and small should be looking to hire new graduates in the field AND encourage more people to look to computing for their career. In the 1970s and 1980s, before formal computer science programs were common at universities, many outstanding computer scientists received their degrees in music, biology, languages, or teaching. We need another wave of converts to have the skilled teams required for the demands of the next decade. As IT leaders, let’s make sure we contribute to our field and help bring along the next generation.

What are your 2015 IT resolutions? Let us know what should be on the list!

Best, and have a great New Year!

Jim

 

Expect More Casualties

Smart phones, tablets, and their digital ecosystems have had a stunning impact on a range of industries in just a few short years. Those platforms changed how we work, how we shop, and how we interact with each other. And their disruption of traditional product companies has only just begun.

The first casualties were the entrenched smart phone vendors themselves, as iOS and Android devices and their platforms rose to prominence. It is remarkable that BlackBerry, which owned half of the US smart phone market at the start of 2009, saw its share collapse to barely 10% by the end of 2010 and to less than 1% in 2014, even as it responded with comparable devices. It is proving nearly impossible for BlackBerry to re-establish its foothold in a market where your ‘platform’ – your OS software and its features, the number of apps in your store, the additional cloud services, and the momentum of your user and social community – is as important as the device.

A bit further afield is the movie rental business. Unable to compete with electronic delivery to a range of consumer devices, Blockbuster filed for bankruptcy protection in September 2010, just 6 years after its market peak. Over in another content business, Borders, the slowest of the big bookstore chains, filed for bankruptcy shortly after, while the other big bookstore chain, Barnes & Noble, has hung on with its Nook tablet and better store execution – a “partial” platform play. But the likes of Apple, Google, and Amazon have already won this race, with their vibrant communities, rich content channels, value-added transactions (Geniuses and automated recommendations), and constantly evolving software and devices. Liberty Media recently voted on the likely outcome of this industry with its disinvestment from Barnes & Noble.

What’s common to these early casualties? They failed to anticipate and respond to fundamental business model shifts brought on by advances in mobile, cloud computing, application portfolios, and social communities. Combined, these technologies have evolved into lethal platforms that can completely overwhelm established markets and industries. The casualties failed to recognize that their new competitors were operating on a far more comprehensive level than their traditional product competitors. Competing on product versus platform is like a catapult going up against a precision-guided missile.

Sony provides another excellent example of a superior product company (remember the Walkman?) getting mauled by platform companies. Or consider the camera industry: IDC predicts that shipments of what it calls “interchangeable-lens cameras”, or high end digital cameras, peaked in 2012 and will decline 9.1% this year compared with last year as the likes of Apple, HTC, Microsoft, and Samsung build high-quality cameras into their mobile devices. By some estimates, the high-end camera market in 2017 will be half what it was in 2012 as those product companies try to compete against the platform juggernauts.

The casualties will spread throughout other industries, from environmental controls to security systems to appliances. Market leadership will go to those players using Android or iOS as the primary control platform.

Over in the gaming world, while the producers of content (Call of Duty, Assassin’s Creed, Madden NFL, etc.) are doing well, the console makers are having a tougher time. The market has forever split into games on mobile devices and those for specialized consoles, making it much more turbulent for the console makers. Wii console maker Nintendo, for example, is expected to report a loss this fiscal year. If not for some dedicated content (e.g., Mario), the game might already be over for the company. In contrast, Sony’s PS4 and Microsoft’s Xbox One had successful launches in late 2013, with improved sales and community growth bolstering both “partial” platforms for the long term.

In fact, the retail marketplace for all manner of goods and services is changing to where almost all transactions start with the mobile device, leaving little room for traditional stores that can’t compete on price. Those stores must either add physical value (touch and feel, in-person expertise), experience (malls with ice skating rinks, climbing walls, aquariums), or exclusivity/service (Nordstrom’s) to thrive.

It is difficult for successful product companies to move in the platform direction, even as they start to see the platforms eating their lunch. Even for technology companies, this recognition is difficult. Only earlier this year did Microsoft drop the license fee for its ‘small screen’ operating systems. After several years, Microsoft finally realized that it cannot win against mobile platform behemoths that give away their OS while it charges steep licensing fees for its mobile platform.

It will be interesting to see if Microsoft’s hugely successful Office product suite can compete over the long term with a slew of competing ecosystem plays. By extending Office to the iPad, Microsoft may be able to graft onto that platform and maintain its strong performance. While it’s still early to predict who will ultimately win that battle, I can only reference the battle of consumer iPhone and Android versus corporate BlackBerry — and we all know who won that one.

Over the next few years, expect more battles and casualties in a range of industries, as players leveraging Android, iOS, and other cloud/community platforms take on entrenched companies. Even icons such as Sony and Microsoft are at risk, should they cling to traditional product strategies.

Meantime, the likes of Google, Apple, Amazon, and Facebook are investing in future platforms – for homes, smart cars, robotics and drones, and more. As the impacts of the smart phone platforms continue, new platforms will add further disruption, so expect more casualties among traditional product companies, even in seemingly unrelated industries.

This post first appeared in InformationWeek in February. It has been updated. Let me know your thoughts about platform futures. Best, Jim Ditmore.

Moving from Offshoring to Global Service Centers II

As we covered in our first post on this topic, since the mid-90s companies have used offshoring to achieve cost and capacity advantages in IT. Offshoring was a favored option to address Y2K issues and has continued to expand at a steady rate throughout the past twenty years. But many companies still approach offshoring as ‘out-tasking’ and fail to leverage the many advantages of a truly global and high performance workforce.

With out-tasking, companies take a limited set of functions or ‘tasks’ and move these to the offshore team. They often achieve initial economic advantage through labor arbitrage, and perhaps some improvement in quality as the tasks are documented and standardized to make the work easier to transition to the new location. This constitutes the first level of a global team: the offshore service provider. But larger benefits are often lost, and only select organizations have matured the model to its highest performance level: ‘global service centers’.

So, how do you achieve high performance global service centers instead of suboptimal offshore service providers? As discussed previously, you must establish the right ‘global footprint’ for your organization. Here we will cover the second half of getting to global service centers: implementing a ‘global team’ model. Combined with the right footprint, this will enable you to achieve global service centers and gain competitive advantage.

Global team elements include:

  • consistent global goals and vision across global sites with commensurate rewards and recognition by site
  • a matrix team structure that enables both integrated processes and local and global leadership and controls
  • clarity on roles based on functional responsibility and strategic competence rather than geographic location
  • the opportunity for growth globally from a junior position to a senior leader
  • close partnership with local universities and key suppliers at each strategic location

To understand the variation in performance for the different structures, first consider the effectiveness of your entire team – across the globe – on several dimensions:

  • level of competence (skill, experience)
  • productivity, ability to improve current work
  • ownership and engagement
  • customization and innovation contributions
  • source of future leaders

For an offshore service provider, where work has been out-tasked to a particular site, the team can provide similar, or in some cases better, levels of competence. Because of the lower cost in the offshore location, if there is adequate skilled labor, the offshore service provider can more easily acquire such skill and experience within a given budget. A recognizable global brand helps with this talent acquisition. But since only tasks are sent to the center, productivity and continuous improvement can only be applied to the portions of the process within the center. Requirements, design, and other early-stage activities are often left primarily to the ‘home office’, with little ability for the offshore center to influence them. Further, process standards and ownership typically remain at the home office as well, even though most implementation may be done at the offshore service provider. This creates a further gap, where the implications of new standards or home office process ‘improvements’ must be borne by the offshore service provider even if the theory does not work well in actual practice. And since implementation and customer interfaces are often limited as well, the offshore service provider receives little real feedback, further constraining the improvement cycle.

For the offshore service provider, the ability to improve processes and productivity is thus limited to local optimization, and capabilities are often at the whim of poor decisions from a distant home office. More comprehensive productivity and process improvements can be achieved by devolving competency authority to the primary team executing the work. So, if most testing is done in India, then testing process ownership and testing best practice responsibility should reside in India. By shifting process ownership closer to the primary team, there will be a natural interchange and flow of ideas and feedback that results in better improvements, better ownership of the process, and better results. The process can and should still be consistent globally; the primary competency ownership just resides at its primary practice location. This will result in a highly competent team striving to be among the best in the world. Even better, the best test administrators can now aspire to become test best practice experts and see a longer career path at the offshore location. Their productivity and knowledge levels will improve significantly. These improvements will reduce attrition and increase employee engagement in the test team, not just in India but globally. In essence, by moving from proper task placement to proper competency placement, you enable both the offshore site and the home sites to perform better on team skill and experience as well as team productivity and process improvement.

Proper competency placement begins the movement of your sites from offshore service providers to global service excellence. Couple competency placement with transparent reporting on the key metrics for the selected competencies (e.g., all test teams, across the globe, should report on best-in-class operational metrics) and drive improvement cycles (local and global) based on findings from the metrics. Full execution of these three adjustments (competency placement, transparent metrics, and improvement cycles) will enable you to achieve sustained productivity improvements of 10 to 30% and lower attrition rates (of your best staff) by 20 to 40%.
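
To make the transparent reporting concrete, here is a minimal sketch of the kind of cross-site report a global test competency might publish. The metric (defect removal efficiency), the site names, and the figures are illustrative assumptions only:

```python
# Illustrative sketch: the same operational metric, defined the same
# way, reported by every test team across the globe. Site names and
# figures are hypothetical.

test_results = {
    "India":       {"found_in_test": 180, "escaped_to_prod": 20},
    "Home Office": {"found_in_test": 140, "escaped_to_prod": 35},
}

def defect_removal_efficiency(found: int, escaped: int) -> float:
    """Share of all defects caught before production (higher is better)."""
    return found / (found + escaped)

for site, r in sorted(test_results.items()):
    dre = defect_removal_efficiency(r["found_in_test"], r["escaped_to_prod"])
    print(f"{site}: DRE = {dre:.0%}")
```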

It is important to understand that pairing competency leadership with primary execution is required in IT much more than in other fields, due to the rapid advance and fluidity of technology practices, the frequent need to engage multiple levels of the same expertise to resource and complete projects, and the ambiguity and lack of clear industry standards in many IT engineering areas. In many other industries (manufacturing, chemicals, petroleum), the stratification between engineering design and implementation is far more rigorous and workable given the standardization of roles and the slower pace of change. Those organizations can thus operate far closer to optimum with task offshoring, in a way that is just not possible in IT over any sustained time frame.

To move beyond global competency excellence, the structures around functions (the entire processes, teams, and leadership that deliver a service) must be optimized and aligned. First and foremost, goals and agendas must be set consistently across the globe for all sites. There can be no sub-agendas where offshore sites focus only on meeting their SLAs or capturing a profit; instead, the goals must be the appropriate IT goals globally. (Obviously, for tax purposes, certain revenue and profit overheads will be maintained, but that is an administrative process, not an IT goal.)

Functional optimization is achieved by integrating the functional management across the globe where it becomes the primary management structure. Site and resource leadership is secondary to the functional management structure. It is important to maintain such site leadership to meet regulatory and corporate requirements as well as provide local guidance, but the goals, plans, initiatives, and even day-to-day activities flow through a natural functional leadership structure. There is of course a matrix management approach where often the direct line for reporting and legal purposes is the site management, but the core work is directed via the functional leadership. Most large international companies have mastered this matrix management approach and staff and management understand how to properly work within such a setup.

It is worth noting that within any large services corporation, ‘functional’ management will reign supreme over ‘site’ management. For example, in a debate over which critical projects the IT development team should tackle, it is the functional leaders, working closely with the global business units, who will define the priorities and make the decisions. If the organization has a site-led offshore development shop, that shop will find out about the resources required long after the decisions are made (and be expected to simply fulfill the task). Site management is simply viewed as not having the knowledge or authority to participate in any major debate. Thus, if your offshore centers are singularly aligned to site leadership all the way up the corporate chain, their ability to influence or participate in corporate decisions is minimal.

However, if you have matrixed the structure to include a primary functional reporting mechanism, then the offshore team will have some level of representation. This increases particularly as managers and senior managers populate the offshore site and exercise functional control back into the home office and other sites. The testing team discussed earlier, if primarily located in India, would then have not just responsibility for the competency and process direction and goals, but also the global test senior leader on site, directing test teams back at the home office and other sites. This structure enables functional guidance and leadership from a position of strength. Priorities, goals, initiatives, and direction can now flow smoothly from around the globe to best inform the function. Staff in offshore locations now feel committed to the function, resulting in far more energy and innovation arising from these sites. The corporation also benefits from a much broader pool of strong candidates for leadership positions: not just more diverse candidates, but candidates who understand a global operating model and are comfortable reaching across time zones and cultures. Just what is needed to compete globally in the business. The chart below represents this transition from task to competency to function optimization.

Global Team Progression

If you combine functional optimization with a highly competitive site structure, you can typically organize a key function in 2 or 3 locations where its global functional leadership will reside. This adds time-of-day and business continuity advantages. By having the same function at a minimum of two sites, even if one site is down the other can operate. And IT work can be started at one site and handed off at the end of the day to the next site that is just beginning its day (in fact, most world-class IT command centers operate this way). Thus no one ever works the night shift, and time to market can be greatly improved by leveraging such time zone advantages.
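
As a simple illustration of the time-of-day advantage, the sketch below counts how many hours of the day are covered when a function runs at three sites, each working only local daytime hours. The sites and shift hours are hypothetical:

```python
# Hypothetical follow-the-sun coverage: each site works roughly
# 9:00-18:00 local time, expressed here as hours in UTC.
sites = {
    "India":   range(3, 13),   # ~9:00-18:00 IST
    "UK":      range(9, 18),   # 9:00-18:00 GMT
    "US East": range(14, 23),  # 9:00-18:00 EST
}

covered_hours = set()
for hours in sites.values():
    covered_hours.update(hours)

# Three daytime-only sites cover 20 of the 24 hours -- no night shift.
print(f"Hours covered: {len(covered_hours)}/24")
```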

While the optimization is understandably complex, spanning many variables (site location, contractor and skill mix, location cost, functional placement, competency placement, talent and skill availability), IT teams that achieve a global team model and put in place global service centers reap substantial benefits in cost, quality, innovation, and time to market.

To properly weigh these factors, I recommend a workforce plan approach where each function or sub-function maps out its staff and leaders across site, contractor/staff mix, and seniority mix. Lay out the target composition that optimizes across all key variables (cost, capability, quality, business continuity, and so on) and then construct a quarterly trajectory of the function’s composition from current state until it achieves the target, balancing for critical mass, leadership, and likely talent sources. You now have a draft plan of the moves and transactions required to meet your target. Every staff transaction going forward (hires, rotations, training, layoffs, etc.) should be weighed against whether it meshes with the workforce plan trajectory. Substantial progress toward an optimized global team can then be made through the rising tide of accumulated transactions executed in a strategic manner. These plans must be accompanied, or even introduced, by an overall vision of the global team and reinforcement of the goals and principles required to enable such an operating model. But once the plans are laid, you and your organization can expect far better capabilities and results than just dispersing tasks and activities around the world.
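
To illustrate, here is a minimal sketch of a workforce plan trajectory in code. The sites, mixes, tolerances, and targets are hypothetical placeholders for whatever variables your function optimizes on:

```python
# Hypothetical sketch of a quarterly workforce plan trajectory for one
# function. All sites, percentages, and targets are illustrative only.
from dataclasses import dataclass

@dataclass
class QuarterPlan:
    quarter: str
    headcount: dict          # site -> planned staff count
    contractor_pct: float    # planned share of contractors
    senior_pct: float        # planned share of senior staff and leads

trajectory = [
    QuarterPlan("2014Q1", {"Home Office": 60, "India": 30}, 0.40, 0.15),
    QuarterPlan("2014Q2", {"Home Office": 55, "India": 38}, 0.35, 0.18),
    # ...one entry per quarter until the target composition is reached
    QuarterPlan("Target", {"Home Office": 40, "India": 55}, 0.25, 0.25),
]

def fits_trajectory(actual: QuarterPlan, planned: QuarterPlan,
                    tolerance: float = 0.05) -> bool:
    """Weigh a proposed staff transaction (hire, rotation, layoff) by
    checking the resulting composition against the quarter's plan."""
    return (abs(actual.contractor_pct - planned.contractor_pct) <= tolerance
            and abs(actual.senior_pct - planned.senior_pct) <= tolerance)
```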

In today’s global competition, this global team approach is key to competitive advantage, and essential for competitive parity if you are, or aspire to be, a top international company. It would be great to hear your perspectives and any feedback on how you or your company have been successful (or unsuccessful) at achieving a global team.

I will add a subsequent reference page with Workforce Plan templates that can be leveraged by teams wishing to start this journey.

Best, Jim Ditmore

Keeping Score and What’s In Store for 2014

Now that 2013 is done, it is time to review my predictions from January last year. For those keeping score, I had six January predictions for Technology in 2013:

6. 2013 is the year of the ‘connected house’ as standards and ‘hub’ products achieve critical mass. Score: Yes! – A half dozen hubs were introduced in 2013 including Lowe’s and AT&T’s as well as SmartThings and Nest. The sector is taking off but is not quite mainstream as there is a bit of administration and tinkering to get everything hooked in. Early market share could determine the standards and the winners here.

5. The IT job market will continue to tighten requiring companies to invest in growing talent as well as higher IT compensation. Score: Nope! – Surprisingly, while overall unemployment declined from 7.9% to 7.0% over 2013, tech sector unemployment ticked up slightly from 3.3% to 3.9% in the 3rd quarter (4Q numbers are not yet available). However, this uptick seems to be caused by more tech workers switching jobs (and thus quitting old jobs), perhaps due to more confidence and better pay elsewhere. Look for a continued tight supply of IT workers: the Labor Department predicts that another 1.4M IT workers will be required by 2020, while there will be only 400K IT graduates during that time!

4. Fragmentation will multiply in the mobile market, leaving significant advantage to Apple and Samsung as the only companies commanding premiums for their products. Score: Yes and no – Fragmentation did occur in the Android segment, but the overall market consolidated greatly, and Samsung and Apple continued in 2013 to capture the lion’s share of all profits from mobile and smart phones. Android picked up market share (and fragmented into more players), as did Windows Phone, notably in Europe. Apple dipped some, but the greatest drop was in ‘other’ devices (Symbian, BlackBerry, etc.). So expect a 2014 market dominated by Android and iOS, with Windows Phone a distant third. Apple will be hard pressed to come out with lower-cost volume phones to encourage entry into its ecosystem, and Windows Phone will need to grow well beyond current levels, especially in the US or China, to truly compete.

3. HP will suffer further distress in the PC market both from tablet cannibalization and aggressive performance from Lenovo and Dell. Score: Yes! – Starting with the 2nd quarter of 2013, Lenovo overtook HP as the worldwide leader in PC shipments and then widened the gap in the 3rd quarter. Dell continued to outperform the overall market and finished a respectable second in the US and third in the world. Overall PC shipments continued to slide, with an 8% drop from 2012, in large part due to tablets. Windows 8 did not help shipments, and there does not appear to be a major resurgence coming in the near term. Interestingly, as with smart phones, a major consolidation is occurring around the top 3 vendors in the market; again, ‘other’ is the biggest loser of market share.

2. The corporate server market will continue to experience minimal increases in volume and flat or downward pressure on revenue. Score: Yes! – Server revenues declined year over year in the first three quarters of 2013 (declines of 5.0%, 3.8%, and 2.1% respectively). Units shipped treaded water, with a 0.7% decline in the first quarter, a 4% uptick in the second, and a slight 2% increase in the third. I think 2014 will show more robust growth with greater business investment.

1. Microsoft will do a Coke Classic on Windows 8. Score: Yes and no – Windows 8.1 did put back the Start button, but retained much of the ‘Metro’ interface. Perhaps best cast as the ‘Great Compromise’, Windows 8.1 was a half step back to the ‘old’ interface and a half step forward to a better integrated user experience. We will see how the ‘one’ user experience across all devices works for Microsoft in 2014.

So, the final score was: 3 came true, 2 mostly came true, and 1 did not, for a total score of 4. Not too bad, though I expected a 5 or 6 🙂 . I will re-check the score when the end-of-year IT unemployment figures come out to see if the strengthening job market made up for the 3rd quarter setback.

For IT managers, strong, robust competition among vendors is important, so it was good to see both Microsoft and HP come out swinging in 2013. Maybe they did not land many punches, but it is good to have them back in the game.

Given it is the start of the year, I thought I would map out some of the topics I plan to cover in my posts this coming year. As you know, the focus of Recipe for IT is useful best practice techniques and advice that work in the real world and enable IT managers to be more successful. In 2013, we had a very successful year with over 43,000 views from over 150 countries (most from the US, UK, India, and Canada). And I wish to thank the many who have contributed comments and feedback; it has really helped me craft a better product. So with that in mind, please provide your perspective on the upcoming topics, especially if there are areas you would like to see covered that are not listed.

For new readers, I have structured the site into two main areas: posts, which are short, timely essays on a particular topic, and reference pages, which often take a post and provide a more structured and possibly deeper view of the topic. The pages are intended to be an ongoing reference of best practice for you to leverage. You can reach the reference pages from the drop-down links on the home page.

For posts, I will continue the discussion on cloud and data centers. I will also explore flash storage and the continuing impact of mobile. Security will invariably be a topic. Some of you may have noticed that some posts appear first on InformationWeek and subsequently here; this helps increase the exposure of Recipe for IT and also ensures good editing (!).

For the reference pages, I have recently refined and will continue to improve the production and quality areas. Look also for updates and improvements to leadership as well as the service desk.

What other topics would you like to see explored? Please comment and provide your feedback and input.

Best, and I wish you a great start to 2014,

Jim Ditmore

Celebrate 2013 Technology or Look to 2014?

The year is quickly winding down and 2013 will not be remembered as a stellar year for technology. Between the NSA leaks and Orwellian revelations, the Healthcare.gov mishaps, the cloud email outages (and Yahoo’s is still lingering) and now the 40 million credit identities stolen from Target, 2013 actually was a pretty tough year for the promise of technology to better society.

While the breakneck progress of technology continued, we witnessed many shortcomings in its implementation. Fundamental gaps in large project delivery and in availability design and implementation continue to plague large, widely used systems. It is as if the primary design lessons of ‘Galloping Gertie’ regarding resonance were never absorbed by bridge builders; the costs of such major flaws in these large systems are certainly similar to those of a failed bridge. And as it turns out, if there is a security flaw or loophole, either the bad guys or the NSA will exploit it. I particularly liked the NSA’s use of ‘smiley faces’ in internal presentations when they found a major gap in someone else’s system.

So, given that 2013 has shown the world we live in all too clearly, as IT leaders let’s look to 2014 and resolve to do things better. Let’s continue to up the investment in security within our walls and be more demanding of our vendors to improve their security. Better security is the number 2 focus item (behind data analytics) for most firms and the US government, and security spend will increase an outsized amount even as total spend goes up by 5%. This is good news, but let’s ensure the money is spent well and we make greater progress in 2014. Of course, one key step is to get Windows XP out of your environment by March, since it will no longer be patched by Microsoft. For a security checklist, my best practices security reference page is a good start.

As for availability, remember that quality provides the foundation of availability. Whether in design, implementation, or change, quality must be woven throughout these processes to enable robust availability and meet the demands of today’s 7×24 mobile consumers. Resolve to move your shop from craft to science in 2014, and make a world of difference in your company’s interface to its customers. Again, if you are wondering how best to start this journey and make real progress, check out this primer on availability.

Now, what should you look for in 2014? As with last January, where I made 6 predictions for 2013, I will make 6 technology predictions for 2014. Here we go!

6. There will be consolidation in the public cloud market as smaller companies fail to gather enough long-term revenue to survive and compete in a market with rapidly falling prices. Nirvanix was the first of many.

5. The NSA will get real governance, though it will be secret governance. There is too much of a firestorm for things to continue in their current form.

4. Dual-SIM phones will become available in major markets. This is my personal favorite wish-list item, and it should come true in the Android space by 4Q.

3. Microsoft’s ‘messy’ OS versions will be reduced, but Microsoft will not deliver on the ‘one’ platform. Expect Microsoft to drop RT and continue to incrementally improve Pro and Enterprise to be more like Windows 7. As for Windows Phone OS, it is a question of sustained market share and the jury is out. It should hang on for a few more years though.

2. With a new CEO, a Microsoft breakup or spinoffs are in the cards. The activist shareholders are holding fire while waiting for the new CEO, but will be applying the flame once again. The effects? How about Office on the iPad? Everyone else is giving away software and charging for hardware and services, forcing an eventual change in the Microsoft business model.

1. A flash revolution in the enterprise. What looked at the start of 2013 to be 3 or more years out now looks like this year. With flash storage available at prices (with de-duplication) comparable to traditional storage, and with 90% reductions in environmentals, the trickle will become a stampede as the next generation of flash costs significantly less than disk storage.

What are your top predictions? Anything to change or add?

I look forward to your feedback and next week I will assess how my predictions from January 2013 did — we will keep score!

Best, and have a great holiday,

Jim Ditmore

How Did Technology End Up on the Sunday Morning Talk Shows?

It has been two months since the Healthcare.gov launch, and by now nearly every American has heard of or witnessed the poor performance of the websites. Early on, only one of every five users was able to actually sign in to Healthcare.gov, and poor performance and unavailable systems continued to plague the federal and some state exchanges. Performance was still problematic several weeks into the launch, and even as of Friday, November 30, the site was down for 11 hours for maintenance. As of today, December 1, the promised ‘relaunch day’, the site appears ‘markedly improved’, but there are plenty more issues to fix.

What a sad state of affairs for IT. So, what do the Healthcare.gov issues teach us about large project management and execution? Or further, about quality engineering and defect removal?

Soon after the launch, former federal CTO Aneesh Chopra, in an Aspen Institute interview with The New York Times’ Thomas Friedman, shrugged off the website problems, saying that “glitches happen.” Chopra compared the Healthcare.gov downtime to the frequent appearances of Twitter’s “fail whale” as heavy traffic overwhelmed that site during the 2010 soccer World Cup.

But given that the size of the signup audience was well known and that website technology is mature and well understood, how could the government create such an IT mess? Especially given how much lead time the government had (more than three years) and how much it spent on building the site (estimated between $300 million and $500 million).

Perhaps this is not quite so unusual. Industry research suggests that large IT projects are at far greater risk of failure than smaller efforts. A 2012 McKinsey study revealed that 17% of IT projects budgeted at $15 million or higher go so badly as to threaten the company’s existence, and more than 40% of them fail. As bad as the U.S. healthcare website debut is, there are dozens of examples, both government-run and private, of similar debacles.

In a landmark 1995 study, the Standish Group established that only about 17% of IT projects could be considered “fully successful,” another 52% were “challenged” (they didn’t meet budget, quality or time goals) and 30% were “impaired or failed.” In a recent update of that study conducted for ComputerWorld, Standish examined 3,555 IT projects between 2003 and 2012 that had labor costs of at least $10 million and found that only 6.4% of them were successful.

Combining the inherent problems associated with very large IT projects with outdated government practices greatly increases the risk factors. Enterprises of all types can track large IT project failures to several key reasons:

  • Poor or ambiguous sponsorship
  • Confusing or changing requirements
  • Inadequate skills or resources
  • Poor design or inappropriate use of new technology

Unfortunately, strong sponsorship and solid requirements are difficult to come by in a political environment (read: Obamacare), where too many individual and group stakeholders have reason to argue with one another and change the project. Applying the political process of lengthy debates, consensus-building and multiple agendas to defining project requirements is a recipe for disaster.

Furthermore, based on my experience, I suspect the contractors doing the government work encouraged changes, as they saw an opportunity to grow the scope of the project with much higher-margin work (change orders are always much more profitable than the original bid). Inadequate sponsorship and weak requirements were undoubtedly combined with a waterfall development methodology and an overall big bang approach, as usually specified by government procurement methods. In fact, early testimony by the contractors ‘cited a lack of testing on the full system and last-minute changes by the federal agency’.

Why didn’t the project use an iterative delivery approach to hone requirements and interfaces early? Why not start with healthcare site pilots and betas months or even years before the October 1 launch date? The project was underway for three years, yet nothing was made available until October 1. And why did the effort leverage only an already occupied pool of virtualized servers with little spare capacity for a major new site? For less than 10% of the project cost, a massive dedicated server farm could have been built. Further, there was no backup site, nor were any monitoring tools implemented. And where was the horizontal scaling design within the application to enable easy addition of capacity for unexpected demand? It is disappointing to see such basic misses in non-functional requirements and design in a major program, for a system that is not that difficult or unique.

These basic deliverables and approaches appear to have been missed entirely in the implementation of the website. Further, the website code appears to have been quite sloppy, not even using common caching techniques to improve performance. Thus, in addition to suffering from weak sponsorship and ambiguous requirements, the program failed to leverage well-known best practices for the technology and design.
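
For illustration only, one of the simplest such techniques is caching slowly changing reference data (plan catalogs, rate tables) so it is fetched once per interval rather than on every request. Here is a minimal sketch, with a hypothetical stand-in for the expensive backend call:

```python
# Minimal caching sketch. fetch_plans_from_backend() is a hypothetical
# stand-in for an expensive backend or database call.
import time
from functools import lru_cache

CACHE_TTL_SECONDS = 300  # refresh reference data at most every 5 minutes

def fetch_plans_from_backend():
    # Imagine a slow service call here; illustrative data only.
    return ["bronze", "silver", "gold", "platinum"]

@lru_cache(maxsize=8)
def _cached_plans(ttl_bucket: int):
    # One backend call per TTL window; repeat requests hit the cache.
    return fetch_plans_from_backend()

def get_insurance_plans():
    return _cached_plans(int(time.time() // CACHE_TTL_SECONDS))
```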

One would have thought that, given the scale of and expenditure on the program, top technical resources would have been allocated to ensure these practices were used. Instead, the feds are now scrambling with a “surge” of tech resources for the site. And while the new resources and leadership have made improvements so far, the surge will bring its own problems. It is very difficult to effectively add resources to an already large program, and new ideas introduced by the ‘surge’ resources may not be accepted or easily integrated. If the issues are deeply embedded in the system, it will be difficult for the new team to fully fix the defects. For every 100 defects identified in the first few weeks, my experience with quality suggests there are 2 or 3 times more defects buried in the system. Furthermore, if the project couldn’t handle the “easy” technical work (sound website design and horizontal scalability), how will it handle the more difficult challenges of data quality and security?
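
As a rough worked example of that multiplier (my rule of thumb from experience, not an industry constant):

```python
# Rough arithmetic for the 2-3x latent defect rule of thumb above.
found_early = 100
for multiplier in (2, 3):
    still_buried = found_early * multiplier
    print(f"x{multiplier}: ~{still_buried} defects still buried, "
          f"~{found_early + still_buried} in total")
```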

These issues will become more apparent in the coming months when the complex integration with backend systems from other agencies and insurance companies becomes stressed. And already the fraudsters are jumping into the fray.

So, what should be done, and what are the takeaways for an IT leader? Clear sponsorship and proper governance are table stakes for any big IT project, but in this case more radical changes are in order. Why have all 36 states and the federal government roll out their healthcare exchanges in one waterfall, big bang approach? The sites that are working reasonably well (such as the District of Columbia’s) were developed independently. Divide the work up where possible, and move to an iterative or spiral methodology. Deliver early and often.

Perhaps even use competitive tension by having two contractors compete against each other in each cycle. Pick the one that performs best and then start over on the next cycle. But make them sprints, not marathons: three- or six-month cycles should do it. The team that meets the requirements on time earns the opportunity to bid on the next cycle, and any contractor that doesn’t clear the bar is barred from the next round. Now there’s no payoff for a contractor encouraging endless changes, and you have broken the work into more doable components that can be improved in each subsequent implementation.

Finally, use only proven technologies. And why not ask the CIOs or chief technology architects of a few large-scale Web companies to spend a few days reviewing the program and designs at appropriate points? It’s the kind of industry-government partnership we would all like to see.

If you want to learn more about how to manage (and not manage) large IT programs, I recommend “Software Runaways” by Robert L. Glass, which documents some spectacular failures. Reading the book is like watching a traffic accident unfold: it’s awful, but you can’t tear yourself away. I also expand on the root causes of and remedies for IT project failures in my post on project management best practices.

And how about some projects that went well? Here is a great link to the 10 best government IT projects in 2012!

What project management best practices would you add? Please weigh in with a comment below.

Best, Jim Ditmore

This post was first published in late October in InformationWeek and has been updated for this site.