IT Service Desk: Turning around a ‘helpless’ desk

In one of our earliest posts on service desks, I mentioned how an inherited service desk had delivered such poor service that users referred to it as the ‘Helpless Desk’ rather than the Help Desk. With that in mind, for IT leaders facing a poor service situation at this most important customer interface, this post outlines how to stabilize and then turn around your service desk. For those new to this site, there is a service desk reference page as well as posts covering service desk elements and best practices.
Service Desks can underperform for a number of reasons, but ongoing poor performance is generally due to a confluence of causes. Typically, underlying issues thrust service desks into poor performance when combined with major changes to the supply (agent capacity) or the demand (the calls and requests coming into the desk). It is important to recognize that service desks are, in essence, call centres. Call centre performance is driven by supply and demand, with an effective service at an efficient cost representing equilibrium – the point at which the competing forces of supply and demand are optimized against each other. A supply side or demand side ‘shock’ can move the state of equilibrium to a point outside of management control and, if there are other fundamental issues, result in sustained underperformance by the Service Desk.

There is a ‘tipping point’ within call centre mechanics at which the rate of deterioration becomes exponential – the gentle gradient of deterioration does not last long before service falls over the cliff edge (wait times measured in seconds quickly become minutes, then tens of minutes – even hours). Calls are abandoned by customers, with call backs adding further volume. Agents become overworked and stressed by the tone of the calls, their efficiency reduces and attrition goes up, exacerbating any supply shortage. These dynamics also work in reverse, so what can seem an insurmountable problem can in fact be rapidly returned to stability if managed appropriately.
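For the quantitatively minded, the ‘cliff edge’ can be illustrated with the classic Erlang C queueing model that underpins call centre staffing. A minimal sketch, where the volumes and handle times are purely illustrative:

```python
from math import factorial

def erlang_c_wait(calls_per_hour, avg_handle_secs, agents):
    """Average speed of answer (seconds) from the Erlang C queueing model."""
    offered = calls_per_hour * avg_handle_secs / 3600.0  # offered load in Erlangs
    if agents <= offered:
        return float("inf")                              # queue grows without bound
    rho = offered / agents                               # agent occupancy
    top = offered ** agents / factorial(agents)
    p_wait = top / ((1 - rho) * sum(offered ** k / factorial(k)
                                    for k in range(agents)) + top)
    # Expected wait = P(wait) / (c*mu - lambda), expressed in seconds
    return p_wait * avg_handle_secs / (agents * (1 - rho))

# 300 calls/hour at a 6-minute handle time is 30 Erlangs of load.
# Watch the wait explode as headcount shrinks only slightly:
for agents in (36, 33, 31):
    print(agents, "agents ->", round(erlang_c_wait(300, 360, agents), 1), "secs")
```

With 30 Erlangs of offered load, the wait is modest at 36 agents but grows disproportionately as headcount drops toward the load itself – the non-linear deterioration described above.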

Common supply side issues include:

  • Organisations increasingly use headcount caps and quotas to control their cost base. As the quota filters through the organisation, there can be a tendency to retain ‘higher end’ roles, which means the Service Desk comes under particular scrutiny and challenge. A reduction in the supply of labour (without equivalent changes in demand) can very quickly lead to significant service deterioration.
  • Similarly, Service Desk tends to be a function in which organisations have an uplifted appetite to make organisational changes to outsource and offshore (and similarly insource and onshore as the cycle runs). The wholesale replacement of the Service Desk workforce is a fairly common scenario within the industry and is frequently the root cause of acute service issues in the run up to change (as attrition without replacement bites) and during and post change (as a new workforce struggles to support their new customer base).
  • Any issue or initiative that either reduces the availability of agents to handle live calls or significantly increases the transaction time to do so can very quickly have a catastrophic impact on service. For example: the implementation of a new Service Management toolset is likely to elongate call duration in the short to medium term; a call centre with a high attrition rate will constantly lose agents just as they start to perform, replacing them with trainees performing at a sub-optimal level; and a call centre operating at too high an occupancy level will quickly burn out staff and suffer increasing absenteeism.

Demand side issues commonly include:

  • Growth of the user base, generating an uplifted volume of contacts to the Service Desk.
  • An increase in contacts per supported user, driven by increasing IT usage or deterioration of IT performance (this is frequently driven by Business or IT change activity delivered by Projects and Programmes – such as the deployment of a new application to the workforce).

Irrespective of the root cause of the failure, service remediation needs to be a concerted effort combining strong analysis with disciplined planning and focused execution. Identifying that there is an issue and responding appropriately in a timely manner should happen automatically if you are already operating with maturity from fact-based metrics that have a healthy mix of lead and lag indicators. If the organisation is less mature in its use of metrics (and particularly lead indicators) then the ‘crisis’ is not likely to be noticed (or at least taken seriously by senior leadership) until after the Service Desk hits the tipping point and service is deteriorating at an alarming pace, generating severe customer dissatisfaction (i.e., until it is too late).
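As a simple illustration of a lead indicator, a short rolling trend on average speed of answer (ASA) can project an SLA breach before it occurs. A sketch, with an assumed 60-second SLA and a five-day window:

```python
def lead_indicator_alert(asa_by_day, sla_secs=60, window=5):
    """Project a short rolling ASA trend forward and flag a likely SLA breach."""
    recent = asa_by_day[-window:]
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)  # seconds of drift per day
    projected = recent[-1] + slope * window               # ASA one window ahead
    return projected > sla_secs

# Still inside a 60-second SLA today, but drifting upward fast:
print(lead_indicator_alert([22, 25, 31, 38, 47]))   # True: act now
print(lead_indicator_alert([24, 23, 25, 24, 23]))   # False: stable
```

The thresholds and window are assumptions; the point is that a trend-based lead indicator fires while a lag indicator (SLA attainment this month) still looks green.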

Remediating a failing Service Desk requires multiple and varied actions dependent upon the root cause of the issues. The approach to identifying and rectifying those root causes can be managed effectively by following a logical framework.

Step 1 – Stabilize

If service has tumbled over the tipping point and is deteriorating rapidly, there is going to be little sponsorship for an extended analysis and planning exercise. Results – or at least avoiding further deterioration of performance – will be expected immediately. Your first priority is to create the space to put together a meaningful recovery plan.

Do everything that you can to boost the supply side in the short term (overtime, shift amendments, cessation of any non-time-critical, non-customer-contact work by your agents, diverting resources from other positions into customer service roles, bringing in temporary resources, etc.). This will not fix the issue and is not a sustainable containment strategy; it will however create the window of opportunity you require and give a much needed boost to stakeholder confidence that the ‘freefall’ may be over. By itself, it will reduce the cycle of abandons and call backs that creates additional work for the team.

Similar attention should be paid to demand side actions, though fewer of these can be deployed immediately. If recent system or application rollouts are generating questions and calls, it may be worthwhile to send out an FAQ or quick tips to help employees resolve issues without calling the service desk. Any self-help initiatives already in the pipeline could also be accelerated to remove some calls. While these actions are more likely to form elements of your strategic recovery plan, they may provide some level of relief.

Step 2 – Undertake the Analysis

Your leadership group and analysts need to undertake the analysis to understand why service has deteriorated. What has gone wrong, when, where and why? If your desk has been performing well (or even adequately) for some time, remember that a recent ‘change’ in either the demand or supply side is likely to be the root cause.

If the desk has been underperforming for a significant period, there are likely to be more systemic causes of the failure and so a full strategic review of your operations is required. Reading the full set of Service Desk Best Practices published within Recipes for IT will provide guidance on the areas of focus required.

After understanding your call volumes and their trends (everything from call time to why customers are calling) you should be able to identify some of the root causes. Are there new issues that are now in the top 10 call reasons? Are your call times elongated? Have call volumes or peaks increased? For each shift in metrics, analyze for the following:

  • determine if the root cause for a customer call is due to:
      • system issues or changes
      • user training or understanding
      • lack of viable self-service channel
  • identify if increases in calls are due to:
      • underlying user volume increases or growth
      • new user technologies or systems
      • major rollouts or releases that are problematic
  • or if service is suffering due to:
      • lack of resources or mismatched resource levels and call volumes
      • inadequate training of service desk personnel
      • new or ineffective toolsets that elongate service time
      • inefficient procedures or poor engagement
      • high attrition or loss of personnel

If you do not have adequate metrics to support the analysis, then you will need to establish basic metrics collection as the first, fundamental step.
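Once basic metrics exist, even a very simple comparison of call-reason volumes between two periods will surface new top drivers. A minimal sketch, using invented reason codes:

```python
from collections import Counter

def top_call_shifts(baseline, current, n=5):
    """Rank call reasons by volume growth between two reporting periods."""
    before, after = Counter(baseline), Counter(current)
    growth = {reason: after[reason] - before.get(reason, 0) for reason in after}
    return sorted(growth.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical reason codes pulled from two months of ticket data:
last_month = ["password reset"] * 40 + ["printer issue"] * 10
this_month = ["password reset"] * 45 + ["printer issue"] * 12 + ["new CRM errors"] * 30

print(top_call_shifts(last_month, this_month, n=2))
```

Here a problematic rollout immediately surfaces as the largest incremental call driver, pointing the root cause analysis at the change rather than at the desk itself.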

Step 3 – Construct the Recovery Plan

Constructing the recovery plan needs to be genuinely solution-oriented and outcome-focused. The objective of the plan is not usually to resolve the source of the ‘shock’ and return to the old equilibrium (e.g., we aren’t likely to want to back out the new Service Management toolset that we have just implemented – we will want to build forward). The objective is to detail the actions required to resolve the issues identified, as well as build a solid foundation that allows us to move back to a steady state operation, delivering with quality and consistency to our SLA.

A good recovery plan will be clear about what actions are to be undertaken, by who, when, to achieve which specific deliverable and with specific measures and metrics tracking progress to achievement of the overall outcome.

The plan needs to prioritise actions whose positive impact is of a scale and pace commensurate with the service issues being experienced. A plan crowded with many and varied actions creates a false sense of comfort for those involved in the crisis and will almost certainly hinder genuine service improvement. Targeted action is required, and this needs discipline and skill from the plan owner to ensure that benefits will be realised, will be relevant to the problem statement, and that the actions in aggregate will move bottom line performance to where it needs to be.

We recommend a recovery plan that has the following elements:

a. Maintain an interim staffing boost to stabilize the service desk until other work is completed

b. If clear problem causes are identified (poorly rolled out systems, ongoing defective systems causing high volumes of calls) then ensure these areas are high priority for fixes on the next release or maintenance cycle.

c. Match resources to demand cycles based on current volumes and call handle times. Then forecast proper resource levels based on improvement initiatives and their completion dates.

d. If self service can address a significant volume of calls, these should also be a top priority for investment as this solution is also usually an overall cost save as well as service experience improvement (e.g. password resets).

e. Ensure your service desk staff can efficiently handle calls — proper training, tool adjustments, thoughtful goals, incentives and a productive environment.

f. Address staff recruiting as well as development, incentives, training and career progression to ensure you will have an engaged and well-trained staff to deliver exceptional service.

g. Review your IVR and call centre technology setup and look to optimize menus, self-service, and call back options. Specialize resources into pools as appropriate to improve efficiency.

h. Define strategic service goals and SLAs along with initiatives to achieve them (e.g., additional or different sites, knowledge management tools, revamp of problem systems, etc).
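Item (c) above, matching resources to demand cycles, reduces to a workload calculation per interval. A simplified sketch, with assumed shrinkage and occupancy targets that you would replace with your own figures:

```python
import math

def agents_required(calls_in_interval, avg_handle_secs,
                    interval_secs=1800, shrinkage=0.30, occupancy_cap=0.85):
    """Headcount needed to staff one interval (shrinkage/occupancy are assumptions)."""
    workload = calls_in_interval * avg_handle_secs / interval_secs  # Erlangs of work
    on_phone = workload / occupancy_cap        # cap occupancy to avoid agent burnout
    return math.ceil(on_phone / (1 - shrinkage))  # gross up for breaks, training, leave

# A half-hour peak of 120 calls at a 6-minute average handle time:
print(agents_required(120, 360))
```

Running this per half-hour across the day produces the staffing curve to roster against; re-running it with the handle times expected after improvement initiatives complete gives the forecast resource levels the plan calls for.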

Step 4 – Execute the Recovery Plan

Ensure that the plan is owned by an individual with the gravitas, influence, experience and focus to manage it through with real pace and drive. Ideally, the individual should not own actions within the plan itself (as this undermines their ability to hold everyone fully to account and removes their impartiality when inevitable conflicting priorities arise).

The plan can (and should) be meaningfully tuned as you progress with delivery. It should not however be a constant exercise in planning and re-planning and particular focus needs to be applied to ensure that commitments (delivery to time, cost and quality) are met by all action owners.

Communicate the issue, the plan, progress & improvement achieved to date and upcoming actions from your recovery plan to stakeholders. Ensure that stakeholder management is a specific activity within your plan and that you pay particular attention to each stakeholder group as a constituency. The role of senior leaders in recovery situations should be to protect the operations team to enable it to focus on delivery through the management of senior clients and customers and to ensure that the required resources to remediate the issues are provided.

Step 5 – Take a Look Back

Once service has been remediated and stabilised there are a number of residual actions to undertake.

  • As additional resources were utilised in the recovery effort (holidays restricted, time off in lieu accumulated, overtime paid etc…) there may well be negative service and / or financial implications of those decisions. It is important to quickly understand any such impacts and to manage them appropriately (e.g., review the holiday allocation process to ensure accumulated holidays can still be scheduled without a bottleneck, determine whether to grant time off in lieu for extra hours worked or to pay overtime, ensure that departments and functions who have been loaning staff to the front line receive support and resources to now clear their backlogs quickly etc.).
  • Review the control processes and responsiveness of your Service Desk in the identification of the issue / issues and how this could be improved upon in the future (in particular the use of lead and lag performance metrics). The ‘root causes’ identified should be eliminated or carefully tracked to ensure that future occurrences can be identified and dealt with before they manifest as service impact to your customers.
  • Ensure that the findings of your root cause analysis are communicated to and understood by your stakeholders. Be honest, be clear and be candid about what has happened, why and the measures that are now in place to prevent / mitigate any future such occurrences.
  • Say Thank You as the milestones are completed. A number of people will have participated in the recovery effort, some very explicitly and others in a very low key manner (for example by absorbing extra workload from colleagues seconded to the front line). Recognising their contribution and taking the time to say Thank You will ensure that your team feel rewarded for their efforts and motivated to stand shoulder to shoulder in tackling future adverse events that impact customer service.

And with these efforts, you will have turned the ‘helpless desk’ into a positive key interface for your customers.

Best, Steve Wignall and Jim Ditmore


Another Wave of Security Breaches: Meeting It with Security Best Practices

With the latest breaches in the news, I felt it was important to map out base practices as well as some of the best practices in information security. In the age of LulzSec, industrial espionage, and everyday breaches, it’s more important than ever to be proactive about security. I consulted with several top security engineers that I have worked with in the past to construct these practices. Much of this post was first published in early April in Information Week and I have updated it further. Unfortunately, this area should be a top priority for IT leaders to protect their firms, customers and information. If it’s not at your firm, you need to change that. Best, Jim.

PS. Here is a good reference on the biggest data breaches of the past 15 years to help you get the investment required to properly implement IT security.

Mark Twain observed 150 years ago: “A lie can travel halfway round the world while the truth is putting on its shoes.” With the advent of social media, these days that lie has likely made it all the way around the world and back while the truth is still in bed.

And today it is not just false information that travels – it’s confidential information, your customers’ information or your company’s intellectual property that is spirited away. The pace and sophistication of attacks by hackers and others who expose confidential data and emails has increased dramatically. For their latest exploit, a group calling itself LulzSec Reborn recently hacked a military dating website, releasing the usernames and passwords of more than 170,000 of the site’s subscribers.

Then there are the for-profit attacks by nation states and companies seeking intellectual property, and fraud by organized crime outfits. Consider the blatant industrial espionage conducted against Nortel and more recently, AMSC, or the recent fraud attack against Global Payments. These are sobering stories of how companies falter or fail in part due to such espionage.

One of a CIO’s most critical responsibilities is to protect his or her company’s information assets. Such protection often focuses on preventing others from entering company systems and networks, but it must also identify and prevent data from leaving. The following recommendations can help you do this. They are listed in two sections: conventional measures that focus on system access, and best practices given the profiles of today’s attacks.

Conventional Measures:

Establish a thoughtful password policy. Sure, this is pretty basic, but it’s worth revisiting. Definitely require that users change their passwords regularly, but set a reasonable frequency – any less than three months and users will write their passwords down, compromising security. As for password complexity, require at least six or seven characters, with one capital letter and one number or other special character.
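A password check along these lines can be sketched as follows; the length threshold and special-character set here are assumptions to align with your own standard:

```python
import re

def meets_policy(password, min_len=7):
    """Check a password against the policy above: minimum length,
    at least one capital letter, and one digit or special character.
    Thresholds and the special-character set are illustrative."""
    return (len(password) >= min_len
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9!@#$%^&*]", password) is not None)

print(meets_policy("Winter42x"))   # complies with the sketched policy
print(meets_policy("sunshine"))    # too weak: no capital, no digit/special
```

In practice this belongs in your identity platform's policy configuration rather than custom code, but the rules it enforces should be this explicit.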

Publicize best security and confidentiality practices. Do a bit of marketing to raise user awareness and improve security and confidentiality practices. No security tool can be everywhere. Remind your employees that security threats can follow them home from work or to work from home. Help your employees take part in your company’s security practices — there is a good post on this at How To Make Information Security Everyone’s Problem.

Install and update robust antivirus software on your network and client devices. Enough said, but keep it up-to-date and make it comprehensive (all devices).

Review access regularly. Also, ensure that all access is provided on a “need-to-know” or “need-to-do” basis. This is an integral part of any Sarbanes-Oxley review, and it’s a good security practice as well. Educate your users at the same time you ask them to do the review. This will reduce the possibility of a single employee committing fraud through access retained from a previous position.

Put in place laptop bootup hard drive encryption. This encryption will make it very difficult to expose confidential company information via lost or stolen laptops, which is still a big problem. Meanwhile, educate employees to avoid leaving laptops in their vehicles or other insecure places.

Require secure access for “superuser” administrators. Given their system privileges, any compromise to their access can open up your systems completely. Ensure that they don’t use generic user IDs, that their generic passwords are changed to a robust strength, and that all their commands are logged (and subsequently reviewed by another engineering team and management). Implement two-factor authentication for any remote superuser ID access.

Maintain up-to-date patching. Enough said.

Encrypt critical data only. Any customer or other confidential information transmitted from your organization should be encrypted. The same precautions apply to any login transactions that transmit credentials across public networks.

Perform regular penetration testing. Have a reputable firm test your perimeter defenses regularly.

A Thoughtful Set of Additional Current Best Practices: With the pace of change in technology and the rise of additional threats from hackers and state-sponsored espionage, your company’s security posture must adopt the latest techniques and be updated regularly. Here are the current best practices that I would highly recommend.

Provide two-factor authentication for customers. Some of your customers’ personal devices are likely to be compromised, so requiring two-factor authentication for access to accounts prevents easy exploitation. Also, notify customers when certain transactions have occurred on their accounts (for example, changes in payment destination, email address, physical address, etc.).
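For reference, the one-time codes generated by common authenticator apps follow the TOTP algorithm (RFC 6238). A minimal standard-library sketch; the secret shown is the RFC’s published test value, not a real credential:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password, as used by authenticator apps."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)  # 30-second windows
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret ("12345678901234567890" in base32):
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))
```

A real deployment would of course use a vetted library and per-user provisioned secrets; the sketch simply shows that the second factor is a time-bound, secret-derived code rather than anything a compromised password reveals.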

Secure all mobile devices. Equip all mobile devices with passcodes, encryption, and remote wipe capability. Encrypt your USB flash memory devices. On secured internal networks, minimize encryption to enable detection of unauthorized activity as well as diagnosis and resolution of production and performance problems.

Further strengthen access controls. Permit certain commands or functions (e.g., superuser) to be executed only from specific network segments (not remotely). Permit contractor network access via a partitioned secure network or secured client device.

Secure your sites from inadvertent outside channels. Implement your own secured wireless network, one that can detect unauthorized access, at all corporate sites. Regularly scan for rogue network devices, such as DSL modems set up by employees, that let outgoing traffic bypass your controls.

Prevent data from leaving. Continuously monitor for transmission of customer and confidential corporate data, with the automated ability to shut down illicit flows using tools such as NetWitness. Establish permissions whereby sensitive data can be accessed only from certain IP ranges and sent only to another limited set. Continuously monitor traffic destinations in conjunction with a top-tier carrier in order to identify traffic going to fraudulent sites or unfriendly nations.
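The IP-range permission idea can be sketched with a simple policy check; the ranges below are illustrative, and a real deployment would enforce this at the network or DLP layer rather than in a lookup function:

```python
import ipaddress

# Illustrative policy: sensitive data may only flow to these destination ranges.
ALLOWED_DESTINATIONS = [
    ipaddress.ip_network("10.20.0.0/16"),   # internal data-warehouse segment
    ipaddress.ip_network("192.0.2.0/24"),   # vetted partner (documentation range)
]

def destination_permitted(dest_ip):
    """Return True if a sensitive-data flow to dest_ip is within policy."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in ALLOWED_DESTINATIONS)

print(destination_permitted("10.20.5.9"))     # True: approved internal segment
print(destination_permitted("203.0.113.50"))  # False: alert and shut down the flow
```

The value is in the default-deny posture: any transmission of tagged data to a destination outside the approved set is flagged automatically, which is exactly the continuous monitoring the paragraph above describes.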

Keep your eyes and ears open. Continually monitor underground forums (“Dark Web”) for mentions of your company’s name and/or your customers’ data for sale. Help your marketing and PR teams by monitoring social networks and other media for corporate mentions, providing a twice-daily report to summarize activity.

Raise the bar on suppliers. Audit and assess how your company’s suppliers handle critical corporate data. Don’t hesitate to prune suppliers with inadequate security practices. Be careful about having a fully open door between their networks and yours.

Put in place critical transaction process checks. Ensure that crucial transactions (e.g., large transfers) require two personnel to execute, and that regular reporting and management review of such transactions occurs.

Best, Jim D.

In some ways you can view it as no longer a matter of if you get hacked, but when. Information Week has a special retrospective of news coverage, Monitoring Tools And Logs Make All The Difference, where they take a look at ways to measure your security posture and the challenges that lie ahead with the emerging threat landscape. (Free registration required.)

Key Steps to Building High Performance Teams

Today I have returned to a topic that is at the core of Recipes for IT: High Performance IT Teams. While tax day did take a bit of time and I am slightly delayed in posting this, I have actually laid out three accompanying posts or pages for today’s post. I think it is a good start on the complex topic of how to build or energize your team and create a high performing team. I look forward to your comments! Best, Jim

Building High Performance Teams: The essence of being a leader is defining a vision and compelling others to pursue and achieve that vision. Recently a good colleague relayed an article in Harvard Business Review describing how it is more difficult today to be an outstanding leader due to a number of factors, including the wider availability of knowledge, easier access to each other, and a reduced perception of the glory of the institutions that leaders represent.

And while I would agree these factors may make things more difficult to be a great leader, I tend to believe that we have just as many, if not more, good and decent men and women who are effective and even outstanding leaders today as ever in history. But because the circumstances are less dire (e.g., there is not a world war to require a Churchill) and because the competence has risen (yes, management is a far more analyzed and practiced field than ever before), there are not the towering gaps between the best and the average that might have previously been. So with a positive outlook on the competence of today’s managers and leaders, I have assembled a set of practices that I have leveraged or I have seen peers or other senior IT leaders use to build high performance IT teams.

The emerging senior IT leader, together with his or her senior management team, can use these practices to build a high performing team through the following steps:

Today’s post covers how to set such a vision, then define and cascade the goals to match your vision, align the incentives, and set the proper expectations and behaviors. I have constructed pages, linked above, on the next three steps. Subsequent posts will cover the remaining steps.

I think the aspiration of building a high performing team is a lofty, worthwhile, and achievable vision. If you have ever participated in a high performance team at the top of its game – in other words, a championship team – then you know the level of professional reward and sense of accomplishment that accompanies such membership. And for most companies that rely significantly on IT, a high performing IT team can make a very large difference in their products, their customer experience, and their bottom line. But if you set out to build such a team, it must be for a vision that is more than just the team: it must be to enable your company to achieve outsized goals of appropriate scale and aspiration. You will not attract or retain top talent and inspire others if you and your company have only modest goals.

So, first, consider your company’s goals and then outline what IT must become and must accomplish to enable corporate success of major significance. And then draw out the IT vision and goals that will enable that success. Do not get trapped by cloaking your vision in uninspired definitions (e.g., don’t state your vision as ‘Save $40M in costs’ but instead as ‘Become top quartile in IT efficiency for our industry and company size by 201x’). You can only state your vision in this manner, of course, if you mean it. So, I will assume you have true aspirations for your team to become a world class IT organization and you will meld those goals with your company’s goals into a compelling vision. Further, consider IT goals that match both your company’s service and operations goals and its product and innovation goals. Make sure the vision you define for IT drives both areas, looking to a two to three year horizon for the target. (Rebuilding or energizing a team usually takes such a time period to truly reach high performance, and you must lift the sight line to the horizon to ensure your team does not get trapped in just extending the everyday steps.)

Once you have defined a compelling vision, the next step is to set the right goals to achieve it. The right goals will logically cascade as mileposts on the journey to high performance as well as being inevitable products of achieving corporate excellence. I recommend framing such goals as the primary measures, by year, by which you will determine whether you are making the required progress toward your vision. For the upcoming year, it is often worthwhile to set quarterly milestones as well. These measures should be relevant, well-defined, and as quantifiable as possible, and they should be set at stretch but achievable levels. If, for example, your vision is for your company to become an industry leader in service quality, then you would want to set cascaded goals where the IT team dramatically improves its quality (so the systems enable much better customer service for the company) as well as delivers key workflow improvements or feature enhancements that lift the company’s service directly (such as the package tracking capability that Federal Express uses to ensure extremely high quality service). Ensure that your measures are not uni-dimensional, that is, covering only one aspect of what your company and thus your team must achieve. There should be clear focus in one area (e.g., quality and operational excellence, or product features, or speed, or innovation) but not to the full neglect of the other areas. Further, you should set at least modest goals for both cost and risk, otherwise these could become risk areas as your team pursues only one facet.

Once you have defined the right, cascading goals, you will need to reinforce them with a set of behaviors and expectations as well as aligned incentives. And the approach to achieving the goals should reflect the strengths of the team. For example, if one of your goals is to achieve outstanding quality, the measures for that goal may include process definition work and metrics implementation if your team has low maturity, or may jump right to leveraging already reported metrics and driving improved feedback cycles if it has high maturity. Further, if your team has an engineering bias you may approach the solution through robust root cause analysis and better design processes, whereas if the team has a strong collaboration approach you may reach the same quality goal through better peer reviews and additional coordination and validation of changes.

More important, though, is to reinforce your goals through aligned behaviors and expectations and, most importantly, incentives. For example, if you are looking to drive more predictable project delivery for the business, then having incentives that reward firefighting for some of your staff – when they contributed to the potential issues in the first place – will tremendously undermine how much the rest of the staff support your goals. Similarly, if you reward those who, while delivering a particular set of results, cause significant damage to other team members or ignore other standards or principles, then you will minimize the likelihood that such principles or standards will be followed in the future. It is important, as a leader, to reward not just the results but, more critically, how they were achieved. Often in organizations, those who quietly and effectively carry out significant projects with excellent team behaviors are neglected by management, when good leaders would call out those very same individuals for exemplary performance. Quite simply, your smartest team members will observe what you reward, and if you do not reinforce the values you are looking to achieve (productivity, quality, initiative, etc.), you will not get the changes desired.

Most importantly though, your behaviors as a leader must reflect the very same expectations you have outlined for your team. And you must demonstrate a tenacious focus on the goals and vision you have defined. Your behaviors must reinforce your expectations. Every day there are conflicts and setbacks; strong leaders turn these into episodes that strengthen rather than weaken the team. Understand that when you lead by example, you will make a daily difference and demonstrate to your team what should be done. Your focus, discipline, thoughtfulness, and sacrifice for the team and its goals will not be lost and will result in better effort all around.

In sum, it is important to define a compelling vision and establish the right goals and incentives, but at each stage of the journey there will be key moments where you as a leader will either reinforce the vision, goals and principles you have set or undermine them. As leaders, we can easily step into the role of setting a direction, sponsoring a program or making a major decision, but for all the visibility and importance of these actions, it is the time we spend interacting with, communicating with and coaching our teams that will determine the effectiveness and reach of our goals.

Now, with the steps above, you will have the foundation to build a high-performance team and deliver sustainable, outstanding results.

What steps or approaches have you used to successfully define a vision and goals for a team? What would you change about this approach?

Best, Jim Ditmore

Why you want an Australian Pilot: Lessons for Outstanding IT Leadership

Perhaps you are wondering what nationality or culture has to do with piloting an airplane? And how could piloting an airplane be similar to making decisions in an IT organization?

For those of you who have read Outliers, which I heartily recommend, you will be familiar with the well-supported conclusions that Malcolm Gladwell draws:

  • that incredible success often follows strong parallels and patterns among high achievers, driven by factors you would not expect or easily discern
  • and that no one ever makes it alone

A very interesting chapter in Outliers is based on NTSB analyses of what occurred in the cockpit during several crashes, as well as the research of Dutch psychologist Geert Hofstede. What Hofstede found in his studies for IBM’s HR department in the 1970s and 1980s is that people from different countries and cultures behave differently in their work relationships. Not surprising, of course, and Hofstede did not rank countries as right or wrong but used the data to measure differences between cultures. One particularly interesting measure is the Power Distance Index (PDI). Countries with a high PDI have cultures where those in authority are treated with great respect and deference. In countries with a low PDI, those in authority go to great lengths to downplay their stature, and members feel comfortable challenging authority.

Now back to having an Australian pilot your plane: commercial aircraft, while highly automated and extremely reliable, are complex machines that, in difficult circumstances, require all of the crew to do their jobs well and in concert. But in multiple crashes in the 1990s and early 2000s, the NTSB found that crew communication and coordination were significant factors. And those airlines with crews from countries with high PDI scores had the worst records. Why? As Malcolm Gladwell lays out so well, it is because of the repeated deference of lower-status crew members to the captain piloting the plane. When the captain makes repeated mistakes, these crew members defer and do not vigorously call out the issues when it is their responsibility to do so, even to fatal effect. So, if you were flying in the 1990s, you would want your pilot to be from Australia, New Zealand, Ireland, South Africa, or the US, as these countries have the lowest PDI scores. It is worth noting that since that time, most airlines from high-PDI countries have incorporated crew responsibility training to overcome these effects, and all airlines have added further crew training on communication and interaction, resulting in the continued improvement in safety we witnessed this past decade.

But this experience yields insight into how teams operate effectively in complex environments. Simply put, the highest-performing teams are those with a low PDI, which enables team members to provide appropriate input into a decision. Further, once the leader decides, with this input, the team pivots quickly and with confidence to take on the new tack. Elite teams in our armed forces operate on very similar principles.

I would suggest that high-performance IT teams operate in a low-PDI manner as well. Delivering an IT system in today’s large corporations requires integrating a dozen or more technologies to deliver features that require multiple experts to fully comprehend. In contrast, if the project or organization is driven by a leader whose authoritarian style imposes high deference on all team members, so that alternative views cannot be expressed, then it is simply a matter of time before poor performance sets in. Strong team members and experts will look elsewhere for employment as their voices are not heard, and because one person cannot be an expert in everything required to succeed, delivery failure will occur. High-PDI leaders will not build sustainable high-performance teams.

Now, a low-PDI culture does not mean there is no structure or authority. Nor is the team a democracy. Instead, each team member knows their area of responsibility and understands that in difficult and complex situations, all must work together with flexibility to come up with ideas and options for the group to consider. Each member views their area as a critical responsibility and strives, in a disciplined way, to be the best at their competency. Leaders solicit data, recommended courses, and ideas from team members, and consider them fully. Discussion and constructive debate, where time and urgency allow, are encouraged. Leaders then make clear decisions, and once decided, everyone falls in line and provides full support and commitment.

In many respects, this is a similar profile to the Level 5 leader that Jim Collins wrote about, who mixes ‘a paradoxical blend of personal humility and professional will.’ These leaders feature a lack of pretense (low PDI) but a fierce resolve to get things done for the benefit of their company. Their modesty allows them to be approachable and ensures that the facts and expert opinions are heard. Their focus and resolve enable them to make clear decisions. And their dedication to the company and the organization ensures the company goals come first. (Of course, they also have the other personal and management strengths: intelligence, integrity, work ethic, and so on.)

Low PDI or Level 5 leaders set in place three key approaches for their organizations:

  • they set in place a clear vision and build momentum with sustained focus and energy, motivating and leveraging the entire team
  • they do not lurch from initiative to initiative or jump on the latest technology bandwagon; instead they judiciously invest in key technologies and capabilities that are core to their company’s competence and value and provide sustainable advantage
  • because they drive a fact-based, disciplined approach to decision-making as leaders, excessive hierarchy and bureaucracy are not required. Further, quality and forethought are built into processes, freeing the organization of excessive controls and verification.

To achieve a high-performance IT organization, these are the same leadership qualities required. Someone who storms around and makes all the key decisions without input from the team will not achieve a high-performance organization; nor will someone who focuses only on technology baubles rather than the underlying capabilities and disciplines. And someone who shrinks from key decisions and turf battles and does not sponsor his team will fail as well. We have all worked for bosses who exhibited these flaws, so we understand what happens and why there is a lack of enthusiasm in those organizations.

So, instead, resolve to be a Level 5 leader, and look to lower your PDI. Set a compelling vision; every day seek out the facts; press your team to be the best in the industry in their areas of expertise; and sponsor the dialogue that enables the best decisions, and then make them.

Best, Jim

The Accelerating Impact of the iPad on Corporate IT

Apple announced the iPad two years ago and began shipping it in April 2010. In less than two years, the rapidity and scale of the shock to the PC marketplace from the iPad has been stunning. The PC market trends in 2011 show PCs of all types (netbook, notebook, desktop) being cannibalized by tablet sales. And iPad sales (15.4M in 4Q11) are now equivalent to PC desktop sales. We saw desktop PC shipments slowing over the last several years due to the earlier advent of notebooks and then netbooks, but they are now stagnating and even dropping, in great part due to the iPad. With the release of the iPad 3 just around the corner (next week), these impacts will accelerate. And while the release of Windows 8 and new PC ultrabooks (basically PC versions of the MacBook Air) could improve shipments in 2012, the implications of this consumer shift are significant for corporate technology.

Just as IT managers used to determine what mobile devices their employees used (thus, invariably, BlackBerry) and companies are now adopting a ‘bring your own device’ (BYOD) approach to mobile, IT managers will need to shift from merely accommodating iPads as an additional mobile device to a full-fledged BYOD approach to client computing for their knowledge workers. Let them decide if they need a tablet, ultrabook, or laptop. Most front-office staff will also be better served by a mobile or tablet approach (consider the retail staff in Apple’s stores today). Importantly, IT will need to develop applications for the internet and tablets first, and for traditional PCs only for back-office and production workers.

The implications of this will cause further shock to the marketplace. Just as in the mobile device market, where the traditional consumer vendors were impacted first by the new smartphones (i.e., Nokia impacted first by Apple and Android) and then the commercial mobile vendor (BlackBerry), PC vendors are now seeing their consumer divisions severely impacted while their commercial segments show only modest growth. But front-office and knowledge workers will demand tablets first and air books or ultrabooks second. Companies will make this shift because their employees will be more productive and satisfied, and it will cost the company less. And as the ability to implement and leverage this BYOD approach increases, the migration will become a massive rush, especially as front-office systems convert. The commercial PC segment will then follow what is already happening in the broader consumer segment.

As an IT manager, you should ensure your shop is on the front edge of this transition as much as possible to provide your company an advantage. The tools to deploy, manage, and implement secure sessions are rapidly maturing and are already well proven. Many companies started pilots or small implementations in the past year, in such areas as providing iPads instead of five-inch-thick binders for their boards, or giving senior executives iPads to use in meetings instead of printed presentations. But the big expansion has been allowing senior managers and knowledge workers to begin accessing corporate email and intranets via their own iPads from home or when traveling. And with the success of these pilots, companies are planning broader rollouts and adopting formal BYOD policies for laptops and pads.

So how do you ensure that your company stays abreast of these changes? If you have not already piloted corporate email and intranet access from smartphones and pads, get going. Look also to pilot the devices for select groups such as your board and senior executives. This will enable you to get the support infrastructure in place and issues worked out before a larger rollout.

Work with your legal and procurement teams to define the new corporate policy on employee devices. Many companies solve the ownership issue by providing the employee a voucher covering the cost of the device purchase, with the employee as the owner. And because the corporate data is held in a secure partition on the device and can be wiped if the device is lost or stolen, you can meet your corporate IT security standards.

More importantly, IT must adjust its thinking about what the most vital interface is for internal applications. For more than a decade, it has been the PC, with perhaps an internet interface. Going forward, it needs to be an internet interface, possibly with smartphone and iPad apps. Corporate PC client interfaces (outside of dedicated production applications such as the general ledger for the finance team or a call center support application) will be one of the casualties of this shift from PCs.

If you are looking for leaders in this effort, I would suggest that government agencies, especially in the US, have been surprisingly agile in delivering their reference works, on everything from the state legal code to driving rules and regulations, as iPad applications in the iTunes Store. I actually think the corporate sector is trailing the government in this adoption. How many of you have your HR policies and procedures in a downloadable iPad application? Or a smartphone app to handle your corporate travel expenses? Or a front-office application that enables your customer-facing personnel to be as mobile and productive as an Apple retail employee?

And ensure you build the infrastructure to handle the download and version management of these new applications. You can configure your own corporate version of an iTunes store that enables users to self-provision and easily download apps to their devices, just as they download Angry Birds today. This again will provide a better experience for the corporate user at reduced cost. And the leading senior infrastructure client managers today are looking to further exploit this corporate store and later extend this infrastructure and download approach to all their devices. This is just another example of a product or approach, first developed for the consumer market, cross-impacting the commercial market.

As for those desktop PCs, where will they be in two to three years? They will still be used by production workers (call centers, back-office personnel, etc.), but they will be increasingly virtualized, so the heavy computing is done in the data center and not on the PC. And desktop PCs will be a much smaller proportion of your overall client devices. This will have significant implications for your client software licenses (e.g., Windows, Office) and you should begin considering now how your contracts will handle this changing situation. And perhaps just beyond that timeframe, it is possible that we will consider traditional desktops to be similar to floppy drives today – an odd bit of technology from the past.

Best, Jim Ditmore

* for those of you who read my occasional posts on InformationWeek, this is an updated version of my article originally posted there on Feb 14.

 

A Scientific Approach to IT Metrics

In order to achieve world-class or first-quartile performance, it is critical to take a ‘scientific’ approach to IT metrics. Many shops remain rooted in ‘craft’ approaches to IT, where techniques and processes are applied in an ad hoc manner to the work at hand and little is measured. Or a smattering of process improvement methodologies (such as Six Sigma or Lean) or development approaches (e.g., Agile) is applied indiscriminately across the organization. Frequently then, due to a lack of success, the process methods or the focus on metrics are tarred by managers as ineffective.

Most of the mediocre performers I have seen typically take such craft or ad hoc approaches to their metrics and processes. And this applies not just at the senior management level but at each of the 20 to 35 distinct functions that make up an IT shop (e.g., networking, mainframes, servers, desktops, service desk, middleware, etc., and each business-focused area of development and integration). In fact, you must address process and metrics at each distinct function level in order to then build a strong CIO-level process, governance, and metrics. And if you want to achieve first-quartile or world-class performance, a scientific approach to metrics will make a major contribution. So let’s map out how to get to such an approach.

1) Evaluate your current metrics: You can pick several of the current functions you are responsible for and evaluate them to see where you are in your metrics approach and how to adjust to apply best practices. Take the following steps:

  • For each distinct function, identify the current metrics that are routinely used by the team to execute their work or make decisions.
  • Categorize these metrics as either operational metrics or reporting numbers. If they are not used by the team to do their daily work or they are not used routinely to make decisions on the work being done by the team, then these are reporting numbers. For example, they may be summary numbers reported to middle management or reported for audit or risk requirements or even for a legacy report that no one remembers why it is being produced.
  • Is a scorecard being produced for the function? An effective scorecard has quantitative measures for the function’s deliverables as well as objective scores for function goals that have been properly cascaded from the overall IT goals.

2) Identify gaps in the current metrics: Each IT function should have regular operational metrics for all key dimensions of delivery (quality, availability, cost, delivery against SLAs, schedule). Further, each area should have unit measures to enable an understanding of performance (e.g., unit cost, defects per unit, productivity, etc.). As an example, the server team should have the following operational metrics:

    • all server asset inventory and demand volumes maintained and updated
    • operational metrics such as server availability, server configuration currency, server backups, and server utilization all tracked
    • time to deliver a server, total server costs, and delivery against performance and availability SLAs also tracked
    • secondary or verifying metrics such as server change success, server obsolescence, servers with multiple backup failures, and chronic SLA or availability misses tracked as well
    • function performance metrics such as cost per server (by server type), administrators per server, administrator hours to build a server, percent of servers virtualized, and percent of servers standardized derived from the above

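To make the derived metrics concrete, here is a minimal sketch in Python showing how the unit-cost and productivity measures fall out of the base inventory, staffing, and cost numbers. The field names and figures are purely illustrative, not a real tool or data model:

```python
from dataclasses import dataclass

@dataclass
class ServerFunctionData:
    # Base metrics gathered as part of daily operational work (hypothetical figures)
    server_count: int
    virtualized_count: int
    standardized_count: int
    admin_count: int
    total_annual_cost: float  # hardware, licenses, staff, facilities

def derived_metrics(d: ServerFunctionData) -> dict:
    """Derive unit-cost and productivity metrics from the base numbers."""
    return {
        "cost_per_server": d.total_annual_cost / d.server_count,
        "servers_per_admin": d.server_count / d.admin_count,
        "pct_virtualized": 100 * d.virtualized_count / d.server_count,
        "pct_standardized": 100 * d.standardized_count / d.server_count,
    }

m = derived_metrics(ServerFunctionData(1200, 720, 900, 24, 6_000_000))
```

The point is not the arithmetic but the ordering: the derived metrics come for free once the base metrics are maintained as part of the work itself.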
3) Establish full coverage: By comparing the existing metrics against the full set of delivery goals, you can quickly establish the appropriate operational metrics along with appropriate verifying metrics. Where there are metrics missing that should be gathered, work with the function to incorporate the additional metrics into their daily operational work and processes. Take care to work from the base metrics up to more advanced:

    • start with base metrics such as asset inventories and staff numbers and overall costs before you move to unit costs and productivity and other derived metrics
    • ensure the metrics are gathered in as automated a fashion as possible and as an inherent part of the overall work (they should not be gathered by a separate team or subsequent to the work being done)

Ensure that verifying metrics are established for critical performance areas for the function as well. An example of this for the server function could be for the key activity of backups for a server:

    • the operational metric would be, say, backups completed versus backups scheduled
    • the verifying metric would be twofold:
      • any backups for a single server that fail twice in a row get an alert and an engineering review as to why they failed (typically, for a variety of reasons, 1% or fewer of your backups will fail, which is reasonable operational performance; but if one server does not get a successful backup for many days, you are likely putting the firm at risk should there be a database or disk failure, hence the critical alert)
      • every month or quarter, three or more backups are selected at random, and the team ensures they can successfully recover from the backup files. This verifies that everything associated with the backup is actually working.

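The two verifying checks can be sketched in a few lines of Python. The history format here (each server mapped to its chronological backup success flags) is a hypothetical illustration, not the API of any real backup tool:

```python
import random

def backup_alerts(history):
    """Flag any server whose last two scheduled backups both failed.
    history maps server name -> list of success flags in chronological order."""
    return [server for server, runs in history.items()
            if len(runs) >= 2 and not runs[-1] and not runs[-2]]

def restore_test_sample(servers, n=3, seed=None):
    """Select servers at random for the monthly/quarterly restore-from-backup test."""
    rng = random.Random(seed)
    return rng.sample(list(servers), min(n, len(servers)))

history = {
    "db01": [True, True],
    "app02": [True, False, False],  # failed twice in a row -> alert
    "web03": [False, True],         # one failure, then success -> no alert
}
alerts = backup_alerts(history)  # ["app02"]
```

Note the division of labor: the alert is automatic and immediate, while the restore test is a periodic sampling exercise that verifies the end-to-end process, not just the completion status.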
4) Collect the metrics only once: Often, teams collect similar metrics for different audiences. The metrics they use to monitor, for example, configuration currency or conformance to standards can be largely duplicated by risk data collected against security parameter settings or by executive management data on percent server virtualization. This wastes the operational team’s time and can lead to confusing reports where one view doesn’t match another. I recommend that you establish an overall metrics framework that includes risk and quality metrics as well as management and operational metrics, so that all groups agree on the proper metrics. The metrics are then collected once, distilled and analyzed once, and congruent decisions can be made by all groups. Later this week I will post a recommended metrics framework for a typical IT shop.

5) Drop the non-value numbers activity: For all those numbers identified as being gathered for middle management reports or for legacy reports with an uncertain audience: if there is no tie to a corporate or group goal, and the numbers are not being used by the function for operational purposes, I recommend you stop collecting the numbers and stop publishing any associated reports. It is non-value activity.

6) Use the metrics in regular review: At both the function team level and the function management level, the metrics should be trended, analyzed, and discussed. These should be regular activities: monthly, weekly, or even daily depending on the metric. The focus should be on how to improve and, based on the trends, on whether current actions, staffing, processes, etc., are enabling the team to improve and succeed on all goals. A clear feedback loop should be in place so that the team and management can identify corrective actions as quickly and as locally as possible. This gives control of the line to the team, and the end result is better solutions, better work, and better quality. This is what has been found in manufacturing time and again and is widely practiced by companies such as Toyota in their factories.

7) Summarize the metrics from across your functions into a scorecard: Identify the key metrics within each function and properly summarize and aggregate them into an overall group scorecard. Obviously, the scorecard should match your goals and the key services you deliver. It may be appropriate to rotate in key metrics from a function based on visibility or significant change. For example, if you are looking to improve the overall time to market (TTM) of your projects, it may be appropriate to report on server delivery time as a key subcomponent and, hopefully, a leading indicator of your improving TTM. Including key metrics from the various functions on your scorecard, even at a summarized level, will result in greater attention and pride being taken in the work, since there are very visible and direct consequences. I also recommend that, on a quarterly basis, you provide an assessment of the team’s progress, and perhaps highlights of its work, as reflected in the scorecard.
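
One simple way to perform the roll-up is to score each metric against its target on a 0–100 scale and average within each function. This sketch uses equal weights for brevity (a real scorecard would weight by goal importance), and every function name, target, and figure below is purely illustrative:

```python
def score(actual, target, higher_is_better=True):
    """Score a metric 0-100 against its target, capped at 100."""
    ratio = actual / target if higher_is_better else target / actual
    return min(100.0, 100.0 * ratio)

def scorecard(function_scores):
    """Average per-metric scores into one summary score per function."""
    return {name: sum(s) / len(s) for name, s in function_scores.items()}

card = scorecard({
    # availability % (higher is better), cost per server (lower is better)
    "server": [score(99.95, 99.9), score(4200, 5000, higher_is_better=False)],
    # first-call resolution %, average speed to answer in seconds
    "service_desk": [score(92, 95), score(30, 45, higher_is_better=False)],
})
```

Capping at 100 keeps one over-achieving metric from masking a miss elsewhere in the same function, which matters when the summary is the number senior management sees.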

8) Drive better results through proactive planning: Once the metrics and feedback loop are in place, the team and function management will be able to drive better performance through ongoing improvement as part of their regular activity. Greater increases in performance may require broader analysis and senior management support. Senior management should hold proactive planning sessions with the function team to enable greater improvement. The assignment for the team should be to take key metrics and determine what would be required to set them on a trajectory to a first-quartile level in a certain time frame. For example, you may have an overall cost reduction goal, and within the server function a subgoal to achieve greater productivity (at a first-quartile level) and reduce the need for additional staff. By asking the team to map out what is required, and by holding a proactive planning session on some of the key metrics (e.g., productivity), you will often identify a path that meets the local objectives while also contributing to the global objectives. Here, in the server example, you may find that with a moderate investment in automation, productivity can be greatly improved and staff costs reduced substantially; both objectives could thus be obtained through the investment. By holding such proactive sessions, where you ask the team to identify what needs to be done to put their key metrics on the right trajectory while also considering the key goals and focus at the corporate or group level, you can often identify such doubly beneficial actions.

By taking these steps, you will employ a scientific approach to your metrics. If you add a degree of process definition and maturity, you will make significant strides toward controlling and improving your environment in a sustainable way. This will build momentum and enable your team to enter a virtuous cycle of improvement and better performance. And if you then add process improvement techniques to the mix (in moderation, and with the right technique for each process and group), you will accelerate your improvement and results.

But start with your metrics and take a scientific approach. In the next week, I will be providing metrics frameworks that have held up well in large, complex shops, along with templates that should help with understanding and applying the approach.

What metrics approaches have worked well for you? What keys would you add to this approach? What would you change?

Best, Jim

Moving from IT Silos to IT Synergy

We will revisit the Service Desk best practices this weekend, as we have several additions ready to go, but I wanted to cover how you, as an IT leader, can bring about much greater synergy within your IT organization. In many IT shops, including some I found when I first arrived at a company, IT is ‘siloed’, or separated into different divisions, with each division typically supporting a different business line. Inevitably there are frustrations with this approach, and they are particularly acute when a customer is served by two or more lines of business. How should you approach this situation as a leader, and what are the best steps to take to improve IT’s delivery?

I think it is best first to understand the drivers behind the variations in IT organizations within large corporations. With that understanding, we can then work out the best IT organizational and structural models to serve them. There are two primary sets of business drivers:

  • those drivers that require IT to be closer to the business unit such as:
    • improving market and business knowledge,
    • achieving faster time-to-market (TTM),
    • and the ability to be closer in step and under control of the business leads of each division
  • those drivers that require IT to be more consolidated such as:
    • achieving greater efficiencies,
    • attaining a more holistic view of the customers,
    • delivering greater consistency and quality
    • providing greater scale and lower comprehensive risk

So, with these legitimate pushes in two different directions, there is always a conflict in how IT should be organized. In some organizations, the history has been two or three years of decentralization to enable IT to be more responsive to the business; then, after costs get out of control or risk becomes problematic, a new CFO, COO, or CEO comes in and IT is re-centralized. This pendulum swing back and forth is not conducive to a high-performance team, as IT senior staff either hunker down to avoid conflict or play politics to be on the next ‘winning’ side. Further, given that business organizations have a typical life span of three to four years (or less) before being reorganized again, corollary changes in IT to match the prevailing business organization cause havoc with IT systems and structures that have 10- or 20-year life spans. Moreover, implementing a full business applications suite and supporting business processes takes at least three years for a decent-sized business division, so if organizations change in the interim, much valuable investment is lost.

So it is best to design an IT organization and systems approach that meets both sets of drivers and anticipates that business organizational change will happen. The best solution for meeting both sets of drivers is to organize IT as a ‘hybrid’ organization. In other words, some portions of IT should be organized to deliver scale, efficiency, and high quality, while others should be organized to deliver time to market, market feature, and innovation.

The functions that should be consolidated and organized centrally to deliver scale, efficiency and quality should include:

  • Infrastructure areas, especially networks, data centers, servers and storage
  • Information security
  • Field support
  • Service desk
  • IT operations and IT production processes and tools

These functions should then be run as a ‘utility’ for the corporation. There should be allocation mechanisms in place to ensure proper usage and adequate investment in these key foundational elements. Every major service the utility delivers should be benchmarked at least every 18 months against industry to ensure delivery is at top quartile levels and best practices are adopted. And the utility teams should be relentlessly focused on continuous improvement with strong quality and risk practices in place.

The functions that should be aligned and organized along business lines to deliver time to market, market feature and innovation should include:

  • Application development areas
  • Internet and mobile applications
  • Data marts, data reporting, and data analysis

These functions should be held accountable for high-quality delivery. Effective release cycles should be in place to enable high utilization of the ‘build factory’ as well as a continuous cycle of feature delivery. These functions should be compared and measured against each other to ensure best practices are adopted and performance is improved.

And those functions which can be organized flexibly in either mode would be:

  • Database
  • Middleware
  • Testing
  • Applications Maintenance
  • Data Warehousing
  • Project Management
  • Architecture

For these functions that can be centralized or organized along business lines, a variety of arrangements is possible. For example, systems integration testing could be centralized while unit and system testing are allocated to each business line’s application team. Or database physical design and administration could be centralized while logical design and administration are allocated by application team. There are some critical elements that should be singular or consolidated, including:

  • if architecture is not centralized, there must be an architecture council reporting to the CIO with final design authority
  • there should be one set of project methodologies, tools, and processes for all project managers
  • there should be one data architecture team
  • there should be one data warehouse physical design and infrastructure team

In essence, where services are more commodity-like, or where there is critical advantage in a single solution (e.g., one view of the customer for the entire corporation), you should establish a single team to be responsible for that service. And where you are looking for greater speed or better market knowledge, organize IT into smaller teams closer to the business (but still with technical fiduciary accountability back to the CIO).

With this hybrid organization, as outlined in the diagram, you will be able to deliver the best of both worlds: outstanding utility services that provide cost and quality advantage, and business-aligned services that provide TTM, market feature, and innovation.

As CIO, you should look to optimize your organization using the ‘hybrid’ structure. If you are currently entirely siloed, start the journey by making the business case for the most commodity-like functions: networks, service desks, and data centers. Integrated, these will be less costly, more capable, and lower risk. As you successfully complete the integration of these areas, you can move up the chain to IT operations, storage, and servers. As these areas are pooled and consolidated, you should be able to release excess capacity and overheads while providing more scale and better capabilities. Another starting point could be delivering a complete view of the customer across the corporation. This requires a single data architecture, good data stewardship by the business, and a consolidated data warehouse approach. Again, as functions are successfully consolidated, the next layer up can be addressed.

Similarly, if you are highly centralized, it will be difficult to maintain pace with multiple business units. It is often better to relax some level of integration to achieve a faster TTM or better business unit support. Pilot an application or integration team in those business areas where innovation and TTM are most important or business complexity is highest. Maintain good oversight, but also give the groups the freedom to perform in their new accountabilities.

And realize that you can dial the level of integration up or down within the hybrid model. Obviously you would not want to decentralize commodity functions like the network, or centralize all application work, but there is the opportunity to vary intermediate functions to meet the business needs. And by staying within the hybrid framework, you can meet those needs without careening from a fully centralized to a decentralized model every three to five years, with all the damage that such change causes to the IT team and systems.

There are additional variations that can be made to the model to accommodate organizational size and scope (e.g., global versus regional and local) that I have not covered here. What variations to the hybrid or other models have you used with success? Please do share.

Best, Jim


Real Lessons of Innovation from Kodak

At Davos this past week, innovation was trumpeted as a necessity for business and a solution for economic ills. And in corporations around the world, business executives speak of the need for ‘innovation’ and ‘agility’ to win in the marketplace. Chalk that up to the Apple effect. With the demise of Kodak, preceded by Borders, Nokia, and Blockbuster, among others, some business leaders are racing to out-innovate and win in the marketplace. Unfortunately, most of these efforts cause more chaos and harm than good.

Let’s take Kodak. Here was a company that had been an innovator since 1888. Kodak’s labs are full of inventions and techniques, and its patent portfolio alone is worth an estimated $2.5B. The failure of Kodak was due to several causes, but lack of innovation was not one of them. Instead, as John Bussey at the WSJ rightly pointed out, it ‘failed to monetize its most important asset – its inventions.’ Kodak invented digital photography but never took an early or forceful product position (though it is unlikely that even a strong position in that market would have contributed enough revenue, given that digital cameras now come for free on every smartphone). The extremely lucrative film business paralyzed Kodak until it plunged into the wrong sector – the highly mature and competitive printing market.

So, it is all well and good to run around and innovate, but if you cannot monetize it, and worse, if it distracts you from the business at hand, then you will run your business into the ground. I think there are four patterns of companies that successfully innovate:

The Big Bet at the Top: One way that innovation can be successfully implemented is through the big bet at the top. In other words, either the owner or the CEO decides the direction and goes ‘all-in’. This has happened time and again. For example, in the mid-80s, Intel made the shift from memory chips to microprocessors. This dramatic shift included large layoffs and shuttered plants in its California operations, but it was an easier decision for top management because the microprocessor marketplace was more lucrative than the memory semiconductor marketplace. Intel’s subsequent rapid improvement in processors, and later its branding, proved out and further capitalized on this decision. And I think Apple is a perfect example of big bets by Steve Jobs. From the iPod and iTunes, to the iPhone, to the iPad, Apple made big bets on new consumer devices and experiences that were, at the time, derided by pundits and competitors (I particularly like this one rant by Steve Ballmer on the iPhone). Of course, after a few successes, the critics are hard to find now. These bets require prescient senior management with enough courage and independence to place them; otherwise, even when you have a George Fisher, as Kodak did, the bets are not placed correctly. I also commend bets where you get out of a sector that you know you cannot win in. An excellent example of this is IBM’s spinoff of its printer business in the ’90s and its sale of the PC business to Lenovo more recently. Both turned out to be well ahead of the market (just witness HP’s belated and poorly thought through PC exit attempt this past summer).

Innovating via Acquisition: Another effective strategy is to use acquisition as a weapon. But success comes to those who make multiple small acquisitions as opposed to one or two large ones. Cisco and IBM come to mind with this approach. Cisco effectively branched out to new markets and extended its networking lead in the 1990s and early 2000s this way, and IBM has greatly broadened and deepened its software and services portfolio in the past decade as well. Compaq and Digital, America Online and Time Warner, and perhaps most recently HP’s acquisition of Autonomy represent companies that made a late, massive acquisition to try to stave off decline or shift their corporate course. These fared less well, likely due in part to culture and vision. Small acquisitions, when senior management takes care to fully leverage the product, technology, and talent of the takeover, can mesh well with the parent. A major acquisition can set off a clash of cultures, visions, and competing products that wastes internal energy and places the company further behind in the market. Hats off, though, to at least one major acquisition that completely changed the course of a company: Apple’s acquisition of NeXT. Of course, along with NeXT, Apple also got its leader, Steve Jobs, and we all know what happened next to Apple.

Having a Separate Team: Another successful approach is to recognize that a company does well because it is focused on ensuring predictable delivery and quality to its customer base. To do so, its operations, product, and technology divisions all strive to deliver such value predictably. Innovation, by its very nature, is discontinuous and causes failure (good innovators require many failures for every success). By teaching the elephant to dance, all you do is ruin the landscape and the productive work that kept the company in business before it lost its edge. Instead, by setting up a separate team, as IBM has done for the past decade and others have done successfully, a company can be far more successful. The separate team will require sponsorship, and it must be recognized that the bulk of the organization will focus on the proper task of delivering to the customer as well as making incremental improvements. You could argue that Kodak’s focus of the bulk of its team on film was its downfall. But I would suggest instead it was the failure of the innovation teams to take what they already had in the lab and turn it into successful new products in the market.

A Culture of Tinkering: This approach relies on the culture and ingenuity of the team to foster an environment where outstanding delivery in the corporation’s competence area is done routinely, and time and resources are set aside to continuously improve and tinker with the products and capabilities. To have the time and space for teams to be engaged in such ‘tinkering’ requires that the company master the base disciplines of quality and operational excellence. You would find such companies in many fields, and this approach has enabled ongoing success and market advantage, in part because not only do they innovate, they also out-execute. For example, FedEx, well-known for operational excellence, introduced package tracking in 1994, essentially exposing to customers what had been a back-end system. This product innovation has now become commonplace in the package and shipping industry. Similarly, 3M is well-known as an industry innovator, regularly deriving large amounts of revenue from products that did not exist for them even five years prior. But some of their greatest successes (e.g., Post-It Notes) did not come about from some corporate innovation session and project. Instead, they came together over years as ideas and failures percolated in a culture of tinkering until the teams finally hit on the right combination for a successful product. And Google is probably the best technology company example, where everyone sets aside 20% of their time to ‘tinker’.

So what approach is best? Well, unless you have a Steve Jobs or are a pioneering company in a new industry, making the big bet should be out for an established corporation. If your performance does not show outstanding excellence, and if your corporate culture does not encourage collegiality, support experimentation, and then leverage failure, then a tinkering approach will not work. So you are left with two options. Make multiple small acquisitions in the areas of your product direction and, with effective corporate sponsorship, fold the new product sets and capabilities into your own. Or set up a separate team to pursue the innovation areas. This team should brainstorm and create the initial products, test and refine them, and then, after a market pilot, have the primary production organization deliver them in volume (again with effective corporate sponsorship). Thus the elephant dances the steps it can do, and the mice do the work the elephant cannot.

As for our example, Kodak had only part of the tinkering formula. Kodak had the initial innovation and experimentation, but it was unable to take the failures and adjust its delivery to match what the market required for success. And it should have executed multiple smaller efforts across more diverse product sets (similar to what Fujifilm did) to find its new markets.

Have you been part of a successful innovation effort or culture? What approaches did you see being used effectively?

Best, Jim

A Continued Surge in IT Investment

In recent posts, I have noted articles on how the recovery in the US has been a ‘jobless’ one, yet one with stronger investment in equipment and IT. In the latest reporting, this peculiar effect appears to be even more pronounced, perhaps even running in ‘overdrive’. According to yesterday’s Wall Street Journal article, investment in labor savings in the US stepped up at the beginning of the decade, but with the recession, companies found bigger opportunities to automate even more with machinery and software. Timothy Aeppel, the article’s author, notes that this investment and spending level continued broadly through the fourth quarter. And while investment in technology will certainly cause employment to rise subsequently, it appears that CEOs are investing in technology far more than adding staff, compared with previous recoveries. And they are doing this for a reason: they can get better returns from machinery, robotics, and software than before. Part of this is due to low interest rates and unique tax breaks, but I believe technology is enabling greater returns from reducing costs and improving productivity than previously. Further, I think fundamental changes in IT capabilities and robotics are fueling improved returns from automation even more than in the past, spurring even greater investment in IT.

Historically, IT applications and systems were applied to domains with large amounts of routine work, often done by hundreds if not thousands of staff. These were the areas that justified the cost of IT and provided the greatest return. Improvements in application and database technology then enabled technology to tackle tougher and more complex problems, and to be leveraged on medium-scale processes. Basic toolsets like email and SharePoint tackled the least complex, departmental processes. This progress is represented in the diagram below.

The introduction of client-server platforms allowed solutions to be applied to routine work on a smaller scale: to large departments and medium-sized companies rather than just divisions and large corporations. This accelerated with the internet and the advent of virtualization.

But the cumulative and accelerating effect of new development technologies and methods, new toolsets, new client devices, cloud infrastructure, and advancing data and analytics capabilities has enabled a far broader range of solutions to be applied easily and effectively across a wide range of institutions and problem sets. Small and medium sized companies through cloud services can now leverage similar infrastructure capabilities that previously could only be implemented and afforded by the largest corporations. This step change in progression is represented by the chart below.

The increase of automation scope with new Tech toolsets

The new lightweight workflow tools, like IBM’s Lombardi toolset and many others, open up almost any departmental process to easy, rapid automation with a decent ROI. The proliferation of client interfaces through the internet and mobile allows customers to self-serve intelligently for almost any product and service, enabling the elimination of large amounts of front office and back office work. This cumulative and compounding effect is truly a step change in what IT can do to automate the work within medium and large companies. And yet, despite this sea change in capabilities, you still find many IT departments focused almost wholly on their traditional scope. The project initiation and selection process is laborious, even arduous, oriented toward doing large (and fewer) projects. The amount of overhead required to execute almost any typical project would overwhelm lightweight automation for departmental-sized efforts. And yet there are huge new areas of scope and automation that are possible now in almost every company. So how do you start to prove out these new areas and adapt your processes to enable them to get done?

I think there are several ways to get started here, some of which I mentioned briefly in my December post on  ‘A few things to add to your technology plans’.

1. Improve customer signup or account opening: Unless your company has redone these functions within the past 24 months, it is doubtful that this area is up to the latest expectations of customers. Enable account opening from the web and mobile devices, and leverage the app stores to provide mobile clients with additional, useful, and cool functions (nearest store, nearest ATM, or, if you are a climbing gear company, the current temperature and weather at base camp on Mount Everest). Make them easy to navigate, with a progress bar and associated menu to move through the application. Ensure the physical process at the store or branch is as easy as the internet version (i.e., not twenty pages of forms). And tighten up security with strong passwords (many sites today show a strength indicator as you type in a new password) and two-factor security on critical transactions (e.g., wire transfers or bill payments). Remember, you can now deliver the two-factor security through the customer’s mobile device rather than a separate token.
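As an illustration of the strength indicator idea, here is a minimal sketch of a scoring heuristic; the thresholds and categories are my own assumptions for illustration, not any site’s actual policy, and production sites should use an established policy engine instead:

```python
import re

def password_strength(password: str) -> str:
    """Score a candidate password as 'weak', 'fair', or 'strong'.

    Illustrative heuristic only: points for length, mixed case,
    digits, and special characters.
    """
    score = 0
    if len(password) >= 8:
        score += 1
    if len(password) >= 12:
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1
    if re.search(r"\d", password):
        score += 1
    if re.search(r"[^A-Za-z0-9]", password):
        score += 1
    if score <= 2:
        return "weak"
    if score <= 4:
        return "fair"
    return "strong"
```

In a signup form, this function would be called on each keystroke to color the indicator bar as the customer types.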

2. Fix two or three process issues that are basic transactions for customers: Just as you need to continually cycle back through your customer interfaces to keep them fresh and take advantage of the latest consumer technology, you should also revisit some of the basic transactions that businesses typically fail to put on their investment list and that become problematic service areas for customers. These are areas like change of address, statement reprints, or getting a replacement card. Because they are never on the project list, these services remain backwaters of process and automation, with predictable frustration for customers, high error rates, and disproportionate manual effort to complete. Work with a strong business partner (perhaps the COO or someone close to the customer experience) to tee these up. Use the latest workflow tools to tackle the process piece. Leverage the latest data warehouse and ETL capabilities to integrate the customer data across business units and applications so that the process can be once-and-done. If you are not sure which basic customer process to start with, talk to the unit handling customer complaints and look for those processes that generate the highest number of issues even though the customer is trying to do something quite basic. Remember, every customer complaint requires an expensive response; by eliminating them, you drive material improvements in productivity and cost.
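To make the once-and-done idea concrete, here is a minimal sketch of propagating a single change of address across the systems of several business units; the system names and update callables are hypothetical:

```python
def propagate_address_change(customer_id, new_address, systems):
    """Apply one customer-initiated address change to every
    line-of-business system in a single pass.

    `systems` maps a (hypothetical) system name to an update callable.
    Failures are recorded rather than raised, so a partial outage
    becomes a follow-up queue instead of a failed transaction.
    """
    results = {}
    for name, update in systems.items():
        try:
            update(customer_id, new_address)
            results[name] = "updated"
        except Exception as exc:
            # Queue for manual follow-up rather than bouncing the customer.
            results[name] = f"failed: {exc}"
    return results
```

A real workflow tool adds retries, audit trails, and human task queues around this loop, but the core idea is the same: the customer states the change once and the process fans it out everywhere.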

3. Implement more self-service: An oft-neglected area is the improvement of corporate support functions and their productivity and service. In a typical corporation, almost every employee is touched by the HR and Finance processes, which can be heavily manual and archaic (again, they rarely show up on the project list). By working with your Finance and HR functions, you can reduce their costs and improve their delivery to their users through automation and self-service. The advanced workflow toolsets (e.g., IBM’s BPM) mean you can do far more with incremental, small tiger team efforts than ever before. Your scope to automate and move to self-service on your intranet is much greater: more minor business processes than ever before can be automated at far less cost and effort. The end results are higher productivity for your business, lower operations costs in HR and Finance, a more satisfied user base, and a better perception of IT.

4. Get to a modern production and service toolset for IT: For the past twenty years, there have been two traditional toolsets that most companies leveraged for production processes and service requests (Remedy and Peregrine). And most of us have implemented (with some struggle) reasonable implementations that met the bulk of our needs. But the latest generation of these toolsets (e.g., ServiceNow) makes our previous implementations look like dinosaurs. When you consider that 60 or 70% of your staff and service desk are using these tools every day, and that you can make them far more productive with a new toolset, it is worth taking a look. Further, your business users will love new IT ordering facilities on the intranet that are better than ordering a book from Amazon. By the way, the all-in operating cost of the new tools should be substantially less than your current costs for Remedy or Peregrine. And your team will be operating at a step-level improvement in efficiency and productivity.

5. Get Going in Business Intelligence: One last item to make sure your company is capitalizing on is leveraging the data you already have to know your customer better, improve your products or services, and reduce your costs. Why advertise to customers who never click through or buy? Why do customers call your call center when they should be able to do the same thing more easily online? Wade through all the unstructured data being generated on social media about your company to figure out how to improve your brand. Knowing the mood of the market and understanding your customers and their perspectives on your products and services requires IT to partner with the business to turn the data you have into intelligence. Investing in this area can now be tackled with the new big data tools on the market. If you are not doing much here, then I recommend finding out what your competitors are doing and sitting down with your business partners to sort through what you must do.
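Even before investing in big data tooling, a trivial word count over free-text complaints or social posts can surface the recurring themes and point you at the processes worth fixing. A minimal sketch, where the stopword list is an assumption for illustration:

```python
from collections import Counter
import re

# Hypothetical, deliberately tiny stopword list for the sketch.
STOPWORDS = {"the", "my", "a", "to", "is", "i", "was", "and", "of", "on"}

def top_complaint_terms(complaints, n=3):
    """Return the n most frequent meaningful words across a set of
    free-text complaints, as (word, count) pairs."""
    words = []
    for text in complaints:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)
```

Real text-analytics platforms add stemming, phrase detection, and sentiment, but even this level of counting will tell you that, say, ‘address’ and ‘change’ dominate the complaint queue.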

So, there are five things that five years ago would never have made any list. Yet if you make real progress on three of the five, you can hit home runs in customer satisfaction and service quality, and earn a much better view of IT. Most important, you can ensure your company stays ahead of the game to achieve greater productivity and lower costs.

Any views or alternate perspectives on the progression of IT tools and solutions? Do you see the same sea change that I am calling out here for us to take advantage of?

Let me know. Best, Jim

Delivering More by Leveraging the ‘Flow of Work’

As you start out this year as an IT leader, trying to meet project demand on one hand and savings goals on the other, remember to leverage what I term the IT ‘Flow of Work’. Too often, once work comes into an organization, either through a new project going into production or through the original structure of the work, it is static. The work, whether it is server administration, batch job execution, or routine fixes, continues to be done by the same team that developed it in the project cycle. Or, at best, the system and its support are handed off to a dedicated run team that continues to treat it as ‘custom’ work. In other words, once the system has been crafted by the project team, the regular work to run, maintain, and fix the system continues to be custom-crafted work.

This situation, where the work is static and continues to be executed by the initial, higher cost resources, would be analogous to a car company crafting a prototype or concept car, and then using that same team to produce every single subsequent car of that model with the same ‘craft’ approach. This of course does not happen: the car company moves the prototype to a production factory where the work is standardized, automated, and leaned, and far lower cost resources execute it. Yet in the IT industry we often fail to leverage this ‘flow of work’. We use senior server engineers to do basic server administration tasks (thus making it custom work). We fail to ‘package’, productionalize, or automate the tasks, requiring exorbitant amounts of manual work, because the project focused on delivering the feature and there was no optimization step to get it into the IT ‘factory’.

Below is a diagram that represents the flow of work that should occur in your IT organization.

Moving work to where it is most efficiently executed

The custom work, meaning new design or complex analysis and maintenance, is the work done by your most capable and expensive resources. Yet many IT organizations waste these resources on custom work where it doesn’t matter. A case in point is anywhere you have IT projects leveraging custom-designed and custom-built servers/storage/middleware (custom work) instead of standardized templates (common work). And rarely do you see those tasks documented and automated such that they can be executed by the IT Operations team (routine work). Not only do you waste your best resources on design that adds minimal business value, you also do not have those best resources available to do the new projects and initiatives the business needs done. Similarly, your senior and high cost engineers end up doing routine administrative work because that is how the work was implemented in production, and no subsequent investment has been made to document or package up this routine work so it can easily be done by your junior or mid-level engineering resources.

Further, I would suggest that the common or routine engineering work often stays in that domain. Investment is infrequently made to further shrinkwrap, automate, and document the routine administrative tasks of your mid-level engineers so that you can hand them off to the IT Operations staff to execute as part of their regular routine (and, by the way, the Ops team typically executes these tasks with much greater precision than the engineering teams).
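One way to ‘package’ a routine task for hand-off is to pair the automation with the documentation Ops needs to run it, in a single registry. A minimal sketch of such a runbook registry, where the task name and its behavior are hypothetical:

```python
# Registry mapping task names to their automation and run documentation.
RUNBOOK = {}

def runbook_task(name, doc):
    """Decorator: register a routine task along with the notes the
    Operations team needs to execute it safely."""
    def register(func):
        RUNBOOK[name] = {"run": func, "doc": doc}
        return func
    return register

@runbook_task("archive-logs",
              doc="Archive logs older than `days` days; safe to rerun.")
def archive_logs(days=30):
    # A real task would move files; here we just report the action taken.
    return f"archived logs older than {days} days"

def execute(name, **kwargs):
    """Let Operations run any packaged task by name, no engineer needed."""
    return RUNBOOK[name]["run"](**kwargs)
```

Once a mid-level engineer has wrapped a task this way, with its parameters and rerun-safety documented, the Ops team can execute it from the registry, and the engineer moves up to more rewarding custom work.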

So, rather than fall into the trap of having static pools of work within your organization, drive the investment so that work can be continually packaged and executed at a more optimal level, freeing up your specialty resources to tackle more business problems. Set a bar for each of your teams for productivity improvements. Give them the time and investment to package the work and send it to the optimal pool. Encourage your teams to partner with the group that will receive their work at the next step of the flow. And make sure they understand that for every bit of routine work they transition to their partner team, they will receive more rewarding custom work.

After a few cycles of executing the flow of work within your organization, you will find you have gained significant capacity and reduced your cost to execute routine work substantially. This enables you to achieve a much improved ratio of run cost versus build costs by continually compressing the run costs.
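The run-versus-build compression can be illustrated with simple arithmetic. Assuming a fixed total budget and a constant percentage reduction in run cost per cycle (both assumptions purely for illustration):

```python
def run_vs_build(total_budget, run_cost, reduction_per_cycle, cycles):
    """Show how compressing run costs shifts a fixed budget toward
    build work over successive flow-of-work cycles.

    Returns a list of (cycle, run_cost, build_cost) tuples.
    """
    history = []
    for cycle in range(cycles + 1):
        build = total_budget - run_cost
        history.append((cycle, round(run_cost, 1), round(build, 1)))
        # Each cycle packages more routine work, compressing run cost.
        run_cost *= (1 - reduction_per_cycle)
    return history
```

For example, starting from a 70/30 run-to-build split with a 10% run compression per cycle, two cycles move the split to roughly 57/43, with every unit of freed run cost available for new business projects.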

I have seen this approach executed to great effect in both large and mid-sized organizations. Have you been part of or led similar exercises? What results did you see?

Best, Jim