Just about Time for Spring Break

It is just about time for spring break, and given the significant number of new readers, I thought I would revisit the key goals for this site and highlight some posts and additions you may have missed.

I would also like to thank Steve Wignall for his contributions in the Service Desk arena. We now have 5 solid pages covering all aspects of the service desk, and we typically rank in the top 20 Google results for a number of related searches (e.g., ‘service desk best practices’). While I am quite pleased for us to achieve this in a matter of a few months, what is most important is that the content is useful, relevant, and meaningful to IT leaders.

As many of you know, delivering IT today, with all of the cost reduction demands, rapidly changing technology, the impact of IT consumerization, and security and risk demands, is, simply put, tough work. It is hard to get the complex IT mechanism, with all the usual legacy system issues, to perform as well as the business requires. RecipesforIT has been built to help those IT leaders out: to provide recipes and ideas on how to tackle the tough but solvable problems they face. And in addition to providing best practices, we will give you a solid and sensible perspective on the latest trends and technologies.

And note that we will continue to build out the best practices areas even when the material is not announced in a post. For example, we have added Build an Outstanding Customer Interface, Service Desk Leadership, and Service Desk Metrics pages to the appropriate best practice areas. So don’t forget to leverage these areas for material when you are faced with issues or challenges.

As promised in January, we have covered the service desk area thoroughly with the help of Steve Wignall and Bob Barnes. And we covered innovation (and Kodak). There was also a request to cover how to effectively consolidate an IT organization; that was covered in the post Moving from IT Silos to IT Synergy.

So what is upcoming? I will continue to touch on current topics (hopefully you liked the Australian pilot and low PDI post), but I will also devote time to leadership and high performance teams. I have also received a request to cover production operations and best practices in that area, which I hope to complete. Steve will also contribute another page on the service desk for us. And I will continue with incremental but hopefully material improvements to the site pages that will provide further details on best practices in a comprehensive manner.

I continue to receive strong feedback from many of you on the usefulness and actionability of the material. I will definitely work to ensure we maintain that relevance.

One last note: don’t forget you can subscribe to the site so you get an email when there’s a new post (subscribing is on the rightmost bar, halfway down the page). And feel free to provide comments or suggestions — the feedback really helps!

If you are new to the site, I recommend a few posts for relevance and fundamentals:

So, expect plenty more and enjoy your break and the warm weather!

Best, Jim Ditmore

Why you want an Australian Pilot: Lessons for Outstanding IT Leadership

Perhaps you are wondering what nationality or culture has to do with piloting an airplane? And how could piloting an airplane be similar to making decisions in an IT organization?

Those of you who have read Outliers, which I heartily recommend, will be familiar with the well-supported conclusions that Malcolm Gladwell makes:

  • that incredible success often shows strong parallels and patterns among high achievers, frequently involving factors you would not expect or easily discern
  • and no one ever makes it alone

A very interesting chapter in Outliers is based on the NTSB analyses of what occurred in the cockpit during several crashes, as well as the research work done by Dutch psychologist Geert Hofstede. What Hofstede found in his studies for IBM’s HR department in the 70s and 80s is that people from different countries and cultures behave differently in their work relationships. Not surprising, of course, and Hofstede did not label countries as right or wrong but used the data as a way to measure differences in cultures. A very interesting measure of culture is the Power Distance Index (PDI). Countries with a high PDI have cultures where those in authority are treated with great respect and deference. In countries with a low PDI, those in authority go to great lengths to downplay their stature, and members feel comfortable challenging authority.

Now back to having an Australian pilot your plane: commercial aircraft, while highly automated and extremely reliable, are complex machines that, in difficult circumstances, require all of the crew to do their jobs well and in concert. But for multiple crashes in the 1990s and early 2000s, the NTSB found that crew communication and coordination were significant factors. And those airlines with crews from countries with high PDI scores had the worst records. Why? As Malcolm Gladwell lays out so well, it is because of the repeated deference of lower status crew members to a captain who is piloting the plane. When the captain makes repeated mistakes, these crew members defer and do not vigorously call out the issues when it is their responsibility to do so, even to fatal effect. So, if you were flying a plane in the 1990s, you would want your pilot to be from Australia, New Zealand, Ireland, South Africa, or the US, as these countries have the lowest PDI cultural scores. Since that time, it is worth noting that most airlines from high PDI countries have incorporated crew responsibility training to overcome these effects, and all airlines have added further crew training on communications and interaction, resulting in the continued improvement in safety we witnessed this past decade.

But this experience yields insight into how teams operate effectively in complex environments. Simply put, the highest performance teams are those with a low PDI that enables team members to provide appropriate input into a decision. Further, once the leader decides, with this input, the team pivots quickly and with confidence to take the new tack. Elite teams in our armed forces operate on very similar principles.

I would suggest that high performance IT teams operate in a low PDI manner as well. Delivering an IT system in today’s large corporations requires integrating a dozen or more technologies to deliver features that require multiple experts to fully comprehend. In contrast, if the project or organization is driven by a leader whose authoritarian style imposes high deference on all team members so that alternative views cannot be expressed, then it is simply a matter of time before poor performance sets in. Strong team members and experts will look elsewhere for employment as their voices are not heard; and since one person cannot be an expert in everything required to succeed, delivery failure will occur. High PDI leaders will not produce sustainable high performance teams.

Now a low PDI culture does not mean there is no structure or authority. Nor is the team a democracy. Instead, each team member knows their area of responsibility and understands that in difficult and complex situations, all must work together with flexibility to come up with ideas and options for the group to consider for the solution. Each member views their area as a critical responsibility and strives to be the best at their competency in a disciplined approach. Leaders solicit data, recommended courses, and ideas from team members, and consider them fully. Discussion and constructive debate, where time and urgency permit, are encouraged. Leaders then make clear decisions, and once decided, everyone falls in line and provides full support and commitment.

In many respects, this is a similar profile to the Level 5 leader that Jim Collins wrote about, who mixes ‘a paradoxical blend of personal humility and professional will.’ They feature a lack of pretense (low PDI) but fierce resolve to get things done for the benefit of their company. Their modesty allows them to be approachable and ensures that the facts and expert opinions are heard. Their focus and resolve enable them to make clear decisions. And their dedication to the company and the organization ensures company goals are foremost. (Of course, they also have the other personal and management strengths and qualities: intelligence, integrity, work ethic, and so on.)

Low PDI or Level 5 leaders set in place three key approaches for their organizations:

  • they set in place a clear vision and build momentum with sustained focus and energy, motivating and leveraging the entire team
  • they do not lurch from initiative to initiative or jump on the latest technology bandwagon, instead they judiciously invest in key technologies and capabilities that are core to their company’s competence and value and provide sustainable advantage
  • because they drive a fact-based, disciplined approach to decision-making as leaders, excessive hierarchy and bureaucracy are not required. Further, quality and forethought are built into processes, freeing the organization of excessive controls and verification.

To achieve a high performance IT organization, these are the same leadership qualities required. Someone who storms around and makes all the key decisions without input from the team will not achieve a high performance organization, nor will someone who focuses only on technology baubles and not on the underlying capabilities and disciplines. And someone who shrinks from key decisions and turf battles and does not sponsor his team will fail as well. We have all worked for bosses like these, so we understand what happens and why there is a lack of enthusiasm in those organizations.

So, instead, resolve to be a Level 5 leader, and look to lower your PDI. Set a compelling vision; every day, seek out the facts; press your team to be top in the industry in their areas of expertise; and sponsor the dialogue that enables the best decisions, and then make them.

Best, Jim

The Accelerating Impact of the iPad on Corporate IT

Apple announced the iPad two years ago and began shipping it in April 2010. In less than two years, the rapidity and scale of the shock to the PC marketplace from the iPad has been stunning. The PC market trends in 2011 show PCs of all types (netbook, notebook, desktop) being cannibalized by tablet sales, and iPad sales (15.4M in 4Q11) are now equivalent to desktop PC sales. Desktop PC shipments had already slowed over the last several years due to the earlier advent of notebooks and then netbooks, but they are now stagnating and even dropping in great part due to the iPad. With the release of the iPad 3 just around the corner (next week), these impacts will accelerate. And while the release of Windows 8 and new PC ultrabooks (basically PC versions of the MacBook Air) could improve shipments in 2012, the implications of this consumer shift are significant for corporate technology.

Just as IT managers used to dictate which mobile devices their employees used (invariably Blackberry) and companies have since adopted a ‘bring your own device’ (BYOD) approach to mobile, IT managers will need to shift from merely accommodating iPads as an additional mobile device to full-fledged BYOD for client computing for their knowledge workers. Let them decide if they need a tablet, ultrabook, or laptop. Most front office staff will also be better served by a mobile or tablet approach (consider the retail staff in Apple’s stores today). Importantly, IT will need to develop applications for the internet and tablets first, and for traditional PCs only for back office and production workers.

The implications of this will cause further shock to the marketplace. Just as in the mobile device marketplace, where the traditional consumer vendors were impacted first by the new smart phones (i.e., Nokia impacted first by Apple and Android) and then the commercial mobile vendor (Blackberry), PC vendors are now seeing their consumer divisions severely impacted, with modest growth in commercial segments. But front office and knowledge workers will demand the use of tablets first and MacBook Airs or ultrabooks second. Companies will make this shift because their employees will be more productive and satisfied, and it will cost the company less. And as the ability to implement and leverage this BYOD approach increases, the migration will become a massive rush, especially as front office systems convert. And the commercial PC segment will follow what is already happening in the broader consumer segment.

As an IT manager you should ensure your shop is on the front edge of this transition as much as possible to provide your company advantage. The tools to deploy, manage, and implement secure sessions are rapidly maturing and are already well-proven. Many companies started pilots or small implementations in the past year in such areas as providing iPads to their boards instead of five-inch-thick binders, or giving senior executives iPads to use in meetings instead of printed presentations. But the big expansion has been allowing senior managers and knowledge workers to begin accessing corporate email and intranets via their own iPads from home or when traveling. And with the success of these pilots, companies are planning broader rollouts and adopting formal BYOD policies for laptops and pads.

So how do you ensure that your company stays abreast of these changes? If you have not already piloted corporate email and intranet access from smart phones and pads, get going. Look also to pilot the devices for select groups such as your board and senior executives. This will enable you to get the support infrastructure in place and issues worked out before a larger rollout.

Work with your legal and procurement teams to define the new corporate policy on employee devices. Many companies solve the ownership issue by providing the employee a voucher covering the cost of the device purchase, with the employee as the owner. And because the corporate data is held in a secure partition on the device and can be wiped if the device is lost or stolen, you can meet your corporate IT security standards.

More importantly, IT must adjust its thinking about what the most vital interface is for internal applications. For more than a decade, it has been the PC, with perhaps an internet interface. Going forward, it needs to be an internet interface, possibly with smartphone and iPad apps. Corporate PC client interfaces (outside of dedicated production applications such as the general ledger for the finance team or a call center support application) will be one of the casualties of this shift from PCs.

If you are looking for leaders in this effort, I would suggest that government agencies, especially in the US, have been surprisingly agile in delivering their reference works, from the state legal code to driving rules and regulations, as iPad applications in the iTunes Store. I actually think the corporate sector is trailing the government in this adoption. How many of you have your HR policies and procedures in a downloadable iPad application? Or a smart phone app to handle your corporate travel expenses? Or a front office application that enables your customer facing personnel to be as mobile and productive as an Apple retail employee?

And ensure you build the infrastructure to handle the download and version management of these new applications. You can configure your own corporate version of an iTunes store that enables users to self-provision and easily download apps to their devices just as they download Angry Birds today. This again will provide a better experience for the corporate user at reduced cost. And leading client infrastructure managers today are looking to further exploit this corporate store and later extend this infrastructure and download approach to all their devices. This is just another example of a product or approach, first developed for the consumer market, cross-impacting the commercial market.

As for those desktop PCs, where will they be in 2 to 3 years? They will still be used by production workers (call centers, back office personnel, etc.) but they will be more and more virtualized, so the heavy computing is done in the data center and not on the PC. And desktop PCs will be a much smaller proportion of your overall client devices. This will have significant implications for your client software licenses (e.g., Windows, Office) and you should begin considering now how your contracts will handle this changing situation. And perhaps just beyond that timeframe, it is possible that we will consider traditional desktops to be similar to floppy drives today — an odd bit of technology from the past.

Best, Jim Ditmore

* For those of you who read my occasional posts on InformationWeek, I have updated my article that was originally posted there on Feb 14.

 

Metrics Roundup: Key Techniques and References

As an IT leader either of a function or of a large IT team with multiple functions, what is the emphasis you should place on metrics and how are you able to leverage them to attain improvement and achieve more predictable delivery? What other foundational elements must be in place for you to effectively leverage the metrics? Which metrics are key measures or leading indicators and which ones are lagging or less relevant?

For those of you just joining, this is the fourth post on metrics; in our previous posts we focused on key aspects of IT metrics (transparency, benchmarking, and a scientific approach). You can find these posts in the archives or, better yet, in the pages linked under the home page menu of Best Practices. The intent of the RecipesforIT blog is to provide IT leaders with useful, actionable practices, techniques, and guidance to be more successful and enable their shops to achieve much higher performance. I plan to cover most of the key practices for IT leaders in my posts, and as a topic is covered, I try to migrate the material into the Best Practices pages. So, now back to metrics.

In essence there are three types of relevant metrics:

  • operational or primary metrics – those metrics used to monitor, track, and make decisions on the daily or core work. Operational metrics are the base of effective management and are the fundamental measures of the activity being done. It is important that these metrics are collected inherently as part of the activity, and best if the operational team collects, understands, and changes direction based on them.
  • verification or secondary metrics – those metrics used to verify that the work completed meets standards or is functioning as designed. Verification metrics should also be collected and reviewed by the same operational team, though potentially by different members of the team or as part of a broader activity (e.g., a DR test). Verification metrics provide an additional measure of either overall quality or critical activity effectiveness.
  • performance or tertiary metrics – those metrics that provide insight into the performance of the function or activity. Performance metrics give insight into the team’s efficiency, timeliness, and effectiveness.

Of course, your metrics for a particular function should consist of those measures needed to successfully execute and manage the function as well as those measures that demonstrate progress towards the goals of your organization. For example, let’s take an infrastructure function: server management. What operational metrics should be in place? What should be verified on a regular basis? And what performance metrics should we have? While this will vary based on the maturity, scale, and complexity of the server team and environment, here is a good subset (with a brief illustrative sketch after each list):

Operational Metrics:

  • Server asset counts (by type, OS, age, location, business unit, etc.) and server configurations by version (n, n-1, n-2, etc.), virtualized versus non-virtual, and whether EOL or obsolete
  • Individual, grouped, and overall server utilization and performance by component (CPU, memory, etc.)
  • Server incidents, availability, and customer impacts by time period, trended, with root cause and chronic or repeat issue areas identified
  • Server delivery time, server upgrade cycle time
  • Server cost overall and by type of server, by cost area (admin, maintenance, HW, etc.), and cost by vendor
  • Server backup attempts and completions, server failover in place, etc.
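
To make this concrete, here is a minimal sketch (in Python, with hypothetical field names and sample data) of how two of these operational metrics, availability and utilization, might be derived from raw records; in practice your monitoring and incident toolsets would be the real source:

```python
# Hypothetical sketch: deriving monthly availability and utilization
# metrics from raw records. Field names and data are illustrative.
from collections import defaultdict

MINUTES_IN_MONTH = 30 * 24 * 60

# (server, downtime_minutes) entries from incident records for the month
incidents = [("app01", 42), ("app01", 18), ("db01", 95)]

# server -> average CPU utilization % for the month, from monitoring
cpu_samples = {"app01": 34.5, "db01": 71.2, "web01": 12.8}

# Sum downtime per server, then express availability as a percentage.
downtime = defaultdict(int)
for server, minutes in incidents:
    downtime[server] += minutes

for server, cpu in sorted(cpu_samples.items()):
    avail = 100.0 * (MINUTES_IN_MONTH - downtime[server]) / MINUTES_IN_MONTH
    print(f"{server}: availability {avail:.3f}%, avg CPU {cpu:.1f}%")
```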

Verification metrics:

  • Monthly sample of configuration management database server records checked for accuracy and completeness; ongoing scan of the network for servers not in the configuration management database; regular reporting of all obsolete server configurations, with callouts on those exceeding planned service or refresh dates
  • Customer transaction times; regular (every six months) capacity planning and performance reviews of critical business service stacks, including servers
  • Root cause review of all significant customer-impacting events; ratio of auto-detected server issues to those detected manually or by users
  • DR tests; server privileged access and log reviews; regular monthly server recovery or failover tests (for a sample)
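
The monthly CMDB sample in the first item is straightforward to automate. A minimal sketch, assuming a list of CMDB records (the record shape and sample size here are illustrative):

```python
# Hypothetical sketch: selecting a monthly verification sample of CMDB
# server records. Real records would carry full configuration data.
import random

cmdb_records = [{"id": f"srv-{i:04d}"} for i in range(1, 2501)]

SAMPLE_SIZE = 50  # assumed; tune to your population size and risk appetite
sample = random.sample(cmdb_records, SAMPLE_SIZE)

for record in sample:
    # Each sampled record is then checked against the actual estate
    # for accuracy and completeness.
    print("verify:", record["id"])
```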

Performance metrics:

  • Level of standardization or virtualization, level of currency/obsolescence
  • Level of customer-impacting availability, customer satisfaction with performance, amount of headroom to handle business growth
  • Administrators per server, cost per server, cost per business transaction
  • Server delivery time, man-hours required to deliver a server
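
These performance metrics are mostly simple ratios over the base metrics above. A small illustrative calculation (all figures hypothetical):

```python
# Hypothetical figures; the point is that unit metrics are simple ratios
# over base metrics you already collect.
total_server_cost = 4_200_000.0   # annual run cost for the server function ($)
server_count = 2_500
admin_count = 25
delivered_servers = 120           # servers delivered this year
delivery_labor_hours = 960        # total labor hours spent delivering them

print(f"cost per server:    ${total_server_cost / server_count:,.0f}/year")
print(f"servers per admin:  {server_count / admin_count:.0f}")
print(f"hours per delivery: {delivery_labor_hours / delivered_servers:.1f}")
```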

Obviously, if you are just setting out, you will collect some of these metrics first. As you incorporate their collection and automate the associated work and reporting, you can then tackle the additional metrics. And you will vary them according to the importance of different elements in your shop. If cost is critical, then reporting on cost and on efficiency plays such as virtualization will naturally be more important. If time to market or availability is critical, then those elements should receive greater focus. Below is a diagram that reflects the construct of the three types of metrics and their relationship to the different metrics areas and score cards:

Metrics Framework

So, you have your metrics framework; what else is required to successfully leverage the metrics?

First and foremost, the culture of your team must be open to alternate views and support healthy debate. Otherwise, no amount of data (metrics) or facts will enable the team to change directions from the party line. If you and your management team do not lead regular, fact-based discussions where course can be altered and different alternatives considered based on the facts and the results, you likely do not have the openness needed for this approach to be successful. Consider leading by example here and emphasize fact based discussions and decisions.

Also, you must have defined processes that are generally adhered to. If your group’s work is heavily ad hoc and different each time, measuring what happened the last time will not yield any benefits. If this is the case, you need to first focus on defining, even at a high level, the major IT processes and help your teams adopt them. Then you can proceed to metrics and the benefits they will accrue.

Accountability, sponsorship, and the willingness to invest in the improvement activities are also key factors in the speed and scope of the improvements that can occur. As a leader you need to maintain personal engagement in the metrics reviews and score card results. They should tie into your team’s goals, and you should monitor the progress in key areas. Your sponsorship, and senior business sponsorship where appropriate, will be major accelerators to progress. And hold teams accountable for their results and improvements within their domain.

How does this correlate with your experience with metrics? Any server managers out there who have suggestions on the server metrics? I expect we will have two further posts on metrics:

  • a post on how to evolve the metrics you measure as you increase the maturity and capability of your team,
  • and one on unit costing and allocations

I look forward to your suggestions.

Best, Jim

 

A Scientific Approach to IT Metrics

In order to achieve world class or first quartile performance, it is critical to take a ‘scientific’ approach to IT metrics. Many shops remain rooted in ‘craft’ approaches to IT, where techniques and processes are applied in an ad hoc manner to the work at hand and little is measured. Or a smattering of process improvement methodologies (such as Six Sigma or Lean) or development approaches (e.g., Agile) is applied indiscriminately across the organization. Frequently, due to the resulting lack of success, the process methods or metrics focus are then tarred as ineffective by managers.

Most organizations I have seen that were mediocre performers typically have such craft or ad hoc approaches to their metrics and processes. And this includes not just the approach at the senior management level but at each of the 20 to 35 distinct functions that make up an IT shop (e.g., networking, mainframes, servers, desktops, service desk, middleware, etc., and each business-focused area of development and integration). In fact, you must address the process and metrics at each distinct function level in order to then build a strong CIO level process, governance, and metrics. And if you want to achieve 1st quartile or world-class performance, a scientific approach to metrics will make a major contribution. So let’s map out how to get to such an approach.

1) Evaluate your current metrics: You can pick several of the current functions you are responsible for and evaluate them to see where you are in your metrics approach and how to adjust to apply best practices. Take the following steps:

  • For each distinct function, identify the current metrics that are routinely used by the team to execute their work or make decisions.
  • Categorize these metrics as either operational metrics or reporting numbers. If they are not used by the team to do their daily work or they are not used routinely to make decisions on the work being done by the team, then these are reporting numbers. For example, they may be summary numbers reported to middle management or reported for audit or risk requirements or even for a legacy report that no one remembers why it is being produced.
  • Is a scorecard being produced for the function? An effective scorecard would have quantitative measures for the deliverables of the function as well as objective scores for function goals that have been properly cascaded from the overall IT goals

2) Identify gaps with the current metrics: For each IT function there should be regular operational metrics for all key dimensions of delivery (quality, availability, cost, delivery against SLAs, schedule). Further, each area should have unit measures to enable an understanding of performance (e.g., unit cost, defects per unit, productivity). As an example, the server team should have the following operational metrics:

    • all server asset inventory and demand volumes maintained and updated
    • operational metrics such as server availability, server configuration currency, server backups, server utilization should all be tracked
    • also time to deliver a server, total server costs, and delivery against performance and availability SLAs should be tracked
    • further, secondary or verifying metrics such as server change success, server obsolescence, servers with multiple backup failures, and chronic SLA or availability misses should be tracked as well
    • function performance metrics such as cost per server (by type of server), administrators per server, administrator hours to build a server, percent virtualized servers, and percent standardized servers should also be derived

3) Establish full coverage: By comparing the existing metrics against the full set of delivery goals, you can quickly establish the appropriate operational metrics along with appropriate verifying metrics (a small coverage-check sketch follows the list below). Where there are metrics missing that should be gathered, work with the function to incorporate the additional metrics into their daily operational work and processes. Take care to work from the base metrics up to more advanced:

    • start with base metrics such as asset inventories, staff numbers, and overall costs before you move to unit costs, productivity, and other derived metrics
    • ensure the metrics are gathered in as automated a fashion as possible and as an inherent part of the overall work (they should not be gathered by a separate team or subsequent to the work being done)
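
A trivial way to find the missing metrics is a set difference between what the delivery goals require and what is currently collected. A sketch (the metric names are hypothetical):

```python
# Hypothetical sketch: checking metric coverage against delivery goals.
required = {"availability", "unit_cost", "delivery_time",
            "backup_completion", "config_currency", "utilization"}
currently_collected = {"availability", "utilization", "backup_completion"}

gaps = required - currently_collected
print("metrics to add:", sorted(gaps))
# -> ['config_currency', 'delivery_time', 'unit_cost']
```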

Ensure that verifying metrics are established for critical performance areas of the function as well. An example for the server function is the key activity of server backups:

    • the operational metric would be, say, backups completed against backups scheduled
    • the verifying metric would be twofold:
      • any backups for a single server that fail twice in a row get an alert and an engineering review as to why they failed (typically, for a variety of reasons, 1% or fewer of your backups will fail; this is reasonable operational performance. But if one server does not get a successful backup for many days, you are likely putting the firm at risk if there is a database or disk failure, thus the critical alert; a sketch of this logic follows the list)
      • every month or quarter, 3 or more backups are selected at random, and the team ensures they can successfully recover from the backup files. This verifies that everything associated with the backup is actually working.
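
Both verifying metrics are easy to mechanize. Here is a minimal sketch of the two-failures-in-a-row alert and the random restore-test sample; the data structure and thresholds are illustrative, not from any particular backup product:

```python
# Hypothetical sketch of the consecutive-failure alert and the periodic
# restore-test sample described above.
import random

# server -> recent backup outcomes, most recent last (True = success)
backup_history = {
    "app01": [True, True, True],
    "db01":  [True, False, False],   # two consecutive failures -> alert
    "web01": [False, True, True],
}

# Verifying metric 1: alert any server whose last two backups failed.
for server, outcomes in backup_history.items():
    if outcomes[-2:] == [False, False]:
        print(f"ALERT: {server} failed its last two backups -> engineering review")

# Verifying metric 2: each month/quarter, pick 3+ servers at random and
# actually restore from their backup files to prove the end-to-end chain.
restore_sample = random.sample(sorted(backup_history), 3)
print("restore-test sample:", restore_sample)
```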

4) Collect the metrics only once: Often, teams collect similar metrics for different audiences. The metrics they use to monitor, for example, configuration currency or conformance to standards can be largely duplicated by risk data collected against security parameter settings, or by executive management data on percent server virtualization. This is a waste of the operational team’s time and can lead to confusing reports where one view doesn’t match another. I recommend that you establish an overall metrics framework that includes risk and quality metrics as well as management and operational metrics, so that all groups agree to the proper metrics. The metrics are then collected once, distilled and analyzed once, and congruent decisions can then be made by all groups. Later this week I will post a recommended metrics framework for a typical IT shop.

5) Drop the non-value numbers activity: For all those numbers identified as being gathered for middle management reports or for legacy reports with an uncertain audience: if there is no tie to a corporate or group goal, and the numbers are not being used by the function for operational purposes, I recommend you stop collecting the numbers and stop publishing any associated reports. It is non-value activity.

6) Use the metrics in regular review: At both the function team level and function management level, the metrics should be trended, analyzed, and discussed. These should be regular activities: monthly, weekly, and even daily depending on the metrics. The focus should be on how to improve and, based on the trends, whether current actions, staffing, processes, etc., are enabling the team to improve and be successful on all goals. A clear feedback loop should be in place to enable the team and management to identify corrective actions for issues apparent through the metrics, as quickly and as locally as possible. This gives control of the line to the team, and the end result is better solutions, better work, and better quality. This is what has been found in manufacturing time and again and is widely practiced by companies such as Toyota in their factories.

7) Summarize the metrics from across your functions into a scorecard: Ensure you identify the key metrics within each function and properly summarize and aggregate the metrics into an overall group score card. Obviously the score card should match your goals and the key services that you deliver. It may be appropriate to rotate in key metrics from a function based on visibility or significant change. For example, if you are looking to improve overall time to market (TTM) of your projects, it may be appropriate to report on server delivery time as a key subcomponent and hopefully leading indicator of your improving TTM. Including key metrics from the various functions on your score card, even at a summarized level, will result in greater attention and pride being taken in the work, since there are very visible and direct consequences. I also recommend that, on a quarterly basis, you provide an assessment of the progress and perhaps highlights of the team’s work as reflected in the score card.
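
As an illustration of the roll-up, here is a toy sketch that maps a few function metrics to red/amber/green scores against targets; the metric names, targets, and thresholds are all hypothetical:

```python
# Toy scorecard roll-up; metric names, targets, and thresholds are illustrative.
function_metrics = {
    "server":  {"availability": 99.95},           # percent
    "network": {"availability": 99.99},
    "desk":    {"first_call_resolution": 72.0},   # percent
}

def rag(value, target):
    """Score a higher-is-better metric against its target as red/amber/green."""
    ratio = value / target
    return "green" if ratio >= 1.0 else "amber" if ratio >= 0.95 else "red"

# Group-level availability takes the weakest function as the score.
worst_avail = min(f["availability"] for f in function_metrics.values()
                  if "availability" in f)

scorecard = {
    "availability": rag(worst_avail, 99.9),
    "first_call_resolution": rag(
        function_metrics["desk"]["first_call_resolution"], 75.0),
}
print(scorecard)  # {'availability': 'green', 'first_call_resolution': 'amber'}
```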

8) Drive better results through proactive planning: Once the metrics and feedback loop are in place, the team and function management will be able to drive better performance through ongoing improvement as part of their regular activity. Greater increases in performance may require broader analysis and senior management support. Senior management should hold proactive planning sessions with the function team to enable greater improvement to occur. The assignment for the team should be to take key metrics and determine what would be required to set them on a trajectory to a first quartile level in a certain time frame. For example, you may have both a cost reduction goal overall and, within the server function, a subgoal to achieve greater productivity (at a first quartile level) and reduce the need for additional staff. By asking the team to map out what is required, and by holding a proactive planning session on some of the key metrics (e.g., productivity), you will often identify the path to meet local objectives that also contribute to the global objectives. Here, in the server example, you may find that with a moderate investment in automation, productivity can be greatly improved and staff costs reduced substantially; both objectives could be attained by the investment. By holding such proactive sessions, where you ask the team to identify what needs to be done to achieve a trajectory on their key metrics while considering the key goals and focus at the corporate or group level, you can often identify such doubly beneficial actions.

By taking these steps, you will employ a scientific approach to your metrics. If you add a degree of process definition and maturity, you will make significant strides toward controlling and improving your environment in a sustainable way. This will build momentum and enable your team to enter a virtuous cycle of improvement and better performance. And then, if you add to the mix process improvement techniques (in moderation and with the right technique for each process and group), you will accelerate your improvement and results.

But start with your metrics and take a scientific approach. In the next week, I will be providing metrics frameworks that have stood up well in large, complex shops, along with templates that should help with understanding and applying the approach.

What metrics approaches have worked well for you? What keys would you add to this approach? What would you change?

Best, Jim

Moving from IT Silos to IT Synergy

We will revisit the Service Desk best practices this weekend as we have several additions ready to go, but I wanted to cover how you, as an IT leader, can bring about much greater synergy within your IT organization. In many IT shops, including some I found when I first arrived at a company, IT is ‘siloed’ or separated into different divisions, with each division typically supporting a different business line. Inevitably there are frustrations with this approach, and they are particularly acute when a customer is served by two or more lines of business. How should you approach this situation as a leader, and what are the best steps to take to improve IT’s delivery?

I think it is best first to understand the drivers for variations in IT organizations within large corporations. With that understanding we can then work out the best IT organizational and structural models to serve them. There are two primary sets of business drivers:

  • those drivers that require IT to be closer to the business unit such as:
    • improving market and business knowledge,
    • achieving faster time-to-market (TTM),
    • and the ability to be closer in step and under control of the business leads of each division
  • those drivers that require IT to be more consolidated such as:
    • achieving greater efficiencies,
    • attaining a more holistic view of the customers,
    • delivering greater consistency and quality
    • providing greater scale and lower comprehensive risk

So, with these legitimate pushes in two different directions, there is always a conflict in how IT should be organized. In some organizations, the history has been two or three years of being decentralized to enable IT to be more responsive to the business; then, after costs are out of control or risk is problematic, a new CFO, COO, or CEO comes in and IT is re-centralized. This pendulum swing back and forth is not conducive to a high performance team, as IT senior staff either hunker down to avoid conflict or play politics to be on the next ‘winning’ side. Further, given that business organizations have a typical life span of 3 to 4 years (or less) before being re-organized again, corollary changes in IT to match the prevailing business organization then cause havoc with IT systems and structures that have 10 or 20 year life spans. Moreover, implementing a full business applications suite and supporting business processes takes at least 3 years for a decent-sized business division, so if organizations change in the interim, much valuable investment is lost.

So it is best to design an IT organization and systems approach that meets both sets of drivers and anticipates that business organizational change will happen. The best solution for meeting both sets of drivers is to organize IT as a ‘hybrid’ organization. In other words, some portions of IT should be organized to deliver scale, efficiency, and high quality, while others should be organized to deliver time to market, market feature, and innovation.

The functions that should be consolidated and organized centrally to deliver scale, efficiency and quality should include:

  • Infrastructure areas, especially networks, data centers, servers and storage
  • Information security
  • Field support
  • Service desk
  • IT operations and IT production processes and tools

These functions should then be run as a ‘utility’ for the corporation. There should be allocation mechanisms in place to ensure proper usage and adequate investment in these key foundational elements. Every major service the utility delivers should be benchmarked at least every 18 months against industry to ensure delivery is at top quartile levels and best practices are adopted. And the utility teams should be relentlessly focused on continuous improvement with strong quality and risk practices in place.

The functions that should be aligned and organized along business lines to deliver time to market, market feature and innovation should include:

  • Application development areas
  • Internet and mobile applications
  • Data marts, data reporting, and data analysis

These functions should be held accountable for high quality delivery. Effective release cycles should be in place to enable high utilization of the ‘build factory’ as well as a continuous cycle of feature delivery. These functions should be compared and benchmarked against each other to ensure best practices are adopted and performance is improved.

And those functions which can be organized flexibly in either mode would be:

  • Database
  • Middleware
  • Testing
  • Applications Maintenance
  • Data Warehousing
  • Project Management
  • Architecture

For these functions that can be centralized or organized along business lines, it is possible to organize in a variety of ways. For example, systems integration testing could be centralized while unit and system testing are allocated to each business line application team. Or, database could have physical design and administration centralized and logical design and administration allocated by application team. There are some critical elements that should be singular or consolidated, including:

  • if architecture is not centralized, there must be an architecture council reporting to the CIO with final design authority
  • there should be one set of project methodologies, tools, and processes for all project managers
  • there should be one data architecture team
  • there should be one data warehouse physical design and infrastructure team

In essence, where services are more commodity-like, or where there is a critical advantage to having a single solution (e.g., one view of the customer for the entire corporation), you should establish a single team responsible for that service. And where you are looking for greater speed or better market knowledge, organize IT into smaller teams closer to the business (but still with technical fiduciary accountabilities back to the CIO).

With this hybrid organization, as outlined in the diagram, you will be able to deliver the best of both worlds: outstanding utility services that provide cost and quality advantage, and business-aligned services that provide TTM, market feature, and innovation.

As CIO, you should look to optimize your organization using the ‘hybrid’ structure. If you are currently entirely siloed, then start the journey by making the business case for the most commodity of functions: networks, service desks, and data centers. It will be less costly, and there will be more capability and lower risk, if these are integrated. As you successfully complete integration of these areas, you can move up the chain to IT operations, storage, and servers. As these areas are pooled and consolidated, you should be able to release excess capacity and overheads while providing more scale and better capabilities. Another place to start could be delivering a complete view of the customer across the corporation. This requires a single data architecture, good data stewardship by the business, and a consolidated data warehouse approach. Again, as functions are successfully consolidated, the next layer up can be addressed.

Similarly, if you are highly centralized, it will be difficult to maintain pace with multiple business units. It is often better to devolve some level of integration to achieve faster TTM or better business unit support. Pilot an application or integration team in those business areas where innovation and TTM are most important or business complexity is highest. Maintain good oversight but also provide the freedom for the groups to perform in their new accountabilities.

And realize that you can dial the level of integration up or down within the hybrid model. Obviously you would not want to decentralize commodity functions like the network or centralize all application work, but there is the opportunity to vary intermediate functions to meet the business needs. And by staying within the hybrid framework, you can deliver to the business needs without careening from a fully centralized to a decentralized model every three to five years, with all the damage that such change causes to the IT team and systems.

There are additional variations that can be made to the model to accommodate organizational size and scope (e.g., global versus regional and local) that I have not covered here. What variations to the hybrid or other models have you used with success? Please do share.

Best, Jim

Real Lessons of Innovation from Kodak

At Davos this past week, innovation was trumpeted as a necessity for business and a solution for economic ills. And in corporations around the world, business executives speak of the need for ‘innovation’ and ‘agility’ to win in the marketplace. Chalk that up to the Apple effect. With the latest demise of Kodak, preceded by Borders, Nokia, and Blockbuster, among others, some business leaders are racing to out-innovate and win in the marketplace. Unfortunately, most of these efforts cause more chaos and harm than good.

Let’s take Kodak. Here was a company that had been an innovator since 1888. Kodak’s labs are full of inventions and techniques, and it has a patent portfolio worth an estimated $2.5B. The failure of Kodak was due to several causes, but it was not due to lack of innovation. Instead, as rightly pointed out by John Bussey at the WSJ, ‘it failed to monetize its most important asset – its inventions.’ It invented digital photography but never took an early or forceful position with a product (though it is unlikely that even a strong position in that market would have contributed enough revenue, given digital cameras come for free on every smart phone today). The extremely lucrative film business paralyzed Kodak until it plunged into the wrong sector – the highly mature and competitive printing market.

So, it is all well and good to run around and innovate, but if you cannot monetize it, and worse, if it distracts you from the business at hand, then you will run your business into the ground. I think there are four patterns of companies that successfully innovate:

The Big Bet at the Top: One way that innovation can be successfully implemented is through the big bet at the top. In other words, either the owner or CEO decides the direction and goes ‘all-in’. This has happened time and again. For example, in the mid-80s, Intel made the shift from memory chips to microprocessors. This dramatic shift included large layoffs and shuttering plants in its California operations, but the shift was an easier decision by top management because the microprocessor marketplace was more lucrative than the memory semiconductor marketplace. Intel’s subsequent rapid improvement in processors and later branding proved out and further capitalized on this decision. And I think Apple is a perfect example of big bets by Steve Jobs. From the iPod and iTunes, to the iPhone, to the iPad, Apple made big bets on new consumer devices and experiences that were, at the time, derided by pundits and competitors (I particularly like this one rant by Steve Ballmer on the iPhone). Of course, after a few successes, the critics are hard to find now. These bets require prescient senior management with enough courage and independence to place them; otherwise, even when you have a George Fisher, as Kodak did, the bets are not placed correctly. I also commend bets where you get out of a sector you know you cannot win in. An excellent example of this is IBM’s spinoff of its printer business in the ’90s and its sale of the PC business to Lenovo more recently. Both turned out to be well ahead of the market (just witness HP’s belated and poorly thought through PC exit attempt this past summer).

Innovating via Acquisition: Another effective strategy is to use acquisition as a weapon. But success comes to those who make multiple small acquisitions as opposed to one or two large acquisitions. Cisco and IBM come to mind with this approach. Cisco effectively branched out to new markets and extended its networking lead in the 1990s and early 2000s with this approach. IBM has greatly broadened and deepened its software and services portfolio in the past decade with it as well. Compaq and Digital, America Online and Time Warner, or perhaps recently, HP’s acquisition of Autonomy, represent those companies that make a late, massive acquisition to try to stave off or shift their corporate course. These fare less well. In part, it is likely due to culture and vision. Small acquisitions, when care is taken by senior management to fully leverage the product, technology, and talent of the takeover, can mesh well with the parent. A major acquisition can set off a clash of cultures, visions, and competing products that wastes internal energy and places the company further behind in the market. Hats off to at least one major acquisition that completely changed the course of a company: Apple’s acquisition of NeXT. Of course, along with NeXT they also got its leader, Steve Jobs, and we all know what happened next to Apple.

Having a Separate Team: Another successful approach is to recognize that the reason a company does well is because it is focused on ensuring predictable delivery and quality to its customer base. And to do so, its operations, product and technology divisions all strive to deliver such value predictably. Innovation by its very nature is discontinuous and causes failure (good innovators require many failures for every success). By teaching the elephant to dance, all you do is ruin the landscape and the productive work that kept the company in business before it lost its edge. Instead, by setting up a separate team, as IBM has done for the past decade and others have done successfully, a company can be far more successful. The separate team will require sponsorship, and it must be recognized that the bulk of the organization will focus on the proper task of delivering to the customer as well as making incremental improvements. You could argue that Kodak’s focus of the bulk of its team on film was its downfall. But I would suggest instead it was the failure of the innovation teams to take what they already had in the lab and make them successful new products in the market.

A Culture of Tinkering: This approach relies on the culture and ingenuity of the team to foster an environment where outstanding delivery in the corporation’s competence area is done routinely, and time and resources are set aside to continuously improve and tinker with the products and capabilities. To have the time and space for teams to be engaged in such ‘tinkering’ requires that the company master the base disciplines of quality and operational excellence. You will find such companies in many fields, and this approach has enabled ongoing success and market advantage, in part because not only do they innovate, they also out-execute. For example, FedEx, well-known for operational excellence, introduced package tracking in 1994, essentially exposing to customers what was a back end system. This product innovation has now become commonplace in the package and shipping industry. Similarly, 3M is well-known as an industry innovator, regularly deriving large amounts of revenue from products that did not exist for them even 5 years prior. But some of their greatest successes (e.g., Post-It Notes) did not come about from some corporate innovation session and project. Instead they came together over years as ideas and failures percolated in a culture of tinkering until finally the teams hit on the right combination for a successful product. And Google is probably the best technology company example, where everyone sets aside 20% of their time to ‘tinker’.

So what approach is best? Well, unless you have a Steve Jobs, or are a pioneering company in a new industry, making the big bet for an established corporation should be out. If your performance does not show outstanding excellence, and if your corporate culture does not encourage collegiality, support experimentation, and then leverage failure, then a tinkering approach will not work. So you are left with two options: make multiple small acquisitions in the areas of your product direction and, with effective corporate sponsorship, fold the new product sets and capabilities into your own; or set up a separate team to pursue the innovation areas. This team should brainstorm and create the initial products, test and refine them, and then, after market pilot, have the primary production organization deliver them in volume (again with effective corporate sponsorship). Thus the elephant dances the steps it can do and the mice do the work the elephant cannot do.

As for our example, Kodak had only part of the tinkering formula. Kodak had the initial innovation and experimentation, but it was unable to take the failures and adjust its delivery to match what was required in the market for success. And it should have executed multiple smaller efforts across more diverse product sets (similar to what Fujifilm did) to find its new markets.

Have you been part of a successful innovation effort or culture? What approaches did you see being used effectively?

Best, Jim

IT Service Desk: Structure and Key Elements

As we mentioned in our first service desk post, the service desk is the critical central point where you interact daily with your customers. To deliver outstanding IT capabilities and service, you need to ensure your service desk performs at a high level. From ITIL and other industry resources you can obtain the outline of a properly structured service desk, but their perspective places the service desk as an actor in production processes and does not necessarily yield insight into the best practices and techniques that make a world class service desk. It is worthwhile, though, to start with ITIL as a base of knowledge on service management; here we will provide best practices to enable you to reach greater performance.

Foremost, of course, is that you understand the needs of the business and the levels of service required. We start with the assumption that you have outlined your business requirements and understand the primary services and levels of performance you must achieve. With this in hand, there are 7 areas of best practice that we think are required to achieve 1st quartile or world-class performance.

Team and Location: First and foremost is the team and location. As the primary determinant of outstanding service is the quality of the personnel and adequate staffing of the center, how you recruit, staff, and develop your team is critical. Further, if you locate the service desk where it is difficult to attract and retain the right caliber of staff, you will struggle to be successful. The service desk must be a consolidated entity; you cannot run a successful service desk with multiple small units scattered around your corporate footprint. You will be unable to invest in the needed call center technology and provide the career path to attract the right staff if it is highly dispersed. It is appropriate, and typically optimal, for a large organization to have two or three service desks in different time zones to optimize coverage (time of day) and languages.

Locate your service desk where there are strong engineering universities nearby that will provide an influx of entry level staff eager to learn and develop. Given staff cost will be the primary cost factor in your service desk, ensure you locate in lower cost areas that have good language skills, access to the engineering universities, and appropriate time zones. For example, if you are in Europe, you should look to have one or two consolidated sites in or just outside second-tier cities with strong universities: do not locate in Paris or London; instead base your service desk in or just outside Manchester, Budapest, or Vilnius. This will enable you to tap into a lower cost yet high quality labor market that is also likely to provide more part-time workers to help you cover peak call periods.

Knowledge Management and Training: Once you have your location and a good staff, you need to equip the staff with the tools and the knowledge to resolve issues. The key to a good service desk is to actually solve problems or provide services instead of just logging them. It is far less costly to have a service desk member reset a password, correct a software configuration issue, or enable a software license than to log the user name and issue or request and then pass it to a second level engineering group. And it is far more satisfying for your user. So invest in excellent tools and training, including:

  • Start with an easy and powerful service request system that is tied into your knowledge management system.
  • Invest in and leverage a knowledge management system that will enable your service desk staff to quickly parse potential solution paths and apply to the issue at hand.
  • Ensure that all new applications or major changes that go into production are accompanied by appropriate user and service desk documentation and training.
  • Have a training plan for your staff. Every service desk, no matter how large or small, should have a plan that trains agents to solve problems, use the tools, and understand the business better. We recommend a plan that enables eight hours of training per agent per month. This continuous training keeps your organization more knowledgeable on how to solve problems and on how incidents impact the businesses it supports.
  • Support external engineering training. We also recommend fully supporting external training and certification. When your service desk staff get that additional education or certification, such as Windows or network certifications, your company now has a more capable service desk employee who could potentially (eventually) move into the junior engineering ranks. This migration both benefits your engineering team and enables you to attract more qualified service desk staff because of the existence of such an upward career route.
  • Foster a positive customer service attitude and skills. Ensure your service desk team is fully trained in how to work with customers, who may arrive on the phone already frustrated. These important customer interface skills are powerful tools for them to deliver a positive experience. Give them the right attitude and vision (not just how to serve, but being a customer advocate with the rest of IT) as they are your daily connection with the customer.
  • Communicate your service desk vision and goals. This regular communication ties everything together and prevents the team from wandering in many directions. Proper communication across operations, knowledge management, training, and process and procedures ensures you focus on the right areas at the right time, and it also ensures the team is always moving in the same direction, striving for the same goal of high performance at very competitive price points.

Modern infrastructure and production tools: The service desk is not a standalone entity. It must mesh cleanly with the production, change, asset, and delivery processes and functions within IT. It is best to have a single production system serving production, change, and incident management, leveraging a single configuration database. The service desk request toolset should also be tightly integrated with this system (and there are some newer but very strong toolsets that deliver all aspects) so that all information is available at each interaction and the quality of the data can be maintained without multiple entries. As the service desk is really the interface between the customer and these IT processes, the cleaner and more direct the mesh, the better the customer experience and the engineering result. You can also use the service desk interaction with the customer to continually improve the quality of the data at hand. For example, when a customer calls in to order new software or reset a password, you can verify and update a few pieces of data such as the location of their PC or their mobile device information. This enables better asset management and provides for improved future service. In addition to a well-integrated set of software tools and production processes, you should invest in a modern call center telephony capability with easy-to-use telephony menus. You should also offer internet and chat channels alongside traditional telephony to exploit automated self-service interfaces as much as possible. This is the experience your users understand and leverage in their consumer interactions, and it is what they expect from you. You should measure your interfaces against the bar of ordering something from Amazon.
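
To make the data-verification idea concrete, here is a minimal sketch of refreshing configuration data during a call; the record fields, function name, and values are illustrative assumptions rather than any particular toolset's API:

```python
from datetime import date

def verify_asset_on_call(cmdb_record: dict, caller_confirmed: dict) -> dict:
    """Merge caller-confirmed details into the configuration record and
    stamp each field with the date it was verified on this interaction."""
    for field in ("pc_location", "mobile_device", "installed_software"):
        if field in caller_confirmed:
            cmdb_record[field] = caller_confirmed[field]
            cmdb_record[f"{field}_verified"] = date.today().isoformat()
    return cmdb_record

# Hypothetical usage during a password reset call:
record = {"asset_id": "PC-10442", "pc_location": "Bldg 2, Floor 3"}
record = verify_asset_on_call(record, {"pc_location": "Bldg 4, Floor 1"})
```

Even a lightweight routine like this, run on every interaction, steadily raises the quality of your asset data without a separate inventory effort.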

Establish and publish SLAs and a service catalogue: As part of providing an excellent service desk experience, you need to set users' expectations and provide an effective way to order IT services. It is important to define your services and publish SLAs for them (e.g., a new PC will be delivered in two days, or we answer 95% of all service desk calls within 45 seconds). When you define the services, ensure that you focus on holistic services or experiences rather than component pieces. For example, ordering a new PC for a new employee should be a clear service that includes everything you would expect to get started (IDs, passwords, software, setup and configuration, remote access capability, etc.), not a situation where the PC arrives in two days but the user spends another two months discovering, ordering, and implementing everything else they need. Think in terms of the target user result or experience. An analogy would be the McDonald's value meal: as a consumer you do not order each individual french fry and pickle; you order a No. 3, and the drink, fries, and burger come together in a value pack. Make sure your service catalogue has 'value packs' and not individual fries.
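
As a sketch of how a 'value pack' might be represented in a service catalogue, the entry below bundles the component requests behind a single orderable service with one end-to-end SLA; the service name, components, and SLA figure are hypothetical:

```python
# Hypothetical catalogue entry: one orderable service that bundles
# everything a new employee needs, with a single end-to-end SLA.
NEW_EMPLOYEE_WORKSTATION = {
    "service": "New Employee Workstation",
    "sla_business_days": 2,           # end to end, not per component
    "components": [
        "standard laptop build",
        "network and email IDs",
        "initial passwords",
        "core software bundle",
        "remote access (VPN) setup",
    ],
    "approvals": ["hiring manager"],  # gathered once, up front
}

def order(service: dict, employee: str) -> list:
    """Expand the value pack into its component work orders, so the user
    orders one thing and IT fulfills many."""
    return [f"{employee}: {component}" for component in service["components"]]

print(order(NEW_EMPLOYEE_WORKSTATION, "new.hire@example.com"))
```

The design point is that the SLA attaches to the whole pack, not to the individual fries.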

Mature leverage of metrics and feedback loops: With the elements above you will have a strong base for a service desk. To move it to outstanding performance, you must commit to a track of continuous improvement. Use the metrics gathered by your service desk processes to surface actionable data in four areas (a small analysis sketch follows this list):

  • Chronic issues – use Pareto analysis to determine what the biggest issues are, and then leverage root cause analysis to identify how to stop the issues from occurring. The solutions will range from better user training to eliminating complex system faults within your applications. But these remedies will eliminate the call (and its cost) and remove ongoing problems that are sand in the gears of IT's relationship with its customers.
  • Self-service opportunities – again, Pareto analysis will show you which high-volume requests, if heavily automated and moved to self-service, can take significant work out of your IT shop while providing the customer with an interface they expect. This is not just password resets; it could be software downloads or the ability to access particular blocked pages on the internet. Set up a lightweight workflow capability with proper management approvals to enable your users to self-serve.
  • Poor service – use customer satisfaction surveys and traditional call center metrics to ensure your staff are delivering to a high level. Use the data to identify service problem areas and address them accordingly.
  • Emerging trends – your applications, your users, and your company's needs are dynamic. Use the incident and service request data to understand what is emerging as an issue or need. For example, increasing performance complaints about an application that has been stable could indicate growing business usage of a system on the edge of performance failure. Or increasing demand for a particular software package may indicate a need for a standardized rollout of a tool that is used more widely than before.
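
As promised above, here is a minimal sketch of the Pareto step, assuming a ticket extract where each record carries a category; the categories and volumes below are hypothetical:

```python
from collections import Counter

def pareto(tickets, threshold=0.8):
    """Rank issue categories by call volume and flag the 'vital few'
    that account for the top share (default 80%) of all calls."""
    counts = Counter(t["category"] for t in tickets)
    total = sum(counts.values())
    ranked, cumulative = [], 0
    for category, n in counts.most_common():
        cumulative += n
        ranked.append((category, n, cumulative / total))
    vital = [r for r in ranked if r[2] <= threshold] or ranked[:1]
    return ranked, vital

# Hypothetical monthly ticket extract
tickets = ([{"category": "password reset"}] * 420 +
           [{"category": "VPN connectivity"}] * 180 +
           [{"category": "printer"}] * 90 +
           [{"category": "email configuration"}] * 60 +
           [{"category": "other"}] * 50)

ranked, vital = pareto(tickets)
for category, n, cum in ranked:
    print(f"{category:20s} {n:4d} calls  cumulative {cum:5.1%}")
```

Root cause work then starts at the top of this list, where each category eliminated removes the most calls (and cost).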

Predictable IT delivery and positive cross engagement: The final element of an outstanding service desk and customer experience lies with the rest of the IT team. While the service desk can accomplish a great deal, it cannot deliver if the rest of IT does not provide solid, predictable service delivery. While that is quite obvious, you should use the service desk metrics on how well your IT team is delivering against requests not just to judge the service desk but also to identify engineering team delivery issues. Did you miss the desktop PC delivery because the service desk did not take down the right information, or because the desktop implementation team missed its SLA? Further, the engineering component teams should meet with the service desk team (at least quarterly) to ascertain what defects they are introducing, what volume issues are arising from their areas, and how they can be resolved. On a final note, you may find (as is often the case) that the longest delay to service delivery (e.g., that desktop PC) is obtaining either the user's business management approval or finance approval. With data from the metrics, you should be able to justify investing in a lightweight workflow system that obtains these approvals automatically (typically via an email/intranet combination) and reduces the unproductive effort of chasing approvals by your team.

So, quite a few elements go into a successful service desk. Perhaps one way to summarize them is to view the service desk as a sturdy three-legged stool. The seat is the service desk team. Knowledge management and training are one leg; processes and metrics are another; and the telephony infrastructure and tools are the third. The legs are made sturdier with effective communications and a supporting IT team.

Perhaps there are other elements or techniques that you would emphasize? Let us know; we look forward to your comments. Best, Jim, Bob, and Steve.

Our Additional Authors
About Bob Barnes: Bob has over 20 years of experience managing service desk and infrastructure teams. He has experience in the financial services, manufacturing, pharmaceutical, telecommunications, legal, and government sectors. He has spoken at many industry conferences such as HDI, ICMI, and Pink Elephant. Bob has degrees in Information Systems and Business Management.
About Steve Wignall: Steve is an IT Service Management professional with significant experience leading large-scale global IT Service Management functions in the financial services industry. Steve has contributed to defining the global industry standards for service desk quality as a former member of the Service Desk Institute Standards Committee. Steve led his service desk to be the first team globally to achieve the prestigious Service Desk Institute 4 Star Quality Certification, achieving an unparalleled 100% rating in all assessment categories, and he is a former winner of the SDI UK Service Desk Team of the Year.

When to Benchmark and How to Leverage the Results

Benchmarking is an important tool for management, yet I have frequently found that organizations do not take advantage of it. There seem to be three camps when it comes to benchmarking:

  • those who, whether for lack of time or through some other rationale, avoid tapping into benchmarking at all
  • those who try to use it occasionally but, for a variety of reasons, are never able to leverage its results and progress
  • those who use benchmarking regularly and make material improvement

I find it surprising that so many IT shops don't benchmark. I have always felt that there are only two possible outcomes from a benchmark:

  1. You benchmark and find out that you compare very well with the rest of the benchmark population; you can now use this data as part of your report to your boss and the business to let them know the value your team is delivering.
  2. You benchmark and you find out a bunch of things that you can improve. And you now have the specifics as to where and likely how to improve.

So, with the possibility of good results with either outcome, when and how should you benchmark your IT shop?

I recommend making sure that your team and the areas you want to benchmark have an adequate maturity level in terms of defined processes, operational metrics, and cost accounting. Usually, to take advantage of a benchmark, you should be at minimum a Level 2 shop, and preferably a Level 2+ shop where you have well-understood unit costs and unit productivity. If you are not at this level, then in order to compare your organization to others you will need to first gather an accurate inventory of assets, staff, and time spent by activity (e.g., run versus change). This data should be supplemented with defined processes and standards for the activity to be compared. And for a thorough benchmark you will need data on the quality of the activity, preferably with six months of trending for all data. In essence, these are prerequisite activities that must take place before you benchmark.

I think many of the teams that try to benchmark but are not able to do much with the results are unable to progress because:

  • they do not have the consistent processes on which improvements and changes can be implemented
  • they do not routinely collect the base data, and thus once the benchmark is over, no further data is collected to understand whether any of the improvements had an effect
  • the lack of data and standards results in so much estimation for the benchmark that you cannot then use it to pinpoint the issues

So, rather than benchmark when you are a Level 1 or 2- shop, instead work on the basics of improving the maturity of your activities. For example, collect and maintain accurate asset data; this is foundational to any benchmarking. Similarly, collect how your resources spend their time; this is required anyway to estimate or allocate costs to whatever drives them, so do it accurately. And implement process definitions and base operational metrics, and have the team review and publish them monthly.

For example, let’s take the Unix server area. If we are looking to benchmark we would want to check various attributes against the industry including:

  • number of servers (versus similar size firms in our industry)
  • percent virtualized
  • unit cost (per server)
  • unit productivity (servers per admin)
  • cost by server by category (e.g. staff, hardware, software, power/cooling/data center, maintenance)

With this information you can quickly identify where you have inefficiencies, where you are paying too much (e.g., the cost of your software per server), or where you are unproductive (your servers per admin is low versus the industry). This then allows you to draw up effective action plans, because you are addressing problems that can likely be solved (you are looking to bring your capabilities up to what is already best practice in the industry).
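
Here is a minimal sketch of how such a comparison might look in practice; the industry medians and shop figures below are illustrative placeholders, not real benchmark data:

```python
# Illustrative industry medians (placeholders, not real benchmark data)
industry = {"cost_per_server": 5200.0, "servers_per_admin": 45.0,
            "pct_virtualized": 0.60}

# Illustrative figures for our own shop
our_shop = {"servers": 800, "admins": 22, "total_cost": 4_800_000,
            "virtualized": 420}

metrics = {
    "cost_per_server": our_shop["total_cost"] / our_shop["servers"],
    "servers_per_admin": our_shop["servers"] / our_shop["admins"],
    "pct_virtualized": our_shop["virtualized"] / our_shop["servers"],
}

# For cost, a positive gap is bad; for productivity and virtualization,
# a negative gap is bad.
for name, value in metrics.items():
    benchmark = industry[name]
    gap = (value - benchmark) / benchmark
    print(f"{name:18s} ours {value:9.2f}  industry {benchmark:9.2f}  gap {gap:+.0%}")
```

In this hypothetical, the shop pays roughly 15% more per server and runs about 19% fewer servers per admin than the median, which is exactly the kind of pinpointing that turns a benchmark into an action plan.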

I recall a benchmark of the Unix server area where our staff costs per server were out of line with the industry even though we had relatively strong productivity. Upon further investigation we realized we had a mostly very senior workforce (thus paid at the top end of the scale, and very capable) that had held on to even the most junior tasks. So we set about improving this in several ways:

  • we packaged up and moved the routine administrative work to IT operations (who did it for far less and in a much more automated manner)
  • we put in place a college graduate program and shifted our hiring in this group from a previous focus on mid- and senior-level new hires only to mostly graduates, some junior engineers, and only a very few mid-level engineers
  • we also put in place better tools for the work to be done so staff could be more productive (more servers per admin)

The end result after about 12 months was a staff cost per server that was significantly below the industry median, approaching best in class (and thus we could reduce our unit allocations to the business). Even better, with a more balanced workforce (i.e., not all senior staff) we ended up with a happier team because the senior engineers were now looked up to and could mentor the new junior staff. Moreover, the team now could spend more of their time on complex change and project support rather than just run activities. And with the improved tools making everyone more productive, it resulted in a very engaged team that regularly delivered outstanding results.

I am certain that some of this would have been evident with just a review. But by benchmarking, not only were we able to precisely identify where we had opportunity, we were also better able to diagnose the issues and prescribe the right course of action with targets that we knew were doable.

Positive outcomes like this are the rule when you benchmark. I recommend that you conduct a yearly external benchmark for each major component area of infrastructure and IT operations (e.g. storage, mainframe, server, network, etc). And at least every two years, assess IT overall, assess your Information Security function, and if possible, benchmark your development areas and systems in business terms (e.g., cost per account, cost per transaction).

One possibility in the development area: since most IT shops have multiple sub-teams within development, you can use the operational development metrics to compare them against each other (e.g., defect rates, cost per man-hour, cost per function, etc.). Then the sub-team with the best metrics can share its approach so all can improve.
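
A minimal sketch of that internal comparison, with hypothetical team names and figures:

```python
# Hypothetical operational metrics for three development sub-teams
teams = {
    "payments":  {"defects": 14, "kloc": 120, "cost": 900_000, "functions": 310},
    "lending":   {"defects": 30, "kloc": 95,  "cost": 750_000, "functions": 240},
    "servicing": {"defects": 9,  "kloc": 60,  "cost": 520_000, "functions": 200},
}

for name, t in teams.items():
    defect_rate = t["defects"] / t["kloc"]     # defects per thousand lines
    cost_per_fn = t["cost"] / t["functions"]   # cost per delivered function
    print(f"{name:10s} defects/KLOC {defect_rate:5.2f}  cost/function ${cost_per_fn:,.0f}")
```

The team that comes out best on a metric then walks the others through its approach, which keeps the comparison collaborative rather than punitive.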

If you conduct regular exercises like this and combine them with a culture of relentless improvement, you will achieve a flywheel effect, where each gain in knowledge and each improvement becomes more additive and more synergistic for your team. You will reduce costs and improve productivity faster than with a less open and less scientific approach.

Have you seen such accelerated results? What were the elements that contributed to that progress? I look forward to hearing from you.

Best, Jim

IT Service Desk Best Practices

An important interface with your internal customers is your IT service desk. Unfortunately, in many situations the service desk (or help desk) does not use up-to-date practices and can be a backwater of capability. This can result in a very poor reputation for IT, because the service desk is the primary customer interface with the IT organization. I recall starting at a company tasked with turning around the IT organization. When I asked about the IT help desk, the customer turned to me and said, 'You mean the IT helpless desk?' With a reputation that poor with our customers, I immediately set out to turn around our service desk and supporting areas.

The IT service desk may seem quite straightforward to address: maybe the thought is that all you really need to do is have one number, staff it, be courteous, and try hard. This isn't the case, and there are some clear best practice techniques and approaches that will enable you to deliver consistent, positive interactions with your customers as well as enable greater productivity and lower cost for the broader IT team.

For our discussion, I have two esteemed former colleagues who have run top-notch service desks and who will be authoring material on best practices and how to deliver an outstanding customer experience through this critical interface. Both Steve Wignall and Bob Barnes have run world-class service desks at large financial services companies. And I think you will find the guidance and techniques they provide here and in subsequent posts to be a surefire way to transform your 'helpless' desk into a best-in-class service desk.

First, let’s recap why the service desk is such an important area for IT:

  • the service desk is the starting point for many key processes and services for IT
  • well-constructed, the service desk can handle much of the routine work of IT, enabling engineering and other teams to do higher value work
  • it is your primary interface with the customer, where you can gauge the pulse of your users, and make the biggest daily impact on your reputation
  • with the right data and tools, the service desk can identify and correct problem outbreaks early, thereby reducing customer impacts and lowering overall support costs.

And yet, despite its importance to IT, too often service desks are chronic underperformers due to the following issues:

  • poor processes or widespread lack of adherence to them
  • the absence, or low-quality application, of a scientific, metrics-based management approach
  • lousy handoffs and poor delivery by the rest of IT
  • inadequate resources and recruiting, worsened by weak staff development and team-building
  • weak sponsorship by senior IT leaders
  • ineffective service desk leadership

Before we get into the detailed posts on service desk best practices, here are a few items to identify where your team is, and a few things to get started on:

1. What is your first call resolution rate? If it is below 70%, then there is work to do. If it is above 70% but primarily because of password resets, then put in decent self-serve password reset and re-evaluate (the sketch after these five items shows one way to compute this from your call log).

2. What is the experience of your customers? Are you doing a monthly or quarterly survey to track satisfaction with the service? If not, get going and implement one. I recommend a seven-point scale, and hold yourself to the same customer satisfaction bar your company is driving for with its external customers.

3. What are the primary drivers of calls? Are they systems issues? Are they due to user training or system complexity issues? Workstation problems? Are calls being made to report availability or general system performance issues? If you know clearly (e.g., via Pareto analysis) what is driving your calls, then you have the metrics in place (if not, get the metrics, and the process if necessary, in place). Once you have the metrics you can begin to sort out the causes and tackle them in turn. What is your team doing with the metrics? Are they being used to identify the cause of calls and then eliminate the call in the first place?

For example, if the leading drivers of calls are training and complexity, is your team reworking the system so it is more intuitive, or improving the training material? If the drivers are workstation issues, do you know which component and which model, and are you now figuring out what proactive repair or replacement program will reduce these calls? Remember, each call you eliminate probably saves your company at least $40 (mostly in eliminating the downtime of the caller); the sketch after these five items shows how quickly that adds up.

4. Do you and your senior staff meet with the service desk team regularly and review their performance and metrics? Do senior IT leaders sponsor major efforts to eliminate the source of calls? Does the service desk feature in your report to customers?

5. If your service desk is run by a third-party provider, have you visited their site lately? Does your service area have the look, feel, and knowledge of your company so they can convey your brand? And hold them to the same high performance bar as you would your own team.
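
As referenced in items 1 and 3 above, here is a minimal triage sketch, assuming a call log where each record carries a driver and a first-call-resolution flag; the $40 figure comes from the discussion above, and the drivers and volumes are hypothetical:

```python
from collections import Counter

COST_PER_CALL = 40  # rough per-call cost cited above (mostly caller downtime)

# Hypothetical month of calls
calls = ([{"driver": "password reset", "resolved_first_call": True}] * 400 +
         [{"driver": "training/complexity", "resolved_first_call": False}] * 250 +
         [{"driver": "workstation", "resolved_first_call": True}] * 150 +
         [{"driver": "system availability", "resolved_first_call": False}] * 100)

# First call resolution rate, with and without password resets
fcr = sum(c["resolved_first_call"] for c in calls) / len(calls)
non_pw = [c for c in calls if c["driver"] != "password reset"]
fcr_ex_pw = sum(c["resolved_first_call"] for c in non_pw) / len(non_pw)
print(f"FCR {fcr:.0%}; excluding password resets {fcr_ex_pw:.0%}")

# Monthly savings available from eliminating each call driver entirely
for driver, volume in Counter(c["driver"] for c in calls).most_common():
    print(f"{driver:20s} {volume:4d} calls -> ${volume * COST_PER_CALL:,}/month")
```

In this hypothetical, a 61% headline FCR collapses to 30% once password resets are excluded, which is precisely the re-evaluation item 1 calls for.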

Use these five items for a quick triage of your service desk; our next posts will cover the best practices and techniques to build a world-class service desk. Later this week, Bob and Steve will cover the structure and key elements of a world-class service desk and how to go about transforming your current desk or building a great one from scratch. Steve will also cover the customer charter and its importance to maintaining strong performance and meeting expectations.

I look forward to your thoughts and experiences in this area. And perhaps you have a service desk that could use some help to turn around IT's reputation, and yours.

Best, Jim