Metrics Roundup: Key Techniques and References

As an IT leader, whether of a single function or of a large IT team with multiple functions, what emphasis should you place on metrics, and how can you leverage them to drive improvement and achieve more predictable delivery? What other foundational elements must be in place for you to leverage the metrics effectively? Which metrics are key measures or leading indicators, and which are lagging or less relevant?

For those of you just joining, this is the fourth post on metrics and in our previous posts we focused on key aspects of IT metrics (transparency, benchmarking, and a scientific approach). You can find these posts in the archives or, better yet, in the pages linked under the home page menu of Best Practices. The intent of the RecipeforIT blog is to provide IT leaders with useful, actionable practices, techniques and guidance to be more successful IT leaders and enable their shops to achieve much higher performance. I plan to cover most of the key practices for IT leaders in my posts, and as a topic is covered, I try to migrate the material into Best Practices pages. So, now back to metrics.

In essence there are three types of relevant metrics:

  • operational or primary metrics – those metrics used to monitor, track, and direct the daily or core work. Operational metrics are the base of effective management and the fundamental measures of the activity being done. It is important that these metrics are collected inherently as part of the activity, and it is best if the operational team itself collects, understands, and changes direction based on them.
  • verification or secondary metrics – those metrics used to verify that the work completed meets standards or is functioning as designed. Verification metrics should also be collected and reviewed by the same operational team, though potentially by different members of the team or as part of a broader activity (e.g., a DR test). Verification metrics provide an additional measure of either overall quality or critical activity effectiveness.
  • performance or tertiary metrics – those metrics that provide insight as to the performance of the function or activity. Performance metrics enable insight as to the team’s efficiency, timeliness, and effectiveness.

Of course, your metrics for a particular function should consist of those measures needed to successfully execute and manage the function as well as those measures that demonstrate progress toward the goals of your organization. For example, let’s take an infrastructure function: server management. What operational metrics should be in place? What should be verified on a regular basis? And what performance metrics should we have? While this will vary based on the maturity, scale, and complexity of the server team and environment, here is a good subset:

Operational Metrics:

  • Server asset counts (by type, by OS, by age, location, business unit, etc) and server configurations by version (n, n-1, n-2, etc) or virtualized/non-virtual and if EOL or obsolete
  • Individual, grouped and overall server utilization, performance, etc by component (CPU, memory, etc)
  • Server incidents, availability, customer impacts by time period, trended with root cause and chronic or repeat issues areas identified
  • Server delivery time, server upgrade cycle time
  • Server cost overall and by type of server, by cost area (admin, maintenance, HW, etc) and cost by vendor
  • Server backup attempt and completion, server failover in place, etc
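As a rough illustration of how operational metrics like these can be rolled up for review, here is a minimal Python sketch; the data model, field names, and sample fleet are hypothetical, not taken from any particular tooling:

```python
# Hypothetical sketch: rolling up server operational metrics from an
# inventory of server records. Fields and sample data are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    os_version: str      # "n", "n-1", "n-2", or "eol"
    virtualized: bool
    cpu_util_pct: float  # average CPU utilization over the period

def operational_rollup(servers):
    """Summarize asset counts, version currency, and utilization."""
    by_version = Counter(s.os_version for s in servers)
    return {
        "total": len(servers),
        "by_version": dict(by_version),
        "pct_virtualized": 100.0 * sum(s.virtualized for s in servers) / len(servers),
        "avg_cpu_util": sum(s.cpu_util_pct for s in servers) / len(servers),
        "eol_count": by_version.get("eol", 0),
    }

fleet = [
    Server("app01", "n", True, 35.0),
    Server("app02", "n-1", True, 55.0),
    Server("db01", "n-2", False, 70.0),
    Server("legacy01", "eol", False, 20.0),
]
print(operational_rollup(fleet))
```

The point is that these numbers fall out of data the team already maintains as part of the work, rather than being assembled separately for reporting.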

Verification metrics:

  • Monthly sample of the configuration management database server records for accuracy and completeness; ongoing scan of the network for servers not in the configuration management database; regular reporting of all obsolete server configs with callouts on those exceeding planned service or refresh dates
  • Customer transaction times; regular (every six months) capacity planning and performance reviews of critical business service stacks, including servers
  • Root cause review of all significant customer-impacting events; ratio of auto-detected server issues versus manual or user detection
  • DR tests; server privileged access and log reviews; regular monthly server recovery or failover tests (for a sample)
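The first verification bullet (sampling CMDB records and scanning the network for unregistered servers) can be sketched as follows; the host lists and the fixed audit seed are illustrative assumptions:

```python
# Hypothetical sketch: verifying CMDB completeness against a network scan,
# plus drawing a random sample of records for manual accuracy review.
import random

def cmdb_verification(cmdb_hosts, scanned_hosts, sample_size=3, seed=42):
    """Compare the CMDB against live scan results and pick an audit sample."""
    missing = sorted(set(scanned_hosts) - set(cmdb_hosts))   # on network, not in CMDB
    stale = sorted(set(cmdb_hosts) - set(scanned_hosts))     # recorded, but no live host seen
    rng = random.Random(seed)  # fixed seed so the monthly audit sample is repeatable
    sample = rng.sample(sorted(cmdb_hosts), min(sample_size, len(cmdb_hosts)))
    return {"missing_from_cmdb": missing, "possibly_stale": stale, "audit_sample": sample}

report = cmdb_verification(
    cmdb_hosts=["app01", "app02", "db01", "old01"],
    scanned_hosts=["app01", "app02", "db01", "rogue01"],
)
print(report)
```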

Performance metrics:

  • Level of standardization or virtualization, level of currency/obsolescence
  • Level of customer impact availability, customer satisfaction with performance, amount of headroom to handle business growth
  • Administrators per server, cost per server, cost per business transaction
  • Server delivery time, man-hours required to deliver a server
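These performance metrics are simple derivations from the operational base numbers. A minimal sketch, with purely illustrative figures:

```python
# Hypothetical sketch of deriving the performance (tertiary) metrics above
# from operational base numbers; all figures are invented for illustration.
def performance_metrics(total_cost, server_count, admin_count, business_transactions):
    return {
        "cost_per_server": total_cost / server_count,
        "servers_per_admin": server_count / admin_count,
        "cost_per_transaction": total_cost / business_transactions,
    }

m = performance_metrics(
    total_cost=1_200_000,            # annual server cost in dollars (assumed)
    server_count=400,
    admin_count=8,
    business_transactions=60_000_000,
)
print(m)  # cost_per_server=3000.0, servers_per_admin=50.0, cost_per_transaction=0.02
```

This ordering also illustrates the dependency: you cannot derive cost per server until the base asset counts and overall costs are reliably collected.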

Obviously, if you are just setting out, you will collect only some of these metrics at first. As you incorporate their collection and automate the associated work and reporting, you can then tackle the additional metrics. And you will vary them according to the importance of different elements in your shop. If cost is critical, then reporting on cost and on efficiency plays such as virtualization will naturally be more important. If time to market or availability is critical, then those elements should receive greater focus. Below is a diagram that reflects the construct of the three types of metrics and their relationship to the different metrics areas and scorecards:

Metrics Framework

So, you have your metrics framework, what else is required to be successful leveraging the metrics?

First and foremost, the culture of your team must be open to alternate views and support healthy debate. Otherwise, no amount of data (metrics) or facts will enable the team to change direction from the party line. If you and your management team do not lead regular, fact-based discussions where course can be altered and different alternatives considered based on the facts and the results, you likely do not have the openness needed for this approach to be successful. Consider leading by example here and emphasize fact-based discussions and decisions.

Also, you must have defined processes that are generally adhered to. If your group’s work is heavily ad hoc and different each time, measuring what happened last time will not yield any benefits. If this is the case, you need to first focus on defining, even at a high level, the major IT processes and help your teams adopt them. Then you can proceed to metrics and the benefits they will accrue.

Accountability, sponsorship, and the willingness to invest in improvement activities are also key factors in the speed and scope of the improvements that can occur. As a leader you need to maintain personal engagement in the metrics reviews and scorecard results. They should feed into your team’s goals, and you should monitor the progress in key areas. Your sponsorship, and senior business sponsorship where appropriate, will be a major accelerator of progress. And hold teams accountable for their results and improvements within their domain.

How does this correlate with your experience with metrics? Any server managers out there who have suggestions on the server metrics? I expect we will have two further posts on metrics:

  • a post on how to evolve the metrics you measure as you increase the maturity and capability of your team,
  • and one on unit costing and allocations

I look forward to your suggestions.

Best, Jim

 

A Scientific Approach to IT Metrics

In order to achieve world class or first quartile performance, it is critical to take a ‘scientific’ approach to IT metrics. Many shops remain rooted in ‘craft’ approaches to IT, where techniques and processes are applied in an ad hoc manner to the work at hand and little is measured. Or a smattering of process improvement methodologies (such as Six Sigma or Lean) or development approaches (e.g., Agile) is applied indiscriminately across the organization. Frequently, due to the resulting lack of success, the process methods or metrics focus are then tarred as ineffective by managers.

Most organizations that I have seen that were mediocre performers typically have such craft or ad hoc approaches to their metrics and processes. And this includes not just the approach at the senior management level but at each of the 20 to 35 distinct functions that make up an IT shop (e.g., networking, mainframes, servers, desktops, service desk, middleware, etc., and each business-focused area of development and integration). In fact, you must address process and metrics at each distinct function level in order to then build a strong CIO-level process, governance, and metrics. And if you want to achieve first quartile or world-class performance, a scientific approach to metrics will make a major contribution. So let’s map out how to get to such an approach.

1) Evaluate your current metrics: You can pick several of the current functions you are responsible for and evaluate them to see where you are in your metrics approach and how to adjust to apply best practices. Take the following steps:

  • For each distinct function, identify the current metrics that are routinely used by the team to execute their work or make decisions.
  • Categorize these metrics as either operational metrics or reporting numbers. If they are not used by the team to do their daily work or they are not used routinely to make decisions on the work being done by the team, then these are reporting numbers. For example, they may be summary numbers reported to middle management or reported for audit or risk requirements or even for a legacy report that no one remembers why it is being produced.
  • Is a scorecard being produced for the function? An effective scorecard would have quantitative measures for the deliverables of the function as well as objective scores for function goals that have been properly cascaded from the overall IT goals.

2) Identify gaps with the current metrics: For each IT function there should be regular operational metrics for all key dimensions of delivery (quality, availability, cost, delivery against SLAs, schedule). Further, each area should have unit measures to enable an understanding of performance (e.g., unit cost, defects per unit, productivity, etc). As an example, the server team should have the following operational metrics:

    • all server asset inventory and demand volumes maintained and updated
    • operational metrics such as server availability, server configuration currency, server backups, server utilization should all be tracked
    • also time to deliver a server, total server costs, and delivery against performance and availability SLAs should be tracked
    • further, secondary or verifying metrics such as server change success, server obsolescence, servers with multiple backup failures, chronic SLA or availability misses, etc should be tracked as well
    • function performance metrics such as cost per server (by type of server), administrators per server, administrator hours to build a server, percent virtualized servers, and percent standardized servers should also be derived

3) Establish full coverage: By comparing the existing metrics against the full set of delivery goals, you can quickly establish the appropriate operational metrics along with appropriate verifying metrics. Where there are metrics missing that should be gathered, work with the function to incorporate the additional metrics into their daily operational work and processes. Take care to work from the base metrics up to more advanced:

    • start with base metrics such as asset inventories and staff numbers and overall costs before you move to unit costs and productivity and other derived metrics
    • ensure the metrics are gathered in as automated a fashion as possible and as an inherent part of the overall work (they should not be gathered by a separate team or subsequent to the work being done)

Ensure that verifying metrics are established for critical performance areas for the function as well. An example of this for the server function could be for the key activity of backups for a server:

    • the operational metric would perhaps be backups completed against backups scheduled
    • the verifying metric would be twofold:
      • any backups for a single server that fail twice in a row get an alert and an engineering review as to why they failed (typically, for a variety of reasons, 1% or fewer of your backups will fail, and this is reasonable operational performance; but if one server does not get a successful backup for many days, you are likely putting the firm at risk if there is a database or disk failure, thus the critical alert)
      • every month or quarter, 3 or more backups are selected at random, and the team ensures they can successfully recover from the backup files. This verifies that everything associated with the backup is actually working.
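A hypothetical sketch of these two verifying checks (consecutive-failure alerting and a random restore-test sample); the data shapes and the fixed sampling seed are assumptions for illustration:

```python
# Hypothetical sketch of the two backup verifying metrics described above.
import random

def consecutive_failure_alerts(history):
    """history maps server -> list of backup results (True = success),
    newest last. Alert when the last two attempts both failed."""
    return sorted(
        s for s, results in history.items()
        if len(results) >= 2 and not results[-1] and not results[-2]
    )

def restore_test_sample(servers, k=3, seed=7):
    """Pick k servers at random for a monthly/quarterly restore test."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible for audit
    return rng.sample(sorted(servers), min(k, len(servers)))

history = {
    "app01": [True, True, True],
    "db01":  [True, False, False],   # two failures in a row -> alert
    "web01": [False, True, True],    # recovered -> no alert
}
print(consecutive_failure_alerts(history))  # ['db01']
print(restore_test_sample(history.keys()))
```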

4) Collect the metrics only once: Often, teams collect similar metrics for different audiences. The metrics they use to monitor, for example, configuration currency or conformance to standards can be largely duplicated by risk data collected against security parameter settings or by executive management data on percent server virtualization. This is a waste of the operational team’s time and can lead to confusing reports where one view doesn’t match another. I recommend that you establish an overall metrics framework that includes risk and quality metrics as well as management and operational metrics so that all groups agree on the proper metrics. The metrics are then collected once, distilled and analyzed once, and congruent decisions can then be made by all groups. Later this week I will post a recommended metrics framework for a typical IT shop.

5) Drop the non-value numbers activity: For all those numbers that were identified as being gathered for middle management reports or for legacy reports with an uncertain audience: if there is no tie to a corporate or group goal, and the numbers are not being used by the function for operational purposes, I recommend you stop collecting the numbers and stop publishing any associated reports. It is non-value activity.

6) Use the metrics in regular review: At both the function team level and function management level, the metrics should be trended, analyzed, and discussed. These should be regular activities: monthly, weekly, and even daily depending on the metrics. The focus should be on how to improve and on whether, based on the trends, current actions, staffing, processes, etc are enabling the team to improve and be successful on all goals. A clear feedback loop should be in place to enable the team and management to identify corrective actions for issues apparent through the metrics, as quickly and as locally as possible. This gives control of the line to the team, and the end result is better solutions, better work, and better quality. This is what has been found in manufacturing time and again and is widely practiced by companies such as Toyota in their factories.

7) Summarize the metrics from across your functions into a scorecard: Ensure you identify the key metrics within each function and properly summarize and aggregate the metrics into an overall group scorecard. Obviously the scorecard should match your goals and the key services that you deliver. It may be appropriate to rotate in key metrics from a function based on visibility or significant change. For example, if you are looking to improve overall time to market (TTM) of your projects, it may be appropriate to report on server delivery time as a key subcomponent and, hopefully, a leading indicator of your improving TTM. Including key metrics from the various functions on your scorecard, even at a summarized level, will result in greater attention and pride being taken in the work, since there are very visible and direct consequences. I also recommend that on a quarterly basis you provide an assessment of the progress, and perhaps highlights of the team’s work, as reflected in the scorecard.

8) Drive better results through proactive planning: The team and function management, once the metrics and feedback loop are in place, will be able to drive better performance through ongoing improvement as part of their regular activity. Greater increases in performance may require broader analysis and senior management support. Senior management should hold proactive planning sessions with the function team to enable greater improvement to occur. The assignment for the team should be to take key metrics and determine what would be required to set them on a trajectory to a first quartile level in a certain time frame. For example, you may have both an overall cost reduction goal and, within the server function, a subgoal to achieve greater productivity (at a first quartile level) and reduce the need for additional staff. By asking the team to map out what is required and by holding a proactive planning session on some of the key metrics (e.g., productivity), you will often identify a path that meets the local objectives while also contributing to the global objectives. Here, in the server example, you may find that with a moderate investment in automation, productivity can be greatly improved and staff costs reduced substantially; thus both objectives could be attained by the investment. By holding such proactive sessions, where you ask the team to identify what would need to be done to achieve a trajectory on their key metrics while also considering the key goals and focus at the corporate or group level, you can often identify such doubly beneficial actions.

By taking these steps, you will employ a scientific approach to your metrics. If you add a degree of process definition and maturity, you will make significant strides toward controlling and improving your environment in a sustainable way. This will build momentum and enable your team to enter a virtuous cycle of improvement and better performance. And then, if you add to the mix process improvement techniques (in moderation and with the right technique for each process and group), you will accelerate your improvement and results.

But start with your metrics and take a scientific approach. In the next week, I will be providing metrics frameworks that have stood up well in large, complex shops, along with templates that should aid the understanding and application of the approach.

What metrics approaches have worked well for you? What keys would you add to this approach? What would you change?

Best, Jim

IT Service Desk: Building a Responsive and Effective Desk

This is the 4th in a series of posts on best practices in the IT Service Desk arena. To catch the previous material, you can check out the first post or you can read through the best practice reference pages on the IT Recipes site. To help you best use this site, please know that as material is covered in the posts, we subsequently use it to properly build an ongoing reference area that can be used when you encounter a particular issue or problem area and are looking for how to solve it. There’s a good base of material in the reference area on efficiency and cost cutting, project management, recruiting talent, benchmarking, and now service desk. If you have any feedback on how to improve the reference area structure, don’t hesitate to let us know. We will be delivering one more post on service desk after this one, and then I will be shifting to leadership techniques and building high performance teams.

One of the key challenges of the Service Desk is to respond to a customer transaction in a timely manner. Often, one of two situations occurs: either efficiency or budget restrictions result in lengthened service times and a poor perception of the service, or the focus is purely on speed to answer, so the initial interaction is positive but the end result is not highly effective. Meeting customer expectations on timeliness, being cost effective, and delivering real value is the optimal performance that is our target.

Further, this optimal performance must be delivered in a complex environment. Timeliness must be handled differently for each activity (for example, the service for a significant production incident or service loss is handled as a ‘live’ telephone call, whereas an order for new equipment would be primarily submitted via a web form). The demand for the services is often 24 hours a day and global, with multiple languages, and interaction occurs over phone, web chat, and intranet (and soon, mobile app interfaces). This optimal performance should have both the cost and the effectiveness of the service desk measured holistically; that is, all the costs to deliver a service should be totaled, including the end user and customer cost (e.g., wait time, restoration time, lost revenue opportunity, etc) and engineering time (e.g., time required to go back and gather data to deliver a service, or time avoided if a service is automated or handled by the service desk).
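To make the holistic costing concrete, here is a small illustrative calculation; the roles and hourly rates are assumptions, not benchmarks:

```python
# Hypothetical sketch of costing a service desk transaction holistically:
# desk handling cost plus user wait/restoration cost plus engineering
# follow-up. All rates are illustrative assumptions.
def holistic_cost_per_ticket(agent_minutes, agent_rate_hr,
                             user_wait_minutes, user_rate_hr,
                             engineering_minutes, engineer_rate_hr):
    return (agent_minutes / 60 * agent_rate_hr
            + user_wait_minutes / 60 * user_rate_hr
            + engineering_minutes / 60 * engineer_rate_hr)

# A ticket that looks cheap at the desk (10 agent-minutes at $30/hr) but
# burns 45 minutes of user time ($50/hr) and 30 of engineering ($60/hr):
cost = holistic_cost_per_ticket(10, 30, 45, 50, 30, 60)
print(round(cost, 2))
```

The desk-only view would price this ticket at $5; the holistic view is an order of magnitude higher, which changes which tickets are worth engineering away.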

A great Service Desk not only delivers the operational numbers, it ensures that the workload flowing through the process is ‘value add’ activity that is required and necessary. The Service Desk must ensure that it measures performance as a cost / benefit to the whole organisation and not just in isolation. Doing the ‘right thing’ may actually move the narrower operational Service Desk metrics in the wrong direction; yet at the enterprise level it remains the right thing to do.

Optimize your service desk by managing demand and improving productivity

There are two primary factors that drive your service desk cost and delivery:

  • the number of calls and in particular the peak volume of calls, and,
  • the cost base of your service desk (mostly people costs: salary, benefits, taxes, etc)

The volume and pattern of transaction demand is in turn the primary driver of the number of people required and is the key determinant of the overall cost base of the Service Desk. More specifically, the peak load of the Service Desk (the time at which call volumes are highest) is the time that drives your peak staffing volume and is therefore the most important target of demand reduction (i.e. the point that reductions in call / transaction volume are most likely to be realised as a financial cost saving or improved responsiveness to customers).

There are three key opportunities:

  • Manage the transaction volume
  • Manage the transaction pattern
  • Manage the transaction time

And in each opportunity area, we will look to apply best practices such that we improve the effectiveness of the service desk and IT experience overall.

Managing the Transaction Volume

Reducing the overall volume of transactions presented to the department reduces total workload. And while reducing the number of transactions is a good thing, these reductions may not be realised as cost savings or reduced customer wait times if they simply increase idle time during your quieter periods and do not reduce the peak load. The peak load is the point at which resourcing levels are at their highest and yet you are likely to have negative capacity (i.e. your customers will queue as you cannot resource fully to meet the peak). Eradicating demand even within troughs is valuable; however, the true value is in focusing on the peak. So start by identifying your key volume drivers and your peak load key volume drivers through statistical analysis. Pareto analysis will usually demonstrate that a significant proportion of your calls (80% or more) is driven by a fairly small number of call categories; sometimes the top 15 to 20 call types can account for as much as 80% of the total volume of calls. Then, for each call type impacting the peak, do the following analysis and actions:

  • Is it a chronic issue, meaning a repetitive problem that users experience (e.g. Citrix is down every Monday morning, or new users are issued the wrong configuration to properly access data)? If it is, then rank by frequency, missed SLAs, and total cost (e.g. 200 users a week losing 2 hours each to the issue is roughly a $32,000/month problem). Allocate the investment based on SLA criticality and ROI, and assign it to the appropriate engineering area to address, with signoff by the service desk required when completed.
  • Is it a navigation or training issue? Having significant numbers of users call the service desk as a last resort to find out how to do something is an indicator that your systems or your intranet is not user friendly, intuitive or well-designed. Use these calls to drive navigation and design improvements for your systems and your intranet in each release cycle. Look to make it a normal input to improve usability of your systems.
  • Is it that requests can only be handled manually? As I have mentioned in previous posts, often the internal systems for employees (e.g. HR, Finance and IT) are the least automated and have manual forms and workflow. Look to migrate as much as possible to a self-serve, highly automated workflow approach. This is particularly true for password administration. Unless you have full logical access automation, it is likely that User Administration and Password Management are key call drivers, particularly at peak as users arrive at work and attempt to log on to systems. Automation of password resets at an application / platform level is often achievable quickly and at a much lower cost than a fully integrated solution. Assign this to your engineering and development teams so you can make significant peak load demand reductions with little or no investment, with a corresponding user experience improvement.
  • Can you automate or standardize? If you cannot automate then look to standardise wherever possible. For example, have the Service Desk work in partnership with your IT Security group and ensure that you adopt a corporate standard construction for passwords and log on credentials. This will result in users being less likely to lock themselves out of their accounts and reduce the peak load. And ensure the standards don’t go overboard on security. I once had a group where the passwords were changed for everyone every month. The result was 20,000 additional calls to the service desk because people forgot their passwords, and lax security because nearly everyone else was writing down their passwords. We changed it back to quarterly and saved $400,000 a quarter in reduced calls and made the users happy (and improved security).
  • Can you eliminate calls due to misdirection? Identify failure demand: calls that are generated by weaknesses in the design or performance of support processes, including wrong department / misdirected calls (or IVR choices), use of the Service Desk as a ‘Directory Enquiries’ function, repeat calls, and chaser calls (i.e. where the customer hasn’t been able to provide all of the required details in one call or has had to chase because their expectations have not been met). Failure demand should be eradicated through the re-design of support processes / services to eliminate multiple steps and common defects, as well as improved customer communication and education.
  • Can you increase self service? Identify calls that could be resolved without the intervention of the Service Desk, i.e. through the use of alternative channels such as self service. Work with business lines and gain agreement to direct callers to the alternative (cheaper) channels. To encourage adoption, market the best channels and where necessary withdraw the services of the Service Desk to mandate the use of automation or self service solutions.
  • Is root cause addressed by the engineering teams? Undertake robust Problem Management processes and ensure that your engineering and application groups have clear accountabilities to resolve root cause issues and thus reduce the volume of calls into the Service Desk. A good way to secure buy in is to convert the call volumes into a financial figure and ensure the component management team has full awareness and responsibility to reduce these costs they are causing.
  • Can you streamline your online or intranet ticket creation and logging process? Organizations increasingly want to capture management information about technical faults that were fixed locally, and it is not uncommon for business lines to request that a ticket be logged just for management information purposes. Design your online ticket logging facility to be able to handle all of these transactions. Whilst such information is valuable, the Service Desk agent adds no value through their involvement in the transaction.
  • Do you have routine transactions whose entry can be easily automated? Consider reviewing operating procedures and identifying those transactions in which your agents undertake ‘check list’ style diagnosis before passing a ticket to resolver groups. In these instances, creating web forms (or templates within your Service Management toolset) enables the customer to answer the check list directly, raise their own ticket and then route it directly to support.
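The Pareto analysis suggested above for finding the top call types can be sketched as follows; the call categories and volumes are invented for illustration:

```python
# Hypothetical sketch of the Pareto analysis of call volumes: rank call
# categories by volume and find the smallest set covering ~80% of calls.
def pareto_cutoff(call_counts, threshold=0.80):
    """call_counts maps category -> call volume. Returns the top categories
    whose cumulative share first reaches the threshold."""
    total = sum(call_counts.values())
    ranked = sorted(call_counts.items(), key=lambda kv: kv[1], reverse=True)
    cumulative, top = 0, []
    for category, count in ranked:
        top.append(category)
        cumulative += count
        if cumulative / total >= threshold:
            break
    return top

calls = {"password reset": 400, "citrix down": 250, "printer": 150,
         "how-to": 100, "hardware order": 60, "other": 40}
print(pareto_cutoff(calls))  # the few categories driving 80% of volume
```

In this invented sample, three of six categories cover 80% of calls, which is where the per-call-type analysis above should concentrate first.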

Managing the Transaction Pattern

If workload cannot be eradicated (i.e. it is value-added work that must be done by the agent) then we next look to shift the work from peak to non-peak service times. Delivering service ‘at the point of peak customer demand’ is the most expensive way to deliver service as it increases the resource profile required and could build in high levels of latent non-productive time for your agents.

One technique to shift work from peak to non-peak is customer choice. Leverage your IVR or your online self service ticketing systems to enable the customer to choose a call back option at non-peak times if they call in at peak. Many customers would prefer to select a specific time of service with certainty versus waiting an indeterminate amount of time for an agent. They can structure their workday productively around the issue. But you must ensure your team reliably calls back at the specified time.

Customer education around busy and quiet periods to call, messages on your IVR and even limiting some non-essential services to only being available during low demand hours will all help to smooth workflow and reduce the peak load. Further, providing a real-time systems production status toolbar on the intranet will minimize repeat call-ins for the same incident or status query calls.

You can also smooth or shift calls due to system releases and upgrades. Ensure that your releases and rollouts are scheduled with peak times in mind. A major rollout of a new system at the end of the month on a Monday morning, when the service desk experiences a peak and there are other capacity stresses, is just not well-planned. Userids, passwords, and training should all be staged incrementally for a rollout to smooth demand and provide a better system introduction and resulting user experience. As a general rule, doing a pilot implementation to gauge true user interaction (and the resulting likely calls) is a good approach to identifying potential hotspots, and fixing them, before wide introduction.

Managing the Transaction Time

Transaction time can be improved in two ways:

  • improve the productivity and skill of the agent (or reduce the work to be done)
  • increase the resources to meet the demand thus reducing the wait time for the transactions.

Start by ensuring your hiring practices are bringing onboard agents with the right skills (technical, customer interface, problem-solving, and languages). Encourage agents to improve their skills through education and certification programs. Have your engineering teams provide periodic seminars on the systems of your firm and how they work. Ensure the service desk team is trained as part of every major new system release or upgrade. Implement a knowledge management system that is initially written jointly by the engineering team and the service desk team. Enable comments, updates, and hints to be added by your service desk agents. Ensure the taxonomy of the problem set is logical and easily navigated. And then ensure the knowledge base and operational documentation is updated for every major system release.

Another method to improve the productivity of your service desk is to capture the service and transaction data by the various service desk subteams (e.g., by region or shift). There will be variation across the subteams, and you can use these variations to pinpoint potential best practices. Identifying and implementing best practice across your desk should lead to a convergence over time of call duration to an optimal number. Measuring the mean and the standard deviation around it should demonstrate convergence over time if best practice is being embedded and used consistently across the workforce. Remember that just having the lowest service time per call may not be the optimal practice. Taking a bit longer on the call and delivering a higher rate of first call resolution is often the better path. Your agents with the longest call durations may be fixing more transactions; however, some of those calls may be too time-consuming and should have been time-shifted, since other customers are left to queue.
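As a sketch of this convergence check (the subteam names and call durations below are purely illustrative, not from any real desk), a few lines of Python can compute each subteam's mean call duration and the standard deviation around it:

```python
from statistics import mean, stdev

# Illustrative call durations in minutes, grouped by service desk subteam.
# Real data would come from your telephony / ACD reporting.
durations_by_team = {
    "EMEA-early": [3.1, 2.8, 3.4, 2.9, 3.2, 3.0],
    "EMEA-late":  [4.6, 5.1, 3.9, 4.8, 4.4, 5.0],
    "APAC":       [3.0, 3.3, 2.7, 3.1, 2.9, 3.2],
}

for team, durations in durations_by_team.items():
    m, s = mean(durations), stdev(durations)
    print(f"{team:10s} mean={m:.2f} min, stdev={s:.2f} min")

# A subteam whose mean sits well above the others (here, EMEA-late) is a
# candidate for best-practice review; a shrinking stdev over successive
# reporting periods suggests practices are converging across the workforce.
```

Tracked period over period, a falling spread with a stable (not merely minimal) mean is the signal that best practice is actually embedding.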

Managing transaction time has to be done very purposefully; otherwise quality is placed at risk. If agents believe they are under pressure to meet a certain transaction time, they will sacrifice quality to do so. This will result in re-work, reduced volumes of calls resolved at first point of contact, and reduced customer satisfaction, as customers will receive inconsistent and diminished service. Transaction time has to be managed as a secondary measure to quality and resolution rates to prevent this from occurring. There should never be a stated threshold at which a call has become too lengthy – each customer interaction has to be managed on its own merits, and only in the aggregate (say, a weekly or monthly average) can you fairly compare the delivery of your agents against each other.

Resource planning is the science of matching your supply of resources to the demand from the customer within a specified period of time (let’s say 20 seconds). Call Centres will usually manage their resource profile in 15 minute intervals. The mechanics of doing this are driven by probability – the probability of a call being presented within a 15 minute period (predicted using historical data gathered from your telephony) and the probability that an agent will become available within 20 seconds of the call being presented to answer that call. The ‘magic number’ of agents required in a 15 minute period is met when these probabilities are aligned so that you will meet the required level of service (e.g. 90% of calls will be answered in 20 seconds).

The volume of calls presented is one half of this equation; the frequency with which an agent becomes available is the other. Agents become available when they join the pool of active agents (i.e. when they sign in for a shift) or when they complete a call and are ready to receive the next call. The average transaction time (call length plus any additional time placing the agent in an unavailable state) determines how frequently they will become available in any given 15 minute period. A Call Centre with an average call duration of 2 ½ minutes will have each agent becoming available 6 times in a 15 minute period, whereas a call duration of 6 minutes will only have each agent becoming available 2 ½ times. The probability of an agent becoming available within the 20 second window in the first call centre is significantly higher, and its resource requirements will therefore be much lower than the second’s. The right number for your business is yours to determine. Then apply the best practice staffing approaches mentioned in our earlier posts: recruit a talented team in the right locations and look to leverage part-time staff to help fulfill peak demand.
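This probability arithmetic is classically captured by the Erlang C model used in workforce planning. The sketch below is a minimal, illustrative Python implementation (the call volume and handle time are assumed figures, not benchmarks): it estimates the service level for a given headcount and searches for the smallest agent count that answers 90% of calls within 20 seconds.

```python
import math

def erlang_c_wait_prob(agents: int, traffic: float) -> float:
    """Probability a call queues (Erlang C). Traffic is offered load
    in erlangs: arrival rate x average handle time."""
    if agents <= traffic:
        return 1.0  # unstable queue: effectively every call waits
    erlang_b = 1.0
    for n in range(1, agents + 1):  # iterative Erlang B recursion
        erlang_b = (traffic * erlang_b) / (n + traffic * erlang_b)
    return erlang_b / (1 - (traffic / agents) * (1 - erlang_b))

def service_level(agents, calls_per_15min, aht_seconds, answer_within_s=20):
    """Fraction of calls answered within answer_within_s seconds."""
    traffic = calls_per_15min * aht_seconds / 900.0  # erlangs per 15-min slot
    p_wait = erlang_c_wait_prob(agents, traffic)
    return 1 - p_wait * math.exp(-(agents - traffic) * answer_within_s / aht_seconds)

def agents_needed(calls_per_15min, aht_seconds, target=0.90, answer_within_s=20):
    """Smallest headcount meeting the service-level target."""
    n = 1
    while service_level(n, calls_per_15min, aht_seconds, answer_within_s) < target:
        n += 1
    return n

# Illustrative interval: 100 calls presented in 15 minutes, 150-second AHT.
print(agents_needed(100, 150))
```

Re-running `agents_needed` for each 15 minute interval of the day, using your telephony history as the forecast, yields the staffing profile the post describes; note how shortening the average handle time lowers the required headcount, which is the shorter-calls effect discussed above.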

Here are a few best practice techniques for you to consider in managing the Transaction Time:

  • When calculating transaction time, ensure that you include not only the length of active talk time but also any other factor that makes the agent unavailable for the next call to be presented (e.g. any rest time that you have built into the telephony, any ‘wrap up’ time that the agent can manually apply to block other calls being presented etc…).
  • Present calls direct to agent headsets (i.e. a beep in the ear) rather than have their phones ring and require manual answering.
  • Analyse the difference between your best- and worst-performing agents, determine what best practice is, and roll it out across the team. This may include everyone having access to the right applications on their desktop, having the right shortcuts loaded in their browsers, keeping high volume applications open rather than logging in when a relevant call is presented, and a thousand other nuances that all add up to valuable seconds on a call.
  • Do a key stroke by key stroke analysis of how your agents use the Service Management toolset. Manage the logical workflow of a ticket, automate fields where possible, build templates to assist quick information capture and ensure that there is no wasted effort (i.e. that every key stroke counts). Ensure that support groups are not inflating call duration by requesting fields that are re-keyed in tickets for their own convenience.
  • Invest in developing professional coaching skills for your Team Leaders (you may even want to consider dedicated performance coaches) and embed call duration as an important element in your quality management processes (focusing on the length of call being right and appropriate for the circumstances, not just short). Coach staff through direct observation and in-the-moment feedback.
  • Ensure that your performance metrics and rewards are aligned so that you reward quality delivery and your people have a clear understanding of the behaviours that you are driving. Ensure that performance is reviewed against the suite of measures (resolution, duration, quality sampling etc…) and not in isolation.
  • Build your checks and measures to keep the process honest. Measure and manage each element of the process to ensure that the numbers are not being manipulated by differences in agent behaviour. Run and check reports against short calls, agent log in / log out, abandoned calls and terminated calls. How agents use these statuses can fundamentally change their individual performance metrics and so it is the role of leaders to ensure that the playing field is level and that the process is not being subverted through negative behaviors.
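As one way to run such honesty checks (the field names and the 10% threshold below are assumptions to tune against your own ACD reporting, not industry standards), a simple exception report can flag agents whose short-call rate is an outlier:

```python
# Illustrative agent statistics: (calls handled, calls under 30 seconds,
# calls terminated by the agent). Real figures come from your ACD reports.
agent_stats = {
    "agent_a": (220, 6, 2),
    "agent_b": (240, 41, 9),   # suspiciously high short-call count
    "agent_c": (210, 5, 3),
}

SHORT_CALL_THRESHOLD = 0.10  # flag if more than 10% of calls are under 30s

def flag_outliers(stats):
    """Return (agent, short-call rate) pairs exceeding the threshold."""
    flagged = []
    for agent, (handled, short, terminated) in stats.items():
        short_rate = short / handled
        if short_rate > SHORT_CALL_THRESHOLD:
            flagged.append((agent, round(short_rate, 3)))
    return flagged

print(flag_outliers(agent_stats))
```

The same pattern extends to sign-in/sign-out gaps, abandoned calls, and terminated calls; the point is a level playing field, not automated discipline, so each flag should lead to a conversation rather than a conclusion.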

On a final note on resource planning, if you have more than one central desk, look to consolidate your service desks. If you have different desks for different technologies or business areas, consolidating them will invariably lower cost and actually improve service. The economies of scale in call centers are very material. Further, size and scale make it easier to run a Call Center that consistently delivers quality, call response times, and benchmarks favorable against the external market. Don’t let technology or business divisions sub-optimize this enterprise utility.

Effectively resourcing a Service Desk is about the application and manipulation of the laws of supply and demand. The Service Desk is not a passive victim of these forces, and great Service Desks will be heavily focused on maximising the productivity, efficiency and effectiveness of the supply of labour. They will equally be managing their demand profile to ensure that all work is required and value-adding, that workflow is managed to smooth demand away from peaks, and that customers’ needs are satisfied through the most effective and efficient channels to deliver exceptional customer service.

We look forward to your thoughts or additions to these best practices for service desk.

Best, Steve and Jim

Moving from IT Silos to IT Synergy

We will revisit the Service Desk best practices this weekend as we have several additions ready to go, but I wanted to cover how you, as an IT leader, can bring about much greater synergy within your IT organization. In many IT shops, including some that I found when I first arrived at a company, IT is ‘siloed’, or separated into different divisions, with each division typically supporting a different business line. Inevitably there are frustrations with this approach, and they are particularly acute when a customer is served by two or more lines of business. How should you approach this situation as a leader, and what are the best steps to take to improve IT’s delivery?

I think it is best first to understand the drivers for variation in IT organizations within large corporations. With that understanding we can then work out the best IT organizational and structural models to serve them. There are two primary sets of business drivers:

  • those drivers that require IT to be closer to the business unit such as:
    • improving market and business knowledge,
    • achieving faster time-to-market (TTM),
    • and the ability to be closer in step and under control of the business leads of each division
  • those drivers that require IT to be more consolidated such as:
    • achieving greater efficiencies,
    • attaining a more holistic view of the customers,
    • delivering greater consistency and quality
    • providing greater scale and lower comprehensive risk

So, with these legitimate pushes in two different directions, there is always a conflict in how IT should be organized. In some organizations, the history has been two or three years of being decentralized to enable IT to be more responsive to the business, and then, after costs are out of control or risk is problematic, a new CFO, COO, or CEO comes in and IT is re-centralized. This pendulum swing back and forth is not conducive to a high performance team, as IT senior staff either hunker down to avoid conflict or play politics to be on the next ‘winning’ side. Further, given that business organizations have a typical life span of 3 to 4 years (or less) before being re-organized again, corollary changes in IT to match the prevailing business organization then cause havoc with IT systems and structures that have 10 or 20 year life spans. Moreover, implementing a full business applications suite and supporting business processes takes at least 3 years for a decent-sized business division, so if organizations change in the interim, much valuable investment is lost.

So it is best to design an IT organization and systems approach that meets both sets of drivers and anticipates that business organizational change will happen. The best solution for meeting both sets of drivers is to organize IT as a ‘hybrid’ organization. In other words, some portions of IT should be organized to deliver scale, efficiency, and high quality, while others should be organized to deliver time to market, market feature, and innovation.

The functions that should be consolidated and organized centrally to deliver scale, efficiency and quality should include:

  • Infrastructure areas, especially networks, data centers, servers and storage
  • Information security
  • Field support
  • Service desk
  • IT operations and IT production processes and tools

These functions should then be run as a ‘utility’ for the corporation. There should be allocation mechanisms in place to ensure proper usage and adequate investment in these key foundational elements. Every major service the utility delivers should be benchmarked at least every 18 months against industry to ensure delivery is at top quartile levels and best practices are adopted. And the utility teams should be relentlessly focused on continuous improvement with strong quality and risk practices in place.

The functions that should be aligned and organized along business lines to deliver time to market, market feature and innovation should include:

  • Application development areas
  • Internet and mobile applications
  • Data marts, data reporting, and data analysis

These functions should be held accountable for high quality delivery. Effective release cycles should be in place to enable high utilization of the ‘build factory’ as well as a continuous cycle of feature delivery. These functions should be compared and marked against each other to ensure best practices are adopted and performance is improved.

And those functions which can be organized flexibly in either mode would be:

  • Database
  • Middleware
  • Testing
  • Applications Maintenance
  • Data Warehousing
  • Project Management
  • Architecture

For these functions that can be centralized or organized along business lines, it is possible to organize in a variety of ways. For example, systems integration testing could be centralized while unit and system testing are allocated by business line application team. Or, database could have physical design and administration centralized and logical design and administration allocated by application team. There are some critical elements that should be singular or consolidated, including:

  • if architecture is not centralized, there must be an architecture council reporting to the CIO with final design authority
  • there should be one set of project methodologies, tools, and process for all project managers
  • there should be one data architecture team
  • there should be one data warehouse physical design and infrastructure team

In essence, as the services are more commodity, or there is critical advantage to have a single solution (e.g. one view of the customer for the entire corporation) then you should establish a single team to be responsible for that service. And where you are looking for greater speed or better market knowledge, then organize IT into smaller teams closer to the business (but still with technical fiduciary accountabilities back to the CIO).

With this hybrid organization, as outlined in the diagram, you will be able to deliver the best of both worlds: outstanding utility services that provide cost and quality advantage, and business-aligned services that provide TTM, market feature, and innovation.

As CIO, you should look to optimize your organization using the ‘hybrid’ structure. If you are currently entirely siloed, then start the journey by making the business case for the most commodity of functions: networks, service desks, and data centers. It will be less costly, and there will be more capability and lower risk, if these are integrated. As you successfully complete integration of these areas, you can move up the chain to IT Operations, storage, and servers. As these areas are pooled and consolidated, you should be able to release excess capacity and overheads while providing more scale and better capabilities. Another place to start could be delivering a complete view of the customer across the corporation. This requires a single data architecture, good data stewardship by the business, and a consolidated data warehouse approach. Again, as functions are successfully consolidated, the next layer up can be addressed.

Similarly, if you are highly centralized, it will be difficult to maintain pace with multiple business units. It is often better to devolve some level of integration to achieve a faster TTM or better business unit support. Pilot an application or integration team in those business areas where innovation and TTM are most important or business complexity is highest. Maintain good oversight but also provide the freedom for the groups to perform in their new accountabilities.

And realize that you can dial up or down the level of integration within the hybrid model.  Obviously you would not want to decentralize commodity functions like the network or centralize all application work, but there is the opportunity to vary intermediate functions to meet the business needs. And by staying within the hybrid framework, you can deliver to the business needs without careening from a fully centralized to a decentralized model every three to five years and all the damage that such change causes to the IT team and systems.

There are additional variations that can be made to the model to accommodate organizational size and scope (e.g., global versus regional and local) that I have not covered here. What variations to the hybrid or other models have you used with success? Please do share.

Best, Jim

IT Service Desk: Delivering an Outstanding Customer Interface

This is the 3rd in our series on IT service desk best practices. As we mentioned in our previous post, the Service Desk is the primary daily interface with the IT customer. It is the front door into IT; however, the customer usually only comes knocking when something is already wrong. This means that from the outset, the service desk is often dealing with a customer who is frustrated and already having a sub-optimal experience of IT. How the service desk responds will largely determine not just the perception of the service desk but of IT as a whole. Turning the issue into a positive experience of IT can be done consistently and highly effectively if you have designed your support processes correctly and your agents are operating with the right attitude and the right customer service framework.

Business is complex, IT is complex, and the interface between the two (here, the service desk) is by definition also complex. Delivering great customer service, however, doesn’t have to be. We can distill the core requirements of your customer down to a small number of key behaviours that can be designed in to your services. Of course there is ‘the perfect world’, and some callers will expect this; however, most people will have a set of reasonable expectations in line with the decent ‘customer service’ transactions they undertake elsewhere. Their experience of call centers is likely to have been shaped by their dealings with retailers, utility companies (thank goodness) and airlines or holiday companies. Thus their consumer experience drives their expectations of the service desk, much as consumer technology is doing the same for other parts of corporate IT.

With this in mind, a service that is constructed with basic ‘good manners’ goes a long way to consistently delivering the fundamentals of great customer service. Just as we expect individuals to demonstrate good manners, we can expect the same of the services that we design. These good manners include the following characteristics:

  • Be Available – Be there to service customers at the point of demand, through an appropriate channel, within an acceptable timeframe and when they need your help
  • Be Capable – Ensure the customer has their need satisfied within an acceptable period of time (ideally but not necessarily within a single transaction at first contact).
  • Be Responsible – Take ownership and don’t expect the customer to navigate the IT organisation, do it on their behalf. Be the customer advocate and hold the rest of the IT organisation to account to deliver the customer promise.
  • Be Truthful – Set expectations and keep your promises. Don’t promise what you can’t deliver. Always deliver what you have promised (i.e. engineer arrival times, status updates and call backs etc…).
  • Be Proactive – Push relevant information out, don’t expect customers to have to come and get it. Ensure the right links are in place with Operations / Engineering so that the Service Desk has the right information to manage the customer expectation.
  • Be Respectful – Train your staff to put the customer at ease and empathise with them. Develop and train good questioning techniques and good listening skills. The customer should feel that they have sought ‘service’ and not ‘help’ (customers can feel patronised if they have had to seek help).
  • Be Respected – Train your staff to manage difficult calls and callers in high pressure situations. Have the procedures in place to escalate calls up your leadership structure quickly and efficiently. Staff will be more confident dealing with difficult calls when they know they are supported by their leadership. Always follow through on any abusive behaviour towards your staff, it is never acceptable and your team will appreciate it more than anything else that you can do for them. Remember your customers are also responsible citizens within the company community.
  • Be Prepared – Have customers’ details pre-populated; don’t make them repeat basic information each time they call. Look at their recent call history and not just what they are telling you today. Is there a bigger picture – what is their overall relationship with IT likely to be at the moment? Can the agent positively influence that relationship? Have as much knowledge as possible at the agent’s fingertips so they can solve issues the first time.
  • Be Focused – Understand the customers business and the pressures that they may be under due to IT issues. Focus on getting them working again (i.e. work the business requirements and not just the IT) and then go fix the background issues.
  • Be Flexible – Be responsive and flexible when impact or urgency requires it. Take individual circumstances into account and do the right thing by the customer, build effective ‘Service Exception’ processes (i.e. above and beyond any SLA that may be in place) so that your supply chain can respond when you need them to.

In essence, the service desk customer wants their issue resolved or requirement fulfilled in a timely manner and without exhaustive effort on their part. If this isn’t immediate, they require accurate information to plan their contingency and confidence that clear ownership will now drive fulfilment. They require confidence that promises will be kept and any significant changes communicated proactively. The customer expects a professional ‘customer service experience’ in line with or better than those they experience in their dealings with commercial suppliers. They expect to be treated courteously and professionally, and for their individual requirements to be recognised with respect, flexibility and responsiveness.

By executing on the foundational elements and techniques we mapped out in the previous post, you will be able to set this customer charter as a goal for nearly every call and be able to achieve it.

A Service Desk that is designed with ‘good manners’, executed by people with an understanding of and belief in those good manners will have laid solid foundations to consistently deliver exceptional customer service.

Best, Steve

Real Lessons of Innovation from Kodak

At Davos this past week, innovation was trumpeted as a necessity for business and a solution for economic ills. And in corporations around the world, business executives speak of the need for ‘innovation’ and ‘agility’ to win in the marketplace. Chalk that up to the Apple effect. With the latest demise of Kodak, preceded by Borders, Nokia, and Blockbuster, among others, some business leaders are racing to out-innovate and win in the marketplace. Unfortunately, most of these efforts cause more chaos and harm than good.

Let’s take Kodak. Here was a company that had been an innovator since 1888. Kodak’s labs are full of inventions and techniques. It has a patent portfolio worth an estimated $2.5B for the patents alone. The failure of Kodak was due to several causes, but it was not due to lack of innovation. Instead, as rightly pointed out by John Bussey at the WSJ, ‘it failed to monetize its most important asset – its inventions.’ It invented digital photography but never took an early or forceful position with products (though it is unlikely that even a strong position in that market would have contributed enough revenue, given digital cameras come for free on every smart phone today). The extremely lucrative film business paralyzed Kodak until it plunged into the wrong sector – the highly mature and competitive printing market.

So, it is all well and good to run around and innovate, but if you cannot monetize it, and worse, if it distracts you from the business at hand, then you will run your business into the ground. I think there are four patterns of companies that successfully innovate:

The Big Bet at the Top: One way that innovation can be successfully implemented is through the big bet at the top. In other words, either the owner or CEO decides the direction and goes ‘all-in’. This has happened time and again. For example, in the mid-80s, Intel made the shift from memory chips to microprocessors. This dramatic shift included large layoffs and shuttering plants in its California operations, but the shift was an easier decision by top management because the microprocessor marketplace was more lucrative than the memory semiconductor marketplace. Intel’s subsequent rapid improvement in processors and later branding proved out and further capitalized on this decision. And I think Apple is a perfect example of big bets by Steve Jobs. From the iPod and iTunes, to the iPhone, to the iPad, Apple made big bets on new consumer devices and experiences that were, at the time, derided by pundits and competitors (I particularly like this one rant by Steve Ballmer on the iPhone). Of course, after a few successes, the critics are hard to find now. These bets require prescient senior management with enough courage and independence to place them; otherwise, even when you have a George Fisher, as Kodak did, the bets are not placed correctly. I also commend bets where you get out of a sector that you know you cannot win in. An excellent example of this is IBM’s spinoff of its printer business in the ’90s and its sale of the PC business to Lenovo more recently. Both turned out to be well ahead of the market (just witness HP’s belated and poorly thought through PC exit attempt this past summer).

Innovating via Acquisition: Another effective strategy is to use acquisition as a weapon. But success comes to those who make multiple small acquisitions as opposed to one or two large acquisitions. Cisco and IBM come to mind with this approach. Cisco effectively branched out to new markets and extended its networking lead in the 1990s and early 2000s with this approach. IBM has greatly broadened and deepened its software and services portfolio in the past decade with it as well. Compaq and Digital, America Online and Time Warner, or perhaps more recently, HP’s acquisition of Autonomy, represent those companies that make a late, massive acquisition to try to stave off or shift their corporate course. These fare less well. In part, it is likely due to culture and vision. Small acquisitions, when care is taken by senior management to fully leverage the product, technology and talent of the takeover, can mesh well with the parent. A major acquisition can set off a clash of cultures, visions, and competing products that wastes internal energy and places the company further behind in the market. Hats off, though, to at least one major acquisition that completely changed the course of a company: Apple’s acquisition of NeXT. Of course, along with NeXT they also got its leader: Steve Jobs, and we all know what happened next to Apple.

Having a Separate Team: Another successful approach is to recognize that the reason a company does well is that it is focused on ensuring predictable delivery and quality to its customer base. And to do so, its operations, product and technology divisions all strive to deliver such value predictably. Innovation by its very nature is discontinuous and causes failure (good innovators require many failures for every success). By teaching the elephant to dance, all you do is ruin the landscape and the productive work that kept the company in business before it lost its edge. Instead, by setting up a separate team, as IBM has done for the past decade and others have done successfully, a company can be far more successful. The separate team will require sponsorship, and it must be recognized that the bulk of the organization will focus on the proper task of delivering to the customer as well as making incremental improvements. You could argue that Kodak’s focus of the bulk of its team on film was its downfall. But I would suggest instead it was the failure of the innovation teams to take what they already had in the lab and turn it into successful new products in the market.

A Culture of Tinkering: This approach relies on the culture and ingenuity of the team to foster an environment where outstanding delivery in the corporation’s competence area is done routinely, and time and resources are set aside to continuously improve and tinker with the products and capabilities. To have the time and space for teams to be engaged in such ‘tinkering’ requires that the company master the base disciplines of quality and operational excellence. I think you would find such companies in many fields, and it has enabled ongoing success and market advantage, in part because not only do they innovate, but they also out-execute. For example, FedEx, well-known for operational excellence, introduced package tracking in 1994, essentially exposing to customers what was a back end system. This product innovation has now become commonplace in the package and shipping industry. Similarly, 3M is well-known as an industry innovator, regularly deriving large amounts of revenue from products that did not exist for them even 5 years prior. But some of their greatest successes (e.g., Post-It Notes) did not come about from some corporate innovation session and project. Instead they came together over years as ideas and failures percolated in a culture of tinkering until finally the teams hit on the right combination for a successful product. And Google is probably the best technology company example, where everyone sets aside 20% of their time to ‘tinker’.

So what approach is best? Well, unless you have a Steve Jobs, or are a pioneering company in a new industry, making the big bet for an established corporation should be out. If your performance does not show outstanding excellence, and if your corporate culture does not encourage collegiality, support experimentation, and then leverage failure, then a tinkering approach will not work. So you are left with two options: make multiple small acquisitions in the areas of your product direction and, with effective corporate sponsorship, fold the new product sets and capabilities into your own; or set up a separate team to pursue the innovation areas. This team should brainstorm and create the initial products, test and refine them, and then, after a market pilot, have the primary production organization deliver them in volume (again with effective corporate sponsorship). Thus the elephant dances the steps it can do, and the mice do the work the elephant cannot.

As for our example, Kodak had only part of the tinkering formula. Kodak had the initial innovation and experimentation, but they were unable to take the failures and adjust their delivery to match what the market required for success. And they should have executed multiple smaller efforts across more diverse product sets (similar to what Fujifilm did) to find their new markets.

Have you been part of a successful innovation effort or culture? What approaches did you see being used effectively?

Best, Jim

IT Service Desk: Structure and Key Elements

As we mentioned in our first service desk post, the service desk is the critical central point where you interact daily with your customers. To deliver outstanding IT capabilities and service, you need to ensure your service desk performs at a high level. From ITIL and other industry resources you can obtain the outlines of a properly structured service desk, but their perspective places the service desk as an actor in production processes and does not necessarily yield insight into best practices and techniques that make a world class service desk. It is worthwhile, though, to start with ITIL as a base of knowledge on service management; here we will provide best practices to enable you to reach greater performance.

Foremost, of course, is that you understand the needs of the business and the levels of service required. We start with the assumption that you have outlined your business requirements and understand the primary services and levels of performance you must achieve. With this in hand, there are six areas of best practice that we think are required to achieve first-quartile or world-class performance.

Team and Location: First and foremost is the team and location. As the primary determinant of outstanding service is the quality of the personnel and adequate staffing of the center, how you recruit, staff and develop your team is critical. Further, if you locate the service desk where it is difficult to attract and retain the right caliber of staff, you will struggle to be successful. The service desk must be a consolidated entity; you cannot run a successful service desk as multiple small units scattered around your corporate footprint. You will be unable to invest in the needed call center technology or provide the career path to attract the right staff if it is highly dispersed. It is appropriate, and typically optimal, for a large organization to have two or three service desk sites in different time zones to optimize coverage (time of day) and languages.

Locate your service desk where there are strong engineering universities nearby that will provide an influx of entry-level staff eager to learn and develop. Given that staff cost will be the primary cost factor in your service, locate in lower-cost areas that have good language skills, access to those engineering universities, and appropriate time zones. For example, if you are in Europe, you should look to have one or two consolidated sites located in or just outside second-tier cities with strong universities: rather than Paris or London, base your service desk in or just outside Manchester, Budapest, or Vilnius. This will let you tap into a lower-cost yet high-quality labor market that is also likely to provide more part-time workers, which will help you cover peak call periods.

Knowledge Management and Training: Once you have your location and a good staff, you need to ensure you equip the staff with the tools and the knowledge to resolve issues. The key to a good service desk is to actually solve problems or provide services rather than just log them. It is far less costly to have a service desk member reset a password, correct a software configuration issue, or enable a software license than to log the user's name and issue or request and then pass it to a second-level engineering group. And it is far more satisfying for your user. So invest in excellent tools and training including:

  • Start with an easy and powerful service request system that is tied into your knowledge management system.
  • Invest in and leverage a knowledge management system that enables your service desk staff to quickly parse potential solution paths and apply them to the issue at hand.
  • Ensure that all new applications or major changes that go into production are accompanied by appropriate user and service desk documentation and training.
  • Have a training plan for your staff. Every service desk, no matter how large or small, should have a plan that trains agents to solve problems, use the tools, and understand the business better. We recommend a plan that provides 8 hours of training per agent per month. This continuous training keeps your organization more knowledgeable about how to solve problems and how incidents impact the businesses being supported.
  • Support external engineering training. We also recommend fully supporting external training and certification. When your service desk staff get that additional education or certification such as Windows or network certifications, your company now has a more capable service desk employee who could potentially (eventually) move into the junior engineering ranks. This migration can be both a benefit to your engineering team, and enable you to attract more qualified service desk staff because of the existence of such an upward career route.
  • Foster a positive customer service attitude and skills. Ensure your service desk team is fully trained in how to work with customers, who may arrive on the phone already frustrated. These important customer interface skills are powerful tools for them to deliver a positive experience. Give them the right attitude and vision (not just how to serve, but being a customer advocate with the rest of IT) as they are your daily connection with the customer.
  • Communicate your service desk vision and goals. This regular communication ties everything together and prevents the team from wandering in many directions.  Proper communication between operations, knowledge management, training, process and procedures ensures you focus on the right areas at the right time and it also ensures the team is always moving in the same direction, striving for the same goal of high performance at very competitive price points.

Modern infrastructure and production tools: The service desk is not a standalone entity. It must mesh cleanly with the production, change, asset, and delivery processes and functions within IT. It is best to have a single production system serving production, change, and incident management, leveraging a single configuration database. The service desk request toolset should be tightly integrated with this system (and there are some newer but very strong toolsets that deliver all aspects) so that all information is available at each interaction and the quality of the data can be maintained without multiple entries. As the service desk is really the interface between the customer and these IT processes, the cleaner and more direct the mesh, the better the customer experience and the engineering result. You can also use the service desk interaction with the customer to continually improve the quality of the data at hand. For example, when a customer calls in to order new software or reset a password, you can verify and update a few pieces of data, such as the location of their PC or their mobile device information. This enables better asset management and provides for improved future service. In addition to a well-integrated set of software tools and production processes, you should invest in a modern call center telephony capability with easy-to-use telephony menus. You should also offer internet and chat channels alongside traditional telephony, and exploit automated self-service interfaces as much as possible. This is the experience your users understand and leverage in their consumer interfaces, and it is what they expect from you. Measure your interfaces against the bar of ordering something from Amazon.

Establish and publish SLAs and a service catalogue: As part of providing an excellent service desk experience, you need to set users' expectations and provide an effective way to order IT services. It is important to define your services and publish SLAs for them (e.g., a new PC will be delivered in two days, or we answer 95% of all service desk calls within 45 seconds). When you define the services, ensure that you focus on holistic services or experiences rather than component pieces. For example, ordering a new PC for a new employee should be a single, clear service that includes everything needed to get started (IDs, passwords, software, setup and configuration, remote access capability, etc.), not a situation where the PC arrived in two days but it took the user another two months to discover, order, and implement everything else they needed. Think in terms of the target user result or experience. An analogy is the McDonald's value meal: as a consumer you do not order each individual french fry and pickle; you order a No. 3, and the drink, fries, and burger come together as a value pack. Make sure your service catalogue has 'value packs' and not individual fries.
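Once an SLA like "95% of calls answered within 45 seconds" is published, you need a routine way to check attainment against it. A minimal sketch in Python, assuming a hypothetical record of answer delays pulled from your call center telephony system:

```python
# Minimal sketch: check a published SLA ("95% of calls answered within
# 45 seconds") against raw call records. The data below is hypothetical;
# in practice it would come from your telephony platform's reporting feed.
answer_delays = [12, 30, 44, 50, 8, 61, 22, 40, 33, 45]  # seconds to answer

TARGET_PCT = 95.0        # published SLA target
THRESHOLD_SECONDS = 45   # published answer-time threshold

within = sum(1 for d in answer_delays if d <= THRESHOLD_SECONDS)
attainment = 100.0 * within / len(answer_delays)

status = "MET" if attainment >= TARGET_PCT else "MISSED"
print(f"Answered within {THRESHOLD_SECONDS}s: {attainment:.1f}% "
      f"(target {TARGET_PCT}%) -> {status}")
```

Publishing the attainment figure alongside the SLA, rather than the raw call counts, keeps the conversation with the business focused on the commitment you made.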

Mature leverage of metrics and feedback loops: With the elements above you will have a strong base for a service desk. To move it to outstanding performance, you must commit to continuous improvement. Use the metrics gathered by your service desk processes to surface key data that is actionable:

  • Chronic issues – use Pareto analysis to determine what the biggest issues are, and then use root cause analysis to identify how to stop them from occurring. The solutions will range from better user training to eliminating complex system faults within your applications. These remedies eliminate the call (and its cost) and remove ongoing problems that are sand in the gears of IT's relationship with its customers.
  • Self-service opportunities – again, Pareto analysis will show you which high-volume requests, if heavily automated and moved to self-service, can take significant work out of your IT shop while providing the customer with an interface they expect. This is not just password resets; it could be software downloads or access to particular blocked internet pages. Set up a lightweight workflow capability with proper management approvals to enable your users to serve themselves.
  • Poor service – use customer satisfaction surveys and traditional call center metrics to ensure your staff are delivering at a high level. Use the data to identify service problem areas and address them accordingly.
  • Emerging trends – your applications, your users, and your company's needs are dynamic. Use the incident and service request data to understand what is emerging as an issue or need. For example, increasing performance complaints about an application that has been stable could indicate growing business usage of a system on the edge of performance failure. Or increasing demand for a particular software package may indicate the need for a standardized rollout of a tool that is used more widely than before.
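The Pareto analysis behind the first two bullets is straightforward to run on a ticket export. A hedged sketch in Python, using a hypothetical set of ticket category labels (the categories and volumes are invented for illustration):

```python
from collections import Counter

# Hypothetical incident log: one category label per ticket. In practice this
# would be exported from your service request / incident system.
tickets = (["password reset"] * 40 + ["software install"] * 25 +
           ["network access"] * 15 + ["printer"] * 10 +
           ["hardware fault"] * 6 + ["other"] * 4)

counts = Counter(tickets)
total = sum(counts.values())

# Rank categories by volume and find the smallest set covering ~80% of calls.
cumulative = 0
pareto_set = []
for category, n in counts.most_common():
    cumulative += n
    pareto_set.append(category)
    if cumulative / total >= 0.80:
        break

print(f"{len(pareto_set)} of {len(counts)} categories drive "
      f"{100 * cumulative / total:.0f}% of volume: {pareto_set}")
```

The categories that surface at the top of the list are the candidates for root cause elimination or self-service automation; everything in the long tail can usually wait.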

Predictable IT delivery and positive cross engagement: The final element of an outstanding service desk and customer experience lies with the rest of the IT team. While the service desk can accomplish a great deal, it cannot deliver if the rest of IT does not provide solid, predictable service delivery. While that is quite obvious, you should use the service desk metrics on how well your IT team is delivering against requests not just to judge the service desk but also to identify engineering team delivery issues. Did you miss the desktop PC delivery because the service desk did not take down the right information, or because the desktop implementation team missed its SLA? Further, the engineering component teams should meet with the service desk team (at least quarterly) to ascertain what defects they are introducing, what volume issues are arising from their areas, and how they can be resolved. On a final note, you may find (as is often the case) that the longest delay to service delivery (e.g., that desktop PC) is obtaining either the user's business management approval or finance approval. With data from the metrics, you should be able to justify investing in a lightweight workflow system that obtains these approvals automatically (typically via an email/intranet combination) and reduces the unproductive effort your team spends chasing approvals.
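Attributing a missed delivery to the right stage, as described above, only requires milestone timestamps on each request. A minimal sketch, assuming hypothetical milestone names and timestamps for a single desktop PC request:

```python
from datetime import datetime

# Hedged sketch: hypothetical milestone timestamps for one service request
# (a desktop PC delivery), used to attribute delay to the right stage
# instead of blaming the service desk by default.
events = [
    ("request logged",   datetime(2024, 3, 1, 9, 0)),
    ("manager approved", datetime(2024, 3, 1, 11, 0)),
    ("finance approved", datetime(2024, 3, 6, 16, 0)),
    ("PC built",         datetime(2024, 3, 7, 10, 0)),
    ("PC delivered",     datetime(2024, 3, 8, 12, 0)),
]

# Time waited before each milestone = gap since the previous one, in hours.
waits = {
    events[i + 1][0]: (events[i + 1][1] - events[i][1]).total_seconds() / 3600
    for i in range(len(events) - 1)
}

bottleneck = max(waits, key=waits.get)
print(f"Slowest step: {bottleneck} ({waits[bottleneck]:.0f}h of wait)")
```

Aggregated across many requests, this kind of breakdown is exactly the data that justifies the lightweight approval workflow mentioned above: if approval waits dominate, the fix is not in the service desk or the build team.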

So, quite a few elements make up a successful service desk. One way to summarize them is to view the service desk as a sturdy three-legged stool. The seat is the service desk team. Knowledge management and training are one leg; processes and metrics, and the telephony infrastructure and tools, are the other two legs. The legs are made sturdier with effective communications and a supporting IT team.

Perhaps there are other elements or techniques that you would emphasize? Let us know; we look forward to your comments. Best, Jim, Bob, and Steve.

Our Additional Authors
About Bob Barnes: Bob has over 20 years of experience managing service desk and infrastructure teams. He has experience in the financial services, manufacturing, pharmaceutical, telecommunications, legal, and government sectors. He has spoken at many industry conferences, such as HDI, ICMI, and Pink Elephant. Bob has degrees in Information Systems and Business Management.
About Steve Wignall: Steve is an IT Service Management professional with significant
experience of leading large scale global IT Service Management functions in the Financial Services industry. Steve has contributed to defining the global industry standards for Service Desk quality as a former member of the Service Desk Institute Standards Committee. Steve led his Service Desk to be the first team globally to achieve the prestigious Service Desk Institute 4 Star Quality Certification, achieving an unparalleled 100% rating in all assessment categories and is a former winner of the SDI UK Service Desk Team of the Year.