IT Service Desk: Delivering an Outstanding Customer Interface

This is the 3rd in our series on IT service desk best practices. As we mentioned in our previous post, the Service Desk is the primary daily interface with the IT customer. It is the front door into IT; however, the customer usually only comes knocking when something is already wrong. This means that from the outset, the service desk is often dealing with a customer who is frustrated and already having a sub-optimal experience of IT. How the service desk responds will largely determine the perception not just of the service desk but of IT as a whole. Turning the issue into a positive experience of IT can be done consistently and effectively if you have designed your support processes correctly and your agents operate with the right attitude and the right customer service framework.

Business is complex, IT is complex, and the interface between the two (here, the service desk) is by definition also complex. Delivering great customer service, however, doesn’t have to be. We can distill the core requirements of your customer down to a small number of key behaviours that can be designed into your services. Of course there is ‘the perfect world’ and some callers will expect this, but most people will have a set of reasonable expectations in line with any decent ‘customer service’ transaction they undertake elsewhere. Their experience of call centers is likely to have been shaped by their dealings with retailers, utility companies (thank goodness) and airlines or holiday companies. Thus their consumer experience drives their expectations of the service desk, much as consumer technology is doing for other parts of corporate IT.

With this in mind, a service that is constructed with basic ‘good manners’ goes a long way to consistently delivering the fundamentals of great customer service. Just as we expect individuals to demonstrate good manners, we can expect the same of the services that we design. These good manners include the following characteristics:

  • Be Available – Be there to service customers at the point of demand, through an appropriate channel, within an acceptable timeframe and when they need your help
  • Be Capable – Ensure the customer has their need satisfied within an acceptable period of time (ideally but not necessarily within a single transaction at first contact).
  • Be Responsible – Take ownership and don’t expect the customer to navigate the IT organisation; do it on their behalf. Be the customer advocate and hold the rest of the IT organisation to account to deliver the customer promise.
  • Be Truthful – Set expectations and keep your promises. Don’t promise what you can’t deliver. Always deliver what you have promised (i.e. engineer arrival times, status updates and call backs etc…).
  • Be Proactive – Push relevant information out, don’t expect customers to have to come and get it. Ensure the right links are in place with Operations / Engineering so that the Service Desk has the right information to manage the customer expectation.
  • Be Respectful – Train your staff to put the customer at ease and empathise with them. Develop and train good questioning techniques and good listening skills. The customer should feel that they have sought ‘service’ and not ‘help’ (customers can feel patronised if they have had to seek help).
  • Be Respected – Train your staff to manage difficult calls and callers in high pressure situations. Have the procedures in place to escalate calls up your leadership structure quickly and efficiently. Staff will be more confident dealing with difficult calls when they know they are supported by their leadership. Always follow through on any abusive behaviour towards your staff; it is never acceptable, and your team will appreciate it more than anything else that you can do for them. Remember your customers are also responsible citizens within the company community.
  • Be Prepared – Have customers’ details pre-populated; don’t make them repeat basic information each time they call. Look at their recent call history and not just what they are telling you today. Is there a bigger picture – what is their overall relationship with IT likely to be at the moment? Can the agent positively influence that relationship? Have as much knowledge as possible at the agent’s fingertips so they can solve issues the first time.
  • Be Focused – Understand the customer’s business and the pressures that they may be under due to IT issues. Focus on getting them working again (i.e. work the business requirements and not just the IT) and then go fix the background issues.
  • Be Flexible – Be responsive and flexible when impact or urgency requires it. Take individual circumstances into account and do the right thing by the customer, build effective ‘Service Exception’ processes (i.e. above and beyond any SLA that may be in place) so that your supply chain can respond when you need them to.

In essence, the service desk customer wants their issue resolved or requirement fulfilled in a timely manner and without exhaustive effort on their part. If this isn’t immediate, they require accurate information to plan their contingency and confidence that clear ownership will now drive fulfilment. They require confidence that promises will be kept and any significant changes communicated proactively. The customer expects a professional ‘customer service experience’ in line with or better than those they experience in their dealings with commercial suppliers. They expect to be treated courteously and professionally, and for their individual requirements to be recognised, with respect, flexibility and responsiveness.

By executing on the foundational elements and techniques we mapped out in the previous post, you will be able to set this customer charter as a goal for nearly every call and be able to achieve it.

A Service Desk that is designed with ‘good manners’ and executed by people with an understanding of, and belief in, those good manners will have laid solid foundations to consistently deliver exceptional customer service.

Best, Steve

Real Lessons of Innovation from Kodak

At Davos this past week, innovation was trumpeted as a necessity for business and a solution for economic ills. And in corporations around the world, business executives speak of the need for ‘innovation’ and ‘agility’ to win in the marketplace. Chalk that up to the Apple effect. With the latest demise of Kodak, preceded by Borders, Nokia, and Blockbuster, among others, some business leaders are racing to out-innovate and win in the marketplace. Unfortunately, most of these efforts cause more chaos and harm than good.

Let’s take Kodak. Here was a company that had been an innovator since 1888. Kodak’s labs are full of inventions and techniques, and it has a patent portfolio worth an estimated $2.5B. The failure of Kodak was due to several causes, but it was not due to lack of innovation. Instead, as rightly pointed out by John Bussey at the WSJ, ‘it failed to monetize its most important asset – its inventions.’ It invented digital photography but never took an early or forceful position with a product (though it is unlikely that even a strong position in that market would have contributed enough revenue, given digital cameras come for free on every smartphone today). The extremely lucrative film business paralyzed Kodak until it plunged into the wrong sector – the highly mature and competitive printing market.

So, it is all well and good to run around and innovate, but if you cannot monetize it, and worse, if it distracts you from the business at hand, then you will run your business into the ground. I think there are four patterns of companies that successfully innovate:

The Big Bet at the Top: One way that innovation can be successfully implemented is through the big bet at the top. In other words, either the owner or CEO decides the direction and goes ‘all-in’. This has happened time and again. For example, in the mid-80s, Intel made the shift from memory chips to microprocessors. This dramatic shift included large layoffs and shuttering plants in its California operations, but the shift was an easier decision for top management because the microprocessor marketplace was more lucrative than the memory semiconductor marketplace. Intel’s subsequent rapid improvement in processors and later branding proved out and further capitalized on this decision. And I think Apple is a perfect example of big bets by Steve Jobs. From the iPod and iTunes, to the iPhone, to the iPad, Apple made big bets on new consumer devices and experiences that were, at the time, derided by pundits and competitors (I particularly like this one rant by Steve Ballmer on the iPhone). Of course, after a few successes, the critics are hard to find now. These bets require prescient senior management with enough courage and independence to place them; otherwise, even when you have a George Fisher, as Kodak did, the bets are not placed correctly. I also commend bets where you get out of a sector that you know you cannot win in. An excellent example of this is IBM’s spinoff of its printer business in the ’90s and its sale of the PC business to Lenovo more recently. Both turned out to be well ahead of the market (just witness HP’s belated and poorly thought through PC exit attempt this past summer).

Innovating via Acquisition: Another effective strategy is to use acquisition as a weapon. But success comes to those who make multiple small acquisitions as opposed to one or two large acquisitions. Cisco and IBM come to mind with this approach. Cisco effectively branched out to new markets and extended its networking lead in the 1990s and early 2000s with this approach. IBM has greatly broadened and deepened its software and services portfolio in the past decade with it as well. Compaq and Digital, America Online and Time Warner, or perhaps more recently, HP’s acquisition of Autonomy, represent those companies that make a late, massive acquisition to try to stave off decline or shift their corporate course. These fare less well. In part, it is likely due to culture and vision. Small acquisitions, when care is taken by senior management to fully leverage the product, technology and talent of the takeover, can mesh well with the parent. A major acquisition can set off a clash of cultures, visions, and competing products that wastes internal energy and places the company further behind in the market. Hats off to at least one major acquisition that completely changed the course of a company: Apple’s acquisition of Next. Of course, along with Next they also got the leader of Next: Steve Jobs, and we all know what happened next to Apple.

Having a Separate Team: Another successful approach is to recognize that the reason a company does well is because it is focused on ensuring predictable delivery and quality to its customer base. And to do so, its operations, product and technology divisions all strive to deliver such value predictably. Innovation by its very nature is discontinuous and causes failure (good innovators require many failures for every success). By teaching the elephant to dance, all you do is ruin the landscape and the productive work that kept the company in business before it lost its edge. Instead, by setting up a separate team, as IBM has done for the past decade and others have done successfully, a company can be far more successful. The separate team will require sponsorship, and it must be recognized that the bulk of the organization will focus on the proper task of delivering to the customer as well as making incremental improvements. You could argue that Kodak’s focus of the bulk of its team on film was its downfall. But I would suggest instead it was the failure of the innovation teams to take what they already had in the lab and turn it into successful new products in the market.

A Culture of Tinkering: This approach relies on the culture and ingenuity of the team to foster an environment where outstanding delivery in the corporation’s competence area is done routinely, and time and resources are set aside to continuously improve and tinker with the products and capabilities. To have the time and space for teams to be engaged in such ‘tinkering’ requires that the company master the base disciplines of quality and operational excellence. I think you would find such companies in many fields, and this approach has enabled ongoing success and market advantage, in part because not only do they innovate, but they also out-execute. For example, FedEx, well-known for operational excellence, introduced package tracking in 1994, essentially exposing to customers what was a back-end system. This product innovation has now become commonplace in the package and shipping industry. Similarly, 3M is well-known as an industry innovator, regularly deriving large amounts of revenue from products that did not exist for them even 5 years prior. But some of their greatest successes (e.g., Post-it Notes) did not come about from some corporate innovation session and project. Instead they came together over years as ideas and failures percolated in a culture of tinkering until finally the teams hit on the right combination for a successful product. And Google is probably the best technology company example, where everyone sets aside 20% of their time to ‘tinker’.

So what approach is best? Well, unless you have a Steve Jobs, or are a pioneering company in a new industry, making the big bet should be out for an established corporation. If your performance does not show outstanding excellence, and if your corporate culture does not encourage collegiality, support experimentation, and then leverage failure, then a tinkering approach will not work. So you are left with two options: make multiple small acquisitions in the areas of your product direction and, with effective corporate sponsorship, fold the new product set and capabilities into your own; or set up a separate team to pursue the innovation areas. This team should brainstorm and create the initial products, test and refine them, and then, after a market pilot, have the primary production organization deliver it in volume (again with effective corporate sponsorship). Thus the elephant dances the steps it can do and the mice do the work the elephant cannot do.

As for our example, Kodak only had part of the tinkering formula. Kodak had the initial innovation and experimentation, but it was unable to take the failures and adjust its delivery to match what was required in the market for success. And it should have executed multiple smaller efforts across more diverse product sets (similar to what Fujifilm did) to find its new markets.

Have you been part of a successful innovation effort or culture? What approaches did you see being used effectively?

Best, Jim

IT Service Desk: Structure and Key Elements

As we mentioned in our first service desk post, the service desk is the critical central point where you interact daily with your customers. To deliver outstanding IT capabilities and service, you need to ensure your service desk performs at a high level. From ITIL and other industry resources you can obtain the outlines of a properly structured service desk, but their perspective places the service desk as an actor in production processes and does not necessarily yield insight into the best practices and techniques that make a world class service desk. It is worthwhile, though, to ensure you start with ITIL as a base of knowledge on service management; here we will provide best practices to enable you to reach greater performance.

Foremost, of course, is that you understand the needs of the business and the levels of service required. We start with the assumption that you have outlined your business requirements and you understand the primary services and levels of performance you must achieve. With this in hand, there are six areas of best practice that we think are required to achieve 1st quartile or world-class performance.

Team and Location: First and foremost is the team and location. As the primary determinant of delivering outstanding service is the quality of the personnel and adequate staffing of the center, how you recruit, staff and develop your team is critical. Further, if you locate the service desk where it is difficult to attract and retain the right caliber of staff, you will struggle to be successful. The service desk must be a consolidated entity: you cannot run a successful service desk where there are multiple small units scattered around your corporate footprint. You will be unable to invest in the needed call center technology and provide the career path to attract the right staff if it is highly dispersed. It is appropriate, and typically optimal, for a large organization to have two or three service desks in different time zones to optimize coverage (time of day) and languages.

Locate your service desk where there are strong engineering universities nearby that will provide an influx of entry level staff eager to learn and develop. Given staff cost will be the primary cost factor in your service, ensure you locate in lower cost areas that have good language skills, access to the engineering universities, and appropriate time zones. For example, if you are in Europe, you should look to have one or two consolidated sites located in or just outside 2nd tier cities with strong universities. That is, do not locate in Paris or London; instead, base your service desk either in or just outside Manchester or Budapest or Vilnius. This will enable you to tap into a lower cost yet high quality labor market that is also likely to provide more part-time workers who will help you cover peak call periods.

Knowledge Management and Training: Once you have your location and a good staff, you need to ensure you equip the staff with the tools and the knowledge to resolve issues. The key to a good service desk is to actually solve problems or provide services instead of just logging them. It is far less costly to have a service desk member be able to reset a password, correct a software configuration issue, or enable a software license than for them to log the user name and issue or request and then pass it to a second level engineering group. And it is far more satisfying for your user. So invest in excellent tools and training including:

  • Start with an easy and powerful service request system that is tied into your knowledge management system.
  • Invest in and leverage a knowledge management system that will enable your service desk staff to quickly parse potential solution paths and apply them to the issue at hand.
  • Ensure that all new applications or major changes that go into production are accompanied by appropriate user and service desk documentation and training.
  • Have a training plan for your staff. Every service desk, no matter how large or small, should have a plan that trains agents to solve problems, use the tools and understand the business better. We recommend a plan that enables 8 hours of training per agent per month. This continuous training keeps your organization more knowledgeable about how to solve problems and how incidents impact the businesses it supports.
  • Support external engineering training. We also recommend fully supporting external training and certification. When your service desk staff get that additional education or certification such as Windows or network certifications, your company now has a more capable service desk employee who could potentially (eventually) move into the junior engineering ranks. This migration can be both a benefit to your engineering team, and enable you to attract more qualified service desk staff because of the existence of such an upward career route.
  • Foster a positive customer service attitude and skills. Ensure your service desk team is fully trained in how to work with customers, who may arrive on the phone already frustrated. These important customer interface skills are powerful tools for them to deliver a positive experience. Give them the right attitude and vision (not just how to serve, but being a customer advocate with the rest of IT) as they are your daily connection with the customer.
  • Communicate your service desk vision and goals. This regular communication ties everything together and prevents the team from wandering in many directions.  Proper communication between operations, knowledge management, training, process and procedures ensures you focus on the right areas at the right time and it also ensures the team is always moving in the same direction, striving for the same goal of high performance at very competitive price points.

Modern infrastructure and production tools: The service desk is not a standalone entity. It must mesh cleanly with the production, change, asset, and delivery processes and functions within IT. It is best to have a single production system serving production, change and incident management, leveraging a single configuration database. The service desk request toolset should also be tightly integrated with this system (and there are some newer but very strong toolsets that deliver all aspects) so that all information is available at each interaction and the quality of the data can be maintained without multiple entries. As the service desk is really the interface between the customer and these IT processes, the cleaner and more direct the service desk mesh, the better the customer experience and the engineering result. You can also use the service desk interaction with the customer to continually improve the quality of the data at hand. For example, when a customer calls in to order new software or reset a password, you can verify and update a few pieces of data such as the location of their PC or their mobile device information. This enables better asset management and provides for improved future service. In addition to a well-integrated set of software tools and production processes, you should invest in a modern call center telephony capability with easy-to-use telephony menus. You should also have internet and chat channels as well as traditional telephony, and exploit automated self-service interfaces as much as possible. This is the experience that your users understand and leverage in their consumer interfaces and what they expect from you. You should measure your interfaces against the bar of ordering something from Amazon.

Establish and publish SLAs and a service catalogue: As part of the equation of providing an excellent service desk experience, you need to set users’ expectations and provide an effective way to order IT services. It is important to define your services and publish SLAs for them (e.g., a new PC will be delivered in two days, or we answer 95% of all service desk calls within 45 seconds). When you define the services, ensure that you focus on holistic services or experiences rather than component pieces. For example, ordering a new PC for a new employee should be a clear service that includes everything needed to get started (ids, passwords, software, setup and configuration, remote access capability, etc.), not a situation where the PC arrives in two days but it takes the user another two months to get everything else they need discovered, ordered, and implemented. Think instead of the target user result or experience. An analogy would be the McDonald’s value meal: as a consumer you do not order each individual french fry and pickle; you order a No. 3, and the drink, fries and the rest come together in a value pack. Make sure your service catalogue has ‘value packs’ and not individual fries.
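
As a rough sketch of the ‘value pack’ idea, here is how a single catalogue entry might look if you captured it as structured data. This is purely illustrative; the service name, components and SLA below are assumptions, not a prescribed schema:

```python
# Illustrative catalogue entry: the orderable unit is the whole outcome,
# not the individual components behind it.
new_starter_workstation = {
    "service": "New starter workstation",
    "sla_business_days": 2,  # the single published promise for the whole bundle
    "includes": [
        "PC delivered, imaged and configured",
        "network and email ids created",
        "standard software bundle installed",
        "remote access / VPN enabled",
    ],
    "orderable_by": ["hiring manager", "HR onboarding team"],
}

# The customer orders one service; delivering every component against the one
# SLA above is IT's problem, not the customer's.
print(f"{new_starter_workstation['service']}: "
      f"{len(new_starter_workstation['includes'])} components, "
      f"{new_starter_workstation['sla_business_days']}-day SLA")
```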

Mature leverage of metrics and feedback loops: With the elements above you will have a strong base for a service desk. To move it to outstanding performance, you must leverage the track of continuous improvement. Use the metrics gathered by your service desk processes to track key data that is actionable:

  • Chronic issues – use Pareto analysis to determine what the biggest issues are and then use root cause analysis to identify how to stop the issues from occurring (a minimal analysis sketch follows this list). The solutions will range from better user training to eliminating complex system faults within your applications. But these remedies will eliminate the call (and cost) and remove ongoing problems that are sand in the gears of IT’s relationship with its customers.
  • Self-Service opportunities – again, Pareto analysis will show you which high-volume requests, if heavily automated and moved to self service, can take significant work out of your IT shop and provide the customer with an interface they expect. This is not just password resets; it could be software downloads or the ability to access particular blocked pages on the internet. Set up a lightweight workflow capability with proper management approvals to enable your users to self serve.
  • Poor Service – use customer satisfaction surveys and traditional call center metrics to ensure your staff are delivering to a high level. Use the data to identify service problem areas and address them accordingly.
  • Emerging trends – Your applications, your users, and your company’s needs are dynamic. Use the incident and service request data to understand what is emerging as an issue or need. For example, increasing performance complaint calls on an application that has been stable could indicate a trend of increasing business usage of a system that is on the edge of performance failure. Or increasing demand for a particular software package may indicate a need to do a standardized rollout of a tool that is used more widely than before.
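
To make the Pareto step concrete, here is a minimal sketch in Python. The driver categories and volumes are hypothetical, and the roughly 80% cumulative cut-off is just the usual rule of thumb, not a hard rule:

```python
from collections import Counter

# Hypothetical month of service desk tickets, tagged by driver category.
tickets = (
    ["password reset"] * 420
    + ["VPN connectivity"] * 180
    + ["printer"] * 120
    + ["email client"] * 90
    + ["software install request"] * 75
    + ["application error"] * 60
    + ["other"] * 55
)

counts = Counter(tickets)
total = sum(counts.values())

# Rank categories by volume and report each one's share and cumulative share --
# the classic Pareto view of "which few drivers cause most of the calls".
cumulative = 0
print(f"{'Driver':<26}{'Calls':>7}{'Share':>8}{'Cum.':>8}")
for driver, n in counts.most_common():
    cumulative += n
    print(f"{driver:<26}{n:>7}{n / total:>8.1%}{cumulative / total:>8.1%}")

# Categories inside roughly the first 80% of cumulative volume are the prime
# candidates for root cause elimination or self-service automation.
```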

Predictable IT delivery and positive cross engagement: The final element in ensuring an outstanding service desk and customer experience lies with the rest of the IT team. While the service desk can accomplish a great deal, it cannot deliver if the rest of IT does not provide solid, predictable service delivery. While that is quite obvious, you should use the service desk metrics on how well your IT team is delivering against requests not just to judge the service desk but also to identify engineering team delivery issues. Did you miss the desktop PC delivery because the service desk did not take down the right information or because the desktop implementation team missed its SLA? Further, the engineering component teams should be meeting with the service desk team (at least quarterly) to ascertain what defects they are introducing, what volume issues are arising from their areas, and how they can be resolved. On a final note, you may find (as is often the case) that the longest delay to service delivery (e.g. that desktop PC) is obtaining either the user’s business management approval or finance approval. With data from the metrics, you should be able to justify and invest in a lightweight workflow system that obtains these approvals automatically (typically via an email/intranet combination) and reduces the unproductive effort of chasing approvals by your team.
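
As a very small sketch of what that lightweight approval workflow could look like (the class, field names and two-day chase window below are assumptions for illustration, not a reference to any particular workflow product):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ApprovalRequest:
    """A service request parked until the named approver responds via an email/intranet link."""
    item: str
    requester: str
    approver: str
    raised_at: datetime = field(default_factory=datetime.utcnow)
    status: str = "pending"  # pending -> approved / rejected

    def record_decision(self, approved: bool) -> None:
        # Called when the approver clicks the approve/reject link.
        self.status = "approved" if approved else "rejected"

    def needs_chasing(self, max_wait: timedelta = timedelta(days=2)) -> bool:
        # If the approver has not responded within the window, the system chases
        # or escalates automatically instead of the service desk doing it by hand.
        return self.status == "pending" and datetime.utcnow() - self.raised_at > max_wait

req = ApprovalRequest("Desktop PC for new starter", "j.smith", "a.jones")
req.record_decision(approved=True)
print(req.status)  # -> approved
```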

So, quite a few elements make up a successful service desk. Perhaps one way to summarize them is to view the service desk as a sturdy three-legged stool. The seat is the service desk team. Knowledge management and training are one leg; processes and metrics are the second; and the telephony infrastructure and tools are the third. The legs are made sturdier with effective communications and a supporting IT team.

Perhaps there are other elements or techniques that you would emphasize? Let us know; we look forward to your comments. Best, Jim, Bob, and Steve.

Our Additional Authors
About Bob Barnes: Bob has over 20 years of experience managing Service Desk and Infrastructure teams.  He has experience in the financial service industry, manufacturing, pharmaceutical, telecommunication, legal and Government.  He has spoken at many industry conferences such as HDI, ICMI and Pink Elephant.  Bob has degrees in Information Systems and Business Management.
About Steve Wignall: Steve is an IT Service Management professional with significant experience of leading large scale global IT Service Management functions in the Financial Services industry. Steve has contributed to defining the global industry standards for Service Desk quality as a former member of the Service Desk Institute Standards Committee. Steve led his Service Desk to be the first team globally to achieve the prestigious Service Desk Institute 4 Star Quality Certification, achieving an unparalleled 100% rating in all assessment categories, and is a former winner of the SDI UK Service Desk Team of the Year.

When to Benchmark and How to Leverage the Results

Benchmarking is an important tool for management and yet frequently I have found most organizations do not take advantage of it. There seem to be three camps when it comes to benchmarking:

  • those who, either for lack of time or through some other rationale, avoid tapping into benchmarking
  • those who try to use it infrequently but, for a variety of reasons, are never able to leverage it and progress from its use
  • those who use benchmarking regularly and make material improvement

I find it surprising that so many IT shops don’t benchmark. I have always felt that there were only two possible outcomes from doing benchmarking:

  1. You benchmark and you find out that you compare very well with the rest of the benchmark population, and you can now use this data as part of your report to your boss and the business to let them know the value your team is delivering.
  2. You benchmark and you find out a bunch of things that you can improve. And you now have the specifics as to where and likely how to improve.

So, with the possibility of good results with either outcome, when and how should you benchmark your IT shop?

I recommend making sure that your team and the areas that you want to benchmark have an adequate maturity level in terms of defined processes, operational metrics, and cost accounting. Usually, to take advantage of a benchmark, you should be at a minimum a Level 2 shop and preferably a Level 2+ shop where you have well understood unit costs and unit productivity. If you are not at this level, then in order to compare your organization to others, you will need to first gather an accurate inventory of assets, staff, and time spent by activity (e.g., run versus change). This data should be supplemented with defined processes and standards for the activity to be compared. And then, for a thorough benchmark, you will need data on the quality of the activity and preferably six months of trending for all data. In essence, these are prerequisite activities that must take place before you benchmark.

I do think that many of the teams that try to benchmark but are not then able to do much with the results are unable to progress because:

  • they do not have the consistent processes on which improvements and changes can be implemented
  • they do not routinely collect the base data and thus once the benchmark is over, no further data is collected to understand if any of the improvements had effect or not
  • the lack of data and standards results in so much estimation for the benchmark that you cannot then use it to pinpoint the issues

So, rather than benchmark when you are a Level 1 or 2- shop, just work on the basics of improving the maturity of your activities. For example, collect and maintain accurate asset data – this is foundational to any benchmarking. Similarly, collect how your resources spend their time – this is required anyway to estimate or allocate costs to whoever drives them, so do it accurately. And implement process definitions and base operational metrics, and have the team review and publish them monthly.

For example, let’s take the Unix server area. If we are looking to benchmark we would want to check various attributes against the industry including:

  • number of servers (versus similar size firms in our industry)
  • percent virtualized
  • unit cost (per server)
  • unit productivity (servers per admin)
  • cost by server by category (e.g. staff, hardware, software, power/cooling/data center, maintenance)

By having this information you can quickly identify where you have inefficiencies or you are paying too much (e.g., the cost of your software per server) or you are unproductive (your servers per admin is low versus the industry). This then allows you to draw up effective action plans because you are addressing problems that can likely be solved (you are looking to bring your capabilities up to what is already best practice in the industry).
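
To make that comparison concrete, here is a minimal sketch of laying your own Unix server numbers alongside the industry reference points. Every figure below is a hypothetical placeholder, not real benchmark data; substitute your own inventory, cost and provider figures:

```python
# Our (hypothetical) Unix server estate.
ours = {
    "servers": 800,
    "admins": 16,
    "annual_cost": 4_800_000,  # all-in: staff, hardware, software, power/cooling, maintenance
    "pct_virtualized": 0.45,
}

# Industry reference points from the (hypothetical) benchmark provider.
industry = {
    "servers_per_admin": 65,
    "cost_per_server": 5_200,
    "pct_virtualized": 0.60,
}

servers_per_admin = ours["servers"] / ours["admins"]
cost_per_server = ours["annual_cost"] / ours["servers"]

print(f"Servers per admin: {servers_per_admin:.0f} vs industry {industry['servers_per_admin']}")
print(f"Cost per server:   ${cost_per_server:,.0f} vs industry ${industry['cost_per_server']:,}")
print(f"Virtualized:       {ours['pct_virtualized']:.0%} vs industry {industry['pct_virtualized']:.0%}")

# Each gap (low servers per admin, high unit cost, low virtualization) points to a
# specific, solvable improvement target rather than a vague 'do better' goal.
```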

I recall a benchmark of the Unix server area where our staff costs per server were out of line with the industry though we had relatively strong productivity. Upon further investigation we realized we had mostly a very senior workforce (thus being paid the top end of the scale and very capable) that had held on to even the most junior tasks. So we set about improving this in several ways:

  • we packaged up and moved the routine administrative work to IT operations (who did it for far less and in a much more automated manner)
  • we put in place a college graduate program and shifted the hiring profile in this group from mid- and senior-level hires only to one of mostly graduates, some junior and only a very few mid-level engineers.
  • we also put in place better tools for the work to be done so staff could be more productive (more servers per admin)

The end result after about 12 months was a staff cost per server that was significantly below the industry median, approaching best in class (and thus we could reduce our unit allocations to the business). Even better, with a more balanced workforce (i.e., not all senior staff) we ended up with a happier team because the senior engineers were now looked up to and could mentor the new junior staff. Moreover, the team now could spend more of their time on complex change and project support rather than just run activities. And with the improved tools making everyone more productive, it resulted in a very engaged team that regularly delivered outstanding results.

I am certain that some of this would have been evident with just a review. But by benchmarking, not only were we able to precisely identify where we had opportunity, we were also better able to diagnose and prescribe the right course of action, with targets that we knew were doable.

Positive outcomes like this are the rule when you benchmark. I recommend that you conduct a yearly external benchmark for each major component area of infrastructure and IT operations (e.g. storage, mainframe, server, network, etc). And at least every two years, assess IT overall, assess your Information Security function, and if possible, benchmark your development areas and systems in business terms (e.g., cost per account, cost per transaction).

One possibility in the development area is that since most IT shops have multiple sub-teams within development, you can use the operational development metrics to compare them against each other (e.g. defect rates, cost per man hour or cost per function, etc). Then the sub-team with the best metrics can share their approach so all can improve.

If you conduct regular exercises like this, and combine it with a culture of relentless improvement, you will find you achieve a flywheel effect, where each gain of knowledge and each improvement becomes more additive and more synergistic for your team. You will reduce costs faster and improve productivity faster than taking a less open and less scientific approach.

Have you seen such accelerated results? What were the elements that contributed to that progress? I look forward to hearing from you.

Best, Jim


IT Service Desk best practices

An important interface for your internal customers is your IT service desk. Unfortunately, in many situations the service desk (or help desk) does not use up-to-date practices and can be a backwater of capability. This can result in a very poor reputation for IT because the service desk is the primary customer interface with the IT organization. I recall starting at a company tasked with turning around the IT organization. When I asked about the IT help desk, the customer turned to me and said ‘You mean the IT helpless desk?’ With a reputation that poor with our customers, I immediately set out to turn around our service desk and supporting areas.

The IT Service Desk may seem quite straightforward to address — maybe the thought is that all you really need to do is have one number, staff it, be courteous and try hard.  This isn’t the case and there are some clear best practice techniques and approaches that will enable you to deliver consistent, positive interactions with your customers as well as enable greater productivity and lower cost for the broader IT team.

For our discussion, I have two esteemed former colleagues who have run top notch service desks and who will be authoring material on best practices and how to deliver an outstanding customer experience through this critical interface. Both Steve Wignall and Bob Barnes have run world class service desks at large financial services companies. And I think you will find the guidance and techniques they provide here today and in subsequent posts to be a surefire way to transform your ‘helpless’ desk into a best in class service desk.

First, let’s recap why the service desk is such an important area for IT:

  • the service desk is the starting point for many key processes and services for IT
  • well-constructed, the service desk can handle much of the routine work of IT, enabling engineering and other teams to do higher value work
  • it is your primary interface with the customer, where you can gauge the pulse of your users, and make the biggest daily impact on your reputation
  • with the right data and tools, the service desk can identify and correct problem outbreaks early, thereby reducing customer impacts and lowering overall support costs.

And yet, despite its importance to IT, too often Service Desks are chronic under-performers due to the following issues:

  • poor processes or widespread lack of adherence to them
  • the absence, or low quality application, of a scientific and metric based management approach
  • lousy handoffs and poor delivery by the rest of IT
  • inadequate resources and recruiting, worsened by weak staff development and team-building
  • weak sponsorship by senior IT leaders
  • ineffective service desk leadership

Before we get into the detailed posts on service desk best practices, here are a few items to identify where your team is, and a few things to get started on:

1. What is your first call resolution rate? If it is below 70%, then there is work to do. If it is above 70% but primarily because of password resets, then put in a decent self-serve password reset and re-evaluate.

2. What is the experience of your customers? Are you doing a monthly or quarterly survey to track satisfaction with the service? If not, get going and implement. I recommend a 7 point scale and hold yourself to the same customer satisfaction bar your company is driving for with its external customers.

3. What are the primary drivers of calls? Are they systems issues? Are they due to user training or system complexity issues? Workstation problems? Are calls being made to report availability or general systems performance issues? If you know clearly (e.g., via Pareto analysis) what is driving your calls, then you have the metrics in place (if not, get the metrics – and process if necessary – in place). Once you have the metrics you can begin to sort out the causes and tackle them in turn. What is your team doing with the metrics? Are they being used to identify the causes of calls so you can then go about eliminating the calls in the first place?

For example, if the leading drivers of calls are training and complexity, is your team reworking the system so it is more intuitive or improving the training material? If the drivers are workstation issues, do you know which component and which model, and are you now figuring out what proactive repair or replacement program will reduce these calls? Remember, each call you eliminate probably saves your company at least $40 (mostly by eliminating the downtime of the caller); a rough savings sketch follows this list.

4. Do you and your senior staff meet with the service desk team regularly and review their performance and metrics? Do senior IT leaders sponsor major efforts to eliminate the source of calls? Does the service desk feature in your report to customers?

5. If it is a third party provider, have you visited their site lately? Does your service area have the look and feel and knowledge of your company so they can convey your brand? And hold them to the same high performance bar as you would your own team.
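
To put rough numbers behind item 3, here is a minimal back-of-the-envelope sketch you can run once the call-driver metrics are in place. The volumes, elimination rates and the $40-per-call figure are assumptions to replace with your own data:

```python
# Hypothetical monthly call volumes by driver, plus the share of each driver
# you believe you can eliminate through self-service, training or proactive fixes.
monthly_calls = {
    "password reset": 900,
    "application training": 400,
    "workstation faults": 300,
    "access requests": 250,
}
eliminable_share = {
    "password reset": 0.80,        # self-service reset portal
    "application training": 0.40,  # better training material / UI fixes
    "workstation faults": 0.30,    # proactive repair or replacement program
    "access requests": 0.50,       # lightweight approval workflow
}
COST_PER_CALL = 40  # rough all-in cost of a call, mostly caller downtime

monthly_savings = sum(
    monthly_calls[d] * eliminable_share[d] * COST_PER_CALL for d in monthly_calls
)
print(f"Estimated avoidable cost: ${monthly_savings:,.0f}/month "
      f"(${monthly_savings * 12:,.0f}/year)")
```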

Use these five items for a quick triage of your service desk and our next posts will cover the best practices and techniques to build a world-class service desk. Later this week, Bob and Steve will cover the structure and key elements of a world-class service desk and how to go about transforming your current desk or building a great one from scratch. Steve will also cover the customer charter and its importance to maintaining strong performance and meeting expectations.

I look forward to your thoughts and experiences in this area. And perhaps you have a service desk that could use some help to turn around IT’s and your reputation.

Best, Jim


So far, so good

It has only been a few weeks into the new year but I think we are off to a good start here at recipeforIT.com. Given the significant number of new readers, I thought I would touch again on the key goals for this site and also give you a preview of upcoming posts and pages.

As many of you know, delivering IT today, with all of the cost reduction demands, the rapidly changing technology, the impact of IT consumerization, and security and risk demands, is, simply put, tough work. It is complicated and hard work to get the complex IT mechanism, with all the usual legacy systems issues, to perform as well as the business requires. So to help those IT leaders out, I have started this site to provide recipes and ideas on how to tackle the tough but solvable problems they face. And in addition to providing best practices, we will give you a solid and sensible perspective on the latest trends and technologies.

So what is upcoming? First, I have two esteemed former colleagues who have run top notch service desks that will be authoring material on best practices and how to deliver an outstanding customer experience through this critical interface. Look for their posts on the ‘face of IT’ over the next several weeks. Second, I want to touch on innovation and the increasingly common business fascination with innovation as a solution to all manner of business ills. I hope to also explore leadership further in February. And lastly, I have a number of incremental but hopefully material improvements to the site pages that will provide further details on best practices in a comprehensive manner.

I think it is a good lineup for the next month! I continue to receive strong feedback from many of you on the usefulness and actionability of the material. I will definitely work to ensure we maintain that relevance.

Don’t forget you can subscribe to the site so you get an email when there’s a new post (subscribing is on the rightmost bar, halfway down the page). And feel free to provide comments or suggestions — the feedback really helps!

If you are new to the site, I recommend a few posts for relevance and fundamentals:

There’s lots more to come and have a great week!

Best, Jim

A Continued Surge in IT Investment

In recent posts, I have noted previous articles on how the recovery in the US has been a ‘jobless’ recovery, yet one with stronger investment in equipment and IT. In the latest reporting, this peculiar effect appears to be even more pronounced, perhaps even running in ‘overdrive’. According to yesterday’s Wall Street Journal article, investment in labor savings in the US stepped up at the beginning of the decade, but with the recession, companies found bigger opportunities to automate even more with machinery and software. Timothy Aeppel, the author, notes that this investment and spending level has continued broadly through the fourth quarter. And while investment in technology will certainly cause employment to rise subsequently, it appears that CEOs are investing in technology far more than adding staff, compared with previous recoveries. And they are doing this for a reason — they can get better returns from the machinery and robotics and software than before. Part of this is due to low interest rates and unique tax breaks, but I believe that technology is enabling greater returns on reducing costs and improving productivity than previously. Further, I think fundamental changes in IT capabilities and robotics are fueling improved returns from automation even more than in the past, spurring even greater investment in IT.

Historically, IT applications and systems were applied to domains with large amounts of the routine work, often done by hundreds if not thousands of staff. These were the areas that justified the cost of IT and provided the greatest return. Improvements in application and database technology enabled technology to tackle tougher and more complex problems. It also enabled leverage of technology on medium scale processes. Basic toolsets like email and Sharepoint tackled the least complex and departmental processes. This progress is represented in the diagram below.

The introduction of client server platforms allowed solutions to be applied to routine work on a smaller scale, to large departments and medium sized companies rather than just divisions and large corporations. This accelerated with the internet and the advent of virtualization.

But the cumulative and accelerating effect of new development technologies and methods, new toolsets, new client devices, cloud infrastructure, and advancing data and analytics capabilities has enabled a far broader range of solutions to be applied easily and effectively across a wide range of institutions and problem sets. Small and medium sized companies through cloud services can now leverage similar infrastructure capabilities that previously could only be implemented and afforded by the largest corporations. This step change in progression is represented by the chart below.

[Chart: The increase of automation scope with new tech toolsets]

The new lightweight workflow tools like IBM’s Lombardi toolset and many others open up almost any departmental process to be easily and rapidly automated with a decent ROI. The proliferation of client interfaces through the internet and mobile allows customers to self serve intelligently for almost any product and service, enabling the elimination of large amounts of front office and back office work. This cumulative and compounding effect is truly a step change in what can be done by IT to automate the work within medium and large companies. And yet, despite this sea change in capabilities, you still find many IT departments focused almost wholly on their traditional scope. The project initiation and selection process is laborious and even arduous, oriented towards doing large (and fewer) projects. The amount of overhead required to execute almost any typical project would overwhelm lightweight automation for departmental-sized efforts. And yet, there are huge new areas of scope and automation that are possible now in almost every company. So how do you start to prove out these new areas and adapt your processes to enable them to get done?

I think there are several ways to get started here, some of which I mentioned briefly in my December post on  ‘A few things to add to your technology plans’.

1. Improve customer signup or account opening: Unless your company has redone these functions within the past 24 months, it is doubtful that this area is up to the latest expectations of customers. Enable account opening from the web and mobile devices, and leverage the app stores to provide mobile clients that have additional, useful and cool functions (nearest store, nearest ATM, or if you are a climbing gear company, the current temperature and weather at base camp on Mount Everest). Make them easy to navigate and progress through the application with a progress bar and associated menu. Ensure the physical process at the store or branch is as easy as the internet version (i.e., not twenty pages of forms). And tighten up the security with strong passwords (many sites today have a strength indicator as you type in the new password) and two factor security on critical transactions (e.g. wire transfers or bill payments). Remember you can now deliver the two factor security through the customer’s mobile device and not a separate token.

2. Fix two or three process issues that are basic transactions for customers: Just as you need to continually cycle back through your customer interfaces to keep them fresh and take advantage of the latest consumer technology, you also need to revisit some of the basic transactions that businesses typically fail to put on their investment list and yet become problematic service areas for customers. These would be areas like change of address, statement reprints or getting a replacement card. Because they are never on the project list, these services remain backwaters of process and automation, with predictable frustration for the customers, high error rates, and disproportionate manual effort to complete. Work with a strong business partner (perhaps the COO or someone close to the customer experience) to tee these up. Use the latest workflow tools to tackle the process piece. Leverage the latest data warehouse and ETL capabilities to integrate the customer data across business units and applications so that the process can be once and done. If you are not sure which basic customer process to start with, then talk to the unit handling customer complaints and look for those processes that have the highest number of issues even though the customer is trying to do something quite basic. Remember, every customer complaint requires an expensive response; by eliminating them, you drive material improvements in productivity and cost.

3. Implement more self-service: An oft-neglected area is the improvement of corporate support functions and their productivity and service. In a typical corporation almost every employee is touched by the HR and Finance processes, which can be heavily manual and archaic (again, they rarely show up on the project list). By working with your Finance and HR functions you can reduce their costs and improve their delivery to their users through automation and self-service. The advanced workflow toolsets (e.g., IBM’s BPM) mean you can do far more with incremental, small tiger team efforts than ever before. Your scope to automate and move to self service on your intranet is much greater; more minor business processes than ever before can be automated at far less cost and effort. The end results are higher productivity for your business, lower operations costs in HR and Finance, a more satisfied user base, and a better perception of IT.

4. Get to a modern production and service toolset for IT: For the past twenty years, there have been two traditional toolsets that most companies leveraged for production processes and service requests (Remedy and Peregrine). And most of us have implemented (with some struggle) reasonable implementations that met the bulk of our needs. But the latest generation of these toolsets (e.g., ServiceNow) makes our previous implementations look like dinosaurs. And when you consider that 60 or 70% of your staff and service desk are using these tools every day, and that you can make them far more productive with a new toolset, it is worth taking a look. Further, your business users will love the new IT ordering facilities on the intranet that are better than ordering a book from Amazon. By the way, the all-in operating cost of the new tools should be substantially less than your current costs for Remedy or Peregrine. And your team will be operating at a step level improvement in efficiency and productivity.

5. Get Going in Business Intelligence: One last item to make sure your company is capitalizing on is leveraging the data you already have to know your customer better, improve your products or services, and reduce your costs. Why advertise to customers who never click through or buy? Why do customers call your call center when they should be able to do it more easily online? Wade through all the unstructured data being generated on social media about your company to figure out how to improve your brand. Knowing the mood of the market and understanding your customers and the perspectives on your products and services requires IT to partner with the business to turn the data you have into intelligence. Investing in this area can now be tackled with the new big data tools on the market. If you are not doing much here, then I recommend finding out what your competitors are doing and sitting down with your business partners to sort through what you must do.

So, there are 5 things that, five years ago, would never have made any list. Yet if you make real progress on 3 of the 5, you can hit home runs in customer satisfaction, service quality, and a much better view of IT. And most important, you can ensure your company stays ahead of the game to achieve greater productivity and lower costs.

Any views or alternate perspectives on the progression of IT tools and solutions? Do you see the same sea change that I am calling out here for us to take advantage of?

Let me know. Best, Jim

Delivering More by Leveraging the ‘Flow of Work’

As you start out this year as an IT leader and you are trying to meet both project demand on one hand and savings goals on the other, remember to leverage what I term the IT ‘Flow of Work‘. Too often, once work comes into an organization, either through a new project going into production or through the original structure of the work, it is static. The work, whether it is server administration, batch job execution, or routine fixes, continues to be done by the same team that often developed it in the project cycle. Or at best, the system and its support are handed off to a dedicated run team, that continues to treat it as ‘custom’ work. In other words, once the system has been crafted by the project team, the regular work to run, maintain, and fix the system continues to be custom-crafted work.

This situation, where the work is static and continues to be executed by the initial, higher cost resources, would be analogous to a car company crafting a prototype or concept car, and then using that same team to produce every single subsequent car of that model with the same ‘craft’ approach. This of course does not happen, as the car company moves the prototype to a production factory where the work is standardized, automated and leaned, and far lower cost resources execute the work. Yet in the IT industry we often fail to leverage this ‘flow of work’. We use senior server engineers to do basic server administration tasks (thus making it custom work). We fail to ‘package’, productionalize or automate the tasks, thus requiring exorbitant amounts of manual work, because the project focused on delivering the feature and there was no optimization step to get it into the IT ‘factory’.

Below is a diagram that represents the flow of work that should occur in your IT organization.

[Diagram: Moving work to where it is most efficiently executed]

The custom work, work done for new design or complex analysis or maintenance, is the work done by your most capable and expensive resources. Yet many IT organizations waste these resources by doing custom work where it doesn’t matter. A case in point would be anywhere that you have IT projects leveraging custom designed and built servers/storage/middleware (custom work) instead of standardized templates (common work). And rarely do you see those tasks documented and automated such that they can be executed by the IT Operations team (routine work). Not only do you then waste your best resources on design that adds minimal business value, you also do not have those best resources available to do the new projects and initiatives the business needs to have done. Or similarly, your senior and high cost engineers are doing routine administrative work because that is how the work was implemented in production, and no investment has since been made to document or package up this routine work so it can easily be done by your junior or mid-level engineering resources.

Further, I would suggest that the common or routine engineering work often stays in that domain. Investment is infrequently made to further shrink-wrap, automate, and document the routine administrative tasks your mid-level engineers perform so that you can hand them off to the IT Operations staff to execute as part of their regular routine (and, by the way, the Ops team typically executes these tasks with much greater precision than the engineering teams).
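To make the idea of packaging and shrink-wrapping routine work concrete, here is a minimal sketch in Python (hypothetical paths, thresholds, and names throughout) of how a routine disk-utilisation check might be packaged so the operations team can run it from a runbook rather than a senior engineer doing it by hand. Treat it as an illustration of the principle, not a prescription for your environment.

```python
# A minimal sketch (hypothetical paths and thresholds) of 'packaging' a routine
# server-administration task so it can flow from engineering to the Ops runbook:
# the targets, threshold and escalation rule travel with the work, and the output
# is a plain pass/fail report the Ops team can act on without a senior engineer.
import shutil
from dataclasses import dataclass


@dataclass
class DiskCheck:
    path: str               # mount point to check
    warn_pct: float = 80.0  # utilisation (%) at which Ops raises a ticket


def run_checks(checks):
    """Run each packaged check and return plain-language results."""
    results = []
    for check in checks:
        usage = shutil.disk_usage(check.path)
        used_pct = 100 * usage.used / usage.total
        status = "OK" if used_pct < check.warn_pct else "RAISE TICKET"
        results.append(f"{check.path}: {used_pct:.1f}% used -> {status}")
    return results


if __name__ == "__main__":
    # The runbook entry for Ops is simply: run this script and act on any
    # 'RAISE TICKET' lines.
    for line in run_checks([DiskCheck("/"), DiskCheck("/var", warn_pct=75.0)]):
        print(line)
```

The specific check does not matter; the point is that the parameters, the escalation rule and the instructions travel with the work, so the task can move down the flow to the routine pool and stay there.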

So, rather than fall into the trap of having static pools of work within your organization, drive the investment so that work is continually packaged and executed at a more optimal level, freeing up your specialty resources to tackle more business problems. Set a bar for each of your teams for productivity improvements. Give them the time and investment to package the work and send it to the optimal pool. Encourage your teams to partner with the group that will receive their work at the next step of the flow. And make sure they understand that for every bit of routine work they transition to their partner team, they will receive more rewarding custom work.

After a few cycles of executing the flow of work within your organization, you will find you have gained significant capacity and reduced your cost to execute routine work substantially. This enables you to achieve a much improved ratio of run cost versus build costs by continually compressing the run costs.

I have seen this approach executed to great effect in both large and mid-sized organizations. Have you been part of or led similar exercises? What results did you see?

Best, Jim


IT Project Delivery – Dismal Government Projects Track Record

In the past several weeks, I have posted on project management best practices, and we talked about the track record of the IT industry as being, at best, a mixed bag. It turns out that for the UK government, that would be putting a very positive spin on their IT projects. The Times recently ran an article* detailing the eight worst areas of government spending that have cost taxpayers dearly. IT projects accounted for 3 blatant failures of the eight and had a major hand in 2 of the remainder. How did it come to this? Why is the practice of IT responsible for more than half of the UK government’s wasteful initiatives? And these are not minor blowups. The Fire and Rescue Plan, which started out as a 120 million pound effort to consolidate 46 control rooms into 9 regional centres, was finally axed after it cost 469 million pounds! A straightforward project (how many of us have consolidated call centres, trading floors or command centres in the past 10 years? I would venture 50% of your firms have done this) that cost 4 times the estimate and never even delivered. And that is not the worst one: the NHS records project has cost 6.4 billion pounds to date, with at least 2.7 billion of that wasted. While there are 60 million citizens in the UK, many of us in industry have customer bases in the tens of millions where we keep critical financial data safe and accessible for our customers. So while I would agree the health records project breaks new ground in some areas, it is not the Manhattan Project. This is a very doable project that, given the monies spent, should have already delivered significant benefit and capability. Yet very little has been delivered, and there is low confidence this will change in the near future. Overall, how can IT projects occupy 3 of the 8 slots of government failure areas when the defense industry has only 1 (and you could argue IT projects contributed heavily to that one)?

I think this dismal track record for government IT projects is due to some common issues and a few unique ones. First, there is typically poor and ambiguous sponsorship, compounded by very weak and changing requirements. There are too many parties and groups in government with a stake and a reason to argue over and change the project. Applying the methods of political processes (lengthy debate, consensus and influence) to defining and running a project is a recipe for disaster. And I suspect the contractors doing the work likely encouraged the changes and debate, as this was an opportunity to grow scope with much higher-margin work (change orders are always far more profitable than the original bid). Second, the approach undoubtedly used a waterfall method. Given the size and scope and the vast array of stakeholders, each step (e.g. requirements definition) took an extremely elongated time. An elongated schedule with a cumbersome and bloated program structure to match the stakeholder complexity would certainly have multiplied the costs. Still, it takes even more to cause such spectacular blowups.

There is an excellent book, Software Runaways, that documents such ‘death march’ projects: the kind of massive program that everyone knows is doomed to failure and yet everyone is still lashed to the ship on its voyage to failure. It is a fascinating read, in some ways like watching a traffic accident unfold: it is pretty awful, but you can’t tear yourself away. Of course, the UK government and its contractors do not have a monopoly on such spectacular program failures (though they certainly seem to be doing their best to enable additional chapters to be written). What is relevant here is that the book does an excellent job of reviewing about a dozen of the more interesting IT program failures and identifying the root causes. These root causes include the poor sponsorship and ill-defined requirements we discussed above. But the book also describes the mentality that sets into a large program team on their ‘death march’. In essence, even though members of the team know there are massive flaws in the program, because of the complexities and the different agendas and influences within a large, complex program team, they are often unable to repair them from within. Even worse, when an external party identifies the flaws, the program team bands together to defend against such external attacks at all costs. Their identity has become so caught up in the program that they would build a Potemkin village to demonstrate that there are no flaws.

So it is the instinct of these large program organizations, which assume a life of their own with all the members now vested in the program’s survival (not its delivery, but its survival), that, when combined with major flaws such as ill-defined requirements or the wrong methodology, creates the spectacular failures. Or, put another way, it is how humans work together within large programs that, when based on poor practices, multiplies the negative results to such an irrational level.

With that in mind, what are some approaches to prevent this from occurring? The basics of ensuring clear sponsorship and proper steering committees should help, but more radical changes to the approach would be better. Let’s take the command centre consolidation. Why consolidate all 46 into 9 in one waterfall or ‘big bang’ approach? Instead, take two regions of the 9 and run two pilots, each constructed with its own sponsors, steering committee and contractor. Set an overall schedule for them to deliver to a well-defined but high-level set of requirements or outcomes. The team that completes its work on time and meets the requirements earns the opportunity to bid on the next two regions to be consolidated. For the team that does not meet the bar, the contractor is barred from the next round of work and a negative performance mark goes to the sponsors and government leads. Now the payoff for a contractor encouraging endless changes by government is gone. Further, you are breaking up the work into more doable components that can then be improved in the next regional implementation. Smaller problem sets are eminently more doable than massive ones. By changing the approach to more incremental work with short cycles, and aligning program structure and incentives to getting real results, I think you would find a dramatic difference in the delivery of the project.

While these changes would certainly improve project delivery, I am sure there are other elements that have caused problems. What would you change? How do we get IT projects to stop being such a huge portion of wasted taxpayer funds?

I look forward to your comments.

Best, Jim

* The Times, which is a very good newspaper, unfortunately does not provide access to its articles via the internet without a subscription. If you have a subscription, the article title is ‘Scandal of the big spenders who have cost taxpayers dear’ published on January 9, 2012.

Ensuring Project Success II: Best Practices in Project Delivery

I hope that you are back from holidays well-rested and ready to go for the new year. My previous post provided some tips on how to get off to a great start to the new year; today’s post will focus on some additional best practices for project delivery (I covered project initiation and project reporting and communication practices in my December post on project delivery). Today, I will cover project management best practices. I expect to extend these posts with a project delivery best practices page covering more detail over the next month.

As I previously mentioned, project delivery is one of the critical services that IT provides. And yet our track record as an industry in project delivery is at best a mixed bag. Many of our business partners do not have confidence in predictable and successful IT project delivery. The evidence supports their lack of confidence: a number of different studies over the past five years put the industry success rate below 70% or even 50%. A good reference is the Dr. Dobb’s site, where a 2010 survey found project success rates to be:

  • Ad-hoc projects: 49% are successful, 37% are challenged, and 14% are failures.
  • Iterative projects: 61% are successful, 28% are challenged, and 11% are failures.
  • Agile projects: 60% are successful, 28% are challenged, and 12% are failures.
  • Traditional projects: 47% are successful, 36% are challenged, and 17% are failures.

So, obviously not a stellar track record for the industry. And while you may feel that you are doing a good job of project delivery, typically this means the most visible projects are doing okay or even well, while there are issues elsewhere in less visible or lower-priority projects.

There are also a few critical techniques to overcome obstacles, including how to get the right resource mix, doing the tough stuff first, and when to use a waterfall approach versus an incremental or agile one. I will cover the first two areas today and the rest over the next few weeks.

Project Management – I think one of the key positive trends of the past decade has been the rise of certified project managers (PMPs) and the leverage of PMI education and structure for project activities. These learnings and structures have substantially raised the level of discipline and knowledge in the project management process. But, like many templates, if used without adjustment and with minimal understanding of the drivers of project failure, you can have a well-run project that still does not deliver what was required.

First, though, I would reinforce the value of ensuring your project managers have a clear understanding of the latest industry PMI templates and structure (PMBOK), and your organization should have a formalized project methodology that is fully utilized. You should have at least 50% of the project managers (PMs) within your organization PMP certified. If you have a low proportion of certified PMs today, set a goal for the team to achieve a step improvement through a robust training and certification program. This will imbue an overall level of professionalism and technique within your PM organization. The foundational attributes of good PMs are discipline and structure; having most, if not all, of your PM team familiar with the PMI body of knowledge will establish a common PM discipline and structure for your organization.

So, assuming you have in place a qualified PM team and a defined project methodology based primarily on the PMI structure, we can focus on three key project management best practice areas. Note that these best practice areas help mitigate the most common causes of project failure, which include:

  • lack of sponsorship
  • inadequate team resources or skills
  • poor or ambiguous requirements or scope
  • new technology
  • incorrect development or integration approaches for the scope, scale, timeframe or type of project
  • inadequate testing or implementation preparation
  • poor design

As I mentioned in the Project Initiation best practices post, there is a key early gate that must be in place to ensure effective sponsorship, adequate resources, and proper project scope and definition. Most projects that fail (and remember, this is probably half of the projects you do) started to fail at the beginning. They were started without clear business ownership or decision-makers, or with a very broad or ambiguous description of what would be delivered, or they were started when you were already over-committed, without senior business analysts or technology designers. Use this entry gate to ensure the effort is not just a waste of money, resource time, and business opportunity.

So assuming you have executed project initiation well, you should then leverage the following project management best practice techniques.

Effective definition, design and implementation approaches: Failure in these areas is often due to what I would call ‘beating around the bush’. In many projects, getting the real work done (well-defined requirements, the core engineering work, thorough testing) is hard and takes real talent. Many teams just work on the edges and don’t tackle the tough stuff. I recommend ensuring that your project managers and design leads are up for tackling the tough stuff in a project and doing it as early as possible (not late in the cycle). And when it is done, it should be done fully. Some examples:

  • if requirements are complex and ambiguous at the start, a good project manager will schedule rigorous project definition sessions. The PM will ensure broad participation and good attendance and engagement at the meetings. Perhaps a form of rapid requirements gathering is used to quickly elicit the definitions needed. Or the project team will use an agile or highly iterative, incremental approach to bring the user interfaces and outputs back to the users for early clarification and verification.
  • if new technology is being used, or there is a critical component or algorithm that must be engineered and implemented, then work begins early on these core sections. A good PM knows you do not leave the tough stuff to be done last. In particular, this means you do not wait until the final weeks of the project to do complex integration. Instead, you develop and test the integration early with stubbed-out pieces (see the sketch after this list). Pilot and test the tough stuff, in parallel if necessary. And if there are major hurdles you did not anticipate, you will find them early and can kill or alter the project with far less sunk cost.
  • as CIO, make sure that the major IT initiatives your company is investing in make a difference where the rubber meets the road. The system must make the most common tasks easier, not just pile on features and polish. You want the front-line staff to want to use the new system (not be forced to convert).
  • in sum, the approach for the design and implementation work should be incremental, parallelized and iterative rather than serial, waterfall, or as I like to call it, ‘big bang theory’. Waterfall can be used for well-known domains, well-understood engineering, and short or mid-range timelines. Otherwise, it should be avoided.
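To illustrate the point above about developing and testing integration early with stubbed-out pieces, here is a minimal sketch in Python using entirely hypothetical names (a downstream ‘ledger’ service and a settle_invoice function). The dependency that does not exist yet is stubbed, so the integration path can be exercised and verified long before the real system is available.

```python
# A minimal sketch (hypothetical interfaces) of exercising integration logic early
# by stubbing out a downstream system that is not yet built. The real service can
# be dropped in later without changing the code that has already been tested.
from typing import Protocol


class LedgerService(Protocol):
    def post(self, account: str, amount: float) -> str: ...


class StubLedger:
    """Stand-in for the real ledger: records calls and returns canned IDs."""
    def __init__(self):
        self.posted = []

    def post(self, account: str, amount: float) -> str:
        self.posted.append((account, amount))
        return f"STUB-{len(self.posted)}"


def settle_invoice(ledger: LedgerService, account: str, amount: float) -> str:
    # The integration logic under test: validation plus the downstream call.
    if amount <= 0:
        raise ValueError("amount must be positive")
    return ledger.post(account, round(amount, 2))


if __name__ == "__main__":
    ledger = StubLedger()
    assert settle_invoice(ledger, "ACME-001", 199.999) == "STUB-1"
    assert ledger.posted == [("ACME-001", 200.0)]
    print("integration path exercised against the stub")
```

The design choice worth noting is that the calling code depends only on the interface, so swapping in the real service later requires no changes to the integration logic that has already been proven.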

Project checklists, inspections and assessments: Use lightweight project and deliverable review approaches to identify issues early and enable correction. Given that the failure rate on projects is typically 50% or more, and given the over-demand on most IT shops, nearly every project will have issues. The key is to detect the issues early enough so they can be remedied and the project can be brought back on track. I have always been stunned at how many projects, especially large ones, go off track, continue to be reported green while everyone on the project team thinks things are going wrong, and only at the very last minute flip from green to red. Good, quick assessments of projects can be done frequently and easily and enable critical early identification of issues. These assessments will only work if there is an environment where it is okay to point out issues that must be addressed. (By the way, if 90%+ of your projects are green, that means either no one can report issues or you have an extremely good project team.) The assessment can be a checklist or a quick but thorough review by experienced PMs; issues should be noted as potential problems, then compiled and left with the PM and the sponsors to sort through and decide if and how to address them. You and the senior manager in charge of your PMs should get a monthly summary of the assessments done. Often you will find the same common issues across projects (e.g. resource constraints or late sponsor sign-off on requirements) and you can use this to remedy the root cause.
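As a simple illustration of how lightweight such an assessment can be, here is a minimal sketch in Python with hypothetical checklist items and project names: each review is a handful of yes/no answers, and a monthly roll-up surfaces the issues that recur across projects so the root causes can be addressed.

```python
# A minimal sketch (hypothetical checklist items and project names) of a
# lightweight project assessment: each review records a few yes/no answers,
# and the monthly summary counts which issues recur across projects.
from collections import Counter

CHECKLIST = [
    "Sponsor has signed off on current scope",
    "Key resources are staffed and not over-allocated",
    "Toughest technical work is scheduled early",
    "Test and implementation plans exist and are resourced",
]


def open_issues(answers):
    """Return the checklist items not answered 'yes' (answers: item -> bool)."""
    return [item for item in CHECKLIST if not answers.get(item, False)]


def monthly_summary(assessments):
    """Count how often each open issue appears across the assessed projects."""
    return Counter(issue for issues in assessments.values() for issue in issues).most_common()


if __name__ == "__main__":
    assessments = {
        "Billing revamp": open_issues({CHECKLIST[0]: True}),
        "CRM migration": open_issues({CHECKLIST[0]: True, CHECKLIST[2]: True}),
    }
    for issue, count in monthly_summary(assessments):
        print(f"{count} project(s): {issue}")
```

The compiled issues stay with the PM and the sponsor, exactly as described above; the only output that goes up the chain is the recurring-issue summary.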

Another key technique to use in addition to project assessments is inspections. Just about any project deliverable (requirements, high-level designs, configurations, test plans) can be inspected, not just code. Inspections are under-utilised yet typically the most effective way to find defects short of production itself. Have your project teams use inspections for critical deliverables; they work. And augment inspections with a few deep dives of your own (not inspections) into both in-flight and completed projects to ensure you have a pulse on how the work is getting done and what issues your teams are facing.

I hope to add some Project Assessment Checklists to the best practice pages in the coming month. Anything you would add to the list?

Next week I hope to tackle Program Management and Release Management and key best practices in those areas.

Any good project stories you would like to share? Anything you would change in what I recommended? Let me know.

Best, Jim