When to Benchmark and How to Leverage the Results

Benchmarking is an important management tool, and yet I have found that most organizations do not take advantage of it. There seem to be three camps when it comes to benchmarking:

  • those who, whether for lack of time or some other rationale, avoid benchmarking entirely
  • those who try it occasionally but, for a variety of reasons, are never able to leverage the results and make progress
  • those who benchmark regularly and make material improvements

I find it surprising that so many IT shops don’t benchmark. I have always felt that there are only two possible outcomes from benchmarking:

  1. You benchmark and find that you compare very well with the rest of the benchmark population. You can now use this data in your reporting to your boss and the business to show the value your team is delivering.
  2. You benchmark and find a number of things you can improve. You now have the specifics as to where, and likely how, to improve.

So, with the possibility of good results with either outcome, when and how should you benchmark your IT shop?

I recommend making sure that your team and the areas you want to benchmark have an adequate maturity level in terms of defined processes, operational metrics, and cost accounting. Usually, to take advantage of a benchmark, you should be at minimum a Level 2 shop, and preferably a Level 2+ shop with well-understood unit costs and unit productivity. If you are not at this level, then in order to compare your organization to others you will first need to gather an accurate inventory of assets, staff, and time spent by activity (e.g., run versus change). This data should be supplemented with defined processes and standards for the activity to be compared. For a thorough benchmark you will also need data on the quality of the activity, and preferably six months of trending for all of it. In essence, these are prerequisites that must be in place before you benchmark.

I do think that many of the teams that try to benchmark but are then unable to do much with the results fail to progress because:

  • they do not have consistent processes on which improvements and changes can be implemented
  • they do not routinely collect the base data, so once the benchmark is over, no further data is collected to understand whether any of the improvements had an effect
  • the lack of data and standards forces so much estimation in the benchmark that it cannot be used to pinpoint the issues

So, rather than benchmark when you are a Level 1 or 2- shop, first work on the basics of improving the maturity of your activities. For example, collect and maintain accurate asset data; this is foundational to any benchmarking. Similarly, track how your resources spend their time; this is required anyway to estimate or allocate costs to whoever drives them, so do it accurately. And implement process definitions and base operational metrics, and have the team review and publish them monthly.

For example, let’s take the Unix server area. If we are looking to benchmark, we would want to check various attributes against the industry, including:

  • number of servers (versus similar size firms in our industry)
  • percent virtualized
  • unit cost (per server)
  • unit productivity (servers per admin)
  • cost by server by category (e.g. staff, hardware, software, power/cooling/data center, maintenance)

With this information you can quickly identify where you have inefficiencies, where you are paying too much (e.g., your software cost per server), or where you are unproductive (your servers per admin is low versus the industry). This then allows you to draw up effective action plans, because you are addressing problems that can likely be solved: you are looking to bring your capabilities up to what is already best practice in the industry.
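As a sketch of how such a comparison might look in practice (all the shop figures and industry medians below are purely illustrative, not real benchmark data):

```python
# Illustrative sketch: comparing Unix server area metrics against industry
# medians. All numbers are invented for demonstration; a real benchmark
# supplies the medians and your own cost accounting supplies the inputs.

our_shop = {
    "servers": 400,
    "admins": 10,
    "annual_cost": 3_200_000,  # total annual cost of the Unix server area
    "virtualized": 140,
}

industry_median = {
    "unit_cost": 7_000.0,       # annual cost per server
    "servers_per_admin": 45.0,
    "pct_virtualized": 50.0,
}

# Derive our unit metrics from the raw inventory and cost data.
ours = {
    "unit_cost": our_shop["annual_cost"] / our_shop["servers"],
    "servers_per_admin": our_shop["servers"] / our_shop["admins"],
    "pct_virtualized": 100 * our_shop["virtualized"] / our_shop["servers"],
}

# A positive gap on unit_cost, or a negative gap on productivity metrics,
# flags an improvement opportunity.
for metric, median in industry_median.items():
    gap = ours[metric] - median
    print(f"{metric}: ours={ours[metric]:.1f} median={median:.1f} gap={gap:+.1f}")
```

In this made-up example the shop pays more per server than the median and runs fewer servers per admin, which is exactly the kind of pinpointed finding that turns a benchmark into an action plan.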

I recall a benchmark of the Unix server area where our staff cost per server was out of line with the industry even though we had relatively strong productivity. Upon further investigation we realized we had a very senior workforce (paid at the top end of the scale, and very capable) that had held on to even the most junior tasks. So we set about improving this in several ways:

  • we packaged up and moved the routine administrative work to IT operations (who did it for far less and in a much more automated manner)
  • we put in place a college graduate program and shifted our hiring in this group from a focus on mid-level and senior engineers only to one of mostly graduates, some junior engineers, and only a very few mid-level engineers
  • we also put in place better tools for the work to be done so staff could be more productive (more servers per admin)

The end result after about 12 months was a staff cost per server that was significantly below the industry median, approaching best in class (and thus we could reduce our unit allocations to the business). Even better, with a more balanced workforce (i.e., not all senior staff) we ended up with a happier team because the senior engineers were now looked up to and could mentor the new junior staff. Moreover, the team now could spend more of their time on complex change and project support rather than just run activities. And with the improved tools making everyone more productive, it resulted in a very engaged team that regularly delivered outstanding results.

I am certain that some of this would have been evident with just a review. But by benchmarking, not only were we able to precisely identify where we had opportunity, we were better able to diagnose and prescribe the right course of action, with targets we knew were doable.

Positive outcomes like this are the rule when you benchmark. I recommend that you conduct a yearly external benchmark for each major component area of infrastructure and IT operations (e.g. storage, mainframe, server, network, etc). And at least every two years, assess IT overall, assess your Information Security function, and if possible, benchmark your development areas and systems in business terms (e.g., cost per account, cost per transaction).

One possibility in the development area: since most IT shops have multiple sub-teams within development, you can use operational development metrics to compare them against each other (e.g., defect rates, cost per man hour, cost per function). The sub-team with the best metrics can then share their approach so all can improve.
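A hedged illustration of such an internal comparison (the team names, metrics, and figures are all invented for the sketch):

```python
# Illustrative sketch: ranking development sub-teams on shared operational
# metrics so the strongest team's practices can be studied by the rest.
# Team names and figures are invented; lower is better on both metrics here.

teams = {
    "payments":  {"defects_per_kloc": 1.8, "cost_per_function": 520},
    "reporting": {"defects_per_kloc": 3.4, "cost_per_function": 610},
    "channels":  {"defects_per_kloc": 2.1, "cost_per_function": 480},
}

def rank(metric):
    """Return team names ordered best (lowest) to worst on the given metric."""
    return sorted(teams, key=lambda t: teams[t][metric])

for metric in ("defects_per_kloc", "cost_per_function"):
    order = rank(metric)
    print(f"{metric}: best={order[0]}, worst={order[-1]}")
```

Note that different teams may lead on different metrics (here one team has the lowest defect rate while another has the lowest cost), which is itself useful: each can share its approach in the dimension where it excels.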

If you conduct regular exercises like this, and combine it with a culture of relentless improvement, you will find you achieve a flywheel effect, where each gain of knowledge and each improvement becomes more additive and more synergistic for your team. You will reduce costs faster and improve productivity faster than taking a less open and less scientific approach.

Have you seen such accelerated results? What were the elements that contributed to that progress? I look forward to hearing from you.

Best, Jim


IT Service Desk best practices

An important interface for your internal customers is your IT service desk. Unfortunately, in many situations the service desk (or help desk) does not use up-to-date practices and can be a backwater of capability. This can result in a very poor reputation for IT, because the service desk is the primary customer interface with the IT organization. I recall starting at a company tasked with turning around the IT organization. When I asked about the IT help desk, the customer turned to me and said, ‘You mean the IT helpless desk?’ With a reputation that poor with our customers, I immediately set out to turn around our service desk and supporting areas.

The IT Service Desk may seem quite straightforward to address — maybe the thought is that all you really need to do is have one number, staff it, be courteous and try hard.  This isn’t the case and there are some clear best practice techniques and approaches that will enable you to deliver consistent, positive interactions with your customers as well as enable greater productivity and lower cost for the broader IT team.

For our discussion, I have two esteemed former colleagues who have run top-notch service desks and will be authoring material on best practices and how to deliver an outstanding customer experience through this critical interface. Both Steve Wignall and Bob Barnes have run world-class service desks at large financial services companies. And I think you will find the guidance and techniques they provide here today, and in subsequent posts, a surefire path to transforming your ‘helpless’ desk into a best-in-class service desk.

First, let’s recap why the service desk is such an important area for IT:

  • the service desk is the starting point for many key processes and services for IT
  • well-constructed, the service desk can handle much of the routine work of IT, enabling engineering and other teams to do higher value work
  • it is your primary interface with the customer, where you can gauge the pulse of your users, and make the biggest daily impact on your reputation
  • with the right data and tools, the service desk can identify and correct problem outbreaks early, thereby reducing customer impact and lowering overall support costs

And yet, despite its importance to IT, too often Service Desks are chronic under-performers due to the following issues:

  • poor processes or widespread lack of adherence to them
  • the absence, or low-quality application, of a scientific, metrics-based management approach
  • lousy handoffs and poor delivery by the rest of IT
  • inadequate resources and recruiting, worsened by weak staff development and team-building
  • weak sponsorship by senior IT leaders
  • ineffective service desk leadership

Before we get into the detailed posts on service desk best practices, here are a few items to identify where your team is, and a few things to get started on:

1. What is your first call resolution rate? If it is below 70%, there is work to do. If it is above 70% but primarily because of password resets, then put in a decent self-serve password reset and re-evaluate.

2. What is the experience of your customers? Are you doing a monthly or quarterly survey to track satisfaction with the service? If not, get going and implement. I recommend a 7 point scale and hold yourself to the same customer satisfaction bar your company is driving for with its external customers.

3. What are the primary drivers of calls? Are they systems issues? Are they due to user training or system complexity? Workstation problems? Are calls being made to report availability or general system performance issues? If you know clearly (e.g., via Pareto analysis) what is driving your calls, then you have the metrics in place (if not, get the metrics, and processes if necessary, in place). Once you have the metrics you can begin to sort out the causes and tackle them in turn. What is your team doing with the metrics? Are they being used to identify the cause of calls so you can eliminate the call in the first place?

For example, if the leading drivers of calls are training and complexity, is your team reworking the system so it is more intuitive, or improving the training material? If the drivers are workstation issues, do you know which component and which model, and are you now figuring out what proactive repair or replacement program will reduce these calls? Remember, each call you eliminate probably saves your company at least $40 (mostly by eliminating the caller’s downtime).
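A minimal Pareto-style sketch of this analysis (the call categories, counts, and the roughly $40-per-call figure are illustrative, not real data):

```python
# Illustrative Pareto analysis of service desk call drivers, plus the rough
# annual savings from eliminating a category (using the ~$40-per-call figure).
# All category counts are invented for demonstration.

call_counts = {
    "password reset": 1200,
    "application training": 800,
    "workstation hardware": 450,
    "system availability": 300,
    "other": 250,
}
COST_PER_CALL = 40  # rough fully-loaded cost of one call, mostly caller downtime

total = sum(call_counts.values())
cumulative = 0.0
# Sort drivers from largest to smallest and accumulate their share of volume;
# the top two or three categories typically account for most of the calls.
for driver, count in sorted(call_counts.items(), key=lambda kv: -kv[1]):
    cumulative += 100 * count / total
    savings = count * COST_PER_CALL  # savings if this driver is eliminated
    print(f"{driver:22s} {count:5d} calls  cum {cumulative:5.1f}%  save ${savings:,}")
```

Even this toy breakdown shows why the analysis matters: eliminating only the largest driver in the sketch would remove 40% of call volume, and each category carries a dollar figure you can put in front of the business.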

4. Do you and your senior staff meet with the service desk team regularly and review their performance and metrics? Do senior IT leaders sponsor major efforts to eliminate the source of calls? Does the service desk feature in your report to customers?

5. If your service desk is run by a third-party provider, have you visited their site lately? Does your service area have the look, feel, and knowledge of your company so they can convey your brand? And hold them to the same high performance bar as you would your own team.

Use these five items for a quick triage of your service desk and our next posts will cover the best practices and techniques to build a world-class service desk. Later this week, Bob and Steve will cover the structure and key elements of a world-class service desk and how to go about transforming your current desk or building a great one from scratch. Steve will also cover the customer charter and its importance to maintaining strong performance and meeting expectations.

I look forward to your thoughts and experiences in this area. And perhaps you have a service desk that could use some help turning around its reputation, and IT’s.

Best, Jim


So far, so good

It has only been a few weeks into the new year but I think we are off to a good start here at recipeforIT.com. Given the significant number of new readers, I thought I would touch again on the key goals for this site and also give you a preview of upcoming posts and pages.

As many of you know, delivering IT today, with all of the cost reduction demands, the rapidly changing technology, the impact of IT consumerization, and security and risk demands, is, simply put, tough work. It is complicated and hard work to get the complex IT mechanism, with all the usual legacy systems issues, to perform as well as the business requires. So to help those IT leaders out, I have started this site to provide recipes and ideas on how to tackle the tough but solvable problems they face. And in addition to providing best practices, we will give you a solid and sensible perspective on the latest trends and technologies.

So what is upcoming? First, I have two esteemed former colleagues who have run top notch service desks that will be authoring material on best practices and how to deliver an outstanding customer experience through this critical interface. Look for their posts on the ‘face of IT’ over the next several weeks. Second, I want to touch on innovation and the increasingly common business fascination with innovation as a solution to all manner of business ills. I hope to also explore leadership further in February. And lastly, I have a number of incremental but hopefully material improvements to the site pages that will provide further details on best practices in a comprehensive manner.

I think it is a good lineup for the next month! I continue to receive strong feedback from many of you on the usefulness and actionability of the material. I will definitely work to ensure we maintain that relevance.

Don’t forget you can subscribe to the site so you get an email when there’s a new post (subscribing is on the rightmost bar, halfway down the page). And feel free to provide comments or suggestions — the feedback really helps!

If you are new to the site, I recommend a few posts for relevance and fundamentals:

There’s lots more to come and have a great week!

Best, Jim

A Continued Surge in IT Investment

In recent posts, I have noted previous articles on how the recovery in the US has been a ‘jobless’ recovery, yet one with stronger investment in equipment and IT. In the latest reporting this peculiar effect appears to be even more pronounced, perhaps even running in ‘overdrive’. According to yesterday’s Wall Street Journal article, investment in labor savings in the US stepped up at the beginning of the decade, but with the recession, companies found bigger opportunities to automate even more with machinery and software. Timothy Aeppel, the author, notes that this investment and spending level has continued broadly through the fourth quarter. And while investment in technology will certainly cause employment to rise subsequently, it appears that CEOs are investing in technology far more than adding staff, compared with previous recoveries. And they are doing this for a reason: they can get better returns from machinery, robotics, and software than before. Part of this is due to low interest rates and unique tax breaks, but I believe technology is enabling greater returns on cost reduction and productivity improvement than previously. Further, I think fundamental changes in IT capabilities and robotics are fueling improved returns from automation even beyond what has occurred in the past, spurring even greater investment in IT.

Historically, IT applications and systems were applied to domains with large amounts of routine work, often done by hundreds if not thousands of staff. These were the areas that justified the cost of IT and provided the greatest return. Improvements in application and database technology enabled technology to tackle tougher and more complex problems, and to be leveraged on medium-scale processes. Basic toolsets like email and SharePoint tackled the least complex and departmental processes. This progress is represented in the diagram below.

The introduction of client server platforms allowed solutions to be applied to routine work on a smaller scale, to large departments and medium sized companies rather than just divisions and large corporations. This accelerated with the internet and the advent of virtualization.

But the cumulative and accelerating effect of new development technologies and methods, new toolsets, new client devices, cloud infrastructure, and advancing data and analytics capabilities has enabled a far broader range of solutions to be applied easily and effectively across a wide range of institutions and problem sets. Small and medium sized companies through cloud services can now leverage similar infrastructure capabilities that previously could only be implemented and afforded by the largest corporations. This step change in progression is represented by the chart below.

The increase of automation scope with new Tech toolsets

The new lightweight workflow tools like IBM’s Lombardi toolset, and many others, open up almost any departmental process to be easily and rapidly automated with a decent ROI. The proliferation of client interfaces through the internet and mobile allows customers to self-serve intelligently for almost any product and service, enabling the elimination of large amounts of front office and back office work. This cumulative and compounding effect is truly a step change in what IT can do to automate the work within medium and large companies. And yet, despite this sea change in capabilities, you still find many IT departments focused almost wholly on their traditional scope. The project initiation and selection process is laborious, even arduous, and oriented towards doing large (and fewer) projects. The amount of overhead required to execute almost any typical project would overwhelm lightweight automation for departmental-sized efforts. And yet there are huge new areas of scope and automation now possible in almost every company. So how do you start to prove out these new areas and adapt your processes to enable them to get done?

I think there are several ways to get started here, some of which I mentioned briefly in my December post on  ‘A few things to add to your technology plans’.

1. Improve customer signup or account opening: Unless your company has redone these functions within the past 24 months, it is doubtful that this area is up to the latest expectations of customers. Enable account opening from the web and mobile devices, and leverage the app stores to provide mobile clients that have additional, useful, and cool functions (nearest store, nearest ATM, or if you are a climbing gear company, the current temperature and weather at base camp on Mount Everest). Make them easy to navigate and progress through the application with a progress bar and associated menu. Ensure the physical process at the store or branch is as easy as the internet version (i.e., not twenty pages of forms). And tighten up the security with strong passwords (many sites today have a strength indicator as you type in the new password) and two-factor security on critical transactions (e.g., wire transfers or bill payments). Remember you can now deliver the two-factor security through the customer’s mobile device rather than a separate token.

2. Fix two or three process issues that are basic transactions for customers: Just as you need to continually cycle back through your customer interfaces to keep them fresh and take advantage of the latest consumer technology, you also need to revisit some of the basic transactions that businesses typically fail to put on their investment list and yet become problematic service areas for customers. These are areas like change of address, statement reprints, or getting a replacement card. Because they are never on the project list, these services remain backwaters of process and automation, with predictable frustration for customers, high error rates, and disproportionate manual effort to complete. Work with a strong business partner (perhaps the COO or someone close to the customer experience) to tee these up. Use the latest workflow tools to tackle the process piece. Leverage the latest data warehouse and ETL capabilities to integrate the customer data across business units and applications so that the process can be once and done. If you are not sure which basic customer process to start with, talk to the unit handling customer complaints and look for the processes that have the highest number of issues even though the customer is trying to do something quite basic. Remember, every customer complaint requires an expensive response; by eliminating them, you drive material improvements in productivity and cost.

3. Implement more self-service: An oft-neglected area is the improvement of corporate support functions and their productivity and service. In a typical corporation almost every employee is touched by the HR and Finance processes, which can be heavily manual and archaic (again, they rarely show up on the project list). By working with your Finance and HR functions you can reduce their costs and improve their delivery to their users through automation and self-service. The advanced workflow toolsets (such as IBM’s BPM) mean you can do far more with incremental, small tiger-team efforts than ever before. Your scope to automate and move to self-service on your intranet is much greater, and more minor business processes than ever before can be automated at far less cost and effort. The end results are higher productivity for your business, lower operations costs in HR and Finance, a more satisfied user base, and a better perception of IT.

4. Get to a modern production and service toolset for IT: For the past twenty years, there have been two traditional toolsets that most companies leveraged for production processes and service requests (Remedy and Peregrine). And most of us have implemented (with some struggle) reasonable implementations that met the bulk of our needs. But the latest generation of these toolsets (e.g., ServiceNow) makes our previous implementations look like dinosaurs. And when you consider that 60 or 70% of your staff and service desk are using these tools every day, and that you can make them far more productive with a new toolset, it is worth taking a look. Further, your business users will love the new IT ordering facilities on the intranet, which are better than ordering a book from Amazon. By the way, the all-in operating cost of the new tools should be substantially less than your current costs for Remedy or Peregrine. And your team will be operating at a step-level improvement in efficiency and productivity.

5. Get going in business intelligence: One last item to make sure your company is capitalizing on is leveraging the data you already have to know your customer better, improve your products or services, and reduce your costs. Why advertise to customers who never click through or buy? Why do customers call your call center when they should be able to do it more easily online? Wade through the unstructured data being generated on social media about your company to figure out how to improve your brand. Knowing the mood of the market and understanding your customers and their perspectives on your products and services requires IT to partner with the business to turn the data you have into intelligence. Investing in this area can now be tackled with the new big data tools on the market. If you are not doing much here, then I recommend finding out what your competitors are doing and sitting down with your business partners to sort through what you must do.

So, there are five things that five years ago would never have made any list. Yet if you make real progress on three of the five, you can hit home runs in customer satisfaction, service quality, and a much better view of IT. And most important, you can ensure your company stays ahead of the game to achieve greater productivity and lower costs.

Any views or alternate perspectives on the progression of IT tools and solutions? Do you see the same sea change that I am calling out here for us to take advantage of?

Let me know. Best, Jim

Delivering More by Leveraging the ‘Flow of Work’

As you start out this year as an IT leader, trying to meet project demand on one hand and savings goals on the other, remember to leverage what I term the IT ‘Flow of Work’. Too often, once work comes into an organization, either through a new project going into production or through the original structure of the work, it is static. The work, whether it is server administration, batch job execution, or routine fixes, continues to be done by the same team that developed it in the project cycle. Or at best, the system and its support are handed off to a dedicated run team that continues to treat it as ‘custom’ work. In other words, once the system has been crafted by the project team, the regular work to run, maintain, and fix the system continues to be custom-crafted work.

This situation, where the work is static and continues to be executed by the initial, higher-cost resources, is analogous to a car company crafting a prototype or concept car, and then using that same team to produce every single subsequent car of that model with the same ‘craft’ approach. This of course does not happen: the car company moves the prototype to a production factory where the work is standardized, automated, and leaned, and far lower-cost resources execute it. Yet in the IT industry we often fail to leverage this ‘flow of work’. We use senior server engineers to do basic server administration tasks (thus making it custom work). We fail to ‘package’, productionalize, or automate the tasks, requiring exorbitant amounts of manual work, because the project focused on delivering the feature and there was no optimization step to get it into the IT ‘factory’.

Below is a diagram that represents the flow of work that should occur in your IT organization.

Moving work to where it is most efficiently executed

Custom work, the work done for new design or for complex analysis and maintenance, is done by your most capable and expensive resources. Yet many IT organizations waste these resources by doing custom work where it doesn’t matter. A case in point is anywhere you have IT projects leveraging custom-designed and custom-built servers/storage/middleware (custom work) instead of standardized templates (common work). And rarely do you see those tasks documented and automated such that they can be executed by the IT Operations team (routine work). Not only do you waste your best resources on design that adds minimal business value, you then do not have those resources available to do the new projects and initiatives the business needs done. Similarly, your senior, high-cost engineers end up doing routine administrative work because that is how the work was implemented in production, and no subsequent investment has been made to document or package up this routine work so it can easily be done by your junior or mid-level engineering resources.

Further, I would suggest that the common or routine engineering work often stays in that domain. Investment is infrequently made to further shrink-wrap, automate, and document the routine administrative tasks of your mid-level engineers so that you can hand them off to the IT Operations staff to execute as part of their regular routine (and, by the way, the Ops team typically executes these tasks with much greater precision than the engineering teams).

So, rather than fall into the trap of static pools of work within your organization, drive investment so that work can be continually packaged and executed at a more optimal level, freeing up your specialty resources to tackle more business problems. Set a bar for each of your teams for productivity improvements. Give them the time and investment to package the work and send it to the optimal pool. Encourage your teams to partner with the group that will receive their work at the next step of the flow. And make sure they understand that for every bit of routine work they can transition to their partner team, they will receive more rewarding custom work.

After a few cycles of executing the flow of work within your organization, you will find you have gained significant capacity and substantially reduced your cost to execute routine work. This enables you to achieve a much improved ratio of run costs versus build costs by continually compressing the run costs.

I have seen this approach executed with great effect in both large and mid-sized organizations. Have you been part of or led similar exercises? What results did you see?

Best, Jim


IT Project Delivery – Dismal Government Projects Track Record

In the past several weeks, I have posted on project management best practices. And we talked about the track record of the IT industry as being, at best, a mixed bag. It turns out that for the UK government, that would be putting a very positive spin on their IT projects. The Times recently ran an article* detailing the eight worst areas in the government which have cost the taxpayers dearly. IT projects were 3 of the 8 blatant failures and had a major hand in 2 of the remaining. How did it come to this? Why is the practice of IT responsible for more than half of the UK government’s wasteful initiatives? And these are not minor blowups. The Fire and Rescue Plan, which started out as a 120 million pound effort to consolidate 46 control rooms into 9 regional centres, was finally axed after it cost 469 million pounds! A straightforward project (how many of us have consolidated call centres, trading floors, or command centres in the past 10 years? I would venture 50% of your firms have done this) that cost 4 times the estimate and never even delivered! And that is not the worst one: the NHS records project has cost 6.4 billion pounds to date, with at least 2.7 billion of that wasted. While there are 60 million citizens in the UK, many of us in industry have customer bases in the tens of millions where we keep critical financial data safe and accessible for our customers. So while I would agree the health records effort breaks new ground in some areas, it is not the Manhattan Project. This is a very doable project which, given the monies spent, should have already delivered significant benefit and capability. And yet very little has been delivered, and there is low confidence this will change in the near future. Overall, how can IT projects hold 3 of the 8 slots of government failure areas when the defense industry holds only 1 (and you could argue IT projects contributed heavily to that one)?

I think this dismal track record for government IT projects is due to some common issues and a few unique ones. First, there is typically poor and ambiguous sponsorship, compounded by very weak and changing requirements. There are too many parties and groups in government with a stake and a reason to argue over and change the project. Applying the methods of political processes (lengthy debate, consensus, and influence) to defining and running a project is a recipe for disaster. And I suspect the contractors doing the work likely encouraged changes and debate, as this was an opportunity to grow scope with much higher-margin work (change orders are always much more profitable than the original bid). Second, the approach undoubtedly used a waterfall method. And given the size and scope and the vast array of stakeholders, each step (e.g., requirements definition) took an extremely elongated time. An elongated schedule, with a cumbersome and bloated program structure to match the stakeholder complexity, would certainly have multiplied the costs. Still, it takes even more to cause such spectacular blowups.

There is an excellent book, Software Runaways, that documents such 'death march' projects. 'Death march' projects are the kind of massive program that everyone knows is doomed to failure and yet everyone is still lashed to the ship on its voyage to failure. It is a fascinating read and in some ways like watching a traffic accident unfold: it's pretty awful but you can't tear yourself away. Of course, the UK government and its contractors do not have a monopoly on such spectacular program failures (though they certainly seem to be doing their best to enable additional chapters to be written). What is relevant here is that the book does an excellent job of reviewing about a dozen of the more interesting past IT program failures and identifying the root causes. These root causes include the poor sponsorship and ill-defined requirements we discussed above. But the book also describes the mentality that sets into a large program team on their 'death march'. In essence, even though the members of the team know there are massive flaws in the program, because of the complexities and the different agendas and influences of a large, complex program team, they are often unable to repair the flaws from within. Even worse, when an external party identifies the flaws, the program team bands together to defend against such external attacks at all costs. Their identity has become so caught up in the program that they would build a Potemkin village to demonstrate that there are no flaws.

So it is the instinct of these large program organizations to assume a life of their own, with all the members now vested in the program's survival (not its delivery, but its survival), that, when combined with major flaws such as ill-defined requirements or the wrong methodology, creates the spectacular failures. Or, put another way, it is how humans work together within large programs that, when based on poor practices, multiplies the negative results to such an irrational level.

With that in mind, what are some approaches to prevent this from occurring? The basics of ensuring clear sponsorship and proper steering committees should help, but more radical changes to the approach would be better. Let's take the command centre consolidation. Why do all 46 into 9 in one waterfall or 'big bang' effort? Instead, take two of the 9 regions and run two pilots, each constructed with its own sponsors, steering committee and contractor. Set an overall schedule for them to deliver to a well-defined but high-level set of requirements or outcomes. The team that completes its work on time and meets the requirements earns the opportunity to bid on the next two regions to be consolidated. For the team that does not meet the bar, the contractor is barred from the next round of work and a negative performance mark goes to the sponsors and government leads. Now the payoff for a contractor encouraging endless changes by government is gone. Further, you are breaking the work into more doable components that can then be improved in the next regional implementation. Smaller problem sets are eminently more doable than massive ones. By changing the approach to more incremental work with short cycles, and aligning program structure and incentives to getting real results, I think you would find a dramatic difference in the delivery of the project.

While these changes would certainly improve project delivery, I am sure there are several other elements that have contributed to the problems. What would you change? How do we get IT projects to stop being such a huge portion of wasted taxpayer funds?

I look forward to your comments.

Best, Jim

* The Times, which is a very good newspaper, unfortunately does not provide access to its articles via the internet without a subscription. If you have a subscription, the article title is ‘Scandal of the big spenders who have cost taxpayers dear’ published on January 9, 2012.

Ensuring Project Success II: Best Practices in Project Delivery

I hope that you are back from holidays well-rested and ready to go for the new year. My previous post provided some tips on how to get off to a great start of the new year; today's post will focus on some additional best practices for project delivery (I covered project initiation and project reporting and communication practices in my December post on project delivery). Today, I will cover project management best practices. I expect to extend these posts with project delivery best practice pages that will cover more details over the next month.

As I previously mentioned, project delivery is one of the critical services that IT provides. And yet our track record as an industry in project delivery is at best a mixed bag. Many of our business partners do not have confidence in predictable and successful IT project delivery. And the evidence supports their lack of confidence, as a number of different studies over the past five years put the industry success rate below 70% or even 50%. A good reference in fact is the Dr. Dobb's site, where a 2010 survey found project success rates to be:

  • Ad-hoc projects: 49% are successful, 37% are challenged, and 14% are failures.
  • Iterative projects: 61% are successful, 28% are challenged, and 11% are failures.
  • Agile projects: 60% are successful, 28% are challenged, and 12% are failures.
  • Traditional projects: 47% are successful, 36% are challenged, and 17% are failures.
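For those who like to poke at the numbers, the survey figures above can be restated as data. A minimal sketch in Python (the dictionary keys are my shorthand for the survey's categories):

```python
# Project outcome percentages from the 2010 Dr. Dobb's survey quoted above.
rates = {
    "ad-hoc":      {"successful": 49, "challenged": 37, "failed": 14},
    "iterative":   {"successful": 61, "challenged": 28, "failed": 11},
    "agile":       {"successful": 60, "challenged": 28, "failed": 12},
    "traditional": {"successful": 47, "challenged": 36, "failed": 17},
}

# Which approach fared best and worst on outright success, and by how much?
best = max(rates, key=lambda a: rates[a]["successful"])
worst = min(rates, key=lambda a: rates[a]["successful"])
gap = rates[best]["successful"] - rates[worst]["successful"]
print(best, worst, gap)  # iterative traditional 14
```

The 14-point spread between iterative and traditional approaches foreshadows the point made later in this post about favoring incremental methods over waterfall.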

So, obviously not a stellar track record in the industry. And while you may feel that you are doing a good job of project delivery, typically this means the most visible projects are doing okay or even well, while there are issues elsewhere in less visible or lower priority projects.

There are also a few critical techniques to overcome obstacles, including how to get the right resource mix, doing the tough stuff first, and when to use a waterfall approach versus incremental or agile. I will cover the first two areas today and the rest over the coming weeks.

Project Management – I think one of the key positive trends in the past decade has been the rise of certified project managers (PMPs) and the leverage of PMI education and structure for project activities. These learnings and structure have substantially raised the level of discipline and knowledge of the project management process. But, like many templates, if used without adjustment and with minimal understanding of the drivers of project failures, you can have a well-run project that still does not deliver what was required.

First, though, I would reinforce the value of ensuring your project managers have a clear understanding of the latest industry PMI templates and structure (PMBOK), and your organization should have a formalized project methodology that is fully utilized. You should have at least 50% of the project managers (PMs) within your organization PMP certified. If you have a low proportion of PMs with certification today, set a goal for the team to achieve a step improvement through a robust training and certification program to move the needle. This will imbue an overall level of professionalism and technique within your PM organization that will be beneficial. The first foundational attributes of good PMs are discipline and structure. Having most, if not all, of your PM team familiar with the PMI body of knowledge will establish a common PM discipline and structure for your organization.

So, assuming you have in place a qualified PM team and a defined project methodology primarily based on the PMI structure, then we can focus on three key project management best practice areas. Note these best practice areas help mitigate the most common causes of project failures which include:

  • lack of sponsorship
  • inadequate team resources or skills
  • poor or ambiguous requirements or scope
  • new technology
  • incorrect development or integration approaches for the scope, scale, timeframe or type of project
  • inadequate testing or implementation preparation
  • poor design

As I mentioned in the Project Initiation best practices post, there is a key early gate that must be in place to ensure effective sponsorship, adequate resources, and proper project scope and definition. Most projects that fail (and remember, this is probably half of the projects you do) started to fail at the beginning. They were started without clear business ownership or decision-makers, or with a very broad or ambiguous description of delivery, or they were started when you were already over-committed, without senior business analysts or technology designers. Use this entry gate to ensure the effort is not just a waste of money, resource time, and business opportunity.

So assuming you have executed project initiation well, you should then leverage the following project management best practice techniques.

Effective definition, design and implementation approaches: Failure in these areas is often due to what I would call 'beating around the bush'. In many projects, getting the real work done (well-defined requirements, the core engineering work, thorough testing) is hard and takes real talent. Many teams just work on the edges and don't tackle the tough stuff. I recommend ensuring that your project managers and design leads are up for tackling the tough stuff in a project and doing it as early as possible (not late in the cycle). And when it is done, it should be done fully. So, some examples:

  • if requirements are complex and ambiguous at the start, a good project manager will schedule rigorous project definition sessions. The PM will ensure broad participation, and good attendance and engagement at the meetings. Perhaps even a form of rapid requirements gathering is used to quickly elicit the definitions needed. Or the project team will use an agile or highly iterative, incremental approach to take the user interfaces and outputs back to users for early clarification and verification.
  • if new technology is being used, or there is a critical component or algorithm that must be engineered and implemented, then work begins early on these core sections. A good PM knows you do not leave the tough stuff to be done last. In particular, this means you do not wait for the final weeks of the project to do complex integration. Instead, you develop and test the integration early with stubbed-out pieces. Pilot and test the tough stuff, in parallel if necessary. And if there are major hurdles not anticipated, you will find them out early and can kill or alter the project much sooner with far less sunk cost.
  • as CIO, make sure that the major IT initiatives your company is investing in make a difference where the rubber meets the road. The system must make the most common tasks easier, not just deliver lots of cool features. You want the front line staff to want to use the new system (not be forced to convert).
  • in sum, the approach for the design and implementation work should be incremental, parallelized and iterative rather than serial and waterfall, or as I like to call it, 'big bang theory'. Waterfall can be used for well-known domains, well-understood engineering, and short or mid-range timelines. Otherwise, it should be avoided.

Project checklists, inspections and assessments: Use lightweight project and deliverable review approaches to identify issues early and enable correction. Given the failure rate on projects is typically 50% or more, and given the over-demand on most IT shops, nearly every project will have issues. The key is to detect the issues early enough that they can be remedied and the project brought back on track. I have always been stunned at how many projects, especially large ones, go off track, continue to be reported green while everyone on the project team thinks things are going wrong, and only at the very last minute go from green to red. Good quick assessments of projects can be done frequently and easily and enable critical early identification of issues. These assessments will only work if there is an environment where it is okay to point out issues that must be addressed. (By the way, if 90%+ of your projects are green, that means either no one can report issues or you have an extremely good project team.) The assessments can be a checklist or a quick but thorough review by experienced PMs; issues should be noted as potential problems, compiled, and left with the PM and the sponsors to sort through and decide if and how to address. You and your senior manager in charge of the PMs should get a monthly summary of the assessments done. Often you will find the same common issues across projects (e.g. resource constraints or late sponsor sign-off on requirements) and you can use this to remedy the root cause.
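To make the quick-assessment idea concrete, here is a minimal, hypothetical sketch in Python: a short yes/no checklist answered by an experienced PM, compiled into a list of potential issues left with the PM and sponsors. The question wording and the green/amber/red thresholds are illustrative only, not a standard.

```python
# Illustrative checklist items; a real one would be tailored to your shop.
CHECKLIST = [
    "Sponsor attended the last two steering meetings",
    "Requirements are signed off by the business",
    "Key resources are staffed, not just named",
    "The toughest engineering work is scheduled early",
    "Integration is being tested incrementally, not left to the end",
    "Issues raised by the team appear in the status report",
]

def assess(answers):
    """answers: dict mapping checklist item -> bool (True = satisfied).
    Returns (status, list of potential issues for the PM and sponsors)."""
    issues = [item for item in CHECKLIST if not answers.get(item, False)]
    if not issues:
        status = "green"
    elif len(issues) <= 2:
        status = "amber"
    else:
        status = "red"
    return status, issues

# Example: two unsatisfied items yields an amber status with the issues
# compiled for the PM and sponsors to sort through.
answers = {item: True for item in CHECKLIST}
answers["Requirements are signed off by the business"] = False
answers["Issues raised by the team appear in the status report"] = False
status, issues = assess(answers)
print(status, issues)
```

Compiling these results monthly across projects is what surfaces the recurring root causes (e.g. late sponsor sign-off) mentioned above.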

Another key technique to use in addition to project assessments is inspections. Just about any deliverable for a project (requirements, high-level designs, configurations, test plans) can be inspected, not just code. And inspections are an under-utilised yet typically the most effective way to find defects (other than production). Have your project teams use inspections for critical deliverables. They work. And augment inspections with a few deep dives (not inspections) of your own of both in-flight and completed projects to ensure you have a pulse on how the work is getting done and what issues your teams are facing.

I hope to add some Project Assessment Checklists to the best practice pages in the coming month. Anything you would add to the list?

Next week I hope to tackle Program Management and Release Management and key best practices in those areas.

Any good project stories you would like to share? Anything you would change in what I recommended? Let me know.

Best, Jim

 

Your Start of the Year Leadership Checklist

Just as I published a quick checklist for you to use as the year was closing, here is a checklist for your first week back to help you get off to a great start of the new year. Plus, this is a lot more fun than taking down those outdoor Christmas decorations or returning the unwanted gifts. So before the office gets busy, use your first few weeks to get a jump on outstanding results in 2012 with this list:

1. Remember to get done the things we planned in December. You have booked time in January with your team to do the detailed planning to ensure you have the IT goals for 2012 clearly defined with the key steps to get there. Knock it out with your team.

2. Set your 1st and 2nd quarter virtualization goals for your server and storage teams and sit down with them to ensure they are mapping out how to get it done. Get them off to a quick start.

3. Pick one or two major contracts to renegotiate in your favor this quarter. A quick hint: Oracle missed expectations last quarter so you may have an opportunity. Remember to hold tight, put something new on the table to get the most out of a deal, and insist on your terms and conditions (and if your company does not have an up-to-date contract template, put that on the plate with your Chief Procurement Officer to get it done).

4. Take those new insights that you gained from your holiday vacation (remember you were going to spend part of your vacation time reading a good management or IT book) and ensure you bring the view to your planning meeting.

5. Review your January schedule and ensure you have time with your customers fully scheduled. Invite one of them to kick off your planning meeting.

6. Also review the planning meeting agenda with your boss and ensure you capture any ‘messages’ he/she wants to make sure come across.

7. Sit down with the intranet team and ensure they are adding 1 or 2 helpful 'widgets' a quarter to your intranet site. Start with a 'How do I …' button, an improved search tool, or a wiki for corporate terms and abbreviations. The little helpful things mean a lot to the productivity of your company's employees.

8. If you don’t have BYOD yet, sit down with your client device team and review the plans to pilot and then implement it this year.

9. Schedule a visit for you and several of your team to review either a customer facing site (call center or retail store) or a key operations facility for your company. Ask questions and see how IT is working where the rubber meets the road in your firm. I am certain you will learn plenty.

10. Review and report on your performance for the past year – do it with thought and be provocative. Challenge yourself and your team where you have not delivered well. Then follow up with a high level and positive note to your entire team talking in broad strokes about the goals for the year. Strong communication at the start of the year will help ensure you and your team are lined up for success throughout the year.

Many of these items are reconnecting activities: with your business, with your customers, with your boss, and with your team. Before you start off on any major endeavor, it is critical to recheck the plan and the communication lines — that is in essence what we are doing. And with it you will be much more likely to have a successful and rewarding 2012.

All the best, and roger on those plans! Jim