First Quarter Technology Trends to Note

For those looking for the early signs of spring, crocuses and flowering quince are excellent harbingers. For those looking for signs of technology trends and shifts, I thought it would be worthwhile to point out some new ones and provide further emphasis or confirmation of a few recent ones:

1. Enterprise server needs have flattened, and the cumulative effect of cloud, virtualization, SaaS, and appliances means the corporate server market has fully matured. The 1Q13 numbers bear out that this trend is continuing (as mentioned here last month). Some big vendors are even seeing revenue declines in this space, and unit declines are possible in the near future. The result will be consolidation in the enterprise server and software industry. VMware, BMC, and CA have already seen their share prices fall as investors worry the growth years are behind them. Make sure your contracts consider potential acquisitions or change of control.

2. Could dual SIM smartphones be just around the corner? Actually, they are now officially here. Samsung just launched a dual SIM Galaxy in China, so it probably will not be long before other device makers follow suit. Dual SIM enables excellent flexibility – just what the carriers do not want. When you travel overseas, you will be able to insert a local SIM into your phone and handle all local or regional calls at low rates, while still receiving calls to your ‘home’ number. And for everyone who carries two devices, one for business and one for personal, you can now keep your business and personal numbers separate but carry only one device.

3. Further evidence has appeared of the massive compromises enterprises are experiencing due to Advanced Persistent Threats (APTs). Most recently, Mandiant published a report that ties the Chinese government and the PLA to a broad set of compromises of US corporations and entities over many years. If you have not begun to move your enterprise security from a traditional perimeter model to a post-perimeter design, make the investment. You can likely bet you are already compromised, so you need not only to lock the doors but also to thwart those who have breached your perimeter. A post here late last year covers many of the measures you need to take as an IT leader.

4. Big data and decision sciences could drive major change in both software development and business analytics. It may not be change on the level of what computers did to, say, payroll departments and finance accountants in the 1980s, but it could be wide-ranging. Consider that perhaps 1/3 to 1/2 of all business logic now encoded in systems (by analysts and software developers) could instead be handled by data models and analytics that make business rules and decisions in real time. Perhaps all of the business analysts and software developers will then move to developing the models and proving them out, or we could see a fundamental shift in the skills demanded in the workplace. We still have accountants, of course; they just no longer do the large amount of administrative tasks. Now, perhaps applying this to legal work….

5. The explosion in mobile continues apace. Wireless data traffic is growing at 60 to 70% per year and is projected to continue at this pace. The use of the mobile phone as the primary commerce device is likely to become real for most purchases in the next 5 years, so businesses are under enormous pressure to adapt and innovate in this space. Apps that can gracefully handle poor data connections (not everywhere is 4G) and hurried consumers will be critical for businesses. Unfortunately, there are not enough of these.

Any additions you would make? Please send me a note.

Best, Jim Ditmore


IT Security in the Headlines – Again

Again. Headlines are splashed across front pages and business journals as banks, energy companies, and government web sites have been attacked. As I called out six months ago, the pace, scale and intensity of attacks increased dramatically in the past year and was likely to continue to grow. Given that one of the most important responsibilities of a CIO and senior IT leaders is to protect the data and services of the firm or entity, security must be a bedrock capability and focus. And while I have seen a significant uptick in awareness of and investment in security over the past 5 years, there is much more to be done at many firms to reach proper protection. Further, as IT leaders, we must understand that IT is in a deadly arms race that requires urgent and comprehensive action.

The latest set of incidents are DDoS attacks against US financial institutions. These have been conducted by Muslim hacker groups, purportedly in retaliation for the Innocence of Muslims film. But this weekend’s Wall Street Journal outlined that the groups behind the attacks are sponsored by the Iranian government – ‘the attacks bore “signatures” that allowed U.S. investigators to trace them to the Iranian government’. This is another expansion of the ‘advanced persistent threats’ or APTs that now dominate hacker activity. APTs are well-organized, highly capable entities, funded by either governments or broad fraud activities, which lets them carry out hacking at unprecedented scale and sophistication. As this wave of attacks migrates from large financial institutions like JP Morgan Chase and Wells Fargo to mid-sized firms, IT departments should be rechecking their defenses against DDoS as well as other hazards. If you do not already have explicit protection against DDoS, I recommend leveraging a carrier network-based DDoS service as well as having a third party validate your external defenses against penetration. While the stakes currently appear to be a loss of access to your websites, any weaknesses found by the attackers will invariably be exploited later for fraud and potential data destruction. This is exactly the path of the attacks against energy companies, including Saudi Aramco, that recently preceded the financial institution attack wave. And no less than Leon Panetta has spoken about the most recent attacks and their consequences. As CIO, your firm cannot be exposed as lagging in this arena without possible significant impact to reputation, profits, and competitiveness.

So, what are the measures you should take or ensure are in place? In addition to the network-based DDoS service mentioned above, you should implement these fundamental security measures first outlined in my April post and then consider the advanced measures to keep pace in the IT security arms race.

Fundamental Measures:

1. Establish a thoughtful password policy. Sure, this is pretty basic, but it’s worth revisiting and a key link in your security. Definitely require that users change their passwords regularly, but set a reasonable frequency: force changes more often than every three months and users will write their passwords down, compromising security. As for password complexity, require at least six characters, with one capital letter and one number or other special character.
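As a sketch of how such a policy might be enforced at a registration or password-change screen, here is a minimal Python check mirroring the rules above. The function name and the exact definition of "special character" are illustrative assumptions, not a standard:

```python
import re

def meets_policy(password: str) -> bool:
    """Illustrative check for the policy above: at least six characters,
    one capital letter, and one number or other special character."""
    if len(password) < 6:
        return False
    if not re.search(r"[A-Z]", password):      # at least one capital letter
        return False
    if not re.search(r"[\d\W_]", password):    # a digit or a non-letter character
        return False
    return True
```

In practice you would pair a check like this with a list of banned common passwords, since length and character classes alone do not stop weak choices like “Password1”.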

2. Publicize best security and confidentiality practices. Do a bit of marketing to raise user awareness and improve security and confidentiality practices. No security tool can be everywhere. Remind your employees that security threats can follow them home from work or to work from home.

3. Install and update robust antivirus software on your network and client devices. Enough said, but keep it up-to-date and make it comprehensive (all devices).

4. Review access regularly. Also, ensure that all access is provided on a “need-to-know” or “need-to-do” basis. This is an integral part of any Sarbanes-Oxley review, and it’s a good security practice as well. Educate your users at the same time you ask them to do the review. This will reduce the possibility of a single employee being able to commit fraud through access retained from a previous position.

5. Put in place laptop bootup hard drive encryption. This encryption will make it very difficult to expose confidential company information via lost or stolen laptops, which is still a big problem. Meanwhile, educate employees to avoid leaving laptops in their vehicles or other insecure places.

6. Require secure access for “superuser” administrators. Given their system privileges, any compromise to their access can open up your systems completely. Ensure that they don’t use generic user IDs, that their generic passwords are changed to a robust strength, and that all their commands are logged (and subsequently reviewed by another engineering team and management). Implement two-factor authentication for any remote superuser ID access.

7. Maintain up-to-date patching. Enough said.

8. Encrypt critical data, and only critical data. Any customer or other confidential information transmitted from your organization should be encrypted. The same precaution applies to any login transactions that transmit credentials across public networks.
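By way of illustration, Python’s standard library ssl module can enforce certificate validation and a modern protocol floor for any outbound transmission of confidential data. This is a minimal sketch; the endpoint in the comment is a placeholder, not a real service:

```python
import ssl
import urllib.request

# Build a TLS context that verifies server certificates and hostnames
# and refuses pre-TLS 1.2 protocol versions.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Hypothetical usage ("example.com" is a placeholder):
# with urllib.request.urlopen("https://example.com/submit", context=context) as resp:
#     body = resp.read()
```

The point is to centralize one hardened context and reuse it, rather than letting individual applications disable verification ad hoc.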

9. Perform regular penetration testing. Have a reputable firm test your perimeter defenses regularly.

10. Implement a DDoS network-based service. Work with your carriers to implement the ability to shed false requests and enable you to thwart a DDoS attack.

Advanced Practices: 

a. Provide two-factor authentication for customers. Some of your customers’ personal devices are likely to be compromised, so requiring two-factor authentication for access to accounts prevents easy exploitation. Also, notify customers when certain transactions have occurred on their accounts (for example, changes in payment destination, email address, physical address, etc.).
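One common way to implement such two-factor codes is the HOTP/TOTP scheme standardized in RFC 4226 and RFC 6238, which most authenticator apps use. A minimal sketch in Python, using only the standard library (a production deployment would add rate limiting, clock-drift windows, and secure secret storage):

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current 30-second window."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval)
```

The server and the customer’s device share the secret once (typically via QR code); after that, both can compute the same short-lived code independently.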

b. Secure all mobile devices. Equip all mobile devices with passcodes, encryption, and remote wipe capability. Encrypt your USB flash memory devices. On secured internal networks, minimize encryption to enable detection of unauthorized activity as well as diagnosis and resolution of production and performance problems.

c. Further strengthen access controls. Permit certain commands or functions (e.g., superuser) to be executed only from specific network segments (not remotely). Permit contractor network access via a partitioned secure network or secured client device.
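One way to enforce the “specific network segments” rule in an application gateway or command wrapper is a simple source-address check. This Python sketch uses the standard ipaddress module; the subnet values are hypothetical placeholders for your own secured admin segments:

```python
import ipaddress

# Hypothetical admin segments; substitute your own secured subnets.
ADMIN_SEGMENTS = [
    ipaddress.ip_network("10.20.0.0/16"),
    ipaddress.ip_network("10.30.5.0/24"),
]

def superuser_allowed(source_ip: str) -> bool:
    """Permit superuser commands only from designated internal segments."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ADMIN_SEGMENTS)
```

In practice this application-level check would supplement, not replace, firewall rules that enforce the same segmentation at the network layer.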

d. Secure your sites from inadvertent outside channels. Implement your own secured wireless network, one that can detect unauthorized access, at all corporate sites. Regularly scan for rogue network devices, such as DSL modems set up by employees, that let outgoing traffic bypass your controls.

e. Prevent data from leaving. Continuously monitor for transmission of customer and confidential corporate data, with the automated ability to shut down illicit flows using tools such as NetWitness. Establish permissions whereby sensitive data can be accessed only from certain IP ranges and sent only to another limited set. Continuously monitor traffic destinations in conjunction with a top-tier carrier in order to identify traffic going to fraudulent sites or unfriendly nations.

f. Keep your eyes and ears open. Continually monitor underground forums (“Dark Web”) for mentions of your company’s name and/or your customers’ data for sale. Help your marketing and PR teams by monitoring social networks and other media for corporate mentions, providing a twice-daily report to summarize activity.

g. Raise the bar on suppliers. Audit and assess how your company’s suppliers handle critical corporate data. Don’t hesitate to prune suppliers with inadequate security practices. Be careful about having a fully open door between their networks and yours.

h. Put in place critical transaction process checks. Ensure that crucial transactions (i.e., large transfers) require two personnel to execute, and that regular reporting and management review of such transactions occurs.

i. Establish 7×24 security monitoring. If your firm has a 7×24 production and operations center, you should supplement that team with security operations specialists and the capability to monitor security events across your company and take immediate action. If you are not big enough for a 7×24 capability, then enlist a reputable third party to provide this service for you.

I recommend that you communicate the seriousness of these threats to your senior business management and ensure that you have the investment budget and resources to implement these measures. Understand that the measures above will bring you current, but you will need to remain vigilant given the arms race underway. Ensure your 2013 budget allows further investment, even if only as a placeholder. For those security pros out there, what else would you recommend?

In the next week, I will outline recommendations on cloud which I think could be very helpful given the marketing hype and widely differing services and products now broadcast as ‘cloud’ solutions.

Best, Jim Ditmore


Achieving Outstanding IT Strategy

Developing your IT strategy should be based on a thoughtful, ongoing process. Too often, strategy is developed as a one-time event (typically with consultants) or as a hurried episode following a corporate vision statement that has been handed down. A considered approach, with robust industry and technology trend analysis coupled with a two-way dialogue on business strategy, can yield much better results. I have mapped out below a best practice strategy process that I have leveraged in previous organizations; it will ensure a strong connection with the business strategy, leverage of technology trends, and a clear cascade into effective goals and plans. With such a process in hand, the senior technology leader should be able to drive a better IT strategy and, importantly, an improved business strategy.

The IT strategy process should start with two sets of research and analysis that interplay: a full review of the business strategy and a comprehensive survey of the key technology trends, opportunities and constraints. It is critical that the business strategy drive the technology strategy, but aspects of the business strategy can and should be driven by the technology. Utilize the technology trend analysis, as well as an understanding of the key strengths and weaknesses of the current technology platform, as a feedback loop into the business strategy.

When working with the business to help them hone their strategy, I recommend leveraging the corporate competency approach from The Discipline of Market Leaders by Michael Treacy and Fred Wiersema. In essence, Treacy and Wiersema state that companies that are market leaders do not try to be all things to all customers. Instead, market leaders recognize their competency in either product and innovation leadership, customer service and intimacy, or operational excellence. Good corporate examples of each would be 3M for product, Nordstrom for service, and FedEx for operational excellence. Thus your business strategy should not attempt to excel at all three areas but instead leverage your area of strength and extend it further while maintaining acceptable performance elsewhere. This focus is particularly valuable when working to prioritize an overly broad and ambitious business strategy.

Below is a diagram that maps out this strategy process or cascade:

The process anticipates that the corporate strategy will drive multiple business unit strategies that IT will then support. It is appropriate to develop business unit technology strategies that operate in concert with both the business unit strategy and the corporate technology strategy. Once the strategies are established, it is then critical to define the technology roadmap for each business unit. The roadmap can be viewed as a series of snapshots of the critical technology capabilities and systems every 3 or 6 months for the next two years, providing a definitive plan of how the business unit’s technology will evolve and be delivered to meet the business requirements. These roadmaps should be tied into and should support an overall technology reference architecture for the corporation. This ensures that the technology roadmaps will work in concert with each other and enable critical corporate capabilities, such as understanding the entire relationship with a customer across products and business units.

I recommend executing the full process on an annual basis, synchronous with the corporate planning cycle, with quarterly updates to the roadmaps. It is also reasonable to update the technology trends and business unit strategies every six months with additional data and results.

What would you add to this strategic planning approach? Have you leveraged different approaches that worked well?

Best, Jim Ditmore

Getting Things Done: A Key Leadership Skill

It is a bit ironic that this post has taken me twice as long to do as my average post. But while it is an important topic, it is difficult to pinpoint, of all the practices you can leverage, which ones really help you or your team or organization get the right things done. So, just before the Memorial Day holiday, here is a post to help you execute better for the rest of the year and meet those goals.

Have a great holiday weekend.  Jim

Getting things done is a hallmark of effective teams. Unfortunately, the focus and flow of large business organizations, combined with the influences of the modern world, erode our ability to get the right things done. To raise productivity to that of a high performance team, as a senior leader you should impart an ability to get the right things done at the divisional and team level within your organization. And while there are myriad reasons that conspire to reduce our focus or effectiveness, there are a number of techniques and practices that can greatly improve selection and capacity at all levels: the overall organization or division, the working team, and the individual.

Realize that the same positive forces that ensure a focus on business goals, drive consensus within an organization, or require risk and control to be addressed can also be mis- or over-applied, resulting in organizational imbalance or gridlock. Couple this with too many waterfall or ‘big bang’ approaches and you can get not just ineffectiveness but spectacular failures of large efforts. At the organizational level, you should set the right agenda and framework so the productivity and capacity of your IT shop can be improved at the same time you are delivering to the business agenda. To set the right agenda, look to the following practices:

  • provide a clear vision with robust goals that include clear delivery milestones and that are aligned to the business objectives. The vision should also be compelling — your team will only outperform for a worthwhile aspiration.
  • avoid too many big bets (an unbalanced portfolio) – your portfolio should be a mix of large, medium and small deliveries. This enables you to deliver a regular stream of benefits across a broader set of functions and constituents with less risk. Often a nice balancing investment is to drive several small efforts in HR and Finance that streamline and automate common processes used by much of the corporation (thus a good, broad positive impact on corporate productivity).
  • aggregate your delivery – often IT efforts are so tightly tied to immediate delivery for the business that the IT processes are substantially penalized, for example:
    • a continuous stream of applications and updates introduced into production without a release schedule (causing a large amount of duplicative or inadequate design, testing and implementation)
    • a highly siloed delivery approach where every minor business unit has its own set of business systems, resulting in redundant feature build and maintenance work.
  • address poor quality standards and ineffective build capability, including:
    • defects corrected late: correct defects as early in the build process as possible, since defects corrected at their source (design or implementation) are far less costly to fix than those corrected once in production
    • low build productivity due to a lack of investment in the underlying ‘build factory’ (tools, training and processes) or teams that do not leverage modern incremental or agile methods
    • delivery by the internal team of the full stack, where packaged software is not leveraged (recently I have encountered shops trying to build their own software distribution tools or databases).

So, in sum, at the organizational level, provide clarity of vision, review your portfolio for balance, make room for investments in your factory and look to simplify and consolidate.

At the team level, employ clarity, accountability, and simplicity to get the right things done. Whether it is a project or an ongoing function:

  • are the goals or deliverables clear?
  • are the efforts broken into incremental tasks or steps?
  • are the roles clear?
  • are the tasks assigned?
  • are there due dates? or good operational metrics?
  • is the solution or approach straightforward?
  • is there follow up to ensure that the important work takes priority and the work is done?

And then, most important, are you recognizing and rewarding those who get things done with quality? There are many other factors you may need to address to enable the team to achieve results, from providing specific direction, to coaching, to adding resources or removing poor performers. But frequently well-resourced teams can spin their wheels working on the wrong things, delivering with poor quality, or just not focusing on getting results. This is where clarity, accountability and simplicity make the difference and enable your team to get the right things done.

Most importantly, getting the right things done as an individual is a critical skill that enables outperformance. Look to hone your abilities with some of following suggestions:

  • recognize we tend to do what is urgent rather than what is important. Shed the unimportant but urgent tasks and spend more time on important tasks. In particular, use the time to be prepared, improve your skills, or do the planning work that is often neglected.
  • hold yourself accountable, make your commitments. As a leader you must demonstrate holding yourself to the same (or higher) standards as those for your team.
  • make clear, fact-based decisions and don’t over-analyze. But seek inputs where possible from your team and experts. And leverage a low power-distance (PDI) style so you can avoid major mistakes.
  • and finally, a positive approach can make a world of difference. Do your job with high purpose and in high spirit. Your team will see it, and it will lighten their step as well.

So, those are the practices from my experience that have been enablers to getting things done. What would you add? or change? Do let me know.

Best, Jim Ditmore



Key Steps to Building a High Performance Team: Prune and Improve

Today I revisit a core topic of Recipes for IT: High Performance IT Teams. Before I provide background on this series of posts, I thought it was about time for a quick blog update. Recipes for IT continues to attract new readers and has a substantial ongoing readership. It is quite heartening to see the level of interest, and I really appreciate your visits and comments. I will strive to regularly add thoughtful and relevant material for IT leaders and hope that you continue to find the site useful. I do recommend that new readers check out the introduction page and the various topic areas, as you should find useful material of strong depth and actionability that can help you be more successful. This site also continues to do well in Google page rankings on a number of topic areas, particularly service desk queries and IT metrics and reporting. If there are topics you would like me to tackle, please do not hesitate to send me a comment.

Now back to some background on Building High Performance Teams. This post is the fifth on this topic, and there will be one further post to complete the steps of building a high performance team. I hope you find the material both enlightening and actionable. One key for IT leaders is to consider the tasks required to build a high performance team as some of your most important activities. At nearly every poor performing organization that I have been responsible for turning around, I have found that the primary reason for inadequate talent and poor performing teams is inadequate manager attention and focus on these activities. So, work hard to make the time, even though you would much rather be doing other activities. And now for the post. Best, Jim

Building High Performance Teams: As I have mentioned previously, I have a positive outlook on the competence of today’s managers and leaders. I see more material and approaches available for managers than ever before, and more effort and study applied by the managers as well. Much of the material, though, is either very narrow in spectrum or a single technique, and does not address the full range of practices and knowledge that must be brought to bear to build and sustain a high performance IT team. So, in this series of posts I have assembled a set of practices that I have leveraged, or have seen peers and other senior IT leaders use, to build high performance IT teams, so that managers have a broad source of practice at their disposal.

A senior IT leader, with his or her senior management team, can use these practices to build a high performing team through a sequence of steps.

Today’s post covers how to prune and improve as required. The previous steps are covered in prior posts, and I have constructed reference pages (linked above) on the first four steps. Subsequent posts will cover the remaining steps as well as a summary.

I think the aspiration of building a high performing team is a lofty, worthwhile, and achievable vision. If you have ever participated in a high performance team at the top of their game, in other words: a championship team, then you know the level of professional reward and sense of accomplishment that accompanies such membership. And for most companies that rely significantly on IT, if their IT team is a high performing team, it can make a very large difference in their products, their customer experience, and their bottom line. Building such a championship team is not only about attracting or retaining top talent, it is also necessarily about identifying those team members who do not have the capabilities, behaviors, or performance to remain part of the team and addressing their future role constructively but firmly.

Let’s first revisit some key truths that underlie how to build a high performance team:

– top performing engineers, typically paid similarly to their mediocre peers, are not 10% better but 2x to 10x better

– having primarily senior engineers, rather than a good mix of interns, graduates, and junior, mid and senior level engineers, will result in stagnation and overpaid senior engineers doing low level work

– having a dozen small sites with little interaction is far less synergistic and productive than having a few strategic sites with critical mass

– relying on contractors to do most of the critical or transformational work imposes a huge penalty on your ability to retain and grow top engineers

– line and mid-level managers must be very good people managers, not merely great engineers; otherwise you will likely have difficulty retaining good talent and you will not develop your talent

– engineers do not want to work in an expensive in-city location like the financial district of London (that is for investment bankers)

– enabling an environment where mistakes can be made, lessons learned, and quality, innovation and initiative prized means you will get a staff that behaves and performs accordingly.

With these truths in mind (and these are the same ones you used when setting about building the team), having executed the first four steps, you should have adequate capacity to begin thoughtful pruning and improvement of your organization. While there are circumstances when a poor performing manager or senior engineer causes so many issues that it is a benefit to remove them immediately, in many cases you must have adequate resource capacity to meet demands so that, once you begin pruning, your team is not overtaxed and penalized as a result.

Pruning should begin at the top and work down from there. Start with your directs and the next level below. Consider the span of control of your organization and the number of levels; high performing organizations are generally flatter, with greater spans of control. In assessing your team, I recommend leveraging a talent calibration approach, either the typical 9-box or a top-grading variant. The key to calibration is to formulate three groups: the top performers on your staff, whom you will need to further develop and challenge; the ‘well-placed experts’ and solid performers, who will need support and attention but will execute reliably; and those whose performance and potential are lacking and who must step up to continue in their role. With these three groupings of your management team identified, ensure you lay out crisp plans for all three groups and execute against them. (Remember, it will be very difficult to subsequently demand that your line managers address their staff issues if you have not shown a capability to execute such accountability with your own team.)

One area to particularly focus on is time-boxing the development plans for poor performers. As these are senior managers, the time to address performance issues should be shorter, not longer. I recommend you start the development plan with a succinct, clear conversation on high expectations and the shortcomings of their performance, with examples where possible. You should provide a writeup covering this discussion at its conclusion. Jointly lay out key deliverables, milestones, expected behavior changes and results with the affected leader. Be open to the possibility that the employee may know they are in over their head and may be looking for an alternative. While I am not advocating moving problem performers around, there may be a role within the company or elsewhere that is a much better fit; look to assist with such a transition if it benefits both the company and the employee. If the employee insists this is the role they want and they are willing to step up and adjust, then you should provide support under a tight timeline for them to achieve it. Monitor the plan regularly with HR. If you follow up diligently, it will become evident quite quickly whether the employee can rise to the new level or not. Generally, in my experience, a surprisingly large percentage of poor performing employees will drop out of their own accord once you have provided clear expectations and no escape routes other than the hard work to get there (assuming, of course, that there is a modest but respectable exit plan for them). It is also key to treat the employee with respect and fairness throughout the process and to focus on the results and outcomes.

Equally, though, I have seen more than a handful of senior leaders and managers express surprise when confronted with poor performance, because no one had previously communicated their performance issues clearly and firmly. Once they understood, and once the higher goals and expectations were known, many of these individuals (and others as well) definitively stepped up and improved significantly. Thus, until you communicate the higher goals and expectations clearly AND communicate where they must improve (constructively, with specifics), the likelihood of improvement is minimal. So, allocate the time to hold the tough but fair conversations and provide this information. Once the conversations are held, over the next 2 to 3 months you should take action based on the results: either poor performing managers will be exited (or moved to a much more fitting role) or poor performers will become good performers. One of the interesting results of such actions is that the remaining team, upon seeing poor performers exited, will view the results positively. In fact, I have experienced some very strong reactions from team members who felt a dead weight was off their shoulders, as they no longer had to make up for the defects and negative performance of the just-exited team member. Further, I have received multiple (back-handed) compliments along the lines of ‘Wow, we are glad management finally figured out what to do and took action!’. So do not be persuaded that the team will view performance actions solely in a negative light.

Once you have initiated the performance management process and are well into pruning your team, you can work with your managers and HR department to address areas lower in the organization. Remember it is key to first set expectations and goals that cascade from and match your overall goals. Then ensure you hold managers and senior engineers to a higher bar than the mid-level and junior staff. For senior staff, you are looking not just for technical competence; they must also meet the standard for behaviors such as problem solving/solution orientation, teamwork, initiative and drive, and quality and a focus on doing things right. And they should exhibit the right leadership and communication skills.

Driving such pruning and development work through your organization is important but also a delicate task. Generally, with few exceptions, management in an IT organization can improve how it handles performance management. Because most of the managers are engineers, their ability to interact firmly with another person in a highly constructive manner is typically under-developed. Thus, some managers may not be up to this pruning task, or their calibration of talent could be well off the mark. So, leverage your HR resources to guide management, and personally check in to ensure proper calibration of talent by your lower-level managers. Provide classes and interactive sessions on how to coach and provide feedback to employees. Even better, insist that performance reviews and development plans be read and signed off by the manager’s manager before being delivered, to improve their quality. This is a key element to focus on because a poorly executed improvement plan could backfire. Remember that the line manager’s interaction with an employee is the largest factor in undesired attrition and employee engagement. Of course, this is all the more reason to replace poor-performing managers with good leaders, but do so effectively and firmly. Use the workforce plans that you developed in the Build step to ensure your pruning and development also helps you move toward your strategic site goals, contractor/staff mix targets, and junior/mid/senior profiles.

Pruning and improvement is the tough but necessary step in building a high-performance team. If done well, it will provide additional substantial lift to the team and, more importantly, enable ongoing sustainment. It requires discipline and focus to execute the steps we would all prefer to avoid but that are necessary for reaching the final high-performance stages.

What has been your experience, either as a leader or a participant in such efforts? What have you seen go very well? Or terribly wrong? I look forward to your perspective.

Best, Jim Ditmore

IT Service Desk: Turning around a ‘helpless’ desk

In one of our earliest posts on service desks, I mentioned how an inherited service desk had delivered such poor service that it was referred to by users as the ‘Helpless Desk’ rather than the Help Desk. With that in mind, for those IT leaders who have a poor service situation on your hands with your most important customer interface, this post outlines how to stabilize and then turnaround your service desk. For those new to this site, there is a service desk reference page and also posts to understand service desk elements and best practices.

Service desks can underperform for a number of reasons, but ongoing poor performance is generally due to a confluence of causes. Typically, underlying issues thrust service desks into poor performance when combined with major changes to the supply (the agent service) or the demand (the calls and requests coming into the desk). It is important to recognize that service desks are, in essence, call centres. Call centre performance is driven by supply and demand, with an effective service at an efficient cost representing equilibrium: the point at which the competing forces of supply and demand are optimized with each other. A supply-side or demand-side ‘shock’ can move the state of equilibrium to a point outside of management control and, if there are other fundamental issues, will result in sustained underperformance by the Service Desk.

There is a ‘tipping point’ within call centre mechanics beyond which the rate of deterioration becomes exponential: the gentle gradient of decline does not last long before service falls over the cliff edge (wait times of seconds quickly become minutes and then tens of minutes, even hours). Calls are abandoned by customers, with call backs adding further volume. Agents become overworked and stressed due to the tone of the calls, their efficiency reduces and attrition goes up, exacerbating any supply shortage. These dynamics also work in reverse, and so what can seem an insurmountable problem can in fact be rapidly returned to stability if managed appropriately.
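This tipping-point dynamic follows directly from queueing mathematics. A minimal sketch using the standard Erlang C model, with purely illustrative call volumes and handle times (none of these figures come from the post), shows how average waits explode when agent supply shrinks only slightly:

```python
# Illustrative Erlang C model: average wait explodes as occupancy nears 100%.
def erlang_c(agents: int, offered_load: float) -> float:
    """Probability a caller must wait (Erlang C), given offered load in Erlangs."""
    b = 1.0
    for k in range(1, agents + 1):
        b = offered_load * b / (k + offered_load * b)   # Erlang B recursion
    rho = offered_load / agents
    return b / (1 - rho + rho * b)

def avg_wait_seconds(agents: int, calls_per_hour: float, aht_seconds: float) -> float:
    load = calls_per_hour * aht_seconds / 3600.0        # offered load in Erlangs
    pw = erlang_c(agents, load)
    return pw * aht_seconds / (agents - load)           # average speed of answer

# 300 calls/hour at a 5-minute handle time = 25 Erlangs of workload.
for agents in (32, 29, 27, 26):
    w = avg_wait_seconds(agents, 300, 300)
    print(f"{agents} agents -> occupancy {25/agents:.0%}, avg wait {w:.0f}s")
```

Losing just a handful of agents against a 25-Erlang workload moves average waits from a few seconds to several minutes, which is why a modest supply shock can push a desk over the cliff edge so quickly.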

Common supply side issues include:

  • Organisations increasingly use headcount caps and quotas to control their cost base. As the quota filters through the organisation, there can be a tendency to retain ‘higher end’ roles, which means the Service Desk comes under particular scrutiny and challenge. A reduction in the supply of labour (without equivalent changes in demand) can very quickly lead to significant service deterioration.
  • Similarly, Service Desk tends to be a function in which organisations have an uplifted appetite to make organisational changes to outsource and offshore (and similarly insource and onshore as the cycle runs). The wholesale replacement of the Service Desk workforce is a fairly common scenario within the industry and is frequently the root cause of acute service issues in the run up to change (as attrition without replacement bites) and during and post change (as a new workforce struggles to support their new customer base).
  • Any issue or initiative that either reduces the availability of agents to handle live calls or leads to a significant increase in transaction time can very quickly have a catastrophic impact on service. For example: the implementation of a new Service Management toolset is likely to elongate call duration in the short to medium term; a call centre with a high attrition rate will constantly lose agents just as they start to perform, to be replaced by trainees performing at a sub-optimal level; and a call centre operating at too high an occupancy level will quickly burn out staff and suffer an increasing level of absenteeism.

Demand side issues commonly include:

  • Growth of the user base, generating an uplifted volume of contacts to the Service Desk.
  • An increase in contacts per supported user, driven by increasing IT usage or deterioration of IT performance (this is frequently driven by Business or IT change activity delivered by Projects and Programmes – such as the deployment of a new application to the workforce).

Irrespective of the root cause of the failure, service remediation needs to be a concerted effort combining strong analysis with disciplined planning and focused execution. Identifying that there is an issue and responding appropriately in a timely manner should happen automatically if you are already operating with maturity from fact-based metrics that have a healthy mix of lead and lag indicators. If the organisation is less mature in its use of metrics (and particularly lead indicators), then the ‘crisis’ is not likely to be noticed (or at least taken seriously by senior leadership) until after the Service Desk hits the tipping point and service is deteriorating at an alarming pace, generating severe customer dissatisfaction (i.e., until it is too late).

Remediating a failing Service Desk requires multiple and varied actions dependent upon the root cause of the issues. The approach to identifying and rectifying those root causes can be managed effectively by following a logical framework.

Step 1 – Stabilize

If service has tumbled over the tipping point and is deteriorating rapidly, there is going to be little sponsorship for an extended analysis and planning exercise. Results – or at least avoiding further deterioration of performance – will be expected immediately. Your first priority is to create the space to put together a meaningful recovery plan.

Do everything that you can to boost the supply side in the short term (overtime, shift amendments, cessation of any non-time-critical, non-customer-contact work by your agents, diverting resources from other positions into customer service roles, bringing in temporary resources, etc.). This will not fix the issue and is not a sustainable containment strategy; it will, however, create the window of opportunity you require and give a much-needed boost to stakeholder confidence that the ‘freefall’ may be over. By itself, it will reduce the cycle of abandons and call backs that creates additional work for the team.

Similar attention should be paid to any demand-side actions that can be deployed quickly; immediate impact is less likely on the demand side, but there are still steps that can be taken. If there are recent system or application rollouts that are generating questions and calls, it may be worthwhile to send out an FAQ or quick tips to help employees understand what to do without calling the service desk. And any self-help initiatives already in the pipeline could be accelerated to remove some calls. While these actions are more likely to form elements of your strategic recovery plan, they may provide some level of relief.

Step 2 – Undertake the Analysis

Your leadership group and analysts need to undertake the analysis to understand why service has deteriorated. What has gone wrong, when, where and why? If your desk has been performing well (or even adequately) for some time, remember that a recent ‘change’ in either the demand or supply side is likely to be the root cause.

If the desk has been underperforming for a significant period, there are likely to be more systemic causes of the failure and so a full strategic review of your operations is required. Reading the full set of Service Desk Best Practices published within Recipes for IT will provide guidance on the areas of focus required.

After understanding your call volumes and their trends (everything from call time to why customers are calling) you should be able to identify some of the root causes. Are there new issues that are now in the top 10 call reasons? Are your call times elongated? Have call volumes or peaks increased? For each shift in metrics, analyze for the following:

  • determine if the root cause for a customer call is due to:
      • system issues or changes
      • user training or understanding
      • lack of viable self-service channel
  • identify if increases in calls are due to:
      • underlying user volume increases or growth
      • new user technologies or systems
      • major rollouts or releases that are problematic
  • or if service is suffering due to:
      • lack of resources or mismatched resource levels and call volumes
      • inadequate training of service desk personnel
      • new or ineffective toolsets that elongate service time
      • inefficient procedures or poor engagement
      • high attrition or loss of personnel

If you do not have adequate metrics to support the analysis, then you will need to establish basic metrics collection as the first, fundamental step.
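As a sketch of the kind of call-driver analysis described above, the following compares call reasons across two periods to flag drivers that are new or growing sharply. All reasons, volumes, and the 50% growth threshold are hypothetical:

```python
from collections import Counter

# Hypothetical call logs: (reason, handle_time_seconds). Names are illustrative.
last_month = [("password reset", 240)] * 300 + [("printer", 400)] * 120 + \
             [("email", 350)] * 100
this_month = [("password reset", 240)] * 310 + [("printer", 400)] * 125 + \
             [("email", 350)] * 105 + [("new CRM rollout", 600)] * 280

def call_profile(calls):
    """Return per-reason call volumes and total handle time."""
    volumes = Counter(reason for reason, _ in calls)
    total_handle = Counter()
    for reason, secs in calls:
        total_handle[reason] += secs
    return volumes, total_handle

before, _ = call_profile(last_month)
after, handle = call_profile(this_month)

# Flag reasons that are entirely new or whose volume grew more than 50%.
for reason, count in after.most_common():
    delta = count - before.get(reason, 0)
    if before.get(reason, 0) == 0 or delta / before[reason] > 0.5:
        print(f"investigate '{reason}': {count} calls (+{delta}), "
              f"{handle[reason] / 3600:.0f} agent-hours consumed")
```

In this invented data set, only the new rollout is flagged; the established call reasons grew too little to warrant investigation.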

Step 3 – Construct the Recovery Plan

Constructing the recovery plan needs to be genuinely solution-oriented and outcome-focused. The objective of the plan is not usually to resolve the source of the ‘shock’ in order to return to the old equilibrium (e.g., we are unlikely to want to back out the new Service Management toolset that we have just implemented; we will want to build forward). The objective is to detail the actions required to resolve the issues identified, as well as build a solid foundation that allows us to move back to steady-state operation, delivering with quality and consistency to our SLA.

A good recovery plan will be clear about what actions are to be undertaken, by whom, and when, to achieve which specific deliverable, with specific measures and metrics tracking progress toward the overall outcome.

The plan needs to prioritise actions that can make a positive impact at a scale and pace commensurate with the service issues being experienced. Many and multiple actions on a service recovery plan create a false sense of comfort for those involved in the crisis and will almost certainly hinder genuine service improvement. Targeted action is required, and this demands discipline and skill from the plan owner to ensure that benefits will be realised, will be relevant to the problem statement, and that the actions in aggregate will move bottom-line performance to where we need it to be.

We recommend a recovery plan that has the following elements:

a. Maintain an interim staffing boost to stabilize the service desk until other work is completed

b. If clear problem causes are identified (poorly rolled out systems, ongoing defective systems causing high volumes of calls) then ensure these areas are high priority for fixes on the next release or maintenance cycle.

c. Match resources to demand cycles based on current volumes and call handle times. Then forecast proper resource levels based on improvement initiatives and their completion dates.

d. If self-service can address a significant volume of calls, it should also be a top priority for investment, as this solution usually delivers an overall cost saving as well as a service experience improvement (e.g., password resets).

e. Ensure your service desk staff can efficiently handle calls — proper training, tool adjustments, thoughtful goals, incentives and a productive environment.

f. Address staff recruiting as well as development, incentives, training and career progression to ensure you will have an engaged and well-trained staff to deliver exceptional service.

g. Review your IVR and call centre technology setup and look to optimize menus, self-service, and call back options. Specialize resources into pools as appropriate to improve efficiency.

h. Define strategic service goals and SLAs along with initiatives to achieve them (e.g., additional or different sites, knowledge management tools, revamp of problem systems, etc).
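Element (c), matching resources to demand cycles, can start from simple call-centre arithmetic: convert forecast calls and handle times into workload, cap agent occupancy, and allow for shrinkage. A rough sketch; the occupancy and shrinkage assumptions are illustrative, not prescriptive:

```python
import math

def agents_required(calls: int, aht_seconds: float, interval_seconds: int = 1800,
                    max_occupancy: float = 0.85, shrinkage: float = 0.30) -> int:
    """Rough agents needed per interval: workload / occupancy cap / shrinkage."""
    workload = calls * aht_seconds / interval_seconds     # Erlangs of work
    on_phone = workload / max_occupancy                   # cap occupancy to avoid burnout
    return math.ceil(on_phone / (1 - shrinkage))          # cover breaks, training, absence

# Hypothetical daily demand curve, calls per half-hour interval:
for interval, calls in [("09:00", 180), ("12:00", 90), ("15:00", 140)]:
    print(interval, agents_required(calls, aht_seconds=300), "agents")
```

Repeating this per interval exposes the mismatch between flat shift patterns and peaked demand, and re-running it with improved handle times shows how the improvement initiatives change future staffing needs.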

Step 4 – Execute the Recovery Plan

Ensure that the plan is owned by an individual with the gravitas, influence, experience and focus to manage it through with real pace and drive. Ideally, the individual should not own actions within the plan itself (as this undermines their ability to hold everyone fully to account and removes their impartiality when inevitable conflicting priorities arise).

The plan can (and should) be meaningfully tuned as you progress with delivery. It should not however be a constant exercise in planning and re-planning and particular focus needs to be applied to ensure that commitments (delivery to time, cost and quality) are met by all action owners.

Communicate the issue, the plan, progress & improvement achieved to date and upcoming actions from your recovery plan to stakeholders. Ensure that stakeholder management is a specific activity within your plan and that you pay particular attention to each stakeholder group as a constituency. The role of senior leaders in recovery situations should be to protect the operations team to enable it to focus on delivery through the management of senior clients and customers and to ensure that the required resources to remediate the issues are provided.

Step 5 – Take a Look Back

Once service has been remediated and stabilised there are a number of residual actions to undertake.

  • As additional resources were utilised in the recovery effort (holidays restricted, time off in lieu accumulated, overtime paid, etc.) there may well be negative service and/or financial implications of those decisions. It is important to quickly understand any such impacts and to manage them appropriately (e.g., review the holiday allocation process to ensure accumulated holidays can still be scheduled without a bottleneck, determine whether to grant time off in lieu for extra hours worked or to pay overtime, and ensure that departments and functions who have been loaning staff to the front line receive support and resources to now clear their backlogs quickly).
  • Review the control processes and responsiveness of your Service Desk in the identification of the issue / issues and how this could be improved upon in the future (in particular the use of lead and lag performance metrics). The ‘root causes’ identified should be eliminated or carefully tracked to ensure that future occurrences can be identified and dealt with before they manifest as service impact to your customers.
  • Ensure that the findings of your root cause analysis are communicated to and understood by your stakeholders. Be honest, be clear and be candid about what has happened, why and the measures that are now in place to prevent / mitigate any future such occurrences.
  • Say Thank You as the milestones are completed. A number of people will have participated in the recovery effort, some very explicitly and others in a very low key manner (for example by absorbing extra workload from colleagues seconded to the front line). Recognising their contribution and taking the time to say Thank You will ensure that your team feel rewarded for their efforts and motivated to stand shoulder to shoulder in tackling future adverse events that impact customer service.

And with these efforts, you will have turned the ‘helpless desk’ into a positive key interface for your customers.

Best, Steve Wignall and Jim Ditmore


Another Wave of Security Breaches: Meeting It with Security Best Practices

With the latest breaches in the news, I felt it was important to map out base practices as well as some of the best practices in Information Security. In the age of LulzSec, industrial espionage, and everyday breaches, it’s more important than ever to be proactive about security. I consulted with several top security engineers that I have worked with in the past to construct these practices. Much of this post was first published in early April in Information Week and I have updated it further. Unfortunately, this area should be a top priority for IT leaders to protect their firms, customers and information. If it’s not at your firm, you need to change that. Best, Jim.

PS. Here is a good reference on the biggest data breaches of the past 15 years to help you get the investment required to properly implement IT Security.

Mark Twain observed 150 years ago: “A lie can travel halfway round the world while the truth is putting on its shoes.” With the advent of social media, these days that lie has likely made it all the way around the world and back while the truth is still in bed.

And today it is not just false information that travels: it is confidential information, your customers’ information, or your company’s intellectual property that is spirited away. The pace and sophistication of attacks by hackers and others who expose confidential data and emails have increased dramatically. In their latest exploit, a group calling itself LulzSec Reborn recently hacked a military dating website, releasing the usernames and passwords of more than 170,000 of the site’s subscribers.

Then there are the for-profit attacks by nation states and companies seeking intellectual property, and fraud by organized crime outfits. Consider the blatant industrial espionage conducted against Nortel and, more recently, AMSC, or the recent fraud attack against Global Payments. These are sobering stories of how companies falter or fail in part due to such espionage.

One of a CIO’s most critical responsibilities is to protect his or her company’s information assets. Such protection often focuses on preventing others from entering company systems and networks, but it must also identify and prevent data from leaving. The following recommendations can help you do this. They are listed in two sections: conventional measures that focus on system access, and best practices given the profiles of today’s attacks.

Conventional Measures:

Establish a thoughtful password policy. Sure, this is pretty basic, but it’s worth revisiting. Definitely require that users change their passwords regularly, but set a reasonable frequency; force changes more often than every three months and users will write their passwords down, compromising security. As for password complexity, require at least six or seven characters, with one capital letter and one number or other special character.
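As an illustration only, a policy along these lines can be encoded in a simple validator. The exact thresholds below are assumptions to be tuned to your own standard:

```python
import re

def meets_policy(password: str) -> bool:
    """Illustrative check: at least 7 characters, one capital letter,
    and one digit or other special character (thresholds are assumptions)."""
    return (len(password) >= 7
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[\d\W_]", password) is not None)

print(meets_policy("Winter42"))   # True
print(meets_policy("sunshine"))   # False: no capital, no digit or special
```

In practice this logic lives in your directory service's password policy settings rather than application code, but the rules it enforces should be written down just this explicitly.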

Publicize best security and confidentiality practices. Do a bit of marketing to raise user awareness and improve security and confidentiality practices. No security tool can be everywhere. Remind your employees that security threats can follow them home from work or to work from home. Help your employees take part in your company’s security practices — there is a good post on this at How To Make Information Security Everyone’s Problem.

Install and update robust antivirus software on your network and client devices. Enough said, but keep it up to date and make it comprehensive (all devices).

Review access regularly. Also, ensure that all access is provided on a “need-to-know” or “need-to- do” basis. This is an integral part of any Sarbanes-Oxley review, and it’s a good security practice as well. Educate your users at the same time you ask them to do the review. This will reduce the possibility of a single employee being able to commit fraud resulting from retained access from a previous position.

Put in place laptop bootup hard drive encryption. This encryption will make it very difficult to expose confidential company information via lost or stolen laptops, which is still a big problem. Meanwhile, educate employees to avoid leaving laptops in their vehicles or other insecure places.

Require secure access for “superuser” administrators. Given their system privileges, any compromise to their access can open up your systems completely. Ensure that they don’t use generic user IDs, that their generic passwords are changed to a robust strength, and that all their commands are logged (and subsequently reviewed by another engineering team and management). Implement two-factor authentication for any remote superuser ID access.

Maintain up-to-date patching. Enough said.

Encrypt critical data only. Any customer or other confidential information transmitted from your organization should be encrypted. The same precautions apply to any login transactions that transmit credentials across public networks.

Perform regular penetration testing. Have a reputable firm test your perimeter defenses regularly.

A Thoughtful Set of Additional Current Best Practices: With the pace of change of technology and the rise of additional threats from hackers and state-sponsored espionage, your company’s security posture must adopt the latest best techniques and be updated regularly. Here are the current best practices that I would highly recommend.

Provide two-factor authentication for customers. Some of your customers’ personal devices are likely to be compromised, so requiring two-factor authentication for access to accounts prevents easy exploitation. Also, notify customers when certain transactions have occurred on their accounts (for example, changes in payment destination, email address, physical address, etc.).

Secure all mobile devices. Equip all mobile devices with passcodes, encryption, and remote wipe capability. Encrypt your USB flash memory devices. On secured internal networks, minimize encryption to enable detection of unauthorized activity as well as diagnosis and resolution of production and performance problems.

Further strengthen access controls. Permit certain commands or functions (e.g., superuser) to be executed only from specific network segments (not remotely). Permit contractor network access via a partitioned secure network or secured client device.

Secure your sites from inadvertent outside channels. Implement your own secured wireless network, one that can detect unauthorized access, at all corporate sites. Regularly scan for rogue network devices, such as DSL modems set up by employees, that let outgoing traffic bypass your controls.

Prevent data from leaving. Continuously monitor for transmission of customer and confidential corporate data, with the automated ability to shut down illicit flows using tools such as NetWitness. Establish permissions whereby sensitive data can be accessed only from certain IP ranges and sent only to another limited set. Continuously monitor traffic destinations in conjunction with a top-tier carrier in order to identify traffic going to fraudulent sites or unfriendly nations.
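A sketch of the IP-range permission idea, using Python's standard `ipaddress` module. The network ranges are invented examples, and a real deployment would enforce such rules in network and DLP tooling rather than application code:

```python
import ipaddress

# Illustrative rule set: sensitive data may be accessed only from these source
# ranges and sent only to these destination ranges (addresses are examples).
ALLOWED_SOURCES = [ipaddress.ip_network("10.20.0.0/16")]
ALLOWED_DESTINATIONS = [ipaddress.ip_network("10.30.5.0/24")]

def flow_permitted(src: str, dst: str) -> bool:
    """True only when both endpoints of a sensitive-data flow are whitelisted."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return (any(s in net for net in ALLOWED_SOURCES)
            and any(d in net for net in ALLOWED_DESTINATIONS))

print(flow_permitted("10.20.1.7", "10.30.5.9"))     # internal-to-internal: permitted
print(flow_permitted("10.20.1.7", "203.0.113.50"))  # external destination: blocked
```

The same allow-list shape extends naturally to the continuous egress monitoring described above: any flow of tagged data outside the permitted ranges is an alert.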

Keep your eyes and ears open. Continually monitor underground forums (“Dark Web”) for mentions of your company’s name and/or your customers’ data for sale. Help your marketing and PR teams by monitoring social networks and other media for corporate mentions, providing a twice-daily report to summarize activity.

Raise the bar on suppliers. Audit and assess how your company’s suppliers handle critical corporate data. Don’t hesitate to prune suppliers with inadequate security practices. Be careful about having a fully open door between their networks and yours.

Put in place critical transaction process checks. Ensure that crucial transactions (i.e., large transfers) require two personnel to execute, and that regular reporting and management review of such transactions occurs.

Best, Jim D.

In some ways you can view it as no longer a matter of if you get hacked, but when. Information Week has a special retrospective of news coverage, Monitoring Tools And Logs Make All The Difference, where they take a look at ways to measure your security posture and the challenges that lie ahead with the emerging threat landscape. (Free registration required.)

Using Performance Metric Trajectories to Achieve 1st Quartile Performance

I hope you enjoyed the Easter weekend. I have teamed up today with Chris Collins, a senior IT Finance manager and former colleague. Our final post on metrics is on unit costing, on which Chris has been invaluable with his expertise. For those just joining our discussion on IT metrics, we have had six previous posts on various aspects of metrics. I recommend reading the Metrics Roundup and A Scientific Approach to Metrics to catch up on our discussion.

As I outlined previously, unit costing is one of the critical performance metrics (as opposed to operational or verification metrics) that a mature IT shop should leverage particularly for its utility functions like infrastructure (please see the Hybrid model for more information on IT utilities). With proper leverage, you can use unit cost and the other performance metrics to map a trajectory that will enable your teams to drive to world-class performance as well as provide greater transparency to your users.

For those just starting the metrics journey, realize that in order to develop reliable sustainable unit cost metrics, significant foundational work must be done first including:

  • IT service definition should be completed and in place for those areas to be unit costed
  • an accurate and ongoing asset inventory must be in place
  • a clean and understandable set of financials must be available organized by account so that the business service cost can be easily derived

If you have these foundation elements in place, then you can quickly derive the unit costing for your function. I recommend partnering with your Finance team to accomplish unit costing. And this should be an effort that you and your infrastructure function leaders champion. You should look to apply a unit cost approach to the 20 to 30 functions within the utility space (from storage to mainframes to security to middleware, etc). It usually works best to start with one or two of the most mature component functions and develop the practices and templates. The IT finance team should progress the effort as follows:

  • Ensure they can easily segregate cost based on service listing for that function
  • Refine and segregate costs further if needed (e.g., are there tiers of services that should be created because of substantial cost differences?)
  • Identify a volume driver to use as the basis of the unit cost (for example, for storage it could be terabytes of allocated storage)
  • Parallel to the service identification/cost segregation work, begin development of unit cost database that allows you to easily manipulate and report on unit cost.  Specifically, the database should contain:
    • Ability to accept RC and account level assignments
    • Ability to capture expense/plan from the general ledger
    • Ability to capture monthly volume feeds from source systems including detail volume data (like user name for an email account or application name tied to a server)

The function team should support the IT Finance team in ensuring the costs are properly segregated into the services they have defined. Reasonable precision in the cost segregation is required, since later analysis will be for naught if the segregations are inaccurate. Once the initial unit costs are reported, the function team can begin their analysis and work. First and foremost should be an industry benchmark exercise. This will enable you to understand quickly how your performance ranks against competitors and similar firms. Please reference the Leveraging Benchmarks page for best practices in this step. In addition, you should further leverage performance metrics like unit cost to develop a projected trajectory for your function’s performance. For example, if your unit cost for storage is currently $4,100/TB for tier 1 storage, then the storage team should map out what their unit cost will be 12, 24, and even 36 months out given their current plans, initiatives and storage demand. And if your target is for them to achieve top quartile cost, or cost median, then they can now understand if their actions and efforts will enable them to deliver to that future target. And if they will not achieve it, they can add measures to address their gaps.

Further, you can now measure and hold them accountable on a regular basis to achieve the proper progress towards their projected target. This can be done not just for unit cost but for all of your critical performance measures (e.g., productivity, time to market, etc).  Setting goals and performance targets in this manner will achieve far better results because a clear mechanism for understanding cause and effect between their work and initiatives and the target metrics has been established.

A broad approach to also consider is to establish a unit cost progress chart for all of your utility functions. On this chart, where the y axis is cost as a percentage of current cost and the x axis is future years, you should establish a minimum improvement line of 5% per year. The rationale is that improving hardware (e.g., servers, storage, etc.) and improving productivity yield an improving unit cost tide of at least 5% a year. Thus, to truly progress and improve, your utility functions should well exceed a 5% per year improvement if they are below 1st quartile. This approach also conveys the necessity and urgency of not resting on our laurels in the technology space. Often, with this set of performance metric practices employed along with CPI and other best practices, you can achieve 1st quartile performance within 18 to 24 months for your utility function.
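The trajectory and the 5% minimum improvement line lend themselves to a simple numeric sketch. The $4,100/TB figure comes from the text; the first-quartile target and planned improvement rate below are hypothetical:

```python
# Projects a function's unit cost trajectory against the 5%/year minimum
# improvement line and a first-quartile benchmark target.
current_cost_per_tb = 4100.0          # tier 1 storage, $/TB (example from the text)
first_quartile = 2900.0               # hypothetical benchmark target
planned_improvement = 0.12            # hypothetical 12%/year from current initiatives

for year in range(0, 4):
    baseline = current_cost_per_tb * (1 - 0.05) ** year        # minimum expected tide
    planned = current_cost_per_tb * (1 - planned_improvement) ** year
    gap = planned - first_quartile
    print(f"year {year}: baseline ${baseline:,.0f}/TB, plan ${planned:,.0f}/TB, "
          f"gap to 1st quartile ${gap:,.0f}")
```

Charting plan against baseline and target makes it immediately visible whether a function's initiatives merely ride the 5% tide or actually close the gap to first quartile, and by when.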

What has been your experience with unit cost or other performance measures? Were you able to achieve sustained advantage with these metrics?

Best,

Jim Ditmore and Chris Collins

 

Better Requirements Definition to Improve Time to Market and Quality

We have two features this week: the first and best is Fred Alsup’s perspective on best practices in requirements definition; I have also added a slightly updated post on Agility and Innovation to the ‘Asides’ page. Enjoy and have a great week! Best, Jim Ditmore

Delivering cutting-edge, market-advantage systems for your company requires allying your company’s business and operations experts, as well as marketing and sales leaders, with technology experts. For a large firm, you may have several if not dozens of projects with these multi-disciplinary teams. Without a rigorous approach, complex projects where participants articulate requirements from different perspectives, using terminology unique to their disciplines, are one of the primary sources of poor requirements. Frequent issues include ambiguous or ill-defined requirements, missing requirements or poorly defined bridges between areas, or, worse, a key expert who does not focus on their area until late in the project (when it is far more difficult and expensive to correct). Further, traditional approaches often leave team members feeling that “everyone’s speaking different languages” or “I’m not being heard”.

This happens both when little time is spent on requirements and, most disappointingly, even when a great deal of time is spent. Such a result is often due to the requirements elicitation approach, and it has negative consequences for the project. The financial loss includes the extended cost of the system and all of the time spent in meetings trying to elicit the requirements or review documents. And, of course, there is the lost opportunity of not having the system that would optimize the work or provide market advantage. Meanwhile, the relationship between the business and IT becomes, or remains, strained.

Traditional requirements elicitation or gathering methods are often the cause of these issues. Cycling over and over the requirements definition with individual users and experts or small groups in a document-based, waterfall approach results in many issues:

  • A lengthy process where each contributor often only sees a part of the envisioned system (e.g., like the 4 blind men touching an elephant and coming up with 4 different conclusions as to what it is)
  • Business and technology roles are not clearly defined and often overlap, resulting in user requirements and technical solutions being mixed together
  • The requirements definitions in a document-based capture approach are often overly complex and miss the essence of what should be done
  • Key experts are often overloaded and thus frequently cannot spend adequate time on their area in a traditional process, resulting in either delays or gaps
  • The spark of innovation that comes from bringing multiple different areas together to craft a solution for the business never occurs

In essence, gathering requirements in such a methodical but piecemeal and lengthy process causes these outcomes. To break out, develop far higher quality requirements, and spark innovation and outstanding solutions, you should apply a rapid requirements process that brings the contributors together in a structured and intense manner. You will be able to define requirements in a far shorter time period with much greater quality. Very similar to Agile approaches, where scrum sessions with a focused set of users drive a tight cycle time to define interfaces or small systems, rapid requirements enables requirements elicitation for large and complex systems with many disciplines to occur in a tight, joint set of cycles.

The essentials for executing a rapid requirements approach are as follows:

  • Set aside two or more days for the requirements elicitation. If the system is to reengineer or deliver a business process, add one day for each additional business area the process impacts (e.g., if back office and middle office, then 3 days total; if front office as well, 4 days). Usually it is best if the sessions are 5 or 6 hours per day versus 8, as users can then handle regular priority work rather than interrupting the session.
  • Utilize a facilitator and scribe in the sessions. For larger sessions, multiple scribes or assistants are needed. The scribes should be strong analysts, not just note keepers. The facilitator must be skilled at communications and elicitation as well as effective at driving for results.
  • Bring together all the key users and providers for the session. Attendance is mandatory, and delegation should be frowned upon. Having even one contributor missing can make a major difference or lead to a major omission. And individuals must operate in one role only (e.g., users cannot define how to implement: first, this is a requirements session, not a system design session, and second, it overruns the provider role). Having individuals act in one and only one role during a session is, in my experience, of critical importance, if not common practice.
  • Facilitators have full control and responsibility for running the meeting, asking questions, and directing other role players in the questioning and answering. The facilitator should have the ability to build deep rapport quickly with each participant in the session, while balancing that with the need to be assertive and even commanding if necessary. The facilitator should understand whether expansion or closure on a topic is appropriate and then formulate the right question, including suggesting user requirements (but not making decisions about user requirements).
  • Scribes listen and document requirements using a defined grammar. A scribe may suggest user requirements but not make decisions about user requirements or user prioritization.
  • Users define the user requirements and determine prioritization. It is best if the users (business experts) also seek to improve business process or synergize during the session. It can be helpful to have process experts (e.g., lean, etc) assist the users in defining their target process as part of the requirements session.
  • Solution providers (e.g., technology or operations) ask clarifying questions concerning user requirements. A solution provider may suggest user requirements but not make decisions about user requirements or user prioritization.
  • It can also be helpful to include risk or legal experts as part of the team.
  • Ensure that your target process includes inherent measurement and quality management as part of each subprocess as appropriate.

By bringing everyone together in a structured and facilitated session, connections are made, misunderstandings are addressed, and synergies are achieved that just would not happen otherwise. Full and engaged customer and supplier representation is common sense that does not commonly occur with traditional methods. With this rapid requirements approach you will get higher quality requirements in far less time, with greater participation and better solutions. And you will start your project off with high quality requirements that are much more likely to lead to project success. For more information on Rapid Requirements, don’t hesitate to visit the site.

Fred Alsup

 

 

 

Tying Consumption to Cost: Allocation Best Practices

In 1968, Garrett Hardin wrote about the over-exploitation of common resources in an essay titled “The Tragedy of the Commons”. While Hardin wrote about common pastureland that individual herders overused and diminished, a very similar effect can occur with IT resources within a large corporation. If there is no cost associated with the usage of IT resources by different business units, then each unit will utilize the IT resources to maximize its own benefit, to the detriment of the corporation as a whole. Thus, to ensure effective use of IT resources, there must be some association of cost, or allocation, between the internal demand and the consumption by each business unit. A best practice allocation approach enables business transparency of IT cost and the business drivers of IT usage, so that thoughtful business decisions for the company as a whole can be made with a minimum of allocation overhead and effort.

A well-designed allocations framework will ensure this effective association as well as:

  • provide transparency to IT costs and to each business unit’s costs and profitability
  • avoid wasteful demand and alter overconsumption behaviors
  • minimize pet projects and technology ‘hobbies’

To implement an effective allocations framework there are several foundation steps. First, you must ensure you have the corporate and business unit CFOs’ support and the finance team resources to implement and run the allocations process. Generally, CFOs look for greater clarity on what drives costs within the corporation. Allocations provide significant clarity on IT costs, which are usually a good-sized chunk of the corporation’s costs, so CFOs are usually highly supportive of a well-thought-out allocations approach. So, first garner CFO support along with adequate finance resources.

Second, you must have a reasonably well-defined set of services and an adequately accurate IT asset inventory. If these are not in place, you must first set about defining your services (e.g., an end user laptop service that includes the laptop, OS, productivity software, and remote access, or a storage service of high performance Tier 1 storage by terabyte) and ensuring your inventory of IT assets is minimally accurate (70 to 80%). If there are some gaps, they can be addressed by leveraging a trial allocation period where numbers and assets are published and no monies are actually charged, but every business unit reviews its allocated assets with IT and ensures the alignment is correct. Once you have the services defined and the assets inventoried, your finance team must then set about identifying which costs are associated with which services. They should work closely with your management team to identify a ‘cost pool’ for each service or asset component. Again, these cost pools should be at least reasonably accurate but do not need to be perfect to begin a successful allocation process.

The IT services defined should be as readily understandable as possible. The descriptions and missions should not be esoteric except where absolutely necessary. They should be easily associated with business drivers and volumes (such as number of employees, or branches, etc.) wherever possible. In essence, all major categories of IT expenditure should have an associated service or set of services, and the services should be granular enough that each service or component can be easily understood and each one’s drivers easily distinguished and identified. The target should be somewhere between 50 and 150 services for the typical large corporation. More than 150 services will likely lead to effort being spent on very small services and result in too much overhead. Significantly fewer than 50 services could result in a clumping of services that are hard to distinguish or control. Remember, the goal is to provide adequate allocations data at the minimum effort for effectiveness.
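To make the mechanics concrete, here is a minimal sketch of turning cost pools into unit rates and business unit bills. All service names, pool sizes, driver volumes, and consumption figures are invented for illustration.

```python
# Each service has a cost pool (total annual cost) and a driver volume.
cost_pools = {"laptop": 2_400_000.0, "tier1_storage": 4_100_000.0}
driver_volumes = {"laptop": 2000, "tier1_storage": 1000}   # laptops, TB

# Unit rate = cost pool / driver volume (e.g., $/laptop, $/TB).
rates = {svc: cost_pools[svc] / driver_volumes[svc] for svc in cost_pools}

# Each business unit is billed rate x its consumption of the driver.
consumption = {"retail": {"laptop": 1200, "tier1_storage": 650},
               "wholesale": {"laptop": 800, "tier1_storage": 350}}

for bu, usage in consumption.items():
    bill = sum(rates[svc] * qty for svc, qty in usage.items())
    print(f"{bu}: ${bill:,.0f}")
```

Note that the bills across business units sum exactly to the cost pools, a useful sanity check that the allocation is complete and nothing is left unrecovered.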

The allocations framework must have an overall IT owner and a senior Finance sponsor (preferably the CFO). CFOs want to implement systems that encourage effective corporate use of resources, so they are a natural advocate for a sensible allocation framework. There should also be a council to oversee the allocation effort and provide feedback and direction, on which major users and the CFO (or a designate) sit. This will ensure both adequate feedback and the buy-in and support needed for successful implementation and appropriate methodology revisions as the program grows. As the allocations process and systems mature, ensure that any significant methodology changes are reviewed and approved by the allocation council, with sufficient advance notice to the business unit CFOs. My experience has been that everyone agrees to a methodology change if it is in their favor and reduces their bill, but everyone is resistant if it impacts their business unit’s finances, regardless of how logical the change may be. Further, if the process is not done with plenty of communication and clear rationale, the allocation process will bring out tensions between business units, especially between those whose allocations increase and those whose allocations decrease.

Once you start the allocations, even during a pilot or trial period, make sure you are doing transparent reporting. You or your leads should have a monthly meeting with each business area with good, clear reports. Include your finance lead and the business unit finance lead in the meeting to ensure everyone is on the same financial page. Remember, a key outcome is to enable your users to understand their overall costs, what the cost is for each service, and what business drivers impact which services and thus what costs they will bear. By establishing this linkage clearly, the business users will then look to modify business demand so as to optimize their costs. Further, most business leaders will also use this allocations data and newfound linkage to correct poor over-consumption behavior (such as users with two or three PCs or phones) within their organizations. But for them to do this you must provide usable reporting with accurate inventories. The best option is to enable managers to peruse their costs through an intranet interface for such end-user services as mobile phones, PCs, etc. There should be readily accessible usage and cost reports to enable them to understand their team’s demand and how much each unit costs. They should have the option, right on the same screens, to discontinue, update, or start services. In my experience, it is always amazing that once leaders understand their costs, they will want to manage them down; if they have the right tools and reports, managing down poor consumption happens faster than a snowman melting in July, exactly the effect you were seeking.
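The over-consumption report described above can be sketched very simply: flag any user holding more than one unit of the same end-user service. The inventory rows here are invented for illustration.

```python
from collections import Counter

# (user, service) rows, as they might come from the asset inventory.
assignments = [
    ("ann", "phone"), ("ann", "phone"), ("bob", "pc"),
    ("bob", "pc"), ("bob", "pc"), ("cara", "phone"),
]

counts = Counter(assignments)
flags = {key: n for key, n in counts.items() if n > 1}

for (user, service), n in sorted(flags.items()):
    print(f"{user}: {n} x {service} -- review for consolidation")
```

A report like this, surfaced on the same intranet screens where managers can discontinue a service, is what turns cost transparency into actual demand reduction.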

There are a few additional caveats and guides to keep in mind:

  • In your reporting, don’t just show this month’s costs, show the cost trend over time and provide a projection of future unit costs and business demand
  • Ensure you include budget overheads in the cost allocation; otherwise you will have a budget shortfall and neglect key investment in the infrastructure to maintain it.
  • Similarly, make sure you account for the full lifecycle costs of a service in the allocation, and be conservative in your initial allocation pricing; upward revisions later due to missed costs will be painful
  • For ‘build’ or ‘project’ costs, do not use exact resource pricing. Instead, use an average price to avoid the situation where every business unit demands only the lowest cost IT resources for its projects, resulting in a race to the bottom and no ability to expand capacity to meet demand, since added capacity would be high cost resources on the margin.
  • Use allocations also to avoid first-in issues with new technologies (set the rate at the projected volume rate, not the initial low volume rate) and to encourage transition off of expensive legacy technologies (last-out increases)
  • And lastly, ensure your team knows and understands their services and their allocations and can articulate why services cost what they cost
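The blended ‘build’ rate from the bullets above amounts to a simple weighted average across the project resource pool. The resource categories, hourly costs, and headcounts below are illustrative assumptions:

```python
# (hourly cost, headcount) across the project resource pool -- invented figures.
resources = [
    (55.0, 40),   # offshore developers
    (95.0, 30),   # onshore developers
    (140.0, 10),  # architects
]

total_cost = sum(rate * count for rate, count in resources)
total_heads = sum(count for _, count in resources)
blended_rate = total_cost / total_heads  # every project pays this one rate

print(f"blended rate: ${blended_rate:.2f}/hour")
```

Because every project pays the same blended rate regardless of which resources it draws, no business unit can race to the bottom by demanding only the cheapest staff.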

With this framework and approach, you should be able to build and deliver an effective allocation mechanism that enables the corporation to avoid the overconsumption of free, common resources and properly direct IT resources to where the best return for the corporation will be. Remember, though, that in the end this is an internal finance mechanism, so the CFO should dictate the depth, level, and approach of the allocation, and you should ensure the allocations mechanism does not become burdensome beyond its value.

What have been your experiences with allocations frameworks? What changes or additions to these best practices would you add?

Best, Jim Ditmore