A Cloudy Future: The Rise of Appliances and SaaS

As I mentioned in my previous post, I will be exploring infrastructure trends, and in particular, cloud computing. But while cloud computing is getting most of the marketing press, there are two additional phenomena that are capturing as much, if not more, of the market: computer appliances and SaaS. So, before we dive deep into cloud, let’s explore these other two trends and then set the stage for a comprehensive cloud discussion that will yield effective strategies for IT leaders.

Computer appliances have been available for decades, typically in the network, security, database, and specialized compute spaces. Firewalls and other security devices have long leveraged an appliance approach where generic technology (CPU, storage, OS) is closely integrated with additional special-purpose software and sold and serviced as a packaged solution. Specialized database appliances for data warehousing were quite successful starting in the early 1990s (remember Teradata?).

The tighter integration of appliances gives them a significant advantage over traditional approaches with generic systems. First, the integrator of the package is often also the supplier of the software and can thus better tune the software’s performance and capacity for a specific OS and hardware set. Further, this integrated stack requires much less installation and implementation effort by the customer. The end result can be impressive performance for similar cost to a traditional generic stack, without the implementation effort or difficulties. Thus appliances can have a compelling performance and business case for the typical medium or large enterprise. And they are compelling for the technology supplier as well, because they command higher prices and carry much higher margins than the individual components.

It is important to recognize that appliances are part of a normal tug-of-war between generic and specialized solutions. In essence, throughout the past 40 years of computing, there has been constant improvement in generic technologies under the march of Moore’s Law. And with each advance there are two paths to take: leverage generic technologies and keep your stack loosely coupled so you can continue to ride the advance of generic components, or closely integrate your stack with the then-current components and drive much better performance from this integration.

By their very nature, though, appliances become rooted in a particular generation of technology. The initial iteration can be done with the latest version of technology, but the integration will likely result in tight links to the OS, hardware, and other underlying layers to wring out every performance improvement available. These tight links yield both the performance improvement and the chains to a particular generation of technology. Once an appliance is developed and marketed successfully, ongoing evolutionary improvements will continue to be made, layering in further links to the original base technology. And the margins themselves are addictive, with suppliers doing everything possible to maintain them: evolutionary, low-cost advances will occur, but revolutionary (next-generation) redesigns will likely require too great an investment to preserve the margins. This spells the eventual fading and demise of that appliance, as the generic technologies continue their relentless advance and typically surpass the appliance in two or three generations. This is represented in the chart below and can be seen in the evolution of data warehousing.

[Chart: The Leapfrog of Appliances and Generics]

The first instances of data warehousing were done using the primary generic platform of the time (the mainframe) and mainstream databases. But with the rise of another generic technology, proprietary chipsets out of the midrange and high-end workstation sector, Teradata and others combined these chipsets with specialized hardware and database software to develop much more powerful data warehouse appliances. From the late 1980s through the 1990s, the Teradata appliance maintained a significant performance and value edge over generic alternatives. But that began to fray around 2000 with the continued rise of mainstream databases and server chipsets, along with low-cost operating systems and storage, which could be combined to match the performance of Teradata at much lower cost. In this instance, the Teradata appliance held a significant performance advantage for about 10 years before falling back into or below mainstream generic performance. The value advantage diminished much sooner, of course. Typically, the appliance performance advantage lasts 4 to 6 years at most. Thus, early in the cycle (typically 3 to 4 generic generations, or 4 to 5 years), an appliance offering will present material performance and possibly cost advantages over traditional, generic solutions.
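To see why the window closes, consider a toy model (all numbers here are illustrative assumptions, not vendor data): the appliance starts with a large one-time integration edge, but compounding generic improvement eventually overtakes its slower evolutionary gains.

    # A toy model of the appliance/generic leapfrog; all figures are purely
    # illustrative assumptions. Generic performance compounds at a Moore's-law
    # pace (doubling every two years) while the appliance launches with a 4x
    # integration edge but improves only incrementally, because it is chained
    # to its original base technology.
    generic, appliance = 1.0, 4.0

    for year in range(11):
        status = "appliance ahead" if appliance > generic else "generics have caught up"
        print(f"year {year:2d}: generic {generic:6.1f}  appliance {appliance:6.1f}  ({status})")
        generic *= 2 ** 0.5   # doubling every two years
        appliance *= 1.2      # evolutionary improvements only

With these particular (assumed) rates, the crossover lands roughly eight years out; tune the head start and growth rates and you reproduce anything from the typical 4-to-6-year advantage to Teradata’s decade-long run.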

As a technology leader, I recommend the following considerations when looking at appliances:

  • If you have real business needs that will drive significant benefit from such performance, then investigate the appliance solution.
  • Keep in mind that in the mid-term the appliance solution will steadily lose its advantage and subsequently cost more than the generic solution. Understand where the appliance solution is in its evolution – this will determine its effective life and the likely length of your advantage over generic systems.
  • Factor in the hurdle, or ‘switchback’, costs at the end of the appliance’s life. (The appliance will likely require a hefty investment to transition back to generic solutions that have steadily marched forward.)
  • The switchback costs will be much higher where business logic is layered in (e.g., middleware, database, or business software appliances) than for network or security appliances, where there is minimal custom business logic.
  • Include the level of integration effort and cost required. Often a few appliances within a generic infrastructure will integrate smoothly and at lower cost. On the other hand, weaving multiple appliances into a service stack can cause much higher integration costs and not yield the desired results. Remember that you have limited flexibility with an appliance due to its integrated nature, and this can cause issues when appliances are strung together (e.g., a security appliance with a load balancer appliance with a middleware appliance with a business application appliance and a data warehouse appliance!).
  • Note that for certain areas, security and network in particular, the follow-on to an appliance will often be a next-generation appliance from the same or a different vendor. This is because there is minimal business logic incorporated in the system (yes, there are lots of parameter settings, like firewall rules customized for a business, but the firewall operates essentially the same regardless of the business that uses it).

With these guidelines, you should be able to make better decisions about when to use an appliance and how much of a premium you should pay.

In my next post, I will cover SaaS and I will then bring these views together with a perspective on cloud in a final post.

What changes or additions would you make when considering appliances? I look forward to your perspective.

Best, Jim Ditmore


IT Security in the Headlines – Again

Again. Headlines are splashed across front pages and business journals as banks, energy companies, and government web sites have been attacked. As I called out six months ago, the pace, scale, and intensity of attacks increased dramatically in the past year and are likely to continue to grow. Given that one of the most important responsibilities of a CIO and senior IT leaders is to protect the data and services of the firm or entity, security must be a bedrock capability and focus. And while I have seen a significant uptick in awareness of and investment in security over the past 5 years, there is much more to be done at many firms to reach proper protection. Further, as IT leaders, we must understand that IT is in a deadly arms race that requires urgent and comprehensive action.

The latest set of incidents are DDoS attacks against US financial institutions. These have been conducted by Muslim hacker groups, purportedly in retaliation for the Innocence of Muslims film. But this weekend’s Wall Street Journal outlined that the groups behind the attacks are sponsored by the Iranian government – ‘the attacks bore “signatures” that allowed U.S. investigators to trace them to the Iranian government’. This is another expansion of the ‘advanced persistent threats’, or APTs, that now dominate hacker activity. APTs are well-organized, highly capable entities, funded either by governments or by broad fraud activities, which enables them to carry out hacking at unprecedented scale and sophistication. As this wave of attacks migrates from large financial institutions like JP Morgan Chase and Wells Fargo to mid-sized firms, IT departments should be rechecking their defenses against DDoS as well as other hazards. If you do not already have explicit protection against DDoS, I recommend leveraging a carrier network-based DDoS service as well as having a third party validate your external defenses against penetration. While the stakes currently appear to be a loss of access to your websites, any weaknesses found by the attackers will invariably be exploited later for fraud and potential data destruction. This is exactly the path of the attacks against energy companies, including Saudi Aramco, that recently preceded the financial institution attack wave. And no less than Leon Panetta has spoken about the most recent attacks and their consequences. As CIO, your firm cannot be exposed as lagging in this arena without possible significant impact to reputation, profits, and competitiveness.

So, what are the measures you should take or ensure are in place? In addition to the network-based DDoS service mentioned above, you should implement the fundamental security measures first outlined in my April post, and then consider the advanced measures needed to keep pace in the IT security arms race.

Fundamental Measures:

1. Establish a thoughtful password policy. Sure, this is pretty basic, but it’s worth revisiting and a key link in your security. Definitely require that users change their passwords regularly, but set a reasonable frequency: force changes more often than every three months and users will write their passwords down, compromising security. As for password complexity, require at least six characters, with one capital letter and one number or other special character. (A small validator sketch reflecting this policy follows the list below.)

2. Publicize best security and confidentiality practices. Do a bit of marketing to raise user awareness and improve security and confidentiality practices. No security tool can be everywhere. Remind your employees that security threats can follow them home from work or to work from home.

3. Install and update robust antivirus software on your network and client devices. Enough said, but keep it up-to-date and make it comprehensive (all devices).

4. Review access regularly. Also, ensure that all access is provided on a “need-to-know” or “need-to-do” basis. This is an integral part of any Sarbanes-Oxley review, and it’s a good security practice as well. Educate your users at the same time you ask them to do the review. This will reduce the possibility of a single employee being able to commit fraud through access retained from a previous position.

5. Put in place laptop boot-up hard drive encryption. This encryption will make it very difficult to expose confidential company information via lost or stolen laptops, which is still a big problem. Meanwhile, educate employees to avoid leaving laptops in their vehicles or other insecure places.

6. Require secure access for “superuser” administrators. Given their system privileges, any compromise of their access can open up your systems completely. Ensure that they don’t use generic user IDs, that any default passwords are changed to robust ones, and that all their commands are logged (and subsequently reviewed by another engineering team and management). Implement two-factor authentication for any remote superuser ID access. (A log-review sketch follows the list below.)

7. Maintain up-to-date patching. Enough said.

8. Encrypt critical data only. Any customer or other confidential information transmitted from your organization should be encrypted. The same precautions apply to any login transactions that transmit credentials across public networks. (An encryption sketch follows the list below.)

9. Perform regular penetration testing. Have a reputable firm test your perimeter defenses regularly.

10. Implement a DDoS network-based service. Work with your carriers to implement the ability to shed false requests and enable you to thwart a DDoS attack.
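To make measure #1 concrete, here is a minimal sketch of a password check reflecting the policy above (at least six characters, one capital letter, and one number or other special character). It is illustrative only; in practice your directory or identity platform should enforce the policy rather than hand-rolled code.

    import re

    MIN_LENGTH = 6  # per the policy above; raise this for higher-risk systems

    def password_meets_policy(password: str) -> bool:
        """Check a password against the baseline policy: at least six
        characters, one capital letter, and one number or other special
        (non-alphanumeric) character."""
        if len(password) < MIN_LENGTH:
            return False
        if not re.search(r"[A-Z]", password):
            return False
        # one number or other special character
        return bool(re.search(r"[0-9]|[^A-Za-z0-9]", password))

    print(password_meets_policy("Summer9"))  # True
    print(password_meets_policy("summer"))   # False: no capital, no digit/special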
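For measure #6, privileged-command logging is only useful if someone actually reviews it. Below is a minimal sketch that tallies sudo commands per user from a syslog file for a periodic review. The log path and line format are assumptions (both vary by distro and syslog configuration), and an enterprise shop would typically do this in its SIEM instead.

    import re
    from collections import Counter

    LOG_PATH = "/var/log/auth.log"  # assumption: Debian/Ubuntu-style syslog location

    # Typical sudo entries end with "... USER=root ; COMMAND=/usr/bin/..."
    SUDO_RE = re.compile(r"sudo:\s+(?P<user>\S+)\s+:.*COMMAND=(?P<command>.+)$")

    def summarize_sudo_activity(path: str = LOG_PATH) -> Counter:
        """Tally privileged commands per (user, command) pair for review."""
        tally = Counter()
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = SUDO_RE.search(line)
                if match:
                    tally[(match.group("user"), match.group("command").strip())] += 1
        return tally

    for (user, command), count in summarize_sudo_activity().most_common(20):
        print(f"{count:5d}  {user:12s}  {command}")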
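And for measure #8, a sketch of encrypting a sensitive payload using the third-party Python cryptography package’s Fernet recipe (pip install cryptography). In practice, data in transit is protected with TLS and keys come from a managed key service; the inline key generation here is for illustration only.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # illustration only: fetch from a key vault in production
    cipher = Fernet(key)

    token = cipher.encrypt(b"customer-account-number: 1234567890")
    print(token)  # ciphertext, safe to transmit or store

    plaintext = cipher.decrypt(token)  # raises InvalidToken if tampered with
    print(plaintext)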

Advanced Practices: 

a. Provide two-factor authentication for customers. Some of your customers’ personal devices are likely to be compromised, so requiring two-factor authentication for access to accounts prevents easy exploitation. Also, notify customers when certain transactions have occurred on their accounts (for example, changes in payment destination, email address, physical address, etc.). (A one-time-code sketch follows these practices below.)

b. Secure all mobile devices. Equip all mobile devices with passcodes, encryption, and remote-wipe capability. Encrypt your USB flash memory devices. On secured internal networks, minimize encryption to enable detection of unauthorized activity as well as diagnosis and resolution of production and performance problems.

c. Further strengthen access controls. Permit certain commands or functions (e.g., superuser) to be executed only from specific network segments (not remotely). Permit contractor network access via a partitioned secure network or secured client device.

d. Secure your sites from inadvertent outside channels. Implement your own secured wireless network, one that can detect unauthorized access, at all corporate sites. Regularly scan for rogue network devices, such as DSL modems set up by employees, that let outgoing traffic bypass your controls. (A network-scan sketch follows these practices below.)

e. Prevent data from leaving. Continuously monitor for transmission of customer and confidential corporate data, with the automated ability to shut down illicit flows using tools such as NetWitness. Establish permissions whereby sensitive data can be accessed only from certain IP ranges and sent only to another limited set. Continuously monitor traffic destinations in conjunction with a top-tier carrier in order to identify traffic going to fraudulent sites or unfriendly nations. (An egress allow-list sketch follows these practices below.)

f. Keep your eyes and ears open. Continually monitor underground forums (“Dark Web”) for mentions of your company’s name and/or your customers’ data for sale. Help your marketing and PR teams by monitoring social networks and other media for corporate mentions, providing a twice-daily report to summarize activity.

g. Raise the bar on suppliers. Audit and assess how your company’s suppliers handle critical corporate data. Don’t hesitate to prune suppliers with inadequate security practices. Be careful about having a fully open door between their networks and yours.

h. Put in place critical transaction process checks. Ensure that crucial transactions (e.g., large transfers) require two personnel to execute, and that regular reporting and management review of such transactions occur.

i. Establish 7×24 security monitoring. If your firm has a 7×24 production and operations center, you should supplement that team with security operations specialists and the capability to monitor security events across your company and take immediate action. And if you are not big enough for a 7×24 capability, enlist a reputable third party to provide this service for you.
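To illustrate practice (a), here is a sketch of server-side one-time-code verification using the third-party pyotp library (pip install pyotp). It assumes TOTP-style codes from an authenticator app; the account names are hypothetical, and a real deployment adds rate limiting and stores secrets in a vault or HSM.

    import pyotp

    # At enrollment: generate and persist a per-customer secret.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Show this URI as a QR code so the customer's authenticator app can enroll.
    print(totp.provisioning_uri(name="customer@example.com", issuer_name="ExampleBank"))

    # At login: verify the six-digit code alongside the password.
    code = input("Enter the code from your authenticator app: ")
    if totp.verify(code, valid_window=1):  # tolerate one 30-second step of clock drift
        print("Second factor accepted.")
    else:
        print("Invalid code.")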
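For practice (d), a rogue-device sweep can be as simple as comparing what answers on the wire against your asset inventory. This sketch uses the third-party scapy library (pip install scapy; requires root privileges) to ARP-scan a subnet; the subnet and the authorized-MAC list are hypothetical placeholders.

    from scapy.all import ARP, Ether, srp

    SUBNET = "192.168.1.0/24"                 # assumption: the site's LAN range
    AUTHORIZED_MACS = {"00:11:22:33:44:55"}   # assumption: from your asset inventory

    # Broadcast an ARP "who-has" for every address in the subnet.
    broadcast = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=SUBNET)
    answered, _ = srp(broadcast, timeout=2, verbose=False)

    for _, reply in answered:
        if reply.hwsrc.lower() not in AUTHORIZED_MACS:
            print(f"Unrecognized device: {reply.psrc} ({reply.hwsrc})")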
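And for practice (e), the IP-range permissioning can be expressed with nothing more than the Python standard library. The ranges below are hypothetical; in a real deployment this rule lives in the proxy or firewall layer rather than in application code.

    from ipaddress import ip_address, ip_network

    APPROVED_DESTINATIONS = [
        ip_network("10.20.0.0/16"),    # assumption: partner data-exchange segment
        ip_network("203.0.113.0/24"),  # assumption: an approved provider's range
    ]

    def egress_allowed(destination: str) -> bool:
        """Allow sensitive data out only to destinations in approved ranges."""
        addr = ip_address(destination)
        return any(addr in net for net in APPROVED_DESTINATIONS)

    print(egress_allowed("10.20.5.9"))     # True
    print(egress_allowed("198.51.100.7"))  # False: not on the allow-list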

I recommend that you communicate the seriousness of these threats to your senior business management and ensure that you have the investment budget and resources to implement these measures. Understand that the measures above will bring you current, but you will need to remain vigilant given the arms race underway. Ensure your 2013 budget allows further investment, even if only as a placeholder. For the security pros out there, what else would you recommend?

In the next week, I will outline recommendations on cloud which I think could be very helpful given the marketing hype and widely differing services and products now broadcast as ‘cloud’ solutions.

Best, Jim Ditmore


Both Sides of the Staffing Coin: Building a High Performance Team -and- Building a Great IT Career

I find it remarkable that, despite the slow recovery, the IT job market remains very tight. This poses significant hurdles for IT managers looking to add talent. In the post below, I cover how to build a great team and tap into good seams of talent. I think this will be a significant issue for IT managers for the next three to four years – finding and growing talent to enable them to build high performance teams.

And for IT staffers, I have mapped out seasoned advice on how to build your capabilities and experience to enable a great career in IT. Improving IT staff skills and capabilities is of keen interest not just to the staff, but also to IT management, since it makes the team much more productive and capable. And on a final note, I would suggest that anyone in the IT field consider reaching out to high schoolers and college students and encouraging them to consider a career in IT. Currently, in the US, there are fewer IT graduates each year than IT jobs that open, and this gap is expected to widen in the coming years. So IT will continue to be a good field for employees, and IT leaders will need to encourage others to join in so we can meet the expected staffing needs.

Please do check out both sides of the coin, and I look forward to your perspectives. Note that I did publish variants on these posts in InformationWeek over the past few months.

Best, Jim Ditmore

Building a High Performance Team Despite 4% IT Unemployment

Despite a national unemployment rate of more than 8%, the overall IT unemployment rate is at a much lower 4% or less. Further, the unemployment rates for IT specialties such as networking, IT security, or database are even lower – at 1% or less. This makes finding capable IT staff difficult, and the difficulty is compounded because IT professionals are less likely to take new opportunities (turnover rates have been much lower than average over the past 10 years). Unfortunately, these tough IT staffing conditions are likely to continue and perhaps be exacerbated if the recovery actually picks up pace. With such a tight IT job market, how do you build or sustain your high performance IT team?

I recommend several tactics to incorporate into your current staffing approach that should allow you to improve your current team and acquire the additional talent needed for your business to compete. Let’s focus first on acquiring talent. In a tight market you must always be present in the market so you can acquire talent when they first consider looking for a position. You must move to a ‘persistent’ recruiting mode. If your group is still only opening positions after someone leaves or after a clear funding approval is granted, you are late to the game. Given the extended recruiting times, you will likely not acquire the staff in time to meet your needs. Nor will you consistently be on the market when candidates are seeking employment. Look instead to do ‘pipeline recruiting’. That is, for those common positions that you know you will need over the next 12 months, set up an enduring position, and have your HR team persistently recruit for these ‘pipeline positions’. Good examples would be Java or mobile developers, project managers, network engineers, etc. Constantly recruit and interview, and when you find an ‘A’-caliber candidate, hire them – whether you have the exact position open or not. You can be certain that you will need the talent, so hire them and put them on the next appropriate project from your demand list. Not only will you have talent sourced and available when you need it because you are always out in the market, you will also develop a reputation as a place where talent is sought, and you will have an edge when those ‘A’ players, who seldom look for work, decide to seek a new opportunity.

Another key tactic is to extend the pipeline recruiting to interns and graduates. Too many firms look only for experienced candidates and neglect this source. In many companies, graduates can be a key long-term source of the best senior engineers. Moreover, they can often contribute much more than most managers give them credit for, especially if you have good onboarding programs and robust training and education offerings for your staff. I have seen uplifting results for legacy teams when they have brought on bright, enthusiastic talent and combined it with their experienced engineers – everyone’s performance often lifts. They will bring energy to your shop, and you will have the added dividend of increasing the pool of available, experienced talent. And while it will take 7 to 15 years for them to become the senior engineers and leaders of tomorrow, they will be at your company, not at someone else’s (if you don’t start, you will never have them).

The investment in robust training and education for graduates should pay off for your current staff and potential hires as well. Your current staff, by leveraging training, can improve their skills and productivity. And for potential hires, a strong training program and focus on staff development are attractive attributes in a new company. These are wise investments, as they will pay back in higher productivity and engagement, and greater retention and attraction of staff. You should couple the training program with clearly defined job positions and career paths. These should spell out for your team the competencies and capabilities of their current position as well as what is needed to move to the next step in their career. This clarity about how to progress will be a key advantage in your staff’s growth and retention, as well as in attracting new team members. And in a tight job market, it will let your company stand out in the crowd.

Another tactic to apply is to leverage additional locations to acquire talent. If you limit yourself to one or a few metropolitan areas, you are limiting the potential IT population you are drawing from. Often, you can use additional locations to tap entirely new sources of talent at potentially lower costs than your traditional locations. Given the lower mobility of today’s candidates, it may be effective to open a location in the Midwest, in Rust Belt cities with good universities, or in cities such as Charlotte or Richmond. Such 2nd tier cities can harbor surprisingly strong IT populations that have lower costs and better retention than 1st tier locations like California, Boston, or New York. The same is true of Europe and India. Your costs are likely to be 20 to 40% less than in headline locations, with attrition rates perhaps one-third lower.

And you can go farther afield as well. Nearshore and offshore locations from Ireland to Eastern Europe to India should be considered. Though again, it is worth avoiding the headline locations and going to places like Lithuania or Romania, or 2nd tier cities in India or Poland. You should look to tap the global IT workforce and gain advantage through diverse talent, the ability to work longer hours via a ‘follow the sun’ approach, and optimized costs and capacity. Wherever you go, though, you will need to enable an effective distributed workforce. This requires a minimum critical mass at each site, proper allocation of activities in a holistic manner, robust audio and video conferencing capabilities, and effective collaboration and configuration management tools. If done well, a global workforce can deliver more at lower cost and with better skills and time to market. For large companies, such a workforce is really a mandatory requirement to achieve competitive IT capabilities. And to some degree, you could say IT resources are like oil: you go wherever in the world you can to find and acquire them.

Don’t forget to review your recruiting approach as well. Maintain high standards and ensure you select the right candidates by using effective interviewing and evaluation techniques. Apply a metrics-based improvement approach to your recruiting process (a small sketch follows below). What is the candidate yield of each recruiting method? Where are your best candidates coming from? Invest more in recruiting approaches that yield good numbers of strong candidates. One observation from many years of analyzing recruiting results: your best source of strong candidates is usually referrals, while weak returns typically come from search firms and broad-sweep advertising. Building a good reputation in the marketplace to attract strong candidates takes time, persistence, and, most important, an engaging and rewarding work environment.
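As a minimal sketch of that metrics-based approach, the snippet below computes yield (hires per candidate) by source. The figures are hypothetical placeholders, chosen only to mirror the pattern above of referrals outperforming search firms and broad advertising.

    # Hypothetical recruiting-funnel data: candidates screened and hires made, by source.
    candidates = {"referrals": 40, "search firms": 120, "job ads": 200, "campus": 60}
    hires      = {"referrals": 8,  "search firms": 5,   "job ads": 4,   "campus": 6}

    # Yield per source tells you where to invest your recruiting spend.
    for source in candidates:
        yield_pct = 100 * hires[source] / candidates[source]
        print(f"{source:12s}: {hires[source]:2d}/{candidates[source]:3d} hired ({yield_pct:4.1f}% yield)")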

With those investments, you will be able to recruit, build and sustain a high performance team even in the tightest of markets. While I know this is a bit like revealing your favorite fishing spot, what other techniques have you been able to apply successfully?

Best, Jim Ditmore