Monday, June 30, 2008

Service Desk - The First Port of Call for IT Services

The importance of the Service Desk in an IT support model cannot be emphasized enough. It is the first port of call for any issue users face, so its role spans the various support teams - be it application, system or network support. Be it a nagging Blackberry issue or a slow application, users always call the service desk first and form an opinion of the quality of support from the quality of the response they get. One could have the smartest guys in the L2/L3 teams, but if the user-facing L1 team (the service desk team) does not inspire confidence in its ability to handle various user issues, then the IT support is always questioned.

It is always a challenge to find smart, motivated team members for the service desk. Those who are good will not stick around (sounds familiar?) and those who stick would want to move into something more technical after a few months. The key to managing this team is to build a good career plan and take care of the aspirations of these team members.

In an outsourced environment, this is also the team that will see the maximum attrition, and so there is the risk of losing client information and knowledge. Since this team goes through a natural learning curve, this acquired latent knowledge of the client's environment is key to a successful continuous improvement program.

While issues in the Data Center or Enterprise Network will impact a large part of the organization, most users get ticked off with the service desk over what are typically Sev3 or P3 incidents (Severity 3 or Priority 3 - incidents which impact just a single user in the organization, like a broken keyboard or a mailbox issue). So, while all efforts in running the show should focus on issues that impact a large set of users (reducing Sev1s and closing them faster), for managing users' perception of the service there is no better place than the Service Desk.
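The severity/priority classification mentioned above is typically derived from an impact-urgency matrix. Here is a minimal sketch; the levels, labels and mappings below are illustrative assumptions, not a mandated standard, and real ITSM tools let each organization tune this matrix:

```python
# Illustrative ITIL-style priority matrix: priority is derived from
# impact (how widely the issue is felt) and urgency (how fast it must
# be fixed). Levels and labels here are assumptions for illustration.

IMPACT = {"organization": 1, "department": 2, "single_user": 3}
URGENCY = {"high": 1, "medium": 2, "low": 3}

# Rows = impact level, columns = urgency level -> priority (P1 highest)
PRIORITY_MATRIX = [
    ["P1", "P1", "P2"],   # organization-wide impact
    ["P1", "P2", "P3"],   # department-level impact
    ["P2", "P3", "P3"],   # single-user impact (e.g. a broken keyboard)
]

def classify(impact: str, urgency: str) -> str:
    """Map an incident's impact and urgency to a priority label."""
    return PRIORITY_MATRIX[IMPACT[impact] - 1][URGENCY[urgency] - 1]

print(classify("single_user", "medium"))   # typical service desk call -> P3
print(classify("organization", "high"))    # data center outage -> P1
```

This makes visible why the service desk lives mostly in P3 territory: a single-user issue can reach P2 at best, no matter how urgent it feels to that user.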

Tuesday, June 24, 2008

Increasing Revenues with Workforce

Most offshore IT services companies are increasing their employee base by thousands each year. The simple logic for most has been that as revenues increase, the need for people to deliver services also increases - so as revenue has grown, so has the number of employees. Essentially they are growing the same way they grew when they were a fifth of their current size in revenue terms.


Two simple rules can spoil the party:
  • What works when you are small does not necessarily work when you are big (or may not even work when you are small, but at a different time)
  • What goes up, comes down

So, while these companies are pushing their employee base in pursuit of more revenue, what will happen when the inevitable slowdown comes?

These companies run the risk of being left with a sea of workforce which is not required and no longer productive, sitting on a "bench" for future business. The high cost of retaining these people will just not be sustainable and will result in large-scale layoffs. This phenomenon of large-scale layoffs has hardly been seen in offshore countries like India. There have been occasional companies that went bust, resulting in loss of employment, but they are so few that most techies would have to think hard to name a company they knew which went bust.

In comparison, software product companies have a fantastic model when it comes to this correlation of revenue and employee base. Productivity (dollars/employee) for software product companies is very high. Once the software is developed and released, there is hardly any incremental cost to selling that same software to any number of customers. They can scale revenues literally overnight by just handling production (cutting CDs and license management), while for a technology services company this is just not attainable, as it would need to hire hordes of people overnight, which is not possible without compromising on the quality of resources.
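The contrast can be made concrete with some back-of-the-envelope arithmetic. The figures below are hypothetical, chosen purely to illustrate linear versus delinked scaling, not actual company data:

```python
# Hypothetical figures to contrast the two models; not real company data.
services_revenue_per_fte = 50_000     # $/employee/year for a services firm
product_revenue = 1_000_000_000       # product firm's annual revenue ($)
product_headcount = 5_000             # product firm's employees

# A services firm must add headcount roughly linearly with revenue:
fte_needed_for_1b = 1_000_000_000 / services_revenue_per_fte
print(f"Services firm FTEs needed for $1B: {fte_needed_for_1b:,.0f}")

# A product firm's dollars/employee is far higher, and revenue can grow
# without a matching growth in headcount:
print(f"Product firm $/employee: {product_revenue / product_headcount:,.0f}")
```

With these assumed numbers, the services firm needs 20,000 people for the same $1B the product firm earns with 5,000 - and a downturn leaves the services firm carrying that headcount.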

To delink employee base growth from revenues, most technology services companies are moving towards higher-value services. This may include services around their own products (which is not easy, as there are hardly any product successes to make part of the story) or activities like consulting. The third option is to use strategic, innovation-driven improvements in delivering IT services. This could be through improved automation and tools or other technology levers which help drive higher revenues with the same base of FTE count in a project.

Clearly, the industry will belong to those who move from the current cost-based revenue model to a value-based revenue model.

This is already happening and is bound to be the game plan of most Tier 1 and 2 offshore-based IT services companies. Since the rules of the game are changing, it again throws up an opportunity for a new player to move into the top league, and for some laggards to potentially drop out if they don't read the future and act fast enough.

Sunday, June 22, 2008

Remote Infrastructure Management

Today a large part of the services involved in supporting the delivery of IT Infrastructure Services can be delivered remotely. Those that cannot are essentially activities like:
  • end user computing incidents which cannot be resolved remotely
  • tape management related issues in data centers
  • standing up new environments or installing new equipment or services (though these are not really operational activities but project-based activities)
  • face to face business and user interaction
Automation, the internet and reliable links have only helped this ability to handle most support issues remotely. This has formed the basis of a strong proposition from offshore-based players to deliver these services from remote locations with a small team onsite.

The large traditional players have not been far behind, and in the past few years have added South African and South American locations (actually many more, but these are examples) to deliver services, moving from a traditional onsite/onshore-heavy model to an offshore- or nearshore-heavy model.

On average, more than 90% of the effort in managing IT Infrastructure can now be run from a remote location. With virtualized desktops and other technologies, this will only increase.

What about the proverbial "one man and a dog" data center?

Thursday, June 19, 2008

Wave of Virtualization - Case of Desktop Virtualization

Any "transformation" plan in the IT Infrastructure space which does not pay tribute to virtualization at some level is not a plan.

While there are benefits to virtualization - it delinks the application layer from the infrastructure layer and creates virtual pools of service layers - it does not come without its own problems. However, the case for buying less hardware and maxing out capacity on investments already made makes it strong.
In the case of desktops, virtualization is ensuring that technology comes full circle, moving back to a host-terminal model with thin clients. This is what it moved away from, fuelled by the Wintel wave of client-server technology. Having everything at a central place did not deliver the performance each user wanted, with the bottleneck at the host. The promise of client-server technology was a mix of host- and client-side computing: applications were split into a server and a client component. This soon peaked, leading to management problems at the client side and a high cost of administration. These include:


  • The need for periodic hardware refreshes, fuelled by new OS releases and cheaper hardware options every few years
  • Challenges of patch management for OS and application upgrades
  • Anti-virus management

Most enterprises now have a centralized policy management system where the end-user has limited control over his desktop. Users don't have admin access, with ports and interfaces (FDD, USB, CD/DVD drives etc.) locked down. Updates are pushed centrally and administered from the back end. So, essentially, a desktop is already working like a terminal.

Leading the revolution was the largest application ever - the Internet, which delivered everything and more through the humble browser. The browser epitomizes a dumb client, with limited configuration and need for little client-side computing infrastructure. Most of the current hardware requirements for running a browser come from the operating system on which browsers run. As browsers evolved (Wintel at work), they required upgraded hardware for a better browsing experience, but there are options available to browse with little local computing.

Applications are following this lead - taking on MS-Office head-on are applications like Google Docs (http://docs.google.com/) or Zoho (http://www.zoho.com/), which now make it possible to work on documents, spreadsheets, presentations and more with a simple browser.

At the enterprise level, Salesforce has been the poster boy of Software as a Service.

As these two trends emerge, the applications need to take the lead in making a strong case for low/no computing at the user end. Till then, desktop virtualization will mimic the future with assets of the present.

IT Infrastructure Services Offshoring 2.0

While the offshore-leveraged outsourcing industry has grown through its first phase of evolution over the last few years, it will soon need to morph and change its traditional approach due to now-pervasive business environment conditions. Some of the key trends:
  • The shrinking labor arbitrage makes for a poorer business case based on just people-to-people replacement cost. An offshore FTE used to be a few times cheaper than an onsite FTE. They are still cheaper to hire, but the difference has shrunk dramatically and continues to shrink
  • Most biggies (Tier 1 outsourcing players) who traditionally provided services from local countries in the US and Europe now have a big offshore story. In fact, for some, offshore delivery centers are now comparable in size to those of traditional offshore-based players
  • Most services are now commoditized, except lock-stock-and-barrel deals. These deals are typically blended with asset acquisition and re-badging, which some of the larger players are embracing, though they see it impact their operating model and, more importantly, their profit model
  • The broader trend of IT being more aligned to business: the need for a stronger business case and charges linked to business value generated (including trends like utility computing, transaction-based charging etc.). With chargebacks being explored as a sign of maturity in IT ops, the CIO is supposed to lead a profit center rather than a perpetual cost center

With these the following will emerge:

  • Striation of services, with finer division in SLAs and quality of infrastructure based on the business process being supported
  • Tremendous focus on SLAs and identifying meaningful metrics for managing the overall Quality of Service
  • Service delivery out of new havens of offshoring in Asia Pacific and East Europe
  • Cost pressures which have till now not been felt since the labor arbitrage often masked incidental costs which the companies were ready to absorb due to high margins

Amidst all this, the current offshore-based companies in IT Infrastructure Management or Remote Infrastructure Management (RIM) will focus more on innovation. They need to innovate to adapt to the changing waves. This will see some existing players drop out of the race (as laggards, if they fail to read the trend and act) and some new players emerge who may not be at the top of the pack in wave 1.0 but will read the trends and emerge ahead in wave 2.0.

Innovation will come in delivery models, automation, costing models and strategic alliances. There will also be a lot of focus on processes, as companies try to run them from newfound destinations like Asia Pacific and East Europe.


Wednesday, June 18, 2008

ITIL v3 : First Impressions

I did my ITIL v2 Foundation certification last year - at the fag end of last year, actually - and I had initially enrolled to take the test for ITIL v3. It was too late by the time I realized my mistake, since I had already entered my credit card information on the test scheduling site. Luckily there was some error in entering the card number, the transaction was nullified, and I re-scheduled for v2, for which I had been preparing.

Now, a few months down the line, I got a chance to skim through ITIL v3. I am still browsing, but first impressions:
  • Has a major thrust on alignment with business, in line with the trend of the CIO increasingly reporting to the CEO or COO and not to the CFO or some VP - Admin (!! .. yeah)
  • Is more cognizant of the use of ITIL in outsourced scenarios. In fact it even recognizes that services will be delivered from offshore
  • Focus is now on "Service" than elements that form the basis of service. So there are things like Service Strategy, Service Design, Service Transition and Service Operation. Also there are elements like Service Design Package (SDP) and Service Level Package (SLP) and a CMDB like Service Knowledge Management System (SKMS).
  • Has a more marketing oriented feel and approach - be it the alignment to business or the abundant use of terms "strategy" and "value". It even has its four "Ps" !
  • It understands that there are service providers increasingly providing such infrastructure support services. It even takes into account the competitive environment for these service providers

Like everything that tries to stay current and keeps evolving, ITIL v3 is a step to keep the ITIL context relevant and appealing to most decision makers. It continues to be a guiding framework and not prescriptive for specific environments.

Tuesday, June 17, 2008

Importance Of Service Transition

The importance of a good transition when outsourcing IT infrastructure services cannot be emphasized enough. A poor transition can even disrupt the outsourcing program and cause a lot of business unrest.

Why is service transition so critical?
  • Change of personnel -- either the incumbent provider walks off or an insourced team is displaced (unless most of the existing team is re-badged) -- takes away the operating knowledge built up over the years with the incumbent team. Further, the new team will face even more disruption as they transition and will be more ill-equipped than the incumbent, presenting a double whammy
  • Change of environment - servers changing to a new provider or moving to a new hosting facility are bound to throw up numerous situations (often some of the servers will not have been shut down for years, and no one will know for sure how they will behave once restarted after the move). More importantly, no one will know how to fix the issues that may surface when a server misbehaves
  • Tools/Automation impact - most often the tools change, and so it takes a few weeks at a minimum (for mid to large enterprises) to bring some sanity to monitoring and escalation. Most monitoring tools also have a learning curve (self-attained or crafted) that builds over time. So, in all this chaos, the tools are not there to monitor during the crucial period when business is most likely to get impacted
  • Differing levels of interest among stakeholders - while the business may be fuming at IT's decision to change the provider, the outgoing team will (let's say, for lack of better terms) not be interested in a good transition, since they are getting impacted professionally anyway, and sometimes personally if they are losing their employment

This is by no means an exhaustive list, but these are surely the top issues. Essentially there are too many moving targets and moving guns that come into play. One way to mitigate or minimize the risk is to stagger the transition: if not all the targets are moving, or if the guns are not moving, it becomes easier. Possible modes could be:

  • Stagger across regions/geographies
  • Stagger across service streams (say, Data Center Ops goes first, then End User Computing and then Service Desk)
  • Stagger activities (say, the tools continue with the incumbent till the services are taken over by the new provider's team)
  • Stagger across different user groups (say, IT goes first, then HR and then ... finally .. yeah, you got it - the Finance team!)

There is no silver bullet, but what is important is that sufficient thought is given to this and mid-course mitigation strategies are drawn up.

Monday, June 16, 2008

Estimation Models For IT Infrastructure Outsourcing Providers

Estimation models play an important role when proposing cost elements for large IT Infrastructure Services contracts. The models need to cover all aspects of delivering the service which impact cost:
  • Manpower
  • Tools
  • Connectivity and enabling infrastructure
  • Administrative costs like travel and communication
  • Corporate overheads like physical space and air/light/water etc.

Often companies use composite rates which combine some of the above into a single unit rate. These then need to be linked to the volume of work to develop a costing solution that has sufficient elasticity but is, at the same time, optimal and competitive.
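A composite-rate model of this kind can be sketched as per-unit rates scaled by volume drivers. The rate names, dollar figures and volumes below are invented for illustration only; real models are far more granular:

```python
# Toy composite-rate estimation model. All rates and volumes are
# invented for illustration; real models are far more granular.
from dataclasses import dataclass

@dataclass
class CompositeRate:
    name: str
    monthly_rate: float  # $ per unit per month, bundling manpower, tools,
                         # connectivity, admin costs and overheads

RATES = [
    CompositeRate("server_managed", 250.0),
    CompositeRate("desktop_supported", 12.0),
    CompositeRate("service_desk_ticket", 8.0),
]

# Volume drivers for a hypothetical engagement
VOLUMES = {"server_managed": 400,
           "desktop_supported": 5000,
           "service_desk_ticket": 3000}

def monthly_cost(rates, volumes) -> float:
    """Total monthly cost = sum of (composite rate x volume)."""
    return sum(r.monthly_rate * volumes.get(r.name, 0) for r in rates)

print(f"Estimated monthly cost: ${monthly_cost(RATES, VOLUMES):,.0f}")
```

The elasticity the post mentions comes from the volume drivers: if the client adds 50 servers, the estimate scales mechanically, without re-costing manpower, tools and overheads from scratch.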


In the absence of standard models, these have lent a competitive edge to some of the leaders while other companies are still trying to find theirs. Since many companies evolved from the application services space, which was typically Time & Material contracts, they find it difficult to build a solid estimation model for fixed-price, managed services contracts.

Reliance Communication's Acquisition of Vanco

This recent news points to a possibly bigger play by Reliance Communications than just being a network service provider. While it does add a global footprint, it is interesting that this is perhaps their first foray into the virtual telecom services space.

Vanco is a Virtual Network Service Provider - it does not have its own extensive network but rather front-ends with customers for network services, in turn sourcing transport from multiple other providers. While this gives them access to multiple providers, possibly dynamic best routes and a higher degree of business continuity, they really suffer from a lack of control over services, as they depend on their alliances with the actual providers, who are also in the same space.

Network services for customers are always provided by local companies. Most offshore-leveraged outsourcing companies find it difficult to have a solid answer to the traditional providers - or, more often, to the traditional alliance between a local provider and one of the big outsourcing firms who would have jointly worked on multiple engagements in that geography. The offshore-based players then attempt loose tie-ups for specific opportunities.

If Reliance gets into offshore-based IT services, Vanco can thus provide an alternative.

Sunday, June 15, 2008

Where to Find Dollars in an IT environment

Where are those hidden dollars, those waiting-to-be-unchained values in an IT department? There are numerous nooks and corners hiding opportunities to save. Bad times like these throw up a lot of opportunities to bring cost efficiency, and the role of an outsourced provider is all the more critical in such situations. Of course, there has to be the right level of motivation built into the contract to motivate the provider to help the customer find opportunities to save.

Outsourced providers are well placed for this: they have a view across their engagements of such common opportunities and, since they run the operations of a particular customer, they have the requisite information to enable such cost savings.

There can be numerous such opportunities; I can't cover them all in this post, but here is a start:
  • Unused licenses lying in the enterprise (and many find themselves getting budgets to buy more of the same for new requirements). These are more for end user software like MS Project, Visio etc.
  • Underutilized hardware in parts or whole
  • Spare network bandwidth in tier-2 or tier-3 parts of the network, away from the central network topology which is more closely monitored
  • Groups of support teams (in-house and or outsourced teams) providing services very similar in nature as part of different businesses
  • Multiple data centers or server rooms
  • De-centralized environments making it necessary to have lumpy teams across the country or globe doing pretty much the same stuff.
  • De-centralized purchasing of IT assets

This is just the beginning and more are sure to follow.

Run Book Automation

Traditionally, the focus in managing IT infrastructure has been to leverage automation to detect abnormal activity through monitoring of networks, systems and applications. Once an error is detected, an alert is raised for the operator, who then logs into the system to identify the possible causes of the failure before taking action to get the service restored.

Move on a couple of decades to the present: the focus is now on rectifying the error through automation, not just bringing it to the notice of the operator or administrator. Run Book Automation products aim to bring about this new revolution. Of course, there will be learning curves (for the software, before it starts becoming more intelligent, like routing tables in routers) and workflows to be built in, but clearly these tools will deliver tremendous value beyond traditional monitoring.
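The shift from monitor-and-alert to automated remediation can be sketched as a runbook dispatch loop. Everything below - the alert types, the `restart_service` step, the escalation fallback - is a placeholder for what a real RBA product would wire into actual systems:

```python
# Minimal sketch of a run book automation loop: alerts are matched to
# runbooks, each a list of remediation steps tried in order before
# escalating to a human operator. All names here are illustrative.

def restart_service(alert):
    # Placeholder remediation; a real runbook step would call out to
    # the affected host and verify the service came back.
    return alert["attempts_left"] > 0

RUNBOOKS = {
    "service_down": [restart_service],
    # other alert types would map to their own step sequences
}

def handle_alert(alert):
    """Try each automated step; escalate to an operator if none succeed."""
    for step in RUNBOOKS.get(alert["type"], []):
        if step(alert):
            return "auto-remediated"
    return "escalated to operator"

print(handle_alert({"type": "service_down", "attempts_left": 1}))
print(handle_alert({"type": "disk_full", "attempts_left": 0}))
```

The key design point is the fallback: automation handles the known, repeatable failures, and anything without a matching runbook (or where every step fails) still lands with the operator - exactly the traditional monitoring behavior, now as the exception rather than the rule.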

There is a sort of frenzy in this space and majors like HP and BMC have already acquired niche RBA (Run Book Automation) companies while some others continue to emerge as strong players.

This will also bring a lot of value to outsourced service providers, who will be able to provide improved services with fewer staff and better precision in service restoration when outages impact the business. Surely one of the top spaces to watch, from my perspective.