The Service Excellence Project is breaking all the rules!

…and that’s a good thing! Some of the most uncomfortable aspects of the Service Excellence program center on the changes we are making to the way we approach this project.

Here are some commonly held rules…and how we are breaking them.

Rule #1 – We decide what projects to tackle and when to tackle them based on resource availability

Because this project is IT Services’ third-highest priority (after two-factor and AIX/PIX), resource constraints have been treated as secondary. What is critical is having a team that represents the whole organization participate in the thoughtful design and implementation of our processes for delivering service excellence.

If you have input for the service excellence process or the underlying tool, training, documentation, etc., these team members are your voice. Seek them out and engage them. The Service Excellence Program Google site shows the membership of the core team and each specific process team.

Rule #2 – Seek input only near the end of the project

Our team is producing design packages that define the strategy for each process. Leadership is engaged when that design is complete to provide feedback and sign-off. Each process will then go through a series of three incremental builds, and each build is further segmented into two parts:

  • One part focused on the tool

  • One part focused on the training and documentation that need to accompany the tool

For each build there will be demos to review the tool configuration and focus groups to gather feedback on the full package of training and documentation. Focus groups will also include hands-on time.

See the Service Excellence Program Google site for detailed build timelines and upcoming demos and focus groups.

Rule #3 – We work independently on tasks laid out by the project schedule

The team has adopted an agile-like timebox approach. A what? A timebox approach.

The team estimated the time they believed it would take to perform the various activities and produce the necessary artifacts, such as:

  • Critical Success Factors: what drives the success of the process?

  • Process Policies: what rules and guidelines do we need to put in place, and hold people accountable to, in order to contribute to that success?

  • Communication Plan: what information do various stakeholders want to hear about this process?

The result was formulated into a seven-week cycle of activities averaging about 20 hours of work per week. The team assembles during dedicated blocks of time to complete each artifact. This approach has two important implications:

1) These timeboxes are not extended. We trust the team’s ability to prioritize their work within the timebox to put the highest-value activities first. Whatever is achieved is what moves on to the next stage.

2) This approach inherently pushes people out of their comfort zones, as they are asked to contribute to the team in ways not traditionally within their immediate sphere of expertise: writing training content, determining process metrics, building process diagrams. Any member of the team may be called on to deliver any of these, and more.

You can view the full series of timeboxes by clicking the schedule link on the Service Excellence Program Google site.

Rule #4 – Activities surrounding configuration of the tool/technology define the critical path of the project

Half of our time for each process is dedicated to producing a thoughtful design before we ever touch the tool. The tool plays a vital role in helping us realize an efficient and effective process, but we are putting the process above the tool and working out how best to use the tool to achieve our objectives.

The project is 80% process (and how it contributes to providing excellent service) and 20% tool.

These thoughtful considerations are visible in the design packages being produced for each process. For a sample, look in the materials section of the Service Excellence Program Google site.

tl;dr: This project is taking a thoughtful approach to how we deliver these processes by linking the process objectives to the design; building and testing the tool, training, and documentation to achieve that design; and deploying the final result as a single, imperfect release.
