Team Performance

💡
For context, I use Scrum. Also, I left out metrics that are more on the DevOps and Tech Financials side. Those will be for another article.

Different companies use different key performance metrics. Different teams also require different measurements. The following are some of the metrics that I usually look into:

Productivity & Quality Metrics

Most of these are readily available in tools like Jira or GitLab.

  • Estimation Accuracy – measures how closely actual effort or time spent aligns with the estimates

  • Velocity – measures the amount of work completed per sprint (e.g., story points)

  • Cycle Time – measures the time taken from starting a task to completing it. Shorter cycle times often indicate better efficiency

  • Lead Time – measures the time from when a request is made to when it is delivered. Useful for tracking bottlenecks in processes

  • Scope Change Rate – measures how frequently changes are introduced to the sprint (tickets being added or removed after the sprint has started)

  • Merge Request Throughput – measures how many merge requests are completed over a period

  • Deployment Frequency – measures how often new code is deployed to Production

  • Defect Escape Rate – measures the percentage of defects that are found after a release, meaning they escaped detection during development and testing
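As a rough sketch of how a few of these can be computed from ticket timestamps (the dates, field names, and defect counts below are made up for illustration, not from any actual Jira or GitLab export):

```python
from datetime import datetime

# Hypothetical ticket data with ISO dates; field names are illustrative.
tickets = [
    {"requested": "2024-05-01", "started": "2024-05-03", "done": "2024-05-06"},
    {"requested": "2024-05-02", "started": "2024-05-02", "done": "2024-05-09"},
]

def days_between(start, end):
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# Cycle time: started -> done. Lead time: requested -> done.
avg_cycle = sum(days_between(t["started"], t["done"]) for t in tickets) / len(tickets)
avg_lead = sum(days_between(t["requested"], t["done"]) for t in tickets) / len(tickets)

# Defect escape rate: defects found after release / all defects found.
found_before, found_after = 18, 2
escape_rate = found_after / (found_before + found_after)

print(avg_cycle, avg_lead, escape_rate)  # 5.0 6.0 0.1
```

Nothing fancy, but seeing the arithmetic makes it obvious that lead time can only ever be equal to or longer than cycle time, since the clock starts earlier.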


Story Point Estimation

I find it easier to use story points (SP) for estimating the work needed for tasks. It may be confusing for those who are used to purely time-based estimates.

Story point estimation takes into account more than just the time required: it also covers effort, complexity, and confidence. Asana's Story Points Estimation Matrix makes it easier to understand how this kind of estimation works.

This is what we have been using for years now:

  • Each ticket is value-adding. It should be certified by the Quality Engineer and be releasable.

  • Fibonacci series – the lowest is 1 and the highest is 5. Tasks estimated higher than 5 should be broken down

  • Everyone needed for the task gives an estimate.

    • Even the juniors or new members (less familiar with tasks)

    • If there are differences, we hear each other out.

  • Get a baseline first. For example, start at 25 story points in a sprint.

    • Review what happens and go for continuous improvement. If 25 is too low, then go higher. Eventually, you will find the perfect velocity and get a sense of more realistic work and deadline arrangements.
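The rules above can be boiled down to two small checks; this is just a sketch of the logic, with made-up role names:

```python
# Points follow the Fibonacci series capped at 5; everyone estimates;
# a spread in estimates triggers a discussion before settling.
ALLOWED_POINTS = (1, 2, 3, 5)

def needs_discussion(estimates):
    """If estimates differ, the team hears each other out first."""
    return max(estimates) != min(estimates)

def needs_breakdown(points):
    """Tickets estimated above 5 points should be split into smaller ones."""
    return points not in ALLOWED_POINTS

# Everyone needed for the task gives an estimate, juniors included.
estimates = {"senior_dev": 3, "junior_dev": 5, "qa": 3}
```

Here `needs_discussion(estimates.values())` is true (3 vs 5), so the team talks it out, and an 8-point ticket would fail `needs_breakdown` and get split.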

Using An Icebox

Obviously, creating tickets with clear requirements helps the team estimate and break down tasks properly, resulting in better overall metrics. As far as ticket creation goes, the team may come up with a lot of ideas that are not yet mature (no clear requirements, not urgent, and possibly of low importance). Instead of cluttering the backlog with these ideas, we use an icebox.

The icebox can be as simple as a notes list, a separate Jira project, or Jira Product Discovery (and similar tools). This helps cycle time and lead time: ideas are parked somewhere else and not counted in the reports.

Capacity Planning

  • Take note of holidays and planned leaves. Remember, holidays are not always planned either lol

  • If someone gets sick, can anybody else take their task?

  • Meetings? Trainings? Other things that are not directly related to development
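The three bullets above are basically subtraction; here is a minimal sketch with assumed numbers (the sprint length, meeting allowance, and names are all placeholders to tune per team):

```python
# Hypothetical sprint capacity math: working days minus shared holidays,
# individual leaves, and a flat allowance for meetings and trainings.
SPRINT_WORKING_DAYS = 10
SHARED_HOLIDAYS = 1        # holidays hit everyone
MEETING_ALLOWANCE = 1.5    # assumed per-person overhead, tune per team

planned_leaves = {"ana": 2, "ben": 0, "carl": 1}  # illustrative names

def person_capacity(leave_days):
    return SPRINT_WORKING_DAYS - SHARED_HOLIDAYS - leave_days - MEETING_ALLOWANCE

total_capacity = sum(person_capacity(d) for d in planned_leaves.values())
print(total_capacity)  # 19.5 person-days for the sprint
```

The "can anybody else take their task?" question doesn't fit in a formula, but the head-count math at least tells you how much slack exists when someone calls in sick.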

Merge Requests

A lot of the back-and-forth action can be minimized if the quality of code is high and there are automations in place.

  • Business rules / acceptance criteria

    • Everyone should be on the same page. Pay attention and ask questions during backlog grooming.

    • Participate actively in sprint planning. Ensure everyone understands the acceptance criteria. Encourage people to speak up if something is unclear.

  • Code quality

    • It's better if there are coding guidelines that the team adheres to. These standards can be fed into the tools we use on top of the default rules (for coding style, static analysis, security scans, etc.).

    • AI-generated code should still be reviewed.

    • Unit testing and system integration testing. Basically, testing locally or in a dev environment before merging.

    • Use the pipeline, automate everything

  • Code reviews

    • MR changes should be as focused as possible. Don't add 1000 changes in one MR. Seriously?? That encourages people to comment LGTM without reviewing properly.

    • Comments should be direct to the point. No word salad.
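One way to keep MRs focused is a small local check that counts changed lines before pushing. The 400-line threshold here is my own assumption, not a rule from the team; `git diff --numstat` is standard git:

```python
import subprocess

MAX_CHANGED_LINES = 400  # assumed threshold; tune to your team's taste

def parse_numstat(numstat_output: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat_output.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files are reported as "-"
            total += int(added) + int(deleted)
    return total

def changed_lines(base_branch: str = "main") -> int:
    out = subprocess.run(
        ["git", "diff", "--numstat", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_numstat(out)

# Usage (inside a git checkout):
#   if changed_lines() > MAX_CHANGED_LINES:
#       print("Consider splitting this MR before asking for review")
```

Wiring something like this into a pre-push hook or pipeline stage nudges authors before a reviewer ever has to say "this is too big".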

Release Engineering Processes

It would be a dream to push to Production without much manual intervention. Proper guardrails should still be in place.

  • Automated regression and smoke tests - to ensure nothing breaks before and after release

  • Feature flags and A/B testing – tools like LaunchDarkly and Split.io can control the targets of feature releases and run experiments

  • Canary deployments - progressive rollout and automated rollbacks

  • Incident monitoring and alerts system
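The core trick behind percentage-based flags and canaries is stable bucketing. This is a sketch of the idea, not LaunchDarkly's or Split.io's actual API:

```python
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically bucket a user into 0-99 and compare to the rollout %.

    Hash-based bucketing keeps each user on the same side of the flag
    across requests, which matters for canaries and A/B tests alike.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

# Start a canary at 5% of users, widen it as monitoring stays green.
canary_users = [u for u in ("u1", "u2", "u3") if in_rollout(u, 5)]
```

Real tools layer targeting rules, kill switches, and automated rollback on top, but the deterministic hash is what makes a progressive rollout reproducible.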


Learnings

  • Standards, templates and automation. Wherever possible. Yes. :)

  • Continuous improvement

  • What if no one understands what a certain member does? Like if he is the sole service engineer and nobody understands his tasks. lol sad This shouldn't happen, but just in case:

    • Get somebody outside the team who can check or review (not sustainable)

    • Upskill or hire lol

    • Actuallyyyy, there may be people who can still comment at a high level. They can ask the right questions even if they don't know the exact syntax or process. I mean, the one who created the ticket (hopefully not also the SE) knows what should be done, right?

  • Swarming. Multiple people working on the same ticket so it's completed faster. Somebody does the development while another does unit testing. This helped us a number of times, especially for urgent tasks.

  • Pair programming. Code together. You can do this even remotely with editor extensions or web-based IDEs and similar.

  • Know when your comment should be parked for another time, especially if it will completely change the scope of the work. Are you suddenly going to refactor everything? Does this mean the requirements weren't clear, or was the solution not well thought out? Will pursuing the current implementation give way to more issues, like maintainability problems or other tech debt? The decision depends on many factors, including the urgency and criticality of the task. Something has to give way, no? Be clear about the tradeoffs.

Scope Creep

Yes, a separate section. What if the scope changes all the time? Everything is priority zero? hehe. For the sake of covering all scenarios, "it depends". 🤣

Here are some things that helped us:

  • Having a clear roadmap and approved requirements documents

    • Approvals from the necessary departments (e.g., security, enterprise architects, data privacy) as soon as possible. Like during PRD creation. We don't want the <insert important department> to suddenly tell us the solution will not work because etc etc ← This happened, by the way. HAHAHA So traumatizing. Jk

    • Be aware of the priorities and the idea behind the rank. For example, Item 1 is important but not that urgent and can be re-scheduled.

    • Be clear about the tradeoffs. What will not be completed because this new thing snuck its way into the active sprint?

  • But creating requirements documents is too tedious or takes so long

    • Aww, you know what takes so long too? Firefighting and being confused about why something does not work. Jk

    • Anyway, normally you don't create the requirements a day before the implementation, right? Usually this happens waaayy before development starts, and it's not hard if you know your team's mission and vision. Or ask chatgpt loljk

    • Requirements should be backed by data, whether from internal tools or global benchmarking. Do you have this readily available?

  • Protect the team (and the sprint). This is for items that are trying to sneak in but are not important, not critical and not urgent. In other words, they can be scheduled for another time.

    • The team leads are the first safeguards. Knowing your domain, your scope, etc. This is becoming repetitive, but seriously, if you know what you are doing, you will make the right decisions. Whether the new item takes the place of another ticket in the active sprint or needs to be scheduled for the next one, everything depends on making informed decisions.

      • Our Product Owner knew our domain and had high-level technical knowledge of our services. She was able to safeguard us from other Product and Business Teams.

      • Tech Leads were able to shield the team from unnecessary context-switching because they didn't need everyone to make decisions every time.

      • All of these leads had good working relationships with other teams, so yes, it was a network of give-and-take relationships. We help you with this, please help us with that. I don't remember anyone who ever withheld their support.

    • If possible, ask other teams in advance if they are planning on something that requires our time

    • Protecting is not the same as saying no all the time. No. Just no. 😅

  • Knowing your velocity and your teamā€™s individual strengths and weaknesses. Primary assignees are crucial.

  • Know if someone can help your team. Can this be delegated? What if the Operations team can also handle this, and they have readily available dashboards? It pays to know your network. It's a blessing to have friends. :)

  • Meetings can be scope creeps too, and they are not easily seen in the backlog. Add support tickets to the sprint so it's clear to anyone looking at the board.

    • Someone from Product thought we were not working on enough tickets. The truth is we were swamped with support items, but they were not logged in the active sprint.

  • Scrum masters!!

  • What if itā€™s the acceptance criteria that requires changes?

    • Is it a minor change? The team should agree on whether it can be accommodated. For example, it is a "small" thing, but it will require re-doing manual regression testing. That's not so minor anymore, right? But if the team's got the time and the team's ok with it, then go.

    • What if it was just plain wrong? Say that implementing the current criteria will no longer add value to the product. Short answer: drop it. lol Why spend more time on this when you can do something else? Plus, don't you think there's an even bigger issue?

  • Do an honest retrospective. There is a reason why this keeps on happening.

    • Poor planning? No clear direction? No clear boundaries? Miscommunication?

    • Many Production incidents? Soo are we releasing low-quality code? Any manual processes that cause these? Missed reviews? No guardrails?

    • Meetingssss?

    • Are you a pushover? Jokeee but are you? Jk again 😁

  • Being agile and running around like a headless chicken are different. Lesson: don't be chicken. 🐓
