When I was starting out as a manager, I was overwhelmed by the task of defining the next roadmap. I didn’t know where to start, but I also wanted everything polished come next year.
First week of January, I saw updates from the senior manager of another department. She was going to hold a series of workshops so her teams could work on their roadmap together. Our team reached out to her and, thankfully, she was accommodating enough to let us sit in.
Lesson #1: What is the team’s mission?
Lesson #2: You don’t have to do it alone.
Company roadmap, team roadmap and everything in between
A mesh team has a consolidated Product, Business and Tech roadmap. There’s just one because all these units should also act as one inside the mesh.
The Product and Business Roadmap focuses on product vision, features, and releases.
The Technology Roadmap outlines the strategic direction for tech improvements, infrastructure updates, and engineering priorities.
All items should be aligned to the company’s goals.
Conceptualization
Focusing on the tech roadmap,
Identify opportunities and challenges (e.g. adopting new technology, paying down tech debt)
Do a retrospective (e.g. for team process improvements)
Check the icebox for parked ideas
Talk to the stakeholders
Topics and examples
New tech adoption
Migration to new tech stacks
Going serverless
Integration of AI and ML to create data products
Tech debt reduction
Refactoring legacy code
Resolving “TODO” markers left in the code
Improving database queries
Infrastructure upgrades
Server upgrades, database optimizations, network enhancements
Cloud migration (e.g. moving from on-prem setup to cloud like AWS, GCP and Azure)
Using service mesh
Security and compliance
Security patches, vulnerability fixes, compliance updates (e.g., PCI DSS, ISO 27001)
GDPR, HIPAA, and other regulatory adherence
Implementing Zero Trust Architecture
Improved authentication & access controls (e.g., MFA, SSO)
Implementing audit logs and security monitoring
Quality engineering
End-to-end automated tests
Test case management
Cost optimization
Decommissioning unused resources
Migration to Graviton instances
Site reliability engineering
Implementing caching strategies
Database indexing optimizations
Load balancing, disaster recovery planning
AI-powered autoscaling
Release engineering
Automating build, test, and deployment pipelines
Introducing feature flags for controlled releases
Implementing progressive delivery (canary, blue-green deployments)
Observability and monitoring
Enhancing logging (e.g. Splunk, Grafana, Datadog)
Improving real-time alerting and incident management
Adding SLOs (Service Level Objectives) and SLIs (Service Level Indicators)
Tooling and developer experience
Standardizing API documentation & code guidelines
Enhancing developer onboarding processes
Improving internal dev tooling (e.g., dashboards, better debugging tools, code linters)
People
Domain knowledge transfers
Up-skilling, trainings and certifications (especially when the team is expected to handle an unfamiliar project or technology)
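One item from the list above is worth a concrete sketch: feature flags (under release engineering) can start out as a simple percentage-rollout lookup that gates a code path. Everything here (flag names, percentages, the hashing scheme) is illustrative, not any particular library’s API:

```python
import hashlib

# Hypothetical flag table: flag name -> rollout percentage (0-100).
FLAGS = {"new-checkout": 25}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing flag+user keeps each user's bucket stable across requests,
    so the same user always sees the same variant.
    """
    pct = FLAGS.get(flag, 0)
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < pct

if is_enabled("new-checkout", "user-42"):
    pass  # new code path (the canary)
else:
    pass  # existing, stable path
```

Ramping the percentage up over time is essentially what canary releases do; a real setup would load the flag table from a config service instead of a module-level dict.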
Prioritization
There are many prioritization frameworks out there, but the latest one we used was Intercom’s RICE framework.
Reach - the number of people or events in a given time period (e.g. active users per month or transactions per month)
Impact - a scale, weighted mostly toward financials. For example, anything that will bring in 1B and up is considered massive impact:
3 for “massive impact”
2 for “high”
1 for “medium”
0.5 for “low”
and finally 0.25 for “minimal”
Confidence - any data/metrics to back up the claims?
100% for “high confidence”
80% for “medium”
50% for “low”
0% for “no confidence at all”
Effort - estimated person-hours per role
The RICE score is computed as (R * I * C) / E
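Putting the scales above together, the whole framework fits in a few lines. The backlog items, reach numbers, and effort estimates below are made up for illustration:

```python
# Minimal sketch of RICE scoring: (Reach * Impact * Confidence) / Effort,
# using the impact (3/2/1/0.5/0.25) and confidence (1.0/0.8/0.5/0)
# scales described above.

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (R * I * C) / E."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Illustrative backlog: (reach per month, impact, confidence, effort in person-hours)
backlog = {
    "migrate-to-cdk":   (5_000,  1,   0.50, 120),
    "fix-slow-queries": (50_000, 2,   1.00, 40),
    "add-audit-logs":   (10_000, 0.5, 0.80, 80),
}

# Highest RICE score first.
ranked = sorted(backlog, key=lambda k: rice_score(*backlog[k]), reverse=True)
print(ranked)
```

Ties and near-ties are where the “RICE is just a tool” caveat kicks in: the score orders the conversation, it doesn’t end it.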
Learnings
The Senior Manager gave us a copy of her presentation. I thought I’d seen something similar before, I just couldn’t remember where. I don’t know how we eventually found it, but apparently it was our pitch from when the team was about to be made official. Everything was there — our mission and vision, where we were back then and where we wanted to be. ♥️
Making time for a year-end retrospective and for planning the next quarters was 👌 It took us less than a week.
Tech and Product started out with different lists (separate meetings)
Reached out to other teams to make sure we did not miss any items on their backlogs that needed our involvement
Merged the items into one backlog
Discussed each item briefly and prioritized using RICE framework
Created the roadmap for the first two quarters (with markers for the second half)
Regular roadmap reviews. Business goals may change, tech initiatives too. The team has to adjust, but keep true to its goals.
All items in the roadmap should have proper requirements. Items that still have open questions will be hit hard in prioritization, since open questions drag down the confidence score.
Some people will see roadmap planning, requirements gathering, or even reviews as wasted hours: time you could have spent building, or a process that will take too long.
I’ve seen projects where effort was wasted because nobody took the time to stop and think. The actual implementation would have been more efficient and effective if it had been planned better. And planning better doesn’t mean weeks or months, either.
Services get monkey-patched. A feature gets released but doesn’t work as expected because the expectations themselves were wrong. Unfortunately, the data was there, just not used. Everyone is confused. Another hotfix.
RICE is just a tool to help with the prioritization. It’s not supposed to be the sole factor to consider. It helps when everything looks equally important.
It can be tricky for tech initiatives, but in our projects it can be done because we always attribute an item back to a business goal. For example: how much are we losing because of <something>?
Where will you get the data? Instrumentation, dashboards, reports, surveys, global benchmarks, industry benchmarks, competitors. A lot of sources, you just have to know where to look or who to ask.
This is easy if the team has a culture of making new features or iterating existing ones based on data. Tech team may be doing continuous improvements on observability and monitoring, data collection, etc. Product team may be creating requirements based on actual numbers from Production and not some gut feel only.
More often than not, a shiny new priority will pop up and everyone is expected to conform. I think it’s handled the same way as sprint scope creep.
We don’t wait for other teams or departments or tech heads to give us something to do. :)
We also don’t wait for others to give us a go-signal to explore tech/process/tooling improvements. And even if there was a top → down initiative, we don’t stop at just conforming to the standards. For example, if we think of something that’s better, we approach the working group admin and ask.
The team chose to explore AWS CDK for infrastructure-as-code. It worked just like how we normally code, so it was not going to be a burden for our service engineers. We could do it ourselves. Unfortunately, after I don’t know how many weeks (or months?), it was decided that all teams should start using Terraform. 🤣 Okiii 😆 CI/CD standardization? Same story. 😆
We published performance reviews of our team and our services in end-of-year reports. We also had quarterly reviews. We collected all these metrics from different tools even if we were not asked to do so, and even if we didn’t have a developer portal back then. 😂 Where will you get your next roadmap if you don’t know where you stand right now?
Know your clients. When I was at a startup, we were allowed to talk directly to clients and discuss upcoming features; we just had to loop in our project manager/lead. Many clients don’t get straight to the point about what they want (especially when it comes to design), but the more I interacted with them, the better my understanding became.