Tech

TechInvestments Rate N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 8-10% of tasks per quarter are allocated for technical investments.
Level 2 11-15% of tasks per quarter are allocated for technical investments.
Level 3 16-20% of tasks per quarter are allocated for technical investments.
How to check level: To get the percentage of tech investments, go to the Team Performance Dashboard and take the data for the last 8 sprints from the TechInvest Index Widget. If the team is not on the Team Performance Dashboard, calculate the ratio in Jira using two filters:
- ALL ISSUES: project = PROJECTKEY AND issuetype in standardIssueTypes() AND resolved >= -16w ORDER BY created DESC
- ISSUES FOR TECH INVEST ONLY: project = PROJECTKEY AND issuetype in standardIssueTypes() AND "Backlog Type[Dropdown]" = TechInvestments AND resolved >= -16w ORDER BY created DESC
To calculate the metric, the team must maintain TechInvestments in the backlog. If this practice is not in place, go through the tasks retrospectively and determine whether there were any tasks for tech investments. To determine the percentage of tasks from platform communities, analyze the output of the TechInvest filter above. We don't have a formal marker for community tasks yet, so manual counting is left to the team's discretion.
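A minimal sketch of the fallback calculation, assuming the standard Jira REST search endpoint; JIRA_URL and the auth token are placeholders, and PROJECTKEY is your project key as in the filters above:

```python
# Sketch: compute the tech-investment share over the last 16 weeks via Jira.
import requests

JIRA_URL = "https://jira.example.com"          # placeholder
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder

ALL_ISSUES_JQL = (
    "project = PROJECTKEY AND issuetype in standardIssueTypes() "
    "AND resolved >= -16w"
)
TECH_INVEST_JQL = (
    'project = PROJECTKEY AND issuetype in standardIssueTypes() '
    'AND "Backlog Type[Dropdown]" = TechInvestments AND resolved >= -16w'
)

def issue_count(jql: str) -> int:
    """Return the total number of issues matching a JQL filter."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # totals only, no issue bodies
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total"]

all_issues = issue_count(ALL_ISSUES_JQL)
tech_invest = issue_count(TECH_INVEST_JQL)
share = 100.0 * tech_invest / all_issues if all_issues else 0.0
print(f"Tech investments: {share:.1f}% of tasks resolved in the last 16 weeks")
```

A share of 16-20% would put the team at Level 3 per the thresholds above.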
TechDebt Backlog N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 The team creates and tracks its technical debt according to the common process. Will be added: Average lifetime of technical debt is X (TBD)
Level 2 Technical debt decreases faster than it increases. Will be added: Average lifetime of technical debt is Y (TBD). Will be added: Tasks with DueDate expired — no more than Z
Level 3 Technical debt tasks are closed in accordance with the set Due Date or not older than a quarter.
How to check level: A widget in the Team Performance Dashboard is planned.
Mobile Code Base Quality N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 80% of files in the team repository and team cluster comply with the guidelines.
Level 2 85% of files in the team repository and team cluster comply with the guidelines.
Level 3 90% of files in the team repository and team cluster comply with the guidelines.
How to check level: The SME provides the data using scripts.
Documentation management N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 All team repositories are connected to Documentary. The documentation is updated according to the designated process. If the documentation is not updated with changes in the code, then a task for technical debt is created and closed during the current quarter (i.e. at the end of the quarter there are no tasks older than three months).
Level 2 The documentation in the repository is updated synchronously with changes made to the code base (there is no technical debt for the documentation). The documentation review is carried out internally by the team, and the documentation is written according to the guidelines.
Level 3 At the development stage of any new product functionality, a PRD (product requirements document) is available (if applicable). Swagger is updated (if applicable for the service).
How to check level: project = !PROJECT! AND "Backlog Type[Dropdown]" = TechDebt AND status != Closed AND labels = documentation
PRODUCT SECURITY
ErrorBudget by Quarter N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 Spending below 13,500 security points per quarter
Level 2 Spending below 9,000 security points per quarter
Level 3 Spending below 4,500 security points per quarter
How to check level: To check your security budget: 1. Go to the dashboard. 2. Select the period from the start of the quarter to the current date, broken down by month, and filter by your team. 3. Compare your budget with the quarterly spending (100,000 USD).
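A small sketch mapping the quarterly spend from the dashboard to the rubric levels above; `spent` is the value the dashboard shows, in security points:

```python
# Sketch: map quarterly security-point spending to an ErrorBudget level,
# using the thresholds listed above.
def error_budget_level(spent: int) -> int:
    if spent < 4_500:
        return 3
    if spent < 9_000:
        return 2
    if spent < 13_500:
        return 1
    return 0

print(error_budget_level(7_200))  # -> 2
```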
Introduction to basic information security practices N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 All backend developers successfully passed the test on secure development. All backend developers passed the test on service owner responsibility.
Level 2 All backend and frontend developers successfully passed the test on secure development. All backend and frontend developers passed the test on service owner responsibility.
Level 3 All engineers (including QA) passed the test on service owner responsibility. The team takes part in the Early Adopters program.
How to check level: A report with names is provided by the ProdSec SME.
DX Index
Cycle Time of PR N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 6-10 days
Level 2 3-6 days
Level 3 <3 days
How to check level LeanerB team dashboard
Deployment Frequency N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 1-5 times per week
Level 2 5-17 times per week
Level 3 17+ times per week
How to check level LeanerB team dashboard
PR Size N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 200-400 lines at the 75th percentile
Level 2 100-200 lines at the 75th percentile
Level 3 <100 lines at the 75th percentile
How to check level LeanerB team dashboard
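If you want to cross-check the dashboard figure, the percentile is straightforward to compute; `pr_sizes` below is hypothetical sample data standing in for the quarter's real PR line counts:

```python
# Sketch: 75th percentile of PR sizes (lines changed) and the matching level.
import statistics

pr_sizes = [42, 88, 150, 210, 95, 60, 310, 120]  # hypothetical sample

# quantiles(n=4) returns the three quartile cut points; [-1] is the 75th.
p75 = statistics.quantiles(pr_sizes, n=4)[-1]

# Boundary handling (<=200 counts as Level 2) is an assumption; the rubric's
# ranges overlap at the edges.
level = 3 if p75 < 100 else 2 if p75 <= 200 else 1 if p75 <= 400 else 0
print(f"75th percentile: {p75:.0f} lines -> Level {level}")
```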
Refactor Rate N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 15-21%
Level 2 9-15%
Level 3 <9%
How to check level LeanerB team dashboard
Rework Rate N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 5-7%
Level 2 2-5%
Level 3 <2%
How to check level LeanerB team dashboard
STANDARDS
Design System Components Usage N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 30% of the team's screens are built from more than 80% design system (DS) components
Level 2 60% of the team's screens are built from more than 80% DS components
Level 3 90% of the team's screens are built from more than 80% DS components
How to check level
Accessibility support N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 40%+ of the team's screens fully support VoiceOver and large fonts
Level 2 60%+ of the team's screens fully support VoiceOver and large fonts
Level 3 80%+ of the team's screens fully support VoiceOver and large fonts
How to check level
Fast roll-backs N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 - 50% of improvements on clients are covered by AB / FeatureToggle before rolling out. - All ABs or FeatureToggles that are enabled in the release branch before the cut undergo manual and automated testing before the release.
Level 2 - 70% of improvements on clients are covered by AB / FeatureToggle before rolling out. - All incidents on client platforms during the quarter were isolated without rolling out a new version of the application to users.
Level 3 - 90% of improvements on clients are covered by AB / FeatureToggle before rolling out. - All AB and temporary FeatureToggle code is deleted no later than one month after the decision to roll out the feature is made.
How to check level
QUALITY
% of Unit Test coverage N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 60-65%
Level 2 65-75%
Level 3 75%+
How to check level In a team's repo
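A hedged sketch of the repo check, assuming the team uses pytest with coverage.py (run from the repository root; the tooling is an assumption, not mandated by the rubric):

```python
# Sketch: measure unit-test coverage with coverage.py and map it to a level.
import subprocess
import coverage

# Run the test suite under coverage; this writes a .coverage data file.
subprocess.run(["coverage", "run", "-m", "pytest"], check=True)

cov = coverage.Coverage()
cov.load()              # read the .coverage data file produced above
percent = cov.report()  # prints a per-file table, returns the total percentage

level = 3 if percent >= 75 else 2 if percent >= 65 else 1 if percent >= 60 else 0
print(f"Unit test coverage: {percent:.1f}% -> Level {level}")
```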
% of Crit Path UI coverage N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 50-74%
Level 2 75-80%
Level 3 80%+
How to check level Allure Report
% of API coverage N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 65-84%
Level 2 85-95%
Level 3 95%+
How to check level Allure Report
Autotest Stability (all runs for the quarter) N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 >50% (for each API + UI layer)
Level 2 >70% (for each API + UI layer)
Level 3 >80% (for each API + UI layer)
How to check level Test coverage
Quality Rate N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 1 critical defect per quarter; 2 high defects since the last measurement or per quarter
Level 2 0 critical defects since the last measurement or per quarter; 1 high defect since the last measurement or per quarter
Level 3 0 high defects since the last measurement or per quarter
How to check level Tableau
Shift Left Practices N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 - The QA engineer participates in story development at early stages, applying testing practices. - Acceptance criteria are always written before development for Stories and Enablers. - A test case draft is formed in GitHub+AllureTestOps, taking into account positive, negative, and boundary checks, with a breakdown by testing type (UI, API, etc.). - 75% of tasks are not returned from testing for rework.
Level 2 - A testing plan is formed for each mobile release. - Autotest drafts are written in parallel with task development. - 85% of tasks are not returned from testing for rework.
Level 3 - Autotest drafts are written before coding starts and are supplemented as development progresses. - 95% of tasks are not returned from testing for rework.
How to check level: Count how many tasks were returned for rework. Find the number of all closed tasks: project = PROJECTKEY AND issuetype in (Enabler, Story, Bug) AND resolved >= -16w ORDER BY created DESC. On the Team Performance Dashboard, see Test Failed for the last four months. Find the percentage: 100% - (Test Failed / All Closed * 100%).
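The calculation itself is a one-liner; a sketch with example figures (the inputs come from the Jira filter and the dashboard's Test Failed widget):

```python
# Sketch of the rework calculation above: share of tasks NOT returned
# from testing, i.e. 100% - (Test Failed / All Closed * 100%).
def not_returned_percent(all_closed: int, test_failed: int) -> float:
    return 100.0 - (test_failed / all_closed * 100.0)

# Example: 120 closed tasks, 18 returned from testing -> 85.0%, i.e. Level 2.
print(not_returned_percent(120, 18))
```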
Production issues management N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 The team opens as many production bugs as it closes.
Level 2 The number of production bugs in the team's backlog decreases.
Level 3 The number of open production bugs is zero. New bugs are closed within the SLA.
How to check level Find your team and check via the dashboard.
MOBILE RELEASE MANAGEMENT
Release Average Testing Time N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 <24h
Level 2 <16h
Level 3 <12h
How to check level Tableau
Release Bug Fixing Time N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 <16h
Level 2 <8h
Level 3 0 or negative
How to check level Tableau
Number of Additions after Cut and Hot Fix Number N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 No more than one accepted hotfix; No more than one pull request for a release.
Level 2 0 hotfixes; No more than 1 request for inclusion in the release during the quarter.
Level 3 0 requests for feature additions and hotfixes during the quarter.
How to check level: A Jira filter on your team's project with the r_hotfix label.
DATA (SERVICE) OWNERSHIP
Management of tables in DWH N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 1. The team that owns the data schema (BQ dataset) has accepted ownership of the tables and schemas for which a request was received from the ownership bot and/or the steward, or proactively offers other ownership options. 2. If the team owns an entity that participates in the data flow for critical data (namely financial, investor, regulatory, or ESG reporting), it agrees on changes to the data with the owners of the entities derived from the team's data. 3. The owner responds to deletion requests within two weeks.
Level 2 Table owners have documented 50% of their tables according to the general policy
Level 3 Table owners have documented 95% of their tables according to the general policy
How to check level See the assigned tables in Metadata
BACKEND RELEASE MANAGEMENT
Quality Gates N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 All builds that are rolled into production have information about passing all Company Level Quality Gates.
Level 2 The percentage of company-level Quality Gates passed for builds released to production is no less than 70%.
Level 3 1. The percentage of company-level Quality Gates passed for builds released to production is no less than 90%. 2. The team has team/service Quality Gates in addition to the company-level ones.
How to check level
Stability of testing environment N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 95% of services with the default version start up successfully when dependencies are present.
Level 2 98% of services with the default version start up successfully when dependencies are present.
Level 3 99.5% of services with the default version start up successfully when dependencies are present.
How to check level See the Grafana table
Cost Efficiency of Testing Env N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 - Cost Efficiency of each team service is >50%; - Cost Efficiency of all the team's namespaces (NS) is >50%
Level 2 - Cost Efficiency of Each Team Service is >70%; - Cost Efficiency of all team's NS is >70%
Level 3 - Cost Efficiency of Each Team Service is >90%; - Cost Efficiency of all team's NS is >90%
How to check level: The DevPlatform team will provide long-term storage (with per-team benchmarks) so the metric can be tracked over time.
Test environment management N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 1. The team uses dev- and test-namespaces according to the documentation. 2. Environments are created manually via the UI or console.
Level 2 Each team member creates a namespace at least once a month (i.e. engineers use the tool themselves).
Level 3 The average lifetime of an NS is < 24 hours. (In most cases, each test run creates its own test-namespace; templated environments are created and deleted automatically via a git trigger.)
How to check level
ARCHITECTURE STANDARDS
TechRadar Following N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 The team uses technologies from TechRadar for all new services.
Level 2 Critical services of the team are built using only Adopt technologies from TechRadar. Normal services of the team are built using Adopt and Trial technologies from TechRadar.
Level 3 The team proposes reasonable changes to the technical radar.
How to check level Check via the Tech Radar
POSTPONED, MOVED TO SMM: Cost Efficiency by Team N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 The team has access to the necessary dashboards to obtain financial information on the cost of the service; The team receives financial information on the cost of owning their service.
Level 2 The team calculates the PNL of its services; The team closes Cost optimization tasks from external stakeholders.
Level 3 The team optimizes the financial costs of its service, either on its own or with the help of external experts.
How to check level
BACKEND AND INFRASTRUCTURE
Database management N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 The team writes its own SQL queries: it builds query execution plans and improves problematic queries. The architect reviews database change requests.
Level 2 The team manages its application migrations independently.
Level 3 The team understands the key metrics of its service's database and can optimize them (the hot database size does not exceed 1 TB, partitions are used, average query time is no more than ~100 ms, there are no locks in the database, and financial efficiency is evaluated). Problematic situations are resolved in a timely manner.
How to check level
Best practices of Kafka (will be converted to Event-Driven Approach) N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 The team has completed onboarding in Kafka. The team understands how the service should interact with the Kafka cluster. The team configures its own applications to connect to the cluster.
Level 2 The team independently creates topics/ACLs and knows how to check their configuration. The team manages consumer groups. The team develops applications that use the schema registry.
Level 3 The team configures the topic parameters, knows exactly how each parameter affects the topic, and optimally configures topics (number of partitions, retention, etc.). The team creates producers and consumers with configurations that meet the semantics of their applications. The team understands the cluster metrics. The team can troubleshoot the interaction between the application and the Kafka cluster: either resolving the issue independently or escalating the problem (in case of cluster-related issues).
How to check level
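A hedged illustration of the Level 2/3 skills above: creating a topic with an explicit partition count and retention via confluent-kafka's AdminClient. The broker address and topic name are placeholders, and the parameter values are examples, not recommendations:

```python
# Sketch: create a topic with explicit partitions and retention.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "kafka.example.com:9092"})  # placeholder

topic = NewTopic(
    "team.orders.v1",      # hypothetical topic name
    num_partitions=12,     # sized for the expected consumer parallelism
    replication_factor=3,
    config={"retention.ms": str(7 * 24 * 60 * 60 * 1000)},  # 7 days
)

# create_topics() is asynchronous: it returns {topic_name: future}.
for name, future in admin.create_topics([topic]).items():
    try:
        future.result()  # block until the broker confirms creation
        print(f"created {name}")
    except Exception as exc:
        print(f"failed to create {name}: {exc}")
```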
Stability of the Services N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 - All team services display their SLIs in Grafana for each RED metric of incoming requests, with the following indicators: E/R (ratio of Errors to the total number of Requests) and D (Duration) of all requests (without separating successful and erroneous). - A link to the SLI dashboards of all services can be found in the project repository.
Level 2 - All services have SLOs for each SLI, as well as SLAs, fixed in a descriptive file in the service repo and agreed with their clients; - The corresponding SLOs are displayed on the SLI dashboards of all services.
Level 3 The unused error budget (the predicted difference between SLI and SLO) is used for experimentation and planned downtime.
How to check level
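The Level 3 wording implies simple error-budget arithmetic; a sketch with example numbers (the SLO, SLI, and request volume are illustrative):

```python
# Sketch: unused error budget as the gap between measured SLI and agreed SLO.
slo = 0.999            # agreed availability SLO (99.9%)
sli = 0.9996           # measured availability over the window
requests = 10_000_000  # requests served in the same window (example)

budget_total = (1 - slo) * requests  # errors the SLO allows: 10,000
budget_spent = (1 - sli) * requests  # errors actually observed: 4,000
budget_unused = budget_total - budget_spent

# The unused portion is what Level 3 spends on experiments and planned downtime.
print(f"error budget left: {budget_unused:.0f} requests "
      f"({budget_unused / budget_total:.0%} of the budget)")
```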
Observability of the Services N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 - All key services log errors (those that result in the inability to process an incoming request) in a single format. - All key services provide RED metrics or Golden Signals for incoming requests in standard Prometheus format. - All key services have alerts configured to detect deterioration of RED metrics by more than 50% for more than 15 minutes.
Level 2 - All services log non-critical errors (e.g. partial degradation) with a trace id (if available). - Key services write traces in a uniform format using the standard common packages. - All key services provide RED metrics or Golden Signals using the common package (so that the metrics are available on a dashboard). - All services have alerts configured to detect deterioration of RED metrics or Golden Signals when the metric changes by X% (X is determined by the specifics of the service) for more than 5 minutes. - Dashboards and alerts are configured according to the pipeline.
Level 3 - The service's dependencies (DBs, services) that support tracing also write traces to our storage system. - The team identifies the cause of any failure in its service, to a level sufficient to eliminate it without redeploying or involving other teams, in no more than 15 minutes. - Metric-based alerts have a documented response procedure for cases where the team itself has not responded (so that a person outside the team can understand whom to contact and when).
How to check level
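As an illustration of the Level 1 requirement above, a minimal sketch exposing RED metrics (Rate, Errors, Duration) in standard Prometheus format with the prometheus_client library; the metric names and port are assumptions, and the in-house common package may dictate its own conventions:

```python
# Sketch: RED metrics for incoming requests in Prometheus format.
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("http_requests_total", "Incoming requests", ["handler"])
ERRORS = Counter("http_request_errors_total", "Failed requests", ["handler"])
DURATION = Histogram("http_request_duration_seconds", "Request latency", ["handler"])

def handle(handler: str) -> None:
    """Wrap real request processing with RED instrumentation."""
    REQUESTS.labels(handler).inc()
    with DURATION.labels(handler).time():  # observes elapsed time on exit
        try:
            ...  # real request processing goes here
        except Exception:
            ERRORS.labels(handler).inc()
            raise

# Prometheus scrapes the metrics from :9100/metrics.
start_http_server(9100)
```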
INCIDENT MANAGEMENT
Incident management process N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 The team has set up escalation policies in PagerDuty for the services within its area of responsibility, with on-call schedules and up-to-date participants configured. The team joins Major Incidents in the Slack channel #server-incidents as part of the Major Incidents Workflow process if there are issues within its area of responsibility.
Level 2 The on-call member from the team responds to incidents automatically created by the PagerDuty system based on alerts that the team supports independently for their services.
Level 3 The team works with the current alerts for the services within its area of responsibility: modifying alerts (trigger thresholds, rules, severity), adding new ones, and removing irrelevant alerts.
How to check level
SLA of Team Services N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 SLA of tier 3/4 services complies with SLA Tiering of Services
Level 2 SLA of tier 2/3/4 services complies with SLA Tiering of Services
Level 3 SLA of all services complies with SLA Tiering of Services
How to check level There should be a dashboard for your service.
Mean Time to Repair N/A Not applicable due to the team's specific nature of work.
Level 0 Does not meet the criteria of the first level.
Level 1 No more than 4 hours at the 80th percentile
Level 2 No more than 2 hours at the 80th percentile
Level 3 No more than 1 hour at the 80th percentile
How to check level We have reports from Observability teams every quarter.