Overview

CodeAnt AI’s DORA Metrics feature provides comprehensive insights into your software delivery performance using the four key metrics established by the DevOps Research and Assessment (DORA) team. These metrics help you measure, benchmark, and improve your engineering team’s effectiveness and delivery capabilities. Metrics are computed from merged pull request data and automatically benchmarked against industry standards across four performance levels: Elite, High, Medium, and Low.

Supported Providers

DORA Metrics are available for the following Git providers:
| Provider | Cloud | Self-Hosted |
| --- | --- | --- |
| GitHub | Yes | Yes |
| GitLab | Yes | Yes |
| Bitbucket | Yes | Yes (Data Center) |
| Azure DevOps | Yes | Yes |

Team Control

Teams in DORA Metrics are collections of repositories grouped together for aggregated metrics tracking. A team allows you to view combined DORA metrics across all repositories within it, or drill down into individual repository performance while preserving team-level configuration.

Creating a Team

Navigate to the DORA Metrics section and click Create Team to set up a new team. When creating a team, you configure the following fields:

| Field | Required | Description |
| --- | --- | --- |
| Team Name | Yes | A descriptive name for the team (2–100 characters). Example: Backend Services, Platform Team. |
| Description | No | An optional description of the team’s purpose or scope. |
| Status | Yes | Set the team as Active or Inactive. Only active teams appear in the dashboard by default. |
| Team Repositories | Yes | List of repositories to include in the team. Click + Add Repository to add repos. Each added repository is configured with a Deployment Type and Prod Branch Patterns (see below). |

Adding a Repository

When you add a repository to a team, you configure how deployments are tracked for that repo:

| Field | Default | Description |
| --- | --- | --- |
| Repository | | The repository to add (e.g., intellij-experiments). |
| Deployment Type | PR_MERGE | Defines what counts as a deployment. Currently, only PR_MERGE is supported: a pull request merged into a production branch is treated as a deployment. |
| Prod Branch Patterns | ^main$ | Regex patterns that identify your production branches. Only PRs merged into branches matching these patterns are counted as deployments. You can add multiple patterns; for example, ^main$ and ^release/.*$ would track merges to both main and any release/* branch. Click Add to add each pattern. |
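As a rough sketch, matching a branch name against these patterns looks like the following. The `is_production_branch` helper is hypothetical (not a CodeAnt AI API); it simply applies each configured regex in turn.

```python
import re

def is_production_branch(branch: str, patterns: list[str]) -> bool:
    """Return True if the branch matches any configured prod pattern.

    Hypothetical helper: patterns such as ^main$ carry their own anchors,
    so re.search is sufficient here.
    """
    return any(re.search(p, branch) for p in patterns)

patterns = [r"^main$", r"^release/.*$"]
print(is_production_branch("main", patterns))           # True
print(is_production_branch("release/2.4", patterns))    # True
print(is_production_branch("feature/login", patterns))  # False
```

Because the patterns are full regexes, an unanchored pattern like `main` would also match `maintenance`; anchoring with `^` and `$` avoids accidental matches.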

Managing Teams

The Team Controls page lists all your teams with their repositories, branch rules, and status at a glance. Each row shows:
  • Team Name and description
  • Repositories & Branch Rules — the repos in the team along with their configured production branch patterns
  • Status — Active or Inactive
  • Actions — View metrics, edit, or delete the team
From this page you can:
  • Add repositories to an existing team
  • Remove repositories from a team
  • Update repository config — change the production branches or deployment type for a specific repository within the team
  • Edit team details — update the team name, description, or active status
  • Delete a team — remove the team entirely

DORA Dashboard

CodeAnt AI computes all four standard DORA metrics. Each metric is rated against industry benchmarks and includes trend comparison against the previous period of equal length.

Composite DORA Score

The overall DORA score is the average of all four individual metric ratings, converted to a numeric scale:
| Rating | Score |
| --- | --- |
| Elite | 4 |
| High | 3 |
| Medium | 2 |
| Low | 1 |
The composite score is calculated by averaging the four scores and rounding to the nearest level. For example, if your metrics rate as Elite (4), High (3), Elite (4), and Medium (2), the average is 3.25, which rounds to 3 (High). The dashboard displays your composite DORA score alongside all four metric cards, each with its rating, trend, and daily chart.
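The averaging-and-rounding step above can be sketched in a few lines. This is an illustration, not CodeAnt AI's implementation; note that Python's built-in `round` uses banker's rounding at exact halves, which may differ from how the product breaks ties.

```python
RATING_SCORE = {"Elite": 4, "High": 3, "Medium": 2, "Low": 1}
SCORE_RATING = {4: "Elite", 3: "High", 2: "Medium", 1: "Low"}

def composite_dora(ratings: list[str]) -> tuple[float, str]:
    """Average the four metric scores, then round to the nearest level.

    Sketch only: tie-breaking at .5 averages is an assumption here.
    """
    avg = sum(RATING_SCORE[r] for r in ratings) / len(ratings)
    return avg, SCORE_RATING[round(avg)]

avg, level = composite_dora(["Elite", "High", "Elite", "Medium"])
print(avg, level)  # 3.25 High
```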

Deployment Frequency

What it measures: How often your team successfully deploys code to production. A deployment is defined as a pull request merged into a production branch (as configured in the team’s repository settings).
How it’s calculated:
  • Total deployments (merged PRs) in the selected date range
  • Average deployments per day, per week, and per month
Performance Ratings:
| Rating | Threshold |
| --- | --- |
| Elite | 1 or more deployments per day |
| High | 1 or more deployments per week |
| Medium | 1 or more deployments per month |
| Low | Less than 1 deployment per month |
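One way to read these thresholds as code (a sketch; `rate_deployment_frequency` is a hypothetical helper, and a 7-day week and 30-day month are assumed):

```python
def rate_deployment_frequency(deployments: int, days: int) -> str:
    """Classify deployment frequency against the thresholds above."""
    per_day = deployments / days
    if per_day >= 1:
        return "Elite"           # at least one deployment per day
    if per_day * 7 >= 1:
        return "High"            # at least one per week
    if per_day * 30 >= 1:
        return "Medium"          # at least one per (30-day) month
    return "Low"

print(rate_deployment_frequency(45, 30))  # Elite
print(rate_deployment_frequency(6, 30))   # High
```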
Dashboard view:
  • Summary card showing total deployments, daily/weekly/monthly averages, and rating
  • Daily trend chart showing deployment count per day over the selected period
  • Trend comparison (up/down/flat) against the previous period

Lead Time for Changes

What it measures: The time from first commit to production deployment. Lead time tracks the full lifecycle of a change, from when the first commit is made on a pull request branch to when that PR is merged into production.
How it’s calculated:
  • Average and median lead time across all merged PRs in the period
Performance Ratings:
| Rating | Threshold |
| --- | --- |
| Elite | Less than 24 hours |
| High | Less than 1 week (168 hours) |
| Medium | Less than 30 days (720 hours) |
| Low | 30 days or more |
Lead Time Stages: Lead time is broken down into five stages, giving you visibility into where time is spent in your delivery pipeline:
| Stage | Label in UI | Description |
| --- | --- | --- |
| First Commit to Open | Commit → PR Open | Time between the earliest commit on the PR branch and the PR creation. Measures how long code sits before a PR is opened. |
| First Response Time | First Response | Time from the first commit to the first review comment or approval. Measures how quickly reviewers engage. |
| Rework Time | Rework Time | Time from the first review to the last approval. Captures the review iteration cycle: back-and-forth between reviewer feedback and author updates until final approval. |
| Merge Time | Merge Time | Time from the last approval to the PR being merged. Measures post-approval delay. |
| Merge to Deploy | Merge → Deploy | Time from PR merge to production deployment. For the PR_MERGE deployment type, this is typically zero. |
Each stage reports average and median values in both seconds and hours.
Distribution Buckets: The lead time distribution shows how your PRs are spread across time buckets:
  • Less than 1 hour
  • 1 hour to 24 hours
  • 24 hours to 7 days
  • 7 days to 30 days
  • More than 30 days
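To make the stage definitions concrete, the five stages can be derived from per-PR event timestamps roughly like this. A sketch with hypothetical timestamps; `lead_time_stages` is not a CodeAnt AI API, and with the PR_MERGE deployment type the deploy time equals the merge time.

```python
from datetime import datetime, timedelta

def lead_time_stages(first_commit, pr_opened, first_review,
                     last_approval, merged, deployed):
    """Split total lead time into the five stages described above."""
    return {
        "commit_to_open": pr_opened - first_commit,
        "first_response": first_review - first_commit,
        "rework": last_approval - first_review,
        "merge": merged - last_approval,
        "merge_to_deploy": deployed - merged,  # zero for PR_MERGE
        "total": deployed - first_commit,
    }

t0 = datetime(2024, 5, 1, 9, 0)
stages = lead_time_stages(
    first_commit=t0,
    pr_opened=t0 + timedelta(hours=2),
    first_review=t0 + timedelta(hours=5),
    last_approval=t0 + timedelta(hours=20),
    merged=t0 + timedelta(hours=22),
    deployed=t0 + timedelta(hours=22),  # PR_MERGE: deploy == merge
)
print(stages["total"])  # 22:00:00
```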
Dashboard view:
  • Summary card with average/median lead time and rating
  • Daily trend chart showing average lead time per day
  • Per-stage daily breakdown charts
  • Distribution histogram
For Bitbucket and Azure DevOps, detailed review data (first response time, rework time, merge time) is loaded on-demand. Click Load review details on a specific PR in the drill-down to fetch and cache this data.

Change Failure Rate

What it measures: The percentage of deployments that cause failures in production. Failures are identified by detecting revert PRs — pull requests that revert a previously merged change.
How it’s calculated:
  • (Number of revert PRs / Total merged PRs) × 100
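The formula above in code (an illustrative helper, guarding against a period with no merged PRs):

```python
def change_failure_rate(revert_prs: int, merged_prs: int) -> float:
    """CFR: percentage of merged PRs that were later reverted."""
    if merged_prs == 0:
        return 0.0  # no deployments in the period, assume 0% by convention
    return revert_prs / merged_prs * 100

print(change_failure_rate(3, 60))  # 5.0
```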
Performance Ratings:
| Rating | Threshold |
| --- | --- |
| Elite | 0–5% |
| High | Over 5%, up to 10% |
| Medium | Over 10%, up to 15% |
| Low | Greater than 15% |
Dashboard view:
  • Summary card with CFR percentage, total and failed deployment counts, and rating
  • Daily trend chart showing number of failures per day
  • List of failed PRs with links to the original reverted PR

Mean Time to Restore (MTTR)

What it measures: How quickly your team recovers from production failures. MTTR is calculated from revert PRs — the time between when the original (failed) PR was merged and when the revert PR was merged.
How it’s calculated:
  • Average and median recovery time across all incidents (revert PRs) in the period
  • Only resolved incidents (those with a measurable recovery time) contribute to the average
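A sketch of this calculation, assuming each incident is a pair of (original merge time, revert merge time) and unresolved incidents carry no revert time; `mttr_hours` is a hypothetical helper:

```python
from datetime import datetime
from statistics import mean, median

def mttr_hours(incidents):
    """Average and median recovery time in hours.

    incidents: iterable of (original_merged_at, revert_merged_at);
    unresolved incidents (revert time None) are skipped, as described above.
    """
    durations = [
        (revert - original).total_seconds() / 3600
        for original, revert in incidents
        if revert is not None
    ]
    if not durations:
        return None, None
    return mean(durations), median(durations)

incidents = [
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 12, 0)),  # 2 h
    (datetime(2024, 5, 3, 9, 0), datetime(2024, 5, 3, 13, 0)),   # 4 h
    (datetime(2024, 5, 4, 9, 0), None),                          # unresolved
]
avg_h, med_h = mttr_hours(incidents)
print(avg_h, med_h)  # 3.0 3.0
```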
Performance Ratings:
| Rating | Threshold |
| --- | --- |
| Elite | Less than 1 hour |
| High | Less than 24 hours |
| Medium | Less than 1 week (168 hours) |
| Low | 1 week or more |
Dashboard view:
  • Summary card with average/median MTTR and rating
  • Daily trend chart showing average recovery time per day
  • List of incidents with recovery time details

Drill-Down Charts

Each DORA metric provides a paginated drill-down view that lets you inspect individual pull requests and incidents. All drill-down tables support sorting and pagination.

PR Drill-Down (Lead Time)

View every PR merged during the selected period with its full lead time breakdown. Columns displayed:
| Column | Description |
| --- | --- |
| Repository | Repository name |
| PR | PR number and title |
| Author | PR author with avatar |
| Base Branch | Target branch the PR was merged into |
| Merged At | Date and time the PR was merged |
| Additions / Deletions | Lines of code added and removed |
| Files Changed | Number of files modified |
| Commits | Number of commits in the PR |
| Reviews | Number of reviews received |
| Lead Time | Total lead time with expandable stage breakdown |
| Is Revert | Whether this PR is a revert |
Hover over the lead time value on any PR to see the full stage breakdown.

Deployment Drill-Down

View all deployments (merged PRs to production branches) with lead time details. Columns displayed:
| Column | Description |
| --- | --- |
| Repository | Repository name |
| PR | PR number and title |
| Author | PR author with avatar |
| Base Branch | Target production branch |
| Head Branch | Source branch |
| Merged At | Deployment timestamp |
| Additions / Deletions | Lines changed |
| Files Changed | Number of files modified |
| Commits | Number of commits |
| Reviews | Number of reviews |
| Lead Time (hours) | Total lead time in hours |

Failure Drill-Down

View all revert PRs that indicate production failures. Columns displayed:
| Column | Description |
| --- | --- |
| Repository | Repository name |
| Revert PR | The revert PR number and title |
| Author | Who created the revert |
| Merged At | When the revert was merged |
| Reverted PR | The original PR that was reverted |
| Original PR Merged At | When the original (failed) PR was merged |
| Recovery Time | Time between original PR merge and revert PR merge |

Incident Drill-Down

View all production incidents with recovery time details. Incidents are detected as revert PRs. Columns displayed:
| Column | Description |
| --- | --- |
| Repository | Repository name |
| Incident PR | The revert PR number and title |
| Author | Who resolved the incident |
| Merged At | When the incident was resolved |
| Reverted PR | The PR that caused the incident |
| Original PR Merged At | When the incident-causing PR was deployed |
| Recovery Time | Time to restore service (hours) |

Filters

DORA Metrics support filtering at multiple levels to help you focus on the data that matters.

Team-Level Filters

| Filter | Description |
| --- | --- |
| Team | Select a team to view aggregated metrics across all repositories in that team. |
| Date Range | Choose a start and end date for the analysis period. The previous period of the same length is automatically computed for trend comparison. |

Repo-Level Filters

| Filter | Description |
| --- | --- |
| Repository | When viewing a team, you can filter to a single repository within the team. This preserves the team’s production branch configuration for that repo. |
| Production Branches | When viewing a standalone repository (outside a team), you can specify custom production branch patterns to define what counts as a deployment. |

Drill-Down Filters

All drill-down views support:
| Filter | Description |
| --- | --- |
| Sort By | Choose the column to sort results by (e.g., merged_at, lead_time). |
| Sort Order | Ascending or descending order. |
| Pagination | Navigate through results with configurable page size (default: 25 items per page). |

Trend Comparison

Every metric automatically includes a trend comparison against the previous period. If you select a 30-day window (e.g., Feb 1 – Mar 2), the previous period is the 30 days immediately before (Jan 2 – Jan 31). Trends show:
  • Direction: Up, down, or flat
  • Change percentage: How much the metric changed compared to the previous period
  • Previous value: The metric value from the prior period
This helps you quickly understand whether your team’s delivery performance is improving, declining, or holding steady.
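The previous-period computation described above can be sketched as follows (an illustration using the document's 30-day example; `previous_period` is a hypothetical helper, and periods are treated as inclusive date ranges):

```python
from datetime import date, timedelta

def previous_period(start: date, end: date) -> tuple[date, date]:
    """Return the period of equal length ending the day before `start`."""
    length = (end - start).days + 1  # inclusive day count
    prev_end = start - timedelta(days=1)
    prev_start = prev_end - timedelta(days=length - 1)
    return prev_start, prev_end

# Feb 1 – Mar 2 (non-leap year) is a 30-day window; the previous
# period is the 30 days immediately before: Jan 2 – Jan 31.
print(previous_period(date(2023, 2, 1), date(2023, 3, 2)))
# (datetime.date(2023, 1, 2), datetime.date(2023, 1, 31))
```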

How It Works

  1. Connect Repositories: Link your Git repositories (GitHub, GitLab, Bitbucket, or Azure DevOps) to CodeAnt AI.
  2. Create a Team: Group repositories that belong to a team or service area.
  3. Configure Production Branches: Set regex patterns for each repository to define which branches represent production deployments.
  4. Automatic Data Collection: CodeAnt AI continuously collects and caches merged PR data from your repositories — no CI/CD pipeline integration required.
  5. View Metrics: Open the DORA Metrics dashboard, select your team and date range, and view all four metrics with ratings, trends, and drill-downs.
  6. Drill Down: Click into any metric to see the individual PRs, deployments, failures, or incidents that contribute to that metric.
  7. Track Improvement: Use trend comparison and historical data to track how your metrics evolve over time and benchmark against industry standards.