Getting Started
Our platform is designed to be fast and highly automated. Follow the steps below to get set up and ready to see your data.
Get security approval
Share our security documentation with your security or procurement team to get approval.
Our platform is built on a foundation of enterprise-grade security, trusted by over 900K developers and 10K companies. We partner with Drata, a continuous compliance platform, to monitor our security posture on an ongoing basis. Our SOC 2 Type 2 report, latest security and policy documents, continuous monitoring status, and list of subprocessors can all be found at our Trust Center.
Here’s a sample request you can send to your security admin:
I’m evaluating Software.com to automate the KPIs we use to measure our biggest areas of investment in software development. The platform collects metadata about our organization’s activity from Git and never reads, transmits, or stores source code. It’s also SOC 2 Type 2 compliant, as detailed on their Security page.
Can you please review their app so that we can proceed with our evaluation?
Connect your Git provider
Create a Software.com account and connect GitHub, GitLab, Bitbucket, or Azure DevOps.
Getting started only takes a few minutes — just create a Software.com account and connect your company’s Git provider. We support GitHub (Cloud), GitLab (Cloud, Self-Managed), Bitbucket (Cloud), and Azure DevOps (Cloud).
We recommend connecting all of the repositories in your organization. You can exclude repositories from your data in our platform at any time.
We automatically backfill roughly two years of historical data. The time required depends on the size of your organization, but a full backfill typically takes 1-3 days. We will notify you when your data has finished backfilling.
You can continue to the next steps while we process your historical data in the background.

Connect your AI tools
Measure the impact of AI coding tools by connecting Cursor, Claude Code, GitHub Copilot, and more.
Our platform tracks key impact and usage data for AI coding tools. You can easily compare metrics between teams using specific AI tools and the rest of your organization, or compare teams before and after adopting an AI tool.
We also support advanced metrics through direct integrations with the most popular AI tools. These integrations automatically track AI coding metrics from each tool, giving you deeper insight into adoption, engagement, and ROI.
You can connect Cursor, Claude Code, GitHub Copilot, and more. Visit our documentation to see the full list of supported AI tools.

Set up groups
We automatically import groups from your Git provider, but you can set up additional groups for deeper analysis (e.g. product lines, regions).
Groups allow you to compare productivity across your organization and more quickly identify key constraints. We automatically create groups if you have existing teams in your Git provider, like GitHub and GitLab teams. If your teams don’t exist in your Git provider, you can create groups of Git users and/or repositories on our platform.
To bulk import teams, please provide us with a list of users from your HRIS, along with each user’s manager, team, department, region, office, and any other information that would be useful for analysis.
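There is no required file format for the bulk import. As a hypothetical illustration, the Python sketch below assembles a flat CSV with one row per user; the file name and column names are our own examples rather than a required schema, so include whatever fields are useful for your analysis.

```python
# Hypothetical HRIS export for bulk team import (illustrative only).
# Column names are examples; include any fields useful for analysis
# (manager, team, department, region, office, ...).
import csv

users = [
    {"email": "ada@example.com",  "manager": "grace@example.com",
     "team": "Payments", "department": "Engineering", "region": "EMEA", "office": "London"},
    {"email": "alan@example.com", "manager": "grace@example.com",
     "team": "Payments", "department": "Engineering", "region": "AMER", "office": "Remote"},
]

with open("hris_users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(users[0].keys()))
    writer.writeheader()
    writer.writerows(users)
```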

Set up a comparison
Compare major areas of investment, such as AI coding tools, outsourcing partners, and acquisitions.
Comparisons are your command center for evaluating ROI on strategic initiatives, such as GenAI efforts, onshore vs. offshore development, and outsourcing partners.
In addition to comparing current groups, you can also baseline productivity before vs. after a point in time, for instance to capture the impact of an acquisition or the adoption of a new AI developer tool.
You can input each group’s average annual cost per contributor to calculate Cost per New Delivery. We recommend using the annual, fully-burdened cost per contributor, including employee-related expenses such as taxes and benefits (typically ~1.2-1.4x the average base salary).
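As a rough sketch of the underlying arithmetic (the platform performs this calculation for you), the example below estimates a fully-burdened cost from an average base salary using an assumed 1.3x burden multiplier, and treats Cost per New Delivery as a group’s total annual cost divided by its number of new deliveries. Both the multiplier and that working definition are illustrative assumptions, not the platform’s exact formula.

```python
# Illustrative arithmetic only; the platform computes Cost per New Delivery
# once you enter each group's average annual cost per contributor.

def fully_burdened_cost(avg_base_salary: float, burden_multiplier: float = 1.3) -> float:
    """Annual cost per contributor including taxes, benefits, and other
    employee-related expenses (typically ~1.2-1.4x base salary)."""
    return avg_base_salary * burden_multiplier

def cost_per_new_delivery(cost_per_contributor: float, contributors: int, new_deliveries: int) -> float:
    """Assumed definition for illustration: total annual group cost
    divided by the number of new deliveries over the same period."""
    return (cost_per_contributor * contributors) / new_deliveries

cost = fully_burdened_cost(150_000)  # 150k base salary -> 195k fully burdened
print(f"Fully-burdened cost per contributor: ${cost:,.0f}")
print(f"Cost per new delivery: ${cost_per_new_delivery(cost, contributors=25, new_deliveries=400):,.0f}")
```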

Schedule a data review
Schedule time with our team to review your data and surface new insights.
We offer a consultative data review for new teams. During the review, which typically takes about 30 minutes, we will uncover opportunities to improve productivity, help you identify high-ROI work areas for AI, and share best practices and industry benchmarks.
Optional steps
Map deployments
Review and validate deployment methods for your repositories, so you can track key deployment metrics.
Tracking deployments helps you measure the performance and efficiency of your deployment process. We use signals from your Git provider—such as releases and pipeline runs—to track key deployment metrics (deployment frequency, batch size, etc.).
While we automatically detect deployment methods for most repositories, validation is often required during setup. In some cases, you may need to manually configure the deployment method. You can learn more about deployment tracking and supported signals in our documentation.
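To make the metrics concrete, here is a minimal sketch (not the platform’s implementation) of how deployment frequency could be derived from a list of release or pipeline-run dates; the dates are made up for illustration.

```python
# Minimal sketch: average weekly deployment frequency from deployment dates.
# Illustrative only; the platform derives these metrics from your Git provider.
from collections import Counter
from datetime import date

deployments = [date(2024, 5, 6), date(2024, 5, 8), date(2024, 5, 9),
               date(2024, 5, 15), date(2024, 5, 21), date(2024, 5, 23)]

per_week = Counter(d.isocalendar()[:2] for d in deployments)  # (year, ISO week) -> count
avg_per_week = sum(per_week.values()) / len(per_week)
print(f"Average deployments per week: {avg_per_week:.1f}")  # -> 2.0
```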

Filter or exclude data
Exclude repos (e.g. open source) and filter users (e.g. managers) from your data.
We automatically filter out contributions from certain user types to keep metrics focused on your core team’s activity. For example, we exclude bot-authored PRs and open-source contributors by default.
If there are additional repositories or users that you would like to exclude from your metrics, you can manually configure your data in your organization’s settings.
You can learn more about data filtering and exclusions in our documentation.
