Unlocking experimentation at scale with Statsig—driving rapid innovation and smarter, data-informed product decisions.
Atlassian lacked a scalable and intuitive experimentation platform. While most products had migrated to the cloud, the legacy of on-prem infrastructure left significant gaps in our ability to run controlled experiments effectively. Experimentation was unintuitive, difficult to configure, and poorly supported—resulting in low adoption of feature flags and minimal experimentation across teams. UI tooling was fragmented and lacked usability, limiting experimentation velocity.
Compared to industry leaders like Facebook, Atlassian’s experimentation culture and tooling were lagging. Teams lacked confidence, and iteration was often driven by instinct instead of data.
This project embraced a cloud-native mindset to introduce standardized, developer-friendly tools—enabling fast, safe, and scalable experimentation.
Ensure consistent usage of libraries across platforms, products, and languages.
Enable more robust A/B testing and rapid iteration across teams.
Provide clear insights to inform product development through integrated analytics.
Roll out reusable libraries and improve visibility across engineering teams.
Provide guidance, documentation, and maintenance as adoption scales.
Develop internal tools that empower teams to adopt best practices with minimal friction.
To meet our goals around experimentation, we adopted Statsig—giving us a fast, reliable way to launch experiments with confidence.
Running an experiment with Statsig involves the steps described below.
While Statsig provides robust SDKs for multiple platforms, we chose to develop our own internal wrappers to ensure consistent and scalable integration across our services.
By building these wrappers, we were able to embed native support for our internal Traits and Attributes Platform (TAP), allowing both sidecar-based and API-based trait resolution to integrate seamlessly into the experiment workflow. It also enabled us to enforce standardized evaluation logic across teams, reducing duplication and potential misconfigurations.
Furthermore, the wrappers offered a clean developer interface with utility functions such as buildStatsigUser, checkGate, and getExperiment, streamlining adoption and ensuring a unified developer experience regardless of the service or team implementing Statsig.
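The sketch below illustrates what that wrapper surface might look like in TypeScript. The helper names (buildStatsigUser, checkGate, getExperiment) come from the description above; the types, field names, and the underlying statsig-node calls are illustrative assumptions rather than the actual internal implementation.

```typescript
// Illustrative wrapper surface; not the actual internal implementation.
// Assumes the statsig-node server SDK's top-level checkGate/getExperiment calls.
import Statsig, { StatsigUser } from 'statsig-node';

// Traits resolved from TAP; companyID mirrors the targeting trait described below.
export interface TapTraits {
  companyID?: string;
  [trait: string]: string | number | boolean | undefined;
}

// Merge resolved TAP traits into a StatsigUser for evaluation.
export function buildStatsigUser(userID: string, traits: TapTraits): StatsigUser {
  return { userID, custom: { ...traits } };
}

// Thin pass-throughs so every service evaluates gates and experiments identically.
export async function checkGate(user: StatsigUser, gateName: string): Promise<boolean> {
  return Statsig.checkGate(user, gateName);
}

export async function getExperiment(user: StatsigUser, experimentName: string) {
  return Statsig.getExperiment(user, experimentName);
}
```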
In Statsig, the StatsigUser object includes traits for targeting, such as companyID for org-level rollouts. The wrappers support both sidecar-based and API-based TAP trait retrieval methods with graceful fallback logic. For developer ergonomics, I introduced helper methods like buildStatsigUser, checkGate, and getExperiment.
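A minimal sketch of the fallback pattern for trait retrieval follows. The sidecar and API endpoints shown here are hypothetical placeholders; only the try-sidecar-then-API shape reflects the behaviour described above.

```typescript
// Trait resolution with graceful fallback: try the local TAP sidecar first,
// then the TAP HTTP API. Endpoint URLs and response shapes are hypothetical.
type TapTraits = Record<string, string | number | boolean>;

const SIDECAR_URL = 'http://localhost:8181/traits';               // hypothetical sidecar endpoint
const TAP_API_URL = 'https://tap.internal.example.com/v1/traits'; // hypothetical API endpoint

export async function resolveTraits(userID: string): Promise<TapTraits> {
  // Prefer the sidecar: local, low-latency, no cross-network hop.
  try {
    const res = await fetch(`${SIDECAR_URL}/${userID}`);
    if (res.ok) return (await res.json()) as TapTraits;
  } catch {
    // Sidecar unavailable; fall through to the API path.
  }
  // Fall back to the remote TAP API.
  try {
    const res = await fetch(`${TAP_API_URL}/${userID}`);
    if (res.ok) return (await res.json()) as TapTraits;
  } catch {
    // API also unreachable; degrade gracefully below.
  }
  // Last resort: evaluate with no TAP traits rather than failing the request.
  return {};
}
```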
Bootstrapping allows initializing Statsig with pre-evaluated flag values, so gates and experiments can be evaluated immediately without waiting on a network call.
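As a rough illustration, a client could bootstrap from values pre-evaluated on the backend. This sketch assumes the legacy statsig-js SDK's initializeValues option and a hypothetical internal endpoint (/api/statsig-bootstrap) served by the server-side SDK; both are assumptions, not our actual setup.

```typescript
// Bootstrapping sketch: initialize the client from pre-evaluated values so the
// first checkGate/getExperiment does not wait on a network round trip.
// The /api/statsig-bootstrap endpoint and the client SDK key are hypothetical.
import Statsig from 'statsig-js';

export async function initializeStatsigBootstrapped(userID: string): Promise<void> {
  // Fetch values pre-evaluated by our backend (e.g. via the server SDK).
  const initializeValues = await fetch(
    `/api/statsig-bootstrap?userID=${encodeURIComponent(userID)}`,
  ).then((res) => res.json());

  // Initialize from those values; no call to Statsig's servers is needed first.
  await Statsig.initialize('client-sdk-key', { userID }, { initializeValues });
}
```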
Trait merging combines traits from multiple sources (auth, runtime, and environment) into a single user object, so targeting and evaluation always see one complete view of the user.
The wrappers also resolve merge conflicts deterministically and provide lifecycle methods like initializeStatsig() and shutdownStatsig().
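A small sketch of how deterministic merging and the lifecycle helpers could look. The precedence order (auth over runtime over environment) is an assumption chosen for illustration, as is the use of the statsig-node SDK underneath.

```typescript
// Deterministic trait merging plus lifecycle helpers. Precedence order is an
// assumption: auth traits win over runtime traits, which win over environment.
import Statsig from 'statsig-node';

type Traits = Record<string, string | number | boolean>;

// Later spreads win on conflict, so a fixed spread order makes merging deterministic.
export function mergeTraits(auth: Traits, runtime: Traits, environment: Traits): Traits {
  return { ...environment, ...runtime, ...auth };
}

// Lifecycle helpers so every service starts and stops the SDK the same way.
export async function initializeStatsig(serverSecretKey: string): Promise<void> {
  await Statsig.initialize(serverSecretKey);
}

export function shutdownStatsig(): void {
  Statsig.shutdown();
}
```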
Empowered product teams to make release decisions based on experiment outcomes rather than intuition
Cut experiment configuration errors by 70%
100+ teams actively using the platform for experimentation
3x increase in the number of concurrent experiments run
Faster time to insight, enabling teams to iterate more quickly and make better-informed decisions
Reduced experiment setup time by 80%