Braze Innovation & Technology Culture

Braze Employee Perspectives

How do your teams stay ahead of emerging technologies or frameworks?

Teams are encouraged to experiment with new technologies during quarterly hackathons, which let them explore and learn new approaches while contributing to Braze’s products and goals. One example of an innovative project from the last hackathon is an observability pipeline that feeds data from our Kubernetes-based job runner through Datadog into a custom Streamlit visualization tool, surfacing overprovisioned ML pipeline jobs so we can right-size our infrastructure more accurately.
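The core idea behind a dashboard like that can be sketched in a few lines: compare each job's requested resources with its observed peak usage and flag jobs whose utilization falls below a threshold. This is a hedged illustration only; the job names, numbers, and threshold below are invented, not real Braze workloads.

```python
OVERPROVISION_THRESHOLD = 0.5  # flag jobs using <50% of requested CPU

jobs = [
    # (job name, requested CPU cores, peak CPU cores observed)
    ("feature-extraction", 8.0, 2.0),
    ("model-training", 16.0, 14.8),
    ("batch-scoring", 4.0, 1.0),
]

def overprovisioned(jobs, threshold=OVERPROVISION_THRESHOLD):
    """Return (name, utilization) pairs for jobs below the threshold."""
    return [
        (name, peak / requested)
        for name, requested, peak in jobs
        if peak / requested < threshold
    ]

for name, utilization in overprovisioned(jobs):
    print(f"{name}: {utilization:.0%} of requested CPU used")
```

A real pipeline would pull the requested/observed numbers from the metrics backend rather than hard-coding them, but the right-sizing decision itself reduces to this kind of utilization check.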

Our product and engineering team is actively experimenting with applying agentic coding techniques in their daily work. This is a fast-moving field, with technologies such as the Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and agent skills evolving quickly. We hold a biweekly AI lunch-and-learn where we share experiences and best practices: for example, multi-agent workflows, different RAG approaches, and experiences applying the latest models (via Cursor and command-line interfaces) to our codebase.

Finally, it’s helpful to simply experiment with agentic tools hands-on, both in personal “toy projects” and applied to our core products. We have a very active Slack channel (#vibe-coding) where engineers learn from each other and share experiences and resources.

 

Can you share a recent example of an innovative project or tech adoption?

BrazeAI Decisioning Studio™ is a platform that leverages reinforcement learning (RL) to automate and optimize customer interactions. In simple terms, instead of a marketer manually guessing which message version is best, or running slow, manual A/B tests, an RL agent continuously learns from user engagement (clicks, conversions) to dynamically serve the optimal content at the optimal time via the optimal channel. However, configuring RL environments is notoriously difficult: it requires precise definitions of states, actions, and reward functions.
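To make the "agent learns from clicks" idea concrete, here is a deliberately minimal epsilon-greedy bandit sketch: the agent splits a small fraction of traffic for exploration and routes the rest to the message variant with the best running average reward. This is not Braze's actual algorithm; the variant names and click rates are made up for illustration.

```python
import random

random.seed(0)

# Message variants (actions) and their hidden true click rates (unknown
# to the agent); the agent only ever sees per-send rewards of 0 or 1.
variants = ["subject_a", "subject_b", "subject_c"]
true_click_rate = {"subject_a": 0.04, "subject_b": 0.11, "subject_c": 0.07}

counts = {v: 0 for v in variants}
value = {v: 0.0 for v in variants}  # running average reward per variant
EPSILON = 0.1  # fraction of traffic reserved for exploration

def choose_variant():
    if random.random() < EPSILON:
        return random.choice(variants)            # explore
    return max(variants, key=lambda v: value[v])  # exploit the best so far

def record_reward(variant, reward):
    counts[variant] += 1
    # incremental running-average update
    value[variant] += (reward - value[variant]) / counts[variant]

for _ in range(20_000):
    v = choose_variant()
    reward = 1 if random.random() < true_click_rate[v] else 0
    record_reward(v, reward)

best = max(variants, key=lambda v: value[v])
print(best, counts)
```

A contextual bandit, as used in decisioning systems, extends this by conditioning the choice on user features (the "state"), which is exactly why precise state, action, and reward definitions matter so much.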

To help our forward-deployed data scientists configure Decisioning Studio for new customers, we developed the BrazeAI Decisioning Assistant, an internal agentic application that acts as a co-pilot for setting up and maintaining these complex ML configurations. Unlike a standard RAG chatbot that only retrieves documentation, the assistant bridges LLMs and our runtime environment: it can actively verify proposed configurations against known standards, execute SQL queries to analyze model performance, and autonomously diagnose issues by interpreting real-time data logs. The goal is to shift our forward-deployed service posture from manual configuration and troubleshooting to automated, intelligent verification.
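The "bridge" pattern described above typically works by letting the LLM propose structured tool calls that a dispatcher routes to real runtime functions. The sketch below is purely illustrative: the tool names, the required config keys, and the validation rule are hypothetical, not the assistant's actual interface.

```python
# Invented required fields for a decisioning configuration.
REQUIRED_CONFIG_KEYS = {"state_features", "actions", "reward_event"}

def verify_config(config: dict) -> dict:
    """Check a proposed config against known required fields."""
    missing = sorted(REQUIRED_CONFIG_KEYS - config.keys())
    return {"valid": not missing, "missing_keys": missing}

def run_sql(query: str) -> list:
    """Placeholder for executing an analytics query against the runtime."""
    return []  # a real assistant would return query results here

TOOLS = {"verify_config": verify_config, "run_sql": run_sql}

def dispatch(tool_call: dict):
    """Route an LLM-proposed tool call to the matching runtime function."""
    return TOOLS[tool_call["name"]](tool_call["argument"])

result = dispatch({
    "name": "verify_config",
    "argument": {"state_features": ["recency"], "actions": ["send", "hold"]},
})
print(result)  # {'valid': False, 'missing_keys': ['reward_event']}
```

The value of this pattern is that the model's suggestions are checked against the runtime before anything ships, rather than being trusted as free-form text.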

 

How does your culture support experimentation and learning?

Primarily, by tackling tough problems and building exciting AI products. Productizing reinforcement learning at scale for AI decisioning is a cutting-edge challenge that requires significant research and experimentation, which makes our product interesting for engineers to work on, in addition to being valuable for our customers. Outside of the core product work, we encourage learning via regular hackathons, provide a generous learning stipend for materials, courses, and conferences, and try to match engineers and applied scientists to areas of work that particularly interest them. Right now, we’re tackling scalability by optimizing contextual bandit algorithms in Spark and Scala, building a next-generation marketer UI for Decisioning Studio inside the Braze Platform, and investigating how causal inference techniques can help our AI models learn faster from limited data. Simply executing on our ambitious roadmap requires a lot of learning and growth, which my team and I enjoy.

Victor Kostyuk
Victor Kostyuk, VP of Engineering, AI Decisioning & Reinforcement Learning

Braze not only invests heavily in AI within our own product to empower our customers but also fosters a strong internal culture of innovation. We are actively encouraged to leverage AI for both productivity and creative experimentation.

Mizuki Hiramatsu
Mizuki Hiramatsu, Partner Sales Manager

How does your team stay ahead of emerging technology trends while scaling fast?

For our high-growth decisioning services team, which is focused on BrazeAI Decisioning Studio™, staying ahead is a “push-pull” dynamic. Our AI deployment team is key to our efforts, deploying tailored contextual bandits that navigate real-world noise and latency. That team acts as the “exploration” engine, identifying emerging needs like constrained optimization or multi-action selection.

We scale fast by turning these “bespoke” client wins into productized building blocks. Our product team acts as the “exploitation” engine, abstracting the deployment team’s custom logic into modular components.

Now more than ever, the competitive edge isn’t just a better model; it’s the speed at which a client-specific innovation becomes a core capability. This loop ensures we aren’t chasing trends; we are building a self-reinforcing system where every unique deployment hardens our core platform. 

 

What recent product or feature are you most proud of — and what impact has it had?

We are most proud of our internal AI analytics framework, which rapidly productizes custom client analyses for wider use. It transforms “bespoke,” client-specific analyses created by our deployment teams into self-service building blocks accessible to all forward-deployed data scientists and clients, drastically reducing the time from a custom win to a core capability.

A prime example is our solution for detecting regime bias. It started as a custom analysis to identify disparities in historical regime performance, and it has since been productized into a tool that uses a range of techniques to assess the risk of bias in our audience assignment. This proactive check is now a core part of our setup process to ensure fair experimental groups, and it is integrated into a weekly alert for continuous bias monitoring. The ability to quickly turn a client-driven problem, like the bias seen in a recent audience reshuffle, into an automated, standard governance tool demonstrates the speed and quality of our innovation loop.
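One common way to check whether an audience assignment is balanced (offered here as a hedged sketch, not necessarily the team's actual method) is a chi-squared goodness-of-fit test on how users were split between groups, assuming a target 50/50 split. The counts and the significance threshold below are illustrative.

```python
def chi_squared_balance(observed, expected_share=0.5):
    """Chi-squared statistic for a two-group split vs. an expected share."""
    total = sum(observed)
    expected = [total * expected_share, total * (1 - expected_share)]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value for 1 degree of freedom at significance level 0.05.
CRITICAL_VALUE_95 = 3.841

balanced = [5020, 4980]  # close to a 50/50 split
skewed = [5400, 4600]    # suspicious imbalance

for counts in (balanced, skewed):
    stat = chi_squared_balance(counts)
    flag = "FLAG" if stat > CRITICAL_VALUE_95 else "ok"
    print(counts, round(stat, 2), flag)
```

Wiring a check like this into a weekly job is what turns a one-off analysis into the kind of continuous monitoring alert described above.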

 

How do you create a culture where innovation and experimentation are encouraged daily?

At the heart of our business strategy lies a commitment to continuous experimentation. To foster this, our product team developed a systematic testing framework that lets us test hypotheses and configurations to drive performance gains. We have institutionalized this approach by mandating that deployments include a test built on this framework whenever feasible.

By tracking and openly sharing all outcomes — both successes and failures — we cultivate an environment where innovation is truly celebrated. The most valuable insights often come from failed experiments that reveal what doesn’t work.

A recent initiative led by one of our forward-deployed data scientists illustrates this principle: an attempt to lower unsubscribe rates through rules-based frequency guardrails. Although that specific trial did not yield the desired results, presenting the findings inspired another team to pivot their strategy, ultimately leading to a successful outcome.
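A rules-based frequency guardrail of the kind tested here usually reduces to a simple cap: suppress a send if the user has already received N messages inside a rolling window. The cap, window, and dates below are illustrative, not the values from the experiment.

```python
from datetime import datetime, timedelta

MAX_SENDS = 3         # illustrative cap
WINDOW = timedelta(days=7)

def allowed_to_send(send_history, now, max_sends=MAX_SENDS, window=WINDOW):
    """Return True if the user is under the frequency cap for the window."""
    recent = [t for t in send_history if now - t <= window]
    return len(recent) < max_sends

now = datetime(2025, 1, 8)
history = [datetime(2025, 1, 2), datetime(2025, 1, 5), datetime(2025, 1, 7)]
print(allowed_to_send(history, now))  # False: three sends within 7 days
```

Part of what makes such guardrails easy to trial, and easy to share when they fail, is that the whole rule fits in a few testable lines like these.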

Another key aspect of our culture is valuing individuals who spot a problem and come forward with solutions and guidance. That instinct is at the core of one of our values, “Don’t Ignore Smoke,” which is all about noticing issues before they turn into fires and helping each other find and implement a solution.

William Palmer
William Palmer, Director, Forward-Deployed Data Science