Stephen Biesty's cross-section book on trains

Funnily enough, numbers used to intimidate me. Maybe I thought my "artsy" brain wasn't wired for the world of exact sciences. But one thing has always driven me: a deep curiosity about how things work. Just as I'd spend hours as a kid poring over Stephen Biesty's cross-section books, fascinated by what lay beneath the surface, I've always been drawn to understanding the inner mechanisms of things.

My time as a designer within the customer support team at Nubank, the Brazilian digital powerhouse, became my real-world cross-section. Witnessing firsthand how their simple, accessible support built trust and fueled massive growth in a historically underserved market was truly inspiring. Interacting daily with both customers and the brilliant support agents highlighted the critical need for an in-app experience that echoed their human-centric approach.

The challenge? Optimizing the in-app help center, the first port of call for users seeking help. Despite comprehensive FAQs, users often bypassed them for direct chat or phone support – pricier channels with slower times to resolution (TTR). This low FAQ engagement also meant our AI model, "Tarot," lacked the data to learn and personalize effectively, hindering our goal of scaling self-service (a key business objective during rapid expansion) while maintaining stellar Customer Satisfaction (CSAT) and Net Promoter Score (NPS).

Our hunch was that making suggested FAQ topics more visible would encourage self-help. Through iterative design and user testing, we prominently featured these suggestions, and saw an initial uptick in clicks.

Next came the crucial step of feeding Tarot. We implemented a simple "was this helpful?" button and tracked how many users clicked FAQs and still contacted support. This direct feedback loop significantly improved Tarot's accuracy, leading to a noticeable drop in TTR and fewer support tickets.
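To make that feedback loop concrete, here's a minimal sketch of the kind of event logging it implies. The field names and `FaqEvent` schema are my own illustration, not Nubank's actual data model: the idea is simply that pairing the "was this helpful?" click with whether the user still contacted support yields both a deflection metric and training labels.

```python
from dataclasses import dataclass

# Hypothetical event schema; names are illustrative, not Nubank's actual one.
@dataclass
class FaqEvent:
    user_id: str
    faq_id: str
    was_helpful: bool        # the "was this helpful?" button
    contacted_support: bool  # did the user still open a chat or call afterwards?

def deflection_rate(events: list[FaqEvent]) -> float:
    """Share of FAQ readers who did NOT go on to contact support."""
    if not events:
        return 0.0
    deflected = sum(1 for e in events if not e.contacted_support)
    return deflected / len(events)

def training_labels(events: list[FaqEvent]) -> list[tuple[str, int]]:
    """Turn feedback into (faq_id, label) pairs a ranking model can learn from:
    1 = the article resolved the issue, 0 = it did not."""
    return [(e.faq_id, int(e.was_helpful and not e.contacted_support))
            for e in events]
```

A label of 1 requires both signals to agree (helpful *and* no follow-up contact), which is stricter than the button alone – a user can click "helpful" and still end up calling support.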

Building on this win, we carefully introduced Tarot into the chat for less sensitive issues, offering automated first responses while always keeping a human agent available. Given the often emotional nature of chat, initial engagement with the automated suggestions was low. So we shifted our approach: empowering support agents to review and give feedback on Tarot's draft responses. This "human-in-the-loop" approach proved remarkably effective, resulting in more relevant automated replies and a smarter Tarot.
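The routing logic behind that approach can be sketched in a few lines. Everything here is an assumption for illustration – the topic list, the confidence threshold, and treating "Tarot" as any model that returns a reply with a confidence score – but it captures the shape of the gate: sensitive issues always go to a human, low-confidence drafts go to an agent for review, and the agent's edits become the training signal.

```python
# Illustrative human-in-the-loop gate; topics and threshold are made-up examples.
SENSITIVE_TOPICS = {"fraud", "bereavement", "complaint"}
CONFIDENCE_THRESHOLD = 0.8

def route_message(topic: str, model_reply: str, confidence: float):
    """Decide whether an automated draft goes out directly,
    goes to an agent for review, or is skipped for a human entirely."""
    if topic in SENSITIVE_TOPICS:
        return ("human", None)        # emotional/sensitive issues: no automation
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", model_reply)  # send the suggestion as a first response
    return ("review", model_reply)    # agent reviews and edits before sending

def agent_feedback(model_reply: str, sent_reply: str) -> int:
    """The agent's edit is the feedback signal:
    1 if the draft was sent as-is, 0 if it needed changes."""
    return int(model_reply == sent_reply)
```

The design choice worth noting is that agents don't label data as a side task – the label falls out of work they're already doing (editing or approving a draft), which is what made the loop sustainable.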

This project hammered home the power of aligning user behavior with business goals through data-informed design. By continuously testing and iterating, and by recognizing the vital role of human insight in training AI, we significantly improved user experience, boosted self-service, and contributed to Nubank's success in delivering exceptional support at scale.

This experience back in 2020 sparked a real curiosity in product automation.

Fast forward to 2024, and that curiosity led me to pursue an MBA in Data Science – where I finally started learning Python, work automation, and… tarot.
