Smartnumbers
Smartnumbers uses machine learning and community to authenticate customers and prevent fraudsters from accessing information.
Challenge
We want to be the number one tool in the telephony space for authentication and fraud prevention. Working with our clients and target personas, I led discovery alongside product managers, generating opportunities and providing the user insights that informed business strategy.
My impact
- Continuously improved our discovery processes.
- Led two cross-functional teams aligned with business goals.
- Ran generative and solution-based discovery, using opportunity solution trees for visibility across the business.
- Maintained and iterated on our design system.
Wins
- 120% increase in weekly active users.
- High stickiness: our accuracy and performance made us genuinely difficult to switch off.
- Network-effect growth, driven by empowered users who love the product.
Process
Every opportunity followed a modified process depending on the bet. A higher-risk bet went through more research and design; high-value, low-effort opportunities where we already had positive indicators moved faster. Everything focused on moving product-owned metrics that led directly to a company goal.
Having worked on various opportunities, the case study below focuses on a specific user need:
As a fraud investigator, I need cases flagged to me and my team so that we can stop a fraudster in their journey.
Competitive analysis & product usage
The team and I were not sure what a valuable investigation looked like. Digging further, we found no concrete definition within the business either, so we made bets based on what we knew about the personas, product usage data from our users, and the findings from the competitive analysis.
What the data told us:
- A large number of high-risk callers were prolific fraudsters or repeat offenders calling multiple times a day.
- Users treated us as an information-gathering tool; the start of an investigation relied on our competitors' products.
We interviewed our users and tested our data-driven hypotheses with them. We found:
- Timing is important. We needed to flag calls as close to real time as possible, giving investigators a chance to protect funds.
- Typically there would be 8-15 cases investigated per day, per investigator.
Ideas
Armed with all the facts, I relayed a summary to the team so we all had the right context to raise ideas.
After a show and tell, we moved on to defining the effort behind each idea. Because of the complexity, a typical impact/effort matrix wouldn't capture the detail we needed to refer back to in the future.
A key discussion point in this session centred on tracking and what success looks like. Remember this; I'll return to it in the final designs.
Designs
With our chosen idea and logic, I designed a new user journey.
In the ideas section, I mentioned defining what success looks like. At the end of the investigation journey, we asked users to record a decision on each case. The aim was for as many decisions as possible to be marked fraud or suspicious.
We also wanted to feed this decision data back to our machine learning team as a factor in calculating risk. Working closely with other product managers and staying aware of their goals, I knew this would, over time, assist another product metric relating to accuracy.
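To make the feedback loop concrete, here is a minimal sketch of how investigator decisions could feed a risk score as one additional factor. All names (`CaseDecision`, `decision_feature`, `risk_score`), the label set, and the blending weight are illustrative assumptions, not Smartnumbers' actual model:

```python
from dataclasses import dataclass

# Hypothetical labels an investigator can record at the end of a case.
FLAGGED = {"fraud", "suspicious"}

@dataclass
class CaseDecision:
    caller_id: str
    decision: str  # e.g. "fraud", "suspicious", "genuine"

def decision_feature(history, caller_id):
    """Fraction of past investigator decisions for this caller marked
    fraud or suspicious: a simple feature the risk model could consume."""
    past = [d for d in history if d.caller_id == caller_id]
    if not past:
        return 0.0
    return sum(1 for d in past if d.decision in FLAGGED) / len(past)

def risk_score(base_score, history, caller_id, weight=0.3):
    """Blend the model's base score with the investigator-feedback
    feature, capped at 1.0. The 0.3 weight is an arbitrary placeholder."""
    return min(1.0, base_score + weight * decision_feature(history, caller_id))

history = [
    CaseDecision("+441234", "fraud"),
    CaseDecision("+441234", "suspicious"),
    CaseDecision("+449999", "genuine"),
]

# A repeat offender's score rises above the model's base score.
print(risk_score(0.5, history, "+441234"))  # 0.8
print(risk_score(0.5, history, "+449999"))  # 0.5
```

In a real system the feature would be one input among many to the ML team's model; the point is that each recorded decision nudges future flagging accuracy.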
User testing summary
- Our users loved the journey.
- The conditions of cases were understood and had a positive response.
- Users were vocal about what the next iteration should look like; this was fed back to the team and added as opportunities to our opportunity solution tree.
Success
After development was completed and the feature launched, we tracked our metrics weekly and tuned things as needed.
- 120% increase in users completing a key action.
- The decision data was fed back to the machine learning teams, improving the accuracy of flagged calls.
- A need to expand, quickly. As predicted by our logic, fraud investigators began using us as the starting point for investigations, and we eventually ran out of cases to flag; thanks to our user testing, we already knew what the second phase looked like.