Speaklarity
SaaS
MVP
In March 2025, I joined Speaklarity, an early-stage startup armed with a core concept: to create a powerful AI coach for structured professional communication.
My mission was to take the product from zero to a fully functional MVP. The challenge was to translate complex AI feedback into a human-centric experience that encourages users to practice, rather than intimidating them.
Research, UI/UX Design, Prototyping, Testing
Product Designer
Mar 2025 - present
Candidates often fail not because they lack experience, but because they lack structured practice.
My core hypothesis was that by providing instant, structural feedback, the product could motivate users to iterate, turning vague answers into polished and structured responses.
Speaklarity is built around repetition and immediate feedback. Instead of just consuming content, the user is pushed to practice right away, which helps lock the skill into their muscle memory.
Record
Analyse
Iterate
This behavioural shift creates a hook that drives higher retention and ensures the user actually acquires the skill, rather than just reading about it.
Speaklarity bridges the gap between knowledge and execution. By providing a safe, simulated environment for repeated failure and rapid improvement, we turn interview anxiety into reliable confidence.
Research
The Challenge
Context
Professionals at top-tier companies need to ace behavioural interviews.
Problem
Candidates often know the theory but fail in delivery due to lack of stress-resilient practice. They can't bridge the gap between knowing and speaking.
Current Landscape
Passive learning
Creates a knowledge illusion but builds no muscle memory.
Voice memos
High friction, zero feedback. Users reinforce bad habits.
AI Tools
High utility but high anxiety. Overwhelms users with data, lowering motivation.
Human Coach
Effective but expensive and unscalable.
To build a sustainable habit, I discovered that the product must satisfy two needs simultaneously: perceived progress and actual skill growth.
I decided to map the landscape against these two axes.
From this analysis, it became clear that today's market is polarised: current solutions offer either comfort without results or results without motivation, leaving the ideal high-growth quadrant completely empty.
Key takeaways
Limit feedback
Act as a smart filter. Even if the AI detects 20 errors, show only the most critical ones to prevent cognitive overload and anxiety (see the sketch after these takeaways).
Sandwich utility between emotion
Wrap technical criticism inside positive reinforcement. Start and end every session with validation, highlighting what they did right, to keep the perceived progress high.
Visualise progress
Track and display metrics that always go up (streaks, word count, time spent), regardless of skill level. This ensures a dopamine hit even when the actual skill growth plateaus.
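To make the first takeaway concrete: the feedback layer can be thought of as a severity-ranked filter sitting between the raw AI output and the user. Here is a minimal TypeScript sketch of that idea; the types, field names, and the cap of three items are illustrative assumptions, not the production pipeline.

```typescript
// Sketch of the "smart filter" takeaway: even if the AI detects 20 issues,
// surface only the few most critical ones to the user.
// Types, field names, and the default cap of 3 are assumptions.

type Severity = "critical" | "major" | "minor";

interface FeedbackItem {
  message: string;
  severity: Severity;
}

const SEVERITY_RANK: Record<Severity, number> = {
  critical: 0,
  major: 1,
  minor: 2,
};

// Keep only the top `limit` items, most severe first, so the user sees
// a short, actionable list instead of a wall of corrections.
function filterFeedback(items: FeedbackItem[], limit = 3): FeedbackItem[] {
  return [...items]
    .sort((a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity])
    .slice(0, limit);
}
```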
Usability Testing
MVP Validation
The initial user flow was technically sound.
However, my usability testing revealed a critical gap between function and adoption: users completed the tasks but didn't return.
Data signal
I launched a closed beta with a cohort of 20 active job seekers.
Unlike a controlled usability test, these users were given full access to the product for their actual interview preparation.
The goal was to observe organic usage patterns and validate if users would instinctively enter the iteration loop (Record → Analyse → Re-record) without prompted guidance.
70% drop-off
The drop-off was alarmingly high.
To understand why, I conducted follow-up interviews with the participants.
They revealed that the friction didn't just happen at the end; it started much earlier.
Users reported losing focus and confidence during the recording itself, which made the subsequent complex feedback feel even more overwhelming.
ROOT CAUSE ANALYSIS
I mapped these emotional and cognitive blockers in the journey analysis below:
Record a STAR answer
Understand performance
What do I say? Will the AI even understand my context?
I feel stupid talking to a blank screen. What was the question again?
Whoa, too much info. I have an AI-revised script, tone advice, and 20 grammar fixes. What do I do first? Do I memorise the new text or just fix the old one?
Anxiety
Overwhelm
With only a standard record button and no visible question, users forgot their talking points or lost focus during the recording process.
The lack of a visible prompt made them hesitate, fearing the AI wouldn't understand their context.
The journey map revealed a clear pattern of friction. Users were paralysed by under-guidance during the recording phase and information overload during the analysis phase.
To fix the drop-off, I needed to lower the cognitive load at these two critical steps.
Strategy
Eliminate decision fatigue
Remove the burden of "what to say" from the user.
Reduce cognitive load
Filter raw data into a single, prioritised insight.
Lower the barrier to iteration
Make re-recording the path of least resistance.
Recording screen
THE PROBLEM
In the first iteration, I removed all distractions. However, testing revealed three critical issues:
Trust Gap
Users doubted the AI would understand their context without manual input.
Lack of direction
Users didn't know what to say or where to start.
Low Intensity
Speaking to a blank screen felt too casual. Users lost focus, rambled, and didn't take the practice seriously.
To fix this, I shifted from a passive recorder to a video-call simulation.
redesign
I redesigned the screen to mimic a real interview environment. This forces the user to posture up and manage their presence, while giving them full control over the topic.
#1 Context and control
Users can select a preset question or write their own.
This builds trust (ensuring the system knows the topic) and solves the blank page paralysis by giving a clear starting point.
#2 The Mirror Effect & Presence
Seeing themselves on camera forces users to fix their posture and treat the session seriously, preparing them for real interviews.
The AI Avatar (a static image, as if a real interviewer were on mute) adds a sense of being heard, preventing the feeling of talking into a void.
#3 Visual Anchor
The question card remains visible throughout the session.
This keeps the user focused on the specific topic and prevents rambling or forgetting the prompt mid-speech.
RESULT
Analysis screen
the problem
Initially, I designed an Overview tab to summarise the top 3 focus areas alongside a full transcription. I assumed users wanted a holistic view. However, testing revealed this triggered Analysis Paralysis:
Decision fatigue
Even selecting between 3-4 suggestions was overwhelming. Users had to click, read, and decide what to tackle first.
Demotivation
The screen felt like a static list of flaws rather than a path forward.
To fix this, I shifted from reporting to guided, single-step coaching.
redesign
I completely restructured the screen to direct all attention to re-recording. The interface now hides the details and highlights a single, high-impact task, gamifying the improvement process.
#1 Gamified Forecast
Instead of just showing past mistakes, the new graph projects the future. It promises: "Fix this one thing, and your score grows by 12%." The message: it's good now, and you can make it better next time. This creates an immediate dopamine reward hook.
#2 One Key Task
To cure overwhelm, I removed the list of focus tasks. The system auto-selects the highest-ROI improvement and expands it by default. Users don't have to choose or search; they just read the tip and hit record.
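Under the hood, the forecast (#1) and the single key task (#2) can share one simple mechanism: rank suggestions by projected score impact and surface only the top one. A minimal TypeScript sketch; every type, field, and number here is an illustrative assumption, not Speaklarity's actual scoring code.

```typescript
// Sketch: auto-select the highest-ROI suggestion and project the score
// gain shown in the forecast graph. All names and values are assumptions.

interface Suggestion {
  id: string;
  title: string;         // e.g. "Tighten your Situation setup"
  projectedGain: number; // estimated score improvement, in points
}

interface Analysis {
  score: number;         // current session score, 0-100
  suggestions: Suggestion[];
}

// Pick the one task with the largest projected impact,
// so the user never has to compare or choose.
function pickKeyTask(analysis: Analysis): Suggestion | undefined {
  return [...analysis.suggestions].sort(
    (a, b) => b.projectedGain - a.projectedGain,
  )[0];
}

// Forecast copy for the graph: fix one thing, watch the score grow.
function forecastCopy(analysis: Analysis): string {
  const task = pickKeyTask(analysis);
  if (!task) return `Great run: ${analysis.score} points.`;
  const projected = Math.min(100, analysis.score + task.projectedGain);
  return `Fix "${task.title}" and your score grows from ${analysis.score} to ${projected}.`;
}
```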
#3 Persistent Environment
I replaced the static transcription in the right window with the paused video feed. This signals that the session isn't over; the "interviewer" is simply waiting. It keeps the user in the flow of speaking, making the transition to re-recording seamless.
Result
Conclusion
BUSINESS OUTCOMES
Engagement and focus
Re-recording rate