Turning passive job tracking into an active search strategy
ROLE
Product Designer
TIMELINE
Apr - May 2026
TEAM
Six: one designer, developers, and a scrum master
TOOLS
Claude Code
Figma
Vercel
THE TLDR
Applytics is a job application tracker designed to help job seekers stay organized, take action, and land the role.
This project was built by a team of six based in the US, Spain, Nigeria, Uganda, and Serbia. We weren't assigned a scrum master until a few weeks in, and we got off to a late start communicating, largely because it was hard to find a meeting time that worked for everyone across that many time zones. The project itself was short, spanning just seven weeks. I used it as an opportunity to learn how to use AI as a design tool, and once the team got rolling we produced a fully functional app that went beyond the original brief.
5
Time zones
7
Weeks
13
Screens built
PROBLEM
Job searching is broken, just not in the way you'd think
Most trackers log applications but don't help you take action
Job searching is overwhelming not because there aren't enough tools, but because the ones that exist don't reflect how people actually search. Most trackers are essentially elaborate spreadsheets: they log applications but don't help you act on them. They don't tell you when to follow up, surface patterns in your pipeline, or account for the fact that different job seekers have completely different strategies. I saw a real opportunity to design something that actually helps job seekers take action.
THE CHALLENGE
How might we help job seekers stay organized and take action without adding more noise to an already overwhelming process?
UNDERSTANDING
What my own job search told me
SOLUTION
Designing for how people actually job search
Volume Applying
These job seekers focus on tracking how many applications went out, what the response rate is, and where they're dropping off in the funnel.
Relationship Building
These job seekers need reminders on when to follow up, who to thank, and which conversations to keep warm.
The features I added beyond the brief came directly from these two use cases. Suggested tasks with follow-up reminders came from my own job search focus on building relationships. The saved links section, where users store their LinkedIn, portfolio, and other links to copy into applications, came from the friction of pasting the same URLs into every single application form. The insights page includes both "this month" and "all time" views so job seekers can reflect on their progress within the current search or compare it against a past one.
Here are some key flows on Applytics…
Track and manage every application in one place
The dashboard gives you a bird's eye view of your pipeline, key metrics, and a full list of every application you've logged.
Log, update, and dive into the details of each application
Add new applications in seconds and click into any of them to see the full picture: status timeline, notes, tasks, and salary details.
Filter your pipeline by status and reflect on your progress with insights
Boards let you filter by applied, interviewed, offer, rejected, or favorites. The insights page goes deeper with charts and breakdowns of your activity over time.
Customize your profile, notifications, and saved links
Settings covers everything from account details and notification preferences to an upload section where your most-used documents and links are always one click away.
PRODUCT THINKING
Every feature started with a real decision
Job searching is a desktop sport
We prioritized desktop, with the hope of building a responsive web app if time allowed. Job searching is active work that happens on a laptop, not a phone. If we had expanded to mobile, it would have been a lighter experience for quick check-ins, with features like parsing a job posting URL to autofill application fields.
Feasibility as a design constraint
Features went through a quick feasibility check with the developers to keep the product realistic. Knowing what could ship changed how I prioritized and sequenced the work.
CHALLENGES
One designer, five time zones, seven weeks
Without structure, the project stalled early
The team spanned the US, Spain, Serbia, Nigeria, and Uganda: five time zones with no overlap window that worked for everyone. There was no product owner or scrum master for the first few weeks, which meant decisions were slow and alignment was hard to get.
Getting alignment across time zones
Once the scrum master joined, meeting cadence improved and decisions got made faster. In the meantime, I defaulted to async communication: sharing updates early and often, and recording a video demo of the lofi wireframes so the team could review on their own schedule.
Olivia's extreme accuracy for product perception, gradually bringing that idea into simplified design is a skill I have not seen in many. Her ability to work with a diverse cross-functional team, communicating the design flow clearly while being open to taking and giving feedback helped increase the progress of the project.
Zuwee Ali, scrum master
DESIGN APPROACH
Finding the right way to work with AI
What I tried first
Using AI beyond a chat interface was totally new to me, so I started by looking up how other designers were doing it. I found a workflow connecting Claude to the Figma MCP to create layouts, so I tried it myself: I built three lofi screens and asked Claude to finish the rest. The output had some reasonable suggestions, but overall it wasn't hitting the mark, so I rebuilt those screens myself.

Claude's hifi output: cluttered typography, oversaturated and muddy colors, and unnecessary information and features.
I spent more time (and tokens) correcting Claude's output than I spent actually creating
I tried again by asking Claude to convert the lofi screens to hifi using Robinhood, Linear, Stripe, and Notion as inspiration. The output was still generic. It fleshed out the structure but didn't absorb the visual language I described. I added a design system and asked Claude to apply it, but it carried over unstyled lofi components and what it did apply didn't look right.

mini design system
A better approach
I noticed early on that, until I specifically asked it to work in Figma, Claude defaulted to building locally. That, combined with conversations I'd been following on X about what people were building with AI directly in code, convinced me to try that approach. I started a new terminal session with a detailed prompt covering product purpose, core pages, color palette, style, target audience, vibe, and key UI elements.

My input and Claude's output. I documented the initial demo on X.
Validating every output against my vision
When the output came back, I evaluated structure and function first: were all the core pages present, was the navigation logical, did the key interactions work? Color and polish came last because those were always going to need iteration regardless. The output was the most accurate yet. It matched what I had in mind and in some cases suggested solutions I hadn't thought of.
Investment Impact
The terminal approach produced a more accurate result in roughly 60% less time than the Figma MCP and manual wireframing combined.

ITERATION & REFINEMENT
Learning to speak Claude Code's language
Strategic prompting: identifying the "why"
With the layout solid, I moved into color and visual details. No matter how specific my prompts were, colors kept reverting to generic defaults. I used a separate conversational Claude session to diagnose the problem, which is how I discovered that Tailwind utility classes were overriding my CSS changes. Once I had a prompt targeting those overrides, the process accelerated significantly, and understanding the "why" changed how I approached prompting from then on.
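To show the kind of conflict this was, here's a minimal, hypothetical sketch. It assumes a React + Tailwind setup; the component names and hex value are placeholders for illustration, not actual Applytics code.

```tsx
// Hypothetical illustration of a Tailwind utility overriding custom CSS.
// Both rules set background-color on the same element; in a setup where the
// utility wins the cascade (as it did for us), the custom rule never shows up.

// globals.css -- the custom rule I kept asking Claude to tweak:
// .status-badge { background-color: #7c6cf5; }  /* placeholder brand color */

// Before: the bg-blue-500 utility still sets its own background,
// so the badge keeps rendering with the generic default.
export function StatusBadgeBefore() {
  return (
    <span className="status-badge bg-blue-500 rounded-full px-3 py-1 text-white">
      Interviewing
    </span>
  );
}

// After: the prompt targets the utility class itself instead of fighting it,
// using Tailwind's arbitrary-value syntax to bake the exact hex in place.
export function StatusBadgeAfter() {
  return (
    <span className="bg-[#7c6cf5] rounded-full px-3 py-1 text-white">
      Interviewing
    </span>
  );
}
```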

How my prompts evolved
My early prompts covered purpose and vibe; later ones targeted a single component with exact hex values. Shorter, focused prompts made issues easier to catch and were less likely to burn through my usage limits. I was also able to make specific UX judgment calls, like making the status timeline horizontal, adjusting the suggested-task opacity, and moving the task header outside the card for clearer hierarchy.
HANDOFF & QA
Where design decisions meet real constraints
Unifying what drifted
Despite a shared design system, colors varied from page to page and components differed depending on who built them. I stepped in to unify the visuals across each developer's work before the deadline.
What shipped vs what didn't
I deployed the prototype to Vercel so developers could inspect and interact with it directly rather than working from a static spec. The devs built a strong product from it, and most of my QA feedback was minor. The task system was cut from the MVP entirely and the URL parser didn't make it in; both were scope reductions driven by timeline and bandwidth, not decisions I'd walk back.
Beyond just the visuals, Olivia's designs forced me to think deeply about user flows and anticipating user needs. We didn't just get the UI right; we crafted a cohesive experience that feels intuitive and intentional.
Anthony Tibamwenda, developer
REFLECTION
What I walked away with
Come in with a clear vision
The more defined my direction before prompting, the less time I spent correcting output and the more accurate the result.
Not everything can be prompted
Once I understood where AI adds value and where it falls short, I stopped prompting for things it couldn't do and started filling those gaps myself.
Structure enables speed
Clear ownership and communication expectations from day one would have made the whole project move faster.
How to make AI my… sidekick
Knowing when to push further, when to step in, and when to try something different entirely is what makes this workflow actually work.