Instrumentl is a SaaS platform that matches non-profits with grantors and is an essential service for NGOs to track and manage grant applications.

I love this tool and the power it gives small teams to improve their fundraising opportunities. But I didn’t quite love the main interface—so I redesigned it. I’ll explain why (and how) in this case study.

Grant writers and managers spend a lot of time researching, applying to, and managing grants—it’s their lifeline. As a result, they can spend hours inside a tool like Instrumentl and need the UX and UI to make their interactions as concise and helpful as possible.

My experience using Instrumentl wasn’t as helpful as I would have liked. The main interface—a grant tracker—lacks a strong visual hierarchy; over time, too many colours, CTAs, and sections have come to compete for your attention rather than focusing you on your core goals.

The Project Dashboard: where do you look?

The grant detail page—the place to gauge whether a particular grant suits your application or project—is indeed detailed, but it could use a more scannable surface (to quickly pull out key amounts and dates), historical context (grantees apply to many of the same grants year over year), and the removal of roller-blinding (a modal frame loaded over a dark background that creates a roller-blind effect with each new query). Together, these would make the research phase less straining.

Grant Details: a lack of scale makes for a challenging scan.


With these issues in mind, I set up some basic goals for a modest redesign:
• Improve the speed at which researchers can find and add grants to a project
• Help them triage grant application stages to prioritize their time
• Provide historical context for projects and grants
• Reduce visual noise so longer sessions in the app are less straining

If we translate this exercise into business goals, they’d read as:
• Improve the visual hierarchy/system of the grant tracker to boost retention and, in turn, reduce churn
• Improve the information context of grant details to increase efficacy and, as a result, help grantees find, add, and win more grants


I limited myself to two weeks and included these steps in the process:
1. Primary research
2. Job stories
3. UX/UI sketches
4. Design system elements
5. Mock-ups

In addition to using the app for a few days myself, I also asked three researchers to complete common tasks and observed the outcomes.

These included:
• Add a new project
• Research possible grants for that project
• Check on grant application deadlines
• Check on grant application status
• Evaluate ongoing goals
• Review previous year’s grant details

A few issues popped up during these tasks:

• While researchers could add new projects to their workspace, they often mistook which project was open. As a result, they spent time looking at the wrong data or backtracking to find the correct project.

• Researching grants means searching, and search was often missed or misunderstood. Participants who did find it thought the main “quick find” search covered only projects (collections of saved grants) rather than global grant research, and some searched “saved grants” when they thought they were running global searches.

• While the bright labels of a grant’s status were easy to spot, they competed with other colours in the UI. Everyone found it hard to focus on “where to look” the more time they spent using the tool.

• Evaluating ongoing goals (how many grants have we won, lost, etc.) was certainly possible, but the four KPIs were presented without any visible relationship to one another.

• It was impossible to review the previous year’s activity; that feature isn’t currently available.

Here’s how I’d synthesize the interviews and observations into job stories.


When I start a new project I want to quickly find and scan grants so I can determine if they’re right for my needs.


When I explore a grant I want to understand our history with it so I can decide how to better manage the next application.


When I scan the status of our grant applications I want to see what grants require my attention so I can ensure we don’t miss deadlines and our overall goals.


When I check on our project’s goals I want to quickly see how we’re tracking, what funds are still available to us, and what we’ve lost so I can gauge the effectiveness of our campaign.

While I constrained this exercise to the project tracker and grant detail, I started by exploring the broader context of the app. How could sections be logically grouped? What sub-sections or features need prioritization vs deprecation? These would be assumptions without an actual roadmap, but for this purpose they allowed me to explore foundational elements and how they might influence my chosen sections.

I started with basic UI architecture: retaining the left rail for nav, placing global search in the header, and using cards or boards to display information in each section.
Next, I added a global view so a researcher could immediately grok upcoming grant deadlines, project totals, and activity from other team members.
Then, I added key features/sections to the left rail that I prioritized from the current dashboard. This included sections devoted to Research, a Calendar, and Analytics.
These broader segments gave me the heading structure and historical dropdown for the Projects section.
From there, I went on to explore Grant Details. My priorities here were to highlight key data (given the sheer volume a researcher views) and introduce historical context (through a year picker similar to the one for projects, and a timeline that records key events, similar to GitHub’s).

I shared versions of these sketches with my test subjects for some quick iterations and, from there, moved on to establishing some design system elements that would make their way into the final UI.

Across the research and job stories, “clarity” and “context” emerged as the two dominant needs of users wading through enormous amounts of grant material. They want to quickly grok details like titles, sub-titles, and amounts, and stay aware of scenarios like deadlines, opportunities, and challenges.

To solve for this—and jumpstart the foundation—I used the grid and cell structure from Shopify’s Polaris system. From there I added:

• Typographic scale
• Desaturated colour
• Consolidation of buttons
• A visualization of project goals
• A grant activity timeline

Typographic scale, to improve hierarchy and relationships.
Desaturated colour scale to ease the UI and better focus attention and priorities.
A timeline to record grant activity so team members have historical context.
Consolidation of buttons into a simpler pattern that relies on proximity and hierarchy rather than colour range.
A visualization of project goals to better articulate the relationship of the four key KPIs: the goal, progress towards goal (amount won), the remaining pool, and lost grants.
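The relationship between those four KPIs can be sketched in code. This is a minimal model with hypothetical field names and an assumed definition of “remaining pool” (goal minus what’s already won or lost), not Instrumentl’s actual data schema:

```typescript
// Hypothetical shape of a project's goal data — field names are assumptions.
interface ProjectGoals {
  goal: number; // total funding target
  won: number;  // amount awarded so far
  lost: number; // amount from declined applications
}

// Assumed definition: the pool still available to the team is the goal
// minus what has already been won or lost, floored at zero.
function remainingPool(p: ProjectGoals): number {
  return Math.max(p.goal - p.won - p.lost, 0);
}

// Progress towards goal as a 0–1 fraction, for driving the visualization.
function progress(p: ProjectGoals): number {
  return p.goal > 0 ? Math.min(p.won / p.goal, 1) : 0;
}

const example: ProjectGoals = { goal: 100_000, won: 35_000, lost: 15_000 };
// remainingPool(example) → 50000; progress(example) → 0.35
```

Tying the four numbers together with explicit arithmetic like this is what lets a single visualization show them as parts of one whole rather than four unrelated stats.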

Status badges
One particularly important design pattern is grant status colours. These are a cornerstone of grant management and indicate six possible states that researchers must regularly track.

The current system tags these states with bright colour hues, which makes each badge distinct from the interface but indistinct from one another: every badge competes for a similar amount of attention.
The new system uses a desaturated but progressive colour range to create a hierarchy so researchers can quickly grasp what needs their attention now vs later.
Status Progression
From cool to warm, filled to outlined, the badges now indicate “stage” as much as state.
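A sketch of that idea in code: a single stage rank per status can drive both the cool-to-warm hue step and the sort order of a tracker column. The six state names below are illustrative assumptions, since the actual statuses aren’t listed here:

```typescript
// Illustrative status names — assumptions, not Instrumentl's real states.
type GrantStatus =
  | "researching" | "planned" | "in-progress"
  | "submitted" | "awarded" | "declined";

// Stage rank: earlier stages render cooler and quieter (outlined),
// later stages warmer and louder (filled).
const STAGE_RANK: Record<GrantStatus, number> = {
  researching: 0,
  planned: 1,
  "in-progress": 2,
  submitted: 3,
  awarded: 4,
  declined: 5,
};

// Sorting a tracker column by stage surfaces what needs attention next.
function byStage(a: GrantStatus, b: GrantStatus): number {
  return STAGE_RANK[a] - STAGE_RANK[b];
}

const column: GrantStatus[] = ["awarded", "planned", "submitted"];
column.sort(byStage); // → ["planned", "submitted", "awarded"]
```

Because hue, fill style, and ordering all derive from one rank, the badges stay mutually consistent as states are added or renamed.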

Here’s where all the inputs landed.

Desktop with goals displayed.
Desktop with grant details. The dark modal overlay was removed in favour of inline content.
Dashboard on tablet.
Grant details on tablet.


Normally, these mocks would go in front of several current and new Instrumentl users for discussion and feedback. For our purposes, we’ll use job stories to reflect on the proposed changes. Here’s a recap.

Natch, this exercise is just that: an exploration of how to remove some of the challenges researchers face using the app day in and day out. More qualitative data and actual quantitative data would be needed to better understand how to deliver on the promise of jobs-to-be-done. For now, a fun exercise with an otherwise great product.

PS. Here’s a before and after.