Landing Page

A daily overview fine-tuned to each user


 
 

Timeline

November 2023 – January 2024

Working Team Members

Design, Product, Business, Engineering

 
 

Methods

Literature Review | Interview & Concept Test (n = 14) | Wizard of Oz Method (n = 11)

Impact

✅ Improved user awareness of to-dos ✅ Product and Business buy-in for customization ✅ Additional data-accuracy checks before launch

 
 

Process

📚 Literature Review & Discovery → 🔄 Design → 🔎 Interview & Concept Test → 🔄 Iteration → 🔎 Wizard of Oz Method → 🔄 Iteration → ⚙️ Engineering Handoff → 📋 Pilot Survey → ⚙️ Refinement & Full Launch → 📋 Survey

 

Background / Challenge

Once upon a click, users arrived on a landing page that should have guided them forward. Instead, many felt lost in jargon, misdirected by layout, or stalled before taking action. My role was to uncover the sticking points and chart a clearer path.

Methods & Participants: Designing the Experience

Literature Review

We began by synthesizing key insights from past research into a single document so the takeaways reached designers quickly. The deliverable laid out success criteria for the proof of concept (POC), background on hypothesized features, and other themes for the team to keep in mind.

Interview & Concept Test

Once designers had reviewed the literature review and the business requirements, they produced low-fidelity concepts. Three concepts were developed, each personalized to a user group the team hypothesized as having unique needs. We conducted 1-hour remote sessions with 14 users, aiming for 3-5 users from each of the 3 key user groups. The concepts were similar enough across groups that insights could be compared between user groups.

Wizard of Oz

We iterated on the designs based on the previous round of research and brought them to higher fidelity. That round had shown that needs varied across users, even within a given user group. For this round, we conducted 1-hour remote sessions with 11 users, again aiming for 3-5 users per key user group. A designer manipulated the concept mid-session, adjusting the design live in response to each participant's stated preferences.

Takeaways

  • Participants found the concepts valuable: they could create efficiency, help users avoid forgetting important to-dos, and focus their next move.

  • The concepts should be personalized by user group at first release, delivering initial value that users can then build on through customization. The visual below can guide the personalization approach.

 

Value of widgets according to each user group.

 
  • Importantly, participants' needs were largely distinct from one another, even within the same user group. We identified 17 factors that primarily shape how a user prefers to organize their landing page. A user's preferred layout would also likely shift over time as workers progress through their careers.

  • Finally, participants assumed the data would be accurate and up to date, flowing to and from other systems.

    Recommendation: Collaborate with other teams on feasibility. Do not move forward if 1) customization options (adding, removing, and moving information) are not feasible, or 2) widgets cannot accurately and quickly integrate with other apps and systems. A rough sketch of this customization model follows below.
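
To make that recommendation concrete, here is a minimal sketch of the customization model in TypeScript. Everything in it is hypothetical: the group names, widget IDs, and function names are illustrative assumptions rather than the product's actual code. It shows per-user-group defaults seeding the page, plus the add, remove, and move operations the recommendation treats as must-haves.

// Hypothetical sketch: per-user-group widget defaults that users can
// then customize (add, remove, move). All names are illustrative only.

type UserGroup = "groupA" | "groupB" | "groupC";

interface Widget {
  id: string;       // e.g. "todos", "schedule", "announcements"
  position: number; // order on the landing page
}

// Each group starts from research-informed defaults, so the page
// shows some initial value before any customization happens.
const defaultLayouts: Record<UserGroup, Widget[]> = {
  groupA: [{ id: "todos", position: 0 }, { id: "schedule", position: 1 }],
  groupB: [{ id: "schedule", position: 0 }, { id: "todos", position: 1 }],
  groupC: [{ id: "announcements", position: 0 }, { id: "todos", position: 1 }],
};

// The three customization operations flagged as must-haves.
function addWidget(layout: Widget[], id: string): Widget[] {
  return [...layout, { id, position: layout.length }];
}

function removeWidget(layout: Widget[], id: string): Widget[] {
  return layout.filter(w => w.id !== id).map((w, i) => ({ ...w, position: i }));
}

function moveWidget(layout: Widget[], id: string, to: number): Widget[] {
  const moved = layout.find(w => w.id === id);
  if (!moved) return layout;
  const rest = layout.filter(w => w.id !== id);
  rest.splice(to, 0, moved);
  return rest.map((w, i) => ({ ...w, position: i }));
}

// Example: start from a group's defaults, then personalize.
let layout = defaultLayouts.groupA;
layout = addWidget(layout, "announcements");
layout = moveWidget(layout, "announcements", 0);
console.log(layout.map(w => w.id)); // ["announcements", "todos", "schedule"]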

Methods & Participants: Monitoring the Experience

Pilot Survey

A small group of users was given access to a trimmed-down version of the landing page before the full launch. A survey went out to all pilot users to understand their experience with the page. The results showed that users were dissatisfied with the accuracy of the data on the page, which motivated Product and Engineering to hold off on a larger release until those issues were resolved.

Next Steps

Once all technical issues were resolved, the team launched the landing page to all users. A survey will go out about one month post-release to measure the new experience and establish a baseline the team can compare against in future releases.