UXPA Journal Project

A research-backed redesign strategy for UXPA's Journal of User Experience (JUX)

Timeline

3 Months

Role

Information Architect

Tools

Google Suite
Optimal Workshop
Figma
Canva

Team

Aliyah
Hania
Mya
Sanaa
Sanskar
Vanessa

Introduction

The JUX website offers UX-centric content and valuable educational resources, but it suffers from an unclear content structure, confusing navigation labels, limited search functionality, and a lack of tools for refinding content or tracking new publications. These issues hinder navigation and discovery, prompting us to address them through a content inventory, card sorting, and tree testing.

Content Audit

Purpose of the website

The UXPA website serves multiple purposes, primarily targeting UX professionals by offering research articles, usability studies, and best practices. It aims to:

  1. Establish Credibility
    • Provide research-backed content on usability, user-centered design, and UX topics.
    • Offer courses, peer-reviewed articles, and journals to educate new entrants in the UX field.
  2. Facilitate Professional Development
    • Provide career resources, memberships, and networking opportunities for UX professionals.
    • Connect members through a mentorship program and foster community engagement via events and social media platforms.

Target Users and Their Needs

  1. Primary Users:
    • UX Professionals: Designers, researchers, and developers working with user-centric methods.
      • Needs: Access to usability case studies, research articles, webinars, events, and job postings.
      • Search Behavior: Requires high recall (broad result sets) to explore topics comprehensively.
  2. Secondary Users:
    • Authors & Writers: Contributors to UX-related research.
      • Needs: Ability to refind and archive their own articles or reports.
      • Search Behavior: Requires high precision for specific results and quick access to saved content.
    • Design Industry Professionals:
      • Needs: Exploratory information on UX topics and trends.
      • Search Behavior: Focus on pearl-growing—searching for related content based on previous findings.

Search Behaviors

Users engage in different search strategies, including Sensemaking, Foraging, Berry Picking, and Pearl Growing, depending on their goals. Search functions need to be optimized for both recall and precision to meet the needs of varying user types. UX professionals need high-recall searches to access as much relevant content as possible, while authors and designers prefer high-precision results for quick discovery of specific items.
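
To make the recall/precision trade-off concrete, here is a minimal sketch of the standard definitions, applied to a hypothetical result set (the article IDs and relevance judgments below are invented for illustration):

```python
# Illustrative only: the standard precision/recall definitions applied to a
# search result set. Article IDs and relevance judgments here are invented.

def precision_recall(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    """Precision: share of retrieved items that are relevant.
    Recall: share of relevant items that were retrieved."""
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = {"a1", "a2", "a3", "a4"}       # what the search returned
relevant = {"a2", "a3", "a5", "a6", "a7"}  # what the user actually needed
p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.50, recall=0.40
```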

To support these behaviors, the JUX website must:

  • Help users create meaningful queries.
  • Ensure metadata is clear and visible to aid in search result interpretation.
  • Provide consistent, accurate search results with intuitive search tools.
  • Enable users to recover previous search sessions and avoid dead ends.

[Figure: Website hierarchy tree graph]

Website Challenges Identified

  • Content Structure:
    The site lacks clear categories, facets, and filtering options, limiting users' ability to refine searches. Breadcrumb navigation is also missing, causing difficulties in orienting within the site.
  • Navigation Issues:
    Broken links (19 found across 409 pages) disrupt navigation, with users encountering backend code pages or repeated 404 errors. Key areas like the “Contact Us” form and links to social media profiles lead to errors, further frustrating users. (A minimal link-check sketch appears after this list.)
  • Outdated Content:
    Many articles are authored by the same contributors and date back over 10 years, affecting credibility for users seeking current research and trends.
  • Limited Metadata and Features:
    While the metadata is generally useful, such as author details and keywords, the site lacks advanced search tools and notifications for new publications, making it hard for users to stay updated.
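
To show how an audit like the one above can be run, below is a minimal link-check sketch in Python, assuming the `requests` library and placeholder URLs; it is illustrative, not the exact tool we used:

```python
# Minimal link-check sketch (illustrative; the URLs are placeholders and this
# is not the exact audit tool we used). Flags 4xx/5xx responses as broken.
import requests

def check_links(urls: list[str]) -> list[tuple[str, int]]:
    broken = []
    for url in urls:
        try:
            # Some servers reject HEAD; a fuller tool would fall back to GET.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                broken.append((url, resp.status_code))
        except requests.RequestException:
            broken.append((url, 0))  # no response / network error
    return broken

pages = ["https://example.org/", "https://example.org/contact/"]  # placeholders
for url, status in check_links(pages):
    print(f"BROKEN ({status or 'no response'}): {url}")
```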

[Figure: Content inventory]

Card Sort

Card Sort Overview

Objective: Improve the Information Architecture (IA) of the JUX website.

Method:

  • Unmoderated open card sort with 10 participants.
  • Participants organized 20 cards based on their understanding.
  • Study included a pretest and post-test questionnaire.
  • URL of the card sort: Optimal Workshop card sort.

Participants and Findings

  • Participant Pool:
    • 10 participants from Canada.
    • UX experience ranging from 1-12 years (majority with 1-2 years).
    • None had prior knowledge of the JUX website.
  • Usage Needs for Scholarly Articles:
    • All participants sought articles as students.
    • 3 used the content as educators.
    • 5 accessed it for personal purposes.
    • 1 had other professional needs.
  • Completion Metrics:
    • Median completion time: 4 minutes, 55 seconds.
    • Users with more UX experience used broader categories and completed the sort faster.
    • Users with less UX knowledge created more specific labels, sometimes repeating card terms verbatim.

[Figure: Participant overview]

Key Findings

Prior Knowledge and Sorting Patterns

  • Experienced Users (e.g., Participant #9):
    • Used broader categories (e.g., "Research").
    • Completed the study quickly (5 minutes).
  • Less Experienced Users (e.g., Participant #1):
    • Used narrower, specific labels (e.g., “Methodologies for Conducting UX Research”).
    • Took longer to finish the task (13 minutes).

This suggests that prior knowledge influences not only speed but also the specificity of categories. Less experienced users rely on card terminology, while experienced ones generalize based on their domain knowledge.

[Figure: Highlighting Participants #1 and #9]

Card Pairings and Category Trends

  • Common Groupings:
    • Articles grouped by dates:
      • “Articles from 2022” + “Articles from 2006 to 2021”
    • Groupings by topic/author:
      • “Methodology of Wizard of Oz research testing” + “Measuring the reliability of scales in UX research”
      • “Most Recent Articles” + “Most Downloaded Articles”
  • Misplaced or Confusing Cards:
    • "Call for Submissions" card was difficult to categorize for three participants. This might indicate issues with jargon comprehension or lack of context.
  • Unexpected Pairings:
    • “Editorial staff list” was never grouped with “Articles by JUX author Hye-Jin Lee,” showing clear user expectations about unrelated content.

Tree Test

Overview

To improve the information architecture (IA) of the Journal of User Experience (JUX) website, we conducted two tree tests using Optimal Workshop. Each study was limited to 10 participants due to the platform’s payment plan, totaling 20 participants across both tests (Study 1 and Study 2). Participants completed a pretest questionnaire, instructions, the tree test, and a post-test questionnaire.

Our categories were based on a card sort and content inventory of JUX. We analyzed results using Optimal Workshop’s tools while considering participants' UX experience, task completion rate, and navigation paths.

Structure of the Tree Test

The tree test structure was designed using data from the previous card sort study.

  • Parent Categories: We used the most frequent labels participants created in the card sort to align with user expectations.
    • Subcategories: We relied on the Similarity Matrix to cluster related items, ensuring intuitive grouping (a sketch of how such a matrix is computed appears below).

One challenge was placing “Most Downloaded Articles,” which participants had sorted inconsistently into four different categories. To simplify navigation, it was placed as a third-level category instead of a fourth-level subcategory under “Most Recent Articles.”
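
For readers unfamiliar with how a similarity matrix is derived from card-sort data, here is a minimal sketch (the three card names are from our study, but the participant sorts shown are invented; Optimal Workshop produces this matrix automatically): for each pair of cards, count how many participants placed both in the same group.

```python
# Sketch of a card-sort similarity matrix (hypothetical sort data). Each
# participant's sort is a list of groups, each group a set of card names.
from itertools import combinations
from collections import Counter

sorts = [
    [{"Most Recent Articles", "Most Downloaded Articles"}, {"Call for Submissions"}],
    [{"Most Recent Articles"}, {"Most Downloaded Articles", "Call for Submissions"}],
    [{"Most Recent Articles", "Most Downloaded Articles", "Call for Submissions"}],
]

pair_counts: Counter = Counter()
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a!r} + {b!r}: grouped together by {n}/{len(sorts)} participants")
```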

Participants

Participants included junior UX designers, UX students, and UX professors to ensure familiarity with UX terms and navigation strategies.

  • Pre-study results:
    • 3 out of 19 participants had visited the JUX site before.
    • UX experience ranged from 1 to 20+ years, with many participants being students using the site for academic research.

We targeted participants with UX backgrounds to minimize knowledge gaps and ensure meaningful feedback, aligning with Nikolopoulou et al.’s recommendation that academic audiences require effective search strategies.

Test #1 Results

  • Participants: 9
  • Task 1:
    • 8 participants completed it, with 7 achieving direct success by selecting the expected category (“Articles”).
  • Task 2:
    • Only 44% initially clicked on “Call for Papers,” the correct category.
    • 7 participants completed the task, but only 3 achieved direct success.
  • Median completion time: 1 minute, 46 seconds.

Test #2 Results

  • Participants: 10
  • Both tasks showed 90% success rates, and direct success on Task 2 improved from 33% in Test 1 to 70% in Test 2.
  • Task 2 improvement: 90% of participants correctly clicked on “Call for Papers” on their first try, showing clearer navigation paths.
  • Median completion time: 2 minutes.
  • Participant feedback: Most found the tasks easy, though a few noted Task 2 required more time to evaluate options.

Study Tasks

  1. Task 1
    • Prompt: "Suppose you are completing a scholarly research piece – find research about inclusive and universal design."
    • Anticipated path: Home → Articles → Articles by Topic → Accessibility
  2. Task 2
    • Prompt: "Suppose you are a professor who has written a research paper. Where would you look to find guidelines about publishing your research?"
    • Anticipated path: Home → Call for Papers → Criteria for Submitting a Paper

These tasks reflected realistic user needs based on our content inventory. Success was measured by the accuracy of category selection, directness of navigation, and first-click accuracy.
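
To illustrate how these measures can be computed, the sketch below scores logged navigation paths for Task 2 against its anticipated route; the logged paths are invented for illustration, and Optimal Workshop reports these metrics automatically.

```python
# Sketch of tree-test scoring (hypothetical logged paths). "Direct success"
# means the participant followed the anticipated path with no backtracking;
# "first click" is the first category chosen after leaving Home.
EXPECTED = ["Home", "Call for Papers", "Criteria for Submitting a Paper"]

logged_paths = [
    ["Home", "Call for Papers", "Criteria for Submitting a Paper"],  # direct
    ["Home", "Articles", "Home", "Call for Papers",
     "Criteria for Submitting a Paper"],                             # indirect
    ["Home", "Articles", "Articles by Topic"],                       # failure
]

def first_click_ok(path: list[str]) -> bool:
    return len(path) > 1 and path[1] == EXPECTED[1]

def direct_success(path: list[str]) -> bool:
    return path == EXPECTED

n = len(logged_paths)
print(f"first-click accuracy: {sum(map(first_click_ok, logged_paths))}/{n}")
print(f"direct success:       {sum(map(direct_success, logged_paths))}/{n}")
```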

Key Findings and Adjustments

  1. Structural Changes:
    • In Test 1, “Policies” was a level 2 category, which caused confusion. Moving it under “Call for Papers” in Test 2 improved the success rate for Task 2 by 20%.
    • Rewording Task 2 from "peer-reviewed paper" to "research paper" made it clearer.
  2. Direct Success Improvement:
    • Test 2 saw a significant jump in direct success for Task 2 (from 33% to 70%). This suggests participants better understood the relationship between the task and available options.
  3. First Click Rates:
    • Task 1 had 100% correct first-clicks across both studies, showing it was straightforward.
    • For Task 2, first-click accuracy increased from 44% in Test 1 to 90% in Test 2 after adjustments to category structure.

[Figure: Task specifics for Test #1]

[Figure: Task specifics for Test #2]

Summary of Results

The structural and wording changes made between Test 1 and Test 2 led to improved navigation:

  • Task 2's direct success rate more than doubled, from 33% to 70%.
  • First-click accuracy for Task 2 rose from 44% to 90%.
  • Task 1 success remained consistent across both studies, with no significant changes.

Participants’ feedback confirmed the clarity of the tasks, though Task 2 initially posed challenges due to the placement of Policies as a level 2 category. Moving it under “Call for Papers” and adjusting the wording made the task easier to complete in Test 2.

Final Report

Design Strategy for JUX

The redesigned JUX website offers a more intuitive structure and enhanced navigation, aligning with the needs of its academic users. With the implementation of a revised sitemap, improved labeling, and faceted search, we expect users to have a smoother experience finding relevant content. The inclusion of utility features such as a subscribe option and bookmarking will also increase user engagement and retention. These changes aim to elevate the JUX website into a more efficient, user-centric platform that meets the evolving needs of the UX research community.
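
As a sketch of the proposed faceted search, the example below filters article records by facet values; the records and facet names (topic, year) are hypothetical stand-ins for the journal's real metadata, not its actual schema.

```python
# Minimal faceted-filtering sketch over article metadata (hypothetical records
# and facet names; a real implementation would sit on the site's search index).
articles = [
    {"title": "Wizard of Oz testing methodology", "topic": "Research Methods", "year": 2022},
    {"title": "Measuring scale reliability in UX research", "topic": "Research Methods", "year": 2021},
    {"title": "Inclusive and universal design", "topic": "Accessibility", "year": 2020},
]

def facet_filter(items, **facets):
    """Keep items matching every selected facet, e.g. topic='Accessibility'."""
    return [a for a in items if all(a.get(k) == v for k, v in facets.items())]

for a in facet_filter(articles, topic="Research Methods", year=2022):
    print(a["title"])  # -> Wizard of Oz testing methodology
```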

The limitations we encountered with metadata optimization and multi-device bookmarking highlight potential areas for future improvements. Despite these challenges, our recommendations reflect achievable solutions that maintain the site’s core functionality while addressing the critical pain points identified through research. With the proposed changes, JUX can better serve its primary audience, encourage scholarly collaboration, and establish itself as a valuable resource in the UX community.

Retrospective

Reflection

In this project, we successfully identified key usability issues on the JUX website, conducted user research through card sorting and tree testing, and implemented a redesign that improves navigation, search functionality, and the overall user experience. However, there are always alternative approaches that could have been explored to enhance the design further.

What we could have done better

  • Instead of relying solely on cookies, we could have introduced user accounts for saving bookmarks and receiving personalized notifications. This solution could also improve the multi-device experience but would require account management infrastructure.
  • We could have implemented smaller changes incrementally and tested them in real time using A/B testing, allowing us to collect user data and refine the design over multiple iterations. (A minimal variant-assignment sketch follows below.)
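
As a sketch of how such A/B testing could assign visitors to variants without requiring accounts, deterministic hash bucketing is one common approach; the experiment name, variant labels, and 50/50 split below are hypothetical.

```python
# Sketch of deterministic A/B assignment via hashing (hypothetical experiment
# name and 50/50 split). Hashing a stable visitor ID keeps each visitor in the
# same variant across sessions without an account.
import hashlib

def assign_variant(user_id: str, experiment: str = "nav-labels-v2") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # 0-99
    return "new-navigation" if bucket < 50 else "control"

print(assign_variant("visitor-123"))  # stable for this visitor and experiment
```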

Feel free to reach out if you’d like to chat about this project or explore my other work!