AI Is Reshaping UX Research and Design

UX Roundup for October 27, 2025. (Seedream 4)
In a festive spirit, a short video created with Sora 2 and Veo 3 demonstrates the idea of dressing up as a usability expert for Halloween. An interesting observation from this experiment is how differently the two models style their output: Sora 2 produces frantic, fast-cut clips reminiscent of viral social videos, while Veo 3 offers a calmer, more educational tone. This suggests that training data strongly influences each model's output style. And while AI can automate the entire video-creation process, manually scripting and directing clips still offers welcome creative control, as seen in a video made with a usability action figure.

Scoring Secondary Research with the 3S Method
Secondary research involves using existing user research findings to inform a new design project. It offers immediate insights at a low cost. For instance, services like the Baymard Institute provide access to extensive e-commerce usability research for a monthly fee, delivering a high return on investment through improved usability and sales.
However, the primary challenge with secondary research is its relevance, as it was conducted for different goals. Relying on research with mismatched user groups—like applying findings from ostriches to a product for giraffes—can be misleading.
To address this, the Study Similarity Score (3S) offers a structured way to assess the relevance of a secondary study. It involves rating five characteristics on a 0–100 scale:
- User match: Are the study participants similar to your target users?
- Task match: Do the tasks share the same workflow, inputs, and constraints?
- Context match: Are the device, environment, and pressure similar?
- Outcome match: Are the success metrics (e.g., sales, speed, satisfaction) the same?
- Ecological validity: How closely does the study mimic real-world conditions?
The total 3S score (out of 500) guides your next steps:
- 400–500: A perfect match. Dive deep into the report and strongly consider its recommendations.
- 250–399: Moderately relevant. Skim for insights but validate any recommendations with your own user testing.
- 0–249: Unlikely to be relevant. You might still find inspiration in the methodology for your own research.
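
To make the score bands concrete, here is a minimal Python sketch that totals the five ratings and maps the sum to the recommended action. The dimensions and thresholds come from the 3S method as described above; the function names and the example ratings are illustrative, not part of any official tooling.

```python
# Minimal sketch of a 3S (Study Similarity Score) calculation.
# Thresholds follow the bands described above; function names and the
# example ratings are illustrative assumptions.

def study_similarity_score(user, task, context, outcome, ecological):
    """Sum five 0-100 similarity ratings into a total out of 500."""
    ratings = [user, task, context, outcome, ecological]
    if any(not 0 <= r <= 100 for r in ratings):
        raise ValueError("Each rating must be between 0 and 100")
    return sum(ratings)

def recommended_action(score):
    """Map the total 3S score to the suggested next step."""
    if score >= 400:
        return "Strong match: study the report in depth and weigh its recommendations heavily."
    if score >= 250:
        return "Moderate match: skim for insights, then validate with your own user testing."
    return "Weak match: mine the methodology for inspiration rather than the findings."

# Example: similar users and tasks, but a different outcome metric and context.
total = study_similarity_score(user=85, task=80, context=70, outcome=40, ecological=60)
print(total, "-", recommended_action(total))  # 335 - Moderate match: ...
```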

23 Fresh Ideas for AI User Interfaces
Students in Carnegie Mellon University’s “Design of AI Products and Services” class have developed 23 concepts to improve the standard linear chat interface in AI systems. These ideas primarily target power users, addressing friction points that emerge with extensive AI use. The concepts are grouped into five essential themes.
1. Supporting Complex Workflows
This theme tackles the limitations of linear chat for complex, multi-threaded work.
- BranchFlow: Allows users to branch AI responses into visual trees to explore different ideas while retaining context.
- Dual-mode AI Workspace: Offers both a freeform canvas for creative tasks and a guided workspace for structured goals.
- Contexts: Creates AI-generated workflow threads that span multiple chats, helping users track ongoing tasks.
2. Navigating and Revisiting Past Work
This theme focuses on making it easier to find and resume previous conversations.
- Thematic Chat Grouping: Automatically clusters related chats by theme, with summaries and dates.
- Info Clusters: Provides an overview of numerous conversations and allows for stable data export.
- Re-entry Panel: Offers an AI-generated recap of the last session to help users regain context.
- Chat Navigator: Lets users star key moments in a conversation for quick navigation.
- Bookmark Tab: Collects important outputs or prompts for easy reuse.
3. Improving Input and Articulation
This theme aims to help users formulate effective prompts and overcome the AI Articulation Barrier.
- Prompt Modes: Provides guides (e.g., Research, Design) to help frame requests.
- Prompt Suggestions: Suggests keywords and tags to improve a prompt before submission.
- AI Prompt Generation: Offers flexible options for iterating on prompts.
- Prompting Guide: Visually parses complex inputs into navigable categories like Goals and Contexts.
- Highlight What’s Important: Allows users to use highlighting or bolding to communicate priorities to the AI.
4. Enhancing Transparency, Trust, and Control
This theme is dedicated to building user trust by improving transparency and providing granular control over AI output.
- Side-Panel AI Verification: Opens a side panel to show conflicting information and helps resolve hallucinations.
- Preview of Source: Allows users to fact-check by previewing sources without leaving the interface.
- Multi-level Refinement: Offers various editing levels, from big-picture changes to precise inline edits.
- Fine-tune Pop-ups: Enables users to select specific text sections for immediate refinement (e.g., improve, explain, change tone).
5. Making AI Memory Controllable
This theme addresses the issue of LLMs forgetting information between sessions.
- Conversational Memory: Displays in-line citations when memories are used, allowing user confirmation.
- Memory Citations: Shows exactly where memories are being pulled from.
- Memory Context: Provides an organizational structure for memories to help navigate long conversations.
- Memory Setup: Allows users to proactively remind the AI of their preferences and priorities.

Can AI Outperform Humans in Heuristic Evaluation?
A recent study from the Federal University of Technology Paraná compared the performance of two AI models (GPT-4o and Gemini 2.5 Flash) against four human usability experts in a heuristic evaluation of a pediatric care UI. All evaluators applied Jakob Nielsen's 10 usability heuristics.
The results were compelling:
- Gemini 2.5 Flash: Found 82 usability problems (17% false positives).
- GPT-4o: Found 63 usability problems (18% false positives).
- Human Experts (average): Found 25 usability problems (7% false positives).
While humans are better at avoiding false positives, AI excels at identifying a higher volume of design issues, even if many are minor. The study also confirmed that different AI models find different problems, with only 20% of issues being identified by both. This reinforces the recommendation to use multiple AI models for more comprehensive results.
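
A quick back-of-the-envelope calculation makes the trade-off concrete, assuming each reported false-positive rate applies to that evaluator's own list of findings (the counts and rates come from the study as summarized above; the calculation itself is only illustrative):

```python
# Rough estimate of valid (non-false-positive) problems per evaluator,
# assuming each reported false-positive rate applies to that evaluator's own list.
results = {
    "Gemini 2.5 Flash": (82, 0.17),
    "GPT-4o": (63, 0.18),
    "Human expert (average)": (25, 0.07),
}

for name, (reported, fp_rate) in results.items():
    valid = reported * (1 - fp_rate)
    print(f"{name}: ~{valid:.0f} valid problems out of {reported} reported")

# Gemini 2.5 Flash: ~68 valid problems out of 82 reported
# GPT-4o: ~52 valid problems out of 63 reported
# Human expert (average): ~23 valid problems out of 25 reported
```

Even after discounting false positives, each AI model surfaces roughly two to three times as many valid issues as the average individual human expert in this study.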
Currently, AI can't definitively beat human experts, but performance is expected to improve with future models. The prediction is that AI will exceed human performance in heuristic evaluation between 2027 and 2030. For now, the best approach is to treat AI-generated usability reports as a list of suggestions for a human expert to review, leveraging AI's breadth of detection with human judgment and expertise.


