Key Takeaways
At Baymard, our research team has just spent 4,000+ hours usability testing and researching Checkout features, layouts, content, and designs — leading to our revised and expanded Checkout research study.
The research is based on more than 200 qualitative user/site usability test sessions following the “Think Aloud” protocol (1:1 remote moderated testing).
This Checkout study focused on a mix of mass merchant and smaller sites, for a total of 16 sites: Etsy, Theo Chocolates, CVS, Best Buy, Stance, Hayneedle, Apple, Everlane, Container Store, Williams Sonoma, Overstock, Walgreens, Snowe Home, Wayfair, American Eagle, and Samsung.
During testing, users encountered 1,350+ medium-to-severe checkout usability issues.
These issues have subsequently been analyzed and distilled into the 110+ UX guidelines found within our Checkout research study (all of which are available as part of our Premium research findings).
The 110+ guidelines cover most aspects of the Checkout UX, at both a high level of general user behavior as well as at a more granular level of specific issues users are likely to encounter.
This latest round of large-scale Checkout testing provides the following:
This 2024 Checkout research ensures that our existing Checkout research — first conducted in 2010 and periodically conducted since — remains robust and up-to-date with the latest UX findings.
This large-scale study dedicated 4,000+ research hours and conducted more than 200 qualitative user/site usability test sessions solely to investigating e-commerce Checkout UX.
Analyzing the Checkout test session data, we’ve reverified all of our existing Checkout research — meaning that our previously identified Checkout issues and recommendations (published as Checkout articles or in Baymard Premium) are again shown to be primary drivers of the Checkout UX for end users.
Along with reverifying our Checkout research, we’ve also updated our Checkout research catalog with hundreds of new examples.
The examples are from the test sessions at the 16 test sites listed above, as well as from our e-commerce UX benchmark (desktop, mobile, and app).
While many user issues and patterns persist from year-to-year, there are some genuinely new issues that arise, and new approaches to resolving these issues.
As such, we’ve added 20 new Checkout guidelines based on this latest 2024 test data, some of which we’ve already published as articles (e.g., see “Include All Order-Fulfillment Options in the Fulfillment-Selector Interface (50% Don’t)”, “Provide a “Fully Automatic Address Lookup” Feature (55% Don’t)”, and “Use Buttons or Buttons Plus an Open Text Field for Updating Cart Quantity (61% Don’t)”).
Additionally, 62% of the existing Checkout guidelines reverified in this study have been heavily revised: while the core issues observed and the recommended solutions remain largely the same as in the previous guideline, each revised guideline adds arguments, implementation details, and additional considerations that make for a more robust guideline overall.
Finally, some of our Premium subscribers may notice that there have been some changes to the research catalog in the Checkout theme.
In particular, we’ve archived 36 guidelines, for 1 of the following reasons:
With these guidelines archived, the number of Checkout guidelines has dropped from 134 (in 2022) to 119 in 2024 — each of which has been combed through by our research team to ensure it is not only valid but clear, actionable, and highly relevant to determining a particular site’s Checkout UX performance.
In addition to archiving 36 guidelines, we’ve also improved the reading experience in Premium by updating topic titles and the overall organization of the catalog. We anticipate that these changes will make for a more reader-friendly and less intimidating experience for Premium users in the Checkout theme.
At Baymard, we’re not new to Checkout UX research.
Indeed, we conducted our first Checkout study back in 2010, and have been testing e-commerce checkouts ever since.
However, if a client or customer happens to see a screenshot from one of our test sessions of an interface that looks dated, we’re sometimes asked, “How do I know this research is still relevant?”
In particular, some seem to think that, with new design trends emerging all the time, any UX research that hasn’t been conducted in the past 3–6 months is irrelevant, or is of diminished relevance.
Yet what this latest round of Checkout research has shown (and what we’ve observed more generally for e-commerce UX) is that, while design trends come and go, user issues and user needs remain largely the same over time.
For instance, consider an example from that 2010 Checkout study, compared against a much more recent example.
In the 2010 study, we identified a user issue where some users were forced to reenter a billing address after having already provided a shipping address, or otherwise struggled with the “Billing Address” fields.
14 years later, in this latest round of Checkout research, we identified 35 instances where users encountered the exact same, or very similar, issues or implementations regarding the billing address.
Thus, in this one example we see how, despite flashy new designs, a site’s Checkout UX performance depends on “Checkout UX fundamentals”: forms, form fields, and Checkout elements and features that have been around for a decade or more.
Despite these issues persisting over the years, we find that on many sites they remain unresolved — leading to friction-filled checkout experiences, negative user brand perceptions, and users abandoning checkouts.
This latest 2-year Checkout research study shows that, to improve Checkout UX, the vast majority of sites need to solve problems that we identified 3, 5, or even 10 years ago — and have now verified with this 2024 research.
Thus, the research (even if it includes a screenshot that’s not the latest version of a site’s interface) remains valid, and implementing the recommendations will improve the experience of a site’s users today, despite the issue being identified years ago.
Indeed, in many of our guidelines we include “old” examples along with the newest examples of users experiencing issues during Checkout — precisely to illustrate the longevity of the UX issue.
In addition to 4,000+ more hours of research, our latest Checkout-specific research study adds 200+ qualitative user/site sessions, for a total of 650+ qualitative user/site sessions dedicated solely to our Checkout research, conducted periodically from 2010 to 2024.
Note that this figure doesn’t include site sessions from our industry-specific studies, the majority of which have Checkout findings as well.
For example, our dedicated research tracks on Vitamins and Supplements, Travel Accommodations, and Online Grocery all contain Checkout findings that largely support our broad Checkout B2C research, along with some Checkout findings that are specific to those industries.
Finally, in addition to our Checkout guidelines (many of which are available for free as published articles), be sure to view our Checkout benchmark based on over 24,000 performance ratings, as well as our Checkout page designs, which includes over 6,600 desktop and mobile Checkout examples.
Getting access: all 110+ Checkout UX guidelines are available today via Baymard Premium access. (If you already have an account, open the Checkout study.)
If you want to know how your desktop site, mobile site, or app performs and compares, then learn more about getting Baymard to conduct a UX Audit of your site or app.
At Baymard we’ve just released an update of our Order Tracking & Returns UX benchmark.
This adds to our existing e-commerce UX benchmark.
We’ve manually assessed 20 large desktop and mobile e-commerce sites across our 27 research-based Order Tracking & Returns UX guidelines.
This provides you with:
850+ new worst and best practice Order Tracking & Returns UX implementation examples already embedded in the 27 UX guidelines
950+ new Order Tracking & Returns UX performance scores from the 20 sites — view them and the updated case-study dataset in the UX Benchmark tool
160+ new full-page design examples of order tracking and returns pages from leading e-commerce sites in the Page Design tool
The 20 sites’ 950+ UX performance scores are summarized in the interactive scatterplot below — showing you how they perform collectively and individually:
{{ scatterplot-graph: view-structure-id=gemini-VLtV0EX9eCUnbvHklaxr + habitat=public + size=full-width }}
A publicly available overview of the Order Tracking & Returns research and benchmark can be found on our Accounts & Self-Service UX research overview page.
Getting access: all 950+ UX performance scores and 850+ implementation examples are available immediately and in full within Baymard Premium. (If you already have an account, open the Accounts & Self-Service guideline collection.)
If you want to know how your desktop site, mobile site, or app performs and compares, then learn more about getting Baymard to conduct a UX Audit of your site or app.
Key Takeaways
Users don’t always understand the terminology chosen for filter types and options.
In Baymard’s large-scale testing of product lists, participants struggled with a significant number of unclear or jargon-heavy filter types and options (e.g., “RFID” in filters for wallets), which often required domain or technical knowledge to use correctly.
Labeling filters with terms users don’t understand can hinder product finding just as much as not having the right filters in the first place.
Yet our e-commerce UX benchmark shows that 62% of sites use unclear labels, which users may skip when trying to filter for desired items.
Thus, particularly with large product lists, it’s important to help users filter accurately to reduce the noise.
In fact, our testing shows that not fully understanding — or misunderstanding — filter types and options was both a direct and an indirect cause of site abandonments.
In this article, we’ll discuss Baymard’s Premium research findings related to jargon language for filter types and options:
When sites use technical or unfamiliar language in filter types or options, users may be less likely to apply filters — overlooking filters that are, in reality, very suitable.
For example, a user shopping for sofas who doesn’t know the term “Slipcover” may avoid selecting it but still select other options — meaning that the filtered product list might contain no sofas with this feature.
Thus, if this style were in fact suitable, users could miss out on ideal products.
On the other hand, failing to understand a site’s filter terms can also lead users to apply filters they thought meant something else, resulting in product lists that show too many unsuitable items or too few suitable ones.
For example, if a user misunderstood the term “Bouclé” and applied that option, the list could contain so many unsuitable items — if this style didn’t appeal to them — that they would struggle to find a suitable one.
Additionally, users who encounter ambiguous filter options can end up spending time applying and removing filter options just to understand their effect on the product list.
This becomes a tedious process, in particular when the information or thumbnails for each item in the product list do not adequately convey the differences in specs or features between the products.
Mobile users will have an especially difficult time if they must use a separate filtering interface that sends them back to the product list after each filter option is selected.
Finally, a subgroup of users intent on looking up the meaning of an unclear filtering term might leave the site just to figure out what a word or phrase means (“I’ll just Google it”) — which carries the risk that they find alternative products on other sites and never return.
Clearly labeled filters help users reduce the noise by narrowing down a large list to a smaller, more relevant list that improves the odds users will find a suitable product.
Throughout testing, 3 approaches proved effective in reducing terminology-related issues with regard to filters:
If possible, the best solution is to avoid industry jargon entirely.
Instead, use terms in filter types and options that better match the language that your greater audience is likely to find familiar (e.g., “temperature” instead of “season rating”, or “security protection” instead of “RFID”).
Doing so will ensure that the majority of users will understand all filtering options and be able to tailor the product list without unnecessary friction.
Sometimes industry jargon is the only option available, or it holds significant value for expert users who rely on it to differentiate products — in which case the jargon should be kept.
However, it’s important to then explain the terminology in a way that non-experts can understand, saving users the trouble of leaving the site to search for its meaning: “‘RAM’ means random-access memory. This helps your computer tackle multiple tasks at once. A minimum of 2GB is required for basic tasks, but 8GB or more is recommended for gaming or video editing.”
The explanatory content should be placed right where the tricky terms are found in the filtering component — typically at the filter-type level or at the filtering options themselves.
Helpful definitions are best served up in a tooltip on desktop sites or as a tappable link or icon on mobile platforms.
On mobile, take care to ensure that the link or icon for viewing the explanations is sufficiently sized and has enough space around it to be easy to tap.
When considering which filtering terms need to be explained, we recommend running tests with novice users to identify any problematic filtering labels that could benefit from more clarity.
However, a good rule of thumb is to provide explanations for category-specific filters, as these can vary from industry to industry — and even from site to site — and are most likely to be unfamiliar to users.
If the filtering relies on visual characteristics to differentiate the options, using thumbnails alongside the name of the filter type or option makes it easier for users to identify those distinguishing attributes without having to rely only on text labels. For example, thumbnails of different types of wine racks or sofas enable users to spot differences at a glance, without necessarily having any prior experience with the product type.
As our research has shown, exploring and refining e-commerce product lists can be challenging for users.
Thus, to encourage customer stickiness, it’s important that all users, not just domain experts, are able to use product filters quickly and accurately.
It’s therefore important to
Doing so will allow users to adequately narrow product lists.
Yet 62% of sites use technical or ambiguous filtering language (with no in-component help for users) or lack visual thumbnails that make product attributes easier to identify — forcing some users to leave sites due to filtering problems.
This article presents the research findings from just 1 of the 650+ UX guidelines in Baymard Premium – get full access to learn how to create a “State of the Art” e-commerce user experience.
If you want to know how your desktop site, mobile site, or app performs and compares, then learn more about getting Baymard to conduct a UX Audit of your site or app.
At Baymard we’ve just released a new UX benchmark of 5 Home & Hardware sites in our Home & Hardware benchmark collection.
This follows from our large-scale user testing research on Home & Hardware sites and adds to our existing e-commerce UX benchmark.
The 5 Home & Hardware sites from our Home & Hardware benchmark have been manually reassessed across 500+ research-based UX parameters relevant to Home & Hardware sites, resulting in 2,500+ weighted UX performance scores and 2,000+ worst and best practice examples.
For the benchmark, we rated 5 Home & Hardware sites: Home Depot, Build.com, Northern Tool, Grainger, and Lowe’s.
You can explore the 5 new Home & Hardware UX case studies using the below links:
decent: Home & Hardware, 94 page designs (desktop, mobile, app)
decent: Home & Hardware, 75 page designs (desktop, mobile, app)
mediocre: Home & Hardware, 63 page designs (desktop, mobile)
poor: Home & Hardware, 56 page designs (desktop, mobile)
decent: Home & Hardware, 83 page designs (desktop, mobile, app)
Want to know how your site performs? Get Premium access to review your own site or have it audited by Baymard researchers.
Each of the 5 Home & Hardware sites’ 2,500+ UX performance scores, along with the scores for the 7 other Home & Hardware sites in the benchmark, are summarized in the interactive scatterplot below — showing you how they perform collectively and individually:
{{ scatterplot-graph: size=big + habitat=public + base-sites=collection:home-and-hardware + view-structure-id=gemini-DyCQfqIvHoP85WQOxhqA }}
A publicly available overview of the research and benchmark can be found on our Home & Hardware research overview page.
Getting access: all 2,500+ UX performance scores, 2,000+ best practice examples, and the UX insights from researching the Home & Hardware industry are available immediately and in full within Baymard Premium. (If you already have an account, open the Home & Hardware study.)
If you want to know how your Home & Hardware desktop site, mobile site, or app performs and compares, then learn more about getting Baymard to conduct a Home & Hardware UX Audit of your site or app.
Key Takeaways
At the payment step in checkout, the finish line is in sight for both the user and the merchant.
In Baymard’s large-scale checkout testing, once participants entered the checkout flow, we observed higher completion rates on sites designed to handle errors elegantly during checkout — especially with regard to managing credit card data.
However, it is technically more challenging to persist data in fields requiring a higher level of security — such as credit card numbers, expiration dates, and security codes — than in other form fields.
To comply with PCI standards, merchants are not allowed to store credit card information, even temporarily, and discarding users’ credit card details when checkout errors occur is one method used to maintain compliance.
During testing, this was a direct cause of abandonments, as some participants refused to waste time reentering the credit card data they had populated just moments before.
Yet our e-commerce UX benchmark shows that 34% of sites don’t retain credit card numbers in the checkout flow when errors occur.
In this article, we’ll discuss Baymard’s Premium research findings for retaining data in sensitive credit card fields, including:
One of the most problematic and avoidable issues with secured credit card fields is when they get cleared after a form error occurs in nonsensitive fields elsewhere on the page.
Technically, this happens when the entire page is submitted and all fields are validated in one go (instead of validating each field as the user completes it).
For example, if there’s a validation error in, say, an address field elsewhere on the page, the credit card fields are cleared when the entire page is reloaded to show the validation error for the address field.
While the site technically meets PCI standards, it results in the user’s credit card information being deleted when the page is reloaded.
Clearing a correctly entered 16-digit credit card number due to, for example, a simple typo in a phone number elsewhere on the page is beyond frustrating, and in testing was the direct and sole cause of multiple participants abandoning the site.
What’s more, when the credit card number triggered an error, participants in testing were slow to recover, needing to retype the entire number into an empty field rather than scanning what they had typed to resolve the issue that caused the error.
And in general users struggle even more on mobile sites, where checkout errors are often harder to recover from due to poor error design.
Thus, clearing the credit card field — regardless of where the validation error actually occurred — results in increased effort for all users.
To avoid frustration and the risk of abandonment, sites should retain the data entered in credit card fields when displaying errors for those fields or any other fields in the form.
Persisting the card data does much more for the error-recovery process than just saving the user from retyping all the credit card details; more importantly, when the error resides within the credit card data, it allows users to review the incorrect input and find the error.
This detail is crucial: in testing we observed participants who mistook the cause of the error and often tried to fix already-correct information, leading them to abandon the purchase due to compounding validation errors.
Additionally, on sites tested that preserved the credit card information when showing errors, participants recovered from an error substantially faster than on sites that cleared it.
Thus, persisting the incorrect credit card data allows users to spot their mistake and only correct the invalid input.
In practice, there are 3 different ways to remain compliant with PCI standards and resolve or mitigate user frustration:
When users complete the checkout form, first validate all nonsensitive fields on the front-end (the non–credit card fields).
If any nonsensitive field contains invalid input, scroll the user to that field. By validating these fields (e.g., “contact”, “billing address”, or “coupon”) on the front-end, the form has not yet been “submitted”; hence, a page reload is not required if these fields contain an error, and the user’s input in sensitive fields can be retained.
Once you confirm the nonsensitive fields are valid (on the front-end), then submit the entire page and payment data to your regular payment-processing gateway.
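The gating logic above can be sketched in a few lines. This is a minimal, illustrative sketch only — the field names, validation rules, and the return values standing in for “show errors” and “submit to gateway” are hypothetical, not a specific vendor’s API:

```typescript
// Sketch of "validate nonsensitive fields first, submit only when clean".
// Field names and rules are illustrative assumptions.
type FieldErrors = Record<string, string>;

// Front-end checks for the non–credit card fields only
function validateNonsensitiveFields(fields: Record<string, string>): FieldErrors {
  const errors: FieldErrors = {};
  if (!/^\S+@\S+\.\S+$/.test(fields.email ?? "")) {
    errors.email = "Please enter a valid email address.";
  }
  if (!(fields.zip ?? "").trim()) {
    errors.zip = "Please enter a ZIP or postal code.";
  }
  return errors;
}

// Only perform the full submit (which may reload the page and clear
// sensitive fields) once the nonsensitive fields are known to be valid.
function handleCheckoutSubmit(fields: Record<string, string>): string {
  const errors = validateNonsensitiveFields(fields);
  if (Object.keys(errors).length > 0) {
    return "show-errors"; // scroll to the first invalid field; card input stays untouched
  }
  return "submit-to-gateway"; // safe to submit the page and payment data
}
```

Because the error path never triggers a page submit, the card fields are never cleared by a nonsensitive-field mistake.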
While not solving the issue entirely, validating each field as the user completes it allows sites to greatly reduce the frequency of the issue.
By immediately alerting users to incorrect inputs, there is less risk of them proceeding to submit the payment page and clearing the valid credit card data.
In particular, you can leverage Luhn validation to verify that credit card numbers are entered correctly as soon as the user completes that field (along with making the numbers easier to scan).
By immediately displaying an error when a credit card number is too short or contains an invalid character, users can fix the issue well before submitting their order.
While Luhn validation provides the benefit of inline form validation on this sensitive field, note that it is limited in scope: it doesn’t tell you if the credit card is valid or has sufficient funds.
As such, it should be used in combination with asynchronous payment calls or dual-stage error handling to improve users’ experience with the credit card form during checkout.
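The Luhn check itself is simple to implement client-side. A minimal sketch (the function name is illustrative) of the standard mod-10 algorithm:

```typescript
// Minimal Luhn (mod-10) check: catches most typos and too-short numbers,
// but says nothing about whether the card is real or has funds.
function passesLuhn(cardNumber: string): boolean {
  const digits = cardNumber.replace(/[\s-]/g, ""); // allow spaces/dashes in input
  if (!/^\d{12,19}$/.test(digits)) return false;   // typical card-number lengths
  let sum = 0;
  let double = false;
  // Walk right to left, doubling every second digit
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = Number(digits[i]);
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}
```

Running this on blur of the card-number field lets the site flag a mistyped digit immediately, without any call to the payment gateway.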
Finally, a third approach is to present all or nearly all nonsensitive fields at other checkout steps, and isolate the sensitive fields on the payment step.
This approach is an option for sites with limited technical resources or with a very simple checkout flow where there’s little need for displaying other nonsensitive fields at the payment step.
Users abhor unnecessary rework, and e-commerce merchants must keep payment details secure.
In this case, you can reduce cart abandonment by ensuring the underlying code manages the complexity of PCI compliance instead of frustrating users with extra work.
Which method you use depends on your technical resources and payment vendor.
Many sites can process payments by requesting credit card validation from the payment vendor (using secure tokens to stay PCI compliant) while the user continues to complete other fields.
And if the credit card triggers an error, sites can communicate behind the scenes (using AJAX) without requiring a full reload or update of the webpage, resulting in a more seamless and responsive experience.
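The background-validation flow can be sketched as follows. Note this is a shape sketch only: the `tokenizeCard` function is a stand-in for a real vendor’s tokenization SDK (which would make a network call), and all names are hypothetical:

```typescript
// Sketch of asynchronous, tokenized card validation without a page reload.
// tokenizeCard() is a stand-in for a real payment vendor's SDK call.
type GatewayResult = { ok: boolean; token?: string; message?: string };

async function tokenizeCard(cardNumber: string): Promise<GatewayResult> {
  // A real implementation would POST to the vendor and receive a secure token,
  // keeping the raw number off the merchant's servers (PCI compliance).
  return cardNumber.replace(/\D/g, "").length >= 12
    ? { ok: true, token: "tok_demo" }
    : { ok: false, message: "Card number looks incomplete." };
}

// Validate in the background; on failure, show the message inline so the
// user's input stays in the field and the page never reloads.
async function validateCardInBackground(cardNumber: string): Promise<string> {
  const result = await tokenizeCard(cardNumber);
  return result.ok
    ? `charge-with:${result.token}`      // proceed using the secure token
    : `inline-error:${result.message}`;  // render next to the card field
}
```

Because the error is rendered inline from the asynchronous response, the user can correct just the offending digits instead of retyping the whole number.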
In the end, retaining the credit card data that users have painstakingly populated leads to more sales and fewer frustrated users at the finish line.
Yet 34% of sites in our e-commerce UX benchmark don’t retain data in the credit card fields when validation errors occur, resulting in many unnecessary abandonments.
This article presents the research findings from just 1 of the 650+ UX guidelines in Baymard Premium – get full access to learn how to create a “State of the Art” e-commerce user experience.
If you want to know how your desktop site, mobile site, or app performs and compares, then learn more about getting Baymard to conduct a UX Audit of your site or app.
At Baymard we’ve just released a new UX benchmark of 5 Electronics sites in our Electronics & Office benchmark collection.
This follows from our large-scale user testing research on Electronics & Office sites and adds to our existing e-commerce UX benchmark.
The 5 Electronics sites from our Electronics & Office benchmark have been manually reassessed across 600+ research-based UX parameters relevant to Electronics & Office sites, resulting in 3,000+ weighted UX performance scores and 2,500+ worst and best practice examples.
For the benchmark, we rated 5 Electronics sites: B&H Photo, Crutchfield, Microsoft, HP, and Apple.
You can explore the 5 new Electronics UX case studies using the below links:
decent: Electronics & Office, 77 page designs (desktop, mobile, app)
decent: Electronics & Office, 50 page designs (desktop, mobile)
poor: Electronics & Office, 55 page designs (desktop, mobile)
decent: Electronics & Office, 49 page designs (desktop, mobile)
mediocre: Electronics & Office, 55 page designs (desktop, mobile)
Want to know how your site performs? Get Premium access to review your own site or have it audited by Baymard researchers.
Each of the 5 Electronics sites’ 3,000+ UX performance scores, along with the scores for the 12 other Electronics & Office sites in the benchmark, are summarized in the interactive scatterplot below — showing you how they perform collectively and individually:
{{ scatterplot-graph: size=big + habitat=public + base-sites=collection:electronics-and-office + view-structure-id=gemini-TKOZznF3fHMamenWiAQT }}
A publicly available overview of the research and benchmark can be found on our Electronics & Office research overview page.
Getting access: all 3,000+ UX performance scores, 2,500+ best practice examples, and the UX insights from researching the Electronics & Office industry are available immediately and in full within Baymard Premium. (If you already have an account, open the Electronics & Office study.)
If you want to know how your Electronics or Office desktop site, mobile site, or app performs and compares, then learn more about getting Baymard to conduct an Electronics & Office UX Audit of your site or app.
Key Takeaways
Despite the ubiquity of filters in e-commerce, Baymard’s large-scale UX testing of Product Lists & Filtering reveals that the underlying filtering logic at some e-commerce sites misaligns with how users expect filters to work.
Multiple rounds of testing have shown that when users can’t select multiple options for a filter type they have to redo the filtering process for each separate filter option they are interested in — a process rendered even more difficult and tedious on mobile.
To enable users to easily apply all the filter options they want so they can get a product list composed of only items that satisfy their requirements, sites should allow users to combine multiple filtering options within the same filter type.
However, our e-commerce UX benchmark shows that a surprising 15% of sites don’t let users select multiple filter options — causing some users to miss out on suitable products as a result.
In fact, our usability testing shows that filters are sometimes wrongly implemented as mutually exclusive — meaning users can only select one filter value (e.g., “Blue”) at a time for a given filter type (e.g., “Color”), thus making it difficult for users to narrow product lists in order to focus on items they actually want to purchase.
This article will discuss our latest Premium research findings on how users select filters and the underlying required filtering logic, including:
How not allowing users to combine filter values makes it nearly impossible to use filters to narrow product lists
To understand how negative an impact mutually exclusive filter values can have on the end user, it’s worth considering the process a user must go through when trying to use a filtering implementation that doesn’t allow filter options to be combined.
“I wouldn’t mind paying up to ‘$100,’ but if I select that option I don’t get to see any of the ‘$50–$74’ training shoes…In fact, I would like to tick these 3 price options so I don’t miss out on any good deals.”
When selecting filter options, users frequently need to choose more than one option from each filter type — for example, users browsing a product list of jeans may want to see both “Blue” and “Black” jeans.
Indeed, during testing, it was commonplace for participants to try to select more than one filter option in a single filter type.
When they were unable to do so, participants couldn’t establish an overview of all the products matching their unique set of requirements and some had to abandon their efforts to find a suitable product.
So when users cannot select more than one filter option in the same filter type it is both unexpected and problematic — and sites that don’t allow multiple selections are very much the exception.
At sites where the filtering options within a filter type cannot be combined, users who are interested in, for instance, either “Blue” or “Black” jeans must go through the following process to access a set of suitable products:
Apply the first filtering option (“Blue”)
Look through the products in the filtered list
Memorize the interesting products for “Jeans: Blue”
Deselect the “Blue” filtering option and wait for the page to reload to see the prior product list without any filters applied
Apply the second filtering option (“Black”) at the now unfiltered product list
Look through the products in this new filtered list
Memorize the interesting products for “Jeans: Black”
Deselect the “Black” filtering option and again wait for the page to reload to see the prior product list without any filters applied
Finally, users must now — from memory — locate all the wanted options in the now unfiltered product list and compare them
Users will have to go through these multiple steps just to see the results across 2 filtering options if they are mutually exclusive.
And as more filters are needed, the process gets both increasingly tedious and increasingly impractical because the user has to memorize more and more products across the product list — until they can’t realistically remember all of the options anymore (during testing it was not uncommon for test participants to apply 5 or 6 filters, with some applying up to 10).
When filter options cannot be combined, users are effectively prevented from seeing all relevant products matching their needs.
In fact, during testing, when filter options were mutually exclusive, participants were far less likely to use filters, or to use them effectively — and therefore were less likely to find suitable products.
On mobile, not being able to select multiple filter options within one filter type makes it even more difficult to apply more than one filter.
Indeed, memorizing products (as outlined in the 9 steps above) is harder on mobile, because users have to take the extra step of opening and looking at the filter interface each time they apply a filter option.
And if filter choices are autosubmitted and there is no overview of applied filters, mobile users will have to open the filter interface yet again to remove the last option they applied.
Each additional action needed to remove one filter and apply another makes it more likely that users will forget the details of the previous filtered product list.
All in all, applying multiple filters on mobile is practically unworkable for some product types if users can only select one option in each filter type.
To avoid these difficulties, allow users to combine multiple filter options within the same filter type.
The process of applying multiple filter options when they aren’t mutually exclusive is much simpler:
In the unfiltered product list or in the separate filter interface on mobile, apply the desired filtering options
Look through the products in the filtered list
And this straightforward process is applicable regardless of how many filtering options the user is interested in.
When filtering options can be combined within the same type — such as applying “Blue”, “Black”, and “Red” for a “Jackets” category, or applying 3 different brand filters in a product list of cosmetics — users can easily create a tailored product list.
Furthermore, allowing multiple filtering options of the same filter type to be applied also lets users remove irrelevant product variations by applying all filter options except those for the variations they’re not interested in.
Finally, note that, as a first step, it’s critical to ensure the categorization of the product catalog is appropriate — otherwise, users will struggle with filtering the product list, even if they can select multiple filter options from the same filter type.
It’s worth noting additional user expectations regarding the logic underlying filter types versus filtering options.
When values from multiple filter types (or filter “groups”) are applied, users expect the resultant products to match all selected values from the different filter types — for example, a user in an “Espresso Machine” category who applies the “Finish: Stainless steel” and “Brand: DeLonghi” filters would expect the results to show DeLonghi brand items in a stainless steel finish.
Moreover, if multiple filtering options are selected within the same filter type, users expect the resultant products to reflect any one of those options — for instance, if the user in the “Espresso Machine” category had also chosen “Brand: Keurig” (in addition to “Brand: DeLonghi”), they would expect to see stainless steel espresso machines from either DeLonghi or Keurig.
The logic is therefore that filter types should follow an “AND” logic when multiple filter types are selected, whereas the selected filtering options within any of those types should follow an “OR” logic.
Furthermore, “AND” logic for filtering types and “OR” logic for filtering options are well-established e-commerce conventions.
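As a sketch of this convention, the espresso machine example above can be expressed as a small filter function. The type names and sample data below are illustrative, not taken from any specific site:

```typescript
// Sketch of the conventional filter logic: "OR" within a filter type,
// "AND" across filter types. Data and names are illustrative.

type Product = Record<string, string>;
type Filters = Record<string, string[]>; // filter type -> selected options

function applyFilters(products: Product[], filters: Filters): Product[] {
  return products.filter((product) =>
    // "AND": the product must satisfy every filter type with selections
    Object.entries(filters).every(
      ([type, options]) =>
        // "OR": within a type, matching any one selected option is enough
        options.length === 0 || options.includes(product[type])
    )
  );
}

const machines: Product[] = [
  { brand: "DeLonghi", finish: "Stainless steel" },
  { brand: "Keurig", finish: "Stainless steel" },
  { brand: "DeLonghi", finish: "Black" },
];

// "Finish: Stainless steel" AND ("Brand: DeLonghi" OR "Brand: Keurig")
const results = applyFilters(machines, {
  finish: ["Stainless steel"],
  brand: ["DeLonghi", "Keurig"],
});
// results holds the two stainless steel machines, from either brand
```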
Finally, to communicate to users the logic that filter options are inclusive, style filter values as checkboxes to visually indicate that multiple values can be selected.
As our research has shown, browsing e-commerce product lists can be challenging for users.
Thus it’s important to reduce any friction so that users can easily establish an overview of suitable products to explore; in particular, by using an “AND” logic for filter types and an “OR” logic for filter options.
Yet 15% of sites don’t allow users to combine 2 or more filter options in the product list — both forcing users to tediously repeat the filtering process in order to find suitable items, and breaking with users’ expectations of how filters should work in general.
This article presents the research findings from just a few of the 650+ UX guidelines in Baymard Premium – get full access to learn how to create a “State of the Art” e-commerce user experience.
If you want to know how your desktop site, mobile site, or app performs and compares, then learn more about getting Baymard to conduct a UX Audit of your site or app.
At Baymard we’ve just released a new UX benchmark of Online Grocery sites and apps, thereby expanding our existing Online Grocery benchmark with 5 new UX case studies.
This follows from our large-scale user testing research on Online Grocery sites and apps and adds to our existing e-commerce UX benchmark.
The 5 sites and apps added to our Online Grocery benchmark have been manually assessed across 700+ research-based UX parameters relevant to Online Grocery sites and apps, resulting in 3,500+ weighted UX performance scores and 2,900+ worst and best practice examples.
For the benchmark, we rated 5 new Online Grocery sites and apps (bringing our total for the Online Grocery benchmark to 10 sites and 5 apps): Safeway, Aldi, Kroger, HEB, and Morrisons.
You can explore the 5 new Online Grocery UX case studies using the below links:
(Case study links: 5 Online Grocery case studies, 4 rated “poor” and 1 rated “broken”, each covering 66–86 page designs across desktop, mobile, and app.)
Want to know how your site performs? Get Premium access to review your own site or have it audited by Baymard researchers.
Each of the 5 new Online Grocery sites’ and apps’ 3,500+ UX performance scores, along with the scores for the 5 other Online Grocery sites in the benchmark, are summarized in the interactive scatterplot below — showing you how they perform collectively and individually:
{{ scatterplot-graph: size=big + habitat=public + base-sites=collection:online-grocery + view-structure-id=gemini-NnB0Gt4SpH2POUOnlkle }}
A publicly available overview of the research and benchmark can be found on our Online Grocery research overview page.
Getting access: all 3,500+ UX performance scores, 2,900+ best practice examples, and the UX insights from researching the Online Grocery industry are available immediately and in full within Baymard Premium. (If you already have an account, open the Online Grocery study.)
If you want to know how your Online Grocery website or app performs and compares, then learn more about getting Baymard to conduct an Online Grocery UX Audit of your site or app.
Key Takeaways
Receiving unexpected error messages can be a jarring and frustrating experience.
Across multiple rounds of Baymard’s large-scale testing, these often contributed to form or checkout abandonment.
While error message styling and content can make it easier for users to locate and remedy errors, being stopped by an unexpected error message only after completing and submitting the form causes significant delay and adds friction to the form-submission process.
Effectively, receiving an error message on form submission means that sites have already missed an opportunity to prevent the error from occurring.
However, our Premium research findings indicate that validating users’ inputs inline — as they’re filling out a form field — can largely resolve this issue.
Yet 32% of sites in our e-commerce UX benchmark fail to provide any field validation at all.
In this article we’ll discuss the following:
Participants in testing who encountered an error message upon form submission were forced to come to a complete stop to find and resolve the error.
This typically led to significant delays and occasionally caused abandonment when the error was too difficult to locate or it was unclear how to resolve it.
Compounding the issue, many forms reset upon submission, removing user entries and selections.
Therefore, users are forced to resolve not only the error at hand but also to duplicate previous work.
In reality, error messages disrupt the natural process flow, surprising users who expect to move on to the next step after hitting the “Submit” button.
In testing, we’ve repeatedly observed that a better solution to error messaging that consistently improved the participants’ error-recovery experience was live inline validation.
Live inline validation is where the validity of the user’s input is checked immediately as the user types in the full value or leaves the field.
In practice, inline validation introduces several benefits over traditional error messages presented upon form submission.
First, users are informed of the error immediately and, as a result, are far less likely to have to come to a full stop to figure out where the error is, which significantly simplifies locating errors.
Inline validation also efficiently draws attention to any required fields that users skipped while tabbing through the form, preventing any associated error messages.
Next, because users are alerted to input issues immediately after typing, the amount of time needed to correct the error decreases significantly, as the input and its context are still fresh in the user’s mind.
On a more traditional after-the-fact error page, the incorrect input will have to be relearned, as it can be several fields and minutes ago that the user read the label for the incorrect field and typed the incorrect input value.
Also of note, inline validation does not entirely replace the need for traditional error pages and designs, since users may still attempt to submit the form even with inline validation errors present.
To ensure inline validation performs as well as possible for users, it’s key to implement the following:
It’s important to avoid taking validation too far by providing critical feedback before users have begun entering information.
While the point of live inline validation is to alert users to incorrect inputs early on, overly aggressive premature validation harms the overall experience. Indeed, in testing, participants were often frustrated by overzealous inline validation that flagged a mistake before they even had a chance to type the input correctly.
As one participant, frustrated by an immediate validation error when he moved focus to an empty field for the first time, put it: “Why are you telling me my email address is wrong, I haven’t had a chance to fill it all out yet!”
Moreover, when an error message appeared, we observed that other participants’ typing was disrupted as they stopped to read and interpret the error — leading some to conclude, incorrectly, that their perfectly valid input was wrong.
To keep live inline validation from becoming a distraction and annoyance, it’s key to fine-tune the logic for when the field is checked for errors.
When users enter a new field, without any existing errors, and start typing, the validity of the input should not be checked before the user has had a chance to fully type a correct input.
Depending on the type of input, this means that the validity of each field input should be checked when the user leaves the field — for example, using an onblur event.
In addition, for some field types, sites can also check the validity once the input has reached the correct character length, such as for ZIP and postal codes, phone numbers, credit card numbers, card security code inputs, etc.
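These timing rules can be sketched as a small decision function for a field that has no error showing yet. The event names and the fixed ZIP code length below are illustrative assumptions, not a specific API:

```typescript
// Sketch of "don't validate too early": validate when the user leaves
// the field (blur), or early once the input reaches its known length.
// EXPECTED_LENGTH assumes a 5-digit US ZIP code field (illustrative).

const EXPECTED_LENGTH = 5;

function shouldValidate(eventType: "input" | "blur", value: string): boolean {
  if (eventType === "blur") return true; // user left the field (onblur)
  // While the user is still typing, wait until the input is "complete"
  return value.length >= EXPECTED_LENGTH;
}
```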
When an error has occurred and a live inline error message is shown to users, they will naturally try to correct it immediately.
However, some sites with inline validation do not remove the error message live, as soon as the user has resolved the issue, and this compounded problems during testing.
After resolving the issue flagged by the inline error message, participants often focused intently on the error message, expecting it to be removed as soon as the field contained the correct input, rather than leaving the field, which would then often remove the inline message.
In practice, this can cause serious issues, as users are likely to believe that their newly corrected, and now valid, input still contains an error.
This is particularly so for fields where validity is interdependent, such as email and password confirmation fields.
It’s therefore key that when an error is invoked using live inline validation, the logic for rechecking the validity of the incorrect field does not happen only as users leave the field.
This means that rechecking the validity of the incorrect field must not be based solely on an onblur or similar event.
Instead, the error message must live update on a keystroke level, disappearing the moment users enter a valid input.
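A minimal sketch of this keystroke-level rechecking, assuming a deliberately simple, hypothetical email validator; in a real form this would run on every input event once an error is showing:

```typescript
// Once an error is showing, recheck on every keystroke so the message
// disappears the moment the input becomes valid (not only on blur).
// The email regex is an illustrative, simplified validator.

const isValidEmail = (v: string) => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v);

function onKeystroke(value: string, currentError: string | null): string | null {
  // No error showing: leave validation timing to the blur/length logic
  if (currentError === null) return null;
  // Error showing: clear it the moment the value becomes valid
  return isValidEmail(value) ? null : currentError;
}
```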
Finally, if positive inline validation is used, even users without any incorrect input will benefit.
Positive inline validation removes some cognitive load from users since they don’t have to review and validate the form for errors before submitting it.
With positive inline validation, a site can assure users that their input is correct, and the user is able to more quickly move to the next step of the checkout flow.
During testing, sites that used positive inline validation, for example by adding a checkmark next to correctly entered values, lent a sense of accomplishment and progression to the whole typing experience.
Positive inline validation also helps risk-averse users trying to preempt validation errors and those users who would otherwise meticulously review the entire completed form before submitting it.
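Putting the pieces together, a field's visual state can be derived as a simple three-way value, with a positive "valid" state rendered, for example, as a checkmark. The state names here are illustrative, not any site's actual implementation:

```typescript
// Derive a field's visual state, including a positive "valid" state
// (e.g., rendered as a checkmark icon). Names are illustrative.

type FieldVisual = "neutral" | "error" | "valid";

function fieldVisual(
  value: string,
  error: string | null,
  touched: boolean // the user has left the field at least once
): FieldVisual {
  if (error !== null) return "error"; // show the inline error message
  if (touched && value.length > 0) return "valid"; // show a checkmark
  return "neutral"; // no feedback before the user has entered anything
}
```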
Our large-scale testing has shown that, even if forms are perfectly designed, users will invariably make errors.
Typing, especially on mobile devices, is challenging, and minor mistakes are easy to make.
Yet how the site supports — or doesn’t support — users matters more than the fact that an error occurred.
In particular, by alerting a user to an error immediately via inline validation the user has a much better chance of quickly correcting the input and moving forward with checkout.
On the other hand, only showing errors to users when they’ve tried to submit a form can be a recipe for abandonment.
Yet our e-commerce UX benchmark shows that 31% of sites fail to provide inline validation.
Finally, having inline validation is only the start.
To have it perform optimally it’s key that fields aren’t prematurely validated, that error messages are removed as soon as the input is corrected, and that “positive inline validation” indicates to users “everything’s okay!” as they’re moving through a form.
Implementing these 3 key details will help to ensure that users are able to swiftly move through a form, supported yet unimpeded by the inline validation feature.
This article presents the research findings from just 1 of the 650+ UX guidelines in Baymard Premium – get full access to learn how to create a “State of the Art” e-commerce user experience.
If you want to know how your desktop site, mobile site, or app performs and compares, then learn more about getting Baymard to conduct a UX Audit of your site or app.
With the end of 2023 nearing, here’s an overview of everything we’ve been working on, as well as what we have planned for 2024.
The 10 most popular UX articles for the year are:
In 2023 we’ve conducted and published 30,000+ hours of new UX research. The 4 most significant UX research studies added during 2023 were:
This research focuses on “Vitamins & Supplements” websites and included test sites from 11 multibrand Vitamins & Supplements sites: eVitamins, GNC, iHerb, Lucky Vitamin, PipingRock, Pure Formulas, Puritan’s Pride, Swanson Vitamins, Vitacost, Vitamin Shoppe, and Vitamin World.
See the findings from our Vitamins & Supplements research study or explore our Vitamins & Supplements UX articles:
This research focuses on “Travel Tours & Experience Booking” websites and included small- and medium-sized test sites offering a variety of tours and experiences:
See the findings from our Travel Tours & Experience Booking research study or explore our Travel Tours & Experience Booking UX articles:
This research focuses on the product finding and product browsing aspects of the e-commerce user experience and includes electronics and office test sites (Staples, HP, Office Depot, Best Buy, B&H Photo, and Newegg) and apparel test sites (GAP, H&M, AEO, Nordstrom, L.L. Bean, and Urban Outfitters).
See the findings from our Homepage & Category Navigation UX and Product Lists UX research studies or explore our articles.
Product finding articles published 2023 (listed by popularity):
This research focuses on the cart and checkout experience and includes the following test sites: Etsy, Theo Chocolates, CVS, Best Buy, Stance, Hayneedle, Apple, Everlane, The Container Store, Williams Sonoma, Overstock, Walgreens, Snowe Home, AEO, Wayfair, and Samsung.
See the findings from our Cart & Checkout research study or explore our articles.
New checkout articles published in 2023 (listed by popularity):
During 2023 we also launched our online self-paced UX training platform:
Learn more about Baymard’s UX training.
In 2023 we also conducted 14 new UX benchmarks, adding 30,000+ new best practice and worst practice examples illustrating how leading sites stack up against the 650+ UX guidelines available in Baymard Premium.
You can browse parts of this dataset for free in our public E-Commerce UX Benchmark and in the Page Design tool.
You can see the 14 new UX benchmarks in detail here:
In 2023 we welcomed 13 new colleagues to Baymard — and we have more hiring plans for 2024.
If you want to join Baymard in 2024, or know of someone who’d be a great fit, then sign up for our job email alert to get notified when Baymard is hiring (max 4 emails per year).
Looking ahead to the first half of 2024, we have 6 new UX research studies coming out, dedicated to:
In the first half of 2024 we’ll also release new UX industry benchmarks for:
You can see a full breakdown of all changes on our Roadmap & Changelog page.
We’re looking forward to 2024.
- Christian, Jamie, and the entire Baymard team
Getting access: our research findings are available in the 650+ UX guidelines in Baymard Premium – get full access to learn how to create a “State of the Art” e-commerce user experience.
If you want to know how your desktop site, mobile site, or app performs and compares, then learn more about having Baymard conduct a UX Audit of your site or app.