Car Curation

Exploring a new way to view car rentals

Client: Hotwire       Role: Lead UX

Background

Hotwire’s featured product is called “Hot Rate”. Hot Rate is a discounted, opaque product based on unsold inventory from vendors. In exchange for a great deal, we can’t reveal the vendor upfront, but we can give users a good idea of what to expect.

When users book, we reveal the agency at the end of checkout.

Problem

Over time, we have seen our Hot Rate car inventory shrink.

Supply

To offset this, Hotwire decided to diversify its options by adding more rate types as well as more tiers of vendors.

Insights

I first wanted to understand the differences between vendors, so I engaged with our supply team to better understand the market. Our Hot Rate product is generally supplied by “tier 1” vendors, which are generally nationally recognized brands.

Tiers of products

Our new inventory will add “tier 2” and “tier 3” vendors. These are generally more regional and, in some cases, locally based agencies. These smaller brands are generally cheaper but come with caveats, like being farther from the airport.

I next wanted a better understanding of the users. I began working with the analytics & research team to pull any data on our users, looking for common behaviors and patterns, and compiled what I found into an infographic for distribution. Our typical user booked last-minute trips, kept the duration under 3 days, and picked the best value among the smallest car sizes. This painted a picture that our users just wanted a quick, last-minute deal.

I began to question whether users even understood or wanted to see our diverse supply. I partnered with a researcher who helped me set up a survey to learn whether users understood or cared about the differences between agencies, especially between tier 1 and tier 2 vendors.

  • What is important to customers when renting a car?
  • What are customers’ expectations and preferences around payment terms?

The survey was sent to customers who had booked a rental with us in the past year.

The results confirmed the pattern that our users cared about price above all else. They had a low understanding of the differences between car rental agencies, but rated the brands they did recognize highly.

UX Brief

Objective: Create a new experience that can support our new supply of products and vendors.

Key Results: Purchase Rate (PR) & Gross Profits Per Transaction (GPPT)

Informing: Our users have a low understanding of the differences between vendors and tiers. They are not brand loyal, but they are always price conscious.

Assumption: Users would prefer a curated rental experience

Action: Create a curated results page for users highlighting the best options

Creating the UX

Teardown analysis

Before jumping into sketches and wireframes, I did a teardown of our current flow and noticed a few issues in our UX. The first was a disjointed decision-making process: users would pick their car size on the results page and then pick their vendor on the details page.

The second was a lack of transparency: users couldn’t determine the type of product or the vendor until the details page.

The third was difficulty seeing all vendors: we have a card-based carousel, which is not effective with a larger supply of vendors.

These issues helped me create guidelines our concepts should follow:

  • Ability to compare inventory without leaving the results page
  • Allow users to see the differences in inventory
  • Have the ability to browse all listings

Concepts

I presented the curation assumption to stakeholders and next wanted to validate the concept. Our approach was to run a lab where we could test out prototypes: our curated approach and a more standard one as a control.

Concept A would be our assumption: curation. This concept focuses on highlighting the best choices for users and allows them to be better educated on the differences between those choices.

Concept A: Curation

The choices I wanted to highlight would generally be our 3 rate types: Hot Rate, Prepaid, and Retail. This led to the idea of creating 3 cards for users to focus on. Price was the top trait users looked at on a card, so I gave the price more emphasis. Each card also needed an area to educate users so they could better understand the differences in rate type, a primary CTA that would let them select the option, and a “see all” link to view the rest of the inventory for that rate type, acting as a smart filter. I picked a horizontal layout because it gave the options more equal weight and kept them distinct from the “see all” feature below the main call to action.
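
To make that structure concrete, here is a minimal sketch of how the three curated cards could be modeled, assuming a flat list of offers; the types, field names, and helper below are hypothetical and only illustrate the pieces described above (rate type, emphasized price, education copy, primary CTA, and a “see all” smart filter):

```typescript
// Hypothetical model for a curated card; names are illustrative, not production code.
type RateType = "HOT_RATE" | "PREPAID" | "RETAIL";

interface Offer {
  rateType: RateType;
  price: number;
}

interface CuratedCard {
  rateType: RateType;       // which of the 3 rate types this card represents
  price: number;            // emphasized price of the best option for that rate type
  educationCopy: string;    // short explanation of the rate type
  primaryCtaLabel: string;  // selects this option directly
  seeAllFilter: RateType;   // "see all" acts as a smart filter on the full results
}

// Build one card per rate type from the cheapest matching offer.
function buildCuratedCards(
  offers: Offer[],
  copyByRateType: Record<RateType, string>
): CuratedCard[] {
  const rateTypes: RateType[] = ["HOT_RATE", "PREPAID", "RETAIL"];
  const cards: CuratedCard[] = [];
  for (const rateType of rateTypes) {
    const matching = offers.filter((o) => o.rateType === rateType);
    if (matching.length === 0) continue;
    const cheapest = matching.reduce((a, b) => (a.price <= b.price ? a : b));
    cards.push({
      rateType,
      price: cheapest.price,
      educationCopy: copyByRateType[rateType],
      primaryCtaLabel: "Select",
      seeAllFilter: rateType,
    });
  }
  return cards;
}
```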

Interaction pattern exploration

With the 3-card design, I looked at interaction patterns I could use to surface the information. I tested patterns like modals, expandable boxes, anchored states, and accordions, and gathered feedback from other product designers and stakeholders.

Final direction for initial concept

Concept B was a more standard booking experience: exploration. This is an unfiltered experience centered on sorting by cheapest first, where users have to research and plan on their own. Because the exploration concept is a more common experience on similar travel sites, I did a competitive analysis of results interactions.

Competitive analysis

For this concept we’d show a list of vendors sorted cheapest first. Users can re-sort the results by the header categories. I wanted users to be able to compare multiple products on the page, so we opted to let them expand multiple cards instead of using an accordion approach.
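
As a rough sketch of that interaction model (the types and helpers below are hypothetical, not our production code), the exploration concept boils down to a sortable listing array plus an expanded-card set that allows more than one card to be open at a time, unlike an accordion:

```typescript
// Hypothetical sketch of Concept B's interaction model: sortable results plus
// multi-expand state (a Set of ids) instead of accordion-style single expansion.
interface Listing {
  id: string;
  vendor: string;
  carClass: string;
  price: number;
}

type SortKey = "vendor" | "carClass" | "price";

// Default view: cheapest first; header clicks re-sort by other columns.
function sortListings(listings: Listing[], key: SortKey = "price"): Listing[] {
  return [...listings].sort((a, b) => {
    const av = a[key];
    const bv = b[key];
    return typeof av === "number" && typeof bv === "number"
      ? av - bv
      : String(av).localeCompare(String(bv));
  });
}

// Multiple cards can stay expanded at once so users can compare side by side.
function toggleExpanded(expanded: Set<string>, id: string): Set<string> {
  const next = new Set(expanded);
  if (next.has(id)) {
    next.delete(id);
  } else {
    next.add(id);
  }
  return next;
}
```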

Concept B: Exploration

Prototyping & Labs

I made higher-fidelity wireframes for the prototypes. With an interactive prototype, we could observe how users reacted to and comprehended the interaction patterns for each approach. I began working alongside a developer to create both prototypes.

Prototypes

With the researcher, we established a moderated concept-validation study. We would test the two concepts to see which made it easiest for participants to rent a car. The lab involved interviewing 5 candidates, with a moderator who sat with them. We also had a dedicated note taker who sat in an observation room with other key stakeholders.

We presented a car rental task with our prototypes, counterbalancing which one went first. With each version we wanted to see the following:

Curation: How do participants evaluate and compare options? Do they understand the different rate plans?

Exploration: How do participants evaluate and compare explorable options? Are they able to find a car, agency, and rate plan that meets their needs?

Results

Every user in the lab preferred the curated approach over the exploratory one. Participants understood the difference between the 3 categories and used them as rate-plan filters to narrow their options.

Implementation

Qualitatively we had the right signals on the direction we wanted to go, but we couldn’t just turn on the design instantly. I collaborated with the project manager, analytics, and the lead developer to get better direction on how we could break up the project. There were concerns about the drawer interaction pushing certain elements lower on the page; to test that interaction, we would also have to test the placement of the elements being pushed down. A compromise was made to change the interaction into a more digestible one.

Revised work

Finished product

After working closely with developers, we finished development and got our test ready in the pipeline. We ran it as a VT against our current experience as the control.

Results & post mortem

After 2 months we got our results. We saw purchase rates increase, but we saw GPPT go down. This meant our users were booking more, but not booking the most profitable product: more users booked the retail postpaid experience over our Hot Rate prepaid experience.

Things that went wrong

Inventory issues: The inventory issue we ran into was simply that the diverse inventory we had planned for had not arrived yet. We ended up pitting Hot Rate against 2 retail products in our tests.

Suppression logic: The one area we didn’t foresee was having to turn off our suppression logic. The suppression logic dictates how we organize vendors on our current results page, generally cheapest first. Our curation logic instead picked the cheapest option in multiple categories. Trying to adjust the current suppression logic caused a lot of issues, and we had to run with our default logic.
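
To illustrate the conflict (a hypothetical sketch, not our actual suppression code), the default logic ranks one flat list cheapest first, while the curation logic needs the cheapest offer within each rate type, so the two disagree about which offers should surface:

```typescript
// Hypothetical illustration of the two ranking strategies; not the real suppression logic.
interface Offer {
  vendor: string;
  rateType: "HOT_RATE" | "PREPAID" | "RETAIL";
  price: number;
}

// Default suppression-style ordering: one flat list, cheapest first.
function cheapestFirst(offers: Offer[]): Offer[] {
  return [...offers].sort((a, b) => a.price - b.price);
}

// Curation ordering: cheapest offer within each rate type, so every category surfaces.
function cheapestPerRateType(offers: Offer[]): Offer[] {
  const best = new Map<Offer["rateType"], Offer>();
  for (const offer of offers) {
    const current = best.get(offer.rateType);
    if (!current || offer.price < current.price) {
      best.set(offer.rateType, offer);
    }
  }
  return [...best.values()];
}
```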

Things we should’ve done

Overall, we should have run more risk-assessment tests earlier on, or in conjunction with research.

Run a separate test just bringing the vendor selection process from the details page to the results page. This was a test we could have run at the beginning, utilizing the expandable card interaction.

Isolate and test user behavior towards tier 1 vs tier 2 vendors. Even after our research, we couldn’t make appropriate assumptions about how users would react to lower-quality vendors. We should have planned a test that simply introduced a tier 2 Hot Rate and observed how it performed.

Things we're doing

Improving the retail cancellation rate. Part of the reason our GPPT went down was that more users booked retail. Retail currently has a 50% no-show rate, which is an industry-wide problem. We are addressing this by notifying users better and requiring them to input a credit card number.

Improving education and reinforcement for Hot Rates. We are still finding that users do not understand the benefits and potential savings of Hot Rate. We are working on branding and education stories that can give users better assurance and a sense of safety with our product.

Testing tier 1 vs tier 2 with the curation framework. We are in the process of adding a tier 2 Hot Rate as an extra vendor, reusing our curation code to see which products users prefer.