
Record a Complete End-to-End Scenario in UI Automation Testing

What is Scenario Recording?

Recording in UI Automation Testing is the process of capturing real user interactions with your application. The recording agent observes every click, input, navigation, and assertion you make, automatically generating test cases that can be replayed to validate your application.

Think of it as having a smart assistant watching over your shoulder, documenting every action you take so it can repeat those exact steps later.


Before You Start Recording

Prerequisites Checklist

Before clicking the record button, ensure you have:

  • Scenario created and configured – Name, description, and settings are ready
  • Test data prepared – All usernames, passwords, and input values ready
  • Application accessible – Can access the environment you’re testing
  • Recording agent installed – Browser extension or agent is active
  • Browser prepared – Cache cleared, unnecessary tabs closed
  • Stable connection – Good internet connectivity
  • Plan documented – You know the flow you’ll record

Mental Preparation

Know Your Path:

  • Start point: Where will you begin?
  • Each action: What will you do at each step?
  • Assertions: Where will you validate behavior?
  • End point: Where should the journey finish?

Have a Backup Plan:

  • What if something goes wrong during recording?
  • Can you pause and resume?
  • Should you re-record or edit afterward?

Step-by-Step Recording Process

Step 1: Launch the Recording Agent

  1. Navigate to your created scenario in E2E Test Automation
  2. Look for the “Start Recording” or “Record” button
  3. Click the button to launch the recording agent

What Happens:

  • The recording agent activates
  • A recording indicator appears (usually a red dot or recording badge)
  • The agent begins monitoring your browser interactions
  • A control panel may appear with recording controls

Recording Controls:

  • Pause: Temporarily stop recording (useful for setup between steps)
  • Resume: Continue recording after a pause
  • Stop: End the recording session
  • Add Assertion: Manually add a validation point
  • Add Comment: Document what you’re about to do

Step 2: Navigate to Your Starting Point

  1. Enter the starting URL in your browser or click through to reach your starting point
  2. Wait for the page to fully load – Don’t rush, let everything render
  3. Verify you’re at the correct starting location

Tips:

  • Use the exact URL you want users to start from
  • Ensure the page is in its default state
  • Clear any previous session data if needed
  • The agent captures this initial navigation

Example Starting Points:

Login page (https://example.com/login)
Home page (https://example.com)
Product listing page (https://example.com/products)

Step 3: Perform Actions Naturally

Now execute your planned workflow, acting exactly as a real user would:

Clicking Elements

What to Click:

  • Buttons (“Submit”, “Next”, “Add to Cart”)
  • Links (“Learn More”, “View Details”)
  • Menu items (navigation, dropdowns)
  • Checkboxes and radio buttons
  • Icons and images

How to Click:

  • Click naturally—the agent captures the element and creates a locator
  • Wait for the page to respond before the next action
  • Verify each click had the expected effect

What the Agent Captures:

  • Which element was clicked
  • Multiple locator strategies for that element
  • Timestamp of the click
  • Page state before and after
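
The exact storage format varies by tool, but conceptually a captured click is a record combining the element's locators with context about when and where it happened. An illustrative sketch (the field names are assumptions, not any specific tool's schema):

```python
# Illustrative sketch of one captured click action.
# Field names are hypothetical -- real recording agents use their own schemas.
from datetime import datetime, timezone

captured_click = {
    "type": "click",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    # Several locator strategies for the same element, ordered by preference,
    # so playback can fall back if the primary locator stops matching.
    "locators": [
        {"strategy": "test-id", "value": "add-to-cart"},
        {"strategy": "css", "value": "button.add-to-cart"},
        {"strategy": "xpath", "value": "//button[text()='Add to Cart']"},
    ],
    "page_url_before": "https://example.com/products/42",
    "page_url_after": "https://example.com/products/42?added=true",
}

# Playback tries locators in order until one resolves.
primary = captured_click["locators"][0]
print(primary["strategy"], primary["value"])
```

Storing multiple locator strategies per action is what later makes the scenario resilient to small UI changes.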

Example Actions:

Click "Sign Up" button
Click "Products" in navigation menu
Click first product in search results
Click "Add to Cart" button

Entering Text

Where to Enter Text:

  • Text input fields (name, email, address)
  • Password fields
  • Text areas (comments, descriptions)
  • Search boxes

How to Enter Text:

  • Click in the field to focus it
  • Type the text naturally (you can type quickly—the agent captures the final value)
  • Tab to the next field or click elsewhere

What the Agent Captures:

  • The input field element
  • The text you entered
  • Field focus and blur events
  • Any validation that occurs

Example Actions:

Enter "John" in First Name field
Enter "Doe" in Last Name field
Enter "john.doe@example.com" in Email field
Enter "TestPass123!" in Password field

Best Practices:

  • Use realistic test data
  • Don’t use production or sensitive data
  • For unique values (emails), use timestamps: test+{timestamp}@example.com
  • Test both valid and edge-case inputs
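
The timestamp-based email pattern above can be generated with a few lines of Python (a minimal sketch; the base name and domain are placeholders):

```python
import time

def unique_test_email(base: str = "test", domain: str = "example.com") -> str:
    """Return an email unique per run, e.g. test+1712345678@example.com."""
    return f"{base}+{int(time.time())}@{domain}"

email = unique_test_email()
print(email)
```

Using plus-addressing keeps all generated accounts routed to one real mailbox while guaranteeing each recorded run registers a fresh user.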

Selecting from Dropdowns

How to Select:

  • Click the dropdown to open it
  • Click the option you want to select
  • The agent captures both actions

What the Agent Captures:

  • The dropdown element
  • The option you selected
  • The selected value

Example Actions:

Select "United States" from Country dropdown
Select "Credit Card" from Payment Method dropdown
Select "Standard Shipping" from Shipping Options

Checking Boxes and Radio Buttons

How to Interact:

  • Click the checkbox to check/uncheck it
  • Click the radio button to select it

What the Agent Captures:

  • The checkbox/radio element
  • Whether it’s checked or unchecked
  • The change in state

Example Actions:

Check "I agree to terms and conditions"
Check "Subscribe to newsletter"
Select "Male" radio button

Navigating Between Pages

How Navigation Happens:

  • Click links
  • Click navigation menu items
  • Submit forms
  • Click back/forward buttons (though generally avoid this in tests)

What the Agent Captures:

  • The navigation action
  • The destination URL
  • Page load events
  • New page context

Example Actions:

Click "Next" button to go to page 2
Click "Checkout" to proceed to checkout page
Click "View Cart" to see shopping cart

Uploading Files

How to Upload:

  • Click the file input or upload button
  • Select file from the file dialog
  • Wait for upload to complete

What the Agent Captures:

  • The file input element
  • The file path or file name
  • Upload completion

Example Actions:

Upload profile picture from test-files/avatar.jpg
Upload document from test-data/resume.pdf

Best Practices:

  • Use test files stored in a known location
  • Use small files for faster test execution
  • Verify upload success with an assertion
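
To keep fixtures small and in a known location, a setup step can create them on the fly rather than committing binaries to the repository. A sketch (file name and size are placeholders; a real image upload may need valid image bytes, not zero-fill):

```python
from pathlib import Path
import tempfile

def make_upload_fixture(name: str = "avatar.jpg", size_bytes: int = 1024) -> Path:
    """Create a small throwaway file to use as an upload fixture."""
    fixture_dir = Path(tempfile.mkdtemp(prefix="test-files-"))
    path = fixture_dir / name
    # Dummy content keeps the file tiny and the upload fast.
    path.write_bytes(b"\x00" * size_bytes)
    return path

fixture = make_upload_fixture()
print(fixture.name, fixture.stat().st_size)
```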

Scrolling

When to Scroll:

  • To bring elements into view
  • To trigger lazy-loading content
  • To test infinite scroll functionality

How to Scroll:

  • Scroll naturally with mouse wheel or scrollbar
  • Some agents auto-scroll to elements, others capture explicit scrolls

What the Agent Captures:

  • Scroll position
  • Scroll direction
  • Elements that come into view

Example Actions:

Scroll down to footer
Scroll to "Reviews" section
Scroll to bottom of page

Step 4: Add Assertions During Recording

Assertions are critical—they validate that your application behaves correctly. Add assertions at key points during recording.

When to Add Assertions

After Critical Actions:

  • After login → Verify user is logged in
  • After form submission → Verify success message
  • After adding to cart → Verify cart count increased
  • After navigation → Verify correct page loaded

Before Dependent Actions:

  • Before clicking “Checkout” → Verify items are in cart
  • Before submitting form → Verify all required fields are filled
  • Before proceeding → Verify previous step completed

At Validation Points:

  • After data entry → Verify data appears correctly
  • After calculations → Verify computed values are correct
  • After state changes → Verify UI reflects the change

How to Add Assertions

During Recording:

  1. Pause the recording (if needed to think clearly)
  2. Click “Add Assertion” button in the recording controls
  3. Select what to verify:
    • Element visibility
    • Text content
    • Element state (enabled/disabled)
    • Attribute values
    • URL
  4. Click the element you want to verify (if applicable)
  5. Configure the assertion:
    • What condition should hold?
    • What value should be checked?
    • Should the condition match (positive assertion) or not match (negative assertion)?
  6. Save the assertion
  7. Resume recording

Common Assertion Types:

Element Visibility:

Verify "Welcome, John!" message is visible
Verify "Error: Invalid email" appears
Verify "Loading..." spinner is not visible

Text Content:

Verify page title contains "Dashboard"
Verify heading text equals "Shopping Cart"
Verify label says "Total: $99.99"

Element State:

Verify "Submit" button is enabled
Verify "Email" field is editable
Verify "Remember Me" checkbox is checked

URL Validation:

Verify URL is "https://example.com/dashboard"
Verify URL contains "/success"
Verify URL does not contain "/error"

Element Attributes:

Verify button has class "active"
Verify link href is "https://example.com/help"
Verify input value is "john.doe@example.com"

Count/Quantity:

Verify cart badge shows "3"
Verify 10 products are displayed
Verify search returns at least 1 result
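
Underneath, these assertion types reduce to a few comparison styles: exact equality, partial match, and count thresholds. A minimal plain-Python sketch of the distinction (not any specific tool's API):

```python
def assert_equals(actual: str, expected: str) -> None:
    """Exact match: best when the value is fully predictable."""
    assert actual == expected, f"expected {expected!r}, got {actual!r}"

def assert_contains(actual: str, fragment: str) -> None:
    """Partial match: robust against dynamic text like 'Welcome, John!'."""
    assert fragment in actual, f"{fragment!r} not found in {actual!r}"

def assert_count_at_least(items: list, minimum: int) -> None:
    """Threshold check: useful when the exact count varies."""
    assert len(items) >= minimum, f"expected >= {minimum} items, got {len(items)}"

assert_equals("Shopping Cart", "Shopping Cart")
assert_contains("Welcome, John!", "Welcome")
assert_count_at_least(["result-1"], 1)
print("all assertions passed")
```

Choosing the loosest comparison that still catches real failures is what keeps assertions from becoming flaky.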

Assertion Best Practices

Do:

  • Add assertions after every critical action
  • Verify both success and error conditions
  • Check multiple aspects when important (text, visibility, state)
  • Use specific, meaningful assertion messages
  • Verify data persisted after saves

Don’t:

  • Skip assertions thinking “it looks right”
  • Add too many assertions on trivial things
  • Use vague assertions like “something appears”
  • Forget to assert navigation occurred
  • Only check happy paths

Step 5: Use Reusable Components

If you’ve created reusable components (like login), insert them during recording instead of recording those actions again.

When to Use Components

Common Scenarios:

  • Login at the beginning of scenarios
  • Navigation to specific sections
  • Common form fills
  • Logout at the end
  • Setup/teardown actions

How to Insert Components

  1. Pause the recording
  2. Open the components menu or panel
  3. Select the component you want to insert (e.g., “User Login”)
  4. Click “Insert” or drag it into the flow
  5. The component actions are added to your scenario
  6. Resume recording from where the component left off

Example:

Start Recording
[INSERT "User Login" Component]
  - Navigate to /login
  - Enter username
  - Enter password
  - Click login
  - Verify logged in
Continue recording:
  Search for product
  Add product to cart
  ...

Benefits:

  • Save time—don’t re-record common flows
  • Ensure consistency—same login process every time
  • Easier maintenance—update component once, affects all scenarios
  • Focus on unique parts of the scenario

Step 6: Handle Dynamic Content

Your application might have dynamic elements that change based on data, time, or user actions. Handle these carefully during recording.

Dynamic Text

Examples:

  • Timestamps (“Last login: 2 hours ago”)
  • User-specific data (“Welcome, John!”)
  • Generated IDs (“Order #12345”)
  • Counts (“3 items in cart”)

How to Handle:

  • For user-specific data: Use assertions that check for partial matches
    • Instead of: Verify text equals "Welcome, John!"
    • Use: Verify text contains "Welcome"
  • For IDs or unique values: Capture and store them for later use
    • Capture order number from confirmation page
    • Use captured order number in next step
  • For counts: Assert the value if it’s predictable
    • Verify cart shows "2" items (if you added 2)

Loading States

Examples:

  • Spinners
  • “Loading…” messages
  • Skeleton screens
  • Progress bars

How to Handle:

  • Add waits for elements to appear after loading completes
  • Don’t record actions during loading states
  • Verify loading indicators disappear before proceeding
  • Use implicit waits or explicit wait commands

Example:

Click "Submit" button
Wait for loading spinner to disappear
Verify "Success!" message appears
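
"Wait for the spinner to disappear" is typically implemented as a poll-until-timeout loop. A generic sketch, where the `condition` callable stands in for whatever element check your tool provides:

```python
import time

def wait_until(condition, timeout: float = 10.0, interval: float = 0.2) -> bool:
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulated spinner that "disappears" after a few polls.
state = {"polls": 0}
def spinner_gone() -> bool:
    state["polls"] += 1
    return state["polls"] >= 3

assert wait_until(spinner_gone, timeout=5.0, interval=0.01)
print("spinner gone after", state["polls"], "polls")
```

Explicit polling like this is more reliable than fixed sleeps: it proceeds as soon as the condition holds and fails fast with a clear timeout when it never does.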

Conditional Elements

Examples:

  • Elements that appear only under certain conditions
  • Different content for different user roles
  • A/B test variations
  • Responsive design elements (desktop vs mobile)

How to Handle:

  • Record the path relevant to your test scenario
  • Document conditions in scenario description
  • Create separate scenarios for different conditions if needed
  • Use conditional logic or branching (if supported)

Step 7: Complete the User Journey

Continue recording until you reach the end of your planned workflow.

Final Actions:

  • Complete all steps in your plan
  • Reach the expected end state
  • Add final assertions to verify success
  • Verify end-state elements are present

Example End States:

Order Confirmation Page:
  - Verify "Order Confirmed" heading
  - Verify order number appears
  - Verify "Thank you" message
  - Verify email confirmation note

User Dashboard:
  - Verify user name in header
  - Verify dashboard widgets load
  - Verify navigation menu is accessible

Success Message:
  - Verify "Success!" message appears
  - Verify no error messages
  - Verify return to home link is present

Don’t Forget:

  • Log out if that’s part of the flow
  • Return to a consistent end state
  • Clean up test data if needed (though usually done separately)

Step 8: Stop Recording

Once you’ve completed the entire workflow:

  1. Review what you’ve recorded (mentally or by pausing to check)
  2. Ensure you reached the end state
  3. Verify all assertions were added
  4. Click “Stop Recording” in the recording controls

What Happens:

  • The recording agent stops capturing actions
  • All captured actions are saved to the scenario
  • Locators are generated for all elements
  • The scenario is ready for review

Post-Recording Actions:

  • The scenario appears in your test suite
  • You can now review, edit, and test it
  • All actions and assertions are saved
  • Test cases are organized in sequence

After Recording

Step 9: Review the Recorded Scenario

Immediately after recording, review what was captured:

In Manage Actions View:

  1. Check all actions are present:
    • Verify no steps were skipped
    • Ensure actions are in correct order
    • Look for any duplicate or unnecessary actions
  2. Review assertions:
    • Confirm all assertions were captured
    • Check assertion values are correct
    • Add any missing assertions
  3. Examine locators:
    • Each action should have multiple locator options
    • Verify locators are specific enough
    • Check for overly fragile locators
  4. Review test data:
    • Ensure sensitive data wasn’t captured
    • Verify test data is appropriate
    • Replace hardcoded values with variables if needed

In Test Case Flow View:

  1. Visualize the flow:
    • See how screens connect
    • Identify any unexpected paths
    • Verify logical flow from start to finish
  2. Check for gaps:
    • Are there missing connections?
    • Do all branches make sense?
    • Is the flow complete?

In Preview View:

  1. Watch the playback:
    • See exactly what was recorded
    • Verify it matches your intentions
    • Look for any issues during recording

Step 10: Test the Scenario

Before considering the scenario complete, run it to ensure it works:

  1. Click “Run” or “Execute” to run the scenario
  2. Watch the execution:
    • Actions should replay smoothly
    • Assertions should pass
    • The flow should complete successfully
  3. Review the results:
    • Check pass/fail status
    • Review any failures
    • Look at screenshots and logs

If the scenario fails:

  • Identify which step failed
  • Check why it failed (wrong locator, timing issue, assertion incorrect)
  • Edit and refine the scenario
  • Run again to verify fixes

If the scenario passes:

  • Great! The scenario is working
  • Document any known issues or flaky steps
  • Consider running it multiple times to check stability

Best Practices for Recording

Recording Environment

Do:

  • Record in a stable, controlled environment
  • Use a clean browser profile
  • Clear cache and cookies before recording
  • Close unnecessary browser tabs and applications
  • Ensure good internet connectivity

Don’t:

  • Record while other applications are running that might interfere
  • Record on slow connections (actions may time out)
  • Record with browser extensions that modify pages
  • Rush through the recording

Recording Technique

Do:

  • Perform actions deliberately and clearly
  • Wait for pages to fully load
  • Pause between steps if needed
  • Add comments or notes during recording
  • Think like a real user

Don’t:

  • Click too quickly (let actions complete)
  • Perform actions outside the application window
  • Use keyboard shortcuts that might not be captured
  • Navigate using browser back/forward buttons (use app navigation)
  • Record debugging or exploration—know your path first

Assertions

Do:

  • Add assertions frequently (after each critical action)
  • Verify both positive outcomes (success) and negative indicators (no errors)
  • Use specific, meaningful assertions
  • Check multiple aspects when important
  • Document what each assertion validates

Don’t:

  • Skip assertions thinking you’ll add them later
  • Only assert at the end of the scenario
  • Use vague or generic assertions
  • Over-assert trivial things
  • Forget to verify navigation occurred

Test Data

Do:

  • Use realistic but fake test data
  • Use unique identifiers (timestamps in emails)
  • Prepare test data before recording
  • Document test data requirements
  • Use data that won’t expire or change

Don’t:

  • Use production data or real personal information
  • Hardcode data that should be variable
  • Use data that’s already in the system (may cause conflicts)
  • Forget to document required test data
  • Use data that violates privacy or security policies

Scenario Length

Do:

  • Keep scenarios focused on one complete workflow
  • Break very long flows into multiple scenarios
  • Aim for 10-30 actions per scenario (guideline, not rule)
  • Focus on business-meaningful workflows

Don’t:

  • Try to test everything in one scenario
  • Create scenarios that are too short to be meaningful
  • Mix multiple unrelated workflows
  • Record endlessly without a clear goal

Common Recording Challenges and Solutions

Challenge 1: Element Not Captured

Problem: You clicked an element but the agent didn’t capture it.

Solutions:

  • Click the element more deliberately
  • Ensure the element is fully visible and clickable
  • Wait for the page to fully load
  • Check if JavaScript needs to complete before the element is active
  • Manually add the action after recording

Challenge 2: Wrong Element Captured

Problem: The agent captured a different element than intended.

Solutions:

  • Be more specific with your click (click the center of the element)
  • Ensure you’re clicking the correct element
  • After recording, edit the action and update the locator
  • Use a more specific locator strategy

Challenge 3: Action Happens Too Fast

Problem: The next action happens before the previous one completes.

Solutions:

  • Add explicit waits between actions
  • Wait for loading indicators to disappear
  • After recording, add wait commands
  • Adjust timeout settings in scenario configuration

Challenge 4: Dynamic Content Not Handled

Problem: Content changes each time, causing test to fail.

Solutions:

  • Use partial matches instead of exact matches
  • Capture dynamic values and reuse them
  • Use more flexible locators
  • Add conditional logic if supported

Challenge 5: Pop-ups or Alerts

Problem: Pop-ups, modals, or alerts appear during recording.

Solutions:

  • Handle them as part of the flow (click OK, dismiss, etc.)
  • The agent should capture alert/modal interactions
  • Add assertions to verify they appeared/disappeared
  • After recording, add explicit alert handling if needed

Challenge 6: Scrolling Issues

Problem: Elements not visible, scrolling not captured.

Solutions:

  • Manually scroll to elements before clicking
  • After recording, add explicit scroll commands
  • Ensure elements are in viewport before interacting
  • Adjust viewport size if needed

Advanced Recording Techniques

Using Variables

Capture values during recording:

  • Extract text from elements (order numbers, IDs)
  • Store values in variables
  • Use variables in subsequent actions

Example:

Step 5: Click "Place Order"
Step 6: Capture order number from confirmation page → Store as "orderNumber"
Step 7: Enter {orderNumber} in search field
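
Capturing a value usually comes down to extracting a pattern from on-screen text and stashing it in a per-scenario variable store. A regex sketch (the confirmation text and the "Order #" pattern are illustrative assumptions):

```python
import re

def capture_order_number(confirmation_text: str) -> str:
    """Extract an order number like 'Order #12345' and return '12345'."""
    match = re.search(r"Order #(\d+)", confirmation_text)
    if match is None:
        raise ValueError("no order number found in confirmation text")
    return match.group(1)

variables = {}  # per-scenario variable store
variables["orderNumber"] = capture_order_number("Thank you! Order #12345 is confirmed.")
# A later step would substitute variables["orderNumber"] into the search field.
print(variables["orderNumber"])
```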

Conditional Actions

Record different paths based on conditions:

  • If element exists, do X; otherwise, do Y
  • Check for optional elements
  • Handle variations in application behavior

Example:

If "Cookie Banner" appears:
  - Click "Accept Cookies"
Continue with main flow
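
In replay terms, a conditional step is a lookup that tolerates absence: try to find the optional element and act only if it exists. A simulated sketch, where a plain dict stands in for a real page/element lookup:

```python
def find_optional(page: dict, element_id: str):
    """Return the element if present, else None (no error for optional elements)."""
    return page.get(element_id)

def dismiss_cookie_banner(page: dict) -> bool:
    """Click 'Accept Cookies' only when the banner exists; report what happened."""
    banner = find_optional(page, "cookie-banner")
    if banner is None:
        return False             # banner absent: continue with the main flow
    page.pop("cookie-banner")    # simulate clicking "Accept Cookies"
    return True

page_with_banner = {"cookie-banner": {"text": "We use cookies"}}
print(dismiss_cookie_banner(page_with_banner))   # banner present and dismissed
print(dismiss_cookie_banner({}))                 # nothing to dismiss
```

The key design point is that the optional lookup must not fail the test when the element is missing, unlike a normal locator that should fail loudly.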

Loops and Iterations

Repeat actions multiple times:

  • Add multiple items to cart
  • Fill repeating form sections
  • Process list items

Example:

Repeat 3 times:
  - Search for product
  - Add first result to cart
  - Return to search

Data-Driven Recording

Use external data sources:

  • Import test data from CSV or JSON
  • Record once, run with different data sets
  • Parameterize inputs

Example:

For each row in test-data.csv:
  - Enter row.firstName in First Name field
  - Enter row.lastName in Last Name field
  - Enter row.email in Email field
  - Click Submit
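
A data-driven run typically reads rows with a standard CSV reader and feeds each row through the same recorded steps. A self-contained sketch, with inline data standing in for the test-data.csv file and a stub function standing in for the replayed scenario:

```python
import csv
import io

# Inline data standing in for test-data.csv.
TEST_DATA = """firstName,lastName,email
John,Doe,john.doe@example.com
Jane,Roe,jane.roe@example.com
"""

def run_signup_scenario(row: dict) -> str:
    """Stand-in for replaying the recorded steps with one row of data."""
    return f"submitted {row['firstName']} {row['lastName']} <{row['email']}>"

# One scenario execution per data row.
results = [run_signup_scenario(row) for row in csv.DictReader(io.StringIO(TEST_DATA))]
for line in results:
    print(line)
```

Because `csv.DictReader` maps header names to values, the recorded steps can reference fields as `row.firstName`-style parameters instead of hardcoded strings.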

Post-Recording Checklist

After recording and initial review, verify:

  • All actions were captured correctly
  • Actions are in the correct sequence
  • Assertions are present at key validation points
  • Locators are reliable and not overly specific
  • Test data is appropriate and documented
  • The scenario completes successfully when run
  • Scenario name and description are accurate
  • Reusable components are identified or created
  • The scenario is saved in the correct test suite
  • Team members have appropriate access

Summary

Recording a complete end-to-end scenario in UI Automation Testing involves:

  1. Launch the recording agent and prepare your environment
  2. Navigate to your starting point and wait for the page to load
  3. Perform actions naturally as a real user would (click, type, select, navigate)
  4. Add assertions at critical points to validate behavior
  5. Use reusable components for common workflows
  6. Handle dynamic content appropriately
  7. Complete the user journey to reach the expected end state
  8. Stop recording when the workflow is complete
  9. Review the recorded scenario for accuracy and completeness
  10. Test the scenario to ensure it executes successfully

By following these steps and best practices, you’ll create reliable, maintainable UI automation scenarios that effectively validate your application’s user workflows.

Remember: Good recording comes from good planning. Know your path, have your data ready, and think about assertions before you start. The recording itself is just capturing what you’ve already planned.

Next step: Learn about “Different Types of Locators in UI Automation Testing” to understand how to make your tests more resilient and maintainable.

