Runtime Fields: Migrating From Enzyme To React Testing Library

by Alex Johnson

This article outlines the process of migrating the Runtime Fields plugin tests from Enzyme to React Testing Library (RTL). The goal is to modernize our testing approach and improve test clarity and maintainability.

Summary

The primary goal is to migrate the Runtime Fields plugin tests from Enzyme to React Testing Library (RTL). This aligns us with modern testing practices and produces tests that reflect how users actually interact with our components, making them more maintainable and reliable.

Original issue: https://github.com/elastic/kibana/issues/241532

Epic: https://github.com/elastic/kibana/issues/222608

⚠️ CRITICAL: Migration Scope

The file list provided below is a MINIMUM starting point, not an exhaustive list. Ensure a comprehensive migration by identifying and addressing every instance of Enzyme usage in the specified codebase: not only the files explicitly listed, but also any related components, utilities, and configurations that rely on Enzyme.

You MUST proactively discover and migrate:

  • Testbed architecture files (look for testbed, TestBed, registerTestBed)
  • Helper files that abstract Enzyme usage (.helpers.ts, .helpers.tsx)
  • Index files that re-export Enzyme utilities (index.ts in test directories)
  • Any file importing from @kbn/test-jest-helpers that uses registerTestBed
  • Shared test utilities that may hide Enzyme patterns
  • Fixtures that set up Enzyme-based test infrastructure

Converting the existing tests is not enough: any related utilities and helpers must also be refactored to align with RTL's principles, so the resulting test environment is efficient and maintainable.

Search commands to find hidden Enzyme usage:

rg -l "registerTestBed|TestBed" x-pack/platform/plugins/private/runtime_fields/
rg -l "enzyme|shallow|mount|wrapper" x-pack/platform/plugins/private/runtime_fields/
rg -l "@kbn/test-jest-helpers" x-pack/platform/plugins/private/runtime_fields/

These commands locate files that still rely on Enzyme-related patterns, minimizing the risk of overlooking hidden dependencies during the transition to RTL.

Goal: RTL-pure, simplified architecture. Remove all testbed abstractions and helper layers; tests should use direct RTL patterns with screen queries. Beyond swapping libraries, simplify the overall structure so tests map directly to how users interact with components.

Target Files (Minimum List)

  • [ ] x-pack/platform/plugins/private/runtime_fields/public/components/runtime_field_editor_flyout_content/runtime_field_editor_flyout_content.test.tsx
  • [ ] x-pack/platform/plugins/private/runtime_fields/public/components/runtime_field_editor/runtime_field_editor.test.tsx
  • [ ] x-pack/platform/plugins/private/runtime_fields/public/components/runtime_field_form/runtime_field_form.test.tsx
  • [ ] x-pack/platform/plugins/private/runtime_fields/public/test_utils.ts

This list is a starting point: the files represent key components of the Runtime Fields plugin, but any other files using Enzyme must be identified and included as well.

🎯 Primary Reference

Study this PR thoroughly before starting:

  • #242062 - ILM migration (most recent, canonical reference for this exact migration pattern)

This PR demonstrates the complete transformation from testbed architecture to RTL-pure patterns; use it to keep this migration consistent with the established approach.


WISDOM BEAD: Enzyme to RTL Migration Patterns

⚠️ CRITICAL: Read this entire section IN FULL before starting any migration work.

This section collects essential guidance for migrating from Enzyme to RTL, from fundamental principles to specific component testing techniques, so the migration avoids common pitfalls.

I. ESSENTIAL PRINCIPLES

These principles emphasize understanding the code under test and verifying every change; adhering to them prevents incorrect results in the migrated tests.

1. Investigation-First Principle

Before modifying any test or helper, thoroughly understand the component under test: read its source, identify its test IDs, understand the conditions that control rendering, and verify the actual DOM structure. Tests built on assumptions rather than the implementation break in misleading ways.

  • Before fixing any test or helper:
    • Read the actual source code of the component/feature under test FIRST
    • Understand what test IDs exist, conditions controlling rendering, required props
    • Check for license requirements, feature flags, conditional rendering logic
    • Verify actual DOM structure and element attributes
    • When tests fail, inspect implementation to understand WHY

Rule: Never fix tests in a vacuum without understanding the implementation.

2. Verification After Every Change

Verify changes at the narrowest scope possible, expanding as milestones are reached. Frequent verification catches issues early and keeps confidence high that the migrated tests are correct, without the cost of running full suites for every change.

  • Always verify at the narrowest scope, expand at milestones:
    • Per file change (default):
      • Type check: cd <plugin path> && npx tsc -p tsconfig.json --noEmit
      • Lint: node scripts/eslint <path>
      • Unit: node scripts/jest <path to test>
    • Iteration/milestone complete (expand scope):
      • Utility done -> test all files using that utility
      • Feature iteration done -> test entire feature/plugin
      • PR ready -> full suite
    • Integration tests:
      • Use node scripts/jest_integration <path> only if jest.integration.config.js exists nearby

Rules:

  • Always verify the changed file immediately
  • Default narrow, expand when iteration achieved
  • Don't skip verification for small changes
  • Don't run full suites after each line change

3. Git & Commit Hygiene

Keep the Git history clean and seek explicit approval before committing or pushing, so every change is properly reviewed and validated.

  • NEVER commit changes without explicit user approval
  • NEVER push to remote without explicit user approval
  • Always request approval before EACH commit, even if prior approval given
  • Always request approval before EACH push, even if prior approval given
  • Do not assume continuing permission across operations
  • Present changes and wait for explicit proceed/yes/commit/push confirmation
  • When in doubt, stop and ask

II. RTL FUNDAMENTALS

This section covers core RTL API usage: querying elements, waiting for updates, and simulating user interactions. Tests written with these fundamentals accurately reflect user behavior.

4. Element Queries & Async Waiting

Choose the query and waiting method based on what you are waiting for: an element appearing, an element disappearing, or an attribute changing. Correct async waiting avoids race conditions and keeps tests of asynchronous components reliable.

Decision Tree:

Q: What are you waiting for?

  • Element to APPEAR? -> Use findBy*

    await screen.findByTestId('hot-phase')
    await screen.findByRole('button')
    await within(container).findByTestId('combobox')
    
  • Element to DISAPPEAR? -> Use waitForElementToBeRemoved

    await waitForElementToBeRemoved(screen.getByTestId('loading'))
    await waitForElementToBeRemoved(modal)
    // Note: Use getBy (not queryBy) - element must exist before removal
    
  • Attribute change? -> Use waitFor + expect

    await waitFor(() => expect(element.getAttribute('aria-checked')).toBe('true'))
    
  • Text content change? -> Use waitFor + expect

    await waitFor(() => expect(element.textContent).toBe('Complete'))
    
  • Mock function call? -> Use waitFor + expect

    await waitFor(() => expect(mockFn).toHaveBeenCalled())
    

Anti-Patterns (NEVER use these):

// WRONG
await waitFor(() => { expect(screen.getByTestId('foo')).toBeInTheDocument(); })
// CORRECT
await screen.findByTestId('foo')

// WRONG
await waitFor(() => expect(screen.queryByTestId('loading')).not.toBeInTheDocument())
// CORRECT
await waitForElementToBeRemoved(screen.getByTestId('loading'))

// WRONG
await waitFor(() => { const el = screen.getByRole('button'); expect(el).toBeInTheDocument(); })
// CORRECT
await screen.findByRole('button')

Act Warnings = Missing Await:

  • Act warnings mean React state updates outside act() boundaries
  • Root cause: Missing await on async operation BEFORE assertion
  • Fix: Add await to the operation BEFORE the assertion
  • Common sources: findBy, waitFor, waitForElementToBeRemoved, user interactions

5. Query Selector Strategy

The choice of query selector significantly affects test performance and maintainability. Prefer fast, specific selectors.

Priority (fastest to slowest):

  1. ByTestId - Direct data-testid/data-test-subj lookup (fastest, O(1))
  2. ByLabelText - Label association lookup (fast, O(n))
  3. ByPlaceholderText - Direct attribute lookup (fast, O(n))
  4. ByText - Text content search (fast with specific strings, O(n))
  5. ByRole - Role + accessible name computation (slowest, O(n^2))

Guidelines:

  • Default: Use ByTestId (Kibana convention: data-test-subj)
  • Use ByText for precise, unambiguous text matches
  • Avoid ByText when ambiguous (multiple elements with the same text)
  • Avoid ByRole unless scoped or for structural validation

ByRole - When Acceptable:

  • YES: Scoped with within(): within(cell).getByRole('button')
  • YES: Structural validation: getAllByRole('cell') to verify column count
  • YES: No simpler alternative exists

// NO
screen.getByRole('button', { name: /Save/ }) // -> screen.getByText(/Save/)
screen.getByRole('searchbox') // -> screen.getByPlaceholderText(/Search/i)

Rationale: ByRole is 20-30% slower (computes roles + accessible names across the entire DOM)

Null Handling:

  • Use getBy when element MUST exist (throws if not found)
  • Use queryBy when element MAY NOT exist (returns null)
  • Pattern: expect(screen.queryByTestId('optional')).not.toBeInTheDocument()

Screen Queries Over Container:

  • Prefer: screen.getByTestId() over const { getByTestId } = render()
  • Rationale: Screen queries are stable across re-renders
  • Container destructuring creates stale references
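As a loose closure analogy (plain TypeScript, no React or RTL involved; every name here is illustrative, and real RTL containers differ in detail), destructured queries behave like functions that captured one particular tree, while screen resolves the current tree on every call:

```typescript
// Illustrative analogy only: queries destructured at one point in time keep
// pointing at what they captured, while screen-style queries look up the
// current tree each time they are called.
type Dom = { text: string };

let currentDom: Dom = { text: 'initial render' };

// Destructuring from render(): the query closes over this particular tree.
const makeQueries = (dom: Dom) => ({ getText: () => dom.text });
const { getText } = makeQueries(currentDom);

// screen-style query: resolves the current tree at call time.
const screenLike = { getText: () => currentDom.text };

// Simulate a re-render swapping in a new tree.
currentDom = { text: 'after re-render' };

console.log(getText());            // 'initial render' (stale)
console.log(screenLike.getText()); // 'after re-render' (current)
```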

6. Fake Timers Setup & Strategy

Components that use setInterval/setTimeout need fake timers; without them, tests hang or time out. Fake timers also give precise control over the passage of time.

Required Configuration:

  • ALWAYS use fake timers by default
  • Required: jest.useFakeTimers({ legacyFakeTimers: true })
  • Place in: beforeAll() at test suite level
  • Rationale: Kibana components use setInterval/setTimeout; fake timers prevent timeouts
  • Without this: Tests hang or timeout waiting for real timers

Manual Timer Advancement - When Needed:

Rule: Manual advancement is RARELY needed. waitFor automatically advances timers by 50ms during polling.

Decision Tree:

Q: Do you need manual timer advancement?

  • Testing timer-dependent behavior? (polling, debounce, throttle) -> YES: Use jest.advanceTimersByTime(ms)
  • Waiting for async UI updates? -> NO: Use findBy/waitFor (they auto-advance timers)

// Example - HTTP Polling
jest.useFakeTimers({ legacyFakeTimers: true });
httpSetup.get.mockResolvedValue({ status: 'complete' });

// Advance timers for polling interval
jest.advanceTimersByTime(5000);

// Wait for result (waitFor will advance further if needed)
await waitFor(() => expect(screen.getByText('Complete')).toBeInTheDocument());

7. Form Testing - Complete Guide

This guide covers setting input values, triggering validation, and waiting for results, so form tests are robust and reflect real user flows.

Core Principle: Form helpers are SYNCHRONOUS. Tests handle all waiting.

Why Helpers Must Be Synchronous:

The Trap: Manual timers in helpers mask the real problem (synchronous assertions in tests).

// Anti-Pattern
// Helper with timer (seems necessary but WRONG)
async function setPolicyName(name: string) {
  fireEvent.change(input, { target: { value: name } });
  fireEvent.blur(input);
  await act(async () => {
    await jest.runOnlyPendingTimersAsync();  // WRONG: Masking the problem
  });
}

// Test with synchronous assertion (the ACTUAL problem)
await setPolicyName('test');
expectErrorMessages([...]);  // WRONG: Checks immediately - no waiting

// Correct Pattern
// Helper - just fire events (synchronous)
function setPolicyName(name: string) {
  fireEvent.change(input, { target: { value: name } });
  fireEvent.blur(input);
  // No timer! Tests handle waiting
}

// Test - wait for the outcome
setPolicyName('test');  // Synchronous call
await waitFor(() => expectErrorMessages([...]));  // CORRECT: Waits for validation

The Truth:

  • waitFor advances timers automatically during polling
  • Manual timers are ONLY needed because tests don't wait properly
  • Fix the test pattern, timers become unnecessary
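The polling behavior described above can be sketched without Jest or RTL. waitForLike below is a hypothetical stand-in for waitFor: it retries the callback until it stops throwing or a timeout elapses (the real waitFor additionally integrates with fake timers, which is why manual timer advancement is rarely needed):

```typescript
// Minimal sketch of waitFor-style polling: retry the callback until it
// stops throwing, or give up with the last error once the timeout elapses.
async function waitForLike<T>(
  callback: () => T,
  { timeout = 1000, interval = 50 } = {}
): Promise<T> {
  const deadline = Date.now() + timeout;
  for (;;) {
    try {
      return callback(); // assertion passed
    } catch (err) {
      if (Date.now() >= deadline) throw err; // out of time: surface last error
      await new Promise((resolve) => setTimeout(resolve, interval)); // poll again
    }
  }
}

// Usage: validation "completes" asynchronously; the test just waits for it.
let validationError: string | null = null;
setTimeout(() => { validationError = 'Name is required'; }, 120);

waitForLike(() => {
  if (validationError === null) throw new Error('no error yet');
  return validationError;
}).then((msg) => console.log(msg)); // logs 'Name is required'
```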

Setting Input Values:

  • ALWAYS use fireEvent.change for input value changes
  • NOT: fireEvent.input (unreliable in jsdom)
  • NOT: userEvent.type (unless testing realistic typing behavior)
  • Pattern: fireEvent.change(input, { target: { value: 'new value' } })
  • Rationale: Matches React's controlled component pattern, fastest

Triggering Validation:

  • Validation typically triggers on blur or form submit
  • For field validation: fireEvent.blur(input) after setting value
  • For form validation: fireEvent.submit(form) OR fireEvent.click(submitButton)

const input = screen.getByLabelText(/Policy Name/i);
fireEvent.change(input, { target: { value: 'test' } });
fireEvent.blur(input);  // Triggers field validation
await screen.findByText('Name is required');  // Wait for error

Waiting for Validation Results:

  • ALWAYS wait for validation messages to appear
  • Use findBy* to wait for error/success messages
  • Don't assume validation is synchronous

// Pattern for errors
fireEvent.blur(input);
await screen.findByText('Field is required');  // Wait for async validation

// Pattern for success
fireEvent.click(submitButton);
await screen.findByText('Successfully saved');

Async Input Effects (Search, Autocomplete):

When input change triggers async side effects (API calls, search):

fireEvent.change(searchInput, { target: { value: 'search term' } });
await screen.findByText('Search Result 1');  // Wait for results

Use case: Debounced search, autocomplete, async validation

Form Submission Testing:

// Mock submit handler
const onSubmit = jest.fn();
render(<MyForm onSubmit={onSubmit} />);

// Fill form
fireEvent.change(nameInput, { target: { value: 'John' } });

// Submit
fireEvent.click(submitButton);

// Wait for outcome
await waitFor(() => expect(onSubmit).toHaveBeenCalled());

Complex Form Interactions - Different Wait Conditions:

// Success case - wait for navigation
fireEvent.click(submitButton);
await waitFor(() => expect(mockNavigate).toHaveBeenCalled());

// Validation error - wait for error message
fireEvent.click(submitButton);
await screen.findByRole('alert');

// Loading state - wait for spinner
fireEvent.click(submitButton);
await screen.findByRole('progressbar');

Wait Logic Placement - Utility vs Call Site:

Decision Matrix:

Q: Where should waitFor go?

  • Utility is always async (API calls, navigation)? -> Wait INSIDE utility: async function submitAndWait() { ...; await waitFor(...); }
  • Utility sometimes sync, sometimes async? -> Wait at CALL SITE: utility(); await waitFor(...);
  • Wait condition varies by test? -> Wait at CALL SITE with test-specific condition

Default: Wait at the call site. Tests are more readable when waits are explicit.

8. fireEvent vs userEvent

fireEvent suits simple interactions; userEvent simulates realistic user behavior at the cost of speed and an async API. Choose based on how much fidelity the interaction under test requires.

Decision Matrix:

Q: Which interaction API?

  • Simple click/change/blur? -> fireEvent (fastest, synchronous)
  • Typing with special keys (Enter, Tab, Arrow)? -> userEvent with keyboard API
  • Realistic user behavior important? -> userEvent (simulates actual events)
  • Performance critical? -> fireEvent (no event simulation overhead)

// fireEvent Patterns
fireEvent.click(button);  // Synchronous
fireEvent.change(input, { target: { value: 'text' } });
fireEvent.blur(input);
fireEvent.submit(form);

// userEvent Patterns
// REQUIRED: Setup with timer advancement
const user = userEvent.setup({ advanceTimers: jest.advanceTimersByTime });

await user.click(button);  // Async!
await user.type(input, 'text');
await user.keyboard('{Enter}');
await user.tab();

Key Difference:

  • fireEvent: Fires a single event directly (synchronous)
  • userEvent: Simulates a full user interaction (async, multiple events)

Example: userEvent.click fires mousedown, mouseup, click in sequence

III. COMPONENT PATTERNS

This section gives concrete techniques for common component patterns: routing, license checks, portals, async selects, and tab navigation.

9. Router Mocking

Components that rely on routing need the router mocked so the test controls the routing context and can simulate different routes.

jest.mock('react-router-dom', () => ({
  ...jest.requireActual('react-router-dom'),
  useParams: () => ({ name: 'test-policy' }),
  useHistory: () => ({
    push: jest.fn(),
    location: { search: '' },
  }),
}));

10. License & Feature Flags

Components gated by licenses or feature flags need those dependencies mocked so each state can be exercised.

// Mock license as valid
httpRequestsMockHelpers.setLoadPolicies([mockPolicy]);
httpRequestsMockHelpers.setGetLicense({ license: { status: 'active', type: 'trial' } });

// Or mock license hook
jest.mock('../path/to/license', () => ({
  useLicense: () => ({ isActive: true, isGoldPlus: true }),
}));

11. Portal Components

Portal components (modals, flyouts, tooltips) render outside the component's DOM tree, so query from the document instead of the render container.

// For modals, flyouts, tooltips
const modal = document.querySelector('[data-test-subj="confirmModal"]');
expect(modal).toBeInTheDocument();

// Or use within on document.body
const confirmButton = within(document.body).getByTestId('confirmModalButton');

12. Async Select Components

For async select components, open the list, wait for the options to load, then select.

// Open combo box
fireEvent.click(screen.getByTestId('comboBoxToggleListButton'));

// Wait for options
await screen.findByTestId('comboBoxOptionsList');

// Select option
fireEvent.click(screen.getByText('Option Text'));

13. Tab Navigation

Click the tab, then wait for the tab panel to render before asserting on its content.

// Click tab
fireEvent.click(screen.getByRole('tab', { name: /Settings/i }));

// Wait for tab panel
await screen.findByRole('tabpanel');

IV. HTTP MOCKING

Mocking HTTP requests isolates the component under test and makes API responses deterministic and controllable.

14. HTTP Request Mocking

Set up mock responses for API calls so each scenario can be tested with controlled data.

httpRequestsMockHelpers.setLoadPolicies([
  { name: 'policy1', phases: { hot: { ... } } }
]);

// Verify calls
expect(httpSetup.get).toHaveBeenCalledWith('/api/policies');

15. Async HTTP Responses

Simulate delayed responses to test loading states and error handling.

// Delay response
httpSetup.get.mockImplementation(() =>
  new Promise(resolve => setTimeout(() => resolve({ data }), 100))
);

// Wait for loading to complete
await waitForElementToBeRemoved(screen.getByTestId('loading'));

16. HTTP Mock State Management

Clear mocks and re-register default responses between tests so one test's HTTP state cannot contaminate another.

beforeEach(() => {
  httpSetup.get.mockClear();
  httpSetup.post.mockClear();
  // Re-register default responses
  httpRequestsMockHelpers.setLoadPolicies([]);
});

V. TEST ORGANIZATION

Guidelines for structuring test files and keeping tests isolated.

17. Test File Structure

Structure test files with clear describe and it blocks so the component under test and each scenario are obvious at a glance.

describe('ComponentName', () => {
  beforeAll(() => {
    jest.useFakeTimers({ legacyFakeTimers: true });
  });

  afterAll(() => {
    jest.useRealTimers();
  });

  beforeEach(() => {
    // Reset mocks
    jest.clearAllMocks();
  });

  describe('WHEN condition', () => {
    it('SHOULD expected behavior', async () => {
      // Arrange
      render(<Component />);
      
      // Act
      fireEvent.click(screen.getByText('Button'));
      
      // Assert
      await screen.findByText('Result');
    });
  });
});

18. No Manual cleanup()

Jest auto-cleans up between tests; NEVER call cleanup() manually. React Testing Library registers its own cleanup via the Jest setup, so every test already starts with a fresh DOM.

19. Increasing Individual Test Timeout

If a test legitimately needs more time, increase the timeout for that specific test rather than for the entire suite.

it('SHOULD handle slow operation', async () => {
  // test code
}, 30000);  // 30 second timeout

// Or for all tests in file:
jest.setTimeout(30000);

20. Test Isolation

Each test must be independent, with no shared state or order dependencies. Clean mock state in beforeEach so every test runs in isolation.

// Anti-pattern
let sharedComponent;
beforeAll(() => { sharedComponent = render(<Component />); });
it('test 1', () => { /* uses sharedComponent */ });
it('test 2', () => { /* uses same sharedComponent - BAD */ });

// Correct
it('test 1', () => { render(<Component />); /* ... */ });
it('test 2', () => { render(<Component />); /* ... */ });

21. Test Performance Quick Wins

A few quick wins can significantly reduce how long the suite takes to run.

Order of impact:

  1. Replace ByRole with ByTestId (20-30% improvement)
  2. Use fireEvent instead of userEvent where possible
  3. Remove unnecessary waitFor wrappers (use findBy instead)
  4. Reduce test scope (test one behavior per test)

Detection:

Slow tests often have:

  • Multiple getByRole/findByRole queries
  • userEvent for simple interactions
  • waitFor wrapping getBy (should be findBy)
  • Single test validating multiple unrelated behaviors
    • Fix: Split into focused tests
    • Exception: Don't split if setup overhead dominates (>50% of test time)

# Find slow tests
SHOW_ALL_SLOW_TESTS=true node scripts/jest <path>

# Check for heavy selectors
rg "getByRole|findByRole|getAllByRole" <test-file>

# Check for userEvent
rg "userEvent\.(click|type|keyboard)" <test-file>

22. Test Splitting Strategy

Splitting tests can improve performance and failure isolation; these guidelines cover when to split and when to keep tests unified.

When to Split:

  • Test validates multiple independent behaviors
  • Test >3s AND >50 lines
  • Setup time <50% of total test time

Benefits:

  • Faster execution
  • Better failure isolation
  • Clearer intent
  • Easier maintenance

When NOT to Split:

  • Setup overhead dominates time (>50% of test)
  • Test validates true integration behavior (not independent units)

Decision Rule:

Split if: Test body >3s AND setup <1s

Keep unified if: Setup >50% of test time
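The decision rule can be written down as a tiny helper (shouldSplitTest is hypothetical; the thresholds are simply the ones stated above):

```typescript
// Hypothetical helper encoding this section's decision rule; the name and
// thresholds are illustrative, not part of any real API.
function shouldSplitTest(bodyMs: number, setupMs: number): boolean {
  // Keep unified when setup dominates (>50% of total test time).
  if (setupMs > (bodyMs + setupMs) / 2) return false;
  // Split when the test body is slow (>3s) and setup is cheap (<1s).
  return bodyMs > 3000 && setupMs < 1000;
}

console.log(shouldSplitTest(4000, 500));  // true: slow body, cheap setup
console.log(shouldSplitTest(4000, 5000)); // false: setup dominates
console.log(shouldSplitTest(2000, 500));  // false: body under 3s
```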

VI. REFACTORING LEGACY CODE

Patterns for refactoring legacy Enzyme tests and the utilities that support them.

23. Systematic Utility Refactoring

Refactor test utilities one at a time; this keeps the impact of each change isolated, prevents unpredictable test failures, and makes the process manageable.

Problem: Codebase has test utilities using manual timer advancement.

// WRONG: File-by-file refactoring (breaks tests unpredictably)
// CORRECT: Utility-by-utility refactoring (isolated impact)

// Workflow (one utility at a time):
// 1. Pick ONE utility function with timer code (e.g., clickSubmitButton)
// 2. Remove timer advancement from the utility itself
// 3. Find ALL call sites: rg "utilityName\(" --files-with-matches
// 4. Fix EACH call site with appropriate waitFor
// 5. Verify each changed file at the narrowest scope (Pattern 2)
// 6. Run full suite when utility refactoring complete
// 7. Move to the next utility

// Why this works:
// - Isolated impact: Only call sites of current utility affected
// - Fast feedback: Verify each file immediately
// - Clear debugging: Know which utility caused issues
// - Incremental progress: Each utility is a stable checkpoint

// Critical Pitfall - Missing Await:
// Problem: Remove timers from utility, but forget the utility is still async.

// WRONG (missing await):
// waitForValidation();  // No await!
// expect(someCondition()).toBe(true);  // Runs immediately!

// CORRECT:
// await waitForValidation();  // Awaited
// expect(someCondition()).toBe(true);  // Runs after validation

// Detection:
// - TypeScript/ESLint: @typescript-eslint/no-floating-promises
// - Manual: rg "utilityName\(" | rg -v "await.*utilityName"

// Common scenarios:
// - Utility was async for timer, still async for waitFor
// - Utility signature changes when removing timer parameters
// - All call sites must update when the signature changes
// - Use TypeScript compiler errors as a checklist

24. Enzyme Artifact Detection & Cleanup

Identifying and removing Enzyme-specific artifacts is a crucial step: it ensures the migrated suite is free of Enzyme dependencies and fully aligned with RTL.

// Search patterns:
// - .exists()
// - .find()
// - .simulate
// - shallow
// - mount
// - wrapper

// Replace with:
// - screen.getBy/queryBy
// - fireEvent
// - render

// Common artifact:
// WRONG: expect(wrapper.find('Component').exists()).toBe(true)
// CORRECT: expect(screen.getByTestId('component')).toBeInTheDocument()
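A quick mechanical scan for these artifacts can be sketched in plain TypeScript; the legacy source below is fabricated for illustration.

```typescript
// Fabricated legacy test source containing typical Enzyme artifacts,
// plus one already-migrated RTL line.
const legacySource = [
  "const wrapper = shallow(<Component />);",
  "expect(wrapper.find('Component').exists()).toBe(true);",
  "wrapper.simulate('click');",
  "expect(screen.getByTestId('component')).toBeInTheDocument();",
];

// The Enzyme-specific search patterns listed above.
const enzymePatterns = [/\.exists\(\)/, /\.find\(/, /\.simulate/, /\bshallow\b/, /\bmount\b/, /\bwrapper\b/];

const flaggedLines = legacySource.filter((line) =>
  enzymePatterns.some((pattern) => pattern.test(line))
);

console.log(flaggedLines.length); // 3 -- the RTL line is not flagged
```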

25. Removing Custom Testbed Abstraction

Eliminating custom testbed abstractions simplifies test setup and makes tests more direct, easier to understand, and less prone to errors.

Problem: Integration tests using custom testbed architecture with excessive abstraction.

// Anti-pattern: Test file -> .helpers.tsx -> Action creators -> Implementation (4 layers)

// Solution:
// - No .helpers.tsx files per test
// - Use shared helpers from ../../helpers
// - No setup() function
// - No actions object
// - Direct screen queries

Decision Matrix - Keep or Remove Helper:

  • Keep if:

    • Multi-step interaction
    • Used in 3+ tests
    • Conditional behavior
  • Remove if:

    • Single query
    • Used in 1-2 tests
    • Straightforward

Important: RTL's auto-cleanup runs between tests under Jest. NEVER call cleanup() manually.

26. No Harness Abstractions in Plugin Tests

Avoid harness abstractions in plugin tests; keeping tests simple and direct makes them easier to understand and maintain.

Use @kbn/test-eui-helpers ONLY for EUI-specific component testing.

For plugin tests:

  • Use simple inline helpers
  • Use shared utility functions
  • Avoid custom harness or testbed abstractions
  • NOT co-located with individual test files

Rationale: Harness patterns add unnecessary abstraction; direct helpers are clearer.

27. No Index Imports from Helpers

Import directly from specific helper files rather than from an index. Specific imports keep dependencies explicit and prevent accidentally re-exporting Enzyme artifacts during the transition to RTL.

// CORRECT
import { setupEnvironment } from './helpers/setup_environment'
// WRONG
import { setupEnvironment } from './helpers'

Rationale: Helper index files often re-export Enzyme-based utilities; importing from a specific file keeps them out of migrated tests.
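The same rule can be checked mechanically. A sketch with fabricated import lines:

```typescript
// Fabricated import lines; the scan flags imports that stop at the
// helpers index instead of naming a specific helper file.
const importLines = [
  "import { setupEnvironment } from './helpers';",
  "import { setupEnvironment } from './helpers/setup_environment';",
];

const indexImports = importLines.filter((line) => /from '\.\/helpers';?\s*$/.test(line));

console.log(indexImports.length); // 1 -- only the index import is flagged
```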

VII. DEBUGGING & TROUBLESHOOTING

This section covers techniques for debugging and troubleshooting common issues encountered during the migration, helping you identify and resolve problems efficiently.

28. Semantic Code Search for Debugging

Use semantic code search to compare implementations in your branch against those in the main branch; the differences often reveal why a test is failing and how to fix it.

When tests fail in your branch but you suspect they worked before:

  • Use semantic_code_search to check the main branch
  • The MCP tool searches the main branch by default
  • Instant access to reference implementations

Pattern:

  1. Search for test file or helper files
  2. Compare implementations, not just test structure
  3. Look for differences in: default routes, function signatures, test setup patterns

ALWAYS use when:

  • This test should work but doesn't
  • How did this work before?

Advantage: No need for Git worktrees or checkouts - instant main branch access.

29. Console Log Debugging for Data Flow

Trace data flow with strategic console logs to surface mismatches between form state and UI state. Logging at transformation points shows how data actually moves through the component and pinpoints where it diverges.

When form state doesn't match UI state:

  • Trace data flow with strategic console.logs
  • Add logging at transformation points: helpers, serializers, components
  • Reveals mismatches invisible in test output

Example:

fireEvent.change(input, { target: { value: 'test' } });
console.log('Form state:', form.getValues());
console.log('Serialized:', serializeForm(form));

// Remove console.logs after debugging.

30. Common Errors Reference

This reference lists common errors encountered during the migration and their solutions, so you can identify and resolve them quickly.

// Error: Found multiple elements by: [data-test-subj=...]
// Cause: Component rendered twice (e.g., describe() + beforeEach() both call setup)
// Fix: Remove duplicate setupTest() call, use actions from beforeEach
// Rule: One render per test (RTL cleanup handles it between tests)

// Error: Functions are not valid as a React child
// Cause: Component function referenced but not called/rendered

// WRONG: {MigrationGuidance} // Function reference
// CORRECT: {MigrationGuidance({ docLinks })} // Called with props
// CORRECT: <MigrationGuidance docLinks={docLinks} /> // Rendered as JSX

// Error: Element not found (but element exists in UI)
// Cause: License check failed, component didn't render
// Fix: Mock license in test setup (see Pattern 10)

// Error: Cannot read properties of undefined (reading 'url')
// Cause: Router context missing
// Fix: Mock react-router-dom hooks (see Pattern 9)

31. Test Helper Type Definitions

Use explicit type definitions for test helpers to ensure type safety and catch type-related issues early in development.

Problem: TypeScript cannot infer properties when intersecting multiple function return types.

// WRONG - Pure intersection type:
// let helpers: ReturnType<A> & ReturnType<B>;

// CORRECT - Explicit properties + intersection:
// const helpers: { prop1: Type1; prop2: Type2 } & ReturnType<A> & ReturnType<B>;

// General rule:
// - Functions return functions -> explicit
// - Objects return objects -> intersection
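A self-contained sketch of the rule (all names hypothetical): explicit properties combined with intersections of object-returning factories resolve cleanly.

```typescript
// Hypothetical helper factories whose return types are combined.
function createFormActions() {
  return { submit: () => 'submitted' };
}
function createNavActions() {
  return { goToTab: (tab: string) => `tab:${tab}` };
}

// Explicit properties + intersection of ReturnType<...>:
// every member is resolvable by the compiler.
type Helpers = { pluginName: string } & ReturnType<typeof createFormActions> &
  ReturnType<typeof createNavActions>;

const helpers: Helpers = {
  pluginName: 'runtime_fields',
  ...createFormActions(),
  ...createNavActions(),
};

console.log(helpers.submit());        // submitted
console.log(helpers.goToTab('edit')); // tab:edit
```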

32. Test Helper Return Types - Async vs Sync

Handle async and sync test helper return types correctly; treating a synchronous helper as async (or forgetting to await an asynchronous one) is a common source of confusion and subtle errors.

Common mistake: Treating synchronous render helpers as async.

RTL render functions return RenderResult synchronously.

Check helper signature:

  • Returns Promise? -> Use await
  • Returns RenderResult/object? -> No await, use directly

Remove await if the helper is synchronous; use screen queries directly.
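The check can be illustrated with two hypothetical helpers, one sync and one async:

```typescript
// Sync helper: RTL-style render helpers return their result directly.
function renderTestBed() {
  return { exists: (testSubj: string) => testSubj === 'form' };
}

// Async helper: returns a Promise, so call sites must await it.
async function setupWithDataLoaded() {
  return { exists: (testSubj: string) => testSubj === 'form' };
}

async function demo() {
  const syncResult = renderTestBed(); // no await: usable immediately
  console.log(syncResult.exists('form')); // true

  const asyncResult = await setupWithDataLoaded(); // must await
  console.log(asyncResult.exists('form')); // true
}
demo();
```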

33. Setup Functions - Return Stable Objects

Setup functions should return objects with stable references; reusing the returned object throughout the test avoids stale closure issues and keeps the setup consistent.

// CORRECT
const result = setup();
return result; // Use result consistently
// WRONG
// Call setup() multiple times in the same test

// Rationale: Avoids stale closure issues.

VIII. REFERENCE

This section is a quick reference for useful commands, reference PRs, and Kibana conventions.

// Test & Lint Commands:
// - Type check: cd <plugin path> && npx tsc -p tsconfig.json --noEmit
// - Lint: node scripts/eslint <path>
// - Unit: node scripts/jest <path to test>
// - Integration: node scripts/jest_integration <path> (only if jest.integration.config.js exists nearby)

// Reference PRs:
// - #242062 (ILM migration - most recent, canonical reference)
// - #239643 (CCR migration - canonical patterns)
// - #238764 (Management/ES UI Shared - timer handling, userEvent setup)

// Kibana Conventions:
// - Use data-test-subj (NOT data-testid) in Kibana codebase
// - RTL query: screen.getByTestId('foo') queries data-test-subj="foo"
// - Rationale: Kibana convention across FTR and Jest tests
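The getByTestId mapping works because RTL's test-ID attribute is configurable. A Jest setup file can point it at data-test-subj; this is a sketch using RTL's standard configure API (Kibana's actual setup file may differ).

```typescript
// Jest setup sketch: make RTL's *ByTestId queries match Kibana's
// data-test-subj attribute instead of the default data-testid.
import { configure } from '@testing-library/react';

configure({ testIdAttribute: 'data-test-subj' });

// After this, screen.getByTestId('foo') matches <div data-test-subj="foo">.
```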

IX. QUICK REFERENCE INDEX

This index maps topics and error messages to the patterns that address them, so you can locate the relevant guidance quickly.

By Topic:

  • Performance:

    -> Pattern 21: Test Performance Quick Wins

    -> Pattern 22: Test Splitting Strategy

    -> Pattern 5: Query Selector Strategy (includes ByRole performance)

    -> Pattern 8: fireEvent vs userEvent

  • Forms:

    -> Pattern 7: Form Testing - Complete Guide (covers all form scenarios)

  • Waiting/Async:

    -> Pattern 4: Element Queries & Async Waiting

    -> Pattern 6: Fake Timers Setup & Strategy

    -> Pattern 7: Form Testing (includes wait logic placement)

  • Timers:

    -> Pattern 6: Fake Timers Setup & Strategy

    -> Pattern 23: Systematic Utility Refactoring (removing timer code)

  • Components:

    -> Pattern 9: Router Mocking

    -> Pattern 10: License & Feature Flags

    -> Pattern 11: Portal Components

    -> Pattern 12: Async Select Components

    -> Pattern 13: Tab Navigation

  • Refactoring:

    -> Pattern 23: Systematic Utility Refactoring

    -> Pattern 24: Enzyme Artifact Cleanup

    -> Pattern 25: Removing Testbed Abstraction

    -> Pattern 26: No Harness Abstractions

  • Debugging:

    -> Pattern 28: Semantic Code Search

    -> Pattern 29: Console Log Debugging

    -> Pattern 30: Common Errors Reference

By Error Message:

  • Found multiple elements -> Pattern 30 (duplicate renders)
  • Functions are not valid as a React child -> Pattern 30
  • Element not found (but visible in UI) -> Pattern 10 (license check)
  • Cannot read properties of undefined -> Pattern 9 (router mock)
  • Act warning -> Pattern 4 (missing await)
  • Test timeout -> Pattern 21 (performance), Pattern 19 (individual timeouts)
  • Slow tests -> Pattern 21 (performance quick wins)

Acceptance Criteria

The acceptance criteria below form a checklist for verifying that the migration is complete and the tests function correctly.

  • [ ] All Enzyme imports removed from Runtime Fields plugin
  • [ ] All registerTestBed / testbed patterns removed
  • [ ] All .helpers.ts files either deleted or converted to pure RTL helpers
  • [ ] All tests passing with RTL
  • [ ] No shallow() or mount() calls remain
  • [ ] No wrapper variables remain
  • [ ] Direct screen queries used throughout
  • [ ] Type check passes
  • [ ] Lint passes
  • [ ] Zero Enzyme-related imports in the entire plugin

By migrating from Enzyme to React Testing Library, we are not only modernizing our testing practices but also ensuring that our tests are more robust, maintainable, and aligned with the way users interact with our applications. For additional information on React Testing Library, visit the official React Testing Library Documentation.