Inconsistency with Dropdown object selection causing frequent AssertionErrors

I am trying to run automated tests in some fixture views against custom components. However, in the fixture views leveraging native components I am seeing some annoying inconsistency with the Dropdown class and select_option_by_text_if_not_selected.

For instance, with this dropdown:

self.icon_dropdown = Dropdown(
    locator=(By.ID, "icon-dropdown"), driver=self.test_page.driver
)

and this test:

@pytest.mark.component
def test_button_icons(self):
    """Test button icon functionality - optimized version."""
    # Switch to Icon tab and set components
    self.switch_to_tab_and_set_components(2)

    icons = [
        ("none", None),
        ("star", "material/star"),
        ("settings", "material/settings"),
        ("user", "material/person"),
        ("home", "material/home"),
    ]

    for icon_value, expected_path in icons:
        # Select icon from dropdown
        self.icon_dropdown.select_option_by_text_if_not_selected(
            icon_value.title() if icon_value != "none" else "None",
            binding_wait_time=1.0,
        )

        if icon_value == "none":
            # Reduced retry attempts and wait time
            assert not self.test_page.wait_with_retry(
                self.test_button.has_icon, max_attempts=5, wait_between=0.2
            ), "Button should not have an icon"
        else:
            assert self.test_page.wait_with_retry(
                self.test_button.has_icon, max_attempts=5, wait_between=0.5
            ), f"Button should have an icon for '{icon_value}'"

            def check_icon_path():
                actual_path = self.test_button.get_icon_path()
                return actual_path == expected_path

            assert self.test_page.wait_with_retry(
                check_icon_path, max_attempts=5, wait_between=0.5
            ), f"Expected icon path '{expected_path}', got '{self.test_button.get_icon_path()}'"

It only makes it through every icon successfully about 50% of the time; the rest of the time it gets hung up on one of them (a random one each run) and never selects past it.

I have this same problem with anything that loops through dropdown options in a for loop.

Are there any tips and tricks to using the dropdown selection to make sure it's working consistently?

Generally the errors are all the native IAAssert AssertionError:

tests/test_button.py:475: in test_comprehensive_property_matrix
    self.icon_dropdown.select_option_by_text_if_not_selected(
/Users/gamblek/.pyenv/versions/3.13.1/lib/python3.13/site-packages/ignition_automation_tools/Components/PerspectiveComponents/Inputs/Dropdown.py:365: in select_option_by_text_if_not_selected
    IAAssert.contains(
/Users/gamblek/.pyenv/versions/3.13.1/lib/python3.13/site-packages/ignition_automation_tools/Helpers/IAAssert.py:25: in contains
    assert expected_value in iterable, msg
E   AssertionError: Assert Star in ['None']
E   Message: Failed to select option in Dropdown.

I am leaning toward thinking the issue is that it is making the selections too fast, or something like that.

Because if I just retry the same selection a few times in a row, it makes it through every test fine:

for index in range(3):
    try:
        variant_dropdown.select_option_by_text_if_not_selected(
            variant.title()
        )
        break
    except AssertionError as e:
        if index == 2:
            raise e
        continue

EDIT:
I am thinking maybe it's something more global: I am noticing a similar inconsistency when setting values on a numeric input as well.

test_sizes = [12, 16, 20, 24]

for size in test_sizes:
    # Set width via numeric input
    self.width_input.set_text(str(size))

    # Reduced wait time
    self.test_page.wait_for_binding_propagation(0.2)

    # Verify the input value was set correctly
    def check_size():
        actual_value = self.width_input.get_text()
        return actual_value == str(size)

    assert self.test_page.wait_with_retry(
        check_size, max_attempts=5, wait_between=0.2
    ), f"Expected width '{size}', got '{self.width_input.get_text()}'"

Every once in a while it seems to "miss" the events to either click or write data into things. Specifically, with the input field it's that self.width_input.set_text(str(size)) call getting "missed".
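
One workaround worth trying is to re-perform the action itself when verification fails, rather than only re-checking the result (which is all wait_with_retry does once the set has been "missed"). Below is a minimal sketch; set_and_verify and the callables it takes are hypothetical names of my own, not part of ignition_automation_tools:

```python
import time


def set_and_verify(action, read_back, expected, max_attempts=3, wait_between=0.5):
    """Perform an action, verify it took effect, and re-perform it if not.

    :param action: zero-arg callable performing the set (e.g. a lambda
        wrapping set_text); re-invoked on every attempt, unlike a pure
        read-back poll, which only re-checks a value that never landed.
    :param read_back: zero-arg callable returning the current value.
    :param expected: the value read_back should return once the set lands.
    :return: True once verified, False after max_attempts failures.
    """
    for _ in range(max_attempts):
        action()
        time.sleep(wait_between)  # give bindings a moment to propagate
        if read_back() == expected:
            return True
    return False


# Hypothetical usage against the width input in the loop above:
# assert set_and_verify(
#     action=lambda: self.width_input.set_text(str(size)),
#     read_back=lambda: self.width_input.get_text(),
#     expected=str(size),
# ), f"Width never stuck at {size}"
```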

For some additional info:

Here is my webdriver setup:
import os
import logging
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

logger = logging.getLogger(__name__)


def get_webdriver(executor_url, browser="chrome", headless=None):
    """
    Initialize Selenium WebDriver with optimized settings for CI/CD.

    Args:
        executor_url (str): Selenium Grid/Remote WebDriver URL
        browser (str): Browser type (currently only 'chrome' supported)
        headless (bool): Force headless mode. If None, auto-detect based on environment

    Returns:
        WebDriver instance
    """
    if browser.lower() != "chrome":
        raise ValueError(f"Unsupported browser: {browser}")

    # Auto-detect headless mode for CI environments
    if headless is None:
        headless = bool(os.getenv("CI")) or bool(os.getenv("GITHUB_ACTIONS"))

    logger.info(f"Initializing {browser} WebDriver (headless: {headless})")
    logger.info(f"Executor URL: {executor_url}")

    # Chrome options optimized for CI/CD
    options = webdriver.ChromeOptions()

    # SSL and certificate handling
    options.add_argument("--ignore-ssl-errors=yes")
    options.add_argument("--ignore-certificate-errors")
    options.add_argument("--ignore-certificate-errors-spki-list")
    options.add_argument("--ignore-ssl-certificate-errors")
    options.add_argument("--allow-running-insecure-content")

    # Performance and stability options
    options.add_argument("--no-sandbox")
    options.add_argument("--disable-dev-shm-usage")
    options.add_argument("--disable-gpu")
    options.add_argument("--disable-web-security")
    options.add_argument("--disable-features=VizDisplayCompositor")
    options.add_argument("--disable-extensions")
    options.add_argument("--disable-plugins")
    options.add_argument("--disable-images")  # Speed up loading
    options.add_argument("--disable-javascript-harmony-shipping")

    # Window size (important for element visibility)
    options.add_argument("--window-size=1024,768")
    options.add_argument("--start-maximized")

    # Headless mode for CI
    if headless:
        options.add_argument("--headless")
        logger.info("Running in headless mode")

    # Logging preferences
    options.add_argument("--log-level=3")  # Suppress INFO, WARNING, ERROR
    options.add_experimental_option("excludeSwitches", ["enable-logging"])
    options.add_experimental_option("useAutomationExtension", False)

    # Set capabilities (W3C compliant)
    options.set_capability("acceptInsecureCerts", True)
    # Note: acceptSslCerts is deprecated in favor of acceptInsecureCerts

    # Additional capabilities are now set through options
    options.set_capability("goog:loggingPrefs", {"browser": "SEVERE"})

    try:
        # Create remote WebDriver (Selenium 4+ syntax)
        driver = webdriver.Remote(command_executor=executor_url, options=options)

        # Set timeouts
        driver.set_page_load_timeout(60)
        driver.set_script_timeout(30)

        logger.info("WebDriver initialized successfully")
        logger.info(
            f"Browser: {driver.capabilities.get('browserName')} {driver.capabilities.get('browserVersion')}"
        )

        return driver

    except Exception as e:
        logger.error(f"Failed to initialize WebDriver: {e}")
        raise


def is_ci_environment():
    """Check if running in a CI/CD environment."""
    ci_indicators = [
        "CI",
        "CONTINUOUS_INTEGRATION",
        "GITHUB_ACTIONS",
        "TRAVIS",
        "CIRCLECI",
        "JENKINS_URL",
        "BUILDKITE",
    ]
    return any(os.getenv(indicator) for indicator in ci_indicators)


def get_browser_logs(driver):
    """
    Retrieve browser console logs for debugging.

    Args:
        driver: WebDriver instance

    Returns:
        List of log entries
    """
    try:
        logs = driver.get_log("browser")
        if logs:
            logger.info(f"Retrieved {len(logs)} browser log entries")
        return logs
    except Exception as e:
        logger.warning(f"Could not retrieve browser logs: {e}")
        return []


def capture_network_logs(driver):
    """
    Capture network activity logs if available.

    Args:
        driver: WebDriver instance

    Returns:
        List of network log entries
    """
    try:
        logs = driver.get_log("performance")
        network_logs = [log for log in logs if "Network." in log.get("message", "")]
        if network_logs:
            logger.info(f"Retrieved {len(network_logs)} network log entries")
        return network_logs
    except Exception as e:
        logger.warning(f"Could not retrieve network logs: {e}")
        return []

Here is my test fixture page
import time
from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import TimeoutException
from ignition_automation_tools.Pages.PerspectivePageObject import PerspectivePageObject

# Import standard Ignition components for interacting with controls
from ignition_automation_tools.Components.PerspectiveComponents.Inputs.Dropdown import (
    Dropdown,
)


class TestFixturePage(PerspectivePageObject):
    """
    Page object for the test fixture page that renders components by ID.
    Simply navigates to /test/test-fixture/{componentId} and waits for the component to render.
    """

    def __init__(self, driver: WebDriver, gateway_address: str):
        super().__init__(
            driver=driver,
            gateway_address=gateway_address,
            page_config_path="/test/test-fixture",
            primary_view_resource_path="Test Fixture",
            configured_tab_title="Test Fixture",
        )

    def _wait_for_page(self):
        """Wait for the component view to be visible."""
        WebDriverWait(self.driver, 60).until(
            EC.visibility_of_element_located((By.ID, "component-view"))
        )

    def navigate_to_component(self, component_id: str):
        """
        Navigate to a specific component test page.

        :param component_id: The component ID to navigate to (e.g., 'shadcn.input.button')
        """
        # Build the full URL with component ID
        component_url = f"{self.url}/{component_id}"

        # If we are not already at the component URL, navigate to it
        if self.driver.current_url != component_url:
            # Navigate directly to the component page
            self.driver.get(component_url)

        # Wait for the component view to be ready
        self._wait_for_page()

    def navigate_to(self):
        """Navigate to the base test fixture page (without component)."""
        self.driver.get(self.url)
        # Note: Don't wait for component-view here since no component is loaded yet
        self.wait_for_perspective_page()

    # Generic wait utilities for test synchronization

    def wait_for_binding_propagation(self, duration: float = 0.5):
        """
        Wait for data binding propagation to complete.

        :param duration: Time to wait in seconds (default 0.5s)
        """
        time.sleep(duration)

    def wait_for_ui_update(self, duration: float = 0.2):
        """
        Wait for UI updates to complete after user interactions.

        :param duration: Time to wait in seconds (default 0.2s)
        """
        time.sleep(duration)

    def wait_for_animation(self, duration: float = 0.2):
        """
        Wait for CSS animations/transitions to complete.

        :param duration: Time to wait in seconds (default 0.2s)
        """
        time.sleep(duration)

    def wait_for_element_change(
        self, locator, timeout: float = 5.0, poll_frequency: float = 0.1
    ):
        """
        Wait for any element to change (useful for dynamic content).
        This compares the element's outer HTML before and after to detect changes.

        :param locator: Tuple of (By, locator_string)
        :param timeout: Maximum time to wait in seconds
        :param poll_frequency: How often to check for changes in seconds
        :return: True if element changed, False if timeout
        """
        try:
            element = WebDriverWait(self.driver, timeout).until(
                EC.presence_of_element_located(locator)
            )
            initial_html = element.get_attribute("outerHTML")

            end_time = time.time() + timeout
            while time.time() < end_time:
                time.sleep(poll_frequency)
                try:
                    current_element = self.driver.find_element(*locator)
                    current_html = current_element.get_attribute("outerHTML")
                    if current_html != initial_html:
                        return True
                except Exception:
                    # Element might have been recreated, consider this a change
                    return True

            return False
        except TimeoutException:
            return False

    def wait_for_element_stable(
        self, locator, stability_duration: float = 1.0, timeout: float = 10.0
    ):
        """
        Wait for an element to remain stable (unchanged) for a specified duration.
        Useful when you need to ensure an element has finished updating.

        :param locator: Tuple of (By, locator_string)
        :param stability_duration: How long element must remain unchanged
        :param timeout: Maximum time to wait for stability
        :return: True if element became stable, False if timeout
        """
        try:
            element = WebDriverWait(self.driver, timeout).until(
                EC.presence_of_element_located(locator)
            )

            end_time = time.time() + timeout
            last_change_time = time.time()
            last_html = element.get_attribute("outerHTML")

            while time.time() < end_time:
                time.sleep(0.1)
                try:
                    current_element = self.driver.find_element(*locator)
                    current_html = current_element.get_attribute("outerHTML")

                    if current_html != last_html:
                        # Element changed, reset stability timer
                        last_change_time = time.time()
                        last_html = current_html
                    elif time.time() - last_change_time >= stability_duration:
                        # Element has been stable for required duration
                        return True

                except Exception:
                    # Element disappeared or error occurred
                    last_change_time = time.time()

            return False
        except TimeoutException:
            return False

    def wait_with_retry(
        self,
        check_function,
        max_attempts: int = 5,
        wait_between: float = 0.2,
    ):
        """
        Retry a check function multiple times with waits between attempts.

        :param check_function: Function that returns True when condition is met
        :param max_attempts: Maximum number of attempts (reduced from 10 to 5)
        :param wait_between: Time to wait between attempts in seconds (reduced from 0.5 to 0.2)
        :return: True if check_function eventually returned True, False otherwise
        """
        for attempt in range(max_attempts):
            if check_function():
                return True
            if attempt < max_attempts - 1:  # Don't wait after the last attempt
                time.sleep(wait_between)
        return False

    def set_dropdown_value(
        self, dropdown: Dropdown, option_text: str, repeats: int = 3
    ):
        for index in range(repeats):
            try:
                dropdown.select_option_by_text_if_not_selected(option_text)
                break
            except AssertionError as e:
                if index == repeats - 1:
                    raise e
                continue

Here is my test configuration

import os
import logging
import pytest
from utils.logging import setup_logging, configure_debug_mode
from utils.webdriver import get_webdriver
from utils.paths import TEMP_DIR, SCREENSHOT_DIR, LOG_DIR
from utils.screenshots import save_screenshot

# Import from the test-specific pages directory (not library)
from pages.test_fixture_page import TestFixturePage

# Environment-based configuration with sensible defaults
SELENIUM_EXECUTOR_URL = os.getenv(
    "SELENIUM_EXECUTOR_URL", "https://selenium-executor.localtest.me/wd/hub"
)
GATEWAY_URL = "http://shadcn:8088"

# Setup directories
os.makedirs(TEMP_DIR, exist_ok=True)
os.makedirs(SCREENSHOT_DIR, exist_ok=True)
os.makedirs(LOG_DIR, exist_ok=True)


def pytest_addoption(parser):
    """Add custom command line options."""
    parser.addoption(
        "--debug-automation",  # Changed from --debug to avoid conflicts
        action="store_true",
        default=False,
        help="Enable debug logging for all libraries (selenium, urllib3, etc.)",
    )
    parser.addoption(
        "--log-selenium",
        action="store_true",
        default=False,
        help="Enable selenium debug logging specifically",
    )
    parser.addoption(
        "--log-requests",
        action="store_true",
        default=False,
        help="Enable requests/urllib3 debug logging",
    )


def pytest_configure(config):
    """Configure pytest with custom markers and logging settings."""
    # Configure debug logging based on command line options
    debug_mode = config.getoption("--debug-automation")  # Changed from --debug
    log_selenium = config.getoption("--log-selenium")
    log_requests = config.getoption("--log-requests")

    configure_debug_mode(
        debug_mode=debug_mode, log_selenium=log_selenium, log_requests=log_requests
    )

    # Add custom markers
    config.addinivalue_line(
        "markers", "slow: marks tests as slow (deselect with '-m \"not slow\"')"
    )
    config.addinivalue_line("markers", "integration: marks tests as integration tests")
    config.addinivalue_line(
        "markers", "smoke: marks tests as smoke tests for quick validation"
    )
    config.addinivalue_line(
        "markers",
        "quit_on_failure: marks tests that should quit the entire test session if they fail",
    )
    config.addinivalue_line(
        "markers", "component: marks tests that test individual components"
    )
    config.addinivalue_line(
        "markers", "variants: marks tests that test component variants"
    )
    config.addinivalue_line(
        "markers", "debug: marks tests that need debug logging enabled"
    )


# Setup logging after pytest configuration
logger = setup_logging()

# Log configuration for debugging (only in debug mode)
if logger.isEnabledFor(logging.DEBUG):
    logger.debug(f"Selenium Executor URL: {SELENIUM_EXECUTOR_URL}")
    logger.debug(f"Gateway URL: {GATEWAY_URL}")
else:
    logger.info("Test suite initialized (use --debug-automation for verbose logging)")


@pytest.fixture(scope="session")
def gateway_url():
    """Provide the Ignition gateway URL from environment or default."""
    return GATEWAY_URL


@pytest.fixture(scope="session")
def selenium_executor_url():
    """Provide the Selenium executor URL from environment or default."""
    return SELENIUM_EXECUTOR_URL


@pytest.fixture(scope="session")
def driver(selenium_executor_url, request):
    debug_enabled = logger.isEnabledFor(logging.DEBUG)
    if debug_enabled:
        logger.debug(f"Connecting to Selenium at: {selenium_executor_url}")
    driver = get_webdriver(executor_url=selenium_executor_url)
    yield driver
    if debug_enabled:
        logger.debug("Closing WebDriver session")
    driver.quit()


@pytest.fixture(scope="function", autouse=True)
def reset_browser(driver, test_fixture_page):
    driver.delete_all_cookies()
    test_fixture_page.navigate_to()  # Navigate to base URL
    yield


@pytest.fixture(scope="function")
def test_fixture_page(driver, gateway_url, request):
    """Create the test fixture page (without navigating to any component)."""
    debug_this_test = request.node.get_closest_marker("debug") is not None
    debug_enabled = logger.isEnabledFor(logging.DEBUG)

    if debug_enabled or debug_this_test:
        logger.debug(f"Creating test fixture page for {gateway_url}")

    return TestFixturePage(driver=driver, gateway_address=gateway_url)


# Enhanced error handling with screenshots
@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
    """Log test start."""
    # Only log test start in debug mode or for debug-marked tests
    debug_this_test = item.get_closest_marker("debug") is not None
    debug_enabled = logger.isEnabledFor(logging.DEBUG)

    if debug_enabled or debug_this_test:
        logger.debug(f"Starting test: {item.name}")


@pytest.hookimpl(tryfirst=True)
def pytest_runtest_teardown(item, nextitem):
    """Log test completion."""
    # Only log completion in debug mode or for debug-marked tests
    debug_this_test = item.get_closest_marker("debug") is not None
    debug_enabled = logger.isEnabledFor(logging.DEBUG)

    if debug_enabled or debug_this_test:
        logger.debug(f"Completed test: {item.name}")


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """Hook to capture test results and handle failures with enhanced debugging."""
    outcome = yield
    report = outcome.get_result()

    # Add extra information for failed tests
    if report.when == "call" and report.outcome == "failed":
        # Always log test failures (not just in debug mode)
        logger.error(f"❌ Test {item.name} failed")

        # Capture enhanced debugging info if driver is available
        if hasattr(item, "funcargs"):
            driver = item.funcargs.get("driver")

            if driver:
                try:
                    current_url = driver.current_url
                    page_title = driver.title
                    logger.error(f"📍 Failure context - URL: {current_url}")
                    logger.error(f"📄 Page title: {page_title}")

                    # Save screenshot for failed test
                    screenshot_path = save_screenshot(driver, item.name, "failure")
                    logger.error(f"📸 Screenshot saved: {screenshot_path}")

                    # Only show browser logs in debug mode or if specifically requested
                    debug_this_test = item.get_closest_marker("debug") is not None
                    debug_enabled = logger.isEnabledFor(logging.DEBUG)

                    if debug_enabled or debug_this_test:
                        # Log browser console errors
                        from utils.webdriver import get_browser_logs

                        browser_logs = get_browser_logs(driver)
                        if browser_logs:
                            logger.debug("🖥️  Browser console errors:")
                            for log in browser_logs[-5:]:  # Last 5 errors
                                logger.debug(f"  {log}")
                        else:
                            logger.debug("🖥️  No browser console errors found")

                except Exception as e:
                    logger.warning(f"⚠️  Could not capture failure details: {e}")

        # Handle quit_on_failure marker
        if item.get_closest_marker("quit_on_failure"):
            logger.error("=" * 80)
            logger.error(
                "🚨 CRITICAL FAILURE: Test marked with 'quit_on_failure' has failed!"
            )
            logger.error(f"🚨 Failed test: {item.name}")
            logger.error("🚨 Stopping all test execution immediately.")
            logger.error("=" * 80)

            pytest.exit(
                f"Test '{item.name}' marked with 'quit_on_failure' failed. Stopping all tests.",
                returncode=1,
            )


# Health check fixtures for CI
@pytest.fixture(scope="session", autouse=True)
def verify_services_health(gateway_url, selenium_executor_url, request):
    """Verify that required services are accessible before running tests."""
    import requests

    # Allow skipping health checks if needed
    if os.getenv("SKIP_HEALTH_CHECKS", "").lower() in ("true", "1", "yes"):
        logger.info("⏭️  Skipping service health checks (SKIP_HEALTH_CHECKS=true)")
        return

    debug_enabled = logger.isEnabledFor(logging.DEBUG)

    if debug_enabled:
        logger.debug("🔍 Verifying service health...")
    else:
        logger.info("🔍 Checking services...")

    # Check Selenium
    try:
        response = requests.get(
            selenium_executor_url.replace("/wd/hub", "/status"), timeout=10
        )
        if response.status_code == 200:
            logger.info("✅ Selenium service is healthy")
        else:
            pytest.fail(f"Selenium service returned status {response.status_code}")
    except Exception as e:
        pytest.fail(f"❌ Cannot connect to Selenium service: {e}")

    if debug_enabled:
        logger.debug("✅ Service health checks completed")

This bit right here is pretty clear about what is happening. The ONLY option in your dropdown is None, and so it is unable to select Star. How are the options of the Dropdown being populated?

This dropdown has a hard-coded list of options, and I was able to confirm, by watching the test run, that those items are present in the list.

My thought is that the click is happening on the dropdown expansion icon, and the check for items is then happening faster than the modal appears in front of the dropdown, causing it to find nothing.
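
If that race is the cause, one way to test the theory without touching the library is to poll until the expected option is actually rendered before asserting on it. A minimal sketch, assuming you can supply a callable that returns the option texts currently present in the DOM (wait_for_option and get_option_texts are hypothetical names of mine, not library functions):

```python
import time


def wait_for_option(get_option_texts, expected_text, timeout=5.0, poll=0.1):
    """Poll until expected_text shows up in the rendered option list.

    :param get_option_texts: zero-arg callable returning the option texts
        currently in the DOM (it may raise while the modal is still opening).
    :return: True once the option is visible, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if expected_text in get_option_texts():
                return True
        except Exception:
            pass  # options container not rendered yet; keep polling
        time.sleep(poll)
    return False
```

Calling this right after expanding the dropdown, and only selecting once it returns True, would distinguish "options never populate" from "options populate slower than the check".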

What happens when you set binding_wait_time to an absurd number like 10? Do you get a 100% pass rate or a similar 50% pass rate?

Are the dropdowns simple or do they have a long list of options? If a long list, it might be worth tweaking our implementation of Components/PerspectiveComponents/Inputs/Dropdown.py to determine which path the automation is taking when it passes and when it fails. If self.get_selected_options_as_list() isn't returning the item half the time, that could explain some of the flakiness of the test.

Garth

I actually noticed that this doesn't resolve it either; it seems that sometimes actions performed by clicks are just not "taken", for lack of a better word.

This was a simple one I just threw together, clicking on a toggle switch; it still fails at about the same frequency.

@pytest.mark.component
def test_button_enabled_state(self):
    """Test enabling/disabling the button."""
    # Switch to Basic tab and set components
    self.switch_to_tab_and_set_components(0)

    for _ in range(15):

        # Test disabled state
        self.enabled_toggle.set_switch(
            should_be_selected=False, binding_wait_time=5.0
        )
        self.test_page.wait_for_binding_propagation()

        # Test enabled state
        self.enabled_toggle.set_switch(
            should_be_selected=True, binding_wait_time=5.0
        )
        self.test_page.wait_for_binding_propagation()

With the following error

________________________________________________________________ TestShadCNButton.test_button_enabled_state ________________________________________________________________
tests/test_button.py:153: in test_button_enabled_state
    self.enabled_toggle.set_switch(
/Users/gamblek/.pyenv/versions/3.13.1/lib/python3.13/site-packages/ignition_automation_tools/Components/PerspectiveComponents/Inputs/ToggleSwitch.py:86: in set_switch
    IAAssert.is_true(
/Users/gamblek/.pyenv/versions/3.13.1/lib/python3.13/site-packages/ignition_automation_tools/Helpers/IAAssert.py:304: in is_true
    assert value, msg
E   AssertionError: Assert False
E   Message: Failed to set the state of a Toggle Switch to True.

What I am thinking is that something with the Selenium container is causing some "missed clicks". It's almost like the browser doesn't have focus, and when a click lands on the browser it just sets focus and doesn't actually perform the action (if that makes sense?).

In all of these scenarios, if I wrap them in this, it works 100% of the time in my tests. This is just a terrible way for me to do everything:

try:
    perform_the_action()
except AssertionError:
    perform_the_action()  # blindly try again
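
If the blind retry has to stay for now, it can at least be centralized instead of copy-pasted around every interaction. A sketch of a decorator that retries any action raising AssertionError (the names here are mine, not from ignition_automation_tools):

```python
import functools
import time


def retry_on_assertion(attempts=3, wait_between=0.5):
    """Decorator: re-run a flaky action when the library's IAAssert fails.

    The final attempt re-raises, so genuine failures still surface in the
    test report instead of being swallowed.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except AssertionError:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(wait_between)
        return wrapper
    return decorator


# Hypothetical usage:
# @retry_on_assertion(attempts=3)
# def select_variant(dropdown, text):
#     dropdown.select_option_by_text_if_not_selected(text)
```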

Selenium has the ability to record entire sessions that can sometimes be useful in identifying what is going on if you are leveraging Docker. The capability is:

options.set_capability('se:recordVideo', True)

We have it set up to run in a way similar to how this article describes, where our browsers are spun up in Docker with the functionality enabled. Sometimes this works better than screenshots, and sometimes it just confirms what we already know. Just throwing out another tool that might help identify what is going on.

Conveniently I was just working on grabbing some screen recordings.

I am not sure why this slows it down, but you can notice that on the 3rd time it clicks the toggle switch, the highlight/focus changes on the toggle switch, but nothing happens. Then it clicks again and resumes.

Then afterwards, towards the end, it does the same thing going from False to True: it just sets the focus to the thumb and highlights its border blue (seen in the really fuzzy screenshot below).

Screen Recording 2025-05-27 at 3.21.55 PM


This is with me attempting to isolate concerns, by just clicking the thumb directly 100 times in a row:

for _ in range(100):
    self.enabled_toggle.ts_thumb._click()
    self.test_page.wait_for_binding_propagation()

Okay,

Someone (me) vibe coded this test screen into existence with a fancy AI tool I am working on, and it added an onActionPerformed AND the Bidirectional binding to set the props on those other components... and so every once in a while I was hitting a race condition....

That being said, the library is totally fine and this was operator error... 🤦


Oy! TIL. The linguistic correlates are.... oddly (in)appropriate.


We'll ignore the fact that I was letting something automated write a test for something automated... lol

2 Likes