Why did my test fail?

Tests are going to fail. Typically, this is because they have identified an issue on your website or application, which is exactly their purpose: they are telling you that something is wrong.

However, in some cases tests will fail because of design issues with the test itself or because your website or application has changed. This page outlines common errors that lead to test failures and walks through steps to diagnose and fix them.

Element not found

The "Element not found" error is by far the most common error you will encounter as you test. It means that the test is unable to find the element that is being targeted in the step. For instance, the link or button that you wish to click on the page cannot be found. That's the simple part. Determining why the element cannot be found is the more intricate part.

There are many reasons why an element cannot be found. It is most often the result of the test's design, and not because the element has randomly gone missing. Some of the reasons for this error are quite simple and can be easily addressed. Let's walk through our suggested approach for diagnosing this error.

The video and screenshot provided with each test result are incredibly helpful when investigating this error. We recommend starting there.

  1. Click on the "Watch video" link and watch the full video for your test run. This will help you to understand the sequence of steps that led to the issue and will let you confirm that previous steps were carried out as expected.
  2. Next, open the final screenshot on the right side of the result by clicking on it. This will show you a full-size screenshot of the page where the step failed.
  3. Confirm that the screenshot is showing something. Is the screenshot blank? If so, then it's possible that Ghost Inspector could not reach your website URL or that your website requires HTTP authentication credentials. This is especially likely if it's the first step of your test that's failing.
  4. If the screenshot is rendering a page, is it the expected page? For instance, are you expecting to see your application dashboard, but instead you're seeing the login form? If so, then your test may not include the proper login steps. It's also possible that earlier steps did not perform as expected or that you recorded a non-repeatable sequence.
  5. If the screenshot is showing the expected page, then look for the element from the failing step. For instance, if you are attempting to click a button, make sure you can visibly see that button on the page. If it's not visible, something may be going wrong earlier in your test. If you can see that the element is there in the screenshot and Ghost Inspector simply can't locate it, then it's likely that the step selector is not accurately targeting the element.
  6. If none of the steps above have led to a clear explanation for the step failure, it's possible that timing issues are playing a role (see the "Timing issues" section below).

Unreachable URL

Ghost Inspector needs a URL that it can use to access your website or application from our servers. If your website is running at a URL like http://localhost:3000 or https://company.internal:4444, then Ghost Inspector will not be able to reach it directly. You will need to consider creating a tunnel to your local or private host so that Ghost Inspector can access it.

If the website you're trying to reach is behind a firewall and must have IP addresses added to an "Allow list", you can do this using our published IP addresses. Network traffic from your test runs will always use these published IP addresses. Note that you will need to allow all the IP addresses for any geolocations that you're using.

HTTP authentication credentials are missing

Your test may have failed because your website requires HTTP authentication credentials. For security reasons, we are not able to capture these in the test recorder. However, they can easily be added afterwards to the test's settings under Settings > Browser Access > HTTP Authentication (or for the whole suite under Settings > Test Defaults > HTTP Authentication).

Login steps are missing

Every Ghost Inspector test runs in a new, isolated browser session. Think of it like an incognito window. If your test is required to log into an application, that login sequence must be a part of the test so that it happens in every test run. If you start recording a test inside of an application that you're already logged into in your own browser, the test will fail when it's run with Ghost Inspector. You'll see the login form show up in the screenshot of the test result because Ghost Inspector started from a fresh browser that was not logged in.

While every test does need to log in individually, you don't need to record your login steps every single time. You can record those steps once as an importable module which can then be added to the beginning of all your tests. Simply record your login steps as a standalone test, then use the test editor to add an "Import steps from test" step anywhere you need them.

When recording your login steps, our browser extension cannot capture auto-filled/saved credentials from your password manager. You must type (or paste) the credentials into the login form in order for the test recorder to capture them. If the username and password are pre-filled when you open the login form, you must clear out those credentials and type (or paste) them back in. Alternatively, you can add the "Assign" steps afterwards in the test editor.

Note: You should always create tests using dummy or staging credentials. For security reasons, Ghost Inspector tests should not use your own private login credentials or any sensitive data.

Step selector is not accurate

The step's "selector" target is the identifier that it uses to find the element on the page that's involved in the step. For instance, you may click a "Submit" button that's identified by its ID using the selector #submit-btn. You may be assigning an email address to a field using its name attribute, with a selector like input[name="email"]. If you've used our test recorder, these selectors are generated for you. Alternatively, you may have defined them yourself using our test editor.

As your application changes, these selectors often need to be updated to account for adjustments in the DOM of your website or application. For instance, you may be targeting input[name="email"] in a test step but that field got renamed to input[name="personal-email"] in the latest release. Now your test is unable to find the input field and the step fails. There are a few things you can do to fix this and to help prevent it in the future:

  1. Firstly, it's important to understand how selectors work, even if you're using our test recorder which generates them for you. We suggest reading our blog post on CSS selector strategies for a thorough explanation.
  2. Next, evaluate the selector and see if anything erroneous stands out.
    • Does the selector include random looking strings or numbers, like #order-361729 or .css-v9e2xr? If so, it's likely that this is a dynamic selector that changes on each page load or deployment. We have some recommendations for working around this challenge.
    • Does the selector include class names that are unnecessary or conditional, like .hover or .display-mobile? Oftentimes these classes are recorded unintentionally due to specific actions or settings in your own browser and can be removed.
    • If the selector is using XPath to find the element using its text label, has that text changed? For instance, is the selector looking for //button[contains(text(), 'Submit form')] but the button now has the text "Send" instead of "Submit form"?
  3. If it's unclear why the selector isn't working, you can either generate a new one yourself or re-record the test.
    • To create a new selector yourself, navigate to the same place where the step failed in your own browser. You can use the tools available in your browser to create and test new selectors in the developer console, as shown in the example after this list.
    • If you opt to re-record the test, it's often easier to capture the whole test flow again and make sure other changes are accounted for as well. This is particularly useful if your website or application has changed substantially.
  4. Keep in mind that Ghost Inspector supports backup selectors for your steps. This feature lets you specify multiple selectors for finding the element. Our system will check all of them (continually, in order). It can help the "durability" of your tests tremendously to specify multiple selectors so that the step has other options to fall back to when attempting to locate the element. For instance, you might specify two different selectors for a button that look for it both by its ID and text label by using #submit-btn and //button[text()="Submit"], respectively.
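
To verify a selector before updating your step, you can evaluate it directly in your browser's developer console. Here's a quick sketch using the example selectors from above (substitute your own):

// Returns the matching element, or null if nothing matches
document.querySelector('#submit-btn')

// Should return exactly 1 for a uniquely targeted element
document.querySelectorAll('input[name="email"]').length

// $x() is a console-only helper for evaluating XPath expressions
$x("//button[contains(text(), 'Submit form')]")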

Earlier steps passed but were unsuccessful

In some cases, a test step is failing because some event didn't occur the way it was supposed to earlier in the test run. For instance, maybe a specific checkbox needs to be checked in order for a form to be submitted, and the step that is supposed to check that box is accidentally clicking the wrong box. That particular step may have still passed, but now the form submission fails and the test is not able to find the "Success" message afterwards. Even though the assertion for "Success" is the step that's failing, it's due to the checkbox issue occurring many steps earlier.

This type of issue is where the test result's video really comes in handy. Watch the video in detail and ensure that each step is working the way it is supposed to work. Look for any indication that a step isn't succeeding in the way that it should. Keep in mind that events may not be happening due to timing issues and the test moving too quickly.

Non-repeatable sequence

Imagine that you manage a scheduling application and you want to test the process of booking an appointment. You use the Ghost Inspector test recorder to capture yourself booking an appointment at a certain date and time, then save the test. When you run the test afterwards in Ghost Inspector it fails because the appointment slot from the recording is no longer available. The recording process itself blocked future bookings of the same slot. It's now unavailable in your test runs. You've created a "non-repeatable sequence" in your test.

A similar situation can happen if you book an appointment for tomorrow's date in your test, then run the test 3 days later. The date you specified is now in the past and cannot be booked. The same can happen if your test purchases an item and deducts from the available quantity each time. Eventually there may be no more items in stock for the test to purchase and it will fail.

Situations like this are common in "booking" and "purchasing" tests, but they can occur in any type of test that deals with "state", meaning data that is affected by your test's actions or by outside factors. You will need to consider dates and availability when designing your tests. Instead of choosing a specific date, you may need a step that ensures a future date is always chosen, either using JavaScript or by navigating in the date picker.
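
For example, here is a minimal sketch of a JavaScript step that always chooses a date one week in the future. The input[name="appointment-date"] selector and YYYY-MM-DD format are assumptions; adjust both to match your own form:

// Build a date 7 days from now (always in the future, no matter when the test runs)
var date = new Date()
date.setDate(date.getDate() + 7)

// Format as YYYY-MM-DD and assign it to the (hypothetical) date field
var value = date.toISOString().slice(0, 10)
document.querySelector('input[name="appointment-date"]').value = value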

Timing issues

Timing is very important in automated testing. There are things that we instinctively know to wait for as humans that automated tests cannot intuit. Automated tests also execute much faster, which may not leave time for certain events to complete the way they do when a human is carrying them out manually.

Ghost Inspector provides a number of options to deal with timing issues:

  1. Our test running system has "smart logic" built in, which does its best to wait for elements, network requests and other types of readiness through implicit waiting.
  2. We provide a number of step timing settings that let you control how our waiting logic works. We set reasonable defaults, but these may need to be increased to deal with slower interactions. For instance, if submitting an order during a test takes 20 seconds, then you would need to increase the "Element Timeout" setting to 30 seconds or more to leave time for that transaction to complete. Our default setting is only 15 seconds, so without an increase the test would give up on the interaction too early and assume that it has failed.
  3. Because our system waits implicitly for elements and conditions, you can use assertions to control the flow of your test. We highly recommend this practice and encourage you to add an assertion step after any type of submission or transaction occurs in your test.
  4. As a last resort, you can add "Pause" steps to your test to pause for a specific amount of time. In some cases, this may be necessary. However, in most cases, this type of "flow control" is better facilitated through assertions and step timing settings.

Note: Ghost Inspector tests have a maximum run time of 10 minutes. If you are using "Pause" steps with very long wait times, you will need to ensure that the test is able to complete within 10 minutes or consider breaking it into multiple sequential tests.

JavaScript error. Check console output.

This error means that your JavaScript code in the step threw an exception. In other words, the JavaScript code encountered an error and was not able to complete. Our system will output the error in the "Console Output" section of the test result, below the steps. The error will be highlighted in red for easier identification.

One of the most common scenarios where we see this happen is when a step references an element in the JavaScript code that does not exist on the page (or does not exist yet). For instance, the JavaScript may execute document.querySelector('#btn').click() but the #btn element is not on the page, so an error is thrown.

Keep in mind that JavaScript steps do not wait for elements to be present. Our formal step actions, like a "Click" step, will wait for the #btn element to be present before proceeding. However, we're not able to determine element references inside of custom JavaScript code, so the code will execute immediately. If this is an issue, we recommend adding an "Element is present" step for the element (in this case #btn) before the JavaScript step that references it. This will ensure the element is present before proceeding with the JavaScript step.
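
Alternatively, if you'd rather handle a potentially missing element inside the JavaScript itself, a simple null check avoids the exception. This is just a sketch using the #btn example from above; note that the step will silently do nothing if the element never appears, so the "Element is present" approach is usually preferable:

// Only click the button if it actually exists on the page
var btn = document.querySelector('#btn')
if (btn) {
  btn.click()
}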

Promise was rejected. Check console output.

This error means that the Promise used in an asynchronous JavaScript step has called reject() to fail the step, instead of calling resolve() to pass it. When using reject() in your Promise code, we recommend passing in the error, like reject(err) (you may need to adjust for the variable name of the error). Our system will output the error in the "Console Output" section of the test result, below the steps. The error will be highlighted in red for easier identification.

In order to resolve this error, you will need to adjust your JavaScript code or address the problem on the page to ensure that resolve() is called instead of reject() inside the Promise.
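
For illustration, here is a minimal sketch of the reject(err) pattern. The #status selector and "Ready" text are hypothetical placeholders for your own logic:

return new Promise(function (resolve, reject) {
  try {
    var el = document.querySelector('#status')
    if (el && el.textContent === 'Ready') {
      resolve(true)
    } else {
      // Pass a descriptive error so it appears in the Console Output section
      reject(new Error('Status element missing or not ready'))
    }
  } catch (err) {
    // Surface the underlying exception instead of swallowing it
    reject(err)
  }
})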

Malformed script. Promise must end with })

When using an asynchronous JavaScript step, our system expects your code to be fully wrapped in a Promise. The code must start with exactly return new Promise and end with exactly }). Here is a simple example:

return new Promise(function (resolve, reject) {
  resolve(true)
})

If the code in your step does not follow this format, it may not be treated as asynchronous (the test will not wait for it to complete) or it could result in an error. This error typically occurs when the code starts with return new Promise but does not end with }).
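
For a slightly fuller illustration of the required format, here is a sketch that performs genuinely asynchronous work by checking for a hypothetical #confirmation element after a delay. Note that it still starts with return new Promise and ends with }):

return new Promise(function (resolve, reject) {
  // Wait 5 seconds, then check whether the confirmation element has appeared
  setTimeout(function () {
    var el = document.querySelector('#confirmation')
    if (el) {
      resolve(true)
    } else {
      reject(new Error('#confirmation never appeared'))
    }
  }, 5000)
})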

Condition raised exception

This error means that the JavaScript code in your step condition threw an exception. In other words, the JavaScript code encountered an error and was not able to complete. For further explanation, please see the JavaScript error section above.
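
For context, a step condition is a snippet of JavaScript that should return true (run the step) or false (skip it). Here is a minimal sketch using a hypothetical #promo-banner element; querySelector() never throws for a valid selector, so this form is safe:

// Run this step only if the promo banner is present on the page
return document.querySelector('#promo-banner') !== null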

Error executing click on element

This error means that our browser automation was not able to click the element in the step, or that it encountered a problem immediately after the click was performed. These are the common scenarios where this error can occur:

  • The element you attempted to click was hidden or obscured by another element on the page. When this happens, our system will attempt to fall back to a JavaScript implementation to click the hidden element. However, in some cases this is still not possible and will lead to an error. We recommend identifying the element in the screenshot to ensure that it's fully visible.
  • The element that you clicked triggered some kind of external dialog box. The most common occurrence is when an <input type="file"> element is clicked in Firefox. This will trigger the file selection dialog in the browser and cause the automation to stall. Clicking on the <input type="file"> element prior to the file assignment step is not necessary in Firefox. This failure can also occur when other types of browser-level dialog boxes are triggered.

Error sending keys to input

This error means that our browser automation was not able to properly send keypresses into the element in the step. This problem can arise for a number of reasons:

  1. The target element is not an input element that accepts keypresses.
  2. The target element is covered or overlapped by another element and cannot be focused.
  3. The target element is part of a credit card checkout form with a complex iframe scheme.

We recommend checking all of these possibilities. Possibility #3 often occurs in Stripe, Braintree and Shopify checkout forms where each form input is placed into its own iframe. Our system is capable of assigning the values properly. However, if the iframe portion of the selector is not specific enough, the test may end up looking for the input in the wrong iframe. Ensure that the iframe portion of the selector clearly targets the proper iframe with a specific selector like iframe[id^="cc-expiration"]. Please contact support if you would like help adjusting your selectors.

Error performing step

This error means that the browser was unable to move forward with the step because it stalled while waiting for a network request or action to complete.

  • Ghost Inspector waits for each page to fully load before performing steps. In some cases, there may be assets on the page (often 3rd party JavaScript libraries or images) that are loading extremely slowly. You've probably experienced something similar to this in your own browser where the page has loaded and is visible, but you still see an endless "spinner" in the tab because of some outstanding asset that has not responded. Our system will attempt to stop inactive connections after 1 minute. However, if the connection is responding very slowly and not timing out, it can stall the browser automation. If this happens, investigate the page in question for any slow loading assets.
  • There may be an external dialog box that is preventing actions from proceeding. For instance, a file selection dialog has been triggered by clicking an <input type="file"> element or something like window.confirm() has been used. In both cases, our system will attempt to close these dialog boxes for you. However, in certain situations, adjustments may need to be made in your test. We recommend walking through the steps of your test locally to investigate whether an external dialog box is being triggered.