Experimentation

Brightspot offers built-in experimentation functionality to help you A/B test your content. This functionality empowers you to test, iterate, and optimize your digital experiences right within the same CMS in which you create content.

Use cases

  • An editor may want to test whether a promo on the site's homepage performs better with a call-to-action link written specifically for the story or with generic language like Continue Reading.
  • An editor may want to test whether a promo on a site page performs better with an image or with a video.
  • An editor may want to test whether redesigning an article template (for example, breaking the content into shorter segments with more images, pull quotes, or inline modules) will increase engagement.

Overview of the configuration process

To configure experimentation, take the following steps:

  1. Add experimentation permissions to a role so that users may work with experiments.
  2. Create, manage, and view the results of experiments.
  3. Add a widget to the dashboard that lists all experiments.
  4. Configure notifications so that experiment stakeholders are notified when certain actions occur.

Adding experimentation permissions to a role

Before you can use the experimentation feature, you must first ensure your role has the necessary permissions.

To add experimentation permissions to a role:

  1. From the left navigation, under Admin, click Users & Roles.

  2. In the Roles widget, select the role to which you want to grant experimentation permissions.

  3. Under Additional Permissions, click the add icon, then select Experimentation.

  4. In the Permissions field, select one or more of the following permissions for the role:

    • Archive Experiment—The role can archive experiments.
    • Archive Variation—The role can archive variations.
    • Calculate Experiment Duration—The role can use the duration calculator to estimate how long an experiment needs to run.
    • Create Experiment—The role can create new experiments.
    • Create Multiple Running Experiments—The role can create multiple, simultaneously running experiments. Brightspot recommends giving this permission only to advanced experimentation users, as improper use can cause skewed data and ineffective experiments.
    • Create Shared Goal—The role can create shared goals.
    • Create Variation—The role can create new variations.
    • Delete Experiment—The role can delete archived experiments.
    • Delete Variation—The role can delete archived variations.
    • Edit Experiment—The role can edit existing experiments.
    • Edit Variation—The role can edit existing variations.
    • End Experiment—The role can end ongoing experiments.
    • Restore Experiment—The role can restore experiments from an archived state.
    • Restore Variation—The role can restore variations from an archived state.
    • Toggle Status—The role can toggle the status of the experiment.
    • View Experiment Results—The role can view experiment results.
    • View Experiments—The role can see the Experiments widget on the content edit page.
    • View Variations—The role can view variations.
  5. Click Save.

Configuring experiments

Experiments help you make strategic content decisions that better capture visitors' attention. Each experiment comprises at least one variation, and each experiment produces results that inform your content decisions.

To configure an experiment:

  1. Search for and open an existing asset for which you want to create an experiment.
  2. In the side toolbar, click .
  3. In the Variations widget, click Create a Variation.
  4. In the New Variation pop-up, in the Name field, give the variation a name.
  5. Edit the asset as desired.
  6. Click Publish.
Note

Clicking Publish does not make the variation live or override the existing, default version of the asset. It simply saves the variation so that it may be used in experiments.

  7. Repeat steps 3–6 to add any additional variations.

  8. In the Experiments widget, click Create an Experiment.

  9. In the New Experiment pop-up, do the following:

    1. In the Name field, give the experiment a name.

    2. In the Description field, describe the experiment.

    3. Under Schedule Start On, select the date and time when the experiment should begin, or retain the default Start Immediately to run the experiment right away.

    4. Toggle on Auto Promote Winner to automatically promote the experiment winner to be the asset's primary variation.

    5. Click Next.

    6. Under Delivery Rule, fill out the following fields:

      1. Under Target, select one of the following:

        • All Visitors—All visitors are included in the experiment.
        • Segment—Define a segment of visitors that are included in the experiment.
      2. Under Exposure, select the percentage of this traffic to include in the experiment sample. By default, this percentage is set to 100.

      3. Under Traffic Allocation, select one of the following:

        • Allocate traffic dynamically—Brightspot automatically manages your traffic allocation in real time with a multi-armed bandit, based on the performance of the variations (a conceptual sketch of this approach appears after this procedure).
        • Manual—Manually allocate a fixed percentage of traffic to each variation and to the control.
      4. Under Variations, select the variations you want included in the experiment. If your role has the Create Multiple Running Experiments permission, you only see unused variations; this prevents multiple experiments from using the same variations.

      5. Under Control, select the variation that serves as the default. By default, the first version of the asset is the control.

      6. If you selected Manual under Traffic Allocation, also specify the fixed percentage of traffic to allocate to each variation and to the control.
    7. Click Next.

    8. Under Goal, select one of the following goals of the experiment:

      1. Number of Page Views—Tracks the number of pages a visitor has viewed during a session. For example, if you want visitors to engage with more of the site by viewing more than five pages, you would create the following goal:
        • Comparison: Greater Than
        • Page Views: 5
      2. Access to Page—Tracks whether a visitor has accessed a specific page/URL regardless of whether a button was clicked. For example, if you want to increase engagement with a page, you might add a module that appears only on pages that match a referral pattern. You would then create the following goal:
        • Comparison: Contains
        • URL: Example Sustainability Path (/sustainability)
      3. Click Tracking—Tracks whether a visitor clicks on a specific element on a page (like a button or a link). For example, if you want to test whether introducing a different call-to-action design will motivate users to click on the demo request form, you would create the following goal:
        • Page URL: Target Page
        • CSS Selectors: a.Button[href="target page"]
      4. Scroll Tracking—Tracks how far a visitor has scrolled down a page (for example, 50%, 70%, and so on). For example, if you want visitors to spend more time on a page and scroll through at least 75% of it, you would create the following goal:
        • Page URL: [Current permalink]
        • Scroll Target Percentage: 75
      5. Time Spent—Tracks how long a visitor stays on a site or a specific page. For example, if you want to redesign a page to see if users view more of its content, you would create the following goal:
        • Comparison Type: More Than
        • Duration: 5
        • Time Unit: Minutes
    9. Click Done.

Note

If your role has the Create Multiple Running Experiments permission, you can repeat steps 8 and 9 to create additional experiments. See Experimentation best practices at the bottom of this topic for additional information.
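
The Allocate traffic dynamically option described above relies on a multi-armed bandit, meaning Brightspot shifts traffic toward variations that are performing better as results accumulate. Brightspot does not document the exact algorithm it uses, so the following Python sketch is only a conceptual illustration of one common bandit strategy (Thompson sampling over conversion counts); the class, names, and simulated numbers are hypothetical.

import random
from collections import defaultdict

class ThompsonSamplingAllocator:
    """Conceptual multi-armed bandit; not Brightspot's implementation.

    It favors variations that complete the experiment's goal more often,
    which is the general behavior the dynamic allocation option describes.
    """

    def __init__(self, variations):
        self.variations = list(variations)
        self.successes = defaultdict(int)  # goal completions per variation
        self.failures = defaultdict(int)   # non-converting visits per variation

    def choose(self):
        # Sample a plausible conversion rate for each variation from a Beta
        # posterior, then serve the variation with the highest sampled rate.
        samples = {
            v: random.betavariate(self.successes[v] + 1, self.failures[v] + 1)
            for v in self.variations
        }
        return max(samples, key=samples.get)

    def record(self, variation, converted):
        if converted:
            self.successes[variation] += 1
        else:
            self.failures[variation] += 1

# Tiny simulation: the variation converts slightly better than the control,
# so over time the allocator sends it a growing share of the traffic.
allocator = ThompsonSamplingAllocator(["control", "variation-b"])
true_rates = {"control": 0.05, "variation-b": 0.08}
shown = defaultdict(int)
for _ in range(10_000):
    v = allocator.choose()
    shown[v] += 1
    allocator.record(v, random.random() < true_rates[v])
print(dict(shown))  # most impressions end up on "variation-b"

By contrast, the Manual option keeps whatever fixed percentages you assign, regardless of how the variations perform.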

Checking experiment status

Once you have created an experiment, you can check its status.

To check experiment status:

  1. Search for and open an existing asset that has an experiment whose status you want to check.
  2. In the side toolbar, click .
  3. In the Experiments widget, find the existing experiment whose status you want to view.
  4. View status.
Note

Statuses include the following:

  • Running—The experiment is currently running.
  • Running (MTU Limited)—The experiment has been temporarily paused because the monthly tracked users/site traffic limit has been reached.
  • Paused—The experiment is paused.
  • Planned—The experiment has been scheduled, but has not started yet.
  • Planned (MTU Limited)—The experiment has been scheduled, but has not started yet, and will not start until the monthly tracked users/site traffic limit is reset.
  • Completed—The experiment has completed.
  • Archived—The experiment was archived.

Editing an experiment

Once you have created an experiment, you can update it.

Caution

Modifying an experiment in progress may produce skewed results. See Experimentation best practices at the end of this topic for more details.

To edit an experiment:

  1. Search for and open an existing asset that has an experiment you want to edit.
  2. In the side toolbar, click .
  3. In the Experiments widget, find the existing experiment that you want to edit.
  4. Click > Edit Experiment.
  5. Make the desired changes.
  6. Click Save.

Pausing an experiment

You can pause an experiment when needed.

To pause an experiment:

  1. Search for and open an existing asset that has an experiment you want to pause.
  2. In the side toolbar, click .
  3. In the Experiments widget, find the existing experiment that you want to pause.
  4. Click > Pause Experiment.

Archiving an experiment

If an experiment is not stable, you can archive it.

To archive an experiment:

  1. Search for and open an existing asset that has an experiment you want to archive.
  2. In the side toolbar, click .
  3. In the Experiments widget, find the existing experiment that you want to archive.
  4. Click > Archive Experiment.

Searching and filtering for variations and experiments

At times, you may have many variations and experiments. In such cases, you can search and filter for variations and experiments so you can quickly take the appropriate actions.

To search for variations:

  1. Search for and open an existing asset for which you want to search for a variation.
  2. In the side toolbar, click .
  3. In the Variations widget, use the search bar to search for your desired variation.

To filter for variations:

  1. Search for and open an existing asset for which you want to filter for a variation.
  2. In the side toolbar, click .
  3. In the Variations widget, click on the Any Experiment drop-down list.
  4. Select the desired experiment.

Brightspot displays only those variations used in the experiment you specify.

To search for experiments:

  1. Search for and open an existing asset for which you want to search for an experiment.
  2. In the side toolbar, click .
  3. In the Experiments widget, use the search bar to search for your desired experiment.

To filter for experiments:

  1. Search for and open an existing asset for which you want to filter for an experiment.
  2. In the side toolbar, click .
  3. In the Experiments widget, click on the All Statuses drop-down list.
  4. Select the desired status of the experiment you want to view.

Brightspot displays only those experiments with the status you specify.

Ending an experiment

At times, you may find it necessary to end an experiment before a winner has been determined.

To end an experiment:

  1. Search for and open an existing asset whose experiment you want to end.
  2. In the side toolbar, click .
  3. In the Experiments widget, find the existing experiment that you want to end.
  4. Click > End Experiment.
  5. In the End Experiment pop-up, toggle on Archive variations associated with this experiment if you want to archive associated variations.
  6. Click Confirm.
  7. Click Done.

Brightspot marks the experiment Status as Completed but will report the Winner as Undetermined. In this case, you must manually promote a variation as the winner of an experiment. For details, see Manually promoting a variation as the winner of an experiment.

Manually promoting a variation as the winner of an experiment

If you end an experiment before a winner is determined (for example, because it is not yielding the expected results), you can manually promote one of its variations as the winner.

To manually promote a variation as the winner of an experiment:

  1. Search for and open an existing asset that has a running experiment whose winner you want to manually promote.
  2. In the side toolbar, click .
  3. In the Experiments widget, find the existing experiment whose winner you want to promote.
  4. Click > End Experiment.
  5. In the Variations widget, find the variation you want to promote as the winner.
  6. Click > Promote.
  7. In the Promote Variation pop-up, click Confirm.
  8. Click Done.

Viewing experiment results

Once an experiment has ended, you can view its results to inform future content strategy decisions.

To view an experiment's results:

  1. Search for and open an existing asset that has an experiment whose results you want to view.
  2. In the side toolbar, click .
  3. In the Experiments widget, find the existing experiment whose results you want to see.
  4. Click the bar chart icon to open the Experiment Results pop-up.
  5. View results.
Tip

The following descriptions may help you interpret the results:

  • Reliability—A score lower than 95% means that the test is not stable and requires more data.
  • Confidence Interval—A small interval is better because it indicates a more reliable test.
  • Recommended Variation vs. Winning Variation—A recommended variation indicates a variation that is currently performing well, but could still change. A winning variation is confirmed when all metrics are green.
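
Brightspot does not publish the formulas behind the Reliability and Confidence Interval fields, but they are the kind of statistics typically used when comparing two conversion rates. The Python sketch below shows one common way such figures are derived (a two-proportion z-test plus a normal-approximation interval for the lift); the function name and sample numbers are hypothetical.

import math

def compare_variation(control_conv, control_visits,
                      variation_conv, variation_visits, z=1.96):
    """Illustrative two-proportion comparison; not Brightspot's exact math."""
    p1 = control_conv / control_visits
    p2 = variation_conv / variation_visits
    diff = p2 - p1

    # Pooled standard error for testing whether the difference is real.
    pooled = (control_conv + variation_conv) / (control_visits + variation_visits)
    se_pooled = math.sqrt(pooled * (1 - pooled)
                          * (1 / control_visits + 1 / variation_visits))
    z_score = diff / se_pooled if se_pooled else 0.0
    # Two-sided confidence that the observed difference is not just noise.
    reliability = math.erf(abs(z_score) / math.sqrt(2))

    # Unpooled standard error for the confidence interval on the lift.
    se = math.sqrt(p1 * (1 - p1) / control_visits
                   + p2 * (1 - p2) / variation_visits)
    return reliability, (diff - z * se, diff + z * se)

reliability, (low, high) = compare_variation(
    control_conv=120, control_visits=4000,      # hypothetical control results
    variation_conv=165, variation_visits=4000)  # hypothetical variation results
print(f"reliability = {reliability:.1%}, lift between {low:.2%} and {high:.2%}")

Read this way, a reliability below 95% matches the widget's warning that the test is not yet stable, and a wide confidence interval means the estimated lift could still swing considerably as more data arrives.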

Experimentation best practices

Experimentation is a powerful feature, and Brightspot recommends being mindful of the following best practices:

  1. One experiment per asset—Running multiple experiments on the same page or component can interfere with results; however, you may give a role the ability to create and run multiple experiments simultaneously if desired.
  2. Use a 1:1 variation split whenever possible—A 50/50 traffic split between control and variation helps you reach statistical significance faster and reduces sampling bias.
  3. Estimate test duration upfront—Use the duration calculator to determine the minimum time your test should run based on expected traffic and conversion goals (a rough version of this calculation is sketched after this list).
  4. Test during normal traffic patterns—This will reduce skewed data that may occur when running tests during atypical periods like holidays, product launches, or major announcements.
  5. Avoid mid-test changes—Modifying a variation or experiment while it is running can invalidate your data. If a major change is needed, stop the experiment and launch a new one.
  6. Test one variable at a time—To best understand what is driving results, isolate your changes. Testing multiple elements at once makes it difficult to attribute impact to a specific change.
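
Best practices 2 and 3 can be made concrete with a back-of-the-envelope calculation. The details of Brightspot's built-in duration calculator are not documented here, so the Python sketch below only applies a textbook two-proportion approximation (roughly 95% confidence and 80% power, with a 50/50 split); the function name, parameters, and example traffic figures are hypothetical.

import math

def estimate_duration_days(baseline_rate, expected_lift, daily_visitors,
                           exposure=1.0, z_alpha=1.96, z_beta=0.84):
    """Rough per-variation sample size and run time for a two-variation test.

    baseline_rate  - current conversion rate of the control (e.g., 0.04)
    expected_lift  - relative improvement you hope to detect (e.g., 0.10 = +10%)
    daily_visitors - eligible visitors per day, before the Exposure percentage
    exposure       - fraction of traffic included in the experiment sample

    This is a standard approximation, not Brightspot's duration calculator.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)
    p_bar = (p1 + p2) / 2
    effect = abs(p2 - p1)
    # Normal-approximation sample size per variation (alpha = 0.05, power = 0.8).
    n_per_variation = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                        + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
                       / effect) ** 2
    visitors_per_day_per_variation = daily_visitors * exposure / 2  # 50/50 split
    days = math.ceil(n_per_variation / visitors_per_day_per_variation)
    return math.ceil(n_per_variation), days

n, days = estimate_duration_days(baseline_rate=0.04, expected_lift=0.10,
                                 daily_visitors=5000)
print(f"~{n} visitors per variation, roughly {days} days at full exposure")

The main takeaway is that small expected lifts on low-traffic pages can require weeks of data, which is why estimating duration before launch, rather than stopping a test early, is listed as a best practice.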