How AI Is Helping Organisations Design Better eLearning with A/B Testing


For years, most eLearning design decisions have been based on experience, preference and stakeholder input.

  • What looks more engaging.
  • What feels right for the audience.
  • What has worked before.

And while experience absolutely matters, it often leads to a familiar problem – we build once and hope it works. In an organisational context, this becomes even more complex because you’re not designing for one learner. You’re designing for frontline teams, managers, executives and new starters with no context.

Each of these groups interacts with learning differently. Yet some organisations still deliver a single learning experience to everyone.

The result is a disconnect between content and audience. Some learners feel overwhelmed, others disengaged, and many complete training without meaningful impact, or simply don’t complete it at all.

Why A/B Testing Has Been So Hard to Apply in Learning

A/B testing sounds simple – create two versions, test them and see what works. But in organisational learning, the challenge hasn’t just been understanding the concept. It has been the effort required to make it happen. Creating variations often meant:

  • Rewriting content
  • Rebuilding interactions
  • Reworking assessments
  • Republishing and retesting

In other words, double the work. So instead, organisations defaulted to a middle ground: a version of learning that works reasonably well for most people but rarely works exceptionally well for anyone.

AI Changes the Equation Completely

AI has removed one of the biggest barriers to A/B testing: time. What once took days can now take hours. Using tools such as the Articulate 360 AI Assistant, learning teams can now quickly generate variations of:

  • Content length and structure
  • Interaction styles
  • Tone and language
  • Assessment approaches

Instead of rebuilding from scratch, you can work from a base version and adapt it. This makes it possible to test different approaches without significantly increasing development effort.

More importantly, it allows organisations to explore something far more valuable than variation for its own sake: the right fit.

Designing eLearning That Fits Different Roles

One of the biggest shifts enabled by AI and A/B testing is moving away from designing for the “average learner.”

Because in reality, that learner doesn’t exist. What you’re aiming for instead is the right fit between:

  • Content and role
  • Interaction and experience level
  • Depth and practical need
  • Time investment and real-world application

For example:

  • Frontline staff may benefit from short, scenario-based learning
  • Managers may need decision-focused content with context
  • Executives may prefer concise, insight-driven overviews

A/B testing allows you to explore these differences. Not by guessing, but by observing how different groups respond.

What A/B Testing Looks Like Across an Organisation

At an organisational level, A/B testing becomes more than just testing design elements. It becomes a way to understand how different parts of your workforce learn best.

For example:

  • Version A: Generalised content
  • Version B: Role-specific contextualisation

Or:

  • Version A: Click-through learning
  • Version B: Scenario-based decision making

Or:

  • Version A: Full-length module
  • Version B: Streamlined version

The key is to test one variable at a time so you can clearly identify what is influencing the outcome.

Start with a Small Pilot Group

A/B testing does not need to be rolled out across your entire organisation from the start. In fact, it works best when it begins small.

Select a pilot group of learners and divide them into two cohorts.

  • Group A completes Version A
  • Group B completes Version B

This allows you to test ideas in a controlled way, identify patterns early, and refine your approach before scaling.
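The cohort split above can be sketched in a few lines. This is a minimal illustration, not part of any specific platform: the function name and learner IDs are hypothetical, and the key idea is that assignment should be random (not by team or alphabet) so that differences in outcomes reflect the version tested rather than a pre-existing difference between the groups.

```python
import random

def split_pilot_group(learner_ids, seed=42):
    """Randomly assign a pilot group to two equal-sized cohorts.

    Random assignment helps ensure any difference in outcomes
    reflects the version tested, not who happened to be in which group.
    """
    ids = list(learner_ids)
    random.Random(seed).shuffle(ids)       # seeded so the split is reproducible
    midpoint = len(ids) // 2
    return ids[:midpoint], ids[midpoint:]  # (Group A, Group B)

# Hypothetical pilot of 20 learners
learners = ["learner-%03d" % i for i in range(1, 21)]
group_a, group_b = split_pilot_group(learners)
```

A fixed seed makes the split auditable: you can show stakeholders exactly how the cohorts were formed and reproduce the assignment later.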

What to Measure Across Different Roles

To make A/B testing meaningful, you need to measure more than completion. Look at a combination of behavioural and performance data.

  • Time to complete – do some roles move faster through streamlined content while others benefit from more depth?
  • Engagement patterns – where are learners interacting more or less?
  • Assessment outcomes – are learners demonstrating understanding or simply progressing?
  • Decision-making quality – do scenario-based interactions improve real-world choices?
  • Feedback by role – what are different groups saying about relevance and usefulness?

When you analyse this data across roles, you start to see clear patterns emerge.
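Before acting on a pattern, it is worth checking that the difference between cohorts is bigger than chance would produce. As one sketch of how that check could look, the standard two-proportion z-test below compares pass rates between the Version A and Version B cohorts; the function and figures are illustrative assumptions, not data from any real pilot.

```python
import math

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """Two-proportion z-test comparing pass rates of two cohorts.

    Returns the z statistic; |z| above roughly 1.96 suggests the
    difference is unlikely to be chance at the 5% level. A sketch only:
    real pilots should also consider effect size and sample adequacy.
    """
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)          # combined pass rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical pilot: 40/50 passed Version A, 28/50 passed Version B
z = two_proportion_z(40, 50, 28, 50)
```

With small pilot groups the test will often be inconclusive, which is itself useful: it tells you to gather more data before scaling a design decision across the organisation.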

Using AI to Scale What Works

Once you identify what works for different audiences, AI allows you to scale that approach efficiently.

You can:

  • Adapt content for different roles without starting from scratch
  • Adjust tone and complexity based on audience needs
  • Replicate successful interaction patterns
  • Continue testing and refining over time

This creates a learning environment that evolves with your organisation.

Moving from One-Size-Fits-All to Role-Based Learning

A/B testing supported by AI enables a shift from standardised learning to aligned learning. Instead of delivering the same experience to everyone, organisations can:

  • Tailor learning to different roles
  • Improve relevance and engagement
  • Reduce unnecessary content
  • Increase real-world application

This is where learning starts to feel purposeful: not just something to complete, but something that genuinely supports performance.

Why This Matters for Modern Learning Strategies

Organisations are under increasing pressure to deliver learning that is both efficient and effective. Generic content is no longer enough. By combining AI with A/B testing, organisations can:

  • Continuously improve learning design
  • Align content to real job roles
  • Reduce wasted time in training
  • Increase engagement and application

This shifts learning from a compliance activity to a performance tool.

A Practical Starting Point

If you’re looking to introduce this approach, keep it simple. Start with one module, one audience segment and one variable to test.

For example:

  • Test a role-specific version versus a general version
  • Test a scenario-based approach for frontline teams
  • Test a shorter version for leadership

Use AI to create variations quickly, run a pilot, and measure the results. Then apply those insights to your next build.

Where B Online Learning Fits

At B Online Learning, we work with organisations to move beyond standardised content and towards learning that aligns to real roles and real work.

By combining AI-supported development with strong instructional design, we help:

  • Identify the right fit for different audiences
  • Test and refine learning approaches
  • Scale what works across the organisation
  • Build learning that reflects how people actually learn

Final Thought

A/B testing has always had the potential to improve eLearning, and AI has made it achievable.

For organisations, the real opportunity lies in using it to move beyond generic learning and towards something far more effective. Learning that fits, aligns and improves performance.

Ready to Apply This in Practice?

The next step is knowing how to apply this in your own learning design.

If you’re looking to explore how AI can support smarter eLearning design, including approaches like A/B testing, our Certified Articulate 360 AI Assistant workshop shows you how to apply these tools in a practical, real-world context. Designed for L&D teams and eLearning developers, it focuses on using AI to design, build, and refine learning more efficiently.

Get in Touch

We would love to hear from you. Give us a call or fill in the form and we will contact you soon.






