Testing framework

I want to break the world and rebuild it from scratch. But as we know, it never works that way. Even just shifting the responsibility for code quality onto the shoulders of the developers requires preparation. That is why before refactoring, before moving to CI/CD, and most of all before building a new framework, we needed a reliable testing framework.

The problem is, when you start looking at test automation, there are so many options.

(Viv Richards, "Spot the Difference: Automating Visual Regression Testing")

The quest started with an internal tool that the cooperative wanted us to use. It looked nice, and we got it running quite fast. But thankfully, the problems surfaced very soon: there was no reliable documentation, and the ecosystem and community were small or nonexistent. Worst of all, the tool was maintained by one (brilliant) developer, who had just created a bug that blocked us for two weeks.

So I declared a one-week period for everyone to dedicate some time to read and do their own research. I wanted all the members to search for themselves, so that when we discussed the options, even the ICs without strong opinions would be familiar with the terms and considerations.

But as a lead, I find it important to stay hands-on with the technology we use in our daily work. I wanted to feel the frameworks myself and try to build something as close to my dream testing suite as possible, and only then present my thoughts to the team, asking them to be as critical as possible.

The structure was simple:

/selectors/
/selectors/header.json
/selectors/menu.json
/selectors/...

/environments/
/environments/qa/
/environments/qa/generalData.json
/environments/qa/users.json
/environments/qa/...
/environments/staging/
/environments/staging/generalData.json
/environments/staging/users.json
/environments/staging/...
/environments/production/...

/commonActions/
/commonActions/signIn.ts
/commonActions/changeProject.ts
/commonActions/...

/api-tests/...
/integrations/...

One folder with all the documents/screens in the app. One folder with all the environment data. One folder for the common site actions. And two different types of tests. The one thing left out was unit tests, which need to be part of the application code repository.

API tests

We started with the simplest test suite, using an improved CRUD utility on top of Jest.

test('Create ETL: check id and some values', async () => {
  // serverRes and newId live at suite scope, shared with the following CRUD tests
  serverRes = await Request.post(Users.admin, currentUrl, exampleData);
  newId = serverRes.id;
  expect(serverRes).toBeDefined();
  expect(typeof serverRes).not.toBe('string'); // guard against an error string coming back instead of JSON
  expect(serverRes.id).toBeGreaterThan(0);
  expectJsonToBeEqual(serverRes, exampleData);
  expect(serverRes.projectId.toString()).toEqual(currentEnvironment.projects.thisTestProjectId);
});

Environments

Since the goal was to be able to run the tests on every environment, we created a class that sets the current environment. The only catch was that the class needed to support all the different types of tests, getting the value for each execution from a different place: tests from different frameworks, console commands with different syntaxes, tests run from the package scripts, and tests run and debugged from the IDE.

Another thing you can notice is that we refer to users by name: Users.superUser. Each test can add as many users as it needs to the configuration without worrying about their values; the CRUD utility gets the user data from the environment files.

E2E & integration tests: where the fun starts

Here the story gets interesting. After the research, I wrote the integration tests with WebdriverIO, proudly displaying an Express/Node server that could pass through or mock the app's requests. But while I was presenting it, one of the developers mentioned that this specific test was an e2e test, not an integration test. I paused for a moment, then announced that the meeting was canceled since I had gotten it wrong, and that I was going back to the drawing board.

A day later the Express/Node server was gone, and in its place were two folders: one with WebdriverIO for e2e tests, and one with WebdriverIO & Puppeteer for integration tests, with the ability to mock server requests and run an integration test without depending on a server.

E2E test

it('change project from the top menu', () => {
  login(Users.superUser);

  const project1 = env.projectsList[0];
  const project2 = env.projectsList[1];

  changeProject(project1);
  expect(getCurrentProjectName()).toBe(project1.name);

  changeProject(project2);
  expect(getCurrentProjectName()).toBe(project2.name);
});

Common actions

Both the integration tests and the e2e tests were built on top of the common action methods: UI methods that perform common UI behaviors and end with assertions, verifying their success before the test continues and operates on the wrong location or data.

const openProjectMenu = () => {
  const settingsButton = $(selectors.header.projectSettings.projectSettingsButton);
  settingsButton.waitForClickable({ timeout: 8000 });
  settingsButton.click();
};

const getCurrentProjectName = () => {
  const projName = $(selectors.header.projectNameDisplay);
  browser.waitUntil(
    () => projName.getText() !== '',
    {
      timeout: 5000,
      timeoutMsg: "Project name didn't show after 5s"
    });
  return projName.getText();
};

const changeProject = (project) => {
  if (getCurrentProjectName() !== project.name) {
    openProjectMenu();
    $(selectors.header.projectSettings.projectSelectBoxDropDown).click();
    $(`[data-value="${project.id}"]`).click();
  }
  expect(getCurrentProjectName()).toBe(project.name);
};

Selectors

As can be seen in both the tests and the common actions, all the UI selectors are placed in one well-organized location.

The target was to have selectors that are as behavior-driven as possible: not relying on automation tags or element IDs, but querying for what appears on the screen: a button with a specific text, an icon, or some title in the header. Only as a last resort did we use document selectors.
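For illustration, a selectors file such as header.json might look like this (the contents are invented, not the real file); WebdriverIO's text selectors (the button= syntax) let us query by what the user actually sees:

```json
{
  "projectNameDisplay": "header h1",
  "projectSettings": {
    "projectSettingsButton": "button=Project settings",
    "projectSelectBoxDropDown": "[role='listbox']"
  },
  "userDialog": {
    "userDialogIcon": "[aria-label='Account']",
    "userEmail": ".user-dialog .email",
    "userName": ".user-dialog input[name='displayName']"
  }
}
```

Keeping the selectors in JSON also means both the e2e and the integration suites read from the same source of truth.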

Integration test

While playing with Puppeteer, I wondered about the advantage of continuing to use WebdriverIO. If we needed Puppeteer for the HTTP mocks, why not build everything with it? It needs a little more wrapping, but it provides the strongest control over the browser:

describe('Header elements', () => {

  beforeAll(async () => {
    await mock.requests(page, [
      [mock.data.workspace, (url) => url.includes('/v1/workspace')],
      [mock.data.project(7), (url) => url.endsWith('/v1/projects/7')],
      [mock.data.documents, (url) => url.includes('/v1/documents')]
    ]);
  });

  test('Check user dialog', async () => {
    await page.goto(env.workspaceServer, { waitUntil: 'networkidle0' });
    await page.click(selectors.header.userDialog.userDialogIcon);

    let text = await element.get(page, selectors.header.userDialog.userEmail, element.values.innerText);
    expect(text).toBe(mock.data.workspace.user.email);

    text = await element.get(page, selectors.header.userDialog.userName, element.values.value);
    expect(text).toBe(mock.data.workspace.user.userName);
  }, timeout);
});

As can be seen, the HTTP mocks were wrapped nicely:

const requests = async (page, mocks = []) => {
  await page.setRequestInterception(true);
  page.on('request', (request) => {
    // find the first mock whose matcher accepts this URL
    const match = mocks.find(([, matcher]) => matcher(request.url()));
    if (match) {
      request.respond({
        status: 200,
        contentType: 'application/json',
        body: JSON.stringify(match[0])
      });
    } else {
      request.continue();
    }
  });
};
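The element.get call in the integration test is another thin wrapper, this one around Puppeteer's page API. Only the names element.get and element.values appear in the original, so the implementation below is a guess at its shape:

```javascript
// Hypothetical sketch of the `element` wrapper: wait for a selector to
// appear, then read a single property (innerText, value, ...) from the
// first matching element.
const values = { innerText: 'innerText', value: 'value' };

const get = async (page, selector, property) => {
  await page.waitForSelector(selector, { timeout: 5000 });
  // page.$eval runs the callback in the browser against the matched element
  return page.$eval(selector, (el, prop) => el[prop], property);
};

module.exports = { get, values };
```

Because it only touches page.waitForSelector and page.$eval, the wrapper is trivial to stub in tests that do not spin up a real browser.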

Conclusion

UI tests it’s still a struggle, especially for us, because we work in two-week iterations. People don’t have time to stop to try out new things. They’re you constantly trying to sort of release products a pace, and so, even if you give them the wheels for the cart, they just can’t stop, because they don’t have the time.

The problem is they don't automate because they don't have time, and they don't have time because they don't automate. They need to try and break that cycle, so it's a bit of a catch-22.

One week, four different testing frameworks, plenty of fun, and long nights in front of the screen. In the end, everything worked together nicely and elegantly: one repository with Jest, WebdriverIO & Puppeteer. But still, we decided to use TestCafe and Jest instead. And that is a totally different story.

test('sign-in: should show the report', async (testController) => {
  await testController
    .typeText(mecSelectors.username, MEC.user.userName)
    .typeText(mecSelectors.password, MEC.user.password)
    .click(mecSelectors.loginButton)
    .expect(Selector(report.title).withText(currentEnvironmentReports.reportTitle).exists).ok()
    .expect(Selector(mecSelectors.cportDataExample).exists).ok();
});