Automating the automation: Automating VSTest

As part of my career, I have, on multiple occasions, found it useful to automate Visual Studio test (VSTest) execution. The goal of this article is to show how to do so and why. I hope you find it useful whatever your experience; experienced readers might still pick up some ideas on using the capabilities of vstest, batch files and the generated logs, setting up the basics so we can revisit some of these parts in the future.

If you have never created or run tests in Visual Studio, take this as a starter guide on executing automation. If you have ever run Visual Studio unit tests, you should already be familiar with the Test Explorer tool.

At times, we might want to include these tests as part of a different system rather than executing them from Visual Studio. There is existing functionality in the command line to help you do so: VSTest.Console.exe is a command-line tool to run Visual Studio tests.

Steps 1 and 2 set up the test data for this post; feel free to skip them if you find them too simple. Otherwise, I hope they help you get started with unit testing in Visual Studio and general automation with it.

Disclaimer: I’m explaining this from a Windows operating system’s perspective, but it could be done similarly from anywhere you can run the vstest console tool.

Step 1: Create a unit test project

Note: needless to say, you need Visual Studio installed to follow these steps.

We are going to start by creating a test project in Visual Studio. To do so, click File -> New -> Project and select the C# ‘Test’ project template.

Select the destination folder and insert a name for the project. Refer to the Visual Studio documentation if you have any issues.

Step 2: Create tests

The project should come with a unit test class by default. If for some strange reason that’s not the case, or you want to create another class, you can right-click the project and select Add -> Unit Test (note that if you do this when you already had a class, you might end up with a duplicate TestMethod1 in your list later on).

Now we are going to add a command inside the test that comes by default: because we are not adding test logic for now, we just add an Assert.Fail() call. We are also adding a Priority attribute above this test. Then we can copy and paste the block from [TestMethod] down to its closing ‘}’ three times to get a few tests in our Test Explorer. Change the names of the methods so each has a different name (for the example we have TestMethod1, TestMethod2, TestMethod3 and TestMethod4).

For the example, the odd tests (TestMethod1 and TestMethod3) have priority 2 and the even tests (TestMethod2 and TestMethod4) priority 1. A rough sketch of the resulting class is below.
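A minimal sketch of what the class could look like (assuming the default MSTest template; only the first two methods are shown in full):

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace UnitTestProject1
{
    [TestClass]
    public class UnitTest1
    {
        // Odd test: priority 2
        [Priority(2)]
        [TestMethod]
        public void TestMethod1()
        {
            Assert.Fail();
        }

        // Even test: priority 1
        [Priority(1)]
        [TestMethod]
        public void TestMethod2()
        {
            Assert.Fail();
        }

        // TestMethod3 (priority 2) and TestMethod4 (priority 1) follow the same pattern
    }
}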

To be able to see the Test Explorer, we should go to the menu Test -> Windows -> Test Explorer.

And to be able to see the tests we have created, we should go to Build -> Rebuild Solution.

Now, you can try running the tests from the left-hand side. They should fail, because we have not added any logic to them. If you are interested in how to do this, leave a comment below and I’ll write another article with more details. You can also look up the Visual Studio documentation to learn more.

In the next step we are going to learn how to execute the test cases outside Visual Studio.

Step 3: VSTest

Now, if we want to execute these tests outside Visual Studio’s environment, we can use the console tool that is typically installed under the following path:

C:\Program Files (x86)\insert VS version here\Common7\IDE\CommonExtensions\Microsoft\TestWindow

Where “insert VS version here” would be something like “Microsoft Visual Studio 14.0” or “Microsoft Visual Studio\2017\Enterprise”… basically, you can navigate to your Program Files (x86) folder and look for a folder with Microsoft Visual Studio in its name, then continue with the rest of the path as above.

Inside the aforementioned folder you can find “vstest.console.exe“. Alternatively, you can download the NuGet package and search for it in the path where it is installed.

Typically we would access this file from the command line (be it an admin cmd or the Visual Studio Developer Command Prompt).

We can open cmd (as administrator, by right-clicking on it) and type “cd path”, where path is the path to the above folder with the correct Visual Studio version for your installation. It is best to surround this path with quotes (“”): if it contains spaces, the program will then recognise them as part of the path and not as a separate argument.

Now, you can select what type of tests to run by adding some parameters to the call, but you can test it by calling “vstest.console.exe” followed by a space and the path to the dll of the test project. For example: vstest.console.exe c:\....\UnitTestProject1\UnitTestProject1\bin\Debug\UnitTestProject1.dll

You should see the run results in the console: all four tests listed as failed, since you’ve set up your tests to fail. Later on, once your tests are finished and they are all passing, the same run will show them as passed.

If you are new to testing, you are probably wondering why we would want to run tests in this complicated way instead of using the beautiful UI from Visual Studio. If so, keep on reading; this is where things start to get interesting.

Now, imagine we want to run just the first and third test methods; for this we make the following call: vstest.console.exe pathtodll /Tests:TestMethod1,TestMethod3

As you can see, now only TestMethod1 and TestMethod3 were executed (you can use the names of your own test methods). Note that they should still be failing for you; I show the passing output only because it is cleaner.

Remember we set up priorities before? So how do we run just the tests with higher priority? vstest.console.exe pathtodll /TestCaseFilter:"Priority=1"

In Microsoft’s vstest.console documentation there are many more ways to use this tool, including parallel runs. Have you started to see why this tool could be very powerful? What else could we do with it?
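For instance, several test containers can be run in one go and in parallel (a sketch; the dll names are placeholders, and the /Parallel switch may depend on your vstest version):

vstest.console.exe tests1.dll tests2.dll /Parallel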

Step 4: Creating batch files

The coolest part is that we can create a file with these calls and then use that file virtually anywhere (continuous integration, nightly builds, as part of an agent to be executed on a remote computer…). These sorts of files are called “batch files” because they run a set, or batch, of operations.

The first line of the batch file CDs into the vstest console folder. Then we add calls to the tool to run the tests we want. Finally, we add a pause to verify that everything is working fine.

To do this, just use Windows Notepad and type the instructions below. When saving, use the .bat extension instead of .doc or .txt.

cd C:\Program Files (x86)\insert VS version here\Common7\IDE\CommonExtensions\Microsoft\TestWindow
vstest.console.exe pathtodll /Tests:TestMethod1,TestMethod3 /Logger:Console
pause

Remember to change pathtodll to your actual project and add the right VS version. Now, if you execute this newly created file (as administrator), you should see the same results as before. Pressing any key closes the console that opens up.

If you don’t want to see the results in a console (as would be the case when you integrate this file with other projects), just remove the last command (pause). The logs of the results are saved in the current folder (the one for the vstest program) and we will analyse them in the last section.

Explaining continuous integration or multi-project execution would be a bit more complicated and out of the scope of this post (but do leave a comment or reach out on Twitter if you want me to explain it). But I can explain how to set up your computer to run this file every night with Windows!

For this, you need to open the Task Scheduler (you can search for it in the lower-left search box on Windows). Then click on “Create a basic task” on the right-hand side and go through the assistant. The most important things are to specify the frequency you want the file to run at and to browse for the saved .bat file. Now this file will run with the indicated frequency automatically (if your computer is turned on at that time).
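If you prefer the command line, the same schedule can be created with the schtasks tool (a sketch; adjust the task name, time and path to your setup):

schtasks /Create /SC DAILY /ST 20:00 /TN NightlyTests /TR "C:\path\to\runtests.bat"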

The next thing you will want to do is check the logs that this file generates every time it is executed.

Step 5: Saving logs

Running automatic tasks and tests is awesome, but not really useful unless we know the results so we can act on them.

First of all we should check where the logs are saved. Usually they go into a folder called “TestResults” within the folder from which you ran the file. Because we were using cmd (admin) and navigating to the vstest.console folder, it gets created there; in fact, that’s the reason we need administrator permission to run the file. There is a vstest.console parameter to change this location (/ResultsDirectory in newer versions), although my vstest.console was not recognising it, so I stuck to the vstest.console folder for the purposes of this article.

I think trx logs are useful and should always be enabled by default. To get them we can add a parameter to vstest: “/Logger:trx“. The generated file can be opened with Notepad and gives you information about the tests that ran. However, we will focus on /Logger:Console as it is simpler.

Another way of retrieving the logs is by using the output redirection built into the Windows shell. We just need to append “> pathToFile\file.txt”, where pathToFile\file.txt is the path all the way to a file with a txt extension (the file does not need to exist beforehand; the redirection creates it). This way a file will be saved with the contents of the console.

cd C:\Program Files (x86)\insert VS version here\Common7\IDE\CommonExtensions\Microsoft\TestWindow
vstest.console.exe pathtodll /Tests:TestMethod1,TestMethod3 /Logger:Console > pathToFile\file.txt
pause

You might want to save a different file each time (by adding date and time to the name) or replace the latest one (by keeping the same name as above), depending on how frequently it is generated (if runs are far enough apart, we may not mind replacing it).

Using the parameter “/Logger:TimelineLogger” can give you a bit more information about execution times, but it will make the output harder to parse later on.

Step 6: Playing with the logs

Now we have a text file with the logs… but reading all the files all the time might get a bit boring. What to do with them? You get it: automate it!

Let’s output just the number of test cases that have failed. We could do this in any programming language, but let’s keep going with batch. Why? Because I feel people underestimate it, so here it is:

@echo off
set /a FAILED=0

rem Count the lines in file.txt that contain the word "Failed"
for /f %%i in ('findstr /i /c:"Failed" file.txt') do (
    set /a FAILED=FAILED + 1
)

rem The summary line also contains "Failed", so take one off
set /a FAILED=FAILED - 1

if %FAILED% gtr 0 (
    echo Failed: %FAILED%
)
pause

The first line (@echo off) stops the commands themselves from being echoed, so only our output comes up on the screen. Then we create a variable to hold the number of times we find the string “Failed”, loop over a search of the file called “file.txt”, take one off (because at the end there is a summary line with the word “Failed” on it, which we don’t want to count) and, only if the result is greater than 0, print it.

When executed against a run where the priority 1 test cases fail, the console shows the failed count.

If everything passes, nothing is printed for this example.

Please keep in mind that with this method any test case that has the word “Failed” within its name will also increment the count, so this is just for demonstration purposes.

Maybe we would prefer to also print the names of the failed test cases, or the last line of the file, which already contains a summary.
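For instance, a small C# sketch along these lines could list the failing test names from the redirected output (it assumes the console logger prints one line starting with “Failed” per failed test, while the final summary line only contains the word somewhere in the middle):

using System;
using System.IO;
using System.Linq;

class FailedTestReporter
{
    static void Main()
    {
        // file.txt is the redirected vstest console output from the batch file above
        var failed = File.ReadAllLines("file.txt")
                         .Select(line => line.Trim())
                         .Where(line => line.StartsWith("Failed"))
                         .ToList();

        foreach (var line in failed)
        {
            Console.WriteLine(line);
        }
        Console.WriteLine("Total failed: " + failed.Count);
    }
}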

We can also create some code that sends us an email if there are any failed test cases, or pushes the results into a graph, or sends some sort of alert, or even a Slack message… There are many possibilities, but they are… well… another story.

Testing in the VR world

Previously, I’ve written a couple of posts about how to get yourself started in VR, in which I promised some stories about testing in this world.

Why do I call it a world instead of an application? Because the goal of virtual reality is to create realistic synthetic worlds that we can inspect and interact with. There are different ways of interacting with these worlds depending on the device type.

Testing an application in VR is similar to testing any other application, although we have to take into account some particularities of the environment. Later on we will look at different kinds of testing and think about the additional steps they need in VR, but first let’s see the characteristics of the different types of devices.

Types of devices:

Phone devices (Google Cardboard or Daydream) – allow you to mount your phone (or tablet) in the device to play a VR app.

This is possible because most smartphones nowadays come with a gyroscope: a sensor that tracks the device’s rotation and orientation.

Some Cardboards (or other plastic versions) can have buttons or a separate trigger for actions on the screen (as is the case for Daydream), but the click is usually not performed on the object itself. Instead, it is done anywhere on the screen while the user’s sight is fixated on the object. If the device does not have a button or clicker, the developer has to rely on other information for interaction, such as entering and exiting objects or the length of time the user’s gaze stays on an object.

Cardboard VR device – Picture credit mentatdgt

Computer-connected devices (HTC, Oculus, Samsung VR…) generally come with (at least) a headset and a handset, and have a high-resolution, low-persistence OLED display embedded in the headset, so you don’t need to connect it to a screen. They detect more movement: not just the movement of the head, but also movement around the room and hand gestures. How this is done depends on the device itself.

We have moved from detecting user head movement (with reasonably sized devices) to using sounds, then hand gestures… so testing VR applications is getting more complicated, as it now requires testing multiple inputs. The handset devices usually have menu options as well.

Before going on, I’d like to mention AR. AR is about adding virtual elements to the real world; with AR we do not create the world. However, AR has a lot in common with VR, starting with the development systems, so testing the two platforms is very similar.

We have talked about the hardware devices on which VR applications run, but we should also talk about the software with which the applications are written.

Samsung gear + one handset

Development platforms:

Right now there are two main platforms for developing in VR: Unity and Unreal, and you can also find some VR web apps. Most things done with Unity use C# to control the program. Unreal feels a bit more drag-and-drop than Unity.

Besides this, if you are considering working on a VR application, you should also take into account the creation of the 3D objects, which is usually done with tools such as Blender; you can also find some already created online.

But, what’s different in a VR application for testers?

Tests in VR applications:

VR applications have some specifics that we should be aware of when testing. A good general approach to testing in VR is to think about what could make people uncomfortable or what could be difficult for them.

For example, sounds can be very important: done appropriately, they create very realistic experiences, make you look where the action is happening, or help you find hidden objects.

Let’s explore each of the VR testing types and list the ways we can ensure quality in a virtual world. I am assuming you know what these types of testing are about, so I won’t define them in depth, but I will give examples and talk about the barriers in VR.

Usability testing:

It ensures that the customer can use the system appropriately. There are additional measurements when testing in VR, such as verifying that the user can see and reach the objects comfortably and that these are aligned appropriately.

We are not all built the same way, so maybe we need some configuration step before the application starts so users can interact properly with the objects. For example, the objects around us might not be easily seen or reached by all our users, as our arms are not all the same length.

You should also check that colors, lighting and scale are realistic and in line with the specifications. This can not only affect quality but change the experience completely. For example, maybe we want the scale to be bigger than the user to give a feeling of shrinking.

It is important to verify that movement does not cause motion sickness. This is another particularly important concept for VR applications: when what you see does not line up with what you feel, you start feeling uncomfortable or dizzy. Everyone has a different threshold for this, so it is important to make sure the apps will not cause it when used for a long time: for example, by keeping motions slow, placing the user in a cabin area where the things around them are static, maintaining a high frame rate and avoiding blurry objects.

Sitting experience – Picture credit rawpixel.com

If there is someone on your team who is particularly sensitive to motion sickness, that person is the best one to take the tester role for the occasion. In my case, I asked for the help of my mother, who was not used to any similar experience at all and was very confused about how the whole thing worked.

Accessibility testing

It is a subset of usability testing that ensures the application can be appropriately used by people with disabilities such as hearing impairments or color blindness, as well as by older people and other disadvantaged groups.

Accessibility is especially important in VR, as there are more considerations to make than in other applications, such as: mobility, hearing, cognition, vision and even smell.

For mobility, think about the height of the users, hand gestures, range of motion, ability to walk, duck, kneel, balance, speed, orientation…

To include users with hearing issues, subtitles for the dialogue are a must, and they should be easily readable. The position of the subtitles should tell the user where the sound is coming from. In terms of speech, when a VR experience requires it, it would be nice if the user could also respond through some other form of visual communication.

There are different degrees of blindness, and this is something we want to take into account. It is important that objects have good contrast and that the user can zoom into them in case they are too far away. Sounds are also a very important part of the experience, and ideally the user could move around and interact with objects based on sound.

I realized how different the experience can be depending on the user just by asking my mother to help me test one of my apps. She usually wears glasses to read, so from the very beginning she could not see the text as clearly as I did.

I mentioned before that in VR it is possible to interact with an object by focusing the camera on it for a period of time. This is a simple alternative to clicking that avoids hand gestures for people who have difficulty using them.
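As an illustration, a minimal Unity C# sketch of this gaze interaction could look like the following (GazeSelect and OnGazeActivate are made-up names; it assumes the object has a Collider and that Camera.main is the VR camera):

using UnityEngine;

public class GazeSelect : MonoBehaviour
{
    public float dwellSeconds = 2f;  // how long the gaze must stay on the object
    private float gazeTimer;

    void Update()
    {
        // Cast a ray from the centre of the user's view
        Ray ray = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        RaycastHit hit;

        if (Physics.Raycast(ray, out hit) && hit.transform == transform)
        {
            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellSeconds)
            {
                gazeTimer = 0f;
                OnGazeActivate();  // this is the "click"
            }
        }
        else
        {
            gazeTimer = 0f;  // the gaze left the object, so reset the timer
        }
    }

    void OnGazeActivate()
    {
        Debug.Log(name + " activated by gaze");
    }
}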

There are many sources online about how to make fully accessible VR experiences, and I am sure you can come up with your own tests.

Integration testing

Its purpose is to ensure that the entire application functions as it should in the real world and meets all requirements and specifications.

To test a VR application, you need to know the appropriate hardware, the target users and other design details that we go through with the other types of testing.

Also, in VR everything is 360 degrees across three axes, so camera movement is crucial in order to automate tests.

Besides, there might be multiple object interactions around us that we also need to verify, such as collisions, visibility, sounds, bouncing…

There are currently some testing tools, some within Unity, that can help us automate things in VR, but most are designed from a developer’s perspective. That’s one more reason to ensure that developers write good unit tests for the functions associated with the functionality and, when possible, for the objects and prefabs. In particular, unit tests should focus on three main aspects: the code, the object interaction and the scenes. If the application uses a database, or an API to control things that then change in VR, we should still test those as usual. These tests will alleviate the integration testing phase.

Unity test runner
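To give a flavour of what these look like, here is a minimal sketch using the NUnit-based Unity Test Framework (the objects and assertions are made up):

using System.Collections;
using NUnit.Framework;
using UnityEngine;
using UnityEngine.TestTools;

public class ExampleVRTests
{
    // Edit-mode test: plain code/logic
    [Test]
    public void NewGameObject_HasDefaultScale()
    {
        var go = new GameObject("Player");
        Assert.AreEqual(Vector3.one, go.transform.localScale);
    }

    // Play-mode test: runs across frames, so physics and object interaction can be verified
    [UnityTest]
    public IEnumerator Sphere_FallsUnderGravity()
    {
        var sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        sphere.AddComponent<Rigidbody>();
        float startY = sphere.transform.position.y;

        yield return new WaitForSeconds(0.5f);  // let the physics run

        Assert.Less(sphere.transform.position.y, startY);
    }
}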

Many things are changing rapidly in this area, as many people have understood the need for automation in VR. When I started with Unity I did not know these testing tools existed and tested most things manually, but there are some automated recording and playback tools around.

Performance testing

It is the process of determining the speed or effectiveness of a system.

In VR, the scale, a high number of objects, different materials and textures, and the number of lights and shadows can all affect system performance. Performance varies between devices, so the best thing to do is to check with the supported ones. This is a bit expensive to do, which is why some apps focus on only one platform.

Many of my first apps ran perfectly well on the computer but would not even start on my phone.

It is important to strike a good balance to get an attractive and responsive application. New technologies also make good performance important, so the experience stays realistic and immersive. But sometimes, in order to improve performance, we have to give up other things, such as the quality of materials or lights, which can also make the experience less realistic.

In the case of Unity, the Profiler tool gives you some idea of the performance, but there are many other tools you can use. In VR, we need to keep an eye on the following data: CPU usage, GPU usage, rendering performance, memory usage, audio and physics. For more information on this, you can read this article.

Unity profiler

Also, you can check for memory leaks, battery utilization, crash reports, network impact… and use any other performance tools available on the different devices. Some of these take snapshots of performance over time and send them to a database, where you can analyze them or set up alerts if anything spikes, so you get logs and can investigate the issue; others are installed directly on the device and run on demand.

Last but not least, VR applications can be multiplayer (as is the case with VRChat), so we should verify how many users can connect at the same time and still share a pleasant experience.

Security testing

It ensures that the system cannot be penetrated by any means of hacking.

This sort of testing is also important in VR, and as the platforms evolve, new attacks will come to light. Potential future threats might include virtual property robbery, especially with the evolution of cryptocurrency and the monetization of applications.

Other testing

Localization testing: as with any other application, we should make sure proper translations are available for the different markets and that we use appropriate wording for each.

Safety testing: there are two main safety concerns with VR (although there might be others you can think of).

1. Can you easily know what’s happening around you?

Immersive applications are the goal of VR, but we still live in a physical world. Objects around us could be harmful, and not being aware of alarms, such as a fire or carbon monoxide alarm, could have catastrophic results. Being able to disconnect easily when an emergency occurs is vital in VR applications. We should also make sure the user is aware of the immersion, for instance by providing a reminder to clear away nearby objects.

Smoke could go unnoticed – Picture credit CARLOS ESPINOZA

Every time I ask someone to try an app with the mobile device, they start walking and I have to stop them before they hit something. And this is the mobile device, not the fully immersive experience.

Even I, being quite aware of my surroundings, tried some devices that include sound and hit my hand on several objects around me. I also could not hear what the person who handed me the device was telling me.

2. Virtual assaults in VR:

When you test an application in which users can interact with each other in VR, the distance allowed between users could make them feel uncomfortable. We should think about this too when testing VR.

Luckily, I haven’t experienced any of this myself, but I have read a lot from other people talking about the issue. Even in some online VRChat play-throughs, you can see how people break into the players’ comfort zones.

Testing WITH VR: some tools are being developed in VR for many different purposes, such as the emulation of real scenarios. Testing could be one of them: we could, for example, have an immersive VR tool to teach people how to test by example. I have created a museum explaining the different types of testing; maybe the next level could be a VR application with issues that users have to find.

What about the future?

Picture credit Harsch Shivam

We have started to see the first wireless headsets and some environmental experiences such as moving platforms and smell sensations.

We can expect the devices to get more and more accurate and complete, and therefore there will be more things to test and take into account. We can also expect devices to get more affordable with time, which will grow the market.

Maybe someday we would find it hard to differentiate between what’s real and what’s a simulation… maybe… we are already in a simulation.

Automating test case decision (using AI in testing part I)

1. The problem (and possible actions):

While testing, we need to decide carefully what test cases we will create, maintain, remove and execute per deployment.

Imagine that you join a company and get handed a long list of test cases. You know absolutely nothing about them and you need to decide which ones to use for production (you have a time restriction of 10 minutes to execute them). What would you do?

  1. Try to understand which of the existing tests are needed and decide manually which ones to run:
    1. Check the priority of these test cases. Unfortunately, not many people review the priority of test cases, so you can have obsolete test cases that are still marked as high priority but are covered by other tests, or whose original functionality is no longer in place.
    2. Check the creation date. However, sometimes an old test case might still make sense or be important.
    3. Ask the existing testers. Although sometimes they have moved out of the company by the time you join, and if not, things change so quickly that they might not be able to help anymore.
  2. Scrap it all and start over. I think this is a drastic solution; it might work out, but you might be wasting time re-doing something that might already be working fine.
    1. You could decide to just test the latest feature and not do any regression (trusting that the system was well enough tested before)
  3. Spend days learning about the features, executing all the test cases and figuring out what tests what and which tests you need to redo. It’s a very analytic approach, but you are not likely to have the time for this, even if you have a lot of resources to execute tests in parallel (which you should try to do). Also, maybe you need to refactor some of them, so you still need to make a selection.
    1. You could decide to leave comprehensive testing for after deployment and only focus on a small set of features before it.
    2. You could do the deployments at hours when the load is small and do them more often (although this is generally painful for the team).
  4. Use new technologies to figure out which test cases to run (for example AI).
  5. Mix and match: implementing point 4 on its own could be tricky. The best approach would be to mix it with the others: analyzing and reviewing test cases, selecting the current higher priorities, executing them in parallel to verify the percentage of success, eliminating test cases that no longer make sense or that constantly fail…

As lynxes, we are curious animals and we tend to ask many questions to understand the system. For example, some of the questions you could ask are:

  • How frequent are the project’s iterations? If iterations are fast, chances are that old test cases are not needed anymore.
  • How long do we have to verify a build?
  • Are the development technologies changing? If they are, it would be a good moment to change the testing ones too, and point 4 could be a good solution here. I think it’s always good to have similar technologies between development and testing so both teams feel aligned and can help each other better.
  • Do you have available testers in the company whom you can ask about the recent features and tests? If so, you can start with approach 1.3, adding 1.1 and 1.2 on top (so you don’t bother people with silly questions).
  • Is priority aligned within the company? Is priority per build or per feature? Is there a clear list of features per build? Is there a clear way of tracking which old features might be affected by the new ones?

It’s important to balance the test cases well, to catch as many defects as possible as early as possible, while ensuring there is no overhead in the process.

Some tests can create false failures or be unreliable. I’d also like to highlight that sometimes writing tests takes too long or needs too many resources, and some testers will write those tests just for the sake of ticking the “automated” box. That is not a good practice; be careful with these.

2. Understanding the process (how do we test)

Every time we want to automate anything (in this case, human decisions), we need to think about the manual way of doing it. When we, as humans, decide which test cases to execute, what do we base our decisions on? We check the priority (of test cases and features) and the creation date. We might also take into account the severity of the test and feature (how costly it would be to fix a defect related to them). Another option is to look at previous runs and check how many times a test case has failed or how many defects it has already raised.

Note that these measures are themselves estimated, so it is important to have a good estimation process. The first thing to do is clean up the test cases and the process itself. Having good documentation on when something is considered high priority or high severity helps align the system across the team or the company.

The second thing we need to do to automate test decisions is choose which variables our system will take into account. Some of the ones mentioned above could actually be measuring the same thing. Having a short, clear set of variables is essential to building a correct system: the more variables, the more complicated the system and the longer it takes to make decisions.

An example of two variables that could be measuring the same thing is the priority of the test case and the priority of the feature, if the system is well assigned.

There are tools and algorithms designed to identify automatically which variables actually matter most in the data, or what sort of relationships exist among them, as this is sometimes not obvious to a human. Just keep this in mind when creating your system (it is usually topic 1 in any machine learning book).

3. What’s AI

In order to automate these decisions, we can make use of a technology that has been trending recently, thanks to new systems able to compute faster and to the creation of better algorithms: artificial intelligence.

According to Arthur Samuel in 1959, machine learning gives “computers the ability to learn without being explicitly programmed.”

Artificial intelligence is a big area, and there are many ways we could use it to help with testing.

Note also that this is not a simple topic; many people have dedicated their entire careers to artificial intelligence. However, I am simplifying it as much as possible, since I intend this as an introduction and overview.

For this story, I am going to focus on using artificial intelligence to decide among test cases. I found two interesting ways of doing this. The first one is called a “rule-based system”.

4. Rule based system:

A rule-based system is a way to store and manipulate knowledge so as to interpret information in a useful way. In our case, we want to use fixed rules to get an automatic decision on whether to execute a test case or not. Imagine it as if you were teaching a newbie who needed your logic written down in notes.

For example: if risk is low and priority is low and the test case has run at least once before, then do not run the test case. This rule would not act on its own, but together with a long list of rules written in this style (which is related to logic programming, in case you want to learn more about it). The group of rules is called the “knowledge base”.

In this system, there is no automatic inference of the rules (meaning they are given by a human and the machine does not guess them). But there are some cycles the machine goes through in order to make a final decision:

  1. Match: the first phase tries to match all the possible rules against each test case, creating a conflict set of all satisfied rules.
  2. Conflict-resolution: one of the possible rules is chosen for execution for that test case. If no rules are satisfied, the interpreter halts.
  3. Act: we mark the test case as execute or do-not-execute. We then execute and can return to 1, as the action has changed the properties of the tests (last executed, passed or failed…).


5. Fuzzy logic – hands on:

If you ask experts to define things like ‘high priority’, ‘new test case’ or ‘medium risk’, they probably will not agree among themselves. They can agree that a test case is important, but exactly when to mark it with priority 3 or 2 or 1 (depending on your project’s scale) is a bit more difficult to explain.

In a fuzzy system, such as ours, we define things with percentages and probabilities. If we gather the information from the particular definitions for a variable, we will find it follows a specific function, such as a trapezoid, a triangle or a Gaussian.

Imagine that we asked a lot of experts and came up with the example below:

Let’s define ‘low’ as a trapezoidal function starting at the edge (minimum value) and travelling to 20 and 40.

‘Medium’ would be the same function on the points 20, 40, 60, 80 (note that they overlap).

‘High’ shall be 60, 80 and the maximum value.

The graph representing our system would look like this:

(Graph: membership functions for ‘low’, ‘medium’ and ‘high’.)

If we decide on the variables (for example ‘priority’) and their definitions (also called labels, for example ‘low’), the functions that compose those labels (as in the graph above) and the rules among the variables, we should be able to implement a system that decides for us whether we should run a test or whether it is safe to go without it. Let’s do so!

After a bit of digging for a good C# library to implement this sort of thing (maybe using F# would have been easier), I came across http://accord-framework.net, which seems to be a good library for many AI-related implementations. We can install its NuGet package from Visual Studio.

The first thing we need to do is define a fuzzy database to keep all these definitions:

// assuming the Accord.NET fuzzy library (Accord.Fuzzy namespace / NuGet package)
using Accord.Fuzzy;

Database fdb = new Database();

Then we need to create linguistic variables representing the variables we want to use in our system. In our case, we look at priority, risk, novelty of the test case and pass/failure rate. Finally, we would like to define a linguistic variable to store the result, which we are calling ‘mark execute’.

LinguisticVariable priority = new LinguisticVariable("Priority", 0, 100);
LinguisticVariable risk = new LinguisticVariable("Risk", 0, 100);
LinguisticVariable isNew = new LinguisticVariable("IsNew", 0, 100);
LinguisticVariable isPassing = new LinguisticVariable("IsPassing", 0, 100);
LinguisticVariable shouldExecute = new LinguisticVariable("MarkExecute", 0, 100);
// note on the last one that the name of the C# variable does not have to match the name used in the rules,
//     which is the string literal that we are assigning to it

After that, we define the linguistic labels (fuzzy sets) that compose the above variables. For that, we need to define their functions.

For demonstration purposes, let’s say we have the same definitions of low, medium and high for priority and risk. For novelty, pass rate and mark-execute, we are going to define a yes/no trapezoidal function. Note that we cannot use ‘No’ as a label, because it is a reserved word in the rule specifications (more below), so we call it ‘DoNot’. The yes/no function graph that we are using looks like this:

(Graph: membership functions for ‘Yes’ and ‘DoNot’.)


// defining low - medium - high functions
TrapezoidalFunction function1 = new TrapezoidalFunction(20, 40, TrapezoidalFunction.EdgeType.Right);
FuzzySet low = new FuzzySet("Low", function1);
TrapezoidalFunction function2 = new TrapezoidalFunction(20, 40, 60, 80);
FuzzySet medium = new FuzzySet("Medium", function2);
TrapezoidalFunction function3 = new TrapezoidalFunction(60, 80, TrapezoidalFunction.EdgeType.Left);
FuzzySet high = new FuzzySet("High", function3);

// adding the labels to the variables priority and risk
priority.AddLabel(low);
priority.AddLabel(medium);
priority.AddLabel(high);
risk.AddLabel(low);
risk.AddLabel(medium);
risk.AddLabel(high);

// defining yes and no functions
TrapezoidalFunction function4 = new TrapezoidalFunction(10, 50, TrapezoidalFunction.EdgeType.Right);
FuzzySet no = new FuzzySet("DoNot", function4);
TrapezoidalFunction function5 = new TrapezoidalFunction(50, 90, TrapezoidalFunction.EdgeType.Left);
FuzzySet yes = new FuzzySet("Yes", function5);

// adding the labels to novelty (isNew), pass rate (isPassing) and markExecute (shouldExecute)

isNew.AddLabel(yes);
isNew.AddLabel(no);

isPassing.AddLabel(yes);
isPassing.AddLabel(no);

shouldExecute.AddLabel(yes);
shouldExecute.AddLabel(no);

// Lastly we add the variables with the labels already assigned to the fuzzy database defined above

fdb.AddVariable(priority);
fdb.AddVariable(risk);
fdb.AddVariable(isNew);
fdb.AddVariable(isPassing);
fdb.AddVariable(shouldExecute);

That was a bit long, still with me? We are almost done.

We have defined the system, but we still need to create the rules. The next step is creating the inference system and assigning some rules.

Note that for this implementation the rules are not weighted. We could make it a bit more specific (and complicated) by assigning weights to the rules to denote their importance.

Also note that these rules are defined in plain English, making it easier for the experts and other players on the project to contribute to them.

InferenceSystem IS = new InferenceSystem(fdb, new CentroidDefuzzifier(1000));

// We are defining 6 rules as an example, but we should take them from experts on the particular system. The rules won't necessarily work for every system.
IS.NewRule("Rule 1", "IF Risk IS Low THEN MarkExecute IS DoNot");
IS.NewRule("Rule 2", "IF Priority IS High OR Risk IS High THEN MarkExecute IS Yes");
IS.NewRule("Rule 3", "IF Priority IS Medium AND IsPassing IS Yes then MarkExecute IS Yes");
IS.NewRule("Rule 4", "IF Risk IS Medium AND IsPassing IS DoNot THEN MarkExecute IS Yes");
IS.NewRule("Rule 5", "IF Priority IS Low AND IsPassing IS Yes THEN MarkExecute IS DoNot");
IS.NewRule("Rule 6", "IF IsNew IS Yes THEN MarkExecute IS Yes");

Finally, we need to set the actual inputs, the values from the tests. The ideal scenario is to retrieve them from a file; we could automate extracting the variables of our tests from our test case database into such a file.
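A minimal sketch of that retrieval (the testcases.csv file and its "Priority,Risk,IsNew,IsPassing" line format are made up for illustration):

using System;
using System.IO;

// One test case per line: Priority,Risk,IsNew,IsPassing
foreach (string line in File.ReadAllLines("testcases.csv"))
{
    string[] values = line.Split(',');

    IS.SetInput("Priority", float.Parse(values[0]));
    IS.SetInput("Risk", float.Parse(values[1]));
    IS.SetInput("IsNew", float.Parse(values[2]));
    IS.SetInput("IsPassing", float.Parse(values[3]));

    Console.WriteLine(line + " -> MarkExecute: " + IS.Evaluate("MarkExecute"));
}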

For this example, we are typing the values in directly. Let’s think of a test case with low priority (20% low), low risk, quite new (90% new) and with a low passing rate (which makes sense, since it is new). This would be defined like this:

IS.SetInput("Priority", 20);
IS.SetInput("Risk", 20);
IS.SetInput("IsNew", 90);
IS.SetInput("IsPassing", 10);

If we want to define a test case with high priority and risk, old and with a high passing rate, the variables would look something like this:

 IS.SetInput("Priority", 90);
 IS.SetInput("Risk", 90);
 IS.SetInput("IsNew", 10);
 IS.SetInput("IsPassing", 90);

For now, let’s print the outputs directly on the console. It would look like this:

try
{
    float newTC = IS.Evaluate("MarkExecute");
    Console.WriteLine(newTC);
    Console.ReadKey();
}
catch (Exception e)
{
    Console.WriteLine("Exception found: " + e.Message);
    Console.ReadKey();
}

The result of passing the first test case to this system is that we should execute it with 49.9% certainty; for the second we get 82.8%.

After playing around for a while with this particular set of rules, I’d say the system is a bit pessimistic and plays it too safe: it’s hard to get values under 50% (below which we could assume it is safe not to execute the test case).

6. Rule based system – conclusions:

  • An expert (or experts) is needed to specify all the rules (and we might influence the system; in the example above, I made it too safe)
  • These rules won’t automatically change and adapt; we need to add new rules if the situation changes
  • The rules are hard to define: shall we always run all the cases when risk is high and the feature is old?
  • Fuzzy definitions and fuzzy results make the system a bit complicated to understand and, again, to define
  • There could be relationships between the variables that are not obvious to us
  • We need to parse the test case variables for them to make sense in the system (a bit more automation)

The problem with a human deciding the rules and the variables is that some of these variables could be measuring the same thing, or relate to each other, without it being obvious to us.

An example could be: when a feature is new and the risk is high, there might be a low probability of the test case failing, so we might not need to execute it. This could happen because, knowing the risk is high, developers put more effort into the code. (Note: this is hypothetical, not necessarily the case.)

That is why, while it is important to analyse as many variables as possible, we still need to reach a compromise and try not to fall into these traps, for which we need the experts… or a system that automatically discovers the importance of the variables. But that is… well… another story.


Automating the automation: Dealing with dynamic object IDs

As testers, we sometimes find that developers don’t take testing into account while building the system, and design things that are very difficult for us to test. I have some experiences with this that I would like to share, in case they help or inspire you.

Localised IDs

A developer on my team was once tired of redeploying over and over to rename objects after business decisions. He decided to put in place a system that stored the objects’ texts so the localisation and internationalisation people could change them themselves, without needing a redeployment on his side.

It sounds like a great idea, but the issue was that the IDs were also in that table, and the people in charge of translating the text would translate those IDs too, without understanding what they were for.

What I inherited was a set of localised pages that would work “most of the time”.

In this case, my solution was to check the developer’s code to understand how he was retrieving those objects, and do the same from my code.

Automatic object creation

Another case of dynamic object IDs I’ve seen is when developers create a number of items dynamically on the page. In this case, the objects were always created with IDs that didn’t really identify which object they were, but that did follow a structure.

For example, if they populate a list of users based on an input, each of them could have an id of the sort “id_1, id_2, id_3…” (at least they had IDs).

Before me, they were doing manual tests because “it was not something we could automate”.

In this case, what I did was search for the IDs on the page. Something like this:

boolean caseExit = false;
int i = 0;

while (!caseExit)
{
    if (getObject("ID_" + i++).exists)
    {
        // do something with the object
    }
    else
    {
        caseExit = true;
    }
}

The “do something” part could be many things. For example, we could check that all the users have a certain property.

If your “exists” throws an exception, you might need a try-catch instead of the if statement here.

An important note: only do this if you cannot retrieve the objects some other way. For example, if what you have is a table, you can easily access the different rows as long as the table itself has an id (you don’t need an id per row). It is similar if what you have is an HTML list.
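For instance, a quick Selenium sketch (in C# for consistency with the earlier examples; it assumes an initialised driver and a made-up table id):

using OpenQA.Selenium;

// The table has an id, so its rows can be reached without per-row ids
IWebElement table = driver.FindElement(By.Id("usersTable"));

foreach (IWebElement row in table.FindElements(By.TagName("tr")))
{
    // verify each row's properties here
}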

The problem is when you have a series of new, unrelated objects. Moreover, if this is the case for all the objects on the page (they somehow auto-populate them all), maybe auto-creating the page class is a better fit for the case (explained in my previous post).

Angular/React dynamism

With the introduction of JIT compilation (a great article explaining it here: buff.ly/2qFL28g), browsers could start handling more dynamism in their websites. AngularJS (insert increasing version number here), ReactJS and VueJS are examples of frameworks that allow this to happen.

But as these frameworks started to get popular, other tools were created to deal with this new dynamism. For example, the AngularJS team created Karma, a NodeJS application that lets you run your tests from the command line and aligns well with Jasmine and other testing tools.

For end-to-end testing, you could check tools like Protractor, Nightwatch.js and TestCafe.

There are many frameworks, extensions and customisations in the open source community, and they are starting to move almost as fast as the front-end tools themselves (the front-end frustration is very nicely explained in this post: https://hackernoon.com/how-it-feels-to-learn-javascript-in-2016-d3a717dd577f).

Each of them is tailored to a particular scenario, so if you are building a framework you need to do good research first and ask your team many questions.

POM for dynamic objects

Many people forget about the page object design pattern when they start automating dynamic objects. However, I recommend you still incorporate it as much as you can, because it really helps keep the code clean and reduces the time needed to write it.

Even if you have a lot of tests, you usually don’t have the time or resources to execute every single one of them. So you need to decide which tests to run, and this too can be a difficult, repetitive and automatable process. But that’s… well… another story.

Automating the automation

Have you ever found yourself writing the exact same basic tests for certain objects over and over again, with the only difference being the id, the path, or even the element retrieval?

Wouldn’t it be nice to have some way of doing this faster? Automation is about making your computer do repetitive tasks for you, and this can be applied to everything, including, yes, writing code.

The process of automating is as follows:

  1. Identify repetitive tasks
  2. Write code that would do these tasks for you
  3. Identify when the code needs to be executed
  4. Execute the code or automate the execution

To give an example of something repetitive during automation, I would like to introduce the page object model. If you already know about it, you may want to skip the next section.

Page object model

The page object model, in short, is a model for organising functional-testing classes and objects so the code is cleaner and easier to modify. Models of this sort are also known as design patterns (note to any developer reading this: yes, there are design patterns for testing and you should be using them!).

There is plenty of information about POM (the page object model), starting on the Selenium website (http://www.seleniumhq.org/docs/06_test_design_considerations.jsp#page-object-design-pattern), so I am just going to summarise it here.

In POM, we have (at least, depending on the implementation) two different types of classes:

  1. A class (the page class) that gathers all the information about the objects we need from a website’s page (for example, a button or a text box), plus micro-methods to perform common tasks on them (such as click, type, select, verify existence…). If you are a complete newbie, just open your browser’s developer mode (usually F12 does it) and use the selector to hover over the website and see the objects it has and their associated values.
  2. Another class (the model class) that implements the more sophisticated methods (test cases) using the objects and methods exposed by our page class.

This way, if the website’s layout changes, it is easier to retrieve the objects again, without having to find every instance of each object throughout the code.
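As a minimal sketch of the two classes (C# with Selenium here; the locators and names are made up):

using OpenQA.Selenium;

// Page class: knows where the objects are and exposes micro-methods
public class LoginPage
{
    private readonly IWebDriver driver;
    public LoginPage(IWebDriver driver) { this.driver = driver; }

    private IWebElement LoginBox => driver.FindElement(By.Id("loginbox"));
    private IWebElement LoginButton => driver.FindElement(By.Name("loginButton"));

    public void TypeUser(string user) => LoginBox.SendKeys(user);
    public void ClickLogin() => LoginButton.Click();
}

// Model class: composes the micro-methods into test cases
public class LoginTests
{
    public void CanLogIn(IWebDriver driver)
    {
        var page = new LoginPage(driver);
        page.TypeUser("lynx");
        page.ClickLogin();
        // ... verify the success text box here
    }
}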

1. Identify the repetitive tasks

So, you need to extract some of the elements of a website page and add small, well-known methods… We have found our repetitive task!

However, our job is not done here. There are still many things to take into account for automation:

  1. The tools: what tools are the developers using? What programming language? And how are we going to design the system with respect to this? We should, as far as possible, try to align with the development tools, but sometimes this is not the best choice. For example, imagine that developers use both JavaScript and Ruby to build a website, but you have a team full of knowledgeable test engineers experienced in Java; shall we train them and align with the developers, or take advantage of their current skills? This has to be decided case by case.
  2. Dynamic programming: will we need to extend our framework to support elements that refresh on the screen without the website fully reloading? (Nowadays, you most likely will!)
  3. Number of elements/iframes: if we have a lot of iframes or elements on the website (such as many nested divs), but we only need to access certain elements, we might prefer to write an automated solution that lets us specify the elements we want. However, we might instead want to define everything on a page automatically, because that takes away one manual step while keeping a reasonable load.

2. Write code that does the tasks for you

To give an example, and in order to simplify things, let’s say we have decided to create a solution in Java, that there is no dynamic loading, and that we are going to specify which elements and properties we want rather than getting them all.

The idea of the code is as follows:

Allow input of a list of elements with the name, the type, the selector type and the selector for each. We could use a csv file for this, for example:

name=loginbox, type=input, selector=id, id=loginbox;
name=password, type=input, selector=xpath, id=…;
name=loginButton, type=button, selector=name, id=loginButton;
name=textSuccess, type=textBox, selector=text, id=success;

  1. For each of the lines on the input file, create an object with the name and selector given.
  2. For each of the lines on the input file, create a method for the object Type:
    1. For input: type method
    2. For button/href: click method
    3. For textBox: method to verify the text
    4. For all: exist method

It would look something like this:

// retrieval of the fileLines is left out for exemplification
for (Object o : fileLines) {
    switch (o.type) {
        case "input":
            toPrintInPageClass += "WebElement " + o.name + " = driver.findElement(By." + o.selector + "(\"" + o.id + "\"));\n";
            toPrintInPageClass += "void " + o.name + "_type(String toType) {\n\t" + o.name + ".sendKeys(toType);\n}";
            break;
        case "button":
            ...... (you should have gotten the idea by now)

This should take less than an hour to build, but after that, every page can be built in no time and the code is less prone to errors than copy-pasting the same code over and over again (you write one class as opposed to one per page).

An additional benefit is that if you have a manual tester on the team, you can now assign him or her the definition of the inputs and the execution of the code. You have just made that person able to “program” with this “very high level programming language”.

Bonus: after this is built, we can create another class that extracts the items from the website into the input file for this class. That way we can retrieve all the elements in one go, without human intervention.
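A sketch of that extraction (C# with Selenium again for consistency; it assumes an initialised driver, only picks up elements that actually have an id, and the output file name is made up):

using System.IO;
using System.Linq;
using OpenQA.Selenium;

// Collect every input and button on the page and write them in the input format used above
var elements = driver.FindElements(By.XPath("//input | //button"));

var lines = elements
    .Where(e => !string.IsNullOrEmpty(e.GetAttribute("id")))
    .Select(e => "name=" + e.GetAttribute("id") +
                 ", type=" + e.TagName +
                 ", selector=id, id=" + e.GetAttribute("id") + ";");

File.WriteAllLines("pageObjects.csv", lines);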

3. Identify when the code needs to be executed

What happens if a developer changes the website and adds an element? Shall we execute the whole thing again, or just add the missing element manually?

Even if it is tempting to add the missing element manually, I would suggest adding it to the input file; otherwise, if someone decides to execute this code again, your element will be missing. It is likely still faster to execute the code than to add an element manually.

But what if it is not? What if we have so many elements by now that executing the code takes longer than adding just one more?

I would still run the code rather than add elements manually, because it could well be that some of those elements no longer exist. But if it is only one quick change, please remember to change the input file too.

As an addition, we could add functionality to the code to modify the class already created rather than create a new one, but I’d say this is a lot of overhead for the benefit you get out of it.

4. Execute the code or automate the execution

Lastly, especially if you have the object retrieval automated too, you might want to automate the execution. For example, you can tell the computer (a cron file could be one way): run this code every evening at 8pm. Or once a week… once a month… That way you can “completely” forget about the page-definition side of the POM and just focus on the functionality.

Alternatively, if you have a way of retrieving the objects from the website, you could check whether the original input is the same as the newly generated one and only execute the page class creation if they differ. That should be faster to run, and it would mean the actual code only changes when required.

However, be careful with these approaches, because you might miss the moment when you need to add new functionality.

This is quite common in automation: you need to keep an eye on it. After it is built, it usually needs maintenance and refactoring. The better it is built, the less maintenance, but that does not mean zero maintenance.

Conclusions:

Many sources recommend making tests short and simple, with the idea of identifying the failing parts easily and clearly. Nonetheless, sometimes you can spend longer creating these simple tests than actually testing the parts that are likely to fail, so the right balance is important.

What we’ve looked at today helps create many simple tests, but you should still be careful not to over-test things, as sometimes it is too expensive or not really needed.

On the other hand, this can be a difficult task if we are faced with dynamically created objects and/or dynamically assigned ids/properties. There are ways of dealing with these as well, but that’s… well… another story.