API Testing

What’s an API?

An API (Application Programming Interface) is a set of calls that an application uses to communicate between its parts: for example, the user's view (browser or UI) talking to some software component (on a remote server or within the user's computer) that performs the operations the application needs to function.

If you are curious about how this looks for a web application, you just need to check the 'Network' tab in the developer tools of your browser. You can see there are many calls happening in the background when trying to reach a website.

Network tab on Chrome for google.com URL

If you click on one of the calls on the left-hand side, you will get some information like that on the right-hand side. The request URL is the address that the call was trying to hit.

For these web calls, there are two main methods: GET (to request some information from the server) and POST (to send some information to the server). You can learn more about the other methods here.

The next field is also very important. This is the status that we get from the server when the call is executed. In this case it is 307 (a redirection). If you are curious about what other status numbers mean, you can check this site if you are a cat person, or this one if you are more of a dog person.

There are two widely used approaches for sending information between devices: SOAP (Simple Object Access Protocol), which sends information in XML format, and REST (Representational State Transfer), which sends information in several formats such as JSON, HTML, XML and plain text (see this article for further explanation of the formats).

Tools to test APIs?

Please keep in mind that the tools mentioned below are not the only ones you can use for API testing. I'm talking specifically about these because they are the ones I've used in the past.

In the section after this one, I'll show an example of how to do an API test.


According to their website, Swagger is an open source and professional tool-set that “Simplifies API development for users, teams, and enterprises”


I have used Swagger UI as a way to easily check API URLs and understand the calls so I can then add them into my test code, but I have not tried all of Swagger's tool set yet. I think this is an easy way to communicate changes to the API across the team and to document it.

Alternatively, the developers should document their API calls in some other form, generally some list, as Twitter does here. The problem I have had with this option is that sometimes the documentation might be out of date and then you need to figure out the exact API call from the development code. With Swagger, the list of calls comes directly from the code, which makes it easier to handle and keep up to date.

Swagger is supported by SmartBear, the same company as SoapUI, so for API testing with it, please check below.

SoapUI and Postman

SoapUI is "a Complete API Test Automation Framework for SOAP, REST and more". There is an open source version and a professional one featuring more functionality. The API testing part looks like this:


I took the image from their website and it is quite self-explanatory. Besides, there is a lot of documentation there to get you started.

Postman is a "collaboration platform for API development". On the development side, it provides automatic documentation, so there is no problem with developers making changes to functionality and forgetting to update the documentation.

For API testing, it’s very easy to get started. You can create REST, SOAP, and GraphQL queries. It supports multiple authentication protocols (I talk about this later) and certificate management. Please refer to their website for further information.

Wireshark and Fiddler:

These two programs are very useful for analysing network packets. They are powerful programs and a must-know for security, network and performance testing, and for checking the packets at a micro level. You can actually see the exact data sent over the network. However, if what you are looking for are tools for API testing, I would probably not go for them but for the ones above, because those are more high level and specific to that purpose.

That said, I have used them before in order to test APIs that required specific secure certificates and for debugging issues (especially performance ones). If you are interested in knowing more about Fiddler, I recommend this article. For Wireshark, this one.


How to do this programmatically?

If you want to add this testing into your automation code, you get some help from the tools mentioned before. However, there are many ways of doing these types of calls with different programming languages. For example, here is how to do a REST call with Python:

import requests

# make a get call
response = requests.get("URL")

# do something with the result
response.status_code  # this gives you the status code as mentioned above
response.json()  # this gives you a json with the response on it

# make a post call
response = requests.post("URL", json={...})  # the second argument is the data to send

It gets a bit more difficult when you need to add parameters, authentication or to parse some types of data, but there is plenty of documentation about all of it. Let's see a specific example using the API provided by numbersapi.com.

import requests

response = requests.get("http://numbersapi.com/42?json")
print(response.json())


The result when you execute the code above is:

{'text': '42 is the result given by the web search engines Google, Wolfram Alpha and Bing when the query "the answer to life the universe and everything" is entered as a search.', 'number': 42, 'found': True, 'type': 'trivia'}

With Python, you could play with the JSON data to easily retrieve and validate the text, the number, or simply that there is some result, as in the sketch below.
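
For instance, a minimal sketch of such checks (using the same numbersapi call as above and plain assert statements in place of whatever test framework you use) could look like this:

import requests

response = requests.get("http://numbersapi.com/42?json")
data = response.json()

assert response.status_code == 200  # the call succeeded
assert data["found"] is True        # the API found something for this number
assert data["number"] == 42         # the number matches what we asked for
assert "42" in data["text"]         # the text mentions the number we asked about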

For more information about what exactly to test when testing an API, I think this post is wonderfully well explained (they use Postman as an example).

Why should I care? UI vs API testing

UI (User interface) testing is the best way to simulate the actual behaviour of the users. However, we tend to re-test things in the UI that could be covered already by testing the API (and in some companies this could be done by a different group or team).

Let's say a developer changes an API call. Let this call be the list of movies that someone liked. Now imagine this API call is not updated in some part of the application, the result being that the user cannot find their liked movies. What happens in the UI test?

We will then get a UI test failure because it couldn't find an object. This could be due to the API call being wrong, a bug in the automation, a change in the way the object needs to be retrieved, a button not working properly, the object being hidden somehow…

However, if you have an API test for it, you should be able to understand that the call is not retrieving anything. If you need to verify things such as the results of a search, it's probably best to use the API to check the entire list (which can be done with a quick comparison, as in the sketch below) and let the UI verify that a result appears where it should, rather than the result itself. Also, you should be verifying that the API call is correct (and update the test call if it is not).
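
As a rough illustration (the endpoint and field names here are made up for the example), such a list comparison could be as simple as:

import requests

expected_titles = ["The Matrix", "Inception", "Interstellar"]  # hypothetical expected data

# hypothetical endpoint returning the list of liked movies; adapt to your application
response = requests.get("https://example.com/api/movies/liked")
actual_titles = [movie["title"] for movie in response.json()]

assert actual_titles == expected_titles  # quick comparison of the entire list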

Level up:

API calls are less likely to change than UI objects, and when they do change they generally come in different versions, so as not to disturb previous releases of the application. This means you might want to add functionality to verify which version is being tested.

It is also interesting to use this to speed up our UI testing. The most common example is the login method. This method is usually a bottleneck for the rest of the tests, and if it fails, you don't know what else might be failing or passing and you are blocked until it is fixed. Whilst it's super important to have a login test to make sure that your users can log into the application, performing UI login each time it's needed for another test slows down your execution.

Google login screen

What's the solution? You can use API calls to skip the login bit. Be careful when doing this; it wouldn't be secure to have an API to do this in a production environment. An example could be to set up some unique tokens (see an example of doing this with SoapUI here) with a quick expiration to perform this skip and send them alongside the URL, or to have an API call that sets some cookies or a session so you are logged in.
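
As a minimal sketch of the cookie/session idea (the login endpoint, credentials and liked-movies endpoint below are hypothetical; adapt them to your application), the Python requests library can keep the session for you:

import requests

session = requests.Session()

# hypothetical login call; the session object stores any cookies the server sets
session.post("https://example.com/api/login", json={"user": "test_user", "password": "secret"})

# later calls reuse those cookies, so the test can skip the UI login screen;
# the same cookies could also be injected into the browser for a UI test
response = session.get("https://example.com/api/movies/liked")
print(response.status_code)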

If you have other repetitive dependent tests, you should consider running API calls for them before continuing with the test that depends on them. This would considerably speed up your test execution and your results would give you more trustworthy information about the problem.

That said, UI testing is the best way of ensuring everything works as per user behaviour, and E2E and integration tests should not be substituted by API tests; use them only as a help and only if they are not increasing the complexity and failures of your tests.

Yet another level up: stats

Another interesting thing that you can do thanks to API calls is to find out information about your application and about how users are using it. To analyse and visualise calls at a bigger scale you can use tools such as Elasticsearch and Kibana, and even use artificial intelligence to draw conclusions from such calls, but that's… well… another story.

2019 Review and 2020 plan

While everyone has been preparing this year's resolutions, I wanted to share the ones I had for last year. I was getting the feeling that I had not done very well with them, but after writing them down, it turns out I accomplished a lot and it makes me want to achieve more next year.

Photo by freestocks.org from Pexels

Doing more

One of the things I am often asked is: "how do you manage to get so many things done in your free time?" When I hear this question, my first thought is: "well, you obviously haven't visited my blog often…"

To me, if I can help one single person through the contents of my blog or inspire someone with my talks, I will have achieved my goal: I will have made the world a bit better. For this reason, I tend to wait for inspiration to write; I might be doing research for the next post, or working on learning a skill.

Therefore, once you are aware of your end goal, I think the first step to getting more things done is to find inspiration. You can get this from reading books and articles, attending talks, having active friends, joining courses or projects… But be careful, sometimes this can be counterproductive. Maybe seeing how everyone around you manages their time so well discourages you for not being able to do as well. Don't let this bring you down; everyone has their own tempo and you can start at any point in your life.

A trick I use sometimes when watching or listening to some content is what I call the '2x factor': set it to 2 times the normal speed. I don't recommend you do this with everything; for me it generally depends on the speaker's pace. When the speaker is talking slowly, or about something I already know or don't find very interesting, I increase the speed of the video. I find 1.5x is generally a good speed for me for most things; 2x might be too fast, and whilst I can understand everything, it makes me nervous.


If getting inspiration is the first step, the second one, to me, is to get into action. Sounds easy? Get started on anything, even for 5 minutes a day, and you will be achieving more than you are right now. I particularly enjoy the 'pomodoro technique' (with rewards), in which you try to work 25 minutes straight followed by a 5-minute break (in which I grab a drink or snack). I find it very hard to finish the 25 minutes the first two or three times, but then I start hating the alarm for the breaks because it interrupts my flow.


One of my new year resolutions was to track my reading. I had noticed I'd been reading here and there, never really stopping to think about my progress.

The first thing I did, firstly to check my progress and secondly to inspire myself to read more, was to join the Goodreads reading challenge. Since I was not sure how much I was currently reading, I set it to 24 books.

Photo by Pixabay from Pexels

I was shocked to realise that I read 6 books the first month. OK, let's be fair, half of those were audiobooks; I love walking around and having them help me get to more information. Because of travelling and other commitments, I didn't get to 6 books every month, but I also didn't track them all. I discovered that sometimes Goodreads didn't have the book I was reading and sometimes it just removes books from your shelves without notice. I contacted their support (as suggested on that link) and all I was told was to re-add them and make up the dates I read them… This proved to me that it is not a good place to track books, so I'll be happy to hear of other options if you have any.


In order to motivate myself, I like having some commitments that push me to do things, for example, participating in meetings and conferences such as the Automation Guild (it's an online conference, you might still be in time to join this year's). The videos for it took me longer than expected, but I learnt a lot because of that. I presented at a total of 5 conferences around the world and spoke at a couple of meetups. If you want to know more about my past and future appearances, you can do so here.

Another way I have of motivating myself is by joining courses. Last year I wanted to finish the VR course I had been pushing aside for a while. I've been doing some research in VR and that has taken priority over the course, but paying for a course monthly and not having time to work on it is really painful. On the other hand, not paying for a course makes me less committed to it. This year, I want to work more on my AI knowledge.


Another thing you can do to get inspired is to write a list of life achievements or values you want. If you have it nearby and read it often, it could help you focus on what’s more important for you. Also, try activities that get you excited, find other passions.

Some months I was focused on improving my Chinese, practising piano and meditating. Besides, in summer I went back to Spain from China, so I was recovering from jet lag for a couple of weeks and then decided to enjoy some time off too.

Another new year resolution was to practice yoga at least 3 times a week and to go to the gym at least 3 times too. I've been quite regular with this, even when travelling, except for the times I've been sick.

Photo by Cedric Lim from Pexels

Next year, I would like to plan activities and periods of rest better, the same way I plan work or conferences. Otherwise, commitments can get in the way and push everything out, and compensating later on does not seem to be the best solution.


One of my 2019 resolutions was to travel more, and I think I surpassed my expectations: I travelled through China (Beijing, Xian, Zhangjiajie, Tongren, Guilin…), the US (San Francisco), Japan (Tokyo), Spain (Valencia), the UK (London), The Netherlands (Amsterdam), Germany (Berlin, Dusseldorf…), Switzerland (Zurich) and Russia (Moscow).

I moved to a new city but, because I was meeting many new people throughout the trips and did a lot of sightseeing in the places I visited, I did not feel like engaging in many activities there. I did go to a Muse concert (which was, as usual, quite amazing) and walked around a lot. I like doing this when I go to a new city, as I feel getting lost is actually the best way of getting to know a place.

Photo by Porapak Apichodilok from Pexels

What's next

Something else that works for me is to list the things I’ve done in the day at the end of it. Same for the year, as you’ve read. This is a way of being appreciative, which makes me want to achieve more.

Being grateful is also really important. Another daily writing exercise, or morning meditation, if you wish. I struggled with this because some days I might not interact with anybody to be grateful to. However, you can be grateful to yourself and your achievements, and for the opportunities and the good that's on its way and the good that happens to others.

I've counted 12 posts that I have half written, so keep checking, because there is a lot to come. A couple of them are on VR (I might merge them into one), thoughts to share, AI, API testing… If you are especially interested in one of those, let me know so I can give it priority.

I like most of my last year's resolutions, I want to keep them, and I haven't added much for this year: do things to make the world better, block out times of rest, have more patience, look at the brighter side of everything, get an AI specialisation, write more on my blog, find a new app for my reading… Will I manage to do all of that? That's… well… another story.

How to test a time machine

I recently watched a video from PBS Space Time that got me thinking: if we were to have a time machine, how would we test it? I've seen a lot of "how would you test X" type questions in interviews, but I don't think I've ever seen this one before (I am not trying to give you ideas for interview questions!)

It’s not rocket science… it’s rocking testing science!

I couldn't help but compare it first to spaceflight, so I started wondering: how do they test spacecraft? And what better one to start with than Apollo 11? If only we had a time machine for getting its source code… Yes, I have been looking at Assembly code trying to make sense of potential testing routines, like this one… and… guess what I found there?

This section of the code is checking the Gimbal lock of the accelerometers! Do you remember the concept from my last post? Maybe I just have a case of Baader-Meinhof, but I do feel Gimbal lock is an important concept to learn, so check it out.

Testing in Assembly was not as 'easy' as it is nowadays (for example, macros do not seem to be a thing in the Apollo 11 programming syntax, as far as I could find). Do not expect a page object model or a library with tests or testing functions. Nor common methods for before and after tests. Actually, don't expect any sort of OOP, to start with.

In my search I could find some files with tests in them, but they are mostly for stressing the hardware by sending signals to the different devices and recovering from bad statuses. Also, spacecraft might need to check the correctness of bits to make sure there are no catastrophic arithmetic errors.

from @wikipedia

Time traveling tests

Imagine we have covered the unit and integration tests for both hardware and most of the software for our machine, matching what we would do for any currently existing one. What are the specific cases we should cover for our time machine? The logical cases to think of:

1) The machine should not kill the traveler: We should insert some devices to measure that the cabin of the machine is livable. Also, keep in mind for the rest of the tests that spaceships are usually first tested with robots inside, rather than humans. (Security test)

2) There must be some way of safely interrupting the process (risk assessment/testing): In case anything wrong happens, what's the risk involved for the passenger? (Security test)

3) As the machine goes forwards or backwards, the traveler's state stays the same while the surrounding world changes: We could measure this by first introducing some other object that we know decays, and checking how much time has passed inside the machine. (Usability test)

4) There is a way to return: I assume we will want this, so we should test it. (Security test?)

5) Performance tests: how many times can it go backwards and forwards in time? (Performance but also, regression test?)

6) What is the minimum time that it can travel? (Test it.) If the time machine requires a rapid negative speed (as the video above suggested as one of the possibilities), I imagine traveling in time as being like playing billiards or golf. Let me explain: when you try to sink a ball in a particular hole, you need to give it the right initial impulse (not too much, not too little) but also the right angle depending on the starting point. We are likely to travel through space as well as time, so it might be particularly difficult to stop at a particular moment or place. (Which may explain why nobody made it to the time travelers' party.) (Usability test)

Photo by Tomaz Barcellos from Pexels

7) Test boundary scenarios: go back to the start of the Earth (or universe) and forwards to its end. What happens just before or just after? Does time always exist? Can we travel to a point where there is no time? Technically these should be tests we would like to do, but I think they are probably not doable, realistically speaking (can I use this expression in this context?)

8) Try to change a small thing in the past… does it change the future? If a change creates a parallel universe, then we would never recover the machine. If we test this, we should look for an event we can easily undo (like maybe turning a light on/off?) (Exploratory test)

9) Time traveler meeting same time traveler. Test paradoxes. (Exploratory test)

10) Test placing a box where the machine is positioned before a trip. Then take the box away, position the machine on that same spot and test traveling to before we positioned the box. Does the box or the machine break? (Integration test)

11) Test traveling to when the box is still in place; does the machine or the box break? (Integration test)

12) Try to travel twice to the same exact time and place. (Integration test)

13) Could the machine travel between different universes? (If we have a way of doing so, which might be more likely if we are using wormholes for the trips rather than negative speed.) (Integration test)

From futurism.com (and an interesting article)

14) Is there a maximum/minimum size for the machine? If so, test them. (Boundary test)

15) Can everybody use the machine or only qualified people? (Accessibility test)


Maybe we have already invented a time machine but kept it secret because it is dangerous. Or maybe we have gone back in time to stop ourselves from inventing it, creating a parallel universe, and as a result our universe is the one in which such a machine was never invented (a time version of the Fermi paradox).

Whatever the answers to these questions, I hope you've found this exercise fun and enjoyed reading this post as much as I enjoyed writing it. Let me know if you can come up with more tests we could do on the time machine (if we were to have one). I will resume my serious posts soon, but they will be… well… another story.

We need to talk about quaternions…

I know, I know, this doesn't seem like anything that has to do with testing. However, I found this concept to be very challenging and I think some people might be interested in knowing about it, especially if they are considering automating objects in VR.

The problem:

You want to be able to rotate the main camera with Google VR. Google's design has reached the conclusion that moving the camera in virtual reality should be something that the user does, not the developer (I think it was possible in earlier versions of Unity and the Google VR SDK).

So I decided to create an empty object and assign the camera as a child object of it. To move the camera, I move this object. To undo the camera movement done by the user, I move the object the opposite way. Should be easy, shouldn’t it?

Well, sometimes… but other times it does not work. I spent a very long time searching online for other people who had this issue, and it was either working for them or not; nobody would explain why this was the case. I believe the reason to be the effect of gimbal lock.

I know, that sounds like a made-up word, but this is actually an important concept that you should be aware of. Let me explain:

Gimbal lock is an effect in which the object loses one degree of rotational freedom, which is common when you represent rotations with three-dimensional vectors (Euler angles). At some point, two of the rotation axes get lined up in such a way that every time you rotate around one, the other rotates as well. If you are a visual person, this video from GuerrillaCG (a YouTube channel with information about 3D modelling and animation) explains it quite clearly.
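
If you prefer to see it in code, here is a minimal sketch (using Python and SciPy purely as an illustration; the same thing happens with Unity's Euler angles) showing that, once the middle angle hits 90 degrees, two different sets of Euler angles collapse into the exact same rotation:

from scipy.spatial.transform import Rotation as R
import numpy as np

# with the y angle at 90 degrees, the z and x rotations act around the same axis,
# so only their sum matters: one degree of freedom has been lost (gimbal lock)
r1 = R.from_euler('zyx', [30, 90, 0], degrees=True)
r2 = R.from_euler('zyx', [0, 90, 30], degrees=True)

print(np.allclose(r1.as_matrix(), r2.as_matrix()))  # True: both are the same rotation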

How did I understand what this meant, and how did I figure out this was the issue? I decided to encapsulate the camera object in another object, and then that object in another one. Then I assigned them some colored spheres and pointers and I ran some tests. After a while, it became clear that the movements were not coordinated and that the rotation variables were not working as expected.

Attempts to understand 3D issues

Introducing quaternions:

Using quaternions (a 4-component vector) instead of Euler vectors (the usual 3-component vector that indicates a rotation in 3D space) could help us avoid the gimbal lock effect.

Quoting Wikipedia: "In mathematics, the quaternions are a number system that extends the complex numbers. […] Quaternions find uses in both pure and applied mathematics, in particular for calculations involving three-dimensional rotations such as in three-dimensional computer graphics, computer vision, […]"

I know, it still sounds a bit like gibberish… Basically (very basically), quaternions are a mathematical way of representing 3D rotations by using 4 coordinates (representing them in 4D space instead), which an Irish mathematician (William Rowan Hamilton) came up with and decided to literally set in stone, on a bridge.

Commemorative Hamilton plaque on Broome Bridge. From wikipedia

They are difficult to understand and to visualise (as we don't intuitively know how a 4D space should be visualised), but the maths applied to them is easier than with 3D vectors. For example: if you want to rotate an object around the x axis, the y axis and the z axis, the object will not end up in the same state if you perform the entire rotation on x, then on y and then on z, as it would if you did it in a different order (because each coordinate rotates based on the others too). With quaternions, however, the whole orientation is handled as a single rotation, so combining and undoing rotations is much simpler and gimbal lock is avoided (which is particularly convenient when you are trying to undo a movement or automate a 3D object).

I am linking to a lot of articles below for you to dive deeper into, but two main points to remember are:

  1. Quaternions used for rotation should be kept normalised (unit length). One of the values (the w component) is 1, and the rest are 0, when there has been no rotation, and it changes as soon as a rotation is applied.
  2. Rotation angles in quaternions are half of what you would expect from the 3D rotation (so be careful when calculating rotations in 4D); see the small sketch below. This is known as double-cover and it's actually quite useful once you get the hang of it.
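
As a minimal illustration of that second point (again using Python and SciPy just as a stand-in; Unity's Quaternion class stores rotations the same way), you can see the half angle appear when converting an Euler rotation to a quaternion:

from scipy.spatial.transform import Rotation as R
import numpy as np

# a 90-degree rotation around the x axis...
r = R.from_euler('x', 90, degrees=True)

# ...is stored as a quaternion built from the HALF angle (45 degrees):
# (x, y, z, w) = (sin 45, 0, 0, cos 45) ~= (0.707, 0, 0, 0.707)
print(r.as_quat())
print(np.sin(np.radians(45)), np.cos(np.radians(45)))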

If you are interested in knowing more about quaternions, you can check this video and this one from 3Blue1Brown (a YouTube channel full of interesting mathematical concepts with very accessible explanations; I really recommend it). Also, I enjoyed the explanations in this article. If you are considering working with 3D object movement at some point, you should definitely watch these videos and play around with the simulators to understand everything better.


It is not my intention to go against the design of a system such as Google VR, and you should listen to the platform owners as, in general, it is wise not to tinker with the camera. However, sometimes I find it useful to undo a user's movement, for example for automation purposes or if the gyroscope is drifting on its own (more about phone sensors in this interesting article).
In these cases, the use of quaternions is so fundamental that it is definitely worth spending some time learning about them.
The next step after the automation of the camera and other 3D objects would be to automate hand movements, but that's… well… another story.

Automating the automation: Automating Vstest

As part of my career, I have, on multiple occasions, found it useful to automate the execution of Visual Studio tests (VS tests). The goal of this article is to show how to do so and why. I hope you can find it useful no matter your experience; for the experienced readers, it might give some ideas on how to use the capabilities of vstest, batch files and the generated logs, setting up the basics to revisit some of the parts in the future.

If you have never created or run tests in Visual Studio, take this as a starter guide on executing automation. If you have ever run Visual Studio Unit Tests, you should be familiar with the Test Explorer tool:

At times, we might want to include these tests as part of a different system rather than executing them from Visual Studio. It is possible to use some existing functionality in the command line to help you do so: VSTest.Console.exe is a command-line tool to run Visual Studio tests.

Steps 1 and 2 are for setting up the test data of this post; feel free to skip them if you consider them too simple. Otherwise, I hope they can help you get started with unit tests in Visual Studio and general automation with it.

Disclaimer: I'm explaining this from a Windows Operating System's perspective, but it could be done similarly from anywhere you can run the vstest console tool.

Step 1: Create a unit test project

Note: needless to say, you need Visual Studio installed to follow these steps.

We are going to start by creating a test project in Visual Studio. To do so, we can click File -> New -> Project and select the C# 'Test' template as in the image below.

Select the destination folder and insert a name for it. Refer to the Visual Studio documentation if you have any issues.

Step 2: Create tests

The project should come with a unit test class by default. If for some strange reason that's not the case, or you want to create another class, you can right-click the project and select Add -> Unit Test (please note that if you do this but you already had a class, then you might have a duplicate TestMethod1 in your list later on).

Now we are going to add a command inside the test that comes by default (because we are not adding test logic for now, we just add the 'Fail' command as shown below). We are also adding a Priority attribute above this test. Then we can copy and paste the lines from [TestMethod] (line 9 in the image below) up to the last '}' (line 14) three times to get a few tests in our test explorer. Change the names of the methods so each has a different name (for the example we have TestMethod1, TestMethod2, TestMethod3 and TestMethod4).

For the example, all odd tests (TestMethod1 and TestMethod3) have priority 2 and even tests (TestMethod2 and TestMethod4) priority 1.

To be able to see the test explorer we should go to the menu Test -> Windows -> Test Explorer.

And to be able to see the tests we have created we should go to Build -> Rebuild Solution.

Now, you can try to run the tests on the left-hand side. They should fail because we have not added any logic to them. If you are interested in how to do this, leave a comment below and I'll write another article with more details. You can also look up the Visual Studio documentation to learn more.

In the next step we are going to learn how to execute the test cases outside Visual Studio.

Step 3: VSTest

Now, if we want to execute these tests outside Visual Studio's environment, we can use the console tool that is typically installed under the following path:

C:\Program Files (x86)\insert VS version here\Common7\IDE\CommonExtensions\Microsoft\TestWindow

Where "insert VS version here" would be something like "Microsoft Visual Studio 14.0" or "Microsoft Visual Studio\2017\Enterprise"… basically, you can navigate to your Program Files (x86) folder and look for a folder that has Microsoft Visual Studio within its name. Then continue with the rest of the path as above.

Inside the aforementioned folder you can find "vstest.console.exe". Alternatively, you can download the NuGet package and search for it in the path where it's installed.

Typically we would access this file from the command line (be it an admin cmd or Visual Studio's native tools command prompt).

We can open cmd (as administrator, by right-clicking on it) and type "cd path", where path is the path to the above file with the correct Visual Studio version for your installation. It is convenient to surround this path with quotes (""), so that if there are spaces the program recognises them as part of the path and not as a separate argument.

Now, you can select what type of tests to run by adding some parameters to the call, but you can test it by calling "vstest.console.exe" followed by a space and the path to the dll of the test project. For example: vstest.console.exe c:\....\UnitTestProject1\UnitTestProject1\bin\Debug\UnitTestProject1.dll

You should see something like this:

Since you've set up your tests to fail, that is what you will see at first. Later on, once your tests are finished and they are all passing, you will see something like this:

If you are new to testing, you are probably wondering why we would want to run them in this complicated way instead of using the beautiful UI from Visual Studio. If so, keep on reading; this is where things start to get interesting.

Now, imagine we want to run just the first and third test methods. For this we should make the following call: vstest.console.exe pathtodll /Tests:TestMethod1,TestMethod3

As you can see, now only TestMethod1 and TestMethod3 were executed (you can use the names of your own test methods). Note that they should be failing for you; I'm just adding the passing image because it is cleaner.

Remember we set up priorities before? So how do we run the tests with higher priority? vstest.console.exe pathtodll /TestCaseFilter:"Priority=1"

In the Microsoft vstest.console documentation, there are many more ways to use this tool, including parallel runs. Have you started to see why using this tool could be very powerful? What else could we do with it?

Step 4: Creating batch files

The coolest part is that we can create a file with these calls and then use that file virtually anywhere (continuous integration, nightly builds, as part of an agent to be executed on a remote computer…). This sort of file is called a "batch file" because it runs a set or batch of operations.

The first line of the batch file would be to cd into the vstest console folder. Then we can add calls to the tool for running the tests we want. Finally, we add a pause to verify that everything is working fine.

To do this, just use Windows Notepad and type the instructions below. When saving the file, save it with a .bat extension instead of .doc or .txt.

cd C:\Program Files (x86)\insert VS version here\Common7\IDE\CommonExtensions\Microsoft\TestWindow
vstest.console.exe pathtodll /Tests:TestMethod1,TestMethod3 /Logger:Console
pause

Remember to change pathtodll to your actual project and add the right VS version. Now, if you execute this newly created file (as administrator), you should see the same results as before. Pressing any key closes the console that opens up.

If you don't want to see the results in a console (as would be the case if you integrate this file with other projects), just remove the last command (pause). The logs of the results are saved in the current folder (the one for the vstest program) and we will analyse them in the last section.

Explaining continuous integration or multi-project execution would be a bit more complicated and out of the scope of this post (but do leave a comment or reach out on Twitter if you want me to explain it). But I can explain how to set up your computer to run this file every night with Windows!

For this, you need to open the Task Scheduler (you can search for it in the lower-left search box on Windows). Then click on "create a basic task" on the right-hand side and go through the assistant. The most important things are to specify the frequency you want the file to run at and to browse to select the saved .bat file. Now this file will run with the indicated frequency automatically (if your computer is turned on at that time).

The next thing you want to do is to check the logs that this file has generated every time it was executed.

Step 5: Saving logs

Running automatic tasks and tests is awesome, but not really useful unless we know the results so we can do something about them.

First of all, we should check where the logs are saved. Usually they are saved in a folder called "TestResults" within the folder where you've run the file. Because we were using cmd (as admin) and navigating to the vstest.console folder, it will be created there. In fact, that's the reason we need administrator permission to run the file. There is a vstest.console parameter to change this location, although my vstest.console was not recognising it, so I stuck to the vstest.console folder for the purposes of this article.

I think trx logs are useful and should always be enabled by default. To get them we can add a parameter to vstest: "/Logger:trx". The generated file can be opened with Notepad and it will give you information about the tests that were run. However, we will focus on /Logger:Console as it is simpler.

Another way of retrieving the logs is by using the capabilities of the Windows batch system. We just need to add "> pathToFile\file.txt", where pathToFile would be the path all the way to a file with a .txt extension (this file does not need to exist beforehand; it will be created by this command). This way a file will be saved with the contents of the console.

cd C:\Program Files (x86)\insert VS version here\Common7\IDE\CommonExtensions\Microsoft\TestWindow
vstest.console.exe pathtodll /Tests:TestMethod1,TestMethod3 /Logger:Console > pathToFile\file.txt

You might want to save different files (by adding date and time) or replace the latest one (by keeping the same name as above), depending on the frequency that it is generated (if it takes long enough, we don’t mind replacing it).

Using the parameter "/Logger:TimelineLogger" can give you a bit more information about the times of execution, but it will make it harder to parse later on.

Step 6: Playing with the logs

Now we have a text file with the logs… but reading all the files all the time might be a bit boring… what to do with it? You get it: automate it!

Let's output just the number of test cases that have failed. We could do this with any programming language, but let's keep going with batch. Why? Because I feel people underestimate it, so here it is:

@echo off
set /a FAILED=0
for /f %%i in ('findstr /i /c:"Failed" file.txt') do (
    set /a FAILED=FAILED+1
)
rem the summary line at the end also contains the word "Failed", so take one out
set /a FAILED=FAILED-1
if %FAILED% gtr 0 (
    echo Failed: %FAILED%
)

The first line of the file stops the commands themselves from being echoed, so only the output appears on the screen. Then we create a variable to count the number of times we find the string "Failed", we loop over the search results for the file called "file.txt", take one out (because at the end there is a summary line with the word "Failed" on it, which we don't want to count) and, only if the result is greater than 0, we print it.

When executed against the file with the priority 1 test cases failing, we can see this result in a console:

If everything passes, nothing is printed for this example.

Please keep in mind that with this method, any test case that has the word "Failed" within its name will also increase the count, so this is just for demonstration purposes.

Maybe we would prefer to also list the names of the failed test cases, or to print the last line of the file, which already contains a summary.
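
If you prefer a higher-level language for this part, here is a minimal Python sketch (assuming the same console output redirected to file.txt, and assuming each failed test is reported on a line starting with "Failed", which you should double-check against your vstest version):

# print the names of the failed tests found in a vstest console log
failed = []
with open("file.txt", encoding="utf-8") as log:
    for line in log:
        line = line.strip()
        # assumption: failed tests appear as "Failed TestMethodX ..." lines
        if line.startswith("Failed ") and len(line.split()) > 1:
            failed.append(line.split()[1])

if failed:
    print(f"{len(failed)} failed test(s): {', '.join(failed)}")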

We can also create some code that would send us an email if there are any failed test cases, or push the results into a graph, or send some sort of alert, or even a slack message… There are many possibilities, but they are…well…another story.

Testing on VR world

Previously, I’ve written a couple of posts about how to get yourself started on VR in which I promised some stories about testing on this world.

Why do I call it world instead of application? Because the goal of virtual reality is to create realistic synthetic worlds that we can inspect and interact with. There are different ways of interacting in these worlds depending on the device type.

Testing an application in VR would be similar to testing any other application, while we would have to take into account some particularities about the environment. Later on we will see different kinds of testing and think about the additional steps for them in VR, but first let’s see the characteristics of the different types of devices.

Types of devices:

Phone devices (Google Cardboard or Daydream) – these allow you to insert your phone (or tablet) into the device to be able to play a VR app there.

This is possible because most smartphones nowadays come with gyroscopes and accelerometers: sensors that track rotation and use the Earth's gravity to determine the orientation of the device.

Some Cardboards (or other plastic versions) can have buttons or a separate trigger for actions on the screen (as is the case for Daydream), but the click is usually not performed on the object. Instead, it is done anywhere on the screen while the gaze is fixated on the object. If the device does not have a button or clicker, the developer has to rely on other information for interaction, such as entering and exiting objects or analysing the length of time the user stayed on the object.

Cardboard VR device – Picture credit mentatdgt

Computer-connected devices (HTC, Oculus, Samsung VR…), which generally come with (at least) a headset and handsets, have a high-resolution OLED display supporting low persistence embedded in the headset, so you don't need to connect it to a screen. They detect further movement, as it is not just about the movement of the head but also movement around the room itself and hand gestures. This is done differently depending on the device itself.

We have moved from being able to detect user head movement (with reasonably sized devices), to using sounds, to using hand gestures… so now, testing VR applications is getting more complicated, as it requires testing multiple inputs. The handset devices usually have menu options as well.

Before going on, I'd like to mention AR. AR is about adding some virtual elements to the real world, but with AR we do not create the world. However, AR has a lot in common with VR, starting with the development systems. Therefore, the testing of the two platforms would be very similar.

We have talked about the hardware devices in which the VR applications would run, but we should also talk about the software in which the applications are written.

Samsung gear + one handset

Developing platforms:

Right now there are two main platforms for developing in VR: Unity and Unreal, and you can also find some VR web apps. Most things that are done with Unity use C# to control the program. Unreal feels a bit more drag-and-drop than Unity.

Besides this, if you are considering working on a VR application, you should also take into account the creation of the 3D objects, which is usually done with tools such as Blender, or you can find some already created online.

But, what’s different in a VR application for testers?

Tests in VR applications:

VR applications have some specifics that we should be aware of when testing. A good general way of approaching testing in VR is to think about what could make people uncomfortable or make the experience difficult.

For example, sounds could be very important, as, when done appropriately, they can create very realistic experiences that make you look where the action is happening or help you find hidden objects.

Let's explore each of the VR testing types and list the ways we can ensure quality in a virtual world. I am assuming you know what these types of testing are about, so I'm not defining them in depth, but I will be giving examples and talking about the barriers in VR.

Usability testing:

This ensures that the customer can use the system appropriately. There are additional things to check when testing in VR, such as verifying that the user can see and reach the objects comfortably and that they are aligned appropriately.

We are not all built the same way, so maybe we should have some configuration before the application starts so users can interact properly with the objects. For example, the objects around us might not be easily seen or reached by all our users, as our arms are not all the same length.

You should also check that colors, lighting and scale are realistic and in accordance with the specifications. This could not only affect quality, but change the experience completely. For example, maybe we want the scale to be bigger than the user to give the feeling of shrinking.

It is important to verify that the movement does not cause motion sickness. This is another particularly important concept for VR applications: it occurs when what you see does not line up with what you feel, and you start feeling uncomfortable or dizzy. Everyone has a different threshold for this, so it is important to make sure the apps are not going to cause it if used for a long time. For example, ensure the motions are slow, place the users in a cabin area where the things around them are static, or maintain a high frame rate and avoid blurry objects.

Sitting experience – Picture credit rawpixel.com

If there is someone on your team who is particularly sensitive to motion sickness, this person would be the best one to take the tester role for the occasion. In my case, I asked for the help of my mother, who was not at all used to any similar experience and was very confused by how it all worked.

Accessibility testing

This is a subset of usability testing that ensures that the application being tested can be appropriately used by people with disabilities such as hearing impairments or color blindness, by the elderly, and by other disadvantaged groups.

Accessibility is especially important in VR, as there are more considerations to make than in other applications, such as: mobility, hearing, cognition, vision and even olfactory ones.

For mobility, think about the height of the users, hand gestures, range of motion, ability to walk, duck, kneel, balance, speed, orientation…

To ensure the inclusion of users with hearing issues, inserting subtitles for the dialogue would be a must, as well as ensuring those are easily readable. The position of the subtitles should be able to tell the user where the sound is coming from. In terms of speech, when a VR experience requires it, it would be nice if the user could also communicate through other, visual, means.

There are different degrees of blindness, so this is something we want to take into account. It is important that the objects have good contrast and that the user can zoom into them in case they are too far away. Sounds are also a very important part of the experience and it would be ideal if we could move around and interact with the objects based on sound.

I realized how different the experience could be depending on the user just by asking my mother to help me test one of my apps. She usually wears glasses to read, so from the very beginning she could not see the text as clearly as I did.

I mentioned before that in VR it is possible to interact with objects by focusing the camera on them for a period of time. This is a simple alternative to clicking, without the need for hand gestures, for people who have difficulty using them.

There are many sources online about how to make fully accessible VR experiences, and I am sure you can come up with your own tests.

Integration testing

Its purpose is to ensure that the entire application functions as it should in the real world and meets all requirements and specifications.

To test a VR application, you need to know the appropriate hardware, the target users and other design details that we go through with the other types of testing.

Also, in VR everything is 360 degrees in 3 coordinates, so camera movement is crucial in order to automate tests.

Besides, there might be multiple object interactions around us that we would also need to verify, such as collisions, visibility, sounds, bouncing…

There are currently some testing tools, some within Unity, that could help us automate things in VR, but most are thought out from a developer's perspective. That's one more reason for us to ensure that the developers are writing good unit tests for the functions associated with the functionality and, when possible, for the objects and prefabs. In particular, unit tests should focus on three main aspects: the code, the object interaction and the scenes. If the application is using some database, or some API to control things that then change in VR, we should still test them as usual. These tests would alleviate the integration test phase.

Unity test runner

Many things are rapidly changing in this area, as many people have understood the need for automation in VR. When I started with Unity, I did not know about the existence of the testing tools, and tested most of it manually, but there are some automated recording and playback tools around.

Performance testing

This is the process of determining the speed or effectiveness of a system.

In VR, the scale, a high number of objects, different materials and textures, and the number of lights and shadows can all affect the system's performance. The performance will vary between devices, so the best thing to do is to check with the supported ones. This is a bit expensive to do, which is why some apps only focus on one platform.

Many of my first apps were running perfectly well in the computer but they would not even start on my phone.

It is important to have a good balance to have an attractive and responsive application. New technologies also make it important to have a good performance, so the experience is realistic and immersive. But sometimes in order to improve performance we have to give up on other things, such as quality of material or lights, which would also make the experience less realistic.

In the case of Unity, the Profiler tool will give you some idea of the performance, but there are many other tools you can also use. In VR, we need to be careful with the following data: CPU usage, GPU usage, rendering performance, memory usage, audio and physics. For more information on this, you can read this article.

Unity profiler

Also, you can check for memory leaks, battery utilisation, crash reports, network impact… and use any other performance tools available on the different devices. Some of these take a snapshot of the performance over time and send it to a database to analyse, or set up alerts if anything spikes so you can get logs and investigate the issue, while others are directly installed on the device and run on demand.

Last but not least, VR applications can be multiplayer (as is the case with VRChat), so we should verify how many users can connect at the same time and still share a pleasant experience.

Security testing

This ensures that the system cannot be penetrated by any form of hacking.

This sort of testing is also important in VR and, as the platforms evolve, new attacks could come to life. Potential future threats might include virtual property robbery, especially with the evolution of cryptocurrency and the monetisation of applications.

Other testing

Localization testing: as with any other application, we should make sure that proper translations are available for the different markets and we are using appropriate wordings for them.

Safety testing: There are two main safety concerns with VR (although there might be others you could think of)

1. Can you easily know what’s happening around you?

Immersive applications are the goal of VR, but we are still living in a physical world. Objects around us could be harmful, and not being aware of alarms such as a fire or carbon monoxide alarm could have catastrophic results. Being able to disconnect easily when an emergency occurs is vital in VR applications. We should also make sure the user is aware of the immersion, for example by providing some reminder to clear away nearby objects.

Smoke could be unnoticed – Picture credit CARLOS ESPINOZA

Every time I ask someone to try out some app with the mobile device, they start walking and I have to stop them, otherwise they might hit something. And this is with the mobile device, not the fully immersive experience.

Even I, being quite aware of my surroundings, tried some devices that include sound and hit my hand on several objects around me. Also, I could not hear what the person who handed over the device was telling me.

2. Virtual assaults in VR:

When you test an application in which users can interact with each other in VR, the distance allowed between users could make them feel uncomfortable. We should think about this too when testing VR.

Luckily, I haven't experienced any of this myself, but I have read a lot from other people talking about this issue. Even in some online playthroughs of VRChat, you can see how people break through the comfort zone of the players.

Testing WITH VR: There are some tools being developed in VR for many different purposes and technologies, such as emulation of real scenarios. Testing could be one of them: we could, for example, have an immersive VR tool to teach people how to test by example. I have created a museum explaining the different types of testing; maybe the next level could be to have a VR application with issues that the users have to find.

What about the future?

Picture credit Harsch Shivam

We have started to see the first wireless headsets and some environmental experiences such as moving platforms and smell sensations.

We should expect the devices to get more and more accurate and complete, and therefore there will be more things to test and to take into account. We also expect the devices to get more affordable with time, which will increase the market.

Maybe someday we would find it hard to differentiate between what’s real and what’s a simulation… maybe… we are already in a simulation.

How do I get into…? guide

One of the most common questions I get when people reach out to me about virtual reality (VR) is: how can I get started? Even though I have already written an article about this, maybe I should talk about my own experience instead, so you can get more specific examples. If you are not into VR, please bear with me, as what I'm about to tell you can help you get into any topic you wish to get into.

My Story

My first experience with a VR device was during a hackathon at Microsoft, when one of the interns brought his Oculus Rift. Back then it was a very expensive device, so it was very interesting to be able to play around with one. But I found that it still had some issues to solve, starting with adding hand gestures.

As life goes, sometimes you get stuck in what you are doing at work and don't get the time to investigate interesting new things. In my case, I bought a house and there was a lot of stress related to this. It was not until years later that I actually got the chance to try another device again, this time on mobile at a meetup called "tech for good" in Dublin. In this meetup, they were using mobile VR devices for social impact. It was my first experience with phone VR and I thought: OK, now this is something that anybody can use and get, therefore it is something that is going to need testing.

After that, another hackathon (this time an open NASA hackathon) got my interest in VR and AR back. I highly recommend this hackathon, as I made really good friends there and we had so much fun building an AR/VR experience to navigate a satellite. My team (which won the local People's Choice award in Dublin) created an application that simulates a satellite in orbit (in AR) and lets you translate to the view from that satellite (in VR). If you are interested, here is our project.

When I found myself having more time, I started looking for information about VR. I found a Udacity course on VR and decided to take it on. Back when I started it, the course covered many topics, although they have since made the decision to separate it into different specialties, which makes much more sense. If you are interested in seeing some of the projects I made during this course, check my GitHub account.

After that, I got interested in open source projects on AR and wanted to start doing testing there… However, life got in the way again when I moved to China. It’s still on my to-do list.

I was lucky enough to start working for NetEase Games in China right after, so I then had enough flexibility and hardware access to do some more research in VR, including some automated testing with Google Cardboard, which should now be integrated into the Airtest project (I know, not many people are using Google Cardboard anymore but, hey, you need to start somewhere… the other research is still ongoing).

I was also lucky to have the opportunity to attend the second Sonar in Hong Kong, which is a music and technology festival, and it showcased some cool new technologies and devices in VR (including aroma experiences and snow-surfing).

Besides that, I started to think of plans and ways of testing VR applications too (as NetEase was working on projects like Nostos, which I had the opportunity to try myself and really enjoyed).

Around that time, I gave a talk at the Selenium conference in India gathering all this knowledge (which I talked about in this post). To prepare for that talk, I played around and created my own ‘conference simulator’.

Another thing I do frequently to gain knowledge in VR is to watch playthroughs and online reviews; you can learn a lot from watching others play, and it is very good for understanding your potential users if you are working on a game. I have also read some books on the matter (shout out to packtpub, which gives away a free IT book every day!)

Have you found a pattern?

I know you have; if you are a reader of this blog you surely are a clever Lynx by now, but just in case, I have highlighted it in bold letters: Attending (and, after a while, starting) hackathons, meetups, talks and festivals, watching or reading related online content and books, and playing around in courses, open source projects, at work and on your own projects will get you into anything you are interested in.

It sure sounds like a lot of things to do, but the tools are already around you, and I’m talking about years’ worth of experience here. Just take one thing at a time and you too will become an expert in that thing you are into. The biggest difficulty is picking which one to take at any given time and being honest with yourself about how much time you can spend on it. (I regret a ton not having put more effort into the AR open source project when I had the chance.)

Of course, if you are not really into it, then it will sound like a lot of work, in which case it’s probably better to save yourself time and pick something else. I like to think of it as a test of passion, or, in the words of Randy Pausch in his talk ‘Achieving Your Childhood Dreams’: “brick walls”. (By the way, this is one of the best motivational talks I’ve ever watched, and I actually re-watch it at least once a year to keep me motivated. Also, it mentions VR too 🙂 )

As you would imagine, this is not the only subject I have spent time on or given my attention to; another big one for me is artificial intelligence, but that’s… well… another story.

Hacking social media

I know, I still owe you some stories, but I am now inspired to talk about something else. Besides, today’s topic is easier to put into words. I don’t need to sit down and think carefully about a way of explaining some technical concept such as artificial intelligence without sounding boring. But, just so you know, I am still working on the other stories.

I would like to show you how dangerous social media could become and, on one hand, highlight the need to ask the right questions when a new technology comes along in order to set proper tests and barriers (lynx are curious animals, aren’t we?).

On the other hand, I want to highlight the importance of taking breaks from it and thinking about yourself and the things that would make YOU happy, instead of the things that ‘would make other people think that you are happy’ and therefore approve of you. I hope you enjoy reading this article.

Let’s think about it: the most viewed YouTube channels from independent creators belong to people under 30 (or just over), many of whom started them 5-9 years ago. They have been getting a lot of pressure from fans and from companies that would like a piece of their influence. Celebrities with less direct exposure to their fans have done crazy things in the past because of social pressure. Yet these influencers are not invited (that I know of) to Davos or to famous lists of most influential people, despite demonstrating incredible marketing strategies and knowledge of new technologies, having charisma and being very intelligent (more than they let on, in some cases).

Social media affects society: not only these influencers, but many people are actually feeling depressed or harming themselves because of social media. It is also a potential vehicle for propaganda of all types and a source of advertisement of all sorts, by using the platforms’ ‘algorithms‘ in their favor.

There is a very important point to consider, which is its potential for hacking (if you are interested in this, there is more information here and here). So, imagine that someone could actually go in there and decide what you are going to see… how could this affect you?

Techniques and prevention:

Let’s imagine a platform that accepts comments and likes (let’s forget about dislikes). How could someone socially hack it?

1) Removing the likes: We would need to intercept the information that the platform is showing to the user and remove some of the likes the user would otherwise see. Maybe we should only remove them partially, so the user does not become suspicious about seeing no likes at all. For this, we could use a weighted random variable that decides whether to drop each like (see the sketch after this list). How would you feel if, all of a sudden, nothing you write got any likes? Prevention: From the test side, make sure the like system works properly and likes cannot come from anonymous sources. Make sure accounts are real. Make sure the user sees only real data. From the user’s perspective, when you see something that you like, mention it in person and start a conversation about it instead of just clicking a button.

2) Liking specific posts: This is a bit fancier. Building on the above, we could have some sort of AI algorithm that classifies the posts. Then we can decide which posts are going to be shown as liked to the user. How would you feel if only some types of your posts got liked? Would that change your way of writing? Prevention: From the test side, make sure all information is shown to the user. From the user side, find your audience and focus on them. Also, consider talking with these people directly (or at conferences). Try to stay honest about your goals for writing and who you want to reach.

3) Filtering comments: This would require some form of classification, as with the previous point. Instead of targeting the likes, we would target the comments, but the idea is the same: remove from view those that are not ‘interesting’. What would you think if you only received certain types of comments? Prevention: From the test side, make sure all information is shown to the user. Maybe have a conversation about the feature itself and allow users to hide all comments. From the user side, as above.

4) Creating comments: We could create new comments with AI. You might think the user would realize, but if done carefully they might not even notice, or might not confront the person supposedly making the comment. Besides, the social media platform might allow comments from users who are not logged in. This adds to the effect of the previous point. Prevention: Have a conversation about blocking anonymous comments or disabling them. From the user side, if you see a strange comment from someone that does not add up, clarify it with that person; it can also help with misunderstandings. Option 2: disable or stop reading comments.

5) Changing the advertisements around the website to ones convenient for propaganda or for harm (only for some types of social media). Prevention: Most sites have a way of letting you decide which advertisements you are more interested in. Also, try cleaning cookies regularly, or use private browsing and VPN services.

6) Automatically extracting interesting information for malicious purposes. Prevention: Be careful with what you post; don’t use information that is available to anybody as your passwords or security questions, and don’t post pictures containing personal data (such as passports or train tickets). If you really want to share pictures of a trip, try uploading them after the trip and enjoy the moment while you are there!

7) Connecting certain types of people. I am not 100% sure how this could be used for malicious purposes, but surely someone would find a way. Making sure you can block people is also very important.

8) Taking things out of context. Prevention: It’s very hard to delete something from the internet once it is out there, but some platforms allow it. Have you ever read your old posts? It is a good idea to do some clean-up every so often. Also, if this happens to you, keep track of the entire context. Maybe have a system in which you can review what you have written before it goes online, and take some hours before posting to make sure you really want to post it.
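To make the first two techniques above a bit more concrete, here is a minimal Python sketch of the idea. Everything in it is invented for illustration (the post data, the is_positive keyword check and the drop probability); no real platform API is involved. It shows an interceptor that drops likes with a weighted random variable and only leaves likes visible on the kinds of posts it wants to reinforce:

import random

# Invented example data: in a real attack this would be the payload
# intercepted between the platform's servers and the user's client.
posts = [
    {"text": "My new testing framework is out!", "likes": 120},
    {"text": "Feeling a bit down today", "likes": 45},
]

DROP_PROBABILITY = 0.7  # technique 1: weighted random variable, drop ~70% of likes

def is_positive(text):
    # Naive stand-in for the "AI classifier" of technique 2.
    return any(word in text.lower() for word in ("new", "out", "great"))

def tamper(post):
    if not is_positive(post["text"]):
        shown = 0  # technique 2: posts the attacker dislikes show no likes at all
    else:
        # technique 1: remove likes only partially so the user stays unsuspecting
        shown = sum(1 for _ in range(post["likes"])
                    if random.random() > DROP_PROBABILITY)
    return {**post, "likes_shown": shown}

for post in posts:
    print(tamper(post))

A test for the real platform would assert the opposite invariant: the number of likes rendered to the user always matches the number stored on the server, regardless of the post’s content.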

Why am I talking about socially hacking social media in a test blog? Well, because, if you happen to be working on a social media project, you should make sure that these attacks are not possible, and think about how the user could feel about other features to come.

(Please take some time to go through the articles I linked above to learn more.)

Thoughts about being addicted to being connected:

When was the last time you did something good for someone but did not tell anybody about it? When you do something good for someone and post about it, how do you know you are doing it for the other person and not to improve your own image in front of others?

When was the last time that you went for a trip and didn’t share the pictures with anybody? What about not taking any pictures at all? There is something truly special about having a memory that is just yours to have.


Social media has evolved quickly in a very short time, and we need to consider a lot of new things, more so if we are on the development team of one of these platforms. We should really stop and agree on what is ethical and what is not for each particular platform, and maybe even list a set of contraindications, as we do with addictive substances. For example, would it be ethical for some platforms (maybe those aimed at young people?) to change what the users see in order to protect them from bad criticism? Consider that this could, in theory, save some lives, but, in contrast, it would take away some potentially good feedback disguised as bad comments. Maybe this is a feature you want people to be able to turn on and off? If not, maybe you should list in your contraindications that it could be an issue. And I don’t mean terms and conditions; it’s not about covering yourself if anything happens, it’s about actually alerting the user about what they might experience. Terms and conditions are… well… another story.


Some sources that helped me control my internet usage, if you are interested in this area:

Book: “How to break up with your phone” by Catherine Price.

Watch: the Crash Course series on navigating digital information

Talking at the Selenium conference

Recently I had the amazing opportunity to present at none other than the Selenium conference in India. Even though I have given some other talks in the past, this was a big challenge.

I would like to share my experience for those lynx curious about speaking at or attending conferences.

1) Submitting proposals:

The power of a deadline can sometimes be very impressive…at least for me.

I had been keeping a couple of proposals for things I wanted to share for quite a while, always postponing actually sitting down and properly writing them.

However, when I received a message stating that the last call for proposals for the Selenium conference was in a couple of days, I told myself: “Hey, nothing to lose, just write them down and send them before it’s too late. Let’s get out of our comfort zone!”

2) Getting accepted and preparing for the conference:

After submitting the ideas, I was contacted about one of my proposals and asked for more details. There was some exchange of messages and after that… silence… nothing. I had no idea whether they were happy with my replies at all.

A few days later, I received an email. Bad news: my proposal was not accepted. I was shocked; after so many messages, I thought they would have been interested in it. Luckily they provided a rejection reason: “We’ve already accepted another proposal from the same speaker”.

Really?? I suddenly remembered: I did send two proposals. And right after that, I realised I had been accepted to speak at THE Selenium conference… and I started to freak out. What now?


I took a deep breath.

I reviewed the published list of speakers and realised that I had the pleasure of having met one of them before: Maaret Pyhajarvi. I braced myself, took all my courage and decided that, if I was going to speak at the conference, I should be able to reach out for help.

I was very lucky that she was very nice and helpful. We arranged some online meetings (which was not easy because of the time difference) and she gave me plenty of valuable advice. I’m so happy that I asked Maaret for help and so grateful for her advice. I highly recommend that you watch her presentation and lightning talks; they were absolutely brilliant and inspiring.

OK, what else could I do to prepare for the day? (Besides planning and creating the slides and content, which is easier said than done.)

As I was talking about virtual reality, I thought it was only fair to create an app to practise with. So I asked the organisers for some pictures of the room I would present in, and I recreated it in VR, even adding some murmuring background sounds. If you are interested, it’s uploaded here.

3) Travelling to India. Getting around. Impressions:

One of the reasons for me to apply to the Selenium conference in India was that I was, at the time, located in China, so it was very close. I didn’t need many days off, the trip was shorter, and it was also fully covered by the conference. Also, I had never been to India: a great excuse to visit!

I was told to first book the trip and then request reimbursement. They actually helped me pick a better flight that was also a bit cheaper, and they dealt with the currency exchange. I am thankful to the organisers for helping so much with this.

They also booked me into the hotel where the conference was taking place and arranged a car to take me to and from the airport. I didn’t know how useful this would be until I arrived; I would have had no idea how to get to the hotel otherwise, so once again: so grateful for this.

I didn’t have time to travel a lot, just to meet the other speakers for food. That said, it seemed that Bangalore didn’t have much to see within the city, but if you go a bit outside you can visit places. Unfortunately I didn’t have time for this, so my opinion about India can’t really count for much from this trip. I’m going to have to go back for a better impression 😊

4) Impressions on the selenium conference:

I feel it was well organised and the food was delicious (although I did get sick coming back home, but apparently this can happen during your first days in India, from what I’ve heard).

I didn’t have much time to see all the talks, but I have watched some of the videos afterwards. I feel there was a good balance of speakers, and that people should take into account the levels explained in the talk descriptions.

It was a pity that I was not able to attend all the talks, but that is OK because of the recordings. I also felt pressured because I was presenting, so I’m happy I presented early and could relax afterwards.

I really enjoyed it overall; it was a great experience and an amazing opportunity for networking. All the speakers were very approachable, which I think is one of the biggest values of attending a conference.

5) The presentation. Self-feedback. Lightning talks.

What you are about to read now is a series of self-improvement tips because, as a good Lynx, I am always learning. I am writing them more for myself than for you, but maybe they could help you too, if you ever consider presenting anything.

Looking back, I wish I had reviewed much more and trusted my own experience that you need more content than what seems enough when practising. I felt the talk ended up being a bit short. Learning: prepare extra content and practise a lot.

For some reason (probably the fact that it was THE SELENIUM conference) I was as nervous as if it were my first conference ever. Learning: breathe slowly, try relaxation techniques before the talk.

I wish I had kept it more natural; after the fact I came up with more ideas to break the tension with the audience, but beforehand I didn’t know how to do it. Learning: get information about the audience and think of how to break the tension with them and make the content more interesting.

I wasn’t so sure about the content: it was a beginner talk, but because it was very generic I could not go deep into any of the testing types… I now wish I had spent longer on some of them or done a demo. I was relying on the questions to go a bit deeper… shockingly, there were not many questions. I think it could be a cultural thing, because other presenters told me the same happened to them as well. Learning: keep only 5 minutes for questions and show more technical bits. Talk about what you think is important instead of waiting for the audience to ask.

I did blank out and forgot many, many things I was meant to say. I took my safety notes with me, but I still lost my way throughout. This was what I was most scared of. Learning: slow down, move through the notes as I go along. Have notes on the laptop, not only on paper; it looks more natural to glance there.

Performance: my accent didn’t come out clean. I should aim to speak slower. I also looked and sounded tired; I was. I should try to get better sleep before the presentation day. Learning: just before the presentation, get as much sleep as possible. Travel earlier to avoid jet-lag. Record yourself to check whether your pronunciation is correct or you need to change some words.

The lights were not the best; neither my shirt nor my slides matched the background and projector. Learning: bring spare clothes and a presentation with two background colours, and try them out before the talk.

Video: it started late, so my introduction was not recorded. I didn’t know I wasn’t supposed to move, so I disappear from the frame every so often. Learning: ask the video person to record a short test video and let you see it. Ask whether you can move around or should stay in one place.

Luckily I could make up for it a bit during the lightning talks, in which I seem more like myself. But, unfortunately, not many people attended, and that part is not included in the video of the talk.

6) Conclusion

Even though I feel I could have done much better, I enjoyed the entire experience a lot. I learned plenty from it and I will be much better prepared for the next one.

If you attended or have watched the video, you might be interested in knowing more about a project I mentioned I am working on… but that’s, well, another story.

Hands on: automating scratch with AirtestIDE

In one of my first posts on this blog, I went through a hands-on on how to automate Scratch. The goal was to be able to teach kids how to test, hand in hand with how to develop programs.

Back at the time, the only program I could find that would allow me to do this automation was not open source. The experience was very nice, but since the goal was to use it at coderdojo, the conclusion was that a paid solution would not work out.

Currently, I’m part of the development team of an open source project called ‘Airtest‘. Please note that the intention of this post is not to advertise this product, but to provide a different solution that could actually be used in coderdojo. However, I want to disclose it clearly: I work on this project, and I am very fond of it.

Part 1: scratch development

Scratch is a simple tool for getting started with development just by dragging and dropping simple blocks. I’ve already explained how to create an app with Scratch in my previous post, so I am going to assume you know how to do this.

Let’s just use the project that I created as example: http://scratch.mit.edu/projects/48422496/

This program moves a cat (called a sprite) 10 pixels every time the user clicks on it. If it reaches a corner, it is supposed to bounce. The goal is to verify that the cat bounces when it reaches the corner.

Part 2: Using Airtest project to automate scratch

    1. Go to the official website: http://airtest.netease.com/ and download AirtestIDE. (Note: it is possible that you will see some Chinese around, as we are based in China, but don’t get too scared, everything should also be written in English.)
    2. Click on “File” -> “New” -> “.air Airtest Project” to create a new Airtest test file.
    3. Make sure you have the “Selenium Window” active by clicking “Window” -> “Selenium Window” (you can close the Airtest Assistant, Poco Assistant and devices panels).
    4. You also want to make sure you have your Chrome path configured in “Options” -> “Settings”. If you have Chrome installed and are using Windows, this is generally C:/Program Files (x86)/Google/Chrome/Application/chrome.exe
    5. Make sure your cursor is at the bottom of the script editor. Then, click on the globe icon at the top left of the “Selenium Window” to open a browser for the test.
    6. Click “Yes” on the yellow message that appears at the top. Now you could continue with the recording method (the top button on the right-hand side of the Selenium Window). However, since Scratch is built in Flash, we won’t be able to retrieve a Selenium element, so we will proceed with visual testing.
    7. Make sure your cursor is at the bottom of the script editor. Click the “start_web” button and input the URL in the quoted yellow text.
    8. Make sure your cursor is at the bottom of the script editor. Click the “airtest_touch” button and select the flag so it is within the highlighted rectangle. Double-click inside it to confirm the selection.
    9. Insert this piece of code (a loop that repeats the next statement 30 times, which will be the cat click):
       for x in range(0, 30): 
    10. Make sure your cursor is at the bottom of the script editor. Press Tab once so the next line sits inside the loop. Click the “airtest_touch” button and select the cat. Double-click to confirm the selection. Your code should now look roughly like the sketch after this list (I have added a sleep command to wait for the elements, but it should not be needed).
    11. Click play to see how the program works (make sure to activate Flash in the browser the first time so the program can load). Feel free to grab a snack while your computer does your job for you 😉
    12. Bonus step: verify that the cat bounces. You can do this by inserting an “assert_template” and selecting the cat (same as with airtest_touch). However, the assertion will pass if it finds a cat in any position, even upside down. Alternatively, you can insert a snapshot and check this part manually.
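For reference, here is a rough sketch of what the assembled script could end up looking like. I am writing it from memory, so treat the import lines and the exact method names (WebChrome, airtest_touch, assert_template) as assumptions to double-check against the code that AirtestIDE actually generates for you; the template image names are just placeholders for the regions you select in the IDE.

# -*- encoding=utf8 -*-
# Sketch only: compare with the code AirtestIDE generates on your machine.
from airtest.core.api import *                  # Template, sleep, ... (assumed import)
from airtest_selenium.proxy import WebChrome    # assumed import path

driver = WebChrome()                            # test browser opened in step 5
driver.get("http://scratch.mit.edu/projects/48422496/")   # step 7 (start_web)

driver.airtest_touch(Template("flag.png"))      # step 8: click the green flag
sleep(2.0)                                      # give the Flash project time to load

for x in range(0, 30):                          # step 9: repeat the next line 30 times
    driver.airtest_touch(Template("cat.png"))   # step 10: click the cat

# Step 12 (bonus): check that a cat template can still be found on screen.
driver.assert_template(Template("cat.png"), "cat still visible")

In the IDE you normally never type the Template arguments yourself; they are generated automatically when you select a region on the screen, together with extra parameters such as the recorded position and resolution.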


Since the Airtest project’s back-end (Poco) is based on Python (a programming language), you can insert loops and other instructions easily.

Because of its powerful IDE, which allows record/playback and the use of simple buttons, it is easy for beginners.

Since visual testing is possible, even within a Selenium test, it works with programs or websites in which the object hierarchy (DOM) is not easy or possible to retrieve (such as Flash applications).

Because it is open source and the IDE is free, it can be used for teaching.

There is much, much more that this tool can do than what is shown here. However, that is… well, another story.