API Testing

What’s an API?

An API (Application Programming Interface) is a set of calls that an application uses to communicate between its parts. For example, the user's view (browser or UI) communicates with some software component (on a remote server or within the user's computer) that performs the operations the application needs to function.

If you are curious about how this looks for a web application, you just need to check the 'Network' tab in your browser's developer tools. You can see there are many calls happening in the background when you try to reach a website.

Network tab on Chrome for google.com URL

If you click on one of the calls on the left-hand side, you will get some information like that shown on the right-hand side. The request URL is the address that the call was trying to hit.

For these web calls, there are two main methods: GET (to request some information from the server) and POST (to send some information to the server). You can learn more about other methods here.

The next field is also very important. This is the status message that we get from the server when the call is executed. In this case it is 307 (a redirection). If you are curious about what other status numbers mean, you can check this website if you are a cat person, or this one if you are more of a dog person.

There are two widely used protocols for sending information between devices: SOAP (Simple Object Access Protocol), which sends information in XML format, and REST (Representational State Transfer), which sends information in several formats such as JSON, HTML, XML and plain text (see this article for further explanation of the formats).
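To make the difference concrete, here is the same piece of data as it might travel in a SOAP-style XML body and as the JSON a REST API would typically return (the field names are made up for illustration):

<number>
  <value>42</value>
  <type>trivia</type>
</number>

{"value": 42, "type": "trivia"}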

Tools to test APIs

Please keep in mind that the tools mentioned below are not the only ones you can use for API testing. I'm talking specifically about these because they are the ones I've used in the past.

In the section after this one, I'll show an example of how to do an API test.

Swagger:

According to their website, Swagger is an open source and professional tool-set that “Simplifies API development for users, teams, and enterprises”

https://swagger.io/tools/swagger-ui/

I have used Swagger UI as a way to easily check API URLs and understand the calls, to then add them into my test code, but I have not tried Swagger's entire tool set yet. I think this is an easy way to communicate changes to the API across the team and document it.

Alternatively, developers should document their API calls in some other form, generally a list, as Twitter does here. The problem I have had with this option is that sometimes the documentation can be out of date, and then you need to dig into the development code to figure out the exact API call. With Swagger, the list of calls comes directly from the code, which makes it easier to handle and keep up to date.

Swagger is supported by SmartBear, the same company behind SoapUI, so for API testing with it, please check below.

SoapUI and Postman:

SoapUI is “a Complete API Test Automation Framework for SOAP, REST and more”. There is an open source version and a professional one featuring more functionality. The API testing part looks like this:

https://www.soapui.org/docs/endpoint-explorer.html

I took the image from their website and it is quite self-explanatory. Besides, there is a lot of documentation there to get you started.

Postman is a “collaboration platform for API development”. On the development side, it provides automatic documentation, so there is no problem with developers making changes to functionality and forgetting to update the documentation.

For API testing, it’s very easy to get started. You can create REST, SOAP, and GraphQL queries. It supports multiple authentication protocols (I talk about this later) and certificate management. Please refer to their website for further information.

Wireshark and Fiddler:

These two programs are very useful for analysing network packets. They are powerful tools and a must-know for security, network and performance testing, and for checking packets at a micro level. You can actually see the exact data sent over the network. However, if what you are looking for are tools for API testing, I would probably not go for them but for the ones above, because those are higher level and more specific to that task.

That said, I have used them before to test APIs that required specific security certificates and for debugging issues (especially performance ones). If you are interested in knowing more about Fiddler, I recommend this article. For Wireshark, this one.

https://blog.wireshark.org/

How to do this programmatically?

If you want to add this testing to your automation code, you get some help from the tools mentioned before. However, there are many ways of making these types of calls with different programming languages. For example, here is how to do a REST call with Python:

import requests

# make a GET call ("URL" is a placeholder for the real address)
response = requests.get("URL")

# do something with the result
response.status_code  # this gives you the status code mentioned above
response.json()  # this gives you the response body parsed as JSON

# make a POST call, sending the payload through the json parameter
response = requests.post("URL", json={"key": "value"})  # placeholder payload

It gets a bit more difficult when you need to add parameters or authentication, or parse some types of data, but there is plenty of documentation about it all. Let's see a specific example using the API provided by numbersapi.com.

import requests

response = requests.get("http://numbersapi.com/42?json")

print(response.status_code)
print(response.json())

The result when you execute the code above is:

200
{'text': '42 is the result given by the web search engines Google, Wolfram Alpha and Bing when the query "the answer to life the universe and everything" is entered as a search.', 'number': 42, 'found': True, 'type': 'trivia'}

With Python, you could play with the JSON data to easily retrieve and validate the text, the number, or simply that there is some result…
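For instance, a minimal check of that response could look like this (what you assert depends on what the feature is supposed to guarantee; these assertions are just an illustration):

import requests

response = requests.get("http://numbersapi.com/42?json")
data = response.json()

# basic validations on the response
assert response.status_code == 200
assert data["found"] is True
assert data["number"] == 42
assert "42" in data["text"]

# parameters and authentication are handled by requests too; the endpoint and
# credentials below are just placeholders
# response = requests.get("https://example.com/api/movies",
#                         params={"user": "lynx"},
#                         auth=("username", "password"))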

For more information about what exactly to test when testing an API, I think this post is wonderfully well explained (they use Postman as an example).

Why should I care? UI vs API testing

UI (User interface) testing is the best way to simulate the actual behaviour of the users. However, we tend to re-test things in the UI that could be covered already by testing the API (and in some companies this could be done by a different group or team).

Let's say a developer changes an API call. Let this call be the list of movies that someone liked. Now imagine this API call is not updated in some part of the application, the result being that the user cannot find their liked movies. What happens in the UI test?

We will then get that the UI test couldn't find an object. This could be due to the API call being wrong, a bug in the automation, a change in the way the object needs to be retrieved, a button not working properly, the object being hidden somehow…

However, if you have an API test for it, you should be able to see that the call is not retrieving anything. If you need to verify things such as the results of a search, it's probably best to use the API to check the entire list (which can be done with a quick comparison) and let the UI verify that a result appears where it should, rather than the result itself. Also, you should be verifying that the API call is correct (and update the test call if it is not).
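As a sketch of that idea, assuming a hypothetical endpoint that returns a user's liked movies:

import requests

# made-up endpoint and data, just to illustrate the split of responsibilities
expected_titles = {"The Matrix", "Arrival", "Coco"}

response = requests.get("https://example.com/api/users/42/liked-movies")
assert response.status_code == 200

api_titles = {movie["title"] for movie in response.json()}
assert api_titles == expected_titles  # the API test validates the whole list quickly

# the UI test then only needs to check that a result is rendered where it should be,
# instead of re-verifying every title through the interface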

Level up:

API calls are less likely to change than UI objects, and when they do change they generally come in different versions, so as not to disturb previous releases of the application. This means you might want to add functionality to verify which version is being tested.

It is also interesting to use this to speed up our UI testing, the most common example being the login step. This step is usually a bottleneck for the rest of the tests: if it fails, you don't know what else might be failing or passing, and you are blocked until the fix. Whilst it's super important to have a login test to make sure that your users can log into the application, performing UI login every time another test needs it slows down your execution.

Google login screen

What's the solution? You can use API calls to skip the login bit. Be careful when doing this; it wouldn't be secure to have an API for this in a production environment. An example could be to set up some unique tokens (see an example of doing this with SoapUI here) that expire quickly and send one alongside the URL to perform this skip, or to have an API call that sets some cookies or a session so the test is logged in.
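Here is a rough sketch of the idea, with a made-up authentication endpoint and cookie name (your application will differ, and this kind of shortcut should only exist in test environments):

import requests
from selenium import webdriver

# 1) authenticate through the API instead of the login form (made-up endpoint)
response = requests.post("https://test-env.example.com/api/login",
                         json={"user": "test_user", "password": "test_password"})
token = response.json()["token"]  # or a session cookie, depending on your application

# 2) inject the session into the browser so the UI test starts already logged in
driver = webdriver.Chrome()
driver.get("https://test-env.example.com")        # need to be on the domain before adding cookies
driver.add_cookie({"name": "session_token", "value": token})
driver.get("https://test-env.example.com/home")   # now loads as a logged-in user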

If you have other repetitive dependent tests, you should consider running API calls for them before continuing with the test that depends on them. This would considerably speed up your test execution, and your results would give you more trustworthy information about the problem.

That said, UI testing is the best way of ensuring everything works as per user behaviour, and E2E and integration tests should not be substituted by API tests; use them only as a help, and only if they are not increasing the complexity and failures of your tests.

Yet another level up: stats

Another interesting thing that you can do thanks to API calls is to find out information about your application and about how users are using it. To analyse and visualise calls at a bigger scale you can use tools such as Elasticsearch and Kibana, and even use artificial intelligence to draw conclusions from such calls, but that's… well… another story.

2019 Review and 2020 plan

While everyone has been preparing this year’s resolutions, I wanted to share the ones I had for last year. I was getting the feeling that I did not do very well with them, but after writing them down, it turns out I accomplished a lot and it makes me want to achieve more next year.

Photo by freestocks.org from Pexels

Doing more

One of the things people often ask me is: "how do you manage to get so many things done in your free time?" When I hear this question, my first thought is: "well, you obviously haven't visited my blog often…"

To me, if I can help one single person through the contents of my blog or inspire someone with my talks, I will have achieved my goal: I will have made the world a bit better. For this reason, I tend to wait for inspiration to write; I might be doing research for the next post, or working on learning a skill.

Therefore, once you are aware of your end goal, I think the first step to getting more things done is to find inspiration. You can get this from reading books and articles, attending talks, having active friends, joining courses or projects… But be careful, sometimes this can be counterproductive. Maybe seeing how everyone around you manages their time so well discourages you for not being able to do as well. Don't let this bring you down; everyone has their own tempo and you can start at any point in your life.

A trick I sometimes use when watching or listening to some content is what I call the '2x factor': set it to 2 times the normal speed. I don't recommend you do this with everything; for me it generally depends on the speaker's pace. When the speaker is talking slowly, or about something I already know or don't find very interesting, I increase the speed of the video. I generally find 1.5x is a good speed for most things; 2x might be too fast, and whilst I can understand everything, it makes me nervous.

https://www.pexels.com/@neo8iam

If getting inspiration is the first step, the second one, to me, is to get to action. Sounds easy? Get started on anything, even for 5 minutes a day, and you will be achieving more than you are right now. I particularly enjoy the 'pomodoro technique' (with rewards), in which you try to work 25 minutes straight followed by a 5-minute break (in which I grab a drink or snack). I find it very hard to finish the 25 minutes the first two or three times, but then I start hating the alarm for the breaks because it interrupts my flow.

Books

One of my new year resolutions was to track my readings. I’ve noticed I’ve been reading here and there, never really stopping to think about my progress.

The first thing I did to, firstly, check my progress and, secondly, inspire myself to read more was to join the Goodreads reading challenge. Since I was not sure how much I was currently reading, I set it to 24 books.

Photo by Pixabay from Pexels

I was shocked to realise that I read 6 books the first month. OK, let's be fair, half of those were audiobooks; I love walking around and having them help me get to more information. Because of travelling and other commitments, I didn't get to 6 books every month, but I also didn't track them all. I discovered that sometimes Goodreads didn't have the book I was reading, and sometimes it just removes books from your shelves without notice. I contacted their support (as suggested on that link) and all I was told was to re-add them and make up the dates I read them… This proved to me that it is not a good place to track books, so I'll be happy to hear of other options if you have any.

Commitments

In order to motivate myself, I like having some commitments that push me to do things, for example participating in meetups and conferences such as the Automation Guild (it's an online conference; you might still be on time to join this year's). The videos for it took me longer than expected, but I learnt a lot because of that. I presented at a total of 5 conferences around the world and spoke at a couple of meetups. If you want to know more about my past and future appearances, you can do so here.

Another way I have of motivating myself is by joining courses. Last year I wanted to finish the VR course I'd been pushing aside for a while. I've been doing some research in VR and that has taken priority over the course, but paying for a course monthly and not having time to work on it is really painful. On the other hand, not paying for a course makes me less committed to it. This year, I want to work more on my AI knowledge.

Activities

Another thing you can do to get inspired is to write a list of life achievements or values you want. If you have it nearby and read it often, it could help you focus on what’s more important for you. Also, try activities that get you excited, find other passions.

Some months I focused on improving my Chinese, practising piano and meditating. Besides, in summer I went back to Spain from China, so I was recovering from jet lag for a couple of weeks and then decided to enjoy some time off too.

Another new year resolution was to practise yoga at least 3 times a week and to go to the gym at least 3 times too. I've been quite regular with this, even when travelling, except for the times I've been sick.

Photo by Cedric Lim from Pexels

Next year, I would like to plan activities and periods of rest better, the same way I plan for work or conferences. Otherwise, commitments can get in the way and push everything out, and compensating later on does not seem to be the best solution.

Traveling

One of my 2019 resolutions was to travel more, and I think I surpassed my expectations: I travelled through China (Beijing, Xian, Zhangjiajie, Tongren, Guilin…), the US (San Francisco), Japan (Tokyo), Spain (Valencia), the UK (London), the Netherlands (Amsterdam), Germany (Berlin, Dusseldorf…), Switzerland (Zurich) and Russia (Moscow).

I moved to a new city but, because I was meeting many new people throughout the trips and did a lot of sightseeing in the places I visited, I did not feel like engaging in many activities there. I did go to a Muse concert (which was, as usual, quite amazing) and walked around a lot. I like doing this when I go to a new city, as I feel getting lost is actually the best way of getting to know a place.

Photo by Porapak Apichodilok from Pexels

What's next

Something else that works for me is to list the things I’ve done in the day at the end of it. Same for the year, as you’ve read. This is a way of being appreciative, which makes me want to achieve more.

Being grateful is also really important. Add it to your daily writing or morning meditation, if you wish. I struggled with this because some days I might not interact with anybody to be grateful to. However, you can be grateful to yourself and your achievements, and for the opportunities, the good that's on its way and the good that happens to others.

I've counted 12 posts that I have half written, so keep checking back because there is a lot to come. A couple of them are on VR (I might merge them into one), thoughts to share, AI, API testing… If you are especially interested in one of those, let me know so I can give it priority.

I like most of my last year's resolutions and I want to keep them, and I haven't added much for this year: do things to make the world better, block out times of rest, have more patience, look at the brighter side of everything, get an AI specialisation, write more on my blog, find a new app for my reading… Will I manage to do all of that? That's… well… another story.

How to test a time machine

I recently watched a video from PBS Space Time that got me thinking: if we were to have a time machine, how would we test it? I've seen a lot of "how would you test X" type questions in interviews, but I don't think I've ever seen this one before (I am not trying to give you ideas for interview questions!)

It’s not rocket science… it’s rocking testing science!

I couldn't help but compare it first to spaceflight, so I started wondering: how do they test spacecraft? And what better one than Apollo 11 to start with? If only we had a time machine for getting its source code… Yes, I have been looking at assembly code trying to make sense of potential testing routines, like this one… and… guess what I found there?

This section of the code is checking the Gimbal lock of the accelerometers! Do you remember the concept from my last post? Maybe I just have a case of Baader-Meinhof, but I do feel Gimbal lock is an important concept to learn, so check it out.

Testing in assembly was not as 'easy' as it is nowadays (for example, macros do not seem to be something I could find in the Apollo 11 programming syntax). Do not expect a page object model or a library with tests or testing functions. Nor common methods for before and after tests. Actually, don't expect any sort of OOP, to start with.

In my search I could find some files with tests in them, but they are mostly for stressing the hardware by sending signals to the different devices and recovering from bad statuses. Also, spacecraft might need to check the correctness of bits to make sure there are no catastrophic arithmetic errors.

https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Ariane_5ES_with_ATV_4_on_its_way_to_ELA-3.jpg/1200px-Ariane_5ES_with_ATV_4_on_its_way_to_ELA-3.jpg
from @wikipedia

Time traveling tests

Imagine we have covered the unit and integration tests for both hardware and most of the software for our machine, which could potentially match any currently existing ones. What are the specific cases we should cover for our time machine? The logical cases to think of:

1) The machine should not kill the traveler: We should insert some devices to measure that the cabin of the machine is livable. Also, keep in mind for the rest of the tests that spaceships are usually first tested with robots inside rather than humans. (Security test)

2) There must be some way of safely interrupting the process (risk assessment/testing): In case anything goes wrong, what's the risk involved for the passenger? (Security test)

3) As the machine goes forwards or backwards, the traveler's state stays the same while the surrounding world changes: We could measure this by first introducing some other object that we know decays and checking how much time has passed inside the machine. (Usability test)

4) There is a way to return: I assume we will want this, so we should test it. (Security test?)

5) Performance tests: how many times can it go backwards and forwards in time? (Performance but also, regression test?)

6) What is the minimum time it can travel? (Test it.) If the time machine requires a rapid negative speed (as the video above suggested as one of the possibilities), I imagine traveling in time as playing billiards or golf. Let me explain: when you try to sink a ball in a particular hole, you need to give it the right initial impulse (not too much, not too little) but also the right angle depending on the starting point. We are likely to travel through space as well as time, so it might be particularly difficult to stop at a particular moment or place. (Which may explain why nobody made it to the time travelers' party.) (Usability test)

Photo by Tomaz Barcellos from Pexels

7) Test boundary scenarios: go back to the start of the Earth (or universe) and forwards to its end. What happens just before or just after? Does time always exist? Can we travel to a point where there is no time? Technically these are tests we would like to do, but I think they are probably not doable, realistically speaking (can I use this expression in this context?)

8) Try to change a small thing in the past… does it change the future? If a change creates a parallel universe, then we would never recover the machine. If we test this, we should look for an event we can easily undo (like maybe turning a light on/off?) (Exploratory test)

9) Time traveler meeting same time traveler. Test paradoxes. (Exploratory test)

11) Test placing a box where the machine will be positioned before a trip. Then take the box away, position the machine on that same spot and test traveling to before we positioned the box. Does the box or the machine break? (Integration test)

12) Test traveling to when the box is still in place; does the machine or the box break? (Integration test)

13) Try to travel twice to the same exact time and place. (Integration test)

14) Could the machine travel between different universes? (If we have a way of doing so, which might be more likely if we are using wormholes for the trips than negative speed) (Integration test)

From futurism.com (and an interesting article)

15) Is there a maximum/minimum size of the machine? If so, test them. (Boundary test)

16) Can everybody use the machine or only qualified people? (Accessibility test)

Conclusion:

Maybe we have already invented a time machine but kept it secret because it is dangerous. Or maybe we have gone back in time to stop ourselves from inventing it, creating a parallel universe, and as a result our universe is the one in which such a machine was never invented (a time version of the Fermi paradox).

Whatever the answers to these questions, I hope you've found this exercise fun and enjoyed reading this post as much as I enjoyed writing it. Let me know if you can come up with more tests we could do on the time machine (if we were to have one). I will resume my serious posts soon; they will be… well… another story.

We need to talk about quaternions…

I know, I know, this doesn't seem like anything that has to do with testing. However, I found this concept to be very challenging and I think some people might be interested in knowing about it, especially if they are considering automating objects in VR.

The problem:

You want to be able to rotate the main camera with Google VR. Google's design has reached the conclusion that moving the camera in virtual reality should be something that the user does, not the developer (I think it was possible in earlier versions of Unity and the Google VR SDK).

So I decided to create an empty object and assign the camera as a child object of it. To move the camera, I move this object. To undo the camera movement done by the user, I move the object the opposite way. Should be easy, shouldn’t it?

Well, sometimes… but other times it does not work. I spent a very long time searching online for other people who had this issue, and it was either working for them or not; nobody would explain why this was the case. I believe the reason to be the effect of gimbal lock.

I know, that sounds like a made-up word, but this is actually an important concept that you should be aware of, so let me explain:

Gimbal lock is an effect in which the object loses one degree of rotational freedom, which is common when you use three-dimensional vectors. At some point, two of the rotation axes get lined up in such a way that every time you move one, the other moves as well. If you are a visual person, this video from GuerrillaCG (a YouTube channel with information about 3D modelling and animation) explains it quite clearly.

How did I understand what this means, and how did I figure out this was the issue? I decided to encapsulate the camera object in another object, and then that object in another one. Then I assigned them some colored spheres and pointers and I ran some tests. After a while, it became clear that movements were not coordinated and that the rotation variables were not working as expected.

Attempts to understand 3D issues

Introducing quaternions:

Switching from Euler vectors (the usual 3-component vector of rotations around the x, y and z axes) to quaternions (a 4-component vector) can help us avoid the gimbal lock effect.

Quoting Wikipedia: “In mathematics, the quaternions are a number system that extends the complex numbers. […] Quaternions find uses in both pure and applied mathematics, in particular for calculations involving three-dimensional rotations such as in three-dimensional computer graphics, computer vision, […]”

I know, it still sounds like gibberish… Basically (very basically), quaternions are a mathematical way of representing 3D rotations using 4 coordinates (representing them in 4D space instead), which an Irish mathematician called William Rowan Hamilton came up with and decided to literally set in stone, on a bridge.

https://upload.wikimedia.org/wikipedia/commons/a/a2/William_Rowan_Hamilton_Plaque_-_geograph.org.uk_-_347941.jpg
Commemorative Hamilton plaque on Broome Bridge. From wikipedia

They are difficult to understand and to visualise (as we don't intuitively know how a 4D space should be visualised), but the maths applied to them is easier than with 3D vectors. For example: if you want to rotate an object around the x, y and z axes, the object will not end up in the same state if you perform the entire rotation on x, then on y and then on z as it will if you do it in a different order (because each axis rotation is affected by the others too). With quaternions, however, you can express the combined rotation as a single operation and apply or undo it consistently (which is particularly convenient when you are trying to undo a movement or automate a 3D object).

I am linking to a lot of articles below for you to deep dive, but two main points to remember are:

  1. You should keep one of the values of the quaternion fixed (or set to 1), as this is the way of telling the quaternion whether there has been a rotation (in Unity, for example, the identity rotation is (0, 0, 0, 1)).
  2. Rotation angles in quaternions are half of what you would expect from 3D rotations (so be careful when calculating rotations in 4D). This is known as double cover and it's actually quite useful once you get the hang of it (see the sketch after this list).
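To make those two points concrete, here is a minimal Python sketch (using the same x, y, z, w layout that Unity's Quaternion uses) that builds a quaternion from an axis and an angle; the helper name is just for illustration:

import numpy as np

def quat_from_axis_angle(axis, angle_deg):
    # unit quaternion (x, y, z, w) for a rotation of angle_deg around axis;
    # note the half angle: this is the "double cover" from point 2
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    return np.append(axis * np.sin(half), np.cos(half))

print(quat_from_axis_angle([0, 0, 1], 0))   # no rotation: [0, 0, 0, 1], the w value stays at 1 (point 1)
print(quat_from_axis_angle([0, 0, 1], 90))  # 90° around z: [0, 0, 0.707, 0.707], w = cos(45°) (point 2)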

If you are interested in knowing more about quaternions, you can check this and this video from 3Blue1Brown (a YouTube channel full of interesting mathematical concepts with very easy explanations; I really recommend it). Also, I enjoyed the explanations in this article. If you are considering working with 3D object movement at some point, you should definitely watch these videos and play around with the simulators to understand everything better.

Conclusion

It is not my intention to go against the design of a system such as Google VR, and you should listen to the platform owners as, in general, it is wise not to tinker with the camera. However, sometimes I find it useful to undo a user's movement, for example for automation purposes or if the gyroscope is drifting on its own (more about phone sensors in this interesting article).
In these cases, the use of quaternions is so fundamental that it is definitely worth spending some time learning about them.
The next step after the automation of the camera and other 3D objects would be to automate the hand movements, but that's… well… another story.

How do I get into…? guide

One of the most common questions I get when people reach out to me about virtual reality (VR) is: how could I get started? Even though I have already written an article about this, maybe I should talk about my own experience instead, so you can get more specific examples. If you are not into VR, please bear with me, as what I'm about to tell you can help you get into any topic you wish to get into.

My Story

My first experience with a VR device was during a hackathon at Microsoft, when one of the interns brought his Oculus Rift. Back then it was a very expensive device, so it was very interesting to be able to play around with one. But I found that it still had some issues to solve, starting with adding hand gestures.

As life goes, sometimes you get stuck in what you are doing at work and don't get the time to investigate interesting new things. In my case, I bought a house and there was a lot of stress related to that. It was not until years later that I actually got the chance to try another device, this time on mobile at a meetup called "tech for good" in Dublin. At this meetup, they were using mobile VR devices for social impact. It was my first experience with phone VR and I thought: OK, now this is something that anybody can use and get, therefore it is something that is going to need testing.

After that, another hackathon (this time an open NASA hackathon) got my interest in VR and AR back. I highly recommend this hackathon, as I made really good friends there and we had so much fun building an AR/VR experience to navigate a satellite. My team (who won the local People's Choice award in Dublin) created an application that simulates a satellite in orbit (in AR) and translates to the view from that satellite (VR). If you are interested, here is our project.

When I found myself having more time, I started looking for information about VR. I found a Udacity course on VR and decided to take it. Back when I started, the course covered many topics, although they later decided to separate the courses into different specialties, which makes much more sense. If you are interested in seeing some of the projects I made during this course, check my GitHub account.

After that, I got interested in open source projects on AR and wanted to start doing testing there… However, life got in the way again when I moved to China. It’s still on my to-do list.

I was lucky enough to start working for NetEase Games in China right after, so I then had enough flexibility and hardware access to do some more research in VR, including some automated testing with Google Cardboard, which should now be integrated into the Airtest project (I know, not many people are using Google Cardboard anymore but, hey, you need to start somewhere… the other research is still ongoing).

I was also lucky to have the opportunity to attend the second Sonar in Hong Kong, which is a music and technology festival, and it showcased some cool new technologies and devices in VR (including aroma experiences and snow surfing).

Besides that, I started to think of plans and ways of testing VR applications too (as NetEase was working on some projects like Nostos, which I had the opportunity to try myself and really enjoyed).

Around that time, I gave a talk at the Selenium Conference in India gathering all this knowledge (which I talked about in this post). In order to prepare for this talk, I played around and created my own 'conference simulator' just to get ready for it.

Another thing I do frequently to gain knowledge in VR is to watch playthroughs and online reviews, as you can learn a lot from watching others play, and it can be very good for understanding your potential users if you are working on a game. I have also read some books on the matter (shout out to packtpub, which gives away free IT books every day!)

Have you found a pattern?

I know you have; if you are a reader of this blog you are surely a clever Lynx by now, but just in case, I have highlighted it in bold letters: Attending (and after a while starting) hackathons, meetups, talks and festivals, watching or reading related online content and books, and playing around in courses, open source projects, at work and on your own projects will get you into anything you are interested in.

It sure sounds like a lot of things to do, but the tools are already around you, and I'm talking about years' worth of experience here. Just take one thing at a time and you too will become an expert in the thing you are into. The biggest difficulty is to pick which one to take at any given time and to be honest with yourself about how much time you can spend on it. (I regret a ton not having put more effort into the AR open source project when I had the chance.)

Of course, if you are not really into it, then it will sound like a lot of work, in which case it's probably better to save yourself time and pick something else. I like to think of it as a test of passion, or, in the words of Randy Pausch from his talk 'Achieving Your Childhood Dreams': "brick walls". (By the way, this is one of the best motivational talks I've ever watched, and I actually re-watch it at least once a year to keep me motivated. Also, it mentions VR too 🙂 )

As you would imagine, this is not the only subject I spend time with or give my attention to; another big one for me is artificial intelligence, but that's… well… another story.



Hacking social media

I know, I still owe you some stories, but I am now inspired to talk about something else. Besides, today's issue is easier to put into words. I don't need to sit down and think carefully about a way of explaining some technical concept such as artificial intelligence without sounding boring. But, just so you know, I am still working on the other stories.

I would like to show you how dangerous social media could become and, on one hand, highlight the need to ask the right questions when a new technology comes along, in order to set proper tests and barriers (lynx are curious animals, aren't we?).

On the other hand, I want to highlight the importance of taking breaks from it and thinking about yourself and the things that would make YOU happy, instead of thinking of things that 'would make other people think that you are happy' and therefore approve of you. I hope you enjoy reading this article.

Let's think about it: the most viewed YouTube channels from independent creators belong to people below their 30s (or just into them), and many started them 5-9 years ago. They have been getting a lot of pressure from fans and companies that would like a piece of their influence. Celebrities with less direct exposure to their fans have done crazy things in the past because of social pressure. Yet these influencers are not invited (that I know of) to Davos or to famous lists of the most influential people, despite demonstrating incredible marketing strategies and knowledge of new technologies, having charisma and being very intelligent (more than they let show, in some cases).

Social media affects society: not only these influencers, but many people actually feel depressed or harm themselves because of social media. It is also a potential source of propaganda of all types and a source of advertisement of all sorts, by using the platforms' 'algorithms' in their favor.

There is a very important point to consider, which is the potential for hacking (if you are interested in this, there is more information here and here). So, imagine that someone could actually go in there and decide what you are going to see… how could this affect you?

Techniques and prevention:

Let’s imagine a platform that accepts comments and likes (let’s forget about dislikes). How could someone socially hack it?

1) Removing the likes: We would need to intercept the information that the platform is showing to the user and eliminate the likes the user will see. Maybe we should only eliminate them partially, so the user does not get suspicious about not seeing any likes at all. For this, we could have a weighted random variable that decides whether or not to eliminate each like. How would you feel if all of a sudden nothing you write got any likes? Prevention: From the test side, make sure the like system works properly and likes cannot come from anonymous sources. Make sure accounts are real. Make sure the user sees only real data. From the user's perspective, when you see something that you like, mention it in person; start a conversation about it instead of just clicking a button.

2) Liking specific posts: This is a bit fancier. Based on the above, we could have some sort of AI algorithm that classifies the posts. Then we can decide which posts are going to show as liked for the user. How would you feel if only some types of your posts got liked? Would that change your way of writing? Prevention: From the test side, make sure all information is shown to the user. From the user side, find your audience and focus on them. Also, consider talking with these people directly too (or at conferences). Try to stay honest to your goals for writing and who you want to reach.

3) Filtering comments: This would require some form of classification, as with the previous point. Instead of targeting the likes, we would target the comments, but the idea would be the same: eliminate from view those that are not 'interesting'. What would you think if you only received certain types of comments? Prevention: From the test side, make sure all information is shown to the user. Maybe have a conversation about the feature itself and allow users to hide all comments. From the user side, as above.

4) Creating comments: We could create new comments with AI. You might think the user would realize this, but if done carefully they might not even notice, or might not confront the person supposedly making that comment. Besides, the social media platform might allow non-logged-in comments. This adds to the feeling of the previous one. Prevention: Have a conversation about blocking anonymous comments or disabling them. From the user side, if you see a strange comment from someone that does not add up, clarify it with the person. It can also help with misunderstandings. Option 2: disable or stop reading comments.

5) Changing the advertisements around the website to ones convenient for propaganda or for harm (only for some types of social media). Prevention: Most sites have a way of deciding which advertisements you are most interested in. Also, try cleaning cookies regularly or using private browsing and VPN services.

6) Automatically extracting interesting information for malicious purposes. Prevention: Be careful with what you post; don't use information that is available to anybody as your passwords or security questions, and don't post pictures with personal data (such as passports or train tickets). If you really want to share pictures of a trip, try uploading them after the trip and enjoy it while you are there!

7) Connecting certain types of people. I am not 100% sure how this could be used for malicious purposes but surely someone would find a way. Making sure you can block people is also very important.

8) Taking things out of context. Prevention: It's very hard to delete something from the internet once it is out there, but some platforms allow it. Have you ever read your old posts? It is a good idea to do some clean-up every so often. Also, if this happens to you, keep track of the entire context. Maybe have a system in which you can review what you have written before it goes online; take some hours before posting to make sure you want to post it.

Why am I talking about socially hacking social media on a testing blog? Well, because if you happen to be working on developing a social media project, you should make sure that these attacks are not possible, and think about how the user could feel about other features to come.

(Please take some time to go through the articles I linked above to know more.)

Thoughts about being addicted to being connected:

When was the last time you did something good for someone but did not tell anybody about it? When you do something good for someone and post about it, how do you know you are doing it because of the other person and not to have a better image of yourself in front of others?

When was the last time that you went for a trip and didn’t share the pictures with anybody? What about not taking any pictures at all? There is something truly special about having a memory that is just yours to have.

Conclusion

Social media has evolved quickly in a very short time, and we need to consider a lot of new things, more so if we are on the development team of one of these platforms. We should really stop and agree on what is and is not ethical for each particular platform, and maybe even list the set of contraindications as we do with addictive substances. For example, would it be ethical for some platforms (maybe those for young people?) to change what the users see in order to protect them from bad criticism? Consider that this could, in theory, save some lives but, in contrast, it would take away some potentially good feedback disguised as bad comments. Maybe this is a feature you want people to turn on and off? If not, maybe you should list in your contraindications that it could be an issue. And I don't mean terms and conditions; it's not about saving your behind if anything happens, it's about actually alerting the user to what could be experienced. Terms and conditions are… well… another story.

Sources:

Some sources that helped me control my internet usage, if you are interested in this area:

Book: “How to break up with your phone” by Catherine Price.

Watch: Crash courses on navigating digital information

Talking at the Selenium Conference

Recently I had the amazing opportunity to present at none other than the Selenium Conference in India. Even though I have done some other talks in the past, this was a big challenge.

I would like to share my experience, for those lynx curious about talking at or attending conferences.

1) Submitting proposals:

The power of a deadline can sometimes be very impressive…at least for me.

I had been keeping a couple of proposals for things I wanted to share for quite a while, always postponing actually sitting down and properly writing them.

However, when I received a message stating that the last call for proposals for the Selenium Conference was in a couple of days, I told myself: "hey, nothing to lose, just write them down and send them before it is too late. Let's just get out of our comfort zone!"

2) Getting accepted and preparing for the conference:

After submitting the ideas, I got contacted about one of my proposals with a request for more details. There was some exchange of communications and after that… silence… nothing. I had no idea if they were happy with my replies at all.

A few days later, I received an email. Bad news: my proposal was not accepted. I was shocked; after so many messages, I thought they would have been interested in it. Luckily they provided a rejection reason: "We've already accepted another proposal from the same speaker".

Really?? I suddenly remembered: I did send two proposals. And right after that, I realised I had been accepted to speak at THE Selenium Conference… and I started to freak out. What now?


I took a deep breath.

I reviewed the published list of speakers and then realised that I had the pleasure of having met one of them before: Maaret Pyhajarvi. I braced myself, took all my courage and decided that if I was going to speak at the conference, I should be able to reach out for help.

I was very lucky that she was very nice and helpful. We arranged some online meetings (which was not easy because of the time difference) and she gave me plenty of valuable advice. I’m so happy that I asked Maaret for help and so grateful for her advice. I highly recommend that you watch her presentation and lightning talks, they were absolutely brilliant and inspiring.

OK, what else can I do to prepare for the day? (Besides planning and creating the slides and content, which is easier said than done)

As I was talking about virtual reality, I thought it was only fair to create an app for practising. So I asked the organisers for some pictures of the room I would present in, and I recreated it in VR, even adding some original murmuring sounds. If you are interested, it's uploaded here.

3) Travelling to India. Getting around. Impressions:

One of the reasons for me to apply for the Selenium Conference in India was that I was, at the time, located in China, so it was very close. I didn't need many days off, the trip was shorter and it was also fully covered by the conference. Also, I had never been to India, a great excuse to visit!

I was told to first book the trip and then request reimbursement. They actually helped me pick a better flight that was also a bit cheaper, and they dealt with the currency exchange. I am thankful to the organisers for helping so much with this.

They also booked me into the hotel where the conference was taking place and arranged a car to take me to and from the airport. I didn't know how useful this would be until I arrived. I would have had no idea how to get to the hotel once there, so once again: so grateful for this.

I didn't have time to travel a lot, just to meet with the other speakers for food. That said, it seemed that Bangalore didn't have much to see within the city, but if you go a bit outside you can visit places. Unfortunately I didn't have time for this, so my opinion about India can't really count for much from this trip. I'm going to have to go back for a better impression 😊

4) Impressions of the Selenium Conference:

I feel it was well organised and the food was delicious (although I did get sick coming back home, but apparently this can happen during your first days in India, from what I've heard).

I didn't have much time to see all the talks, but I watched some of the videos afterwards. I feel there was a good balance of speakers and that people should take into account the levels explained in the descriptions.

It was a pity that I was not able to attend all the talks, but that is OK because of the recordings. I also felt pressured because I was presenting, so I'm happy I presented early enough that I could relax afterwards.

I really enjoyed it overall; it was a great experience and an amazing opportunity for networking. All the speakers were very approachable, which I think is one of the biggest values of attending a conference.

5) The presentation. Self-feedback. Lightning talks.

What you are about to read now is a series of self-improvement tips, because, as a good Lynx, I am always learning. I am writing them more for myself than for you, but maybe this could help you too, if you ever consider presenting anything.

Looking back, I wish I had reviewed much more and trusted my own experience that you need more content than what seems enough when practising. I felt it ended up being a bit short. Learning: prepare extra content and practise a lot.

For some reason (probably the fact that it was THE SELENIUM conference) I was as nervous as if it was my first conference ever. Learning: breathe slow, try relaxation techniques before the talk.

I wish I had kept it more natural; after the fact I came up with more ideas to break the tension with the audience, but beforehand I didn't know how to do it. Learning: get information about the audience and think of how to break the tension with them and make the content more interesting.

I wasn't so sure about the content: it was a beginner talk, but because it was very generic I could not go deep into any of the testing types… I now wish I had spent longer on some of them or done a demo. However, I was relying on the questions to go a bit deeper… shockingly, there were not many questions. I think it could be a cultural thing, because other presenters told me that the same happened to them as well. Learning: keep only 5 minutes for questions and show more technical bits. Talk about what you think is important instead of waiting for them to ask.

I did blank out and forgot many, many things I was meant to say. I took my safety notes with me but still lost my way throughout. This was what I was most scared of. Learning: slow down, move through the notes as I go along. Have notes on the laptop, not only on paper; it looks more natural to look there.

Performance: my accent didn't come out clean. I should aim to speak slower. I also looked and sounded tired; I was. I should try to get better sleep before the presentation day. Learning: just before the presentation, get as much sleep as possible. Travel earlier to avoid jet lag. Record yourself to check whether the pronunciation is correct or you need to change some words.

The lighting was not the best: neither my shirt nor my slides worked well with the background and projector. Learning: bring spare clothes and a presentation with two background colours, and try them before the talk.

Video: it started late, so my introduction was not recorded. I didn't know I wasn't supposed to move, so I disappear from the frame every so often. Learning: ask the video person to record a short test video and let you see it. Ask if you can move around or should better stay in one place.

Luckily I could make up for it a bit during the lightning talks, in which I seem more like myself. But, unfortunately, not many people attended and they are not part of the video of the talk.

6) Conclusion

Even though I feel I could have done much better, I enjoyed the entire experience a lot. I learned plenty from it and I will be much more prepared for the next one.

If you attended or have watched the video, you might be interested in knowing more about a project that I mentioned I am working on… but that's, well, another story.

Hands on: automating Scratch with AirtestIDE

In one of my first posts on this blog, I went through a hands-on about how to automate Scratch. The goal was to be able to teach kids how to test, hand in hand with how to develop programs.

Back then, the only program I could find that would allow me to do this automation was not open source. The experience was very nice but, since the goal was to use it at CoderDojo, the conclusion was that a paid solution would not work out.

Currently, I'm part of the development team on an open source project called 'Airtest'. Please note that the intention of this post is not to advertise this product, but to provide a different solution that could actually be used at CoderDojo. However, I want to disclose it clearly: I work on this project and I am very fond of it.

Part 1: Scratch development

Scratch is a simple tool for getting started with development just by dragging and dropping easy statements. I've already explained how to create an app with Scratch in my previous post, so I am going to assume you know how to do this.

Let’s just use the project that I created as example: http://scratch.mit.edu/projects/48422496/

This program moves a cat (called a sprite) 10 pixels every time the user clicks on it. If it reaches a corner, it is supposed to bounce. The goal is to verify that the cat bounces when it reaches the corner.

Part 2: Using the Airtest project to automate Scratch

    1. Go to the official website: http://airtest.netease.com/ and download Airtest IDE. (Note: it is possible that you see some Chinese around as we are based in China, but don’t get too scared, everything should also be written in English)
    2. Click on “File” -> “New” -> “.air Airtest Project” to create a new Airtest test file.
    3. Make sure you have the “Selenium Window” option active by clicking “Window” -> “Selenium Window” (you can close the Airtest assistant, Poco assistant and devices panels).
    4. You also want to make sure you have your Chrome path configured in “Options” -> “Settings”. If you have Chrome installed and are using Windows, this is generally under C:/Program Files (x86)/Google/Chrome/Application/chrome.exe
    5. Make sure your cursor is at the bottom of the script editor. Then, click on the globe icon at the top left of the “Selenium Window” to open a browser for testing.
    6. Click “Yes” on the yellow message that appears at the top. Now you could continue with the recording method (the top right-hand button on the Selenium Window). However, since Scratch is done in Flash, we won't be able to retrieve a Selenium element, so we will proceed with visual testing.
    7. Make sure your cursor is at the bottom of the script editor. Click the “start_web” button and input the URL in the quoted yellow text. It will look like this:
      driver.get("https://scratch.mit.edu/projects/48422496/")
    8. Make sure your cursor is at the bottom of the script editor. Click the “airtest_touch” button and select the flag so it is within the highlighted rectangle. Double-click inside it to confirm the selection.
    9. Insert this piece of code (a loop that repeats the next statement 30 times, which will be the cat click):
       for x in range(0, 30): 
    10. Make sure your cursor is at the bottom of the script editor. Press Tab once. Click the “airtest_touch” button and select the cat. Double-click to confirm the selection. Your code should now look roughly like the sketch after this list (I have added a sleep command to wait for the elements, but it should not be needed).
    11. Click play to see how the program works (make sure to activate Flash in the browser for the program to load the first time). Feel free to grab a snack while your computer does your job for you 😉
    12. Bonus step: verify that the cat bounces. You can do this by inserting an “assert_template” and selecting the cat (same as with airtest_touch). However, the assertion will pass if it finds the cat in any position, even upside down. Alternatively, you can create a snapshot to check this part manually.
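Putting the steps together, the script assembled by the IDE looks roughly like this (based on the buttons named above; the exact imports may differ depending on your AirtestIDE version). The Template image names are generated when you click the buttons, so treat flag.png and cat.png as placeholders for your own captures:

# -*- encoding=utf8 -*-
from airtest.core.api import sleep, Template   # Template wraps the screenshots taken in the IDE
from airtest_selenium.proxy import WebChrome   # Chrome driver with the visual-testing helpers

driver = WebChrome()
driver.implicitly_wait(20)

# step 7: open the Scratch project
driver.get("https://scratch.mit.edu/projects/48422496/")

# step 8: click the flag (image captured from the IDE)
driver.airtest_touch(Template(r"flag.png"))
sleep(3)  # optional wait for the project to load

# steps 9 and 10: click the cat 30 times
for x in range(0, 30):
    driver.airtest_touch(Template(r"cat.png"))

# step 12 (bonus): check that the cat image is still found on the stage
driver.assert_template(Template(r"cat.png"))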

Conclusion

Since the Airtest project's back end (Poco) is based on Python (a programming language), you can insert loops and other instructions easily.

Because of its powerful IDE, which allows record/playback or the use of simple buttons, it is easy for beginners.

Since visual testing is possible, even within a Selenium test, it works with programs or websites in which the object hierarchy (DOM) is not easy or possible to retrieve (such as Flash applications).

Because it is open source and the IDE is free, it can be used in teaching.

There is much, much more that this tool can do than what is shown here. However, that is… well, another story.

Examples of AI applications and how to possibly test them

Recently I attended an online CrowdChat hosted by the Ministry of Testing about testing AI applications.

The questions were very interesting, but it was hard to think of a right answer for all AI applications, as this is a very broad field. Explaining it over Twitter would be confusing, so I thought I might as well create a post giving some examples.

Kudos to someone on Twitter who mentioned supervised and unsupervised learning at the end of the chat. I was very sleepy at the time (the chat started at 4 am my time), so I was not able to find his tweet in the morning to vote for it. I think we can better understand the types of AI applications we could have if we divide them into supervised vs unsupervised. More information here.

Supervised learning examples

The idea behind it is easy to understand: these applications have a learning phase in which we keep feeding them data and rewarding them when they produce a correct result, while punishing them when they don't, until the produced results match the expected results within a threshold (in other words, until we are happy with the results).

Let’s ignore for now the exact ways in which we can punish or reward a machine and just focus on the general idea.

After this learning phase, we generally just use the application and no more learning takes place. We "turn off" the learning. This is called the inference phase. Not all applications have an inference phase; sometimes we want them to keep learning from the users, but this can turn out to be problematic, as we will see further on.

I think these are the easiest AI applications to test, functionally speaking, as we just need to pass new data and check the results obtained against the expected ones. Apart from this, they behave just like any other application, and we can also go through the other types of testing without many changes (performance, security, system…).

NPR / OCR:

Imagine, for example, a number plate recognition system: once the system learns how to recognize the numbers in the license plate, you don’t have to keep training it. The application can use the learned patterns to verify new number plates.

There are many tests we could think of here, without caring about how the application gets the results: try characters with unusual typography (if allowed in the country), tilt the number plate, check the boundary distance from the vehicle…
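As a sketch of what those checks could look like in code (the recognise_plate function, its module and the image files below are hypothetical placeholders for whatever the real system exposes):

    # Hypothetical functional tests for a number plate recognition system.
    import pytest
    from npr_system import recognise_plate  # placeholder for the real system

    @pytest.mark.parametrize("image_path, expected_plate", [
        ("plates/unusual_typography.jpg", "AB12 CDE"),  # strange but legal font
        ("plates/tilted_plate.jpg", "AB12 CDE"),        # tilted number plate
        ("plates/maximum_distance.jpg", "AB12 CDE"),    # boundary distance check
    ])
    def test_recognises_new_plates(image_path, expected_plate):
        # New data goes in, the result is checked against the expected output.
        assert recognise_plate(image_path) == expected_plate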

An OCR (optical character recognition) application could also be done with this technique. In fact, the number plate recognition system could be considered as a specific type of OCR.

Digital personal assistants (Cortana, Siri, Alexa…):

Quite common nowadays, they help you find information using voice commands. They could also use supervised learning (although I believe the right classification for them would be “semi-supervised learning”, let’s think of them as just supervised for the sake of the example). However, in this case the application keeps learning from the users. It stays in the learning phase.

The reason they can ‘safely’ do this is that they collect data from the users, but not the users’ direct input on whether the result should be penalized or rewarded. An example of an application getting direct input from the user to keep learning would be a chatbot that guesses something and asks if the guess was correct. This could easily be tricked by dishonest users.

Applications that keep learning are much trickier to test, even functionally, because if we pass wrong inputs during testing, they will learn the wrong things. If I had to test one of these, I would test against a copy of the state of each iteration we want to test, in an isolated environment, so we don’t break the good learning already acquired. For performance testing it would be best to use valid data, to ensure the learning process continues well.
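A rough sketch of that idea (the model object and its respond method are made-up placeholders): copy the learned state, run the risky test inputs against the copy in isolation, and throw the copy away so the real application never learns from the test data.

    # Test against a throwaway copy of the learned state, not the real model.
    import copy

    def test_against_snapshot(production_model, test_inputs, expected_outputs):
        sandbox_model = copy.deepcopy(production_model)  # isolated environment
        for test_input, expected in zip(test_inputs, expected_outputs):
            # The sandbox copy may learn nonsense from our inputs; that's fine.
            assert sandbox_model.respond(test_input) == expected
        # The copy is discarded here; the real model never saw the test data.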

If anybody is concerned about AI gaining consciousness, this type of application would be the problematic one, as they could be learning things we are not aware of, depending on the power that the programmer and the user gave them and the data they are able to collect. This brings up the question: should testers be responsible for testing consciousness?

Unsupervised learning examples

The key to these applications is to discover relationships in the data without direct penalization or reward. They are very useful when we are not sure what the output should be, and for discovering things that we would not naturally think of as related.

There are two types: clustering (when the system discovers groupings in the data) and association (discovering rules that describe the data). I won’t go deep into them in this post, as there is already a lot of information here as it is.
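To give a flavour of the clustering type, here is a minimal scikit-learn sketch: the algorithm groups the data on its own, with no labels, rewards or penalties, which is also why there is no obvious “expected result” to assert against.

    # Clustering: the system discovers groupings in unlabeled data.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # no labels used
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_[:10])  # the groups it discovered, not "right answers"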

Tailored content and advertising (Amazon, Netflix, Google…)

These apps want to predict what customers who bought something would be interested in next. In fact, digital personal assistant tools could also use this data to help you find what you want (that’s why I mentioned before that they should be classified as ‘semi-supervised’ learning). I cannot think of any way of testing this other than checking the impact on sales after the application is in place, but that could be subject to chance or other factors not related to the application itself.

Apart from that, testing the application should be the same as what we already do with non-AI applications (not just the results, but how the user inputs the data and how the application responds and displays the data…). Imagine this as a feature of a bigger product; all the other features would need to be tested as well.

The moral impact of these applications, in my opinion, is that at some point they might be telling you (as a user) what you want, even before you know you want it.

What could possibly go wrong?

What should we be careful about in AI that might not need so much attention in other apps?

Things could go very wrong if we leave apps learning constantly and let the users provide the penalization or rewards. You have probably heard of applications such as image recognition systems and chatbots becoming racist and sexist. Sometimes this is because the data given to the application is biased, but it could also be because of trolls playing around with the application in unexpected ways and giving rewards when the application is wrong.

Also, leaving apps to learn on their own is not the best idea, as we do not control what they are actually learning, as mentioned before.

If you are interested, I found an article with some more examples of issues with AI applications here.

What else have you got?

Below is a list of readings that I found very interesting while researching for this post (a couple of the links are about video games and AI):

How the “Hello Neighbor” game’s AI works

AI predicting coding mistakes before developers make them

Examples of AI

Game examples of AI

How would you test these applications?

What do you think about the moral connotations?

If used well, AI could be harmless and powerful. In fact, it could also be a good tool that we could use for automating our testing, but that’s…well…another story.

Interviewing your interviewer (17 tips)

This topic is probably not only for testers but, since I have been dealing with it recently, I thought some people might find it interesting. Also, I know I owe you a couple of other posts (I’m working on them, but I am also moving houses and I have limited time to write right now).

I have been told that I ask good questions at the end of the interviews, so I’d like to share my system in case someone else finds it useful. Tip: the important thing is not to ask questions to impress your interviewer, but to use this time frame to find out things about the job and the company.

I usually get very nervous in interviews, even though I have interviewed people myself as well. I find that a good trick not to get so nervous is to think of the process as a two-way interview, in which you are also interviewing the company to verify whether you really want to work there.

I know this might be hard to take in, especially for beginners: “How could I be interviewing a company that I want to work for? I just want the job; if I didn’t like the company I would not have applied for it.” However, it is important to know as much as possible about the job you are about to be doing for a good while. That is why it is crucial that you ask as many questions as you can in order to understand how that particular team works and what is expected of you.

Another point, if you can, is to double check the answers against the interviewer’s reaction to the question. I mean, they are supposed to say good things about the company… imagine how bad it would look if the candidate said something like “well, I need to drop my application because the interviewer told me this is an awful place to work”. But, if you are paying enough attention, you can spot reactions such as long pauses or hesitation that could lead you to believe they did not feel comfortable with the question or are trying to sugar-coat their answer.

By now, you are probably thinking that this could be good advice, but you would like to see actual examples of these questions. I owe you a couple of code samples by now, so I won’t let you down on this one. Be careful: some of them might be in the job description, and asking them might show little research and be as annoying for the interviewer as when they ask you to walk them through your CV (which usually means they have not bothered to read it fully). Below are some examples of things you might need to know before joining a company (note: when I do an interview these come naturally to me and depend on the specific job; these are general examples that I can think of right now):

  1. What process are they following? (Agile, waterfall…)
  2. Would you be joining the team of the interviewer or a different one?
  3. How would you relate to the interviewer in the company?
  4. What technology are they using? (The description usually would mention one or two, but you might ask what would be the biggest used, or for a technology that is not in the description, for example, what they use for source control)
  5. Do they do code reviews?
  6. What’s the relationship between the developers and testers? Do they sit together? Share code? Do common code reviews?
  7. How often does the interviewer…? (meet clients, have team meetings, create new features, spend time on paperwork such as performance reviews…)
  8. How long are the sprints? (if using agile)
  9. How many times did the interviewer use *insert benefit* this month?
  10. How do they do the performance reviews? How do they measure performance?
  11. What are they expecting of the candidate?
  12. Is there a possibility of *insert benefit*? (getting bonus, stock, gym membership, learning expenses… this depends on what you are looking for in the company)

Some extra tips:

  1. Try to ask the right questions in the right interview: technical interviewers might not know the answer to an HR question (for example, benefits) and HR people might not have an answer to technical questions (for example, the technologies they use). You might be wasting their time and not getting your answer anyway, so it is better to save each question for the right round. (Be sure to learn about the rounds to know when to ask what.)
  2. If you can, try to phrase questions in a way that sounds a bit more personal to the interviewer; they are more likely to give you honest answers if you are asking for their opinions than for the company’s protocols. For example, the style of number 9 is more personal than number 12 for learning about benefits. While number 9 gives you information about the actual behaviour of your co-workers and the unspoken politics in the company, number 12 gives you room to negotiate a particular benefit that is not usually given (you can use this style with HR).
  3. I have said this already, but: don’t ask questions just for the sake of asking. Think ahead of time about what is not clear to you, and keep a pen and paper next to you so you can write down the answers. This helps you avoid repeating yourself and lets you remember everything at the end of the process. It might be something you can negotiate, or it might be a reason to discard the company (or the specific team). This not only gives you valuable information but also puts you in a more powerful and confident position during the interview, rather than feeling under test.
  4. Don’t get angry or depressed if you don’t get the job: sometimes it’s just a matter of luck and of getting an interviewer who connects better with you. Sometimes a company might offer different positions, or teams that use technologies more aligned with your experience. And above all: if your interviewer is pushing back because you don’t have exactly the same expertise as them, you are probably better off not working with that person anyway. I think the trick for a company to work well is to have people with different sets of expertise: they might not know something in the same depth as you do, but they might know a lot about something else, and you can both learn from each other.
  5. Keep trying and practising: doing interviews can be exhausting but, with enough practice and the right questions, sooner or later you’ll get just the job you want.

Let me know in the comments if you can think of other good questions to ask, and whether you like this sort of post about the interview process. I could tell you more about it, but that’s… well… another story.