Because it has been requested, here is the full-length video behind the playlist I created about how I built an automated testing system using Insomnia and its open-source capabilities.

The companion playlist contains 70 zoomed-in videos, each focusing closely on one of the topics covered in this long-form video. The short form is always a favorite among many of you, but for others, the long form is the favorite. The advantage of the long form in this case is that you get to see the whole IntelliJ setup in full screen, which can be, for many, a better way to visualize the topics discussed.

Have a good one everyone!

---
Source code:

- https://github.com/jesperancinha/wild-life-safety-monitor

---

Playlist:

- https://www.youtube.com/playlist?list=PLw1b-aiSLRZjPtoGsn5zl-TwWvqy3tzlh

---

As a short disclaimer, I'd like to mention that I'm not associated or affiliated with any of the brands that may be shown, displayed, or mentioned in this video.

---

All my work and personal interests are also discoverable across several other sites:

- My Website - https://joaofilipesabinoesperancinha.nl/
- Reddit - https://www.reddit.com/user/jesperancinha
- Credly - https://www.credly.com/users/joao-esperancinha/badges
- Pinterest - https://nl.pinterest.com/jesperancinha/
- Instagram - https://www.instagram.com/joaofisaes/
- Facebook - https://www.facebook.com/joaofisaes/
- Spotify - https://open.spotify.com/user/jlnozkcomrxgsaip7yvffpqqm
- Medium - https://medium.com/@jofisaes
- Daily Motion - https://www.dailymotion.com/jofisaes
- Tumblr - https://www.tumblr.com/blog/jofisaes

---

If you have any questions about this video, please leave a comment in the comment section below and I will be more than happy to help you or to discuss any related topic.

If you want to discover more about my open-source work please visit me on GitHub at:

- GitHub - https://github.com/jesperancinha

---

Transcript
00:00:00When we are implementing REST services, we usually have a lot of concerns.
00:00:14One of them can be resilience, another one can be robustness, and another one can be
00:00:18how our REST services are going to respond in face of adversity.
00:00:23We end up wanting to implement some kind of automated testing that will allow us to test
00:00:27the behavior of all our infrastructure and how the different services communicate with
00:00:32each other.
00:00:33There are multiple solutions for this.
00:00:35We sometimes use something called chain tests, and that is simply implementing our own client
00:00:40for our own services and see how that behaves with different requests made to the different
00:00:44microservices.
00:00:45Another one can be using Cypress, because Cypress does have good support for testing
REST endpoints; however, Cypress is designed more for the front end.
00:00:54And so, another thing that we can do is implement our automated testing with cURL or with wget.
00:01:02And here, we would have to rely on some kind of shell scripting, maybe some bash, some
00:01:06sh file, or maybe some batch file for Windows systems, and none of these solutions actually
00:01:13appear to be a perfect solution ever, until we come across things like Postman or Insomnia.
00:01:20These are tools that have a graphic user interface that allows us to automate our requests.
00:01:25Instead of programming our automated tests, we can now configure them, and these tools
usually do a lot of the heavy lifting needed to create these automated tests.
00:01:35I have used tools like these in the past, and with them, I was able to test my services
00:01:40manually, but I never really got to automate anything with Postman.
00:01:44Another thing that I started using often after Postman, like many of you probably already
have, is Swagger with the OpenAPI specification, making these tests via an interface
embedded in our application.
00:01:56And then, I found out recently about Insomnia, and Insomnia is a tool quite like Postman,
00:02:03and I was able to use it in one of my projects.
00:02:06If you have been following this channel, you already know by now that I have created a
00:02:09project called Wildlife Safety Monitor.
00:02:11In this project, I have explored Kuma and other open source projects like Kong.
00:02:15When I created this application, I created an array of services, and I created an interaction
00:02:19between them.
00:02:20And the idea was to send signals from an albatross called Piquinho, which would send its location
to the listener service.
00:02:30The listener service would then relay that data to the collector service, and then the
aggregator would be our access point for the different data that would be calculated, or
even the raw data in the database.
00:02:41So I thought, now that I want to explore how Insomnia works, would I be able to create
an automated test suite for my application that would precisely test that data flow from the
00:02:52listener all the way up to the collector?
00:02:54One important thing to know about Insomnia is that there are two versions.
00:02:57One is open source, and the other one has nothing to do with this series of videos.
00:03:01So what I wanted to do is to make sure that I would keep the project open source, and
00:03:06that I would only use the open source tooling available regarding Insomnia.
00:03:10And this is what this series is about, how I have created an automated testing system
00:03:14using Insomnia and its open source capabilities.
00:03:19Before we dive into our example, it is important that we make sure that we have in our local
00:03:23machine two command line applications installed.
00:03:26One of them is Kind, and the other one is kubectl.
00:03:29We will need Kind to install our registry, and we will need kubectl to perform operations
00:03:35in our cluster.
00:03:36Now to do that, I have created a script, and this is called install-cluster-apps-linux.
00:03:42And I will start it right now, and then I will go through the script with you to see
00:03:48what it does.
00:03:51If we go here into the actual shell script, we can see that the first thing that it does
00:03:57is to install the latest version of Kind.
00:04:00Then it will install a few command-line utilities: apt-transport-https, ca-certificates,
curl, and gpg.
00:04:08These are needed to add to our local apt repositories the keys necessary
to install certain specific versions of different command-line applications, namely, in this
case, only Kubernetes.
00:04:20Afterwards, we read from an API the latest stable
version of Kubernetes, and then use that in our commands to install the keys necessary
for Kubernetes, finally ending it all with an apt update and a specific installation
of kubectl.
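The real install-cluster-apps-linux script lives in the repo; as a hedged sketch of those steps, where the Kind version pin and the `minor_of` helper are my own assumptions (the download and apt-repository URLs follow the upstream Kind and Kubernetes install docs):

```shell
# Sketch only: the actual script is in the wild-life-safety-monitor repo.
set -eu

KIND_VERSION="v0.20.0"   # assumption: pin whichever Kind release you need

# Hypothetical helper: reduce a full Kubernetes version (e.g. v1.30.2)
# to the minor series used in the apt repository path (v1.30).
minor_of() { printf '%s' "$1" | cut -d. -f1-2; }

install_kind() {
  curl -Lo ./kind "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64"
  chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
}

install_kubectl() {
  sudo apt-get install -y apt-transport-https ca-certificates curl gpg
  # Latest stable version, as published by the Kubernetes project:
  K8S_MINOR="$(minor_of "$(curl -Ls https://dl.k8s.io/release/stable.txt)")"
  curl -fsSL "https://pkgs.k8s.io/core:/stable:/${K8S_MINOR}/deb/Release.key" \
    | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${K8S_MINOR}/deb/ /" \
    | sudo tee /etc/apt/sources.list.d/kubernetes.list
  sudo apt-get update && sudo apt-get install -y kubectl
}

minor_of "v1.30.2"   # prints v1.30
```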
00:04:40But this, of course, isn't enough.
00:04:41We also need to start our cluster.
00:04:43And to do that, I have also created a script, a makefile script that is located over here,
00:04:48and it is called kubernetes-init.
00:04:55And before I continue explaining what this script does, let's first start it because
00:04:59it takes a while to start our whole cluster.
00:05:01And therefore, I will just call it like this, and there.
00:05:06So now the script is running.
00:05:08We will have our cluster running in a few minutes, and in the meantime, let's see what
00:05:12it does.
00:05:13The first thing that it does is clean up our machine after any previous runs of our code.
00:05:17It simply removes the configuration, then the cache, and then all the clusters related
to Kind.
00:05:27In the second phase of this script, we call b, and b is short for build, and what it does
00:05:33is simply build all our Gradle submodules.
00:05:36In the third phase of this script, we create our cluster, and this is simply kind calling
00:05:40create cluster with the name WLSM Mesh Zone, and then using kubectl to show us the details
00:05:46of the newly created cluster.
00:05:48In the fourth phase, we are creating a local registry.
00:05:51This is where we are going to push our images needed to start our containers in our Kubernetes
00:05:57cluster.
00:05:58So we start the registry with a script called kind-with-registry.
00:06:02This script is available on Kind's own website, and anyone can download it and use it.
00:06:09Once that registry has started, we can finally create and push our images.
00:06:14And so in this case, we are going to go through every single module, and then push every single
00:06:19image to the local registry.
00:06:22And at this point, you may ask, why do we need a local registry?
00:06:26I'm trying to avoid using the Docker Hub registry in this case, therefore making the
project completely independent, avoiding any cloud services at this point and just
using our local machine for it.
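As a rough sketch of those two phases, assuming the 5001 port mapping that Kind's documented kind-with-registry helper uses, and with hypothetical module names (the real list lives in the Makefile):

```shell
set -eu

REGISTRY="localhost:5001"                                 # port from kind's registry example
MODULES="wlsm-listener wlsm-collector wlsm-aggregator"    # hypothetical module names

start_registry() {
  # kind-with-registry essentially runs a registry:2 container and
  # connects it to the kind network; see the script on Kind's site.
  docker run -d --restart=always -p "127.0.0.1:5001:5000" \
    --name kind-registry registry:2
}

# Pure helper: the tag a module's image is pushed under.
tag_for() { printf '%s/%s:latest' "$REGISTRY" "$1"; }

push_images() {
  for m in $MODULES; do
    docker build -t "$(tag_for "$m")" "$m"
    docker push "$(tag_for "$m")"
  done
}

tag_for wlsm-listener   # prints localhost:5001/wlsm-listener:latest
```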
00:06:38In this sixth phase, we are applying all our different Kubernetes template files, and we
00:06:43are going to loop through the different modules that we have.
00:06:47And so important to highlight here is the kubectl apply with --force.
00:06:53This, of course, is for a demo; --force is not generally advised.
00:06:57And with this force flag, we ensure that in our demo project, we override any possible
pre-existing configuration in our Kubernetes cluster.
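A sketch of that sixth phase, with hypothetical module directories and template paths; the point is the loop plus the forced apply:

```shell
set -eu

MODULES="wlsm-listener wlsm-collector wlsm-aggregator wlsm-database"

# Pure helper: the template file each iteration would apply (path is an assumption).
template_for() { printf '%s/kubernetes/deployment.yaml' "$1"; }

apply_all() {
  for m in $MODULES; do
    # --force deletes and recreates resources on conflict:
    # acceptable for a demo, not advised in production.
    kubectl apply -f "$(template_for "$m")" --force
  done
}

template_for wlsm-listener   # prints wlsm-listener/kubernetes/deployment.yaml
```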
00:07:05At this point, we have everything starting up, we only need to wait until our clusters
00:07:09are ready.
00:07:10And for that, I have created a script that simply waits for the clusters to be ready.
00:07:15So this means that when we check the states of the pods, they all need to be ready.
00:07:20And that's exactly what this WLSM wait script does, which we can have a look at right over
here.
00:07:26And as we can see in our shell script, we are only waiting for our containers
to be ready.
00:07:31This is a check for all of the containers.
00:07:32So all of them need to already be running before this script stops blocking the execution.
00:07:38And that means that at this point, everything has run successfully.
00:07:42And we see that all containers and all pods are running and ready.
00:07:45So this means that our cluster is ready for testing.
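The waiting idea can be sketched like this; `get_ready_flags` is stubbed here so the loop's logic is visible (and runnable) without a cluster, and the kubectl query in the comment is only an approximation of what the real script does:

```shell
set -eu

# The real version would query the cluster, roughly like:
#   kubectl get pods -o jsonpath='{.items[*].status.containerStatuses[*].ready}'
get_ready_flags() { printf 'true\ntrue\ntrue\n'; }   # stub: three ready containers

all_ready() {
  # Succeed only if no container reports a non-true readiness flag.
  ! get_ready_flags | grep -qv '^true$'
}

wait_for_pods() {
  until all_ready; do
    sleep 5
  done
}

wait_for_pods && echo "cluster ready"
```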
00:07:48But before we start testing our cluster, we need to open the ports.
00:07:51And for that, I have another script in the make file, and that is called open all ports.
00:07:57And this will just activate port forwarding in all of the critical ports that we want
00:08:03to use for our cluster.
00:08:05These are the 8080 for the listener, then we've got the 8081 for the collector, then
00:08:11we need the 8082 for the aggregator, and finally, the one for the database deployment.
00:08:17The last one is for Kuma, but we are not using Kuma here, so this will fail when we
00:08:22run.
00:08:23But that's not a problem, because the execution will continue, and all we want is basically
to be able to access our cluster from the outside.
00:08:30So therefore, I will just start the open all ports, make file script.
00:08:36And now we see here in the command line that forwarding has been activated.
00:08:41So this means that we now have all our containers available, they are all running, and we are
00:08:46ready to perform tests.
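Sketched as a shell function, with assumed deployment names and an assumed 5432 for the database; the `|| true` mirrors the point that the Kuma forward is allowed to fail without stopping the rest:

```shell
# Hypothetical open-all-ports: background port-forwards per service.
open_all_ports() {
  kubectl port-forward deployment/wlsm-listener   8080:8080 &
  kubectl port-forward deployment/wlsm-collector  8081:8081 &
  kubectl port-forward deployment/wlsm-aggregator 8082:8082 &
  kubectl port-forward deployment/wlsm-database   5432:5432 &
  # Kuma is not used in this run; let this forward fail without stopping us:
  kubectl port-forward svc/kuma-control-plane 5681:5681 || true
}

# Pure helper listing the critical ports, so the map is visible here:
port_map() { printf '8080 listener\n8081 collector\n8082 aggregator\n5432 database\n'; }
port_map
```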
00:08:48But before I dive further into testing, let's go through our application once more, but
00:08:53in this case, zooming in at how everything is set up together.
00:08:56And the first step is probably to have a look at the database.
00:08:59So let's do that.
00:09:00In the database, we have seen that there's a lot of different tables, and they define
00:09:05what an animal is, and what the genus and species they belong to.
00:09:11But the most important thing is the animal itself.
00:09:13So let's read the table and see what's in there.
00:09:17When we read this table, we find the two animals that we've mentioned in the beginning of this
00:09:22series.
00:09:23One is Piquinho, and the other one is TestPiquinho.
00:09:25So the first ID, if you recall, is created randomly, and the second one is fixed.
00:09:32Let's have a look at the first one, which is with this ID.
00:09:37So when we start a database, this row is created.
00:09:42It has a species ID, pretty much the same as the second animal that we have created.
00:09:47But let's now copy this ID and use it in our test request.
00:09:52And before we start out with insomnia, let's first see how this behaves with our scratchpads
00:09:57in IntelliJ.
00:09:58I have one here called test requests, and we can now make a request to our listener.
00:10:04So let me put first the ID over here.
00:10:06We have latitude and longitude.
00:10:08We can change these numbers at will.
00:10:10We can, for example, put one like this, and then put like this.
00:10:16So we've got 34556.
00:10:18For latitude and longitude, we have 34556 and 123124.
00:10:20These are not valid latitude and longitude numbers, but these are just numbers that are
00:10:24acceptable to insert in our database.
00:10:26This post request represents the sending of the signal from the sensor and giving us the
00:10:31location of our animal.
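Outside the IDE scratchpad, the same request could be reproduced with curl. The path, the field names, and the placeholder UUID below are all assumptions; check the listener controller and your own animal table for the real contract and ID:

```shell
set -eu

# Placeholder only: copy the real UUID out of the animal table.
ANIMAL_ID="00000000-0000-0000-0000-000000000000"

# Build the JSON body the listener is assumed to expect.
payload() { printf '{"animalId":"%s","latitude":%s,"longitude":%s}' "$1" "$2" "$3"; }

send_location() {
  # Hypothetical endpoint path behind the 8080 port-forward:
  curl -s -X POST "http://localhost:8080/api/wlsm/listener/create" \
    -H 'Content-Type: application/json' \
    -d "$(payload "$ANIMAL_ID" "$1" "$2")"
}

payload "$ANIMAL_ID" 34556 123124
```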
00:10:33So let's do the post, and we can see now that the request has gone through.
00:10:46We can see that our request has gone through, but what exactly happened to our request?
00:10:50Let's have a look at the code and see what it does.
00:10:53If we go to the listener, and we go here to source, main, Java, and go to our controller,
00:11:00we can see that this is a traditionally implemented Spring application where we have a post mapping
00:11:06for a create function, which is called sendAnimalLocation, and that returns the animal location that
00:11:12has been put into the database.
00:11:15But there is no database in this case.
00:11:17What we call here is a persist method from the listener service that will add an animal
00:11:22to the cache, and then it will post it to our collector service that is located right
00:11:29over here in this URL that you can see over here at the top.
00:11:33The collector is located over here, and this is the source code for it.
00:11:41In the controller, we can see that we've got another post mapping for animals, and it's
00:11:49called listenAnimalLocation, and it has a request body that matches the contract of
00:11:55the listener service, and will now try to persist this value.
00:12:00What it does, it calls this persist method, which will then use this persist method that
00:12:05will then publish our event to our application event publisher.
00:12:11So this means that when we publish this event, it will be eventually handled by an event
00:12:16handler, and this is located over here.
00:12:19It's called the event handler service, which contains an event listener for the animal
00:12:23location event that we have sent via the event publisher.
00:12:27And this is finally where we persist our data to the database.
00:12:30This is where the location of the animal gets persisted into our animal location table.
00:12:37So let's now perform our request to see if our location gets to the database.
00:12:43So let's go back to our Scratchpad, and that is located over here.
00:12:47As we know, test request, as we have seen before, and it's going to make a POST request
00:12:51to the create endpoint that will allow us to give in the actual location of the sensor.
00:12:58Let's double check the ID of the animal.
00:13:15Let's now perform our request.
00:13:17If I press play here, I will see that it will take a while to get the request done,
00:13:23but we will get a 200.
00:13:24A 200 in this case, as we have seen before, means that the data went through successfully
00:13:30via the listener through to the collector, and the collector has sent the event via an
00:13:37event publisher to where we then handle this event in an event handler.
00:13:43And finally, at that point, we persist our data to the database.
00:13:47If we have a 200 at this point, we should already be able to see the location of Piquinho.
00:13:52Let's have a look in the database if that is true or not.
00:13:56To check on animal location, we see that we have one location there, and let's double
00:14:00check with our request if this makes sense.
00:14:03If we look at the latitude and longitude, we see that latitude has 34556.
00:14:08Let's see if that matches with our request, 34556.
00:14:12Then the longitude, we need to find in the database, 123124, and what do we have in the
00:14:18database?
00:14:19123124, and the animal ID matches the one we see in the test requests, and of course,
00:14:25animal location gets its own ID.
00:14:27Let's send another location just to make sure that we are doing this correctly, and let's
00:14:30say that the location is the same, and we put a zero.
00:14:34Of course, these are not valid values for latitude and longitude.
00:14:37This is just for our project.
00:14:39Let's post it again, and if we look at the animal location, we see another location there.
00:14:43What did we just do at this point?
00:14:46We have tested a successful case from the listener to the collector to the database.
00:14:51Three services are being checked right over here.
00:14:55Another thing that we could potentially do is go over to the aggregator service, and
00:15:09in the aggregator service, we will find two controllers that will provide us with helpful
00:15:14endpoints.
00:15:15The animal controller has an endpoint that is a GET mapping of all the animals located
in our database.
00:15:22This is implemented with WebFlux, and if we go through the whole chain, we can see that
the service simply makes a call to findAll in our animal repository, and the
animal repository is simply a repository that will look into the animal table in our database
00:15:36using Java records.
00:15:39Let's have a look at what we have in the database for this point.
00:15:41We've got Piquinho and TestPiquinho, so if we make this request, we are expected to find
00:15:46at least two animals when we perform it.
00:15:50For that, let's simply map this to our Scratchpad, and then simply perform a GET request to that
00:16:01endpoint, and we should be able to see here the two animals, and that's exactly what we
00:16:07have.
00:16:08We've got Piquinho and we've got TestPiquinho, but this isn't the only endpoint that we have
00:16:12available for the aggregator.
00:16:14We also have here the location controller, which is important because here we can also
00:16:18see the location of our animals.
00:16:21For that, we've got this endpoint, which gets us all the locations of the animals available.
00:16:26Remember that the aggregator service is meant to be a read-only service that we can use
00:16:31to read our data, whatever that data may be, and at this point, it only reads data.
00:16:35It doesn't perform calculations yet.
00:16:39We can do the same and pass this request to our Scratchpad, and there perform the location
request, which should give us now two locations of Piquinho, which are these two.
00:16:52The one we put before, and the second location with the zero behind it.
00:16:56So this gives an idea of what we want to test with
this architecture.
00:17:03It is simple on one hand, because it's very easy to understand, but on the other hand,
00:17:08there are lots of moving parts over here, and of course, if we want to see if this is
00:17:12going to work in a production environment, then we need something more robust to test
00:17:16it, not just someone manually making these requests in the Scratchpad, and checking if
00:17:20everything is okay with our application.
00:17:23Implementing all of this stuff with curl or wget requests would be fairly cumbersome and
00:17:28difficult to maintain, as we have seen already in the installation scripts.
00:17:32Shell scripting is very easy to use for the installation
of command-line applications, but when it comes to testing, it becomes something completely
different, with a much higher level of complexity.
00:17:48And this is the point where we can choose to make our own clients that will check over
the code, something that is usually called chain tests, and that will also
add another layer of complexity to everything we are trying to test for; and because these
are only backend services, using Cypress is probably not a good idea, even though its
support for testing REST endpoints is actually quite good.
00:18:13And this is where we begin to think about alternatives like Postman, or Insomnia, or
00:18:17any other alternative.
00:18:19But in this series, we are going to talk about Insomnia, so let's look at it right now.
00:18:24If I open Insomnia, we then see this dashboard with all of the different stuff that we can
00:18:31configure in Insomnia.
00:18:33These three different documents are already pre-configured,
00:18:38but before we dive into them, what we are going to do right now is to see some aspects
00:18:44of Insomnia, how can we create document files that we can use in our automated testing.
00:18:49To be able to start with Insomnia, we first need to install it of course, and for that
00:18:53I have created scripts.
00:18:55These shell scripts are located in a specific folder in our project called WLSM Insomnia
00:19:01Test.
00:19:02And if we go inside of it, we will see inside Insomnia, the Install Inso Linux and Install
00:19:09Insomnia Linux.
00:19:10These are scripts that are going to be used by our GitHub pipelines to install these two
00:19:16programs.
00:19:17Now at this point, you're probably wondering why do we have two different programs.
00:19:21Insomnia is a desktop application that we can use to configure our tests and to configure
00:19:25our requests, much like if you are used to Postman, it's kind of like that.
00:19:29And if you haven't used any tools like this, then this video is great for you because I
00:19:33will go through that as well.
00:19:35And Inso is just a short for Insomnia, and it's just a common line application that has
00:19:41features related to Insomnia.
00:19:43One of them is to be able to start our tests in an automated way.
00:19:47The Install Insomnia Linux is made specifically in this case for Ubuntu, and it will install
00:19:53via its Debian package version.
00:19:56So if you have an alternative, please look at their website to find out the way to install
00:19:59for your operating system.
00:20:01And let me know again in the comments if you have any problems with installing this.
00:20:06The Inso command-line application installation is also done via this shell script that I
have also created, and this installation is based on a tar.xz file available in the
releases section of their GitHub repo.
00:20:20These two shell scripts are created from the shell scripts that are already available on
00:20:26their website or on GitHub.
00:20:28Make sure to look for them if this is something that will not work on your machine, if you
00:20:32see that this is something that is not compatible with your operating system.
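A rough shape of the two install scripts, with the download URLs deliberately left as placeholders; take the real ones from Insomnia's site and from the inso releases page on GitHub, as the video suggests:

```shell
set -eu

# Placeholders: fill these in from insomnia.rest and the GitHub releases page.
INSOMNIA_DEB_URL="${INSOMNIA_DEB_URL:-}"
INSO_TAR_URL="${INSO_TAR_URL:-}"

install_insomnia() {
  # Ubuntu path: download the Debian package and install it via apt.
  curl -Lo /tmp/insomnia.deb "$INSOMNIA_DEB_URL"
  sudo apt-get install -y /tmp/insomnia.deb
}

install_inso() {
  # inso ships as a tar.xz archive in the releases section of the repo.
  curl -Lo /tmp/inso.tar.xz "$INSO_TAR_URL"
  tar -xf /tmp/inso.tar.xz -C /tmp
  sudo mv /tmp/inso /usr/local/bin/inso
}

# Small capability check used before running either installer:
have() { command -v "$1" >/dev/null 2>&1; }
have sh && echo "shell available"
```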
00:20:36Once you've got Inso and Insomnia installed, it's time to run Insomnia, which is what we
00:20:41just saw over here.
00:20:42And the starting point for how we design test projects in Insomnia is most likely the open
00:20:49API specification of your endpoint.
00:20:51If you are using IntelliJ, there's a great thing that you can do to get this file.
00:20:57And that is if you go to the endpoint that you want, which in our case is the listener
00:21:01service endpoint, we can then go over to the project, which is located over here.
00:21:09And if we get to the controller, we get over here, we can generate the open API draft for
00:21:16this endpoint.
00:21:17And the great thing is that it will create a file already ready for us, and it will also
00:21:25generate a kind of a graphic user interface that we can use already.
00:21:29The only thing that it doesn't do really well is the detection of the URL, because it doesn't
know at this point which URL we want to use for this GUI that it generates.
00:21:38And so what we can do is go to our test requests file, where we have already created a post
00:21:44request for it, and here extract the endpoint, which will be this one.
00:21:56So this is our root endpoint, and the create endpoint will be defined over here.
00:22:01With this definition file, we are ready to create our first draft of our document inside
00:22:06Insomnia.
00:22:07So let's copy this, and let's create a new document next to the existing ones that we
00:22:14are going to have a look at further down the line.
00:22:16And in this case, we just press create, and then we do an import, and then we do clipboard.
00:22:24And without the need to press any other key, we just do a scan.
00:22:28And what it does, it checks and validates whatever we have in our clipboard, and it
00:22:33will create a workspace, two requests, and two environments with zero cookie jars.
00:22:39Let's see what happens.
00:22:40Now we have created a new document, and this document is this one over here.
00:22:46We have just created our first draft for our document definition for our tests.
00:22:53If we do a get info, for example, right now with the application running, we will find
00:22:59that this is listener service version one.
00:23:01So that means that Insomnia is already able to communicate with our endpoints.
00:23:06If we try, for example, to send information about the sensor location at this point, we
00:23:11can try it out, and simply send a payload where the latitude and longitude are zero,
00:23:15but the animal is the one we need, and this one will be Piquinho.
00:23:20And therefore, we can go into the database and get its ID, and then go over here, execute,
00:23:27and then we see that it executes correctly.
00:23:29And now we would expect to find a location of zero zero in our database.
00:23:35Let's see if that's the case.
00:23:37Check animal location, and we see that we've got latitude and longitude zero.
00:23:42The location of the animal is there now, and that means that Insomnia is now able also
00:23:48to send sensor location data to our REST services.
00:23:52This is great, but this is still not the test.
00:23:56For that, we still need to define collections, and let's do that right now.
00:24:00If we go over here to spec, and click on this cog wheel, we can then generate a
00:24:06collection.
00:24:07And what it does, it goes straight into the collection tab, and there we will find our
00:24:11requests post and get.
00:24:13This is where this starts to get very interesting.
00:24:16We will find that there is a warning here, and it says that attempt to output null of
00:24:20undefined variable.
00:24:21There is a base URL already created for us that now Insomnia is requesting us to define
00:24:26it.
00:24:27And we can do that in manage environments.
00:24:29We can do that over here.
00:24:30We can open it, and we can define in the base environment, for example, our base URL to
00:24:36be the one we have mentioned before, which will be from the test request, this one, and
00:24:45just up to listener.
00:24:47So this is our base URL.
00:24:49We need to make sure that base URL is double quoted.
00:24:52This is the only way that this JSON file will be interpreted.
00:24:56So if we close this one, and now run our get, we should be getting the services version
00:25:01in our response, which is exactly what we see here.
00:25:05But get info is just a simple get request.
00:25:08If we make a post request, then we will find that our body is still filled with just string
00:25:13as the first example, and latitude and longitude still with zero as well.
00:25:17Here we can also add the ID of the animal we are testing.
00:25:21In this case, it is Piquinho once again.
00:25:24And then here we can add it over here, and then we can send our request.
00:25:29We have sent a new location, which means that if we go to our database, we are able to see
00:25:33that there's another location added with latitude and longitude set to zero.
00:25:39This is great.
00:25:40This is where Insomnia starts to get really interesting.
00:25:43Looking at this, we see latitude and longitude with two different values, which in this case
00:25:47are both set to zero.
00:25:49And we can assume that we can change these values to something like this and something
00:25:53like this.
00:25:54And if we send it through the wire, we get latitude and longitude with different values.
00:26:00We can, however, program latitude and longitude with different kind of values using Faker.
00:26:06And Faker allows us to select a random value for these fields.
00:26:11And in this case, it makes sense to put latitude with a random integer.
00:26:15And the same thing goes for longitude.
00:26:21And now if we make our request, we can see that we have a new location given to our database
00:26:27with random number for latitude and longitude that we can then look in our database and
00:26:32see that they have been added just as we can see right over here.
00:26:39But we still have one problem, and that is this animal ID.
00:26:42The animal ID is something that we know at this point that is fixed, and we read it and
00:26:49we were able to read it from the database.
00:26:52But normally from the outside, you cannot read these animal ID values.
00:26:56We have, however, an endpoint in the aggregator that gives us just that.
00:27:01Let's have a look at that endpoint from our IDE.
00:27:05And so if we go to the test requests and we have a look at how do we get the animal list,
00:27:11we can see that the endpoint is accessible via port 8082 and at this URL.
00:27:18That means that if we call the aggregator on this endpoint, we'll get a list of all
the animals in our database, and these are exactly these two, Piquinho and TestPiquinho.
00:27:29We want the first one, and so one of the things that is important in Insomnia to do in this
00:27:35case is to get the first animal out of our database.
00:27:38And for that, we will create manually a new request, and this request will have as its
00:27:44URL this one, and then we can try it and see that we get the two different animals
00:27:51here.
00:27:52So essentially, we need to get the first element of the response and the ID of the first element
00:27:57of the first response.
00:27:59How do we do that in our requests?
00:28:02Let's just first give it a nice name.
00:28:07Let's go into its settings.
00:28:10Instead of "new request", let's call it "get animals", for example.
00:28:18We can then save it like this, and then we can go to our post create location, which
is this one, and then say over here that we want to put on this field
something that comes out of the raw body of a response.
00:28:50So we need to select it and then say here that we want the response of get animals,
00:28:55and we already have here a live preview of that request.
00:28:59So that means that we can select precisely what we want over here if we select over here
00:29:05the body attribute, and in the body attribute, we can use JSON path.
00:29:10To do that, we know that we want the first element ID, which should be this one.
00:29:19As you can see in the live preview, we get the UUID for this animal.
00:29:24The beautiful thing is that now we are done.
00:29:26We still get some errors, though, because we removed the double quotes.
00:29:36Remember the JSON payloads need to be defined with double quotes.
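The chained value described above can be sketched outside Insomnia. This is a minimal model, not Insomnia's API: the raw body of the "get animals" response is parsed and the first element's ID is taken, as a JSONPath expression like `$[0].id` selects it; the payload is illustrative only.

```javascript
// Minimal sketch of the chained value: parse the raw body of the
// "get animals" response and take the first element's id, as the JSONPath
// expression $[0].id would. The payload below is illustrative, not the
// real response from the service.
const rawBody = JSON.stringify([
  { id: "11111111-1111-1111-1111-111111111111", name: "Bikinu" },
  { id: "22222222-2222-2222-2222-222222222222", name: "TestBikinu" },
]);

const animals = JSON.parse(rawBody);
const firstAnimalId = animals[0].id; // what the live preview shows for $[0].id
```

The extracted value is then substituted into the POST payload, which is why the surrounding double quotes matter: the raw substitution must still leave valid JSON behind.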
00:29:39If we send this now, we will see that the animal is correct.
00:29:43We got a new random number.
00:29:45Remember this 29 and this 38 over here, and let's see if we can see that back in the database.
00:29:53If we go over here and then we go to our database in animal locations and refresh, we can see 26
00:29:59and 38, which means that this is the data that we have just placed in the database,
00:30:05and this request will be a chained request that will first get all the animals from the database,
00:30:11take the first one, use that first animal's ID in the payload of our second, POST request,
00:30:17and then put a random location.
00:30:19So this already provides us with some level of manual testing.
00:30:21The first testing that we can see here is that we check the get animals.
00:30:26If it works, then we get a list and with that list, we create a location.
00:30:31And that location is made by using the animal that comes from the previous request.
00:30:34So we are already testing the aggregator to some level and also the listener to some level.
00:30:39And then we also have the info endpoint which, as we have seen now, is completely tested
00:30:44and returns the listener service v1.
00:30:46But these are only collections.
00:30:47This is created only for users that want to test the application manually.
00:30:52If we want to automate this, we need to make tests.
00:30:55And we can find tests under this tab.
00:30:58And here we can create new test suites.
00:31:02An easy way to create a test suite is only to check if we get a 200.
00:31:05And that's simply by going here to the new test suite.
00:31:09Go here to run tests.
00:31:10Now everything runs.
00:31:11There's no test, of course.
00:31:12But if we do a new test, we can select here a request.
00:31:16For example, get info.
00:31:18We are already testing get info.
00:31:19If we run it, we see that it runs because it returns a 200.
00:31:23And we can specify inside of it via JavaScript what we want to test for.
00:31:29We can use expectations.
00:31:31This is compatible with Chai.
00:31:33So Chai expectations are pretty much accepted here.
00:31:35And we can use them in any way we want to check our requests regarding payload
00:31:40and regarding statuses, for example.
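As a rough sketch of what such a test body checks, here is the same status-and-payload assertion written with a tiny stand-in for Chai's `expect`. Inside Insomnia the real Chai `expect` and the request runner are provided for you; both, and the response payload, are stubbed assumptions here.

```javascript
// Sketch of the kind of assertions an Insomnia test makes. Chai's expect and
// the request runner exist inside Insomnia, so both are stubbed here.
// A minimal Chai-like expect, enough for .to.equal checks:
const expect = (actual) => ({
  to: {
    equal(expected) {
      if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
    },
  },
});

// Stand-in for the response the "get info" request would return (assumed shape).
const response = { status: 200, data: JSON.stringify({ service: "listener", version: "v1" }) };

expect(response.status).to.equal(200);                    // status check, as in the video
expect(JSON.parse(response.data).version).to.equal("v1"); // payload check
```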
00:31:43We can also test another request, for example, the create request,
00:31:47which we should now be able to get a 200 as well.
00:31:50If we run it, we see that we get a 200.
00:31:53From the status alone, we can only assume that a new location has been created or not.
00:31:56But we can check that in the database, of course, right now.
00:31:58So if we now perform a refresh of the database,
00:32:02we see that there's a new entry, entry number nine,
00:32:05with a new latitude and a new longitude.
00:32:07This is all great.
00:32:08And we can also test for the remaining request.
00:32:13And that is the get animals.
00:32:17Which means that in this suite, we can also test a different service.
00:32:22So we are testing a bit of the aggregator and a bit of the listener.
00:32:26This is all fine.
00:32:28There is a catch here that I want you to remember.
00:32:31Unfortunately, for this specific situation, where we have a dynamic request,
00:32:35and for other kinds of dynamic requests,
00:32:38when we export this data to be used with Inso automatically in our pipelines,
00:32:43they just don't work yet.
00:32:44At least I wasn't able to get them to work.
00:32:47And I have checked this in the exported documents.
00:32:49The exported documents, unfortunately,
00:32:51do not carry over the capability of using chained requests.
00:32:56So this is the reason why I am using the fixed-ID TestBikinu,
00:33:02because that is the one that we use in our pipeline in the end.
00:33:05That means that in GitHub Actions,
00:33:07we make tests using a fixed ID animal
00:33:10instead of an animal created with a randomly generated ID.
00:33:14All right, so this is one way to test the listener.
00:33:17Let's check the aggregator
00:33:18and how could we make changes there for our tests.
00:33:26And I will now delete this document
00:33:27because this is not the document that we want to have a look at.
00:33:29This is just something that I have created now
00:33:31to give you a demo of some possibilities using Insomnia
00:33:35for this particular service.
00:33:37But we want to test another service and that is the aggregator.
00:33:39So let's have a look at it right now.
00:33:44And so we know at this point how to import a document,
00:33:47how to start a new document in Insomnia.
00:33:51Easy.
00:33:51We need first to get the OpenAPI specification
00:33:53for the particular service that we want to test.
00:33:56So let's go back to our IDE
00:33:57and get the aggregator's OpenAPI specification document.
00:34:01So now we already know how to do this.
00:34:04And so go over here.
00:34:23In the aggregator service,
00:34:25what we are interested now in testing is the locations.
00:34:28So we want to simply be able to call the first test suite
00:34:33and create different locations in our database.
00:34:36So that means that the sensor will be sending data to our database,
00:34:39let's say 10 times.
00:34:40And then we want to check,
00:34:42we want to test if we see 10 different locations.
00:34:45And to do that, of course,
00:34:46the method that we want to focus on is the list all
00:34:49with all the animal locations.
00:34:51So if we go over here,
00:34:53we find the animal location controller.
00:34:55This is the one where we want to create our specification.
00:34:58Let's do that.
00:34:59And so we can go over here.
00:35:01And here we generate the OpenAPI draft.
00:35:04We can copy.
00:35:06And then here we do a create,
00:35:08import,
00:35:09and then we go to clipboard.
00:35:11And we scan our clipboard content.
00:35:13And then we import it.
00:35:14We see that it has one single request.
00:35:17That is our list of locations.
00:35:19And then finally, we click on import.
00:35:20And we've got a new wildlife safety monitor document.
00:35:23But in this case, it is for the aggregator.
00:35:26Let's open it and see if that makes sense.
00:35:29It does make sense.
00:35:31One thing that we just forgot to do
00:35:32is to correct the location of the URL.
00:35:35That one is very easy.
00:35:36We can just go over here to our test request scratchpad.
00:35:43And then here we can get the aggregator URL.
00:35:48And we can just put it over here.
00:35:50And we don't want, of course, the animal path in this case,
00:35:56because this one here is the root of our service.
00:35:57Let's test it and see if it works.
00:35:59So we can go over here to the right.
00:36:02And let me just fix this a bit so that we can see it better.
00:36:06See, we can see we already have the small GUI
00:36:08just like we did in the IDE.
00:36:10And then we can click on it and just simply try it out.
00:36:12And if we execute it,
00:36:13we should get a list of the current locations
00:36:15that we have placed in the database.
00:36:18Which is great because now we are sure that this works.
00:36:21So let's now create a collection with it.
00:36:24Go over here and generate a collection.
00:36:26It creates the different requests that we need.
00:36:28In this case, we only have one.
00:36:29That's the get location.
00:36:31And if we just go here to manage environments,
00:36:35we can, of course, now create our own base URL variable
00:36:41that we need so that we know where our service is located.
00:36:47So this is the URL of our aggregator service.
00:36:50We can close this one.
00:36:51And now what we should do,
00:36:54just to be sure that we are doing everything right,
00:36:57is just send a request.
00:36:58And there we go.
00:36:59On the right side, we see all of the locations that we need.
00:37:02One thing that we may find interesting also to test here,
00:37:05and this is just to show you how prescripting can work for us,
00:37:09is going over here to scripts.
00:37:11And here we can put any kind of script we want
00:37:14to execute before we make our request.
00:37:17Of course, here we are still in the collection.
00:37:19This means that we are only configuring our manual requests.
00:37:22And one particular script that we want to test
00:37:25is the one that I have stored in the repository.
00:37:27So if we go over there and we go to our folder
00:37:30and go to prescript location,
00:37:32we can then copy paste this straight into here.
00:37:36And here we've got all of the script
00:37:38that is going to do something very specific and very easy
00:37:41that you have already seen in other examples
00:37:44throughout this video series.
00:37:46So the first thing that we want to do is to get an animal.
00:37:49So what it does, it gets the animal,
00:37:51then it will get the first ID of that animal.
00:37:53And that first ID of that animal
00:37:55will be used in the second request,
00:37:58which is to create a location.
00:38:00And it will do that over here on this endpoint.
00:38:03When the new location has been added,
00:38:05we want to see that location back.
00:38:07And to see the location back is the actual request here.
00:38:11So prescripting works in the following way.
00:38:13We first execute anything we need to execute.
00:38:16In this case over here,
00:38:17we can use our variables from our environment.
00:38:20And then we can just perform our request.
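The pre-scripting flow just described can be modeled like this. The two HTTP calls are stubbed with plain functions, and the IDs and payload shape are assumptions for illustration, not the repository's actual script:

```javascript
// Model of the pre-request flow: 1) get the animals, 2) take the first
// animal's id, 3) POST a new location with that id; only then does the
// actual "get all locations" request run. HTTP calls are stubbed here.
// Stand-in for the GET that fetches the animals (illustrative ids only):
const getAnimals = () => [
  { id: "fixed-test-id" },
  { id: "random-id" },
];

// Stand-in for the POST that creates a location for one animal.
const postLocation = (animalId, latitude, longitude) => ({ animalId, latitude, longitude });

const animals = getAnimals();                  // step 1: fetch animals
const firstId = animals[0].id;                 // step 2: first animal's id
const created = postLocation(firstId, 20, 23); // step 3: the chained POST
// After this point, Insomnia performs the request the script is attached to.
```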
00:38:23If we run this right now, as it is,
00:38:26we will get an actual location here.
00:38:29We can see that the last location now is 2023.
00:38:32So latitude is 20, longitude is 23, the first two numbers.
00:38:38So let's just send it and see what happens.
00:38:40If we send it, we've got a response.
00:38:43So let's check what's at the end.
00:38:45Now we should expect an actual location to be added.
00:38:48Remember 2023 and this latitude and longitude.
00:38:53Which are the latitude and longitudes
00:38:55that we have placed over here.
00:38:56And as you can see, this one,
00:38:58if you have already memorized it a bit
00:39:00because we've seen it so often,
00:39:01this is the ID of Bikinu for our execution.
00:39:04It's a random ID.
00:39:08The power of using this JavaScript is very interesting
00:39:10because in one single request,
00:39:12we can test everything that we have seen before in one go
00:39:15and see it working.
00:39:18We can jump from here into making our tests.
00:39:21So let's go into the tests tab
00:39:23and just add a new test suite as we did before.
00:39:25And then run the test for the new request,
00:39:29which is a get location.
00:39:31So what do you think will happen when we run this test?
00:39:34We are going to run the previous request,
00:39:37the get location request,
00:39:39which has a pre-scripting installed.
00:39:42The pre-scripting will make a request
00:39:45to the current animals we have,
00:39:47get the last one, post a new location.
00:39:49And then at the end,
00:39:50the actual request will be run,
00:39:52the get all locations,
00:39:54where we will see that the new location will be added.
00:39:56At least this was the behavior
00:39:58that we saw before in the collections.
00:40:01Now will this happen also
00:40:03when we run this in the tests tab?
00:40:06Let's have a look.
00:40:08We run the tests and we see 200.
00:40:11Now remember that we've had one last location
00:40:14with static locations.
00:40:16So if that script has been executed,
00:40:18we should be able to see an actual location.
00:40:21Otherwise, we don't see that actual location.
00:40:23Let's see what actually happened
00:40:24when we go here to the collection.
00:40:26Now, if I would run this get location right now,
00:40:29it would create an extra location.
00:40:31The danger with that
00:40:32is that we would have something extra
00:40:34in our set of different locations
00:40:36currently available in our database.
00:40:38So what we are going to do,
00:40:39we're going to add a new request
00:40:42with this location
00:40:44and the base URL over here.
00:40:48And one more thing,
00:40:53this underscore here and the dot after that
00:40:57allow us to do some kind of auto-completion,
00:40:59which is very handy.
00:41:00That is just another feature
00:41:01that is an integral part of Insomnia.
00:41:03So let's see now what happens
00:41:05if we run this request.
00:41:07Do we get an actual location or not?
00:41:09Let's have a look.
00:41:10We go all the way down
00:41:11and we see that we only have one extra static location,
00:41:14which is exactly how the system looked
00:41:17before we ran that test.
00:41:20What this means is that the tests
00:41:22don't run these prescripting scripts,
00:41:26at least not the way I have created them.
00:41:28We can go back to tests.
00:41:31In here, we see that we can only configure the location.
00:41:34There isn't really a place where we can say
00:41:37that we need to run the prescripts.
00:41:41And so the tests only do
00:41:43what is fully automated.
00:41:45And unfortunately, that pre-scripting doesn't work there.
00:41:48So we already know two things.
00:41:50Pre-scripting here in the tests doesn't work,
00:41:53and the automated chained test requests
00:41:55using the dynamic insertion of values don't work either.
00:42:00So what this tells us is that
00:42:01everything that runs in tests
00:42:03has to be well-defined in the collection,
00:42:06and everything that's dynamic probably won't work.
00:42:09So we need to check, for each dynamic feature, whether it works.
00:42:13As a heads-up, the random integers do work,
00:42:15and that means that we can use them
00:42:17in our test setup for the GitHub Actions pipeline.
00:42:21So now we can go to the main page
00:42:22and delete this one also.
00:42:26All right, so we've got all our different requests over here
00:42:29and now it would be interesting
00:42:30to be able to run these tests
00:42:32against our cluster in an automated way.
00:42:35But first, let's have a look
00:42:37at what the tests actually look like,
00:42:38to be able to run them
00:42:39within the virtual environment of GitHub Actions.
00:42:42So the first thing we want to look at
00:42:44is the first version that I had created of the listener.
00:42:48Here we can go straight into collections.
00:42:49We don't have any scripts here anymore
00:42:52and if we go to body of the create location,
00:42:56we see that we still have a chained response.
00:42:59So this test will fail
00:43:01but we will check it out shortly in the pipeline
00:43:03if that works or not.
00:43:06Info should work in any case here.
00:43:09We have seen that before
00:43:10and the get latest animal should also work
00:43:13because it doesn't have any scripts
00:43:15and it doesn't have any particular dependency
00:43:17and it doesn't have any body.
00:43:18The whole test suite will fail
00:43:20simply because of that simple request
00:43:23to add another location to our database.
00:43:25If we go back now
00:43:27and we check out the static listener.
00:43:31Now this one is a different story
00:43:33because this one does a POST request
00:43:38but the body is fixed.
00:43:40So that animal ID
00:43:42is the animal ID of our fixed animal
00:43:44that we created in the database,
00:43:45the TestBikinu animal.
00:43:47Let's have a look if that matches.
00:43:49The 520 should match what we have in the database,
00:43:55which is 520.
00:43:59All right.
00:43:59So this is a test suite that will work
00:44:05because it will find the last animal.
00:44:07It will be able to send the location
00:44:09on behalf of the TestBikinu.
00:44:12Then we find the aggregator.
00:44:14The aggregator test
00:44:21has a GET for the location,
00:44:24a GET for the animals
00:44:26and it has a GET with pre-scripting.
00:44:28This pre-request is exactly the same
00:44:31as we have seen before
00:44:33and it is here so that we see
00:44:34when we run this from the command line
00:44:36that this doesn't work.
00:44:38So we can see in the test
00:44:39that all of this will be tested,
00:44:41all locations, the call GET
00:44:42and the pre-script locations.
00:44:45From within this will all work
00:44:47except that now the quantities are different
00:44:49because instead of having one animal
00:44:51I actually have two.
00:44:52So I need to change this to two
00:44:54and now it works.
00:44:55It passes.
00:44:56So we get all locations.
00:44:57It should be green.
00:44:58If we didn't have 10 locations,
00:44:59it would probably fail as well.
00:45:01I think it's just pure luck that we have 10,
00:45:04and the second one is simply
00:45:07the pre-script locations
00:45:09which of course should have 10 as well
00:45:11and in this case has 10
00:45:12because we do have 10 locations
00:45:13and we can check that in the GET location
00:45:16over here, the one without pre-scripting.
00:45:20All right, so we can send this one
00:45:22and we can see that we've got
00:45:23one, two, three, four, five, six, seven, eight, nine and 10.
00:45:29So we've got curiously 10 locations.
00:45:33This is the setup that we have in Insomnia.
00:45:36One thing to note is that, when we are logged into Insomnia,
00:45:40something happens in our system
00:45:42that allows the system to know
00:45:44that we are connected to Insomnia
00:45:46and that we want to run all of these tests.
00:45:49To run these tests
00:45:50we only need to run a specific command.
00:45:53We can look at it in the GitHub actions file
00:45:57where we can already have an idea
00:45:59of how the pipelines are set up
00:46:02but for now the thing that we are interested in
00:46:05is how do we start tests
00:46:07and that is with the script Insomnia start test pod
00:46:11and this test pod is another pod
00:46:13that we need to perform our tests automatically.
00:46:15I will explain in a minute
00:46:16why we need a pod for it,
00:46:17but locally we don't need it
00:46:19because we know what localhost is.
00:46:21So let's have a look at our test pod
00:46:23and that is the one over here
00:46:27and if we go to the make file
00:46:30we can see that it runs an entry point that contains this,
00:46:34and what we need at this point is just this:
00:46:38inso run test.
00:46:41If we run inso run test
00:46:44we'll be able to first choose
00:46:46the test suite that we want to start
00:46:48and if we choose the first one the static listener
00:46:52it will run successfully
00:46:53because it is only creating an extra location.
00:46:56If we run the second suite
00:46:59which is the non-static listener
00:47:02we will have a problem.
00:47:04If we run the non-static version
00:47:06it means that we are going to try to get our animal
00:47:08from the aggregator service.
00:47:10We have seen before that this doesn't work
00:47:11and in fact if we now run it
00:47:14we see that it doesn't work.
00:47:15The reason is that it has no understanding
00:47:17of that chained request,
00:47:18and therefore it doesn't perform it:
00:47:20we don't get the ID,
00:47:21and so it cannot post a new location,
00:47:24and so the request expecting a 200 will fail.
00:47:28If we now run the last
00:47:29which is the aggregator test suite
00:47:32now I think all the tests will fail
00:47:34because all the locations have changed
00:47:36there are more than 10 locations
00:47:38and plus the prescripting is not working.
00:47:40Let's have a look at what happens.
00:47:44There.
00:47:46So at this point there are 13 locations there.
00:47:50That is because in between I made a test request,
00:47:52and so we have 13.
00:47:53And we see that the call to the pre-script endpoint also failed,
00:47:57and the reason for that is that
00:47:59we still have 13 locations.
00:48:02So what this tells us is that pre-scripting didn't work,
00:48:05because otherwise we would have a failure as well,
00:48:07but a 14-locations failure,
00:48:10not 13.
00:48:11And so pre-scripting doesn't work,
00:48:14and chained requests don't work either,
00:48:17but random does work,
00:48:18and that is what we saw in the previous successful results.
00:48:22Now we can also make these tests pass,
00:48:25because we know that we have 13 locations now in the database,
00:48:28and so it's very easy:
00:48:30we can just go over here to the aggregator,
00:48:35and in the aggregator we can just go here to the tests,
00:48:38and wherever we test for 10,
00:48:40we will now test for 13.
00:48:45In here as well.
00:48:46No, not here, because here we test
00:48:48for two animals, and we test 30.
00:48:50Okay, so now let's run this from the command line,
00:48:56and if we run these, they should all be successful,
00:48:59and the pre-scripting also works,
00:49:02but that's because we cheated:
00:49:04for pre-scripting, this should have been a 14 and not a 13,
00:49:09and so if we run now,
00:49:15this will fail,
00:49:17and here we've got two different kinds of requests
00:49:20that we cannot perform in the pipeline,
00:49:22so this is why it was necessary for me to make this all static.
00:49:27While this is all very interesting,
00:49:29it is also very important that
00:49:32we get an idea of how we put this in the pipeline.
00:49:35As we have seen before,
00:49:37if you paid attention to this one over here,
00:49:42you would have realized that
00:49:43I'm making these specific requests to invoke specific test suites.
00:49:49So here I'm opening this document,
00:49:52and I'm opening the first suite that comes in the list.
00:49:54We only have one suite, so we can just do this echo,
00:49:57so we don't have to choose it interactively.
00:49:59And here also the first suite from this document:
00:50:04the first one will launch 10 times to create 10 locations,
00:50:07and the second one will launch only once to test for two animals
00:50:11and that the 10 locations exist in our database.
00:50:15As simple as that.
00:50:16But before we get to here, which is the end
00:50:19of how this was implemented in GitHub Actions,
00:50:22we need to think about how we run this in GitHub Actions,
00:50:24because GitHub Actions runs on a virtual environment
00:50:28that has no concept, or little concept, of what a localhost is.
00:50:33In some of these environments we can't even get the machine name,
00:50:36and this is always a problem:
00:50:37in these virtual environments we cannot get the localhost,
00:50:40we cannot get a local machine.
00:50:41So how are we going to perform our tests if we need to access endpoints?
00:50:45Well, the answer is to get a containerized environment
00:50:48within the virtual environment,
00:50:50so that we can access our endpoints using localhost
00:50:53or the names of the machines themselves,
00:50:55and this is why these tests needed to be integrated in a container,
00:50:59which will then run in a pod in our Kubernetes cluster.
00:51:02And the way to do that is actually very simple.
00:51:05The only thing, though, is that we will need to replace
00:51:08every single piece of text that refers to localhost with the matching service.
00:51:12But before we get there, the first thing that we need to do
00:51:15is to export our documents in the correct format.
00:51:18I chose JSON; you can choose YAML if you prefer,
00:51:20but the idea is that we can use these files
00:51:24and then make sure to point to them when we run the test commands.
00:51:28If you have noticed, up to now
00:51:30we didn't specify any test suite,
00:51:31and we also did not specify any single particular file.
00:51:35In the pipeline, though, we need to do that,
00:51:37because in the pipeline we do not have Insomnia installed
00:51:40and we have not logged in.
00:51:41So, having said that, let's now have a look at
00:51:44how to export documents in Insomnia.
00:51:47So we go back to our Insomnia desktop app,
00:51:50we go back to the home page,
00:51:52and now we have three different documents.
00:51:54these three different documents have been created for the example
00:51:57but the ones that we are mostly interested in
00:52:00are the static listener and the aggregator
00:52:03If we go to the aggregator,
00:52:04there is an object here, at the top right corner of these documents,
00:52:07over here, where we can simply select export.
00:52:11And in export we can choose the requests that we want to export,
00:52:15and then we can choose the format that we want to export them with.
00:52:18We can choose between these three;
00:52:20the one that worked for this project was a JSON file,
00:52:22so we can do that and we can be done with it.
00:52:25And then we can choose where we want to save it.
00:52:28We can just save it in our own particular folder;
00:52:30we can save it in, say, an exports folder,
00:52:34and in this exports folder we can just save our JSON files.
00:52:38So this one was for the aggregator,
00:52:40and for the other one, let's choose the static listener to export as well,
00:52:45all the requests,
00:52:46and then we export them to a JSON format file over here.
00:52:50Notice that the Insomnia files are saved as "insomnia"
00:52:54plus the current date by default,
00:52:56so that means that you may overwrite the previous file.
00:52:59So in this case we just say "aggregator", for example.
00:53:04now that we have this we can go to our browser
00:53:06and go to documents where we save this
00:53:09go here to exports
00:53:10and then here we finally have these files
00:53:13that we can now have a look at what's inside of them
00:53:16these are very readable files
00:53:17so if we just open these files in IntelliJ
00:53:20we can select both of them
00:53:21and just open them up in IntelliJ over here
00:53:25and we format them
00:53:29then we can read the contents
00:53:31we are not going to use these files
00:53:33this is just for an example
00:53:34so here in these files we find all of these definitions
00:53:37and if you notice we have all of these local hosts everywhere
00:53:41and especially the definition of our url variable
00:53:44So we've got here localhost,
00:53:46and then, for example, over here we've got the base URL.
00:53:49And if we look for the base URL,
00:53:51then we are going to see, in one of the references to it,
00:53:54that it defines its value as being localhost:8080/app/v1/listener.
00:54:01We know that our listener is running on localhost,
00:54:04but here's the interesting part
00:54:05if we run it in a containerized environment
00:54:08it's no longer localhost;
00:54:09then the machine has a name that Kubernetes gives it,
00:54:13and here we need to make a small review
00:54:14of how domain names are created inside a Kubernetes cluster
00:54:18let's have a look at that
00:54:19for example if we look at this same file
00:54:23that has already been created
00:54:24and is already running in GitHub Actions,
00:54:26inside the test project
00:54:28which is this one over here
00:54:31if we go here to insomnia
00:54:32we can find insomnia.json;
00:54:34this one has already been created
00:54:36and we also have insomnia listener
00:54:38if we look at the insomnia listener for example
00:54:41we can see that the aggregator has a domain name like this:
00:54:44wlsm-aggregator-deployment,
00:54:46followed by wlsm-namespace,
00:54:48then svc.cluster.local, and port 8082.
00:54:51by default this is how we access the containers
00:54:54inside the Kubernetes cluster
00:54:56the first element is the name of the service
00:55:00let's have a look at it
00:55:01in the aggregator we will find
00:55:04in the deployment file
00:55:05we will find that the name of the service
00:55:08is wlsm-aggregator-deployment,
00:55:10and so that's the service name
00:55:12then we put in the namespace,
00:55:15so a dot and then the namespace,
00:55:16which is wlsm-namespace.
00:55:18we can also check it over here
00:55:19it is created over here on the first segment
00:55:21of this deployment script
00:55:23and then finally at the end
00:55:25we just use svc.cluster.local,
00:55:27then we define the port, 8082,
00:55:29and then the rest of the URL path.
00:55:31and this is valid also for the listener
00:55:33so if we look for listener over here
00:55:37we should find that
00:55:38we also have the domain
00:55:40or the listener running container inside the pod
00:55:43that has this url
00:55:44as the address to access the listener endpoints
00:55:47After we make this replacement,
00:55:49it becomes very easy,
00:55:51because now the files are ready
00:55:52to be executed in the pipeline.
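Putting the two previous points together, here is a sketch of that rewrite: build the in-cluster address from the `<service>.<namespace>.svc.cluster.local:<port>` pattern described above and swap it for localhost in the exported JSON. The exported snippet is illustrative only, and in the repository the replacement is done directly in the exported files rather than in code.

```javascript
// Build the in-cluster address from the pattern described above:
// <service>.<namespace>.svc.cluster.local:<port>
const service = "wlsm-aggregator-deployment"; // service name from the deployment file
const namespace = "wlsm-namespace";
const clusterHost = `${service}.${namespace}.svc.cluster.local:8082`;

// Then rewrite an exported Insomnia document so it no longer points at
// localhost. This one-line exported JSON is illustrative, not the real file.
const exported = '{"base_url": "http://localhost:8082"}';
const rewritten = exported.replaceAll("localhost:8082", clusterHost);
```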
00:55:55And the way this works
00:55:56is by the usage of a Dockerfile,
00:55:59and the Dockerfile is located right over here,
00:56:01inside the insomnia test folder.
00:56:03and here we can see that
00:56:05we are creating a docker container
00:56:07from an ubuntu image
00:56:09and we are installing some very necessary
00:56:12command line applications
00:56:14the first one is curl
00:56:15then we install also sudo
00:56:17and with this
00:56:18then we are allowed to install
00:56:19a couple of things that are very important
00:56:21but let's go one by one
00:56:23The next two commands
00:56:24inside our Dockerfile are COPY commands,
00:56:25so we are going to copy
00:56:27the Insomnia installation script
00:56:29and the inso installation script.
00:56:31in order for this to run smoothly
00:56:33we want to avoid any kind of warnings
00:56:35that we may have when we run inso
00:56:37Here is something that I learned:
00:56:39when inso starts, with its latest version,
00:56:41what happens is that
00:56:42it will complain that
00:56:43these libraries aren't available,
00:56:45so I decided to install them in the container
00:56:47before we run the tests.
00:56:49finally we run the installation scripts
00:56:52for insomnia and for inso
00:56:54and the reason for doing that
00:56:56right over here
00:56:56is to avoid having to reinstall everything
00:56:59when we restart the container
00:57:00it is a good general practice.
00:57:02and then finally we copy the files
00:57:05that have our test definitions
00:57:07the insomnia file
00:57:08and insomnia listener file
00:57:10the first file is about the aggregator
00:57:11and the second one of course
00:57:12as the name explains
00:57:14is about the listener tests
00:57:16so these two will be run
00:57:19every time we start the container
00:57:20and the container will not stay running
00:57:23the container will only just run
00:57:25the entry point
00:57:25which is located right over here
00:57:28and it will run these two commands
00:57:30The first command runs the Insomnia listener tests;
00:57:31it will run 10 times
00:57:33to create 10 different locations,
00:57:34and finally it will run the aggregator test
00:57:37that will test for 10 different locations
00:57:40and check that there are two,
00:57:41and only two, animals
00:57:43in the database:
00:57:44our Bikinu, created
00:57:45with a randomly generated ID,
00:57:47and our TestBikinu,
00:57:48with a fixed UUID.
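The entry point's behavior can be modeled as a small loop. The real entry point is a shell script invoking inso, so the two functions below are stand-ins for those commands, not the actual script:

```javascript
// Model of the container entry point: run the listener suite 10 times to
// create 10 locations, then run the aggregator suite once, which verifies
// 10 locations and exactly two animals. inso invocations are stubbed.
let locations = 0;
const animals = 2; // the randomly generated Bikinu plus the fixed TestBikinu

const runListenerSuite = () => { locations += 1; };                 // stand-in for the listener test run
const runAggregatorSuite = () => locations === 10 && animals === 2; // stand-in for the aggregator test run

for (let i = 0; i < 10; i++) runListenerSuite(); // 10 location-creating runs
const pipelineGreen = runAggregatorSuite();      // final verification run
```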
00:57:50finally the way to run these tests
00:57:52is very easy
00:57:53In the GitHub workflows
00:57:55actions definitions,
00:57:56we've got one particular file
00:57:58that is this monitor insomnia file
00:58:01where we have defined
00:58:03everything that we need
00:58:04in order to be able to run the test
00:58:06the first thing that this does
00:58:07is install all the necessary
00:58:09command-line applications
00:58:10that we will need
00:58:11then it will start our cluster
00:58:13just as we did locally
00:58:14then it will check the status of the pods
00:58:17and print them out to the console
00:58:18only for debugging purposes
00:58:20and then it will start the test
00:58:21and finally show the results
00:58:23let's have a look at the start test
00:58:25the start test
00:58:27is located in the makefile
00:58:29right over here
00:58:31and what this does
00:58:33it goes into the insomnia test
00:58:35it builds a new image
00:58:37and finally once the image is created
00:58:39it applies the deploy kubernetes template file
00:58:42and then finally it will wait
00:58:44for everything to be completed or running
00:58:47in this case the idea
00:58:48is to wait for the insomnia test to complete
00:58:50then we've got the show results
00:58:52and in the show results
00:58:53the only thing that will happen
00:58:54is show the results of the execution
00:58:57which is a printout of the log output to the console
00:59:01and that is done over here
00:59:03inside the WLSM insomnia test folder
00:59:05in the show result script over here
00:59:08which tests for the different possible terminal states
00:59:10of this pod
00:59:11the script itself does not check
00:59:13for this specific pod
00:59:15it tests for the last pod
00:59:17and obviously
00:59:18this is the last pod that is running
00:59:20because this is the last one that we started
00:59:22so it sorts everything by start date
00:59:25and so the last pod is checked
00:59:28for status succeeded, failed or error
00:59:31and at the end
00:59:32if the result is anything other than completed
00:59:35then we report a fail
00:59:37to the running execution in the github actions
00:59:39or we just let it continue
00:59:41and that will be reported as a success
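The check just described can be sketched as follows. The `kubectl` calls need a live cluster, so they are shown as comments; the status-handling logic, which decides whether the GitHub Actions run fails, is factored into a function. The sort flag and jsonpath expression are real `kubectl` options, but the exact pod names in this project are assumptions.

```shell
#!/bin/sh
# Sketch of the show-results check described above.

# Map the terminal phase of the last pod to an exit code:
# anything other than Succeeded fails the run.
check_status() {
  case "$1" in
    Succeeded) return 0 ;;
    Failed|Error|*) return 1 ;;
  esac
}

# In the real script the status would come from the cluster, e.g.:
#   last_pod=$(kubectl get pods --sort-by=.status.startTime -o name | tail -n 1)
#   kubectl logs "$last_pod"
#   status=$(kubectl get "$last_pod" -o jsonpath='{.status.phase}')
#   check_status "$status" || exit 1
```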
00:59:43but we can also run this locally
00:59:45so let's try just that
00:59:47our application is already running
00:59:49but it is now filled with more than 10 locations
00:59:51and we want to start from scratch
00:59:53so to do that
00:59:54we'll just go here to the database
00:59:56and just drop everything
00:59:59so delete rows
01:00:01and just commit them
01:00:03so there are no more animal locations in our database
01:00:05and now let's just run our pod
01:00:07and see what happens
01:00:08so this means that we have to run
01:00:10make insomnia start
01:00:15test pod
01:00:16so now it will start the container
01:00:20and at the end
01:00:21we expect to see only success results in the logs
01:00:36now that the execution has completed
01:00:38let's have a look at the results
01:00:40and this mimics what we see in the github actions
01:00:42that i will show you in a minute
01:00:44so now we issue the command
01:00:46make show results
01:00:50and we can see now that at the top
01:00:53we first ran the static listener
01:00:55which is about making a request to our listener
01:00:5810 times
01:00:59and we can see that we've got a bunch of them
01:01:01i can assure you that these are 10
01:01:03after finishing the listener tests
01:01:05we move on to the aggregator tests
01:01:07and at this point
01:01:08there will be 10 records in the database
01:01:10so this test will say everything is okay
01:01:14because we are waiting for 10 locations
01:01:16in all of the three tests
01:01:18and even the pre-script
01:01:19we know that no script has run
01:01:21and therefore it will only have 10 locations
01:01:24when we run the endpoint that we have configured
01:01:26to have pre-scripting
01:01:29but the final test will be to check in the database
01:01:31and see if we do actually have 10 records over there
01:01:36this is an easy thing to do
01:01:37just opening the database
01:01:39because we do have the open ports
01:01:40remember that
01:01:41we've made the port forwarding open and available
01:01:44now we can check the database
01:01:46and we see that we have 10 records
01:01:48and these are records that have been created
01:01:50with our tests
01:01:51which will also happen in github actions
01:01:54and if you noticed
01:01:55latitude and longitude
01:01:56have been created with random numbers
01:01:58using the faker option of insomnia
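The database check described here amounts to a couple of simple queries; the table and column names below are assumptions, since the project's actual schema is not shown in this part of the video.

```sql
-- Hypothetical check, assuming a table named animal_location;
-- the real schema in the project may differ.
SELECT COUNT(*) FROM animal_location;             -- expected: 10

SELECT latitude, longitude FROM animal_location;  -- randomly generated via faker
```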
01:02:03even though the rest of the faker options didn't work
01:02:04what's left for me to do in this video series
01:02:07is just to show you the github action pipelines
01:02:09in the project wildlife safety monitor
01:02:11let's have a look
01:02:13if we go to wildlife safety monitor
01:02:19and we go to actions
01:02:21we will find the last insomnia action
01:02:23which is this one
01:02:24that is called wildlife safety monitor insomnia
01:02:27so let's open it
01:02:29and we see that everything ran perfectly
01:02:33and that we've got a green job
01:02:35let's open it
01:02:37everything seems to have run okay
01:02:38but let's have a look at the show results
01:02:40which is the important part
01:02:41that we just saw in our local machine
01:02:43but now in github
01:02:46we show the results
01:02:48we have exactly the same
01:02:5110 calls to the listener service
01:02:521, 2, 3, 4, 5, 6, 7, 8, 9 and 10
01:03:00and then the last suite
01:03:01the aggregator suite
01:03:03where we check for the existence of 10 locations
01:03:06using the pre-scripting requests
01:03:09and the non-pre-scripting requests
01:03:10neither of which runs any scripting in this setup
01:03:13and we also run the test that checks
01:03:16for how many animals are in the database
01:03:18in this case two
01:03:19and they are all successful
01:03:21this is the way i have created a project using kubernetes
01:03:24and i tested it using insomnia
01:03:26in a pipeline driven environment
01:03:27using github actions
01:03:29the important takeaways from this are that
01:03:31this experiment now runs correctly
01:03:33and every time i do something with my project
01:03:36i know that these tests are going to run together
01:03:39it will test the flow from the listener
01:03:41to the collector to the database
01:03:42and from the aggregator to the database
01:03:45these are very important steps
01:03:47which seem very simple
01:03:48but they are a chain of events
01:03:50that also have an event publisher in the middle
01:03:52and also use a hazelcast cache
01:03:54even though it's just to put some data in it
01:03:56and never use it again
01:03:58but still it goes through all of that life cycle
01:04:02but this was all implemented open source
01:04:04that means that when exploring this open source
01:04:07a lot of the features might not be available
01:04:10think of for example
01:04:10what we just saw with pre-scripting
01:04:12and with chained requests
01:04:14but having it open source
01:04:16doesn't really prevent us from using inso
01:04:19and insomnia in our pipelines
01:04:21to perform tests for our applications
01:04:24i see this kind of testing as important
01:04:26it is a kind of chain test
01:04:28mixed with end-to-end testing
01:04:30and it gives us more assurance
01:04:32that our project at least logically
01:04:34does what it has to do
01:05:05as a short disclaimer
01:05:06i'd like to mention
01:05:06that i'm not associated or affiliated
01:05:08with any of the brands
01:05:09eventually shown, displayed or mentioned in this video
