We live in an age of rapid urban development, which calls for all-around integration of information and communication technology as well as IoT solutions: smart homes, intelligent transportation and smart cities.
At Zazmic, our engineers Yurii (JS Team Lead) and Valeria (DevOps Engineer) have been working on a cutting-edge project involving sensors, complex algorithms and Meteor. The platform monitors parking-space availability. One of the challenges this project posed, however, was figuring out what “dockerizing Meteor” even means, and the Zazmic team ended up pioneering it. We interviewed Yurii and Valeria to find out:
Q: How did the Docker story start? And what does it have to do with IoT?
Nowadays “internet of things” has become a buzzword: smart refrigerators, which generate reminders that you’ve run out of milk through an app; multicookers that prepare food at a prescheduled time; stations that feed and water pets as well as allow you to play with them remotely – thousands of really cool things that take simplicity to the next level.
Recently, I was fortunate to begin working on a very interesting project in this area of the modern mainstream: Smart parking – in China.
Q: ‘Smart parking’ should come across as something simple and easy for a driver, shouldn’t it? We pull into a parking garage, a gate opens, and we find a spot to park the vehicle.
Right. However, all the magic happens “under the hood.” It’s not just local plate-number recognition reconciled against a local database. In fact, several sensors collect data and send it to a remote cloud via AWS IoT (Amazon Web Services, Internet of Things); then, by means of AWS (in fact, there are several ways to solve this problem), the state of the sensors is forwarded to certain endpoints, where all the data analysis and decision making take place. In addition to the “access” sensors, there are “statistics” sensors that analyze the state of particular parking spaces. This means the system represents the state of the parking lot in real time, offers easy-to-use access control, provides per-space and per-client occupancy statistics, and so on.
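As an illustration only (the actual message schema was the client’s, so the topic structure and field names here are hypothetical), an occupancy sensor in a system like this might publish a small JSON document to an AWS IoT topic such as `parking/lot-12/space-047`:

```json
{
  "sensorId": "lot-12-space-047",
  "type": "occupancy",
  "occupied": true,
  "timestamp": "2016-09-01T08:15:30Z",
  "batteryLevel": 87
}
```

An AWS IoT rule can then route messages like this to the endpoints where the analysis and decision making happen.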
This is a handy tool for both parking owners and their customers. Watching cars drive in and out and having it all displayed in a real-time interface is a little mesmerizing.
Q: Did this, what sounds so simple and fascinating, pose challenges?
China has always been a very special country for Western culture. Having spent several years living in Southeast Asia, I am aware of the differences, and so I tried to understand the requirements of the end customer of the product. The first requirement was not to use Google services. Instead of Google, which is not an option in China, we used Baidu. Another requirement was to deploy OpenStreetMap (though abandoning the Google Places API and geocomplete was not easy). Having sorted this out, we proceeded to the next part: the web application (together with the server part) needed to run on any hardware (AWS Container Service, VPS, VDS, etc.) and on any OS: *nix, Windows, macOS.
Q: In other words, the application was supposed to run from anywhere?
Yes. The task was to make sure it could be set up and run from anywhere and on any possible device. And that was when it came to me — we should use Docker!
Q: Why is it that Docker seemed like the most obvious solution?
Docker is convenient, fast and versatile. Especially since Zazmic is a Meteor Prime Partner and our team has experience dockerizing Meteor applications. Meteor Up, being the fastest way to dockerize, has always been our tool of choice, and it works well. Nevertheless, the client wanted AWS Container Service for the initial demo. And they wanted a complete Docker image, whereas out of the box Meteor Up produces an incomplete one, which requires the project to be mounted into it afterwards.
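For reference, a classic `mup.json` for the original Meteor Up looked roughly like this (the host, key path, app name and URL below are placeholders, not the project’s real settings):

```json
{
  "servers": [
    { "host": "1.2.3.4", "username": "root", "pem": "~/.ssh/id_rsa" }
  ],
  "setupMongo": true,
  "setupNode": true,
  "appName": "smart-parking",
  "app": ".",
  "env": {
    "ROOT_URL": "http://example.com",
    "PORT": 80
  },
  "deployCheckWaitTime": 60
}
```

This style of deployment pushes the app to a server you already control, which is exactly why it falls short when the deliverable has to be a standalone image for AWS Container Service.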
Q: Did it mean it wouldn’t have worked in Docker?
It meant the image was essentially empty and didn’t fit AWS ECS at all. After analyzing the existing Docker images from meteorhacks (thankfully they exist!), we selected and configured one of them that could build a complete image of the working project. And it worked in Docker on the local machine.
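A minimal sketch of that approach, using the ONBUILD variant of the meteorhacks `meteord` image (the URLs and ports below are placeholders):

```dockerfile
# Dockerfile at the root of the Meteor project.
# The onbuild variant of meteord bundles the app into the image at
# "docker build" time, producing a complete, self-contained image
# rather than one that expects the project to be mounted in at runtime.
FROM meteorhacks/meteord:onbuild
ENV ROOT_URL=http://example.com \
    MONGO_URL=mongodb://mongo:27017/app \
    PORT=80
```

The result is an image that can be pushed to a registry and run anywhere, with no project files needed on the host.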
When we tried to deploy the image to AWS Container Service, our DevOps specialist, Valeria, and I were stunned: we followed the instructions precisely, but it didn’t work, even though no errors were reported and everything seemed OK.
Q: Valeria, you worked with Yurii on deploying the Docker. How did you find your way into it?
Like many others, I was once just an admin, happily dealing with EC2, S3, RDS, WordPress, Ubuntu and the like. Being a constant learner (and an absolute Windows girl in the past) I’ve been trying to get more familiar with Linux, always having something to read and try on those black screens. I’ve also learned lots of new terminology, dependencies and got to know new people.
Can you imagine what it was like for me to hear one day: ‘Okay, and now we need to Dockerize this app and MongoDB.’ I was perplexed. Now we need to do what?!
This project was a mind-blowing experience! Neither Yurii nor I had ever done anything like this before. We used AWS ECS (“AWS Docker,” though not plain Docker, as some may think): we packaged our application into a Docker image, packaged that Docker into AWS Docker, and connected that AWS Docker to another AWS Docker with a database running in a separate Docker. It seemed endless.
Q: And that’s how you started building these Dockers?
First off, as mentioned before, Yurii packaged our new Meteor app and created a local Docker image. That was straightforward, because Meteor makes the process simple.
Q: So the real challenge was to package it to AWS ECS?
Indeed. It turned into endless brainstorming and three days of constant attempts to deploy Docker into Docker, with no documentation to turn to and no starting point. We really had nothing.
Simple googling and Stack Overflow were useless. There was nothing at all. We realized no one had ever done this before us. We struggled with AWS Container Service for a couple of days. Empirically, and by analyzing how other AWS services work, we discovered that uploading a Docker image to the repository did not automatically grant anyone the right to use it. We needed to change the image’s permission settings manually so that users could be added to the authorized list. One simple, non-obvious tweak to the access settings and everything worked! It was as absurd as it was fun.
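In today’s terms, that kind of fix amounts to attaching a repository policy to the image in Amazon ECR, e.g. via `aws ecr set-repository-policy` (the account ID and user name below are placeholders, and the exact steps the team took may have differed):

```json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPull",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/deploy-user" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```

Without a grant like this, a pushed image sits in the registry but other principals cannot pull it, which is exactly the silent failure described above.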
Q: Yurii, was it a happy ending to the Docker story?
Well, that wasn’t it. Remember that to create an image from the existing Docker setup we needed a working project? Everything was going fine up to a point. The technicians on the client side were using Windows. And if you build a project that is meant to run in a *nix environment on a Windows machine, the dependencies will be compiled for Windows, so it will not work on the production server (or any other server). And so it happened. We had to solve this problem. First, following Occam’s razor, we tried the obvious method: uploading a half-built project with all its dependencies to Git, so that everything could be built on Windows and run on *nix. Perfect! But, in fact, Git went crazy tracking 20,000+ files in a very complex nested structure, and began losing some files and “forgetting” about others. Everything worked, but deploying and building took hours. It was torture. Eventually, together with the client’s technical expert, we created a “builder” Docker image that could run on any platform, built the project from a minimum of input data, and produced an image of the application ready for production use. On any hardware, in any environment. Just as the customer wanted.
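The idea behind such a builder can be sketched like this (a hypothetical reconstruction, not the client’s actual Dockerfile; the base image and paths are illustrative): the build runs inside a Linux container, so native dependencies are always compiled for *nix regardless of the host OS.

```dockerfile
# builder.dockerfile: runs the Meteor build inside a Linux container,
# so npm dependencies are compiled for *nix even when the host is Windows.
FROM node:4
RUN curl https://install.meteor.com/ | sh
WORKDIR /app
COPY . /app
# Produce the production bundle; it can then be baked into a small
# runtime image in a second build step.
RUN meteor build /bundle --directory --allow-superuser
```

The bundle produced in `/bundle` is the "minimum of input data" for the final production image, which ends up identical no matter which platform kicked off the build.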
But apart from the application, there is also its database. The client did not want to depend on cloud providers for the database; they wanted to control it themselves, which I totally agreed with. To meet these needs, Valeria created several architecture configurations where, in addition to the usual MongoDB clustering, there were mini-clusters on Docker. Docker rescued us again.
Q: Valeria, did you face other challenges?
The main goal was to build a MongoDB cluster from a single Dockerfile and deploy it to AWS ECS: a Docker cluster inside an AWS VPC.
I think there is no need to get into the technical details, the number of sleepless hours, cups of coffee and breakdowns. We came up with multiple ways this could be built, though none of them worked. We then decided to try running a single Mongo inside AWS ECS; that went okay. We tried running a MongoDB cluster inside a single EC2 server; again, that went okay. But we couldn’t find a way to run a MongoDB cluster inside AWS ECS.
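The single-host variant that did work can be sketched with docker-compose (the service names, MongoDB version and replica set name `rs0` here are illustrative choices, not the project’s actual configuration):

```yaml
# docker-compose.yml: three mongod containers forming one replica set
# on a single EC2 host. After "docker-compose up -d", initiate the set once:
#   docker exec <mongo1-container> mongo --eval 'rs.initiate({_id: "rs0",
#     members: [{_id: 0, host: "mongo1:27017"},
#               {_id: 1, host: "mongo2:27017"},
#               {_id: 2, host: "mongo3:27017"}]})'
version: "2"
services:
  mongo1:
    image: mongo:3.2
    command: mongod --replSet rs0
  mongo2:
    image: mongo:3.2
    command: mongod --replSet rs0
  mongo3:
    image: mongo:3.2
    command: mongod --replSet rs0
```

On one host this works because the containers share a network and stable DNS names; spreading the same members across an ECS cluster was where things broke down.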
Having tried everything, I realized that even Google doesn’t know everything. I decided to ask the AWS gurus what I was doing wrong. The answer: “You are not doing anything wrong. It is just impossible!” Because of various AWS options and restrictions, the service being new, and many other reasons, it was simply impossible.
Q: Does this mean there is actually nothing you guys can’t do, unless it’s really impossible? ☺
I have to admit, we were a bit disappointed and relieved at the same time. Moreover, we had already managed to build Dockers in two other ways: on a single EC2 instance and as a single app inside ECS clusters. And this experience will stay with us for a long time, because since that project we have received numerous requests to “Dockerize.”
The bottom line: with Docker we were able to build an architecture that can work anywhere and be built on any machine with any operating system. And the ability to create a MongoDB cluster in a Docker image allowed us to be more flexible and offer more options to the client.
As you can probably tell, I really like this technology; it makes life so much easier.
We ‘packaged’ our skills and imagination to build a great app!
A traveler and diver, who stands for active lifestyle and creating beautiful software.
A dreamer, free minder, curious engineer, who enjoys exploring new places and everything about AWS.