What came and what’s to come at Geoblink Tech

Happy new year! In these first days of 2018, here at Geoblink we have taken a quick look back at the technologies that excited us most during 2017. The list includes technologies some of us had to learn in order to work with our existing systems, ones we played around with just for fun, and others that were new to us and cool enough to end up in our production systems.

Not only that, but we also compared that list against the technologies each of us is looking forward to learning or working with in 2018. We hope you find the list interesting, and if you want to comment on it, let us know on Twitter (@geoblinkTech).

Read more

2 days of fun and data at Big Data Spain 2017

On Thursday and Friday last week a few geoblinkers from the Tech team were fortunate enough to attend Big Data Spain in Madrid, “one of the three largest conferences in Europe about Big Data”.

The line-up of speakers this year was amazing, and they certainly didn't disappoint. Moreover, our VP of Technology Miguel Ángel Fajardo and our Lead Data Scientist Daniel Domínguez had the chance to actively participate as speakers with a thought-provoking talk titled “Relational is the new Big Data”, where we made the case that relational databases can today solve many use cases regardless of the size of your dataset, with plenty of benefits over NoSQL options.

Relational is the new Big Data

Read more

Happy GIS Day!

At Geoblink we want to celebrate GIS Day by sharing with you our latest improvement to catchment area computation.

As Carlos explained in this previous post, at Geoblink we take advantage of graph theory to compute the catchment area of a location. We define our graph as a set of intersections (nodes) connected by street/road segments (links), and we attach to both nodes and links a set of properties that describe them.

The new property added to our links is the traffic peak, which allows us to compute catchment areas that take rush hours into account. First, we define a location of interest. Then we apply our set of algorithms to the graph to compute that location's catchment area, specifying whether rush hours should be considered. As a result, we obtain the set of intersections and street/road segments that make up the catchment area of the location of interest, with or without traffic peaks.

But how does it work? Let's compute Geoblink's catchment area for a 5-minute drive, first ignoring rush hours and then considering them.



As you can see, the catchment area that takes peak traffic into account is smaller than the other one. Adding the traffic peak property to our street/road segments allows us to give our clients a more accurate catchment area when rush hours must be considered.
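The traversal described above can be sketched as a time-budgeted Dijkstra search in which each link's travel time is scaled by a rush-hour factor. This is only an illustrative toy: the graph, travel times and factors below are invented, and Geoblink's real algorithms and data model are more involved.

```python
import heapq

# Toy road graph: node -> [(neighbour, travel_seconds, rush_hour_factor)].
# Nodes, times and factors are invented for illustration.
GRAPH = {
    "A": [("B", 60, 2.0), ("C", 120, 1.0)],
    "B": [("D", 60, 1.5)],
    "C": [("D", 60, 1.5)],
    "D": [],
}

def catchment(graph, origin, budget_seconds, rush_hour=False):
    """Nodes reachable within the time budget (Dijkstra-style search)."""
    best = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, seconds, factor in graph[node]:
            travel = seconds * (factor if rush_hour else 1.0)
            new_cost = cost + travel
            if new_cost <= budget_seconds and new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(heap, (new_cost, nxt))
    return set(best)

print(catchment(GRAPH, "A", 180))                  # {'A', 'B', 'C', 'D'}
print(catchment(GRAPH, "A", 180, rush_hour=True))  # {'A', 'B', 'C'}: smaller at rush hour
```

With the rush-hour factors applied, the same time budget reaches fewer nodes, which is exactly the shrinking effect shown in the maps above.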

Learning from nothing, or almost

Last article

In my first article for this blog, I talked about how my teachers and a team of students I joined used the latest Deep Learning (DL) technologies to help fight cancer.
The goal was to segment and colorize the areas of a scanner image corresponding to biological tissues, which could be used to estimate the health of the patient and, in turn, to better tailor their treatment.


Uncolored scan

Colorization of a scanner image
At that time, in order to focus on how technology can improve medicine, I had to skip a crucial component of the IODA project: the pre-training of the auto-encoders. By presenting it here, I'll try to illustrate some of the challenges of deep learning.

So, first, let's go through a quick reminder of deep learning.
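To give a first taste of what pre-training means, here is a toy sketch: an auto-encoder is trained, with no labels, to reconstruct its own input, and the weights it learns can then be used to initialise a deeper network. Everything below is illustrative, not the IODA implementation: a tied-weight linear auto-encoder on made-up 2-D data, using a numeric gradient instead of backpropagation for clarity.

```python
import random

random.seed(0)

# Toy dataset: 2-D points scattered around the line y = x, so a single
# hidden unit can capture most of the structure. Values are made up.
values = [random.uniform(-1, 1) for _ in range(50)]
data = [(v + random.gauss(0, 0.05), v + random.gauss(0, 0.05)) for v in values]

# Tied-weight linear auto-encoder: encode h = w . x, decode x_hat = w * h.
w = [0.1, -0.2]

def loss(w, data):
    """Mean squared reconstruction error over the dataset."""
    total = 0.0
    for x0, x1 in data:
        h = w[0] * x0 + w[1] * x1                             # encode
        total += (w[0] * h - x0) ** 2 + (w[1] * h - x1) ** 2  # decode error
    return total / len(data)

initial = loss(w, data)
lr, eps = 0.05, 1e-6
for _ in range(200):
    # Numeric gradient for clarity; a real network uses backpropagation.
    grad = []
    for i in range(2):
        hi, lo = w[:], w[:]
        hi[i] += eps
        lo[i] -= eps
        grad.append((loss(hi, data) - loss(lo, data)) / (2 * eps))
    w = [w[i] - lr * grad[i] for i in range(2)]

print(initial, loss(w, data))  # reconstruction error drops sharply
```

Without ever seeing a label, the auto-encoder learns the dominant direction of the data; stacking such layers and fine-tuning them is the essence of the pre-training that IODA relies on.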

Read more

LATERAL, a hidden gem in PostgreSQL

LATERAL is a very useful tool that every SQL user should have in their toolbelt. However, it is normally not covered in introductory courses, so many users never come across it.

Suppose that we want to find the perfect match between two tables. For example, we could have a table with all students and a table with all schools, and we would like to find the school closest to each student. Or we could have a table of users with some preferences and another table with products, and we want to find the product most similar to what each user wants. In cases like these, LATERAL is often the right tool for the job.

Let's solve the problem with some mock data.

Read more

JS meets SQL. Say hi to AlaSQL!

At Geoblink, we often find ourselves moving a lot of data from our database to the frontend, to display it in useful charts and graphs for our users. Although restructuring data from the backend is usually necessary, it is not especially challenging, as the queries to the database are crafted to return the data the way we need it for each purpose. For these cases, Lodash is more than enough to filter and fit the data to the needs of the front-end. But what happens when we do not query the data ourselves? Sometimes, when using third-party APIs, the data may not be structured the way we want it, and we need a way to organize it quickly and easily.

Enter AlaSQL. AlaSQL gives us the power of SQL queries in JavaScript, in a fast, in-memory SQL database. With this library, we can store records in a relational table format and query them using SQL syntax. This is extremely useful for counting or grouping a large number of records in a flash, without having to overthink how to manipulate the JSONs to achieve the required structure.
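AlaSQL itself lives in JavaScript, but the underlying idea — loading JSON records into an in-memory SQL table and grouping them with a query — is language-neutral. Here is a sketch of that idea using Python's built-in sqlite3; the payload and column names are invented for the example.

```python
import json
import sqlite3

# Pretend this JSON came back from a third-party API we don't control.
payload = json.loads("""
[
    {"city": "Madrid",    "sales": 10},
    {"city": "Madrid",    "sales": 5},
    {"city": "Barcelona", "sales": 7}
]
""")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (city TEXT, sales INTEGER)")
conn.executemany("INSERT INTO records VALUES (:city, :sales)", payload)

# Group and aggregate with plain SQL instead of hand-rolled loops.
totals = dict(conn.execute(
    "SELECT city, SUM(sales) FROM records GROUP BY city ORDER BY city"))
print(totals)  # {'Barcelona': 7, 'Madrid': 15}
```

One GROUP BY replaces the filtering-and-accumulating loop you would otherwise write by hand, which is exactly the convenience AlaSQL brings to the frontend.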

Read more

Automating data pipelines with Jenkins

One of the cool things about being a Data Scientist at Geoblink is that we get to work on all stages of the data science workflow and touch a very diverse stack of technologies. As part of our daily tasks we gather data from a range of sources, clean it and load it into our database; run and validate machine learning models; and work closely with our DevOps/Infrastructure team to maintain our databases.

As in other start-ups, as we grow rapidly it becomes more and more important to automate routine (and, frankly, boring) tasks, which take precious development time away from our core developers, but also from us data scientists.

While automation tools have long been used in software development teams, the increasing complexity of data science cycles has made clear the need for workflow management tools that automate these processes. No surprise, then, that both Spotify and Airbnb have built (and, even better, open-sourced!) internal tools with that aim: Luigi and Airflow.

As part of our effort to iterate faster and promptly deliver on our clients' requests, in the last couple of weeks I've spent some time working with Jenkins, the great automation tool we use, and in this post I'd like to give you a taste of how we use it in Geoblink's Data team.
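To make the idea concrete, here is a minimal declarative Jenkinsfile sketch for a nightly data pipeline. It is illustrative only: the stage names, script paths and mail address are invented, not our actual configuration.

```groovy
// Hypothetical nightly ETL pipeline -- all names and paths are made up.
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')  // run once a night, around 02:00
    }
    stages {
        stage('Extract') {
            steps { sh 'python scripts/extract.py' }
        }
        stage('Clean') {
            steps { sh 'python scripts/clean.py' }
        }
        stage('Load') {
            steps { sh 'python scripts/load_to_postgres.py' }
        }
    }
    post {
        failure {
            mail to: 'data-team@example.com',
                 subject: "Pipeline failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: 'Check the Jenkins console output for details.'
        }
    }
}
```

Each stage fails fast, so a broken extraction never triggers a load, and the failure notification frees us from babysitting the job.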

Read more

Using Deep Learning to heal people suffering from cancer

DL is cool

Sometimes we happily use Deep Learning for futile things like generating faces or turning horses into zebras. But most of the time, it's a powerful tool that can help save lives.

At the INSA of Rouen, I worked in a team of students implementing a solution based on an article published by researchers, some of whom were my teachers. The article is called IODA: An input/output deep architecture for image labeling and was written by Julien Lerouge, Romain Herault, Clément Chatelain, Fabrice Jardin and Romain Modzelewski. Image labeling is the act of determining zones in an image and saying: 'this zone corresponds to the sky' or 'this zone corresponds to a pedestrian'. But what's fantastic about their work is that it also does image segmentation (it also detects where the boundaries of the zones are).

Example of image segmentation


Read more

Parallelizing queries in PostgreSQL with Python

At Geoblink we run more than 20,000 queries to generate just one of our several ~100 GB PostgreSQL databases from scratch from our raw data files. If we ran them in sequential order, the database generation would take far too much time, so to reduce it we run several queries in parallel. Doing that by hand would be impossible, so we use a nice Python script to generate and run the queries.
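The core pattern is a worker pool that executes independent SQL statements concurrently. Here is a hedged, self-contained sketch: the queries are trivial, and an in-memory SQLite database stands in for PostgreSQL, where you would instead open a real connection (e.g. via psycopg2) in each worker.

```python
from concurrent.futures import ThreadPoolExecutor
import sqlite3

# Stand-in queries -- in a real pipeline these would be generated
# from the raw data files and run against PostgreSQL.
QUERIES = ["SELECT 1 + 1", "SELECT 2 * 3", "SELECT 10 - 4"]

def run_query(sql):
    # One connection per worker; an in-memory SQLite database stands in
    # for PostgreSQL so the example is self-contained.
    conn = sqlite3.connect(":memory:")
    try:
        return conn.execute(sql).fetchone()[0]
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_query, QUERIES))

print(results)  # [2, 6, 6]
```

`pool.map` preserves the order of the input queries, which keeps bookkeeping simple even though the queries finish in arbitrary order.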

In this post I will show an example of how to do it in Python. Read more

PostgreSQL: Foreign keys with condition ON UPDATE CASCADE

Foreign keys are a key feature of relational databases, ensuring the integrity and coherence of data. They allow transactions ON CASCADE, meaning that changes to the primary key/unique constraint they reference are also applied to the referencing rows. This has many advantages as the complexity of the database grows.

However, there are cases where using ON CASCADE is risky, because you can lose track of what is actually being changed (especially when deleting). So while it is generally good practice for updates, one must be careful with deletes. For example, if we created a demo account, we might not want to allow non-expert users to delete it, because all its related data would be lost.

In this post we are going to compare different alternatives to the ON CASCADE constraint and their performance.
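To see ON UPDATE CASCADE in action first, here is a minimal, self-contained sketch. Table and column names are invented, and SQLite stands in for PostgreSQL; note that SQLite only enforces foreign keys once the pragma is enabled.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs

conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON UPDATE CASCADE
    )
""")
conn.execute("INSERT INTO users VALUES (1)")
conn.execute("INSERT INTO orders VALUES (100, 1)")

# Changing the referenced key cascades to the referencing row.
conn.execute("UPDATE users SET id = 42 WHERE id = 1")
cascaded = conn.execute("SELECT user_id FROM orders").fetchone()[0]
print(cascaded)  # 42
```

Without the cascade clause the UPDATE would simply fail with a foreign key violation, which is the trade-off between convenience and control this post explores.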

Read more