Mentoring juniors and interns remotely


My background

In my previous job I’d worked remotely for 5 years, and now, due to COVID-19, we have to work from home again. From time to time people ask me about the lessons learned during my remote years; I usually answer by explaining the common problems and ways to solve them. I even gave a presentation on the subject.

However, there is a topic that rarely comes up in those questions and that, given this pandemic situation, I consider vital to keep our IT industry in good shape: mentoring juniors and interns. In my experience, I have mentored people from Mexico, India and Spain, remotely.

Juniors in the remote world

The main problem when working from home is the lack of communication. This problem affects all employees, but it’s far more severe in the case of juniors.

Juniors are basically novices in the Dreyfus model of skill acquisition, which means that they should be learning by following rules and examples. Novices don’t know why an error arises. They look for constant feedback. Feedback is key to improvement, which is what you want a junior to focus on. And feedback is communication.

Juniors can’t work autonomously. They need the guidance of a senior. They may not need the big picture, but they do need a clear task to do, challenging but understandable: they need to keep momentum. They need a plan with examples. In some cases, like working in a fast-moving startup, the lack of a plan can be very confusing.

Juniors bring freshness and enthusiasm. But if they don’t feel on track, their excitement starts to fade; and believe me, that’s the worst thing that can happen. It’s important to spot this situation as soon as possible, because depending on the junior’s personality (openness, sincerity, etc.), the problem can remain hidden for weeks or even months and lead to fatal consequences.

There is a minor problem too that tends to happen in remote settings: the lack of a “team” feeling. In the case of juniors this can be an issue as well.

Communication

Communication is the key to keeping a junior on track: it helps their learning process and their daily rhythm. But working from home makes this communication less casual, so we need to patch the problem with some processes. Below I present some possible solutions.


Daily Learning

A senior should always be available to give feedback to a junior. Moreover, the senior should make clear that the junior can freely ask about any doubt, even if the question might sound silly. This also makes it possible to give advice just in time, keeping the feedback loop really short.

It is vital that each conversation with a junior ends with an explicit “I understand” or some other form of ACK. A silent answer is no answer.

A senior should give examples to learn from, and always take the chance to share lessons learned while doing a task similar to the one the junior is currently doing.

Finally, a senior should challenge the junior’s mind. A common way is to ask for alternative solutions: try to find 2 solutions to the problem (or at least a variation of the solution), and later discuss the trade-offs together.

Planning

It’s mandatory to set up 1:1s frequently. In these meetings there should be a follow-up of the junior’s career evolution. Your company may have a career track, like this one by Patreon, or just a simple list of concepts to cross out. Either way, the junior should clearly understand their strong points and the ones to improve.

Junior members should know the general idea behind upcoming tasks and have a clear path: a list of tasks with simple explanations.

Team building

There are lots of different team-building activities, even for the remote case. These days it’s quite common to see online meetups and tech events, some of them with cheap or free access. It’s a great chance to invite juniors to watch the event together and discuss the topics afterwards.

Another similar approach is a reading club: find a must-read tech book and set a time to discuss each chapter together.

Final words

This new work-at-home world is here to stay. Hiring and mentoring juniors can be harder remotely, but the energy and momentum they bring to the team is worth it. And, frankly, they are cheaper to hire too. The industry can’t live without them, and now is not the time to look away.

Be the mentor you wanted to meet when you were a novice.

This text was proofread by some juniors.


4 things I would’ve liked to know about Cloud Functions

We had an important project to do: build and keep updated some transformed tables in BigQuery, with data coming from a transactional system. We needed 3 pieces: the code that builds the tables, a place to deploy this code, and a scheduler to call it. Given that the code was in Python, we had to evaluate different platforms to deploy it on.

The platform chosen was Google Cloud Functions. There is actually a nice diagram to help you choose among Google services, which helped us out (though for some reason it was not easy to find). We could have deployed on our own hardware, but a side goal of this project was testing the cloud.

Cloud Functions is perhaps the easiest way to deploy your code, as long as it’s Python, Go or Node.js; for other languages you can try the newer Cloud Run, which is basically a Docker container that is executed on an external event. A Cloud Function can be triggered by an HTTP call, a Google Pub/Sub message, or some platform-internal events (like a file being uploaded). In our case, we use Cloud Scheduler (a basic cron) to send a Pub/Sub message.
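For context, a Pub/Sub-triggered function in Python looks roughly like the sketch below (the handler name and the placeholder logic are hypothetical, not our actual code):

```python
import base64

def rebuild_tables():
    # placeholder for the actual BigQuery transformation logic
    print("Rebuilding transformed tables...")

def handler(event, context):
    """Entry point of a background Cloud Function triggered by Pub/Sub."""
    # the Pub/Sub payload arrives base64-encoded in event["data"]
    message = base64.b64decode(event["data"]).decode("utf-8") if "data" in event else ""
    print(f"Received Pub/Sub message: {message}")
    rebuild_tables()
```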


Up to here everything sounds perfect, but we found 4 minor problems due to our lack of knowledge of the platform:

DEPLOY AND LOGS WITH DELAY
As you deploy and use the Function in the cloud, the results arrive with a delay. This is not a real problem, but if you’re used to working with “tail” or other console scripts to explore logs, you have to relax and wait a couple of minutes before concluding that your code is not working.

TIMEOUT
Cloud Functions’ documentation says the product is intended for short-running scripts, so it comes with a 1-minute timeout by default. We missed this detail at first, and actual use misled us into thinking there was no timeout: if you launch the Function manually, it apparently runs for several minutes without a timeout. But later, looking at the logs, we found the timeout error (which is actually logged as INFO instead of ERROR!). It seems that when our script takes more than 1 minute, it keeps running until somebody else asks for resources: so if we are in a relaxed pool, we can be lucky and run longer.

However, you can configure the timeout limit at deploy time, up to 9 minutes. Unluckily our code runs for 11 minutes, so we had to split it.

NO WRITE PERMISSIONS
The library we use does some disk writing internally, and we didn’t realize it until we saw the problem in the logs. The good thing is that you can write to /tmp, so we only had to reconfigure the library to write there. The weird thing is that anything your code writes to /tmp is also written to the function log, so the logs can become difficult to follow.
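As an illustration, pointing a library’s scratch space to /tmp might look like this (the environment variable name below is hypothetical; each library has its own setting):

```python
import os
import tempfile

# Cloud Functions only allows writes under /tmp, so redirect any disk usage there
os.environ["MYLIB_CACHE_DIR"] = "/tmp"  # hypothetical setting, library-dependent

with tempfile.NamedTemporaryFile(dir="/tmp", suffix=".csv", delete=False) as scratch:
    scratch.write(b"intermediate,data\n")
    print(f"Wrote scratch file to {scratch.name}")
```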

SINGLE THREAD
This was the trickiest one! We are using a Python library that, by default, creates 4 execution threads. For some reason this doesn’t work well on Cloud Functions, and sometimes the connection with BigQuery is closed before all threads have finished. So we had to use an undocumented feature of the library to work with a single thread only.


Summing up, Google Cloud Functions is a lightweight way to execute your Python scripts, with a really easy way to deploy and use them. But sometimes things go wrong under the hood, so you should READ THE LOGS to find out if everything is OK. Checking that the final results match what you expect helps too (for instance, running automated test queries that try to find mismatching numbers).

Disclaimer: we chose Google Cloud Functions due to the particular background of the company (team, knowledge, etc.). Depending on the task at hand, you might want to look at other, more specific products that could suit ETL or data processing better (for instance, Dataflow/Beam on the Google platform).


My 2018 in review

My main objective in 2018 was to go deep into Machine Learning (continuing 2017’s focus). When the year started, I decided to organize my free time into small 1-month projects. The original idea was to start with Deep Learning too, but I ended up exploring other fields, like data engineering.

In February I tried different approaches to develop an ML model for the famous Titanic Kaggle competition, where you have to predict the survival of passengers given some data about them. It was really fun, because I explored different ideas, but I ended up with a quite over-engineered notebook. Later I realized how important it is to find the noisy features that you have to ignore.

In March I decided to improve my Python skills, so I set myself a challenge based on intuition: try to group products that are bought together, using real data from work (Ulabox, an online supermarket). I enjoyed creating sparse matrices with scipy and doing matrix operations with numpy, which was a good refresher of maths. The result was a nice dendrogram showing that some vegetables are bought together, as well as some types of yogurt.
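The approach, in rough strokes, looked something like the sketch below (toy data and assumed structure, not the actual Ulabox dataset):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.sparse import csr_matrix
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram

# rows = orders, columns = products; 1 means the product is in the order
orders = csr_matrix(np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]))

# co-purchase counts: how often each pair of products appears together
co_purchase = (orders.T @ orders).toarray().astype(float)

# turn the similarity into a distance and cluster hierarchically
distance = co_purchase.max() - co_purchase
np.fill_diagonal(distance, 0)
dendrogram(linkage(squareform(distance), method="average"),
           labels=["tomato", "lettuce", "plain yogurt", "greek yogurt"])
plt.show()
```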

In May I created a simple notebook to solve the Titanic competition, but with one goal: help my workmates join a competition and get excited about ML. So I wrote the simplest code that worked, while trying to show an eye-catching result. I plotted a simple decision tree, with great results: both coworkers joined the session, and other Kaggle users upvoted my notebook.

In June I bought a new computer with a GTX 1080, getting ready to jump into Deep Learning. I tried some tutorials (TensorFlow and PyTorch), but I didn’t like starting from level 0, that is, creating my own neurons from scratch; I had already learned about neural networks years ago, at university. Almost at the end of 2018 I finally found a book at the level I was looking for: Advanced Deep Learning with Keras.

Regarding conferences: in July I attended PyData Berlin thanks to my employer (who paid for the tickets). Later, in September, I attended DataEngConf in Barcelona, which really matched my company’s needs: making a data engineering plan. In October I took a train to Paris and then another to Karlsruhe to attend PyConDE; this conference was really well organized, with a wide range of talks and an incredible venue: a digital art museum with thought-provoking exhibitions about the future we are building.

The most interesting books I read this year came as suggestions from conference sessions: one is Lean Analytics and the other is Data Engineering Teams. During 2018 I also read some non-work-related books, most of them sci-fi novels (like The Expanse books 3 and 4).

During the summer I continued improving my knowledge of Python, using libraries to create images and videos. I also joined a MOOC about Google Cloud Platform (driven by a need at work).

I submitted 3 proposals to different Calls for Papers during the year, and was lucky enough to be selected to run a workshop in November in Barcelona, during the unforgettable PyDay. I prepared a practical introduction to NLP, using classic and modern methods to classify texts. I chose Spanish jokes as the corpus to work on, and the result was amazing: both the audience and I enjoyed the workshop a lot.

Finally in December I took a rest regarding tech stuff… and got married 🙂


My 4 favorite grouping tricks with pandas

When doing data analysis there is no better help than the pandas library. Actually, pandas is one of the reasons Python became extremely popular in the data science field. It builds on one of the pillars of the field, numpy (a library for working with matrices), adding not only indexes and columns but a wide range of functionality too. You can almost do magic with your data with pandas!

One of the most common tasks in pandas is grouping data. You create a group with the groupby() function and then apply some common action to that group, like mean(), count(), median(), etc.

The groupby() function sounds like SQL’s GROUP BY, and while it’s similar to its SQL cousin, it comes with extended powers. Let’s see some basic use before showing the tricks!

Let’s suppose we have a dataset with the results of an exam. We have 6 students who spent up to 2 hours (120 minutes) solving the problems, sitting in 2 different rooms (labeled 1 and 2).
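The original post embedded the dataset as a table; here is an assumed reconstruction, consistent with the figures quoted further below:

```python
import pandas as pd

df = pd.DataFrame({
    "name":   ["Alex", "Anna", "Sophie", "George", "James", "Laura"],
    "room":   [1, 1, 1, 2, 2, 2],
    "result": [9, 7, 6, 8, 7, 5],
    "time":   [85, 99, 103, 94, 112, 110],  # minutes spent on the exam
})
```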


INTRODUCTION: basic grouping

The most basic way to do grouping is by a column (or ‘feature’, in data analysis slang). In this case we are going to group by room, then choose only the time feature, and get the mean time spent in each room.
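A minimal sketch, using the df defined above:

```python
df.groupby("room")["time"].mean()
# room
# 1     95.666667
# 2    105.333333
# Name: time, dtype: float64
```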

Notice that the functions used with groups can also be used without grouping, as shown in the next case. We also show how to use square brackets to choose only some columns, in this case result and time, so further operations apply only to them.
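For instance, without any grouping (still using the assumed df):

```python
df[["result", "time"]].mean()
# result      7.0
# time      100.5
# dtype: float64
```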

In the following case we first choose 2 columns, result and time, and then group by result, looking for the maximum values.
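In code, that looks like this:

```python
df[["result", "time"]].groupby("result").max()
#         time
# result
# 5        110
# 6        103
# 7        112
# 8         94
# 9         85
```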

Given these examples, we can get an idea of basic use… but let’s now see my 4 favorite tricks when grouping.

TRICK 1: list grouping

You can group by more than one column, and the result will be a multi-index dataframe. Nice!
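For instance, again with the df from the introduction:

```python
df.groupby(["room", "result"])["time"].mean()
# room  result
# 1     6         103.0
#       7          99.0
#       9          85.0
# 2     5         110.0
#       7         112.0
#       8          94.0
# Name: time, dtype: float64
```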

Ok, ok, I hear you: “this can be done with SQL too”. That’s right, but later you can use the multi-index for further exploration.

TRICK 2: grouping by function

You can pass a function as a parameter to pandas’ groupby() to create the groups. The function receives an index value as its parameter, so you can use pandas’ loc to locate the data. For instance, let’s suppose we want to group by the number of ‘e’ letters in each student’s name.
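A sketch of that grouping, with the df defined earlier:

```python
df.groupby(lambda idx: df.loc[idx, "name"].count("e"))["time"].mean()
# 0    104.5
# 1    100.0
# 2     94.0
# Name: time, dtype: float64
```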

So people with zero ‘e’s in their name spent 104.5 minutes on average, while people with 2 ‘e’s finished the exam in just 94 minutes.

Isn’t it amazing? Of course this is a silly example, but you can do things like grouping ages by decade (like I did in a notebook on Kaggle).

TRICK 3: Group and rank

pandas has a function called rank() that gives the order/rank of a column. For example, it can rank the time column and show a 1 for the quickest student, a 2 for the second one, etc.
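Globally, that looks like this (using the same assumed df):

```python
df["time"].rank()
# 0    1.0    (Alex)
# 1    3.0    (Anna)
# 2    4.0    (Sophie)
# 3    2.0    (George)
# 4    6.0    (James)
# 5    5.0    (Laura)
# Name: time, dtype: float64
```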

But how could we get the rank per room? We want to know which student was the quickest for room 1, and which one for room 2…
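Combining groupby() with rank() does the job:

```python
df.groupby("room")["time"].rank()
# 0    1.0    (Alex, room 1)
# 1    2.0    (Anna, room 1)
# 2    3.0    (Sophie, room 1)
# 3    1.0    (George, room 2)
# 4    3.0    (James, room 2)
# 5    2.0    (Laura, room 2)
# Name: time, dtype: float64
```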

So Alex was number 1 globally and also the quickest in room 1, while George was number 1 in room 2. Isn’t that magic?

Funnily enough, rank() returns the order as floats, but you can change its type with .astype(int) later.

TRICK 4: Group and process with agg() magic

With the agg() function you can describe several grouped operations in a compact form. Let’s see an example to understand it better:

Using a dictionary, we first define which columns we will work on. Then we define the operations we want to perform on each column; you can even write a lambda function there!
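A sketch of that compact form, again on the df from the introduction (the lambda computes the time spread per room):

```python
df.groupby("room").agg({"time": ["mean", "min", lambda t: t.max() - t.min()]})
#            time
#            mean  min  <lambda_0>
# room
# 1     95.666667   85          18
# 2    105.333333   94          18
```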

I hope you liked these tricks!


2017 focus: ML

At the end of 2016 I was still amazed by the result of the AlphaGo vs. Lee Sedol match in March (the first time a machine beat a top professional Go player), and at the same time I was looking for a subject to focus on in 2017, so I chose Machine Learning. During my university years I had tried some related tools (genetic algorithms, basic neural networks, etc.), but I hadn’t looked at the field in 10 years.

The first stop was the famous Machine Learning course by Andrew Ng on Coursera, as everybody points you there. Although it explains a lot of complex stuff in an intuitive way, you soon get tired of so much maths and of using Octave/Matlab when you should be using Python.

After one year learning about Machine Learning, I have quite a list of recommendations on how to start exploring the field. Disclaimer: this is probably related to my preferred way of learning, that is, with text instead of videos. This could be a good way to start if you have no previous experience:

  • Do not watch that Coursera ML course; just read the notes somebody took on it instead.
  • Learn about Python, but especially about the libraries numpy, pandas and scikit-learn. Also learn how to run a Jupyter notebook. The best way to install them all is via the Anaconda distribution.
  • Buy a copy (paper or ebook) of the book “Python Machine Learning” by Sebastian Raschka.
  • Join Kaggle and have a look at the Titanic tutorials and its new Learn section. They also have a video course on Udacity in case you like watching videos.
  • Don’t be in a rush to learn deep learning (aka neural networks): you’ll first have to learn about classic ML models, but also about a lot of related processes: data cleaning, feature engineering and data visualization.

My first real-world input came in May, when I attended the PyData conference in Barcelona, which was a turning point: I found lots of ideas to apply, but above all I felt the industry’s pulse.

During the summer I challenged myself to apply ML at work and to give a conference talk. The subject was customer segmentation using unsupervised algorithms, on a dataset I prepared myself from our company’s data. The talk eventually became a 2-hour workshop.

It was the first time I gave a presentation about Machine Learning in English. Although the audience was satisfied with the workshop and some people had interesting conversations afterwards, I felt that I should have worked harder while preparing it.

As 2017 ends and 2018 starts, I’ll keep focusing on ML, but with a more practical approach. At my day job we have developed a recommendation system that will evolve into several ML models working together, and after work I’ll try to play more with Kaggle, taking part in some competitions.

In 2018 I’ll try deep learning too: with Andrew Ng’s TensorFlow course, a creative-apps course and some video tutorials on PyTorch. I’ll also try to improve my engineering approach to ML, as things like version control, testing and deployment are very rare to see in a world with more university people than industry ones. Finally, I plan to complete a nice course on data visualization with D3.js.

I hope all these links help somebody too!


Teaching students about real industry work

Some months ago I had the chance to teach university students how we develop software in the real world, as part of a “company seminars” event.

There is an ongoing discussion in our industry: do you need a Computer Science degree to become a successful developer? Some people say that the subjects taught at university become outdated quickly, basically due to the lightning speed of technology, and that nowadays taking a JavaScript course is enough to learn to program. Other people say that you must spend 4~5 years at university.

I’m on the side of formal university education: students need foundations to truly understand how things work. But it’s also true that they need to know how the industry really works. Virtualization, version control, code quality (“clean code”), tradeoffs, etc. are subjects that, unluckily, are not taught at university.

During the seminar I talked to the students about general subjects, like the tradeoffs we have to choose in our company, but also about trending technologies like Docker. However, the subject they loved most was my introduction to clean code, which opened their eyes. Let’s hope it inspires them.

Here are the links to the slides I used:
Professional development
Clean Code
OOP and SOLID principles
Introduction to docker
Seminar conclusion

The best advice I gave them: Find a job in a company where you can learn.


PyData conference in Barcelona

I was lucky to attend the PyData conference in Barcelona this year, hosted at ESADE.

Although I’m basically a PHP developer, I’ve been playing with data science tools from the Python stack lately. I have no real experience in data science, apart from a couple of prediction scripts using linear regression, but I was curious.

With a novice spirit, I set some clear objectives: find out whether data science is like teenage sex or companies are really using it; get a feeling for the community; and try to learn as much as I could.

First of all, the community is vibrant, far more than the PHP one in Barcelona. The organization was smooth too, and all the people I talked with were really nice. Everybody had things to learn, so everyone came with an open mind.

It was funny to see that I was on the “data owners” side, while most people were on the “looking for datasets” side. This led to several conversations asking me how we use the data in our company.

Regarding the talks, quite a lot were about tools. The Python science stack has a wide range of evolving tools, and this somehow reminds me of PHP circa 2008, when basic tools (PHPUnit, for example) were becoming popular. It’s good to polish your tools and master them, so I welcomed those talks.

There were also some talks on theory, which surprised me, as I had never seen university professors at software conferences. Mathematical and computer science concepts were explained, for instance on optimization. This contrasts with the common industry solution: if some code is slow, just use more machine instances, which is far cheaper than spending time trying to optimize things (at least 99% of the time). I don’t mean I didn’t like those talks (one was actually really mind-blowing), but I would love to see more professors at other conferences, getting a real feel for some industry practices.

I was looking for talks showing “real fire”, real examples from companies. We heard about hotels trying to predict cancellations (in order to do overbooking); we saw IBM’s Watson analyzing the personality of customers; predicting which employees will leave a big company; ideas on how to react when bad weather is coming; the best weekday to publish job offers and set up interviews; and some other extremely interesting stuff… but I do want more!

My overall feeling is that I learned a lot. Python is not really used as a language but more as an interface to some amazing libraries. It looks like I have no option but to start exploring the data at ulabox!

I’d like to thank ulabox (my employer) for paying for the ticket, and all the people in the organization who did a great job!

I published some of my (unedited) notes too.


Remote working effectively

Some months ago my coworkers asked me to share my experience of remote working. We work in a normal office, but I had worked from home for 5 years, from Barcelona, Seoul and Mexico DF. So I prepared a simple presentation about the main issues to consider if you want to try working from home.

After the presentation, an interesting discussion followed. Some of my workmates had worked as freelancers in the past and had arrived at similar ways of managing their working time. That’s the key point when working from home: controlling for yourself how, and for how much time, you are productive.


Virtual disk design kata

At my current job (ulabox) we run an internal training session every Thursday, usually prepared by one of our department members. Some months ago I prepared a code kata on design patterns, with 5 steps and instructions. The idea was to push the team to debate different approaches to a common problem and show them some classical design patterns, as a way to polish our weapons. The result was good, but the discussion only really happened at the end, when I showed them those patterns.

Some weeks later I heard about a code conference in Barcelona, organized by the Barcelona Software Craftsmanship group, so I took the chance to polish my kata and propose it for the event. It was rejected for the main event, though.

Later I heard about Monday katas: this group organizes a code kata every Monday with up to 20 developers. I offered my kata and our office to host it, and on December 12th we did it! All participants agreed: the kata is smooth and makes you think about the subjects it later reveals.

I published my kata on GitHub. Have fun!


Do you test your tests?

The first time I read about serious testing was in The Pragmatic Programmer. The book explains the usual (boring) benefits of testing, but one twisted detail caught my attention: also test your tests. Testing is a net that helps you change the code without breaking the logic, and as with a real-life net, you should verify it works as expected. Tests should be in a tight relationship with the code.

When is a test good? Finding the differences between a good test and a bad one is not obvious, however. Looking for gaps or anti-patterns in our tests is a good way to improve them.

Thanks to PHPUnit and Xdebug, the PHP community started to care about testing years ago. Since then, the easiest way to show the quality of a test suite has been code coverage, that is, the percentage of the code the tests stress. That worked until programmers started to focus on 100% coverage, creating artificial tests that don’t stress the logic correctly but instead get a fake 100% line coverage. If a line is executed once, even if the test was really exercising a different unit that happens to use that class, the line “is tested”.

Are you really testing each class? Following its logic? Even if you write proper unit tests, you may be missing things.

Let’s start with a silly example, a function that does an “AND”, and a test that gets 100% coverage:
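(The original post embedded PHP/PHPUnit code here; below is a minimal Python sketch of the same idea, with hypothetical names.)

```python
def logical_and(a: bool, b: bool) -> bool:
    # the function under test: True only when both inputs are True
    return a and b

def test_logical_and():
    # only 2 of the 4 possible input combinations are exercised,
    # yet every line of logical_and() runs: 100% line coverage
    assert logical_and(True, True) is True
    assert logical_and(True, False) is False
```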

The test is only stressing 2 cases! An “AND” actually has 4 possible cases, so the 2 missing ones (false-true, false-false) were totally ignored, even though you get 100% line coverage.

This was a basic example to show the difference between line coverage and path coverage (in this case, 4 possible paths). The good news is that Derick Rethans is working on it. I wonder how many programmers will be surprised when they see how low their code’s path coverage is.

Another way to test your tests is to change the source code and see whether the tests fail (they should!) or not. This is called Mutation Testing, and it helps to detect when a test is not working perfectly. For instance:
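(Again a Python sketch of the idea, standing in for the original PHP example.)

```python
def bigger_than_5(n: int) -> bool:
    return n > 5

def test_bigger_than_5():
    assert bigger_than_5(10) is True
    assert bigger_than_5(1) is False
    # the boundary case bigger_than_5(5) is never checked, so a mutation
    # that turns the 5 into a 6 (or `>` into `>=`) survives: both asserts still pass
```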

This test looks complete, but there is no test for the boundary case, calling the function with 5. So the test is not really thorough.

In the PHP ecosystem there are only 2 Mutation Testing tools available: Humbug and Mutatesting. The second one, even though its author is also the creator of the excellent PhpMetrics, seems abandoned.

So the only real option is Humbug, developed by the author of Mockery. Unluckily, it only works with PHPUnit for the moment. It basically finds places where the code can be easily changed, like a true for a false, or a number N for N+1, and runs the tests to see if the mutation is killed (that is, the tests fail). For instance, in the previous example it changes the 5 into a 6 and the tests still pass, so the mutation was not killed.

I just hope these tools become more popular, in order to improve the quality of our industry. And let’s hope Humbug will soon work with PHPspec too, as many companies are moving from PHPUnit to Behat + PHPspec.

The code from this post can be found in its GitHub repo.