td-berlin Tech Blog

Remote Working Culture

christophoellrich • culture, environment, teamwork, remote, and management

Trommsdorff+Drüner’s (t+d) first experience with remote work took place when we opened our office in Beijing. One would assume that this first wave of remote working culture prepared t+d to become what it is today in terms of remotely centered work. However, this long-distance expansion of t+d did not really influence the daily business in our Berlin office too much (and vice versa), simply because the distance was too great and the dependencies between the work in Beijing and Berlin were not that strong.

The second remote wave within t+d was of a contrary nature. Our development team was and is growing because of an increasing number of projects along with a growing core of product features. Very soon it became obvious that our search for appropriate team members could not be limited to Berlin and Germany alone, so we started looking for good RoR and JS developers in neighbouring countries. As time passed, our team grew to 13 highly skilled developers working from three different countries, with 5 different nationalities and one common experience at t+d.

With every new team member, be it a local or a remote one, t+d tailored the work processes in the IT department to produce the maximum possible satisfaction for each individual person in the team. One of the keys to achieving this is t+d’s open and agile mindset when it comes to processes and tools. There are no rigid structures in and around the development teams that need to be obeyed just because those rules and structures have been there in previous years. We highly pursue ‘trial and error’ in liaison with ‘take it or leave it’. We believe in team spirit the same way we believe in collaborative choices when we have to decide about tools and processes that define our daily workflows. Eventually, pending doubts could be countered with reliable velocities and great sprint results.

Within our teams and projects we gained a level of ambidexterity that enables us to plan in quarters and deliver in weeks. At the same time, Scrum helped us to prevent the well-known “small” additional tasks from being added to the scheduled workload of the teams. To achieve this we had to define new processes and evolve with them over time. Still, one cannot create a remotely centered working culture solely by defining an order of working steps. The key is and will always be the people in the teams and the constitution of their individual preferences, experiences and ideas, in combination with the right tools used in the right way.

/giphy Communication

Our communication tool of choice evolved from Skype, via a short trip to Hipchat, finally leading us to Slack. We are not the only company which discovered the extraordinary possibilities that Slack offers with regard to third-party tool integrations and user experience. Slackbot and the Giphy integration opened up a huge creative space for everybody in the teams. The freedom in communication is not only fun, even (or especially?) when Giphy seriously fails from time to time, but it also motivates everybody in the team to participate in channels and discussions. Not only does it provide communicative entertainment, it also allows our team to integrate supportive notifications like deployment and provisioning stats, DataDog alerts, Sentry notifications and much more. With the opportunity to combine those individual benefits within one communication tool, Slack became a big pillar of our working culture. As one can learn from the previous lines, this working culture is not only about productivity but also very much about having fun while being productive.
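
To give an idea of how little is needed for such an integration, here is a minimal sketch of pushing a deployment notification into a Slack channel via an incoming webhook (the webhook URL, message and helper name are placeholders, not our actual setup):

require "net/http"
require "json"
require "uri"

# Hypothetical helper: post a message to a Slack incoming webhook.
def notify_slack(text, webhook_url: ENV.fetch("SLACK_WEBHOOK_URL"))
  uri = URI(webhook_url)
  Net::HTTP.post(uri, { text: text }.to_json, "Content-Type" => "application/json")
end

notify_slack("Deployment of my-app to staging finished :rocket:")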

Trellorganization of work

The second tool that we use to make our working culture remote-friendly is Trello. Trello offers a really nice and handy way of organizing tasks and user stories as they are defined in Scrum. Users can create lanes for specific task statuses and manage product backlogs in a separate board, while the team has full control over its sprint backlog board. It was obvious from the beginning that we could not rely on post-its on one of our office walls, as the synchronization would have generated too much overhead for our remote colleagues. Trello turned out to be a nice digital substitute for post-its. Beyond this, we even use Trello for all(!) our Scrum appointments. It even makes the meetings more fun: for instance, in retrospectives we can use pictures to talk about the experiences of the previous sprint, or we can attach screenshots to tickets very quickly with easy drag-and-drop. What Trello lacks with regard to statistics we compensate for by creating our own statistics and burndown charts with Google Spreadsheets. This way we have an overview of numerous sprints and can work with average velocities during planning.

Let’s hangout

So far I’ve mentioned the tools that we use for communication and workflow management, but there is nothing that can substitute seeing the people you are working with. Since this is rarely possible in a remote working setup, we started from day 1 with video conferencing tools. Initially our choice was Skype, but soon we switched to Google Hangouts. Regardless of the connection issues that we had, Hangouts offers more options for engagement. A hangout can be started straight out of Slack by typing /hangout, so there is no need to start an additional application. Hangouts also offers engaging special effects that make the team laugh from time to time, which is an invaluable capability when it comes to meetings, as all of us know. With Hangouts, team members see each other in the morning during standup and during all other Scrum appointments (planning, refinement, review, retrospective). As a positive side effect, the ease of starting a video conference out of Slack quite often results in short conversations between team members, because it is usually quicker and involves less typing effort if one has a specific question. In addition to the workflow management, this “face to face” communication fosters the remotely centered working culture that we established at t+d.

GitHub to rule it all

It’s probably not surprising that the next member of our stack is GitHub, the perfect tool for collaborative management and development of software. It allows our team members to work on the same code base without interfering with each other’s work. Because of its collaborative character we also use GitHub for documentation artefacts like API implementation guidelines. GitHub allows us to write a first version and then hand it over to the full team for review. Everybody can apply changes or give comments, while the author can trace what’s been changed at a glance.

Specifications in the wiki

As all of us know, developing software for different clients requires specifying the respective needs and requirements. This is done by our product and project owners, who specify all their user stories in a wiki. We always try to keep the structure of the wiki elements the same, so the effort of reading through all the specs is reduced to a minimum. Actually, an outcome of one retrospective was that we needed a special format for the specification of KPIs and metrics used in our smart data projects. With the aid of the development team, the product owner was able to define a file format that has been used ever since for the definition of such KPIs. This avoids communication overhead when we enter the implementation phase and contributes to the positive atmosphere within our department, as we are all working with the same files, the same formats and, especially, the same project language. Of course the wiki can be accessed by everybody in the team from everywhere, which once again makes the geographical borders in our team more and more blurry and irrelevant.

Summary

There is no such thing as the one tool that enables a company to build a remote working culture. It is the combination and right implementation of tools in the daily work. One of the most important drivers is team-driven decision-making when it comes to tools. Only the people who have to use and work with the different tools should test them and decide whether to establish them or not. One might think that this opens Pandora’s box because teams would raise interest in new tools every day. This is wrong for two reasons:

  1. In the end it comes down to collective team decisions when something new should be tried
  2. The longer a department works with a specific tool, the bigger its dependency becomes with respect to your working culture

If the team still wants to change a tool that badly, then this tool really seems to be a pain, and it is probably hurting your team’s satisfaction, productivity and creativity more than it is actually helping. Basic steps towards a more communicative and independent working culture should be to talk about the current processes and analyse with your team what the pros and cons are. Consequently you should introduce improvements step by step and adjust your overall setup in an iterative and agile way. Feedback is key, and counter-arguments do not necessarily mean that the selected tool or process is not working; maybe it is just not used properly and needs adjustments. Think about the key characteristics that you want to endorse in your team. Is it creativity? Introduce fewer words and more pictures in your meetings. Is it transparency? Introduce checklists in Trello for each team member in a Scrum standup manner, and so on and so forth.

I will now share my lines via GitHub and Slack with the team, move my Trello task forward in our sprint board and tell my team members tomorrow in our hangout standup that I finished my article and that I would like to get some feedback. I am wondering in which country it will be read first.

Provision your devops team

spk • devops, provisioning, and ansible

At TD, we are using Ansible to provision most of our servers, no matter whether they run on AWS, DigitalOcean or Geib IT.

Several months ago we started migrating from managed hosting to in-house DevOps using Ansible. This switch was initially made for a single project (at the time we were developing / maintaining 5-8 projects running on 10-16 different servers). It turned out to be the right decision for a couple of reasons, first and foremost for the ease and speed we gained in maintaining our infrastructure, and thus the time we won when we had to react to changes in application requirements. Another very important point was that we often had to set up similar infrastructures for the projects we worked on. Today we are maintaining an infrastructure with more than 30 servers, and Ansible really helps us take the pain out of maintaining them.

I will try to sum up a few of the best practices our DevOps team discovered during the past couple of months.

Reproduce the environments using Vagrant

Vagrant helps us to mimic real world environments like staging and production by setting up several VMs locally.

In a team which uses different development environments (Mac OS, Debian, Ubuntu, …), it helps a lot to standardize the way VMs are accessed. A couple of Vagrant plugins, for example vagrant-hostsupdater, helped us to achieve that.
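
For illustration, a stripped-down multi-machine Vagrantfile could look roughly like this (box name, machine names and IPs are placeholders, not our actual setup):

Vagrant.configure("2") do |config|
  # Placeholder base box; pick whatever matches the target environment.
  config.vm.box = "debian/jessie64"

  { "app" => "192.168.33.10", "db" => "192.168.33.11" }.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = "#{name}.vagrant.dev"
      node.vm.network :private_network, ip: ip
    end
  end
end

With vagrant-hostsupdater installed, the hostnames defined above become resolvable from the host machine, which is what the deployment testing below relies on.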

Create a Vagrant stage for deployment testing

We mostly develop Ruby / Rails applications and use Capistrano for deployment. Testing the deployment against a real-world infrastructure is as simple as adding a Capistrano stage to your deployment, with the hostname defined by the vagrant-hostsupdater plugin.
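
Such a stage could look roughly like this (stage name, hostname and key path are assumptions based on a typical Vagrant setup, not our exact configuration):

# config/deploy/vagrant.rb -- hypothetical Capistrano stage pointing at the local VM
server "app.vagrant.dev", user: "vagrant", roles: %w[app web db]

set :branch, ENV.fetch("BRANCH", "master")
set :ssh_options, keys: [".vagrant/machines/app/virtualbox/private_key"],
                  auth_methods: %w[publickey]

Deploying against the local VM is then just a matter of running cap vagrant deploy.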

This has proven to be really useful for us, for example when new versions of puma / eye were released, when we added a new service to our stack, or when we had to reproduce and fix deployment errors.

Encrypt your credentials

We chose git-crypt over ansible-vault because it is really easy to install, transparent for the user and much better integrated into our daily work.

One small drawback would be the .gitattributes file, which can get very long.

Create unencrypted fake credentials

This is useful if a developer needs to make a fix but doesn’t have the encryption key. With Ansible we decided to use a common variable, testing; it looks something like this:

- name: Load slack vars
  include_vars: "vars/slack.yml"
  when: testing is not defined

- name: Load testing slack vars
  include_vars: "vars/testing/slack.yml"
  when: testing is defined

Make your playbooks idempotent

This can be hard, but it’s definitely worth it: an idempotent playbook can be run against the same host over and over again without changing the result, which makes it a lot easier to find problems with the playbooks.

Tag your provisioning

Tagging each provisioning run, either in git or in a monitoring tool (e.g. Datadog), will help you to quickly get information about the current state of a server.

Integrate to CI

Currently we run syntax checks and lint the playbooks with ansible-lint.


Establishing tech culture in a non-tech company, part 1: Coding challenges

saschaknobloch • culture, management, knowledge-transfer, and teamwork

In a marketing consultancy devoted to project work, with a development team in the role of supporting and enabling consultants, establishing and keeping up a tech culture is a difficult task. There is always the (justified) struggle between producing features as quickly and cost-efficiently as possible on the one hand, and the need to contain technical debt and achieve reasonable quality on crucial components that are not visible to clients on the other. The first is a rather short-term goal for immediate client satisfaction, the second a more long-term one for achieving competitive advantage. Both have to be balanced carefully, with the pendulum swinging back once it has been pushed too far in one direction. Everything that does not fit onto a roadmap, e.g. engaging software developers in knowledge exchange, reducing fragmentation or being creative and trying new technologies, needs to be dealt with separately.

In this environment, the development team at trommsdorff+drüner (td) used to have a weekly tech meeting which was supposed to bring all developers from different projects together for knowledge exchange and some creative time.

The agenda was provided by the developers, and the meeting ran with little management and organization. Sometimes it was OK, but overall this turned out to be a bad choice for several reasons.

Why it failed for td we can only guess: the reason probably lies somewhere between the company philosophy, personal preferences and a missing common goal when writing software.

Anyway, we discussed during a retrospective meeting how to change this and agreed, as a first step, on doing coding challenges as a more playful way of achieving the mentioned aims and spending some creative time. Meetings about specific topics like architecture or organizational matters were from then on only held on demand (which turned out to be sufficient).

Setting and preparation

We selected one person who would prepare the first couple of meetings; once the flow had been established, every team member was supposed to take over. The base setting looked like this:

The preparation was mostly about searching for tasks that fit this setting. Also, it should be a little inconvenient: instead of running normal coding dojos, we wanted to get the developers out of their comfort zone, at least from time to time. Everybody has had the experience of picking well-known problem domains and using similar approaches to solve problems. We presumed we would gain something by breaking with these habits.

Sample challenges

In the following section, we will present two of the challenges. One was not very well received and led to grumpiness and no satisfactory results. The other, on the contrary, was happily solved and discussed in less time than anticipated. You will also find some of our other approaches at the end of this section.

1. Number converter

Write a number converter which can transform Roman into Arabic numbers and vice versa (range 1..3999). This is a team challenge, so everybody works on one single program, and it is written one line per developer, taking turns.

This was a nightmare. Everybody involved was at senior level, and solving the task alone should have taken no more than 20 minutes. But this one-line-per-developer restriction crashed everything, even though the problem itself was very easy to understand. We could not agree on things like class names, data structures or loop types, and instead of moving quickly from step to step, one line of code took almost a minute on average. One of the first challenges, intended to train our skills by limiting one specific aspect while still trying to compete, showed us the limits.
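
For reference, a possible stand-alone Ruby solution (not the line-by-line team version we attempted) might look like this:

# Mapping from Arabic values to Roman letters, largest first,
# including the subtractive pairs (900 => "CM", 4 => "IV", ...).
ROMAN = {
  1000 => "M", 900 => "CM", 500 => "D", 400 => "CD",
  100 => "C", 90 => "XC", 50 => "L", 40 => "XL",
  10 => "X", 9 => "IX", 5 => "V", 4 => "IV", 1 => "I"
}.freeze

def to_roman(number)
  raise ArgumentError, "1..3999 only" unless (1..3999).cover?(number)
  ROMAN.each_with_object("") do |(value, letters), result|
    result << letters * (number / value)
    number %= value
  end
end

def to_arabic(roman)
  values = roman.chars.map { |letter| ROMAN.key(letter) }
  # Subtract a value when a bigger one follows it (e.g. the C in CM).
  values.each_with_index.sum do |value, i|
    value < (values[i + 1] || 0) ? -value : value
  end
end

to_roman(1987)      # => "MCMLXXXVII"
to_arabic("MMXVI")  # => 2016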

2. Pile of paper

An awesome example! There is a little bit of reading and planning required before we could jump into coding. But in the end everybody could solve the problem, and we had three completely different approaches, all having advantages and shortcomings. There was enough time for discussion and even for testing performance with gigantic boards and millions of virtual post-its.
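
A minimal sketch of the naive grid-painting approach could look like this, assuming an input of a board size line followed by one "color x y width height" line per sheet of paper (the exact input format is an assumption here, not the original task description):

# Paint every sheet onto a grid, then count the visible cells per color.
def visible_areas(lines)
  width, height = lines.shift.split.map(&:to_i)
  board = Array.new(height) { Array.new(width, 0) }   # color 0 = blank board

  lines.each do |line|
    color, x, y, w, h = line.split.map(&:to_i)
    y.upto(y + h - 1) { |row| x.upto(x + w - 1) { |col| board[row][col] = color } }
  end

  board.flatten.tally.sort.to_h                        # { color => visible cells }
end

p visible_areas(["20 10", "1 5 5 10 3", "2 0 0 7 7"])
# => {0=>125, 1=>26, 2=>49}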

3. Other examples

For reference and inspiration, we want to briefly address some of the other challenges we had.

3.1 Choose other problem fields!

There is more to our jobs than just programming. Especially source code management and collaboration tools take an important place in our daily work. So why not challenge / strengthen the skills needed for using those tools?

At td we heavily use github. The git-game is a terminal game designed to test your knowledge of git commands. We approached the game as a team using a single computer.

3.2 Choose different programming languages!

When doing the coding challenges, we aim at creating room for creativity. A good source of inspiration is to choose a programming language you are unfamiliar with. Most of the time we use Ruby to solve tasks, and some of us have been doing so for many years now, so one can sense that whenever we are given a specific problem, we think about it in Ruby.

In one of our challenges, solving the task itself wasn’t mandatory; the main focus was on using and experiencing a language unknown to the participant. Most of us couldn’t finish the task on time. More importantly, though, some caught fire digging into a new technology and gathering knowledge about other concepts. Eventually this will pay off some day, because we might be able to pull those experiences into decisions, like choosing a technology other than the ones we are already using, which could be a much better fit for solving a specific problem.

3.3 Set different goals!

Most of the time the main goal of a coding challenge is to solve a specific programming problem. Dimensions like performance, complexity etc. are often not taken into account, which is okay since the solution to the problem itself is nontrivial. Another goal, rather than simply generating the desired output for a given input, could be to measure those dimensions. When the focus shifts from quickly flushing code from your brain into the computer to programming a more thought-through solution, the problem itself should be a little easier.

Example: find the first 1,000,000 prime numbers. We had both solutions that used a naive, slower approach and solutions that used more sophisticated, more performant algorithms.
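
A small Ruby sketch of the two directions could look like this (the count is scaled down to keep the example quick; this is not one of the original submissions):

require "prime"
require "benchmark"

# Naive approach: test each candidate by trial division up to its square root.
def naive_primes(count)
  primes = []
  candidate = 2
  while primes.size < count
    primes << candidate if (2..Math.sqrt(candidate)).none? { |d| (candidate % d).zero? }
    candidate += 1
  end
  primes
end

# For comparison, Ruby's stdlib Prime enumerator (sieve-based) does the same job.
Benchmark.bm(7) do |x|
  x.report("naive:")  { naive_primes(10_000) }
  x.report("stdlib:") { Prime.first(10_000) }
end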

Conclusion and outlook

It turned out that coding katas or dojos are a simple and fun way for us to get variety into our daily work. The setting is small and straightforward, with very little preparation time. Discussing different solutions to the same problem helps widen mindsets towards approaching tasks. Most of the time, developers left the meeting with a smile on their faces. As mentioned above, this wasn’t always the case; from time to time, challenges were not perceived positively by all participants. Failures are always part of the deal, no matter what one is doing. Embrace them and build your experience upon them; time will show what works and what doesn’t.

In the future, we will apply challenges also to other fields of our work, for example developing technical concepts and architectures or working in cross functional teams to build something bigger.

This is the first attempt in a series to achieve a more tech-centered cosmos in a non-tech company. The next articles will cover other approaches we want to take on that journey.

Improving test performance for Ruby and Mongoid

saschaknobloch • ruby, rails, mongoid, mongodb, and testing

tl;dr

Recently we’ve implemented a small gem called MongoidCleaner. It’s a faster alternative to DatabaseCleaner for Ruby projects using MongoDB along with Mongoid. Besides the truncate strategy, it also provides a more performant drop strategy.

$ gem install mongoid_cleaner

When using RSpec, adjust your spec_helper.rb:

RSpec.configure do |config|
  config.before(:suite) do
    MongoidCleaner.strategy = :drop
  end

  config.around(:each) do |example|
    MongoidCleaner.cleaning do
      example.run
    end
  end
end

Introduction

Everyone agrees that code should be covered by tests. With the rising popularity of and awareness for TDD in the past years, there has been quite a big debate about whether tests should hit the database or not. Actually, the majority of the tests written for our Ruby and MongoDB powered backend application - whose main task is to expose APIs for storing and extracting data - touch the database. Why? Because that’s what this particular piece of software is built for: interacting with the underlying database. Reading, writing, aggregating, calculating, validating, et cetera.

Don’t worry, this article won’t take sides in the whole “TDD / should my tests hit the database or not” discussion. Instead I want to show how a one-line change led to entire test suites running 2-4 times faster.

Ensuring clean state

So, if you are going down the path of actually persisting objects to the database in your tests, you might want to ensure a clean state along the way. In Ruby-Wonderland you can certainly find a library which handles that for you. The most popular one is DatabaseCleaner, and we’ve been using it happily in most of our Rails projects.

At TD we heavily rely on peer reviews via pull requests on GitHub, and we use Travis CI to build our GitHub projects. For each pull request a Travis build is triggered. Growing code bases and a rising number of projects, hence more tests and eventually bigger build matrices, caused the Travis queue to clog up and made builds wait until others were finished. At this point we started to investigate ways to speed up our tests.

We realised that cleaning the database after each test consumed most of the time, which led us to investigate the subject more closely, in particular the part responsible for clearing collections:

# some foo before
collections.each { |c| session[c].find.remove_all }
# some bar after

remove_all is a method implemented with moped which ultimately results in calling MongoDB’s remove() command. It deletes documents one by one, and MongoDB must update every index associated with the collection in addition to the data itself. Depending on the size of the collection, this can become a very expensive operation. Apparently it is also expensive on small collections - like in common test scenarios - when running the command a couple of hundred or even thousand times.

Other ways to ensure clean state

In order to find a more performant replacement for the remove() command, I consulted the MongoDB docs. And behold, they suggest:

To remove all documents from a collection, it may be more efficient to use the drop() method to drop the entire collection, including the indexes, and then recreate the collection and rebuild the indexes.

Well, I don’t care so much about the indexes in tests, and we don’t have any logic depending on them. As for the recreation of the collection: MongoDB creates a collection implicitly when it’s first referenced in a command. Neat.

Okay, so the docs’ prescription sounded promising and I thought it was worth going for drop() instead.
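
Conceptually, the one-line change boils down to dropping whole collections instead of removing their documents one by one. A sketch, not the exact MongoidCleaner internals:

# some foo before
collections.each { |c| session[c].drop }
# some bar after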

Why your company should adopt React.js really soon

mindreframer • reactjs

Personal background

First, my personal story of how a developer with little frontend experience ended up successfully leading four full, real-life frontend projects for real customers within one year.

In 2014, when I joined TD, the applications here were mostly plain Rails applications with some JavaScript to configure jQuery plugins plus some glue code. It was OK, but nothing special, and the development experience worked fine for the usual cases while restricting heavy client-side logic by design.

Angular.js scars

My background was Rails/Ruby backend / DevOps with basic JS skills. I had worked with Angular.js before at a previous company, building a heavily configurable UI, and got burned by seemingly simple requirements.

Custom directives in Angular.js break down and grow unimaginably confusing as soon as you start composing directives within one another. It literally felt like pushing square pegs into round holes, and I carried this trauma with me until May 2014, trying to understand what went wrong. The team toughened up and followed the XML principle: “XML is like violence. If it doesn’t solve your problem, you’re not using enough of it.” So team members really dived deep into Angular.js, we bought advanced books on it and wrote crazy code just to work around the issues with the wrong abstractions.

But I had learned the lesson: simple and easy are very different beasts. Angular.js made simple things easy, but left you dangling on hard things. The bidirectional bindings (the advertised advantage) became the root of the problem. They made the UI unpredictable and caused performance problems when used in big complex components. Also there were so many new concepts to learn and to combine, and one final issue sealed it for me:

directives can not be cleanly composed.

Not by mortal engineers who didn’t study at Angular.js University and didn’t graduate with a shiny PhD in Angular.js. Just take a look at this: The Hitchhiker’s Guide to the Directive.