Automation

A term thrown around randomly in lots of software development shops, to describe different things. Why? “Automation” used to mean something simple: a process that can run without (or with minimal) human intervention. Now it means lots of things and, in most cases, it’s a lie. Let me explain what I mean, through a software development skit:

Alice: Bob, where do we stand with the QA automation for this release?

Bob: We’ll probably have to modify 40% of the regression suite, ’cause lots of tests are failing.

Alice: Do you think we’ll make it in time?

Bob: We could bring in 5 guys from John’s team to help.

Alice: Sounds like a good idea. Let’s do that!

No one ever…

I’m sure you’ve never seen this in practice, ever… Right? “Good idea” what!??? Of course, Alice is acting purely on her project management targets, but she’s completely missing what automation should do for them. That is: reduce human intervention in the process and get faster feedback from that regression suite (as opposed to people playing keyboard-monkey roles).

What happened back there? Well, 40% of 10 tests is nothing, so bringing in 5 guys would be one person per test that could help, plus an extra one. The problem could be solved in a matter of minutes…? maybe hours…? Ok, good call Alice! But the elephant in the room, in this scenario, is that the regression suite is actually a joke, not real automation. I mean 10 tests? (yes, I just decided that there were 10 tests, don’t scroll up)

The problem, at scale, is that the regression suite has hundreds, if not thousands, of tests. This means feedback comes late (hours, sometimes days) after running the suite. And the feedback is not clear most of the time: false positives, flakiness and all that good stuff.

Alright. QA automation. But there are other kinds, right? Oh, sure there are. Let’s take the famous “dev-ops pipelines” for example. “What does that even mean? Dev-ops pipeline, pfff…” you say? Stop pretending and just accept you already know what I’m talking about: pull source from Git onto some Jenkins machine, run some stuff from some other Jenkins machine and then some stuff on this Jenkins machine and then “promote” the build to some other machine and chop a virgin’s head and make sure all is being pushed to Kibana and slowly lose your will to live.

Isn’t that THE automation? Well yes, it is, it’s just that… more often than not it goes wrong at different steps in the process. When was the last time you saw a pipeline like this run well for weeks or months at a time, without human intervention? I can already hear the audience… “months? we’re stepping in a couple of times a day”.

We can see a pattern here: “automation” that hurts rather than helps. Ok, by “hurts” I mean that it hurts the business goals: late deliveries, buggy software products, etc. It totally does NOT hurt the people who created the “automation” to begin with. To them, it’s an industry standard and if you’re not doing it, you’re not worth living and must crawl with shame under your desk (not a standing desk, of course, ’cause you’re a loser and you’re not trendy). They will defend it to their deaths (that is, until they move to a trendier company, with bean bag chairs and super-expensive espresso machines).

There are many other kinds of “automation” which generate similar results to the above. Bottom line is: if it constantly hurts the business goals, you’re doing it wrong. “Yes, but my special case…”. NO! Just throw it away (YES, away it should go) and rethink the whole thing. I mean really rethink it in such a way that you won’t have to touch it afterwards, not just use a different technology to do the same thing all over again. It will probably take a bit of time (weeks, months…), but the business will certainly thank you for it.

P.S. What are you doing when you give business a software product? You automate its processes. Why? So they don’t have to do it manually. That business may just as well be you.

Stop mocking your system!

Yes, mocking has a mostly negative meaning! Only one dictionary sense – to imitate (someone or something) closely – doesn’t feel negative.

Mocking is usually used in testing code and it looks something like this:
We have module A which uses modules B and C. When we write tests for module A, we don’t initialise it with the real B and C, but rather “mock” B and C and hand those mocks to A.
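Sketched in code (Elixir, the language used later in this article; module names are made up for illustration), that setup typically looks like this:

```elixir
defmodule A do
  # A receives its collaborators, so tests can hand in substitutes for B and C
  def report(b \\ B, c \\ C) do
    b.fetch() |> c.format()
  end
end

# In the test, A gets fake versions of B and C instead of the real things
defmodule FakeB do
  def fetch, do: [1, 2, 3]
end

defmodule FakeC do
  def format(data), do: Enum.join(data, ",")
end

A.report(FakeB, FakeC)
# => "1,2,3"
```

This one is hand-rolled; libraries like Mox automate the same idea, but the shape is identical: in tests, A never sees the real B and C.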

All of us have used mocks in our testing code, to substitute the real things. I know I’m definitely guilty of this. Why though? Why do we do it? Well, it’s usually because initialising those collaborators is not trivial (i.e. can’t be done with a one-liner).
But why wouldn’t the collaborators be easy to spin up? Well, because they have their own issues with their own collaborators and so, you see, the design issues start to emerge. This is why you read things like “TDD makes your design better” (I’m not going to start this now… maybe in a future article).

But let’s backtrack a little bit and be honest with ourselves. What we’re actually doing is faking (please don’t start with the stub/fake flamewar) things that we’ve created ourselves. We lie to ourselves saying that we’re only testing one thing at a time, so having two or more real things breaks best practices. Which best practices? Stack Overflow’s most voted answers? Check this out: you set up your test victim with mocks and boom, now you have two of the same collaborators living in your codebase. And guess what… you have to keep the mocks in sync with the real things, in case you thought otherwise. Don’t worry, you’ll forget to do that once in a while (I’m being gentle here…) and you know what the end result of that is? Yes, yes, the all too familiar “green tests, broken product” syndrome. You know… a dev and a QA walk into a bar. QA says: “it doesn’t work”. Dev says: “it must be working, all the tests pass”. All the dev tests, that is…
And this is exactly where the heart of the problem lies! Creating mocks is just creating an alternative universe for your system, that will eventually lose all connections with reality.
What you almost always want, when mocking, is really just different input data for your module, data normally provided by a long stream of collaborators. Just build your program in such a way that you can send it this data, regardless of the runtime (unit test framework, test env or production env). Yes, it is possible and highly desirable and don’t be in denial right now.
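A minimal Elixir sketch of that idea (the module and data shapes are mine, purely for illustration): the module receives data, not collaborators, so the same function runs unchanged under a unit test, a test environment or production.

```elixir
defmodule Billing do
  # The function only sees data; it doesn't care which collaborators produced it
  def total(line_items) do
    line_items
    |> Enum.map(fn %{price: price, qty: qty} -> price * qty end)
    |> Enum.sum()
  end
end

# In a test we build the data by hand; in production the same shape of data
# arrives from the long stream of real collaborators
Billing.total([%{price: 10, qty: 2}, %{price: 5, qty: 1}])
# => 25
```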

“But dude, I don’t want to use a real database (or AWS endpoint or rocket launcher) in my tests.” Debatable, but fair enough. Simulating 3rd party systems is acceptable when not doing so would lead to bad consequences in a production environment. The key concepts here are “3rd party” and “production”. If using a 3rd party production system won’t hurt anyone, just do it. Being as close to reality as possible is the best thing to do. Anything else is just a web of lies that we have to maintain.

Mocking your own system is just mocking your own reality. Stop lying to yourself!

Cargo cult programming

…or how to create huge useless programs, that is. In all fairness to the folks doing this, it’s the best they can do, given the knowledge they hold. And this is where I want to make my argument, but first…

Cargo cult programming, says Wikipedia, “is a style of computer programming characterised by the ritual inclusion of code or program structures that serve no real purpose”.
Let’s expand on that a little bit. How can there be code that serves no purpose? What is the purpose? Good questions!

The purpose of the code is to respond to every need of the product. Pretty simple, right? Right! It’s the “every need of the product” thingy that’s sort of vague and, frankly, it’s not easy to determine EVERY need of the product. There are lots of variables in this domain, some of which are: user preferences, product usage patterns, user base growth, product runtime infrastructure changes, product developer change rate etc. And I mentioned nothing about actual code yet! That’s right, because before having actual code, we need to determine a model for all those variables mentioned before. “Architecture”, I hear you say? Whatever… I don’t care what you call it as long as there is some good thinking done first, to address that model.

Please, oh please don’t get hung up on words like: architecture, model etc. This is exactly where cargo cult programming (CCP henceforth – it would have been funny to have another C in there, before the P) stems from.

I’ll give you a method to identify whether or not CCP is employed in your project/product:
Talk to a senior programmer who’s working on the product. Ask them to explain, in detail, a small part of the product. Yes! In detail! Tell them to show you the classes (because OOP is almost certainly what you’ll find) and explain how and why they are organised the way they are. Take your time, be patient and do your best to follow the explanation. And now the Aha! moment: if the programmer is not eloquent and you don’t understand the explanation, or it doesn’t make sense, chances are you’ve got yourself a little CCP going on.
Everything is explainable in layman’s terms… if you understand what you explain, that is. Oh, and beware of the “best practice” expression. It usually means: “others on the web are doing it like this and it means this is the way to do it”. So usually “best practice” = CCP (of course it shouldn’t, but that’s what it usually hides behind).

Another way to spot CCP would be to ask programmers what literature they got the ideas from, at which point, more often than not, they would quote websites, articles and blogs (that’s right! You better not be learning programming from this blog!). What you need to hear is books and good authors (e.g. Kent Beck, Robert C. Martin, Martin Fowler, Michael Feathers and many more).

So, CCP is a BAD thing. Repeat after me: CCP is a BAAAD thing. It makes programs less maintainable, more buggy, more expensive. That’s it!

All right. Enough chit-chat! I’ll let you get back to your Wikipedia binge-clicking. Cheers!


Trade-offs

Another term that is being overused and misused across our industry… Ok, I want to clear its name a little bit and explain how we actually make trade-offs every day.

Consider this scenario: You need to take over a big legacy system. Really poorly written, no tests, coupling everywhere. You are required to add a couple of minor features and fix a few old bugs. You, you’re a professional programmer, with a proven track record. “They” are product owners which would like those features and bug fixes in production ASAP. Users are demanding justice!

You are faced with a dilemma: How can I, THE programmer, deliver this in an impossible time frame? It’s crazy! It’ll never work… I need to isolate the parts where I’ll have to add those features, I need to write tests to make sure I’m not breaking stuff, I need to refactor the code, to understand it better now and later… all these things, the humanity!!! And what the hell is a m_hWnd pointer?!

In the face of this apocalypse, you might crumble and cry, or you might hit “them” hard until they leave you alone and slowly move you to a “green field” project nobody uses, or… you may consider the following aspects to this problem:

1. The application is in production for years and it produces $$$.

2. Users are pushing.

3. The code is shit.

4. The new features and bug fixes are not really that big.

Now, should you stand your ground and tell everyone about point 3, crying that this should inevitably be rewritten and that it will take months, years, omg, the humanity!!?? (really, author… again with this?) Or instead, should you realise that there is no way in hell you can rewrite this in a timely fashion and just slap some more shit on top, to get it over with, and crash like a diva ninja-developer on a chaise longue asking for refreshments?

Well… (get ready for it…) it depends! This is it, thank you folks, good bye!

Wouldn’t you really hate my guts if I ended it there (like some professionals often do)? Hahaha. No my friend. I’m here to help. Here’s what’s gonna happen: you’re going to realise that indeed, you don’t have the time to make it all nice and shiny. But! You won’t slap shit on top of it. You’ll make it so that the new stuff you write is nice and tidy, with tests and all that, but linking it to the legacy system will probably be a little shitty, yes. There’s your first trade-off. The bugs? Well, the bugs get sprayed, but the code defects, you’ll track down with the debugger (ooh, watch out TDD purists), understand what’s going on and apply a small local fix. Second trade-off.

Now, will the code be better after this? Mostly no. But the new parts you wrote will be good and overall, the system will satisfy the angry user mob.

What about your previous articles about clean code and refactoring and all that good stuff (a bit of narcissism, I agree)?

Well, they are part of what I described above too (remember? the “new stuff” you added to the legacy crap). Funny thing is, we expect other people to make these kinds of trade-offs every day. Imagine the scenario above, but with a little alteration: you take your car to the mechanic for a couple of small tune-ups and some scratch fixes. Of course you want your car back ASAP and with those things done. What do you think the mechanic does in his little magic castle? Why, trade-offs, of course. You get your car back, with the stuff you asked for, you use it, there you go…

So back to the term. Trade-off. It’s actually a placeholder for judgement. The more you think about trade-offs, the more judgement you apply to the problem at hand. So there it is, I guess that’s what I was trying to say.

Quality in software development

What does this truly mean? Somebody actually told me, during a business talk, that quality was just a buzzword. It felt wrong, but in a way it felt real too. I now understand where he was coming from and I can explain it. The only way he experienced quality in software development, was through the word itself: quality. The context around it was usually messy (code, defects, unhappy clients etc.). No wonder he started hating the word and sarcastically referring to it as a buzzword.

So back to the question: what does quality in software development really mean? There is more than one point of view that we should explore, so here we go:

The big picture

Ultimately, everyone wants to get what they ask for, without any hassle and within their budget. The holy grail is getting the best thing, for free, right away. Don’t try to fight this. It’s true. Think of the best version of anything you want and tell me you wouldn’t like it materialising in front of you, right now. So from this point of view, quantification of quality is done on a scale from “this is not what I wanted and it’s too expensive and it takes too long to get” to “this is exactly what I wanted and it’s free and I got it”.

Of course, there are some variations, one of which is heavily encountered in the industry: “this is not quite what I wanted, but it’s cheap, yet it took too long to get”. These are the murky waters many clients jump into when having to make trade-offs. In my experience, these are usually budget trade-offs that end in an unfortunate state, where extra budget gets spent in the same bucket, but on patch work.

This is the scenario I like best:

Client: This is my problem. Can you solve it?

Provider: Yes.

Client: What will it cost me, in time and money?

Provider: I estimate around 6 months and $500,000.

Client: Great! Let’s do it!

(every couple of weeks)

Provider: Is this what you had in mind?

Client: Almost. Here, I’d like a little more blue.

Provider: Done!

(around 6 months later)

Client: It looks like we’re done. Great job! Thanks!

Provider: Anytime.

There are lots of details behind this scenario, but this is the big picture. When I read it, it gives me a peaceful feeling of “yes, this is how business should be”.


Processes

These are blueprints of behaviour that help when dealing with certain problem patterns. They also give the participating entities some (illusion of) predictability about the future. It’s a sort of guarantee that the future will happen in a certain way. This helps people feel in control and reduces the level of stress. BUT! Since we derive a level of comfort from having the illusion of a certain future, we tend to equate the process with the future itself. This is a mistake that leads to process rigidity and reluctance to step outside its bounds.

I like to look at processes as checkpoints and guidelines along the road. The main focus should always be the product. Engaging my expert skills right now, while actively building it, IS the best guarantee I can have of the future I want. The future is built from a constant stream of present 🙂

I always look at the reason for the existence of the process, so it gives me a better understanding of why it was set up in the first place. This way I can work along the lines of that reason, rather than blindly following a process I cannot understand (and sometimes come to hate).

One thing I noticed is that the fuzzier the goal, the more processes there are. The more processes there are, the more people tend to make a goal out of them. And so the wheel spins…


The code

This is, surprisingly, not the most closely examined factor in the software development industry. In my experience, if a client sees processes and pays an amount that suits her, THE CODE (which is ultimately the product) doesn’t matter as much, as long as it… exists.

WHAT??!! This is outrageous! But wait… is this something the client should even be aware of? Why would it be so? If I buy a pair of shoes, I’m not interested in the way they are built, what I’m really interested in is how they look, how they feel and how long they last.

The craft IS really important, inside the guild. Because we’re talking about code, things should be simple, really: it should work as required (without maintenance, preferably) and it should be easy to modify (if needed).

These are two seemingly straightforward characteristics, but achieving them takes not only time and experience, but continuous learning too. A craftsman should be able to easily understand the request (even help with it) and pick the most appropriate tools for the job. A craftsman should be able to read code to business people so that they hear a narrative. A craftsman should be able to write such code.

I want my client to understand where their product stands at any time and what it can do. I do NOT want my client to be forced to listen to what kind of refactoring and how many unit tests I wrote yesterday. Or even worse, she shouldn’t be forced to decide if I need to write them tomorrow.

Also, I don’t want to hear that “we are thinkers, don’t force us to estimate”. Well, if you’re thinkers, then think about how long it takes you to do something you should already know how to do. Nobody said estimates are deadlines (did you?!!), but clients need to have a rough idea about the resources they need to allocate. Your GPS doesn’t say “you know what… we’ll get there when we get there! The important thing is the magic I constantly use to display these awesome maps.”


Trust

I don’t want to be cheesy and say that trust is earned, because of course it is. The difficult thing is trusting someone I don’t know. Looking at references and traces of past work helps, yes, but I also place a huge weight on the quality of the first discussion. It has always served me well in assessing my future collaboration with anyone. Everyone is an intuitive psychologist by nature (special health conditions aside).

The thing is, we almost always know (trained people are an exception, hence the “almost”) when someone is hiding important details in a conversation and we can also “feel” when what they say is what they think.

Obviously, the higher the level of trust, the higher the expected product quality. I’ll stop here and save the psychology magic for another time 🙂


These are my most valuable points of view when looking at quality in software development. I look for all of the above in business partners and I’m happy to say that for us, at bitGloss, quality is not just a buzzword.

Artificial intelligence

It’s been almost 20 years since I wrote my first neural network, in C#, during my university studies. Even back then, it was unclear to me why this was called intelligence. I mean, after all, it was doing exactly what I programmed it to do and it felt like cheating to call what it was doing “learning”.

In the years since, I’ve read quite a few books on natural sciences (Darwin, Dawkins, Pinker…) and I discovered some wonderful things about the world we live in and how we “operate”. One extremely important thing I’ve learned is that while we know a lot about our brain and its functions, we still have a long way to go before we can say we know everything about it.

Natural intelligence has yet to define natural intelligence.

I still stand by my idea that software is just a tool that we use. It is not, in any way, an emerging new species that will overthrow humanity.
Is it smarter than us? No way! “Ok”, you might say, “but AlphaZero…”. AlphaZero what!? It can beat me at chess in no time? I bet it can’t if I spill a glass of water on top of it, can it? This is exactly how we should evaluate things when we bring AI into the realm of inter-species competition. “Ok”, you might say again, “but what if all those specialized systems are put together into one big system that knows how to do every task? Isn’t that like a brain?”. No! Just read about cases where specialized brain modules were damaged, but somehow their functions were taken over by other modules. This is truly wonderful. Our brains adapt in ways the aforementioned machine brain can’t.

A couple of words about the fear of AI destroying us: it’s most definitely possible for machines to destroy humanity (and possibly many other species as well), but that’s because WE control it. WE pull the trigger. WE have nukes, we don’t even need AI to do that. AI is just a means to an end in this scenario.

I believe that once we do fully solve the complex riddle of how the brain works, we might be able to recreate one from scratch. When that happens, it will probably be indistinguishable from a “natural” brain.

I will say this though: AI does have bad PR, especially with a public that is not trained in the subject at all (and most people don’t have a computer science degree). Of course people fear what they don’t understand AND what is presented to them as a threat. Our brain always decides it’s better to be afraid and run if there is a sound in the tall grass of the savanna. The cost of being eaten by a predator hugely outweighs the cost of being wrong and tired from a run.

By the way, if you haven’t already done so, I strongly recommend you read Asimov’s “Robots” series. It’s wonderful! Now that’s a world I’d really like to live in.

The risks of code review

Code (or peer) review is supposed to be a process, during development, that helps produce better software. It sounds simple enough, except it’s not.

Coding is not an exact science. Not even close! It requires a lot of subjective things of the person producing it. Things like: empathy, ability to communicate, imagination, past experiences, capacity for understanding different layers of a problem etc. These things are also the ones being employed during code review. They constitute the differences between us and let’s face it, as much as we would like to think that our differences create a better world, in reality we are hardwired mostly for war and these differences are triggers.

Of course, we do have mechanisms to regulate fighting impulses, but most of them are activated as responses to direct stimuli: voice tone, hand movement, body language in general. During code review, all these things are missing, so guess which way the “conversation” will go. Without a real conversation with the other person, all the important cues and hints are missed, making for an extremely poor conversation. This is one of the reasons internet discussions are almost always going from “Here’s my nice kitten” to “I hate your race!” in just a few replies.

This is one major reason why pair programming makes more sense.

The subjective things matter a lot for: naming (functions, variables, constants, classes…), choices for code composition, choices between imperative vs. declarative styles, choices of frameworks. This is the space where most code reviewing takes place, unfortunately. There is a high risk of overlooking a real problem (e.g. we read way too much data from the DB into memory) while getting lost in debates over the name of a local variable (e.g. “k” vs. “counter”).

We then tend to create the so-called “code conventions” within teams (which, btw, will always piss off a subset of the team members), that will no doubt focus on those small things. And so will the developers from then on, trying to stick to the conventions and losing the overview (can’t see the forest for the trees). These “code conventions” (I’ve seen wiki pages with dozens of conventions for a single team) are usually not a positive, common understanding of “how we should do things in this team”, but rather a list compiled by the people who “won” the fight. This can clearly be seen as subjective, as one encounters many such lists, different from one another, when switching teams.

It’s fun to see how the author’s review responses are also extremely selfish. More often than not, they try to justify mistakes that they would have clearly accepted and corrected had they been in direct contact with the reviewer. In the best case scenario, the discussion gets taken “offline” and resolved through direct communication. In the worst case scenario, the comment thread spans for miles. Now multiply this by the reviewer’s lack of context and by the number of reviewers. So delayed delivery is another, very real risk. What I’ve seen happen a lot is a third party (usually some form of management) stepping in and forcing a compromise that will, most of the time, send local variable “k” to production and the overlooked memory problem along with it.

I don’t want to make a case against code review here, but in my experience people engaged in it need to be really good managers of other aspects of their lives too, in order to leverage its full potential. Otherwise it’s like giving the car keys to a bunch of 12-year-olds. Also, when I say pair programming, I don’t mean focusing specifically on the “driver/navigator” paradigm, but simply 2 people sitting and working together on the same code.


NoSkills

This is new terminology for a concept that has existed for a long time in our industry. People are slowly rediscovering old paradigms. NoSkills actually refers to an old working technique that maximizes efficiency of planning, processes and meta-work by ensuring no actual work gets done. Here’s what one should do to get highly skilled in NoSkills:

  • learn industry terminology like: scrum, lean, kanban, agile, CI/CD
  • read the first links that pop up on Google about the aforementioned terms
  • learn that waterfall is bad
  • learn that documentation is bad and people interaction is good
  • complain about people not interacting
  • apply 2 week sprints, so that “we can track velocity”
  • never finish the work defined in a sprint
  • have constant meetings
  • religiously impose all process ceremonies
  • NEVER write tests if there is no time
  • switch from scrum to kanban, because “scrum doesn’t fit our work flow” (= we never finish the work in any sprint)
  • use GitFlow, with independent feature branches, because our 5-dev team needs faster progress
  • use CI/CD (= install a jenkins server)
  • have refactoring sprints (“oops, but we’re kanban now…”)

This is by no means an exhaustive list, because there are so many things to be studied and so many wonderful certifications to be purchased in the NoSkills world, that it would take more than a sprint to list them here.

Folks, work is more important than meta work (and don’t let anyone fool you). If software doesn’t get done, it’s because software doesn’t get done, not because the process around it is obsolete and needs to be changed. Eating the same food with different tools won’t make it better. Something we need to re-learn is that complexity-first is a bad approach. It makes us focus on the wrong things.

One great skill to acquire is recognizing the smell of NoSkills. Don’t fight it, just walk away.


Code clarity

Over the years, I’ve come to realise that code is more of an inter-human communication device than a human-machine one. Machines only need electric current to perform their tasks. Logic gates, binary digits, you know what I’m talking about…

We, on the other hand, have evolved sophisticated mechanisms to relate to the surrounding environment. Even if the commands are still electric-current based, we do not know how, nor do we want, to communicate with each other at that level. One of the most advanced such mechanisms is human language. This is a very complex construction, based not only on words, but also sounds, visual cues and, most importantly, abstract concepts.

It’s difficult enough to be in perfect understanding with each other, through human language (not to mention the differences between human languages and how they might map to different abstract concepts and all that…), let alone having to translate what we mean through an enormously oversimplified mechanism: that of a computer programming language.

So, thinking this way (that writing a computer program, in a computer programming language, is something we do mostly for our own and our peers’ understanding), it makes sense to make that program as clear as possible. And yes, I know “clear” is a subjective concept, but that doesn’t mean we shouldn’t try to achieve clarity. What I mean is that it’s much better for program readers to be able to grasp the intent of the program as easily as possible, if they are to do something based on it (modify it, learn from it, fix it, etc.). As always, don’t forget that you are also one of those program readers, even if it’s your own program.

Compare the following snippets of code and see which one you prefer:

  def exec(list) do
    res1 = []
    res1 = for x <- list, x.age > 18 do
      x
    end
    res2 = []
    res2 = for x <- res1 do
      x.weight
    end
    Enum.sum(res2) / length(res2)
  end


  def avg_adult_weight(people) do
    adult_weights = people
      |> Enum.filter(&(&1.age > 18))
      |> Enum.map(&(&1.weight))
    Enum.sum(adult_weights) / length(adult_weights)
  end

The first snippet means nothing to us until we stumble upon x.age > 18, which gives away some hint about x probably being a person. A person over 18 years of age. This looks like some kind of filtering, ok. Next there’s some kind of transformation and in the end, some math. Sum / length looks like an average. Ok, I think I understand what they meant.

But wait! Why do they iterate twice over the list, isn’t that wasteful? Sure it is.

I left the multiple iteration pattern in there on purpose (something that is unfortunately very common in production code), to emphasise the fact that the authors were just drafting their intention. Once they tried it and saw that it “worked”, they moved on.

The second snippet should be clear to every programmer and even to non-programmers. I specifically didn’t mention the programming language, because I really shouldn’t have to, for the reader to understand it. Even if you don’t get the weird syntax with those ‘&’ signs, you should easily read over them and see what the author meant. This is the essence of code clarity.

For even more clarity, I would also add details about the function signature:

  @type person :: %{name: String.t(), age: integer, weight: integer}
  @spec avg_adult_weight([person]) :: float

I think that code produced in the style of the first snippet comes from authors thinking in an artificial way first, trying to express human concepts in computer science terms (data structures, loops…). Refactoring, then, is the process of translating those terms back into a more human-friendly form. This, to me, is a very low-level process that we could avoid by approaching coding from a human language perspective. For example, this is how I would approach the average adult weight function (ok, I would directly write the Elixir code in this case, but the process is valuable when you don’t know exactly how you would implement it):

I know I want a function to tell me the average adult weight for a bunch of people:

  def avg_adult_weight(people) do
  end

Ok. Now I take that bunch of people, keep just the adults and then do some math with their weights:

  def avg_adult_weight(people) do
    # get just the adults… some filtering
    # do some math with the weights… sum / count probably
  end

You see, by taking notes (in comments) of what I want to achieve, I also get implementation ideas (after the ‘…’). This is quite the reverse of the process above: a transition from human language to programming language. Now, actually substituting the comments with code is straightforward.
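To show what that substitution might look like, here is one possible completion (my own, essentially landing on the second snippet from earlier), with the draft comments kept in place so the transition is visible:

```elixir
defmodule Draft do
  def avg_adult_weight(people) do
    # get just the adults… some filtering
    adults = Enum.filter(people, &(&1.age > 18))
    # do some math with the weights… sum / count
    weights = Enum.map(adults, & &1.weight)
    Enum.sum(weights) / length(weights)
  end
end
```

Each comment became one or two lines of code right beneath it; nothing had to be translated back from machine-speak.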

NOTE: Do not assume this function is written without having some verification mechanisms in place (tests, REPL sessions…).

Too many (maybe most) refactoring debates take place at the low level we discussed a few paragraphs above. In my experience, this is really counterproductive. I see refactoring, as a process, starting from the step where you already have humanly readable code (like the second snippet) and you want to reshape it, to accommodate and play nicely with similar or new concepts. I’m talking about things like creating a higher-order function to abstract an algorithm or, similarly in OOP, creating a template method in a superclass.
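A toy sketch of that kind of refactoring, assuming a second sibling need (say, average adult height) has appeared — the module and function names here are mine, purely for illustration:

```elixir
defmodule Stats do
  # The abstracted algorithm: average of some numeric attribute
  # over the people matching a filter. `filter` and `extract`
  # are plain one-argument functions supplied by the caller.
  def average_by(people, filter, extract) do
    values = people |> Enum.filter(filter) |> Enum.map(extract)
    Enum.sum(values) / length(values)
  end

  # The original function becomes a one-liner on top of the
  # abstraction; avg_adult_height etc. would follow the same pattern.
  def avg_adult_weight(people),
    do: average_by(people, &(&1.age > 18), & &1.weight)
end
```

Note that this step starts from already-clear code; the higher-order function earns its keep only once there are similar concepts to share it.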

Senior developers

I guess this can be about senior anything, but I’ll stick to developers here.

There are loads of senior developers out there (and I don’t mean señor developers 🙂 this time). At least that’s what they call themselves, and most of the time it’s because the industry recognizes them as such.

I argue that, at least in most of the cases I’ve encountered, this title is meaningless. Why? It should imply a lot of things, but it usually implies only years spent in the industry, which is far from enough to drive a team, to come up with adequate designs, to talk the client’s language and so on.

I have seen far too many seniors slow businesses down, or even drive them into critical situations, because they know better. Classical example: a senior developer imposes a major code refactoring in order to “stabilize the system”. Another example: a senior developer destroys a junior’s confidence by dragging their work through the mud, trying to prove all the ways in which the junior is unworthy.

I’ve met a few real seniors. The experience of working with them can be summed up in just a few words: safety, familiarity, clarity, simplicity, confidence. This has little to do with how many technologies, frameworks or languages they master (although they do master several), and everything to do with how well they get their ideas across and how they shed light on problems.

Oh yeah, by senior developer I also mean technical leads, team leads, architects and all the titles we like to invent to make ourselves feel better. How many times have you heard an architect say something clear and simple, with real business benefits? Now, how many times have you listened to an architect and had no clue what was going on? Fun times, right?

Senior C# Developer (of course the caps make a difference…). Did you stop to think what that even means? Sure. It means a person who has been working with C# for more than X years (5? 10?). Again with the time quantification… I’ve interviewed senior Java developers with 15 years of experience (almost exclusively Java) who knew the language very well, but had no clue about JVM internals. Good luck dealing with performance issues in production…

A senior developer is someone who makes things clear in both business and technical talk, who is really fun to work with, who laughs and teaches you things. One key trait, as far as I can tell, is the diversity of what they’ve experienced: programming languages, industries, companies and life in general.