Making Java code more readable with Type Abstractions

Let’s consider a simple example project, appointments, where each appointment has a date, doctor, patient, and comments. These records are read from input, displayed in list or tabular form and saved. The code involves multiple layers — reading and writing I/O, formatting data, and writing formatted views.

Here’s the actual Java representation of an appointment:

public record Appointment(LocalDate date, String doctor, String patient, String comments) {}

Early in the design, many of the functions that manipulate these appointments had verbose, deeply nested type signatures like BiFunction<Collection<String>, Supplier<Stream<Collection<String>>>, String>. While correct, such signatures obscure intent. That’s where type abstractions come in — small, domain-specific type aliases that make the structure of the code clearer and more readable.

The Problem: Overly Complex Function Signatures

Consider this example, which takes a list of headers and a content supplier, producing a string representation of a list view (the table view is omitted for brevity):

public static BiFunction<Collection<String>, Supplier<Stream<Collection<String>>>, String>
    listFormat = (headers, content) ->
        header(headers).append(data(content)).toString();

This works, but the type signature is long and opaque. It doesn’t immediately tell you what the function is about — only that it’s a BiFunction of lists and streams that returns a string.

The Solution: Creating Semantic Type Aliases

Java doesn’t have native type aliases, but we can emulate them through interfaces that extend existing types. For instance, we can define a View type that captures the concept of a function that takes headers and content, and returns a formatted string:

public interface Types {
    interface View extends BiFunction<
            Collection<String>,
            Supplier<Stream<Collection<String>>>, String> {}
}

With this, our previous function becomes much cleaner:

public static View listFormat = (headers, content) ->
  header(headers).append(data(content)).toString();

The functionality hasn’t changed — but the intent is now explicit. We’ve moved from a generic, mechanical type to a domain-level concept: a View.
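To see the alias in action, here is a minimal, self-contained sketch. The header(...) and data(...) helpers below are hypothetical stand-ins (the article doesn’t show their bodies); the point is that a lambda can be assigned directly to View, since it inherits BiFunction’s single abstract method, apply:

```java
import java.util.Collection;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Supplier;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ViewDemo {

    // The alias: a functional interface extending the verbose BiFunction type.
    interface View extends BiFunction<
            Collection<String>,
            Supplier<Stream<Collection<String>>>, String> {}

    // Hypothetical stand-in for the article's header(...) helper.
    static StringBuilder header(Collection<String> headers) {
        return new StringBuilder(String.join(" | ", headers)).append('\n');
    }

    // Hypothetical stand-in for the article's data(...) helper.
    static StringBuilder data(Supplier<Stream<Collection<String>>> content) {
        return new StringBuilder(content.get()
                .map(row -> String.join(" | ", row))
                .collect(Collectors.joining("\n")));
    }

    // A lambda is assignable to View because View inherits apply from BiFunction.
    static final View listFormat = (headers, content) ->
            header(headers).append(data(content)).toString();

    public static void main(String[] args) {
        String out = listFormat.apply(
                List.of("date", "doctor"),
                () -> Stream.<Collection<String>>of(List.of("2024-01-01", "Dr. Who")));
        System.out.println(out);
    }
}
```

Note the explicit witness in Stream.<Collection<String>>of(...): generics are invariant, so a Stream<List<String>> would not satisfy the Supplier<Stream<Collection<String>>> parameter on its own.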

Extending the Abstractions

Next, the display function — responsible for rendering appointments — takes a View (the formatter) and returns a ViewWriter<IO> (the executor that writes the formatted output to an IO stream). Originally, its signature was difficult to read:

public static Function<
    BiFunction<Collection<String>, Supplier<Stream<Collection<String>>>, String>,
    BiFunction<Collection<String>, Supplier<Stream<Collection<String>>>, Consumer<IO>>>
        display = ...

Using type abstractions, this becomes far more expressive:

public static Function<View, ViewWriter<IO>> display =
view ->
  (headers, content) -> io ->
    io.print(content.get().count() == 0 ? "No appointments found\n"
      : view.apply(headers, content));

Here’s how the types break down:

  • View — the formatter: takes headers and content and produces the string representation of the view.
  • ViewWriter<W> — the executor: takes headers and content and returns a Consumer<W> that, when run, writes the formatted string to a target of type W (like IO).

The ViewWriter alias is defined as:

interface ViewWriter<W> extends BiFunction<
        Collection<String>,
        Supplier<Stream<Collection<String>>>,
        Consumer<W>> {}

This separation keeps formatting logic distinct from side-effect logic: View handles what the output looks like, and ViewWriter handles where it goes.
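To make the formatter/executor split concrete, here is a hedged, self-contained sketch. The IO interface below is an assumed stand-in for the article’s IO type (just a single print method), and the render helper is demo scaffolding, not part of the original code:

```java
import java.util.Collection;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.stream.Stream;

public class DisplayDemo {

    interface View extends BiFunction<
            Collection<String>,
            Supplier<Stream<Collection<String>>>, String> {}

    interface ViewWriter<W> extends BiFunction<
            Collection<String>,
            Supplier<Stream<Collection<String>>>,
            Consumer<W>> {}

    // Assumed stand-in for the article's IO type: anything that can print.
    interface IO { void print(String s); }

    // Because content is a Supplier, content.get() yields a fresh stream each
    // time, so counting it does not exhaust the stream the View will consume.
    static final Function<View, ViewWriter<IO>> display =
            view -> (headers, content) -> io ->
                    io.print(content.get().count() == 0
                            ? "No appointments found\n"
                            : view.apply(headers, content));

    // Demo helper: runs the pipeline against a capturing IO and returns the output.
    static String render(View view, List<String> headers,
                         Supplier<Stream<Collection<String>>> rows) {
        StringBuilder captured = new StringBuilder();
        display.apply(view).apply(headers, rows).accept(captured::append);
        return captured.toString();
    }

    public static void main(String[] args) {
        View oneLinePerRow = (headers, rows) ->
                headers + "\nrows: " + rows.get().count();
        System.out.println(render(oneLinePerRow, List.of("date"),
                () -> Stream.<Collection<String>>of(List.of("2024-01-01"))));
        System.out.println(render(oneLinePerRow, List.of("date"), Stream::empty));
    }
}
```

Swapping captured::append for a console-backed IO changes where the output goes without touching the View, which is exactly the separation the aliases encode.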

Abstracting Read and Write Operations

Finally, the function responsible for reading a new appointment from input and writing it is defined as:

public static ReadWriter<Appointment, TypedIO> addNew =
    writer ->
        reader ->
            writer.accept(new Appointment(
                reader.readDate("Enter date: ", "invalid date"),
                reader.readString("Enter doctor: ", "").orElse(""),
                reader.readString("Enter patient: ", "").orElse(""),
                reader.readString("Enter comments (if any): ", "").orElse("")));

Its type alias is equally simple and expressive:

interface ReadWriter<R, W> extends Function<Consumer<R>, Consumer<W>> {}

Rather than dealing with nested Function<Consumer<X>, Consumer<Y>> constructs, we now have a ReadWriter — a function that connects two side effects: reading and writing.
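Here is a minimal sketch of wiring a ReadWriter end to end. The real TypedIO presumably reads from a console; the canned implementation and the capture helper below are illustrative assumptions only:

```java
import java.time.LocalDate;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;
import java.util.function.Function;

public class ReadWriterDemo {

    interface ReadWriter<R, W> extends Function<Consumer<R>, Consumer<W>> {}

    record Appointment(LocalDate date, String doctor, String patient, String comments) {}

    // Assumed stand-in for the article's TypedIO: canned answers, no console.
    interface TypedIO {
        LocalDate readDate(String prompt, String error);
        Optional<String> readString(String prompt, String error);
    }

    static final ReadWriter<Appointment, TypedIO> addNew =
            writer -> reader ->
                    writer.accept(new Appointment(
                            reader.readDate("Enter date: ", "invalid date"),
                            reader.readString("Enter doctor: ", "").orElse(""),
                            reader.readString("Enter patient: ", "").orElse(""),
                            reader.readString("Enter comments (if any): ", "").orElse("")));

    // Demo helper: wires addNew to a capturing writer, returning what was written.
    static Appointment capture(TypedIO io) {
        AtomicReference<Appointment> saved = new AtomicReference<>();
        addNew.apply(saved::set).accept(io);
        return saved.get();
    }

    public static void main(String[] args) {
        TypedIO canned = new TypedIO() {
            public LocalDate readDate(String p, String e) { return LocalDate.of(2024, 1, 1); }
            public Optional<String> readString(String p, String e) {
                return Optional.of(p.contains("doctor") ? "Dr. Who"
                        : p.contains("patient") ? "Jane" : "none");
            }
        };
        System.out.println(capture(canned));
    }
}
```

Because addNew only sees a Consumer<Appointment> and a TypedIO, swapping the console for canned answers (or a file, or a test fixture) requires no change to the function itself.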

The Result

These abstractions don’t change the runtime behavior of the code. They change its shape.
Now, a developer reading Function<View, ViewWriter<IO>> can immediately tell that the function takes a view and returns a view writer — no decoding of generic types required.

The result is clearer, safer, and more expressive code. Type abstractions let us represent not just data, but also intent, in the type system. They make it easier to reason about functions, compose behaviours, and test side effects in isolation — all without adding boilerplate or runtime overhead.

(Adapted from Chapter 3, “Type Abstractions,” in Dr. Software, available at bitgloss.ro/dr-software.pdf.)

Artificial Intelligence: A misleading name for powerful tools

This article is a follow-up to my 2019 article on the same topic, updated in light of today’s AI hype.

As I mentioned then, when I first experimented with neural networks years ago, I was struck by how little “intelligence” there actually was in them. They did exactly what I coded them to do — no more, no less. Today’s systems may look more impressive, but the principle hasn’t changed: these are tools, not minds.

The Problem With the Word “Intelligence”

Humans don’t even have a precise definition of their own intelligence or consciousness. If we can’t define those terms for ourselves, it makes no sense to claim that we’ve created them in machines.

What we can say is that current AI systems don’t exhibit the qualities most people intuitively link with intelligence: self-awareness, understanding, intent, or meaning. They generate outputs by compressing patterns in data, not by reasoning or knowing.

To call that “intelligence” is to confuse statistical mimicry with cognition.

What AI Actually Does

Modern AI systems, especially large language models and generative tools, are best understood as:

Pattern machines. They excel at finding correlations in vast amounts of data.

Automation engines. They can handle repetitive, data-heavy tasks quickly and consistently.

Amplifiers. They extend human capability, but only within boundaries set by training data and design.

This is powerful, but it is not thought.

The 2025 Reality Check

Scale isn’t sentience. More data and bigger models don’t bring us closer to human-like understanding.

Usefulness ≠ understanding. A tool can be highly practical without being intelligent.

The real risks are human. Bias, misuse, privacy abuse—these are problems in how people deploy the systems, not evidence of AI “deciding” anything.

Why the Distinction Matters

If we keep pretending AI is a kind of mind, we risk treating its outputs as if they were grounded in meaning or truth. They aren’t. They’re grounded in probabilities.

AI is not intelligent because we don’t even know what that word would mean in this context. What we do know is that these systems are fundamentally different from human thought: they calculate, predict, and generate—but they do not understand.

Conclusion

The danger isn’t that AI will “wake up.” The danger is that humans will forget what it actually is: computation dressed up in human-like outputs. Powerful, yes. Useful, yes. But never a mind.

Automation

A term thrown around randomly in lots of software development shops to describe different things. Why? Automation is a simple term that used to mean a process that can run without (or with minimal) human intervention. Now it means lots of things and, in most cases, it’s a lie. Let me explain what I mean, through a software development skit:

Alice: Bob, where do we stand with the QA automation for this release?

Bob: We’ll probably have to modify 40% of the regression suite, ’cause lots of tests are failing.

Alice: Do you think we’ll make it in time?

Bob: We could bring in 5 guys from John’s team to help.

Alice: Sounds like a good idea. Let’s do that!

No one ever…

I’m sure you’ve never seen this in practice, ever… Right? “Good idea” what!??? Of course, Alice is acting purely based on her project management targets, but she’s completely missing out on what automation should do for them. That is: reduce the intervention of humans in the process and get faster feedback from that regression suite (as opposed to people playing keyboard monkey roles).

What happened back there? Well, 40% of 10 tests is nothing, so bringing in 5 guys would be one person per test that could help, plus an extra one. The problem could be solved in a matter of minutes…? maybe hours…? Ok, good call Alice! But the elephant in the room, in this scenario, is that the regression suite is actually a joke, not real automation. I mean 10 tests? (yes, I just decided that there were 10 tests, don’t scroll up)

The problem, in the large, is that the regression suite has hundreds, if not thousands, of tests. This means feedback comes late (hours, sometimes days) after running the suite. And most of the time that feedback isn’t even clear: false positives, flakiness and all that good stuff.

Alright. QA automation. But there are other kinds, right? Oh, sure there are. Let’s take the famous “dev-ops pipelines” for example. “What does that even mean? Dev-ops pipeline, pfff…” you say? Stop pretending and just accept you already know what I’m talking about: pull source from Git onto some Jenkins machine, run some stuff from some other Jenkins machine and then some stuff on this Jenkins machine and then “promote” the build to some other machine and chop a virgin’s head and make sure all is being pushed to Kibana and kill yourself.

Isn’t that THE automation? Well yes, it is, it’s just that… more often than not it goes wrong at different steps in the process. When was the last time you’ve seen a pipeline like this run well for weeks or months at a time, without human intervention? I can already hear the audience… “months? we’re stepping in a couple of times a day”.

We can see a pattern here: “automation” that hurts rather than helps. Ok, by “hurts” I mean that it does so for the business goals: late deliveries, buggy software products, etc. It totally does NOT hurt the people who created the “automation” to begin with. To them, it’s an industry standard and if you’re not doing it, you’re so not worth living and must crawl with shame under your desk (not a standing desk, of course, ’cause you’re a loser and you’re not trendy). They will defend it to their deaths (that is, until they move to a more trendy company, with bean bag chairs and super-expensive espresso machines).

There are many other kinds of “automation” which generate similar results to the above. Bottom line is: if it constantly hurts the business goals, you’re doing it wrong. “Yes, but my special case…”. NO! Just throw it away (YES, away it should go) and rethink the whole thing. I mean really rethink it in such a way that you won’t have to touch it afterwards, not just use a different technology to do the same thing all over again. It will probably take a bit of time (weeks, months…), but the business will certainly thank you for it.

P.S. What are you doing when you give business a software product? You automate its processes. Why? So they don’t have to do it manually. That business may just as well be you.

Stop mocking your system!

Yes, mocking has a mostly negative meaning! Only this – to imitate (someone or something) closely – doesn’t feel negative.

Mocking is usually used in testing code and it looks something like this:
We have module A which uses modules B and C. When we write tests for module A, we don’t initialise it with the real B and C, but rather “mock” B and C and hand those mocks to A.

All of us have used mocks in our testing code, to substitute the real things. I know I’m definitely guilty of this. Why though? Why do we do it? Well, it’s usually because initialising those collaborators is not trivial (i.e. it can’t be done with a one-liner).
But why wouldn’t the collaborators be easy to spin up? Well, because they have their own issues with their own collaborators and so, you see, the design issues start to emerge. This is why you read things like “TDD makes your design better” (I’m not going to start this now… maybe in a future article).


But let’s backtrack a little bit and be honest with ourselves. What we’re actually doing is faking (please don’t start with the stub/fake flamewar) things that we’ve created ourselves. We lie to ourselves saying that we’re only testing one thing at a time, so having two or more real things breaks best practices. Which best practices? Stack Overflow’s most-voted answers? Check this out: you set up your test victim with mocks and boom, now you have two of the same collaborators living in your codebase. And guess what… you have to keep the mocks in sync with the real things, in case you thought otherwise. Don’t worry, you’ll forget to do that once in a while (I’m being gentle here…) and you know what the end result of that is? Yes, yes, the all too familiar “green tests, broken product” syndrome. You know… a dev and a QA walk into a bar. QA says: “it doesn’t work”. Dev says: “it must be working, all the tests pass”. All the dev tests, that is…
And this is exactly where the heart of the problem lies! Creating mocks is just creating an alternative universe for your system, that will eventually lose all connections with reality.
What you almost always want, when mocking, is really just different input data for your module, data normally provided by a long stream of collaborators. Just build your program in such a way that you can send it this data, regardless of the runtime (unit test framework, test env or production env). Yes, it is possible and highly desirable and don’t be in denial right now.
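As a hedged sketch of that idea in Java (all names here are hypothetical, not from any real project): instead of mocking a repository, let the module depend on a supplier of the data itself, so the unit test, the test environment and production all exercise the same code path:

```java
import java.util.List;
import java.util.function.Supplier;

public class NoMocksDemo {

    record Order(String id, double total) {}

    // Instead of injecting a mocked repository, the module depends only on a
    // supplier of the data it needs. Hypothetical example for illustration.
    static double revenue(Supplier<List<Order>> orders) {
        return orders.get().stream().mapToDouble(Order::total).sum();
    }

    public static void main(String[] args) {
        // Production wiring might pass () -> repository.findAll();
        // a test passes literal data through the very same code path, no mock library.
        System.out.println(revenue(() -> List.of(new Order("a", 10.0), new Order("b", 2.5))));
    }
}
```

There is nothing to keep in sync: the test feeds real data through the real function, so a green test actually says something about production behaviour.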

But dude, I don’t want to use a real database (or AWS endpoint or rocket launcher) in my tests. Debatable, but fair enough. Simulating 3rd party systems is acceptable when not doing so would lead to bad consequences in a production environment. Key concepts here are “3rd party” and “production”. If using a 3rd party production system won’t hurt anyone, just do it. Being as close to reality as possible is the best thing to do. Anything else is just a web of lies that we have to maintain.

Mocking your own system is just mocking your own reality. Stop lying to yourself!

Cargo cult programming

…or how to create huge useless programs, that is. In all fairness to the folks doing this, it’s the best they can do, given the knowledge they hold. And this is where I want to make my argument, but first…

Cargo cult programming, says Wikipedia, “is a style of computer programming characterised by the ritual inclusion of code or program structures that serve no real purpose”.
Let’s expand on that a little bit. How can there be code that serves no purpose? What is the purpose? Good questions!

The purpose of the code is to respond to every need of the product. Pretty simple, right? Right! It’s the every need of the product thingy that’s sort of vague and, frankly, it’s not easy to determine EVERY need of the product. There are lots of variables in this domain, some of which are: user preferences, product usage patterns, user base growth, product runtime infrastructure changes, product developers change rate etc. And I mentioned nothing about actual code yet! That’s right, because before having actual code, we need to determine a model for all those variables mentioned before. “Architecture” I hear you say? Whatever… I don’t care what you call it as long as there is some good thinking done first, to address that model.

Please, oh please don’t get hung up on words like: architecture, model etc. This is exactly where cargo cult programming (CCP henceforth – it would have been funny to have another C in there, before the P) stems from.

I’ll give you a method to identify whether or not CCP is employed in your project/product:
Talk to a senior programmer who’s working on the product. Ask them to explain in detail, a small part of the product. Yes! In detail! Tell them to show you the classes (because OOP is almost certainly what you’ll find) and explain how and why they are organised like they are. Take your time, be patient and do your best to follow the explanation. And now the Aha! moment: if the programmer is not eloquent and you don’t understand the explanation or it doesn’t make sense, chances are you’ve got yourself a little CCP going on.
Everything is explainable in layman’s terms… if you understand what you explain, that is. Oh, and beware of the “best practice” expression. It usually means: “others on the web are doing it like this and it means this is the way to do it”. So usually “best practice” = CCP (of course it shouldn’t, but that’s what it usually hides behind).

Another way to spot CCP would be to ask programmers what literature they got the ideas from, at which point, more often than not, they would quote websites, articles and blogs (that’s right! You better not be learning programming from this blog!). What you need to hear is books and good authors (e.g. Kent Beck, Robert C. Martin, Martin Fowler, Michael Feathers and many more).

So, CCP is a BAD thing. Repeat after me: CCP is a BAAAD thing. It makes programs less maintainable, more buggy, more expensive. That’s it!

All right. Enough chit-chat! I’ll let you get back to your Wikipedia binge-clicking. Cheers!

Trade-off

Another term that is being overused and misused across our industry… Ok, I want to clear its name a little bit and explain how we actually do trade-offs every day.

Consider this scenario: You need to take over a big legacy system. Really poorly written, no tests, coupling everywhere. You are required to add a couple of minor features and fix a few old bugs. You, you’re a professional programmer, with a proven track record. “They” are product owners which would like those features and bug fixes in production ASAP. Users are demanding justice!

You are faced with a dilemma: How can I, THE programmer, deliver this in an impossible time frame? It’s crazy! It’ll never work… I need to isolate the parts where I’ll have to add those features, I need to write tests to make sure I’m not breaking stuff, I need to refactor the code, to understand it better now and later… all these things, the humanity!!! And what the hell is a m_hWnd pointer?!

In the face of this apocalypse, you might crumble and cry, or you might hit “them” hard until they leave you alone and slowly move you to a “green field” project nobody uses, or… you may consider the following aspects to this problem:

1. The application is in production for years and it produces $$$.

2. Users are pushing.

3. The code is shit.

4. The new features and bug fixes are not really that big.

Now, should you hold your own and tell everyone about point 3, crying how this should inevitably be rewritten and that will take months, years, omg, the humanity!!?? (really author… again with this?) Or instead, should you realise that there is no way in hell you can rewrite this in a timely fashion and just slap some more shit on top, to get it over with and crash like a diva ninja-developer on a chaise longue asking for refreshments?

Well… (get ready for it…) it depends! This is it, thank you folks, good bye!

Wouldn’t you really hate my guts if I ended it there (like some professionals often do)? Hahaha. No my friend. I’m here to help. Here’s what’s gonna happen: you’re going to realise that indeed, you don’t have the time to make it all nice and shiny. But! You won’t slap shit on top of it. You’ll make it so that the new stuff you write is nice and tidy, with tests and all that, but linking it to the legacy system will probably be a little shitty, yes. There’s your first trade-off. The bugs? Well, the bugs get sprayed, but the code defects, you’ll track down with the debugger (ooh, watch out TDD purists), understand what’s going on and apply a small local fix. Second trade-off.

Now, will the code be better after this? Mostly no. But the new parts you wrote will be good and overall, the system will satisfy the angry user mob.

What about your previous articles about clean code and refactoring and all that good stuff (a bit of narcissism, I agree)?

Well, they are part of what I described above too (remember? the “new stuff” you added to the legacy crap). Funny thing is, we expect other people to make these kinds of trade-offs every day. Imagine the scenario above, but with a little alteration: you take your car to the mechanic for a couple of small tune-ups and some scratch fixes. Of course you want your car back ASAP and with those things done. What do you think the mechanic does in his little magic castle? Why, trade-offs, of course. You get your car back, with the stuff you asked for, you use it, there you go…

So back to the term. Trade-off. It’s actually a placeholder for judgement. The more you think about trade-offs, the more judgement you apply to the problem at hand. So there it is, I guess that’s what I was trying to say.

Quality in software development

What does this truly mean? Somebody actually told me, during a business talk, that quality was just a buzzword. It felt wrong, but in a way it felt real too. I now understand where he was coming from and I can explain it. The only way he experienced quality in software development was through the word itself: quality. The context around it was usually messy (code, defects, unhappy clients etc.). No wonder he started hating the word and sarcastically referring to it as a buzzword.

So back to the question: what does quality in software development really mean? There is more than one point of view that we should explore, so here we go:

The big picture

Ultimately, everyone wants to get what they ask for, without any hassle and within their budget. The holy grail is getting the best thing, for free, right away. Don’t try to fight this. It’s true. Think of the best version of anything you want and tell me you wouldn’t like it materialising in front of you, right now. So from this point of view, quantification of quality is done on the scale of “this is not what I wanted and it’s too expensive and it takes too long to get” to “this is exactly what I wanted and it’s free and I got it”.

Of course, there are some variations, one of which is heavily encountered in the industry: “this is not quite what I wanted, but it’s cheap, yet it took too long to get”. These are the murky waters many clients jump into when having to do trade-offs. In my experience, these are usually budget trade-offs that end in an unfortunate state, where over-budget gets spent in the same bucket, but for patch work.

This is the scenario I like best:

Client: This is my problem. Can you solve it?

Provider: Yes.

Client: What will it cost me, in time and money?

Provider: I estimate around 6 months and $500,000.

Client: Great! Let’s do it!

(every couple of weeks)

Provider: Is this what you had in mind?

Client: Almost. Here, I’d like a little more blue.

Provider: Done!

(around 6 months later)

Client: It looks like we’re done. Great job! Thanks!

Provider: Anytime.

There are lots of details behind this scenario, but this is the big picture. When I read it, it gives me a peaceful feeling of “yes, this is how business should be”.

Processes

These are blueprints of behaviour that help when dealing with certain problem patterns. They also help the entities that participate have some (illusion of) predictability of the future. It’s a sort of guarantee that the future will happen in a certain way. This helps people feel in control and reduces the level of stress. BUT! Since we derive a level of comfort from having the illusion of a certain future, we tend to equate the process with the future itself. This is a mistake that leads to process rigidity and reluctance to step outside its bounds.

I like to look at processes as checkpoints and guidelines along the road. The main focus should always be the product. Engaging my expert skills right now, while actively building it, IS the best guarantee I can have of the future I want. The future is built from a constant stream of present 🙂

I always look at the reason for the existence of the process, so it gives me a better understanding of why it was set up in the first place. This way I can work along the lines of that reason, rather than blindly following a process I cannot understand (and sometimes come to hate).

One thing I noticed is that the fuzzier the goal, the more processes there are. The more processes there are, the more people tend to make a goal out of them. And so the wheel spins…

Craft

This is, surprisingly, not the most looked at factor in the software development industry. In my experience, if a client sees processes and pays an amount that suits her, THE CODE (which is ultimately the product) doesn’t matter as much, as long as it… exists.

WHAT??!! This is outrageous! But wait… is this something the client should even be aware of? Why would it be so? If I buy a pair of shoes, I’m not interested in the way they are built, what I’m really interested in is how they look, how they feel and how long they last.

The craft IS really important, inside the guild. Because we’re talking about code, things should be simple, really: it should work as required (without maintenance, preferably) and it should be easy to modify (if needed).

These are two seemingly straightforward characteristics, but achieving them takes not only time and experience, but continuous learning too. A craftsman should be able to easily understand the request (even help with it) and pick the most appropriate tools for the job. A craftsman should be able to read code to business people so that they hear a narrative. A craftsman should be able to write such code.

I want my client to understand where their product stands at any time and what it can do. I do NOT want my client to be forced to listen to what kind of refactoring and how many unit tests I wrote yesterday. Or even worse, she shouldn’t be forced to decide if I need to write them tomorrow.

Also, I don’t want to hear that “we are thinkers, don’t force us to estimate”. Well, if you’re thinkers, then think about how long it takes you to do something you should already know how to do. Nobody said estimates are deadlines (did you?!!), but clients need to have a rough idea about the resources they need to allocate. Your GPS doesn’t say “you know what… we’ll get there when we get there! The important thing is the magic I constantly use to display these awesome maps.”

Trust

I don’t want to be cheesy and say that trust is earned, because of course it is. The difficult thing is to trust someone I don’t know. Looking at references and traces of past work helps, yes, but I also place a huge weight on the quality of the first discussion. It always served me well in assessing my future collaboration with anyone. Everyone is an intuitive psychologist by nature (not mentioning special health conditions).

The thing is, we almost always know (trained people are an exception, hence the “almost”) when someone is hiding important details in a conversation and we can also “feel” when what they say is what they think.

Obviously, the higher the level of trust, the higher the expected product quality. I’ll stop here and save the psychology magic for another time 🙂

Conclusion

These are my most valuable points of view when looking at quality in software development. I look for all of the above in business partners too and, I’m happy to say, for us at bitGloss, quality is not just a buzzword.

Artificial intelligence

It’s been almost 20 years since I wrote my first neural network, during my university studies, in C#. Even back then, it was still unclear to me why this was called intelligence. I mean, after all, it was doing exactly what I programmed it to do and it felt like cheating to call “learning” what it was doing.


In the years since, I’ve read quite a few books on natural sciences (Darwin, Dawkins, Pinker…) and I discovered some wonderful things about the world we live in and how we “operate”. One extremely important thing I’ve learned is that while we know a lot about our brain and its functions, we still have a long way to go before we can say we know everything about it.


Natural intelligence has yet to define natural intelligence.


I still stand by my idea that software is just a tool that we use. It is not, in any way, an emerging new species that will overthrow humanity.
Is it smarter than us? No way! “Ok”, you might say, “but AlphaZero…”. AlphaZero what!? It can beat me at chess in no time? I bet it can’t if I spill a glass of water on top of it, can it? This is exactly the way we should evaluate things when we bring AI into the realm of inter-species competition. “Ok”, you might say again, “but what if all those specialized systems are put together into one big system that knows how to do every task? Isn’t that like a brain?”. No! Just read about cases where specialized brain modules were damaged, but somehow the functions were taken over by other modules. This is truly wonderful. Our brains adapt in such ways the aforementioned machine brain can’t.


A couple of words about the fear of AI destroying us: it’s most definitely possible for machines to destroy humanity (and possibly many other species as well), but that’s because WE control it. WE pull the trigger. WE have nukes, we don’t even need AI to do that. AI is just a means to an end in this scenario.

I believe that once we do fully solve the complex riddle of how the brain works, we might be able to recreate one from scratch. When that happens, it will probably be indistinguishable from a “natural” brain.

I will say this though: AI does have bad PR, especially with a public that is not trained in the subject at all (and most people don’t have a computer science degree). Of course people fear what they don’t understand AND what is presented to them as a threat. Our brain always decides it’s better to be afraid and run if there is a sound in the tall grass of the savanna. The cost of being eaten by a predator hugely outweighs the cost of being wrong and tired from a run.


By the way, if you haven’t already done so, I strongly recommend you read Asimov’s “Robots” series. It’s wonderful! Now that’s a world I’d really like to live in.

The risks of code review

Code (or peer) review is supposed to be a process, during development, that helps produce better software. It sounds simple enough, except it’s not.

Coding is not an exact science. Not even close! It requires a lot of subjective things of the person producing it. Things like: empathy, ability to communicate, imagination, past experiences, capacity for understanding different layers of a problem etc. These things are also the ones being employed during code review. They constitute the differences between us and let’s face it, as much as we would like to think that our differences create a better world, in reality we are hardwired mostly for war and these differences are triggers.

Of course, we do have mechanisms to regulate fighting impulses, but most of them are activated as responses to direct stimuli: voice tone, hand movement, body language in general. During code review, all these things are missing, so guess which way the “conversation” will go. Without a real conversation with the other person, all the important cues and hints are missed, making for an extremely poor conversation. This is one of the reasons internet discussions are almost always going from “Here’s my nice kitten” to “I hate your race!” in just a few replies.

This is one major reason why pair programming makes more sense.

The subjective things matter a lot for naming (functions, variables, constants, classes…), choices of code composition, choices between imperative and declarative styles, and choices of frameworks. Unfortunately, this is the space where most code reviewing takes place. There is a high risk of overlooking a real problem (e.g. we read way too much data from the DB into memory) while getting lost in debates over the name of a local variable (e.g. “k” vs. “counter”).
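The kind of review described above can be sketched in code. This is a hypothetical example (the class, method, and data are invented for illustration): the reviewer debates the name “k”, while the real problem is that the method pulls the entire table into memory just to count matches.

```java
import java.util.ArrayList;
import java.util.List;

public class ReportService {

    // Stand-in for a DB query; imagine it returning millions of rows.
    static List<String> findAllRows() {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) rows.add("row-" + i);
        return rows;
    }

    static int countMatching(String needle) {
        int k = 0; // the review thread: "k" vs. "counter"
        // the overlooked issue: every row is loaded into memory,
        // when the count could have been computed by the database
        for (String row : findAllRows()) {
            if (row.contains(needle)) k++;
        }
        return k;
    }
}
```

Renaming `k` to `counter` changes nothing about the method’s behavior; pushing the count down into the query would. The second observation is the one a review should surface, and it is the one most easily drowned out.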

We then tend to create so-called “code conventions” within teams (which, by the way, will always piss off a subset of the team members), and these will no doubt focus on those small things. And so will the developers from then on, trying to stick to the conventions and losing the overview (they can’t see the forest for the trees). These “code conventions” (I’ve seen wiki pages with dozens of conventions for a single team) are usually not a positive, common understanding of “how we should do things in this team”, but rather a list compiled by the people who “won” the fight. This is clearly subjective, as one encounters many such lists, each different from the others, when switching teams.

It’s fun to see how the author’s review responses are also extremely selfish. More often than not, they try to justify mistakes that they would have readily accepted and corrected had they been in direct contact with the reviewer. In the best case, the discussion gets taken “offline” and resolved through direct communication. In the worst case, the comment thread spans for miles. Now multiply this by the reviewer’s lack of context and by the number of reviewers. So delayed delivery is another, very real risk. What I’ve seen happen a lot is a third party (usually some form of management) stepping in and forcing a compromise that will, most of the time, send local variable “k” to production, with the overlooked memory problem along with it.

I don’t want to make a case against code review here, but in my experience the people engaged in it need to be really good managers of other aspects of their lives too, in order to leverage its full potential. Otherwise it’s like giving the car keys to a bunch of 12-year-olds. Also, when I say pair programming, I don’t mean focusing specifically on the “driver/navigator” paradigm, but simply two people sitting and working together on the same code.

NoSkills

This is new terminology for a concept that has existed for a long time in our industry. People are slowly rediscovering old paradigms. NoSkills actually refers to an old working technique that maximizes the efficiency of planning, processes and meta work by ensuring no actual work gets done. Here’s what one should do to get highly skilled in NoSkills:

  • learn industry terminology like: scrum, lean, kanban, agile, CI/CD
  • read the first links that pop up on Google about the aforementioned terms
  • learn that waterfall is bad
  • learn that documentation is bad and people interaction is good
  • complain about people not interacting
  • apply 2 week sprints, so that “we can track velocity”
  • never finish the work defined in a sprint
  • have constant meetings
  • religiously impose all process ceremonies
  • NEVER write tests if there is no time
  • switch from scrum to kanban, because “scrum doesn’t fit our work flow” (= we never finish the work in any sprint)
  • use GitFlow, with independent feature branches, because our 5-developer team needs faster progress
  • use CI/CD (= install a Jenkins server)
  • have refactoring sprints (“oops, but we’re kanban now…”)

This is by no means an exhaustive list, because there are so many things to be studied and so many wonderful certifications to be purchased in the NoSkills world, that it would take more than a sprint to list them here.

Folks, work is more important than meta work (and don’t let anyone fool you). If software doesn’t get done, it’s because software doesn’t get done, not because the process around it is obsolete and needs to be changed. Eating the same food with different tools won’t make it better. Something we need to re-learn is that complexity-first is a bad approach. It makes us focus on the wrong things.

One great skill to acquire is recognizing the smell of NoSkills. Don’t fight it, just walk away.