Is Your Website Going Down Right When You Need It Most?

There are few things more damaging to a business than an application that becomes unresponsive exactly when traffic peaks. Not during quiet hours, not in the middle of the night — but during a product launch, a marketing campaign, or the busiest trading period of the year. The timing feels cruel, but it’s not a coincidence. Peak load is precisely when hidden architectural problems surface.

This is the story of one such system, and what was actually causing it to fail.

The architecture that looked fine on paper

The infrastructure in question was built on Azure and, by most measures, looked solid. Redis was in place as a caching layer to reduce database pressure. The database itself was properly indexed. Servers had comfortable headroom on both CPU and memory — no resource exhaustion, no obvious bottlenecks. On a quiet afternoon, everything worked beautifully. Response times sat around 200ms, well within acceptable range.

Then peak traffic would arrive. Response times would climb from 200ms to 2 seconds, then 5, then 10. Eventually the application would stop responding altogether. And then, as traffic subsided, it would recover on its own — as if nothing had happened.

This pattern is particularly disorienting for engineering teams. The system heals itself, so there’s no crash to investigate, no error log with a clear smoking gun. Just a recurring window of failure that’s hard to reproduce and even harder to explain.

Why the obvious suspects weren’t guilty

The natural instinct when an application slows under load is to look at the most visible resources: CPU, memory, database query times. All of them were fine. Redis, the caching layer specifically designed to handle this kind of load, was responding in microseconds. The database wasn’t under unusual pressure. The servers weren’t breaking a sweat.

This is where many investigations stall. If the database is fine, Redis is fine, and the servers have headroom — what’s left?

The answer was in a place most teams don’t think to look: thread pool metrics and connection pool utilization.

The actual problem: threads waiting for nothing

Modern web applications handle concurrent requests using thread pools — a fixed set of worker threads that process incoming requests. When a request comes in, it gets assigned to an available thread. If all threads are busy, the request waits in a queue.

The application was running with default thread pool settings. Those defaults are reasonable for low-to-moderate traffic, but they set a relatively low ceiling on how many threads are available at any given moment. Under normal load, there were always enough threads to go around. Under peak load, every thread was occupied — not doing heavy work, but waiting. Waiting to make a call to Redis. Waiting for a Redis response that would arrive in microseconds.

Here’s the paradox: Redis was fast. The problem wasn’t Redis performance. The problem was that the threads making Redis calls were blocking while they waited, even for those microseconds, and there weren’t enough of them to keep up with the incoming request volume. Requests piled up in the queue. Response times climbed. Eventually the queue filled and the application became unresponsive.
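
To make the failure mode concrete, here is a minimal, self-contained Java sketch (the article doesn't name the application's stack, so the pool size and the simulated cache call are illustrative assumptions, not the actual system):

import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ThreadPoolSaturationDemo {
    public static void main(String[] args) throws InterruptedException {
        // A small, fixed pool stands in for the framework's default worker threads.
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(8);
        AtomicLong completed = new AtomicLong();

        // Simulate a traffic spike: far more requests than worker threads.
        for (int i = 0; i < 10_000; i++) {
            pool.submit(() -> {
                try {
                    // Each "request" blocks its thread on a fast call. The call is
                    // quick, but the thread is unavailable while it waits.
                    Thread.sleep(5); // stand-in for a ~5ms round trip to the cache
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                completed.incrementAndGet();
            });
        }

        // Queue depth, not CPU, is what grows: the work is trivial,
        // but only 8 requests can be in flight at any moment.
        System.out.printf("queued: %d, active: %d%n",
                pool.getQueue().size(), pool.getActiveCount());

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.println("completed: " + completed.get());
    }
}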

The infrastructure was fine. The bottleneck was a configuration default that nobody had revisited since the system was first deployed.

What the fix looked like

The solution involved three targeted changes, none of which required re-architecting the system.

Thread pool reconfiguration. We analyzed the expected concurrent load and pre-allocated a sufficient number of worker threads to handle peak traffic without queuing. This meant the application could process many more simultaneous requests without threads blocking each other.
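
The article doesn't name the runtime, but in Java the change looks something like the sketch below; the thread count and queue size are illustrative stand-ins for values derived from the load analysis:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WorkerPoolConfig {
    // Sized from measured peak concurrency rather than left at framework defaults.
    public static ThreadPoolExecutor createWorkerPool() {
        ThreadPoolExecutor workers = new ThreadPoolExecutor(
                200, 200,                  // core and max threads, pre-allocated for peak
                60, TimeUnit.SECONDS,      // keep-alive for idle threads
                new ArrayBlockingQueue<>(1_000),           // bounded queue: fail fast, don't stall
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure when the queue fills
        workers.prestartAllCoreThreads();  // pay thread-creation cost up front, not mid-spike
        return workers;
    }
}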

Proper connection pooling for Redis. Related to the thread problem was how connections to Redis were being managed. Without a proper connection pool, the application was creating and tearing down Redis connections more frequently than necessary, adding latency and overhead to every cache interaction. A well-configured connection pool meant connections were reused efficiently, and Redis calls became as fast as they should have been all along.
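
As an illustration, here is what a well-configured pool looks like with the Jedis client for Java (the client library, pool sizes, and endpoint are assumptions; the same idea applies to any Redis client):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisPoolExample {
    public static JedisPool createPool() {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(200);      // match the worker count so no thread waits for a connection
        config.setMaxIdle(50);        // keep warm connections around between spikes
        config.setMinIdle(10);
        config.setTestOnBorrow(true); // hand out only healthy connections
        // Host and port are placeholders for the real Redis endpoint.
        return new JedisPool(config, "redis.example.internal", 6379);
    }

    public static String cachedLookup(JedisPool pool, String key) {
        // Borrow from the pool and return it automatically; no connect/teardown per call.
        try (Jedis jedis = pool.getResource()) {
            return jedis.get(key);
        }
    }
}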

Monitoring for thread pool utilization. Just as important as fixing the immediate problem, we added visibility into thread pool metrics going forward. CPU and memory graphs are standard in most monitoring setups. Thread pool saturation almost never is — which is exactly why this problem had gone undetected for so long. If thread pool utilization starts climbing toward its ceiling, the team now knows before users feel it.
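
A minimal sketch of that kind of visibility, assuming a Java ThreadPoolExecutor and a print statement standing in for a real metrics backend (the 80% alert threshold is illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolMonitor {
    public static void watch(ThreadPoolExecutor pool) {
        ScheduledExecutorService sampler = Executors.newSingleThreadScheduledExecutor();
        sampler.scheduleAtFixedRate(() -> {
            int active = pool.getActiveCount();
            int max = pool.getMaximumPoolSize();
            int queued = pool.getQueue().size();
            double utilization = (double) active / max;
            System.out.printf("threads: %d/%d (%.0f%%), queued: %d%n",
                    active, max, utilization * 100, queued);
            if (utilization > 0.8) {
                // In a real setup this would raise an alert, not just print.
                System.err.println("WARNING: thread pool approaching its ceiling");
            }
        }, 0, 10, TimeUnit.SECONDS);
    }
}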

The results

Response times stabilized at under 300ms even during peak traffic periods. The infrastructure was able to handle five times the previous concurrent load without degradation. The underlying hardware, the database, and Redis itself didn’t change. Only the configuration did.

What this means for your team

If your application behaves well under normal conditions but degrades or fails under peak load, the problem is almost certainly not the thing you’re measuring most. CPU and memory are easy to monitor, so teams watch them closely. Thread pools, connection pools, and queue depths are harder to instrument, so they go unmonitored — and that’s precisely where these failure modes hide.

Before reaching for more servers or a bigger cache layer, it’s worth asking: do we actually know what’s happening inside our application at the thread level during peak load? In most cases, the answer is no. And in many cases, that’s where the answer is.

Infrastructure problems are rarely about infrastructure. They’re about configuration, visibility, and knowing where to look.


At Bitgloss, we help engineering teams find the real cause of performance failures — not just the obvious suspects. If your application is struggling under load, get in touch.


Is Your Database Slow? Probably Not.

Nine times out of ten, when an engineering team starts complaining about “slow database performance,” the database itself is perfectly fine. The real culprit is almost always hiding somewhere else — in the way queries are written, in how the application accesses data, or in decisions that made sense at the time but were never revisited as the system grew.

This distinction matters more than it might seem. If you assume the database is the problem, you start looking at hardware upgrades, migration to a “faster” database engine, or expensive infrastructure changes. All of that costs time and money — and probably none of it will fix the actual issue.

The symptom everyone misreads

A slow application feels like a slow database. A page that takes five seconds to load, a report that times out, an API endpoint that keeps failing under load — these all point fingers at the database layer. But the database is usually just doing exactly what it was asked to do. The problem is what it was asked to do.

We’ve seen queries that took literal minutes to complete, not because the server was underpowered, but because they were missing proper indexes, performing full table scans on millions of rows, or doing complex joins in the wrong order. In one real case, a reporting query was joining five tables with no indexes on any of the join columns — and then passing the entire result set back to the application to be filtered in code. The database was faithfully returning millions of rows that would eventually be narrowed down to a few hundred. It was working extremely hard to accomplish something that should have been trivial.

What’s actually going wrong

There are a handful of patterns that cause the vast majority of database performance problems:

Missing indexes on join and filter columns. When a query runs a WHERE clause or a JOIN on a column with no index, the database has no choice but to scan the entire table to find matching rows. On a table with a few thousand rows, this is fast enough that nobody notices. On a table with a few million rows, it becomes a bottleneck that grows worse every week as data accumulates.
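
As a sketch (the table, column, and connection details below are hypothetical), the fix is often a single statement:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AddMissingIndex {
    public static void main(String[] args) throws Exception {
        // Connection string and schema are placeholders.
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/shop");
             Statement stmt = conn.createStatement()) {
            // Before: WHERE customer_id = ? forces a full scan of the orders table.
            // After: the same query becomes an index lookup.
            stmt.execute("CREATE INDEX idx_orders_customer_id ON orders (customer_id)");
        }
    }
}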

Filtering in application code instead of in the database. This happens when developers pull a large dataset and then loop through it in the application to find what they need. The database does more work than necessary, more data travels across the network, and the application spends time on logic the database could handle in a fraction of a second.

N+1 query patterns. This is one of the most common and most damaging patterns, especially in applications that use ORMs. Instead of fetching related data in a single query, the application fires one query to get a list of records, then fires a separate query for each record to get its related data. Fetch a list of 200 orders? That’s 201 database queries. At scale, this silently destroys performance.
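
Here is a sketch of the fix in plain JDBC, with a hypothetical orders/order_items schema; an ORM's eager-fetching or batching facilities achieve the same effect:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AvoidNPlusOne {
    // The N+1 shape (what an ORM often generates behind the scenes):
    //   SELECT * FROM orders;                          -- 1 query
    //   SELECT * FROM order_items WHERE order_id = ?;  -- repeated once per order
    // 200 orders means 201 round trips.

    // The fix: fetch orders and their items in a single JOIN.
    static Map<Long, List<String>> itemsByOrder(Connection conn) throws SQLException {
        Map<Long, List<String>> result = new HashMap<>();
        String sql = "SELECT o.id, i.name FROM orders o "
                   + "JOIN order_items i ON i.order_id = o.id";
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                result.computeIfAbsent(rs.getLong("id"), k -> new ArrayList<>())
                      .add(rs.getString("name"));
            }
        }
        return result; // one round trip instead of N+1
    }
}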

Joins ordered by convenience rather than efficiency. The order in which tables are joined can dramatically affect how much work the database needs to do. Starting a join with a large, unfiltered table and narrowing down later means the database is carrying a heavy load through most of the operation. Reordering joins so that the most selective conditions are applied early can cut execution time significantly.

What a proper fix looks like

When we analyzed the execution plans on the slow queries described above, the path forward became clear quickly. Execution plans show you exactly how the database is processing a query — which indexes it’s using (or not using), how many rows it’s scanning at each step, and where the time is actually being spent. Most teams never look at them.
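
Looking at a plan doesn't require special tooling. In PostgreSQL, for instance, prefixing a query with EXPLAIN ANALYZE runs it and reports where the time goes; a minimal sketch from Java (the connection string and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowExecutionPlan {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/shop");
             Statement stmt = conn.createStatement();
             // EXPLAIN ANALYZE executes the query and prints the actual plan.
             ResultSet rs = stmt.executeQuery(
                     "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // a 'Seq Scan' on a large table is the red flag
            }
        }
    }
}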

The changes we made were not dramatic. We added indexes strategically on the columns used in WHERE clauses and JOIN conditions. We rewrote the join order to match the actual selectivity of each table. We moved filtering logic from application code into proper WHERE clauses. And we replaced N+1 patterns with proper JOINs or batch fetches.

The results were dramatic: queries that previously took two to three minutes now complete in under 100 milliseconds. The database hardware didn’t change. The database engine didn’t change. Only the way the queries were written changed.

What this means for your team

If your application is feeling sluggish and the database is getting the blame, the first step is not to panic and not to reach for the infrastructure budget. The first step is to look at what your application is actually asking the database to do.

Pull the slowest queries from your logs. Look at their execution plans. Check whether the columns you’re filtering and joining on have indexes. Look for N+1 patterns in your ORM queries. These are not exotic problems — they are extremely common, and they are fixable without rewriting your application or migrating to a new database.

The database is rarely the bottleneck. It’s just being asked to do things inefficiently.


At Bitgloss, we help engineering teams diagnose and fix exactly these kinds of problems — turning slow, expensive queries into fast, predictable ones without unnecessary infrastructure changes. If your application is struggling with database performance, get in touch.


Making Java code more readable with Type Abstractions

Let’s consider a simple example project, appointments, where each appointment has a date, doctor, patient, and comments. These records are read from input, displayed in list or tabular form and saved. The code involves multiple layers — reading and writing I/O, formatting data, and writing formatted views.

Here’s the actual Java representation of an appointment:

public record Appointment(LocalDate date, String doctor, String patient, String comments) {}

Early in the design, many of the functions that manipulate these appointments had verbose, deeply nested type signatures like BiFunction<List<String>, Supplier<Stream<List<String>>>, String>. While correct, such signatures obscure intent. That’s where type abstractions come in — small, domain-specific type aliases that make the structure of the code clearer and more readable.

The Problem: Overly Complex Function Signatures

Consider this example, which takes a list of headers and a content supplier, producing a string representation of a list view (the table view is omitted for brevity):

public static BiFunction<List<String>, Supplier<Stream<List<String>>>, String>
listFormat =
    (headers, content) ->
        header(headers).append(data(content)).toString();

This works, but the type signature is long and opaque. It doesn’t immediately tell you what the function is about — only that it’s a BiFunction of lists and streams that returns a string.

The Solution: Creating Semantic Type Aliases

Java doesn’t have native type aliases, but we can emulate them through interfaces that extend existing types. For instance, we can define a View type that captures the concept of a function that takes headers and content, and returns a formatted string:

public interface Types {
    interface View extends BiFunction<
            Collection<String>,
            Supplier<Stream<Collection<String>>>, String> {}
}

With this, our previous function becomes much cleaner:

public static View listFormat = (headers, content) ->
  header(headers).append(data(content)).toString();

The functionality hasn’t changed — but the intent is now explicit. We’ve moved from a generic, mechanical type to a domain-level concept: a View.

Extending the Abstractions

Next, the display function — responsible for rendering appointments — takes a View (the formatter) and returns a ViewWriter<IO> (the executor that writes the formatted output to an IO stream). Originally, its signature was difficult to read:

public static Function<
    BiFunction<List<String>, Supplier<Stream<List<String>>>, String>,
    BiFunction<List<String>, Supplier<Stream<List<String>>>, Consumer<IO>>> display = ...

Using type abstractions, this becomes far more expressive:

public static Function<View, ViewWriter<IO>> display =
view ->
  (headers, content) -> io ->
    io.print(content.get().count() == 0 ? "No appointments found\n"
      : view.apply(headers, content));

Here’s how the types break down:

  • View — the formatter: takes headers and content, produces a string representation of the view.
  • ViewWriter<W> — the executor: takes headers and content, and produces a side effect by writing the formatted string to a consumer of type W (like IO). Its definition follows the same pattern:

interface ViewWriter<W> extends BiFunction<
        Collection<String>,
        Supplier<Stream<Collection<String>>>,
        Consumer<W>> {}

This separation keeps formatting logic distinct from side-effect logic: View handles what the output looks like, and ViewWriter handles where it goes.

Abstracting Read and Write Operations

Finally, the function responsible for reading a new appointment from input and writing it is defined as:

public static ReadWriter<Appointment, TypedIO> addNew =
    writer ->
      reader ->
        writer.accept(new Appointment(
            reader.readDate("Enter date: ", "invalid date"),
            reader.readString("Enter doctor: ", "").orElse(""),
            reader.readString("Enter patient: ", "").orElse(""),
            reader.readString("Enter comments (if any): ", "").orElse("")));

Its type alias is equally simple and expressive:

interface ReadWriter<R, W> extends Function<Consumer<R>, Consumer<W>> {}

Rather than dealing with nested Function<Consumer<X>, Consumer<Y>> constructs, we now have a ReadWriter — a function that connects two side effects: reading and writing.
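
As a hypothetical usage fragment (assuming a list that stores appointments and an already-constructed typedIO, neither of which appears in the excerpt), wiring the two sides together becomes a single expression:

// 'appointments::add' is the write side; 'typedIO' supplies the interactive read side.
List<Appointment> appointments = new ArrayList<>();
addNew.apply(appointments::add).accept(typedIO);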

The Result

These abstractions don’t change the runtime behavior of the code. They change its shape.
Now, a developer reading Function<View, ViewWriter<IO>> can immediately tell that the function takes a view and returns a view writer — no decoding of generic types required.

The result is clearer, safer, and more expressive code. Type abstractions let us represent not just data, but also intent, in the type system. They make it easier to reason about functions, compose behaviours, and test side effects in isolation — all without adding boilerplate or runtime overhead.

(Adapted from Chapter 3, “Type Abstractions,” in Dr. Software, available at bitgloss.ro/dr-software.pdf.)

Artificial Intelligence: A misleading name for powerful tools

This article is a follow-up to my 2019 article on the same topic, updated in light of today’s AI hype.

As I mentioned then, when I first experimented with neural networks years ago, I was struck by how little “intelligence” there actually was in them. They did exactly what I coded them to do—no more, no less. Today’s systems may look more impressive, but the principle hasn’t changed: these are tools, not minds.

The Problem With the Word “Intelligence”

Humans don’t even have a precise definition of their own intelligence or consciousness. If we can’t define those terms for ourselves, it makes no sense to claim that we’ve created them in machines.

What we can say is that current AI systems don’t exhibit the qualities most people intuitively link with intelligence: self-awareness, understanding, intent, or meaning. They generate outputs by compressing patterns in data, not by reasoning or knowing.

To call that “intelligence” is to confuse statistical mimicry with cognition.

What AI Actually Does

Modern AI systems, especially large language models and generative tools, are best understood as:

Pattern machines. They excel at finding correlations in vast amounts of data.

Automation engines. They can handle repetitive, data-heavy tasks quickly and consistently.

Amplifiers. They extend human capability, but only within boundaries set by training data and design.

This is powerful, but it is not thought.

The 2025 Reality Check

Scale isn’t sentience. More data and bigger models don’t bring us closer to human-like understanding.

Usefulness ≠ understanding. A tool can be highly practical without being intelligent.

The real risks are human. Bias, misuse, privacy abuse—these are problems in how people deploy the systems, not evidence of AI “deciding” anything.

Why the Distinction Matters

If we keep pretending AI is a kind of mind, we risk treating its outputs as if they were grounded in meaning or truth. They aren’t. They’re grounded in probabilities.

AI is not intelligent because we don’t even know what that word would mean in this context. What we do know is that these systems are fundamentally different from human thought: they calculate, predict, and generate—but they do not understand.

Conclusion

The danger isn’t that AI will “wake up.” The danger is that humans will forget what it actually is: computation dressed up in human-like outputs. Powerful, yes. Useful, yes. But never a mind.

Modern and simple invoicing with worklog.ro

For many freelancers and small business owners, the end of the month brings the same routine: adding up the hours worked, checking contracts, and preparing invoices for clients. It is a process that often consumes time and energy, especially when done manually or with scattered tools.

This is where worklog.ro comes in: a platform that integrates not just time tracking but invoicing as well. Instead of going through several applications and files, everything is available in one place, simple and organized.


From hours worked to issued invoice

One of the biggest advantages is the direct link between the work log and the invoicing module. The hours recorded for a project can be imported straight into an invoice, completely eliminating manual calculations and the risk of errors.

All you have to do is pick the project and add the corresponding lines. The totals are calculated automatically, and the final invoice can be downloaded immediately as a PDF.


Integration with e-Factura

Given today’s fiscal regulations, it is important for invoices to comply with the ANAF requirements. worklog.ro offers the option of generating invoices ready for the RO e-Factura system and uploading them there directly, with no extra steps.

That way, within a few minutes the document is not only issued but also registered in the official system. It is a practical solution both for small companies and for collaborations with larger partners who require these standards to be met.


Flexibility and control

Invoicing with worklog.ro is not just about issuing a document. You also have:

  • the ability to quickly copy a previous invoice,
  • deletion options (same day only, if the invoice hasn’t been uploaded to e-Factura),
  • a storno (reversal) function, for when you need to correct a situation,
  • a view of received invoices, including those coming from external systems.

In short, you get a complete overview in a single dashboard.


More time for your business, less for bureaucracy

When you issue your invoices with worklog.ro, you gain two essential things: time and peace of mind. You no longer lose whole hours on repetitive checks, you no longer risk missing details, and you know your documents comply with the law.

For technical details and concrete usage examples, see the official invoicing documentation.


Try the invoicing module in worklog.ro for yourself. You’ll discover how simple it can be to organize your work and manage your clients without the end-of-month invoicing stress.


From chaos to clarity: the story of a month’s end with worklog.ro

It’s Thursday evening, close to the end of the month. Andrei, an IT freelancer, receives an email from a client:

“Could you send us a report of the hours worked over the last four weeks?”

It seems like a simple request, but Andrei knows what comes next: a few hours lost searching through Excel files, phone notes, and old Slack messages. It’s not the first time he has wondered whether he forgot to invoice a few days.

This is what reality looks like for many freelancers and small teams: plenty of work, but scattered records. And the end of the month becomes a source of stress instead of a formality.


What does the alternative look like?

Now imagine the same situation, with one small detail changed: Andrei uses worklog.ro.

Instead of searching through files, he opens the app. There he finds all his hours already recorded, split by project, with their descriptions. He selects the period, clicks “Generează raport” (generate report) and, within a few seconds, he has a clear document ready to send to the client.

The stress disappears. The report looks professional, and Andrei wins back the hours he would otherwise have lost doing manual calculations.


What makes worklog.ro different?

The secret is simplicity. Every day, when he finishes a task, Andrei adds a few details: the date, the number of hours, a description, and the project. That’s it. The entry appears in the log immediately.

If he works on the same project several days in a row, he uses the multiple-add feature (“Adăugare multiplă”) and saves time. If he needs to pick up an older activity, he can quickly find his last 100 entries in the Past items section.

And if he works with teams that use Jira, hours can be imported directly, with no extra work. For those who prefer Excel, there is CSV import too. In practice, Worklog adapts to the way you already work.


The real benefits

For Andrei, and for anyone who works on a project basis, this means:

  • clarity – he knows exactly how much he worked and on what;
  • professionalism – clients receive clear, error-free reports;
  • time gained – hours lost on admin shrink to a few minutes;
  • financial safety – no hour worked goes uninvoiced.

The end-of-month story, rewritten

Instead of staying up late adding up his hours, Andrei ends his day in peace. He sends the report within minutes and knows it is complete and correct. The client receives it on time and appreciates how organized he is.

The end of the month becomes predictable instead of stressful.


Conclusion

If you see yourself in Andrei’s story, maybe it’s time to simplify your life.

Try worklog.ro and see what it’s like to always be in control of your time, without getting lost in paperwork and Excel files.

From company formation to invoicing clients – practical steps


1. Opening a company in Romania

If you want to start working with clients legally, you need to choose a legal form. The most common options are:

  • PFA (Persoană Fizică Autorizată, an authorized sole trader) – easy to set up and cheap to run, but your liability is personal.
  • SRL (Societate cu Răspundere Limitată, a limited liability company) – more bureaucracy, but it offers asset protection and credibility with clients.

The main steps for an SRL:

  • Choosing and reserving the company name at the ONRC.
  • Establishing the registered office (it can be your home address).
  • Drafting the articles of incorporation.
  • Depositing the minimum share capital (200 lei).
  • Registering with the Trade Register and obtaining the registration certificate (CUI).
  • Optionally registering for VAT, if needed.
  • Opening a bank account for the company – mandatory for an SRL, recommended for a PFA as well. The account will be used for transactions and for collecting invoice payments.

You can find more official details on the ONRC website – Înmatriculări societăți (company registrations).


2. Tracking the hours you work (a simple Excel example)

A classic way to track your time is to use a spreadsheet in Excel.

Example structure:

  Date       | Client   | Project    | Activity                  | Hours worked | Rate/hour | Total
  01.08.2025 | Client A | Website    | Frontend development      | 5            | 150       | 750
  02.08.2025 | Client B | Consulting | Internal process analysis | 3            | 200       | 600

You can find free timesheet templates on the Microsoft Office Templates site.


3. Manual invoicing in the e-Factura system

Since 2024, companies in Romania have had to send client invoices through RO e-Factura. The system is based on an XML file that follows the European UBL 2.1 standard.

The manual process:

  1. You create the invoice in Excel/Word.
  2. You build an XML file by hand, following the ANAF specifications.
  3. You log in to the Spațiul Privat Virtual (SPV).
  4. You upload the XML file.
  5. You wait for validation. If there are errors, you redo the invoice and resubmit it.

It’s a workable process, but it is time-consuming and prone to technical mistakes.
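
To give a sense of what step 2 involves, here is a heavily simplified sketch of the UBL 2.1 invoice skeleton, wrapped in a Java text block (illustration only: a file actually accepted by ANAF requires many more mandatory elements, such as the supplier and customer parties, invoice lines, and tax totals):

public class UblInvoiceSketch {
    // Minimal outline of a UBL 2.1 invoice; all values are placeholders.
    static final String SKELETON = """
        <Invoice xmlns="urn:oasis:names:specification:ubl:schema:xsd:Invoice-2"
                 xmlns:cbc="urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2">
          <cbc:ID>INV-001</cbc:ID>
          <cbc:IssueDate>2025-08-31</cbc:IssueDate>
          <cbc:InvoiceTypeCode>380</cbc:InvoiceTypeCode>
          <cbc:DocumentCurrencyCode>RON</cbc:DocumentCurrencyCode>
          <!-- supplier, customer, invoice lines and tax totals omitted -->
        </Invoice>
        """;
}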


4. How worklog.ro simplifies the process

worklog.ro is a platform built specifically for freelancers and small businesses in Romania that invoice their time and services.

The benefits:

  • Simple hour logging – instead of Excel, you record activities directly against clients and projects.
  • Instant monthly reports – you know exactly how many hours you worked and how much to invoice.
  • e-Factura-compliant invoices – the platform automatically generates the XML file according to the ANAF standards, ready to upload to the SPV.
  • Fewer errors – no more hand-written XML.
  • Time saved – a process that used to take hours becomes a matter of minutes.

Conclusion

Setting up a company and invoicing clients involve clear steps and official procedures. Excel and hand-built XML files may work at first, but they quickly become complicated and inefficient.

With worklog.ro, entrepreneurs and freelancers can make their lives easier: they log hours directly in the platform, generate e-Factura-compliant invoices, and leave the bureaucratic stress behind.




Automation

A term thrown around randomly in lots of software development shops to describe different things. Why? Automation is a simple term that used to mean a process that can run without (or with minimal) human intervention. Now it means lots of things and, in most cases, it’s a lie. Let me explain what I mean through a software development skit:

Alice: Bob, where do we stand with the QA automation for this release?

Bob: We’ll probably have to modify 40% of the regression suite, ’cause lots of tests are failing.

Alice: Do you think we’ll make it in time?

Bob: We could bring in 5 guys from John’s team to help.

Alice: Sounds like a good idea. Let’s do that!

No one ever…

I’m sure you’ve never seen this in practice, ever… Right? “Good idea” what!? Of course, Alice is acting purely on her project management targets, but she’s completely missing what automation should do for them. That is: reduce human intervention in the process and get faster feedback from that regression suite (as opposed to people playing keyboard-monkey roles).

What happened back there? Well, 40% of 10 tests is nothing, so bringing in 5 guys would mean one person per failing test, plus an extra one. The problem could be solved in a matter of minutes…? Maybe hours…? Ok, good call, Alice! But the elephant in the room, in this scenario, is that the regression suite is actually a joke, not real automation. I mean, 10 tests? (Yes, I just decided there were 10 tests; don’t scroll up.)

The problem, in the large, is that the regression suite has hundreds, if not thousands, of tests. This means feedback comes late (hours, sometimes days) after running the suite. And the feedback is unclear most of the time: false positives, flakiness, and all that good stuff.

Alright. QA automation. But there are other kinds, right? Oh, sure there are. Let’s take the famous “dev-ops pipelines”, for example. “What does that even mean? Dev-ops pipeline, pfff…” you say? Stop pretending and just accept that you already know what I’m talking about: pull source from Git onto some Jenkins machine, run some stuff from some other Jenkins machine, then some stuff on this Jenkins machine, then “promote” the build to some other machine, perform the obligatory ritual sacrifice, and make sure it all gets pushed to Kibana.

Isn’t that THE automation? Well yes, it is; it’s just that… more often than not, it goes wrong at different steps in the process. When was the last time you saw a pipeline like this run well for weeks or months at a time, without human intervention? I can already hear the audience… “Months? We’re stepping in a couple of times a day.”

We can see a pattern here: “automation” that hurts rather than helps. Ok, by “hurts” I mean that it hurts the business goals: late deliveries, buggy software products, etc. It totally does NOT hurt the people who created the “automation” to begin with. To them, it’s an industry standard, and if you’re not doing it, you’re so not worth living and must crawl with shame under your desk (not a standing desk, of course, ’cause you’re a loser and you’re not trendy). They will defend it to the death (that is, until they move to a trendier company, with bean bag chairs and super-expensive espresso machines).

There are many other kinds of “automation” that generate results similar to the above. Bottom line: if it constantly hurts the business goals, you’re doing it wrong. “Yes, but my special case…” NO! Just throw it away (YES, away it should go) and rethink the whole thing. I mean really rethink it, in such a way that you won’t have to touch it afterwards, not just use a different technology to do the same thing all over again. It will probably take a bit of time (weeks, months…), but the business will certainly thank you for it.

P.S. What are you doing when you give a business a software product? You automate its processes. Why? So they don’t have to run them manually. That business may just as well be you.

Stop mocking your system!

Yes, mocking has a mostly negative meaning! Only one sense of the word – to imitate (someone or something) closely – doesn’t feel negative.

Mocking is usually used in testing code, and it looks something like this:
We have module A, which uses modules B and C. When we write tests for module A, we don’t initialise it with the real B and C, but rather “mock” B and C and hand those mocks to A.
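
In Java with Mockito, the pattern described above looks roughly like this (A, B, and C are stand-ins for real modules):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class B { String fetch() { return "real data"; } }
class C { int compute() { return 42; } }

class A {
    private final B b;
    private final C c;
    A(B b, C c) { this.b = b; this.c = c; }
    String run() { return b.fetch() + ":" + c.compute(); }
}

class ATest {
    void testRun() {
        // Instead of the real B and C, A gets stand-ins that we script ourselves.
        B fakeB = mock(B.class);
        C fakeC = mock(C.class);
        when(fakeB.fetch()).thenReturn("canned data");
        when(fakeC.compute()).thenReturn(1);

        A a = new A(fakeB, fakeC);
        assert a.run().equals("canned data:1"); // exercises our script, not the real collaborators
    }
}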

All of us have used mocks in our testing code to substitute for the real things. I know I’m definitely guilty of this. Why, though? Why do we do it? Well, it’s usually because initialising those collaborators is not trivial (i.e. it can’t be done with a one-liner).
But why wouldn’t the collaborators be easy to spin up? Because they have their own issues with their own collaborators, and so, you see, the design issues start to emerge. This is why you read things like “TDD makes your design better” (I’m not going to start on this now… maybe in a future article).


But let’s backtrack a little and be honest with ourselves. What we’re actually doing is faking (please don’t start the stub/fake flamewar) things that we’ve created ourselves. We lie to ourselves, saying that we’re only testing one thing at a time, so having two or more real things would break best practices. Which best practices? Stack Overflow’s most voted answers? Check this out: you set up your test victim with mocks and, boom, now you have two versions of the same collaborators living in your codebase. And guess what… you have to keep the mocks in sync with the real things, in case you thought otherwise. Don’t worry, you’ll forget to do that once in a while (I’m being gentle here…), and you know what the end result is? Yes, yes, the all too familiar “green tests, broken product” syndrome. You know… a dev and a QA walk into a bar. QA says: “It doesn’t work.” Dev says: “It must be working, all the tests pass.” All the dev tests, that is…
And this is exactly where the heart of the problem lies! Creating mocks is just creating an alternative universe for your system, one that will eventually lose all connection with reality.
What you almost always want, when mocking, is really just different input data for your module: data normally provided by a long chain of collaborators. Just build your program in such a way that you can send it this data, regardless of the runtime (unit test framework, test environment or production environment). Yes, it is possible, it is highly desirable, and don’t go into denial right now.
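
A sketch of that idea: instead of handing the module collaborators to call, hand it the data those collaborators would have produced (the names are illustrative):

import java.util.List;
import java.util.function.Supplier;

class ReportModule {
    // The module depends on data, not on the chain of collaborators that produces it.
    static String summarize(Supplier<List<String>> entries) {
        List<String> data = entries.get();
        return data.isEmpty() ? "nothing to report" : data.size() + " entries";
    }
}

class ReportModuleTest {
    void runsAgainstPlainData() {
        // In a unit test, the "collaborators" are just a literal list.
        assert ReportModule.summarize(() -> List.of("a", "b")).equals("2 entries");
        // In production, the same supplier is wired to the real data source.
    }
}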

But dude, I don’t want to use a real database (or AWS endpoint, or rocket launcher) in my tests. Debatable, but fair enough. Simulating 3rd-party systems is acceptable when not doing so would lead to bad consequences in a production environment. The key concepts here are “3rd party” and “production”. If using a 3rd-party production system won’t hurt anyone, just use it. Being as close to reality as possible is the best thing you can do. Anything else is just a web of lies that we have to maintain.

Mocking your own system is just mocking your own reality. Stop lying to yourself!

Cargo cult programming

…or how to create huge, useless programs, that is. In all fairness to the folks doing this, it’s the best they can do given the knowledge they hold. And this is where I want to make my argument, but first…

Cargo cult programming, says Wikipedia, “is a style of computer programming characterised by the ritual inclusion of code or program structures that serve no real purpose”.
Let’s expand on that a little bit. How can there be code that serves no purpose? What is the purpose? Good questions!

The purpose of the code is to respond to every need of the product. Pretty simple, right? Right! It’s the “every need of the product” thingy that’s sort of vague and, frankly, it’s not easy to determine EVERY need of the product. There are lots of variables in this domain, some of which are: user preferences, product usage patterns, user-base growth, changes in the product’s runtime infrastructure, the rate at which the product’s developers change, etc. And I haven’t mentioned actual code yet! That’s right, because before having actual code, we need to determine a model for all those variables. “Architecture”, I hear you say? Whatever… I don’t care what you call it, as long as there is some good thinking done first to address that model.

Please, oh please, don’t get hung up on words like “architecture”, “model”, etc. This is exactly where cargo cult programming (CCP henceforth – it would have been funny to have another C in there, before the P) stems from.

I’ll give you a method to identify whether or not CCP is employed in your project/product:
Talk to a senior programmer who’s working on the product. Ask them to explain, in detail, a small part of it. Yes! In detail! Have them show you the classes (because OOP is almost certainly what you’ll find) and explain how and why they are organised the way they are. Take your time, be patient and do your best to follow the explanation. And now the Aha! moment: if the programmer is not eloquent and you don’t understand the explanation, or it doesn’t make sense, chances are you’ve got yourself a little CCP going on.
Everything is explainable in layman’s terms… if you understand what you explain, that is. Oh, and beware of the “best practice” expression. It usually means: “others on the web are doing it like this, so this must be the way to do it”. So, usually, “best practice” = CCP (of course it shouldn’t, but that’s what CCP usually hides behind).

Another way to spot CCP is to ask programmers what literature they got their ideas from, at which point, more often than not, they’ll quote websites, articles and blogs (that’s right, you’d better not be learning programming from this blog!). What you want to hear is books and good authors (e.g. Kent Beck, Robert C. Martin, Martin Fowler, Michael Feathers and many more).

So, CCP is a BAD thing. Repeat after me: CCP is a BAAAD thing. It makes programs less maintainable, more buggy, more expensive. That’s it!

All right. Enough chit-chat! I’ll let you get back to your Wikipedia binge-clicking. Cheers!