fixing a broken record

17 March 2026
5 min read

When I used to work on high-level credit card complaints, I was sent on a customer service course to learn more about dealing with difficult customers. One of the techniques covered on the course was the broken record technique.

I had of course heard people say things like "they sound like a broken record", usually in a negative way when someone is banging on about something that no one else wants to hear. It had never occurred to me that it could be a useful technique for communication.

what is the broken record technique?

The idea is that if you have a position that you are unable or unwilling to budge on, then you calmly repeat that position over and over, whilst avoiding over-explanation. This was very helpful when dealing with nightmare customers if there was a black-and-white decision to get across. From what I remember of the course, which isn't much, the suggested approach was:

  1. Acknowledge the point being made.
  2. Calmly state your position.
  3. Repeat until the position is accepted, or the customer gives up.

Point three was a valid approach in the role as I was the last line of complaints — the customer was free to contact the Financial Ombudsman Service if they disagreed.

For example:

"Yes, I understand that you're irrationally angry at the man in the offshore contact centre for no apparent reason. I am willing to offer 50p to cover the costs of being on hold, but I will not be offering anything for distress and inconvenience caused".

does it work in quality engineering?

Absolutely not. Things are rarely, if ever, as black and white in quality engineering. Nobody wants to work with someone who is so stubborn that they're unwilling to change their mind — especially when there are valid arguments against the point being made. I have continued to make use of the first two points as they served me well, but I take a more flexible approach afterwards:

  1. Acknowledge the point being made.
  2. Calmly state your position.
  3. Offer a path forward.
  4. Ask for feedback and/or suggestions.
  5. Repeat until a decision is made and action is taken.

The key change here is to make it clear that there are different options available to get out of the deadlock. It's worth noting that offering a path forward forms part of the overall technique, as a decision might not be made immediately.

when should the adapted technique be used?

I typically use the technique as a last resort, when all other attempts to reason with people have failed. Even in the adapted form, this isn't something that should be overused; otherwise you risk breaking down relationships with colleagues, which is probably not a good idea.

The trigger for me is usually when I start to feel like I'm in a war of attrition, when something I perceive to be important isn't being taken seriously, or when the same issues with quality culture or processes are being repeated over and over (like a broken record).

examples

we need more UI tests

I've worked in a few teams where there's a push to create more and more UI or end-to-end automated tests rather than increasing unit or integration test coverage. This results in an inverted test pyramid, which is a nightmare to maintain if it gets out of hand. Here's one way to approach a situation like this:

I understand what you're saying, as UI tests add a lot of value, but we're better off increasing test coverage at the unit and integration layers instead. I can facilitate a workshop around writing effective unit tests, and I'm happy to pair with people to help create meaningful integration tests. Should we try this out, or does anyone have other suggestions?
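To make the pyramid point concrete, here's a minimal sketch of what "pushing coverage down a layer" can look like. The function and values are invented for illustration; the idea is that logic like this doesn't need a browser-driven test at all:

```python
# Hypothetical example: a discount calculation covered at the unit layer
# instead of being driven end-to-end through the UI.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Fast, deterministic checks that need no browser, test data, or environment.
assert apply_discount(100.0, 20) == 80.0
assert apply_discount(19.99, 0) == 19.99
```

A UI test might still confirm the discounted price is displayed, but the edge cases (boundary percentages, rounding) are far cheaper to pin down here.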

can you test this empty story pls

There are a lot of bad quality processes that can lead teams to a point where stories have a semi-descriptive title but lack any form of description or acceptance criteria.

It's easy to fall into a pattern where you have enough tacit knowledge to figure it out, but this is generally a bad idea when you consider that acceptance criteria are a form of requirements. It's worth pushing back on a lack of acceptance criteria to cover yourself; otherwise, too many assumptions can be made.

One way to address this using the technique above:

I realise it's a priority (like everything else, amirite?), but we shouldn't be testing against stories with no acceptance criteria. I am going to start reviewing user story testability using a prompt based on this awesome blog post to get us started on the right path. Please take a look and let me know what you think.
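As a rough sketch of what a testability review could automate, something like the check below would flag empty stories before they reach testing. The field names and rules here are entirely made up, not from any particular tracker or the post mentioned above:

```python
# Hypothetical story-readiness check; field names and rules are invented.

def review_story(story: dict) -> list[str]:
    """Return a list of problems that make a story hard to test."""
    problems = []
    if not story.get("description", "").strip():
        problems.append("missing description")
    if not story.get("acceptance_criteria"):
        problems.append("missing acceptance criteria")
    return problems

# A story with only a semi-descriptive title fails both checks.
empty_story = {"title": "Fix the thing"}
print(review_story(empty_story))
# ['missing description', 'missing acceptance criteria']
```

Even a checklist this crude gives the pushback a concrete, repeatable shape rather than relying on someone remembering to ask.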

and what about you?

Have you ever planted a flag on something and not budged on it? What led you to that point and how did you end up breaking the deadlock? Let me know if you fancy a chat about it!