My Quiet Life

jira is bad


Disclaimer: I am not a project manager. I’m a devops/SRE/security type guy. I could barely explain to you the difference between “agile” and “waterfall” (in practice, anyway). These opinions come from someone who has spent twenty-some-odd years using every manner of PM tool under the sun in some capacity – whether as a manager, a developer, an engineer, or all of the above. This is not a solution or a manifesto (I think). It’s ungenerous: a complaint, an airing of grievances, and a summary of my PTJD (Post Traumatic Jira Disorder).


Jira is an issue-tracking, bug-management and project-management tool. Doesn’t sound so bad, does it? But many people hate Jira. Some people (me) loathe it with the fire of a thousand suns. I won’t get into the nitpicky reasons that a lot of people cite for disliking Jira: lack of certain features, things implemented poorly, the choice of Java, etc. These are all valid, but they are points of contention you’ll always have with such software. Debates around the merits of various PM tools have existed since the dawn of software engineering and will never go away.

Here are some links to such critiques:

Instead, I want to talk more about the Jira mindset as it pertains to project management specifically, and how it affects organizations. Jira has other functionality which is “fine” but not at the root of the problem. I will quote part of the top comment on a hackernews post about one of the above articles that I think gets into the meat of it:

To me, what sucks about JIRA (and would suck about any well-designed tool that replaces it) is not “feature x” but the entire JIRA mentality. All of it. It encourages micro-management. It encourages more and more process. It is the enemy of getting better at the DORA metrics, which requires streamlining process. tickets in JIRA are not the work itself, never was and never will be, it is a LARP of the work, but it gets taken for the central thing. This is an illusion. Fixing a bug without filing a JIRA ticket is in itself progress. Moving a JIRA card without any other change is not. Yet the second is what’s visible and therefor what’s rewarded. Any problem gets solved with “more JIRA “ which stops working when the remaining problems are caused by too much JIRA. And yet they keep trying, because it gives “control”. JIRA is a like metastatic tumour that will grow until it kills the host.

Project Management

Let’s say you’re writing some software. You’ve got a handful of developers and a sourcecode repository and everything is going swimmingly. One day, you add a few more developers and find your team getting confused. There’s no consensus on what “we” should be working on as a whole, or individually. You need Project Management! In its simplest form this can be accomplished trivially with a spreadsheet or even sticky notes on a wall in various categories, much like a basic TODO list:

TODO -> Doing -> Done

You assign things to people, move cards around to reflect reality, and everything is peachy. This is basically what Trello is in digital form, and what github issues nabbed for its PM tool as well.
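At this stage the whole “tool” is barely more than a dict of lists. A minimal sketch (the card fields and names here are just illustrative, not any real tool’s data model):

```python
# The entire "PM tool": three columns of cards.
board = {"TODO": [], "Doing": [], "Done": []}

def add(title, assignee=None):
    """New work lands in TODO."""
    board["TODO"].append({"title": title, "assignee": assignee})

def move(title, src, dst):
    """Moving a card between columns is the entire 'workflow'."""
    card = next(c for c in board[src] if c["title"] == title)
    board[src].remove(card)
    board[dst].append(card)

add("fix login bug", assignee="alice")
move("fix login bug", "TODO", "Doing")
```

That’s it – assignment, status, and visibility, with nothing left to configure. Everything that follows in this post is about what happens when that stops being enough.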


What happens next is not surprising: a simple workflow is great, but sometimes you want more info. What if I want a category to reflect “things we were doing but temporarily stopped, so they technically go back into TODO but really are more important”, i.e. a “Backburner” status? What if you could “tag” things with metadata? What if you could “group” these todo items into discrete projects somehow? Jira (and many other tools, including Trello, to some extent!) chose to support this by adding a powerful and deep ability to customize nearly everything. This affords you a lot of flexibility in creating a workflow in Jira to reflect nearly any internal process – from the simplest small agile development team to the most complicated NASA-level waterfall process you can imagine (and NASA, to be honest, might genuinely need something as customizable as Jira). Great, right? But now you find yourself and the other developers spending a non-zero amount of time futzing with your PM software workflow.

Enter the Project Manager

Almost every company has one or more Project Manager employees. Some are good, some are bad. Some are technical leads, some are not. Sometimes it’s the developers themselves, but at the end of the day, if you’re using a PM tool, someone is “managing” it. A necessary evil, perhaps, but updating, organizing and refining workflows is what we all do to some degree when using a PM tool, and it comprises some portion of your time at work. A friend of mine was fond of focusing on the difference between “working at the job” (i.e. doing your core competency that you were hired for) vs “working on the job” (i.e. time spent on non-core-competency things: checking email, meetings, etc. and of course project management). “Working on the job” is always gonna happen, but obviously you want to minimize it, and this includes time spent in and on project management related stuff. I continue to use the second-person pronoun “You” here to refer to a “project manager” because we’re all project managers one way or another and this is not an us-vs-them screed.

Jira Enables and Encourages Complexity

The fact that Jira can be infinitely customized makes it very tempting for a project manager to use it to track the “on the ground” reality of what people are working on. You tweak the workflow endlessly. You refactor it from scratch to reflect the Shiny New Workflow you imagined in the shower that morning that will make things much better. It feels good. You’ve got the solution that finally will let everyone update and observe the reality of a project’s progress. You start to get a small rush from finding a new way to tweak the workflow. The problem is that a complex/complicated process that you create is often not scrutable to others because, well, they didn’t make it and aren’t intimately familiar with it. Anyone who has spent a lot of time making complicated spreadsheets is probably familiar with this. You spend 2 days making a super-awesome spreadsheet and show it off to your coworkers and get blank stares and yawns because it’s complicated and they have shit to do.

The Dopamine Hit

Despite my aversion to Jira, over the years, I’ve tried to suck it up and use it. I’ve really tried. I remember one particular job that used Jira extensively, where I inherited boards and issue backlogs from years prior (before I was even an employee) that hadn’t been updated in years and no longer reflected entire projects still in progress (and not), much less individual issues. “I’m gonna clean this up,” I thought to myself. And I did. I spent three days burning through coffee, re-familiarizing myself with Jira itself and this company’s workflow, and updating, closing, archiving, creating issues, etc. I even created a new element of the workflow! Because of course I did! I finally finished and everything was clean and up to date. It felt good. I got a little hit of that dopamine rush from a job well done. That’s when it hit me: I had just spent three days doing … nothing. I was hired to do devops and SRE, and I instead spent this time getting our PM tool up to date with a “reality” that was gonna change as soon as I closed the tab. We have Actual Problems as far as the eye can see. “What am I doing with my life??”

what do you do here

The (Worst Case) Result

You find yourself with one (or more) Project Manager who is enthusiastic about Jira and loves updating it.

You are harangued regularly by the PM for not updating your issues.

You feel guilty and stressed about not doing it.

The PM feels annoyed and resentful because you aren’t using the tool and the awesome workflow.

You start attending regular meetings with the entire team to review and update Jira because despite your best intentions Jira no longer reflects reality.

The PM finds themselves exporting data from Jira into spreadsheets because Management is asking for a coherent/simple progress report and, God help you, you can’t really figure out how to actually do that with your super awesome complicated workflow in Jira itself. (True story)

You take a look back at where you’re spending your time and realize, with horror, you spend more time in meetings and Jira than doing your actual job.

You find yourself hating your job.


None of these problems are, themselves, unique to Jira, and I realize that. It’s possible I’m being unfair. But Jira, in my opinion, uniquely enables and encourages this dysfunction. Goodhart’s Law is best summarized as “When a measure becomes a target, it ceases to be a good measure.” Jira, by way of its infinite customization and complexity enables measurement as the target instead of what it should be: getting the damn job done and shipping.

So what better options are there? It’s a hard problem. KISS (Keep It Simple Stupid) is a good guiding rule for many things, including this. The best PM workflow I ever had at any company was as simple as Trello with some webhooks to cross-post updates in/out of slack. Simple, efficient, legible. Github Issues is similarly simple (so far) and has the benefit of tight integration with pull requests and the codebase itself. This is good.

If an organization finds itself with one FTE whose entire job can fairly be summarized as “working in Jira”, you’re in bad shape. The cancer has taken hold and will start metastasizing. Cut the tumor out before it’s too late!

focal reducers

A few years ago, I ran across an ad for a product: a “speed booster” lens adapter that promised to increase the ‘speed’ (effective light-gathering normally associated with the maximum aperture of the lens) by up to 1 full stop. A popular example is the Metabones adapter for various cameras/lenses. It sounded, at first, like a scam. “How can you change the fundamental limits of a lens to be .. better? You can’t, it makes no sense.” I blew it off and moved on. It wasn’t till I got into astrophotography that I started hearing talk of similar devices: “focal reducers”, which increased the effective field-of-view (of a lens or telescope) and also gave a similar “speed” boost. Again, I was confused – how is this possible? I couldn’t blow it off this time (turns out they are fairly important for astrophotography), so I did some learnin’ – turns out it’s not a scam! It’s a real thing, and here’s how it works:

First a quick review of a basic lens. This is a basic diagram of a converging lens from wikipedia:

simple converging lens

Where the rays of light converge, the result is, of course, a circle of light – the size of which depends on the lens design. Specifically, it depends on what sort of camera the lens is designed to work with: how large the sensor (or film plate) is and how far away from the lens it sits. This circle is not perfect, of course – light falls off gradually toward its edge as the non-optical parts of the camera and lens get in the way (this, incidentally, is what causes vignetting, at least when it’s not being applied digitally by instagram). The result is something like this, with a rectangle representing the sensor the image circle is actually projected on:

image circle

With a well-coupled lens and camera, there’s not much wasted light: the chosen rectangle of the sensor or film generally extends to the border of the light circle for the largest image possible without excessive vignetting in the corners. But! There are a lot of cameras and lenses out there – and now more than ever. In the world of digital, cameras are getting smaller and lighter – sometimes with reductions in the size of the actual sensor. You may be familiar with the term “crop sensor” – a broad term for a variety of sensor sizes that are smaller than the standard 35mm film/sensor plane (e.g. APS-C). They are generally called “crop” sensors because they are smaller rectangles and, when used with a lens designed for a full 35mm frame, they effectively “crop” a smaller portion of the image. Naturally, manufacturers also design lenses to match these smaller sensors, but people of course still want to be able to use their old lenses. So when you use a lens designed for a full-frame sensor on a camera with a smaller sensor, the result is often a light circle projection like this:

image circle with various sensors
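The “crop” is just geometry: a sensor’s crop factor is the ratio of the full-frame diagonal to its own diagonal. A quick sketch – the sensor dimensions below are ballpark figures for illustration, not exact specs for any particular camera:

```python
import math

FULL_FRAME = (36.0, 24.0)  # standard 35mm frame, in mm

def diagonal(width, height):
    """Diagonal of a sensor rectangle."""
    return math.hypot(width, height)

def crop_factor(sensor):
    """Crop factor = full-frame diagonal / sensor diagonal."""
    return diagonal(*FULL_FRAME) / diagonal(*sensor)

# Approximate sensor dimensions in mm.
sensors = {
    "APS-C (Canon)": (22.2, 14.8),
    "Micro 4/3": (17.3, 13.0),
}

for name, dims in sensors.items():
    cf = crop_factor(dims)
    print(f"{name}: crop factor ~{cf:.2f}; "
          f"a 200mm lens frames like {200 * cf:.0f}mm on full frame")
```

So a Micro 4/3 body “crops” roughly a 2× smaller field of view out of the same image circle – which is exactly the wasted-light situation the next paragraph describes.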

Note that there’s a lot of light being converged by the lens that is effectively “wasted” because it falls outside the bounds of the sensor being used – in the cases of the APS-C and 4/3 sensors, that part of the image circle lands not on the sensor but on the back/sides of the camera, never to be seen again. This is where the voodoo comes in. If, say, you’re using a Canon EF 200mm lens (designed for a 35mm sensor) on your fancy new micro 4/3 sensor camera, you obviously already need an adapter to convert the mount and flange distance (the expected distance between the lens and the camera). What if you put an optical element in that adapter as well to further converge the light? This, effectively, is what a focal reducing “speed booster” is doing:

metabones speed booster

Think back to when you were a kid playing with a magnifying glass: you’ll remember that converged light through the magnifying glass was warmer, right? The closer you move the magnifying glass to something (hopefully not an ant, you psychopath), the hotter it gets. The same thing (sortof) is at play in optics – converging the light available (formerly ‘wasted’) on the smaller sensor makes it, effectively, brighter. So it’s not changing anything fundamental about the limits of a lens, but is instead sortof ‘reclaiming’ otherwise wasted light the lens is converging for you.
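The arithmetic behind the “speed” claim is easy to sketch. A back-of-the-envelope calculation, assuming the common 0.71× reduction factor (illustrative numbers, not a spec sheet for any particular adapter):

```python
import math

def speed_booster(focal_length_mm, f_number, reduction=0.71):
    """Effect of a focal reducer: focal length and f-number both
    scale by the reduction factor (the entrance pupil is unchanged,
    and f-number = focal length / pupil diameter), so light per
    unit of sensor area goes up by 1 / reduction**2."""
    new_focal = focal_length_mm * reduction
    new_f_number = f_number * reduction
    stops_gained = 2 * math.log2(1 / reduction)
    return new_focal, new_f_number, stops_gained

# e.g. a 200mm f/2.8 full-frame lens behind a 0.71x reducer
focal, fnum, stops = speed_booster(200, 2.8)
print(f"{focal:.0f}mm f/{fnum:.1f}, ~{stops:.2f} stops faster")
# -> 142mm f/2.0, ~0.99 stops faster
```

Note that nothing about the lens itself changed: the roughly one-stop gain is just the previously wasted outer portion of the image circle being squeezed back onto the smaller sensor.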

So, there you have it – not a scam or magic, but capitalization on otherwise imperfectly adapted gear!

healthcare reading

I see that healthcare reform is in the news again! How exciting.

A few months ago I joked that there were two articles that were required reading before I discussed healthcare in the United States with anyone. I thought I’d flesh that out a little bit:

  • A good succinct no-frills timeline of how we got here. Do we have a government system? A private system? Why is the responsibility for healthcare placed on employers? Why is it called insurance when it covers not-unexpected things? If you don’t know the answer to any of these questions, read this.
  • This recent econtalk episode is a good, lengthier discussion of the above process. Christy Ford Chapin’s book is worth reading as well.
  • This summary of medical pricing and the AMA helps explain where and how costs are set in our current system. “Why is our healthcare so expensive?” is a difficult question, but this is one glaring part of the answer. (tl;dr: it’s a profiteering racket)
  • If you’ll allow some self-indulgence: this post by me, opining on obamacare (my optimism has since greatly diminished in the intervening 7 years), in particular pointing out the difference between “health care” and “health insurance”, and lamenting that in public debate no one even bothers to make this lexical distinction.
  • Buy Health, Not Healthcare - a good framing of the situation by Robin Hanson in 1994.
  • A good thorough analysis of healthcare costs and the ramifications of various single-payer proposals by Megan McArdle.
  • Econtalk’s fascinating episode with Kevin Smith on the Surgery Center of Oklahoma and how most of the industry currently bills for procedures. (You might need to sit down for this one, parts of it are maddening).

a really good play about housing

The scene: The Metro Nashville Commission to investigate forming a committee to plan to address the mysterious problem of expensive housing plaguing the growing city.

Expert: We’re gathered here today to address the problem of expensive housing that we just noticed again. Does anyone have any thoughts?

Expert #2: It’s really bad. Growth is bad.

Expert #3: Yeah, terrible, really bad – especially for poor people.

Voice from the back of the room: It’s because zoning/building regulations restrict supply, home purchase subsidies drive up demand, and property tax disincentivizes long-term ownership for low/middle-income families, driving up rent and effectively transferring wealth to the wealthy.

Expert: It’s a mystery, for sure.. it’s really bad – does anyone have any ideas?

Voice from the back of the room: uh, I just said it’s bec–

Expert #2: We need more affordable housing

Expert #3: Yes!! Affordable housing, that’s good – if people can afford housing, it’s cheaper.

Expert: Okay, but how?? strokes forehead

Voice from the back of the room: Hello? Is this thing on? encourage building and eliminate property ta–

Expert #2: This problem is really bad

Expert #3: Developers are evil.

Expert #2: YES! ugh, developers.

Expert #3: What if we made it illegal for housing to be unaffordable..

Voice from the back of the room: da fu–

Expert: hm, good point, good idea.. what if we made some regulations forcing developers to make housing affordable?

Expert #3: ooh, i like that, then the housing will be affordable, and we will punish developers

Voice from the back of the room: why not just let them build mo–

Expert #2: Perhaps a rule that developers have to sell housing slightly cheaper for the upper middle class for a minuscule amount of time before things go back to normal.

Expert: Perfect! Then we can say we’ve Done Something!

Expert #3: as long as they don’t build too much, though – otherwise the evil developers will profit

Expert #2: yes, definitely don’t want too much building

Expert: yes, growth is bad..

Sound of a head banging on wall in the back of the room

Expert: say .. can we claim this solves homelessness too?


I’ve always been a big fan of Umberto Eco’s writing, and at a fairly young age I came across his essay “Eternal Fascism: Fourteen Ways of Looking at a Blackshirt”. Even then, what I liked about it was the attempt to describe a broader pattern behind a specific historical phenomenon. I’ve seen it used (myself included) as a checklist to trump people lobbing accusations of Fascism. “That’s not fascism, this is fascism!”

But it’s wrong, I think, to treat it (or anything else, really) as some sort of canonically useful bad-guy-identification template. Eco was trying to describe a broader Ur-Fascism and he came close, but he was too close to the specific Fascism of his time, and his essay reflects that in some anachronistically specific characterizations of a broader phenomenon. I won’t go into specifics, because I think it’s fairly obvious, but it does highlight the current lexical problem we face.

The phenomenon he was perhaps trying to describe (Ur-Fascism) still doesn’t really have a good name, but let’s just describe it roughly as: the populist empowerment of a profiteering class of murderous (inter)national warlords. “Shitheads”, if you will. His essay contains many markers that probably do broadly indicate things are headed this way. Others, though, are too specific to fascism as it emerged in the 20th century from right-nationalism, in ways that isolate it as distinct from the leftism of the time. In the last ~100 years, though, things have changed. The relatively liberal democratic republics have joined with socialist communality into something new, different, and markedly less liberal: the modern “democratic socialist something something” nation state that we’d recognize today. “Corporatism”, in particular, is no longer a uniquely “fascist” phenomenon, but is rather the norm, with corporations as we know them being entrenched government-sanctioned entities. The world has moved far beyond the time of fascists vs. commies vs. anarchists vs. socialists, but nonetheless we still frame things in these terms. That people still refer to “corporatism” as an indicator of nascent fascism instead of a fundamental element of the status quo goes to show how disconnected modern rhetoric is.

“Fascism”, in particular, still gets thrown around a lot as a sortof catch-all epithet, with varying degrees of nuance/understanding as to what that actually means. In many circles, it amounts to something as broad as “controlling, violent dickhead”, a characterization that would include Stalin as easily as it would Mussolini. Nonetheless, because of the historical association of 20th century fascism with right-nationalism, today’s left seems to see themselves as immune to (or worse, the antidote to) the phenomenon of fascism. But the manipulation of human failings that led to the rise of historical monsters is not unique to the right or the left.

As long as we let ourselves be useful idiots whipped up into violent frenzies along anachronistic (or modern!) ideological lines, there will be new “fascists” that emerge. But they won’t look like Hitler, or Mussolini. They may not even look like Donald Trump or Hillary Clinton – the growth of the modern nation-state is such that a strongly charismatic figurehead may not even be necessary to unleash the next wave of genocidal terror on the world. (For that matter, the mass harm inflicted on the world’s human population may not even resemble the overt violence of the past, either.)

I am not sure if we need a new word for what we used to call “fascism” or not. My friend rev suggested the more on-the-nose “totalitarian gleichschaltung”, but it doesn’t quite have the same ring to it.

Maybe what we call it doesn’t matter as long as we recognize it, but I believe (as Eco did) that words matter, and when I see people advocating mob violence in the name of “anti-fascism”, it’s a sign that we have a real problem of lexical confusion on our hands!