Inside this episode

In today’s episode, I am joined by Jason McDonald, Director of Community Development at the OWASP Foundation, author of “Dead Simple Python,” and the brain behind today’s topic: Quantified Tasks.

Host: Aaron Rackley

Guest: Jason C. McDonald

Book Recommendations:

Show Transcript

These transcripts were auto-generated by Descript. If you see any issues, please do reach out and we can rectify them.

15 - Jason McDonald


Aaron Rackley: [00:00:00] Hey everyone, and welcome to the Tech Leadership Decoded podcast, where through conversations, we unravel the intricacies of leadership in the tech industry. My name is Aaron, and I'm a tech geek here based in London, UK. And in today's episode, I'm joined by Jason McDonald, a Director of Community Development at the OWASP Foundation, author of Dead Simple Python, and the brain behind today's topic, Quantified Tasks.

I really hope you enjoy today's episode, and if you do, please take a moment to like this episode and leave a review on the platform that you're currently listening to it on. It really helps us reach more people like you through that algorithm. And with that, let's get into today's episode. Enjoy.

Okay. And welcome, Jason, to the podcast. How are we doing?

Jason C. McDonald: Hi, um, I'm doing great.

Aaron Rackley: Um, I'm really excited to have you on. Um, in my previous life, I've been a scrum master a lot, so story points, estimating, all that kind of stuff is something that's been ingrained into me at this point. [00:01:00] And when I saw your blog posts on quantified tasks, I was super excited.

And then I heard you on a podcast a couple of days later, so it was like, it's in the sphere now. So I needed to get you on to have a chat. So before we get straight into it, um, do you want to just give us a five-minute overview on you and your background?

Jason C. McDonald: Absolutely. Yeah. So I have worn many hats over my career.

Um, I think you could say, well, my tagline for many years has been author, speaker, hacker, Time Lord, um, because I have done a lot of things over my career. I think most significantly, I'm a traumatic brain injury survivor, and that gave me a very unusual entrance into the industry. Um, I started out by founding my own open source organization, running what turned out to be one of the first [00:02:00] fully remote internship programs in the industry, gaining my skills while teaching others. My first full-time job in the industry was as a senior engineer. So it's an odd way to get in, to be sure. It's like getting on a plane from the top of the Empire State Building.

Um, but along the way, I never was really acting just as a traditional software engineer. Um, I was doing a lot of, uh, especially training interns, running those projects. Um, and then every other job after that, I wound up in various capacities of what I understood later to be business analysis, uh, project management, uh, product strategy, and, of course, good old-fashioned software engineering management. Um, because I guess my primary fascination is not code itself. It has never been code itself. It's actually been psychology. Why do we do the things we do? [00:03:00] Why do we communicate the way we do? And in college I studied communication. So I'm fascinated by how ideas get from one person to another person, um, and how they change in that process.

That enigma is one of my favorite things to ponder, and it influences everything I build, everything I write.

Aaron Rackley: Interesting. No, um, I come from a design background in theater and costume and stuff like that, so even my journey into software is a bit odd as well. But, uh, I know a friend who wants to do a podcast, actually, on just talking to developers and leaders and knowing their beginnings and journeys.

And I find it fascinating. But the question of the hour is: what inspired you to develop quantified tasks, and what are they?

Jason C. McDonald: So this goes all the way back to when I was first starting out in software engineering. [00:04:00] I was looking through my local library, uh, for books on coding, 'cause I was brand new to it. And I came across this book called Dreaming in Code by Scott Rosenberg.

And he chronicles the start and almost up to, but not exactly up to, the demise of a project started by Mitch Kapor, who is famous for creating Lotus 1-2-3. Um, he wanted to create something called Chandler, which was a completely revolutionary, for the time, project information management system that broke down the silos between the different objects in the PIM, and he wanted to build it in open source, um, which was still in its infancy at this time.

Uh, and Scott Rosenberg was in the same office building, and so he kept going over to the Chandler Foundation, uh, watching what they were doing, asking questions, kind of became their scribe, as it were. Um, so [00:05:00] it's a very poignant look into everything that works and everything that doesn't work in software engineering, especially the things that go wrong.

He made some interesting remarks in there about planning, but one of the things that stuck with me was this one line where he says it is possible, in the first few minutes of working on a task, for a developer to determine whether or not a task is a black hole. By "black hole," he meant a task where it doesn't matter how much effort you put into it.

It never gets done. It just sucks up an inordinate amount of time. And this, to a young software engineer, fascinated me, because it seemed to defy all logic. How could you have a task that never got done? And this started my drive to understand what causes these black holes to form. How can we identify them?

How can we mark them out in our task manager so we know they're there? And that pursuit led [00:06:00] me to start coming up with ways of measuring tasks quantifiably. Um, when I finally learned about story points a few years into my career, I discovered that I was not the first person to try and quantify things, but, pardon the hubris, I was perhaps one of the first people to actually do it.

Because the fascinating thing about story points is that it is a little bit like the imperial measurement of the foot. Back in its infancy, the foot was the king's foot. It was the literal king's foot that everything was measured against.

You change kings, you change countries, you change measurement systems. So it was an attempt at standardization, but it was still incredibly relative. And it took actually coming up with a standard measurement, um, you know, around the time [00:07:00] of the French Revolution. The whole standardization of measurements by the scientific community in France and Europe beyond was instrumental in making science and engineering possible in the first place. So it's a matter of moving to that.

Aaron Rackley: Okay. So obviously we'll come to the comparison with story points later. I think everyone has story points as a frame of reference, but how do we measure quantified tasks? What are your measurements?

Jason C. McDonald: So the interesting thing about quantified tasks is there's actually eight different measurements, but in terms of estimation itself, there's three key numbers.

Okay. Um, so the first one is distance, and this is maybe the most obvious one. How [00:08:00] long would it take you to complete this task relative to a development cycle, whatever's normal for your team? Usually it's your sprint. Um, how long would it take you, relative to a development cycle, if you knew everything?

That little clause is the key part. That's what makes it objective, because it's going to take everyone a different length of time when you factor in differences in experience and skill level and familiarity with the code. But when you take all that out of the way, and you just say, if you knew everything, if you had to learn absolutely nothing at all, how long would this take you?

The reality is people can come to a pretty solid consensus on that. And it's a measurement of raw work. How much of this is just fingers on keyboard? Yeah. The second one is friction, which is what resources exist, um, to help you solve the task. So again, it's something you can observe empirically. How much documentation is there?

How healthy is the code? [00:09:00] How well known is this, you know, process? What's the precedent on it? Um, are there subject matter experts we can go pick the brains of easily? Um, the less of that you have, the higher the friction, because the more you're going to have to research things, the more you're going to have to experiment, so the longer it's going to take you, but more importantly, the more effort it's going to take you.

Because that's the key thing here: we're talking about developer effort more than time, because developer effort, if you think about it, is actually the limiting factor. You can have 80 hours available to you to code in a week, but that doesn't mean you're going to be able to code for 80 hours. Your brain's going to get tired.

That's why we don't want to ship on Fridays, not because there's something special about Friday, but because we're just burned out at that point. We're missing obvious things. Um, and then the third one kind of goes back to that black hole I mentioned: relativity, which is how much [00:10:00] of this do we know versus how much of this do we not know?

How many unknowns are there in this? Some things are very straightforward. Um, you know, we need to add a button to the homepage and it needs to be blue. Well, there's no surprises there. We know how to add a button and we know how to make it blue. Um, hopefully. Uh, and that's why the lowest relativity score is a one, because it's never a zero. There's always a possibility.

But then, on the other end of it, you have the things that we know nothing, absolutely nothing, about. A relativity of 5 means you should stop and rethink what you're doing, because if you know nothing, you can't even code at that point. So it's somewhere in that continuum between there's no unknowns that we know of, and there's no knowns that we know of.

You're going to land somewhere on that, uh, 1 to 5 scale. Um, and where you ultimately get [00:11:00] a single number out of these three, because each of these is one to five, is you add your scores for distance and friction, and then you multiply by relativity. You remember that old agile story point method of make your best guess and multiply by three in terms of time estimates?

That's the same thing, except you're not multiplying by a fixed, arbitrary number three; you're multiplying by your level of uncertainty. Yeah, if it's really obvious and there's no unknowns that you know of, your estimate's probably spot on, bang on the dot or darn close. But if you don't have any information at all, it's a total SWAG.

And at that point you might as well just say, eh, ordinarily this looks like it should take about, uh, you know, two weeks' worth of effort, um, but we don't know anything, so three months. You know, and again, I know I keep going back to time, but that's what we wind up thinking about often. [00:12:00]
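
The formula described here, add distance and friction, then multiply the sum by relativity, can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the Quantified Tasks project:

```python
def energy_points(distance: int, friction: int, relativity: int) -> int:
    """Combine the three estimation measures into one energy-point score."""
    for name, value in (("distance", distance), ("friction", friction),
                        ("relativity", relativity)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be on the 1-5 scale, got {value}")
    # Distance + friction is the raw effort; relativity scales it by uncertainty.
    return (distance + friction) * relativity

# A well-understood task: moderate work, good resources, no known unknowns.
print(energy_points(3, 1, 1))   # → 4
# The same raw work with poor resources and major unknowns balloons.
print(energy_points(3, 4, 4))   # → 28
```

With relativity pinned at 1, the score is just raw work plus friction; at 5, the same task costs five times as much, which is the "multiply by your uncertainty" rule in action.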

Aaron Rackley: Yeah. I think it's interesting, because what drew me to this, um, was that when I was doing scrum, a lot of the time it got so focused on how many story points you could deliver in a sprint, and you can fake that to a degree. And I think one thing that interested me about your calculation was that friction aspect, because there's a part in that which you mentioned, which is how much you can actually focus on that work, right?

So whether you're burning out near the end of the week, things like that. Story points doesn't really take that kind of aspect into account. So, you know, it's like you're just hitting, I don't know, some arbitrary number, eight a day, eight a day, eight a day, but you're not thinking about what happens if the developer's getting tired during that day, or, you know, things like that.

So that was one aspect that really drew me into it. And obviously the kind of unknowns is always a big one, right? [00:13:00] So, yeah, I'm really interested by that.

Jason C. McDonald: And it's interesting that you bring that up, too, because that is part of what I hope to address with this. Because, yeah, you know, not all story points are created equal.

If you get eight story points because you completed, uh, four two-point tasks, yeah, that's nowhere near the effort involved in completing one eight-point story.

Aaron Rackley: Yeah, that's true. Yeah. How do you find it works for the differences in individual software developers? So, obviously, with story points, you're hoping generally to get to a consensus so that, depending on who picks it up, the time is relatively the same in story points delivered. But how does this factor into that?

Jason C. McDonald: Well, there's two, uh, aspects of that. One is what I was mentioning earlier: standardization. One of the key parts of this is that this is [00:14:00] repeatable. So, um, unlike story points, which vary from team to team and project to project, and you can't really compare across, this compares across. So if something scores as an 8 in energy points in quantified tasks in one project, it is about the same amount of effort as an 8 in another project with a different team and a different language and a different context.

Um, because you are looking at those things of distance, friction, and relativity. Um, so this allows a developer to form a personal relationship with the numbers, instead of it being about, well, um, all our developers should get, you know, eight story points a week done. You start to understand what you can deliver, and what you can deliver under certain contexts.

For example, when I'm working in Python, which is a language I know very well, I average about 24 story points. Um, and so when it comes to planning, I know that I'm [00:15:00] taking on a reasonable amount of work for me if I'm aiming to pick up 24 points of work. However, there's that other aspect, that not every point is equal, even in this, which is why we preserve those three numbers we get it from, because it allows for self-selection.

And there's two aspects of self-selection. Um, one is that a junior developer is not going to want to pick up high-friction, high-relativity tasks, because they're going to need to bring a lot more of their own knowledge to that task. So you can distribute the lower-friction tasks to the more junior members of the team.

Uh, open source projects benefit from this in that this automatically becomes what's your low-hanging fruit. Yeah, you know, you want to find your low-hanging fruit? Look for low distance, low friction. There's your low-hanging fruit right there, ripe for the picking. Um, and then the senior-level engineers can pick the more challenging work.

Um, [00:16:00] because you can see what's more challenging. But there's this other aspect, where the amount of energy you have, and what sort of energy you have, varies from day to day. Um, you have two hours to code before you have to start getting ready for a meeting. Are you going to want to pick up a task that is really simple and straightforward, that you can get done because it's low friction, or are you going to want to pick up a really hard task? Obviously you don't want the hard task right now; you've got to go to a meeting. Or it's Friday afternoon and you're exhausted, versus it's Monday afternoon and you've got nothing on your calendar.

You've got this beautiful five-hour slot where there's just nothing going on. Now you're going to go for that high-friction task, because you have the timing, you're in the right headspace for the additional challenge. So it allows people to select the right work for them, within the sprint and within the workday.

Aaron Rackley: Yeah, I think that's really interesting, because, like, in the [00:17:00] past, obviously, when I was doing a lot more development, if you wanted to pick up a quick win, you'd do the one-to-two-story-point tasks, right? You'd try and find the smallest ones. But I think what you're saying there is a lot more interesting, which is: what do I have the mental capacity to really think about?

Like, do I have the ability to take on a complex one that's short, or do I need a less complex one that's short? And I like that. That gives you a lot more flexibility in the work you should be doing at that time, rather than just taking the next one and droning through it.

Jason C. McDonald: Right. And it might not even be short. It might just be busy work. You know, you could have something that's low friction, low relativity, it's really straightforward, but it's high distance. You're going to need the whole sprint to do this, because you're having to, uh, rename all of your tests to use underscores instead of camel case.

Yeah. And that's just a lot of busy work. You know, you can't finish that in one [00:18:00] session, but you can certainly spend 30 minutes on it before a meeting, and you can see that.
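
The self-selection described above, juniors taking low-friction work, seniors taking the gnarly stuff, and high-distance busy work being chipped away between meetings, amounts to simple filters over the three scores. The task records and thresholds below are illustrative assumptions, not a real tracker schema:

```python
# Each task carries its three Quantified Tasks scores (values made up).
backlog = [
    {"id": "T-101", "distance": 1, "friction": 1, "relativity": 1},
    {"id": "T-102", "distance": 2, "friction": 4, "relativity": 4},
    {"id": "T-103", "distance": 5, "friction": 1, "relativity": 1},
]

# Low distance, low friction: quick wins / low-hanging fruit for juniors.
low_hanging = [t["id"] for t in backlog
               if t["distance"] <= 2 and t["friction"] <= 2]

# High friction or high relativity: best for seniors with a clear afternoon.
challenging = [t["id"] for t in backlog
               if t["friction"] >= 4 or t["relativity"] >= 4]

# High distance but otherwise straightforward: busy work you can chip at
# in 30-minute slots (like renaming all the tests).
busy_work = [t["id"] for t in backlog
             if t["distance"] >= 4 and t["friction"] <= 2
             and t["relativity"] <= 2]

print(low_hanging)   # → ['T-101']
print(challenging)   # → ['T-102']
print(busy_work)     # → ['T-103']
```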

Aaron Rackley: How do you recommend teams transition to start using quantified tasks? Especially if they're already really ingrained into story points. Like, where do we start?

Jason C. McDonald: Well, first of all, the nice thing is that energy points from quantified tasks and story points have a lot of similarities from the outside. They cover roughly the same range of numbers. They have roughly the same curve as the modified Fibonacci. Um, you will get numbers that aren't true modified Fibonacci, so, uh, when you adopt this, you will have to tell your team: we don't care what the number is. We care that the number came from this. That's the important part. So don't twitch if you get a seven. A seven is still valuable information. Don't freak out. Um, but beyond that, you use basically the same process as you already do. [00:19:00] The only thing that changes, really, is that your conversation about estimation follows a particular structure. These are things that we already should be talking about if we're using story pointing, but the reality is we don't. The three questions that I brought up, one for each of the measures: How long would this take me if I knew everything, relative to the sprint?

Again, it's not an hour commitment; it's just relative to the sprint. Um, what resources are available to help us complete this task? And how much do we know versus how much do we not know? Those are the three big questions. And by going through that list with your team, um, you can very quickly, and by the way, in practice this is faster than story point estimation, because you're streamlining your conversation.

You can really rapidly narrow in on where [00:20:00] developers may be disagreeing about the estimate. Oh yeah, this is obviously a week of grunt work, but I think it's higher friction. Why do you think it's higher friction? Have you seen their documentation lately? It's got major gaps.

Oh, I hadn't looked at the documentation to realize that. Yeah, I agree with you. Okay, the conversation goes a lot faster and it's a lot more productive. Once you have come up with those three scores, the one-to-fives, and your team has consensus on them, you write them down so you can update your task tracker. Many task trackers have custom fields or tags or whatever; you can use that to capture this. And even if you don't have that, stick it up at the top of the description: this is a friction three, relativity one, whatever. And then you apply the formula to it.

So distance plus friction, and multiply that sum by relativity. That goes in your story point score box. Okay. And that's it. That is literally the whole thing. [00:21:00] Um, if, God help you, you have one of those story point boxes that is a Fibonacci number dropdown, then write your actual score in the description and then pick the closest Fibonacci number.

It still works well enough to get you started. As you use it, you will, in a very agile way, like Agile's supposed to be, iteratively find your team refining how you approach this. Uh, you will find some habits of story pointing that you no longer need. You may discover, every team I've used this with has discovered, that they did not need poker beyond the first day. They're just like, yeah, the poker was a waste of time. It was no longer needed. Um, that may not be the case for everyone. Um, you may make adjustments to your issue tracker. You may adopt other parts of quantified tasks. But you can do that iteratively.
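
For the Fibonacci-dropdown situation Jason mentions, a tiny helper can snap the true energy-point score to the nearest allowed value while the real score lives in the description. This is a hypothetical convenience sketch, not part of Quantified Tasks; the dropdown values assumed here are the common modified-Fibonacci sequence:

```python
# Common modified-Fibonacci values offered by story point dropdowns (assumed).
MODIFIED_FIBONACCI = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_dropdown_value(score: int) -> int:
    """Pick the dropdown value closest to the true energy-point score."""
    return min(MODIFIED_FIBONACCI, key=lambda f: abs(f - score))

print(nearest_dropdown_value(7))    # → 8
print(nearest_dropdown_value(12))   # → 13
```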

Aaron Rackley: Yeah. I think what's also interesting about that is that if, [00:22:00] in whatever software I'm using, Jira, whatever, I had these custom fields for the three values as well as the final one, then as a tech lead or something like that, I can look through the backlog and see at a glance which ones are looking like they're going to have high friction, and then go and have a look and see why that friction is high. Maybe there's something I can do to reduce that friction later on. And I think that's a really good metric to have.

Which you don't get in story points, right? You just see it's a seven, and then you're like, well, I don't know why it's a seven. You know, there's no context around that.

Jason C. McDonald: Precisely. And I've seen that a couple of times.

On one of the projects I was leading, we had a story that was 12 points. We were looking at that and going, that's a lot of work. Why is it a 12 again? We scored that last month. Why is that a 12? And when we looked at it: oh, because we don't know how this works. Oh, well, we just need a spike. Yeah. And then we'll understand that, and that'll lower it.

The other nice thing, though, about these custom fields is [00:23:00] that, and I really hope this achieves adoption such that the tool makers start actually baking this into their tools, there's additional things that we could do with it. One of my favorites is the averages. The averages of all the different metrics are actually useful. But if you take an average of, say, friction: if you've got a high average friction and you're struggling with your team's velocity, you know you need to add more senior members. Yeah, right there. You immediately know that. Okay, we are in the weeds. We don't have enough tools for a junior to get their head around this easily.

We need more senior members on the team. Versus if you're struggling with velocity and you've got a high distance, it just means there's a lot of busy work. You just need more energy available to you. Literally just more coding time. That might [00:24:00] mean reducing meetings, um, or eliminating other distractions.

So your development team can actually just have more time hands on keyboard. So you can guide that conversation with the managers. And that's kind of the clever bit about this: story points lend themselves to being misunderstood by managers who really want that inherently unitless, meaningless number.

We even say it's meaningless. I've heard someone say that: of course story points are meaningless, that's the point of story points. But managers don't understand pointlessness. They want to have a metric. This gives your managers metrics. So when they involve themselves in the conversation, as they inevitably will, instead of them trying to assign an arbitrary meaning, which is going to be wrong, to your story points and using that to make decisions, there's already meaning there, and they're coming in and they're able to see: oh, you have really high relativity.

Why is your relativity so [00:25:00] high? Well, because our product definition is not very good. Oh, we should have more conversations about what we're building.
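
The diagnostic use of averages sketched here, high mean friction pointing at missing seniority or documentation, high mean distance at missing coding time, high mean relativity at a fuzzy product definition, could look something like the following. The threshold of 3 is an assumption for illustration, not a value from Quantified Tasks:

```python
from statistics import mean

def diagnose(tasks):
    """Average each metric over a sprint and suggest which lever to pull."""
    averages = {m: mean(t[m] for t in tasks)
                for m in ("distance", "friction", "relativity")}
    hints = []
    if averages["friction"] > 3:
        hints.append("high friction: add senior engineers or improve docs")
    if averages["distance"] > 3:
        hints.append("high distance: free up more uninterrupted coding time")
    if averages["relativity"] > 3:
        hints.append("high relativity: clarify the product definition")
    return averages, hints

sprint = [
    {"distance": 2, "friction": 4, "relativity": 2},
    {"distance": 3, "friction": 5, "relativity": 1},
]
averages, hints = diagnose(sprint)
print(hints)   # → ['high friction: add senior engineers or improve docs']
```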

Aaron Rackley: Yeah, I think that's important, right? It gives more context and more information to those individuals that are also going to be a factor in your planning, right?

And your project manager is definitely one of them. And anything that you can do as a team to make sure that you and your product manager are speaking the same language always helps.

And I think this is really exciting for me personally, because I've just done a load of blog posts, sorry, podcast episodes, around, like, measuring developer productivity, and should you, shouldn't you, and stuff like that. And I think this kind of does the should and the shouldn't at the same time. Because with story points, it's: you shouldn't, because it's artificial and it doesn't really give you a good [00:26:00] representation. Whereas this gives you a perfect representation, I think, of areas of improvement, which is what I want to measure for my developers. I want to figure out how I, as a tech leader, can make their lives easier.

And this will give me a consistent set of metrics, across all projects, across all teams, that I can look at and have a good visual of what's going on.

Jason C. McDonald: Yeah. That was one of my goals with this, back when I was first coming up with it. You know, something else that Dreaming in Code mentioned was that it was difficult to measure developer productivity.

Yeah. You know, do we measure in hours? That's a bad idea, because then you reward the slowest coder. Do we measure lines of code? Well, then we never refactor, you know? Closed issues? Uh, you know, then I'm at the joke from Dilbert all those years ago, when Wally says, I'm going to go write myself a new minivan.

We didn't have a good way of [00:27:00] measuring developer productivity. This becomes a good way, with an asterisk: it is the beginning of the conversation and not the end of the conversation. Because the important thing is, and any managers who listen to this need to understand this, you cannot raise a developer's velocity beyond a certain inherent threshold.

Yeah. Neuroscience has told us that the human brain actually has a maximum capacity of how much it can do. It is based on our metabolism. Each individual is different, but there's only so much you can get out of a brain. Once you reach that threshold, you're done. The brain can do nothing else.

It's tired. It can no longer do that work. And managers have often [00:28:00] thought in terms of, well, you know, we just need to get developers who are willing to work 80 hours a week. You don't want your developers working 80 hours a week, because as soon as your developers are working 80 hours a week, they are spending so much of that time, at least half of that time, I would argue more than half of that time, um, in a state of mental exhaustion that you are creating low-quality code. Not because they're bad coders, but because they simply, medically, cannot write good code at that point. They're tired. So when you're looking at velocity, you're looking at developer energy.

You cannot artificially inflate that. What you want to do is look at the average over time, and you want to recognize when that average dips. Recognizing that, you know, it will go up and down. Everybody has good weeks and bad weeks. So look at the average over time, and when that average dips, when it's trending down, then your question should be: why?

What is consuming their energy? And let [00:29:00] that be the start of the conversation on figuring out how to improve their productivity, by making more of their energy available for their use in engineering. Yeah. Um, but no matter how much money or time or perks or pizza or whatever you throw at them, they're never going to be able to work past their maximum. And that maximum will shift a little bit, but it'll average out to something. Get to know that average and respect it. Because if you don't respect it, they can and should leave.

Aaron Rackley: No, I think, yeah, it's very rare that you find people that are able to code non-stop for that period of time. It happens, but it's very, very rare.

Jason C. McDonald: Very, very rare. And even the ones who can do it cannot do it long term.

Aaron Rackley: And [00:30:00] for the places that you've implemented this, what are the first kind of challenges you've come across?

Jason C. McDonald: Usually, the number one challenge I run into is what I refer to as the religion of Agile. So it's not the methodology of Agile, because Agile is great. Agile is wonderful. I love Agile. But the religion of Agile is the idea that there is this set of philosophical principles such that, as long as you adhere to them and, um, you know, gold-plate a copy of your Scrum certification, put it on the wall, and pray to it three times a day, you will somehow achieve project management nirvana.

Yeah. It does not work that way. And the places that follow the religion of Agile are very resistant to this, because it runs counter to this sort of [00:31:00] institutionalized Agile mindset. Because you're moving away from the magical numbers of the Fibonacci, which are somehow supposed to, you know, align our chakras or whatever, whatever is allegedly special about the Fibonacci sequence. It changes so much of it from being just meaningless, empty ceremony into actual conversation.

And as one person put it to me, he said, I don't like this, because you know that these numbers shouldn't have any meaning. I don't want them to have any meaning, because look at our lovely burndown chart. I publish these. I am serious, he actually told me: I publish our burndown chart, oh God, online. This is how I market. And I'm just thinking, heaven help your developers.

Aaron Rackley: God, that's like a race to the bottom.

Jason C. McDonald: Don't, yeah, don't do that. You know, if you're one of [00:32:00] those shops that is inviting your clients to your retros, you're publishing your burndown charts, you are setting story point objectives for your staff: you're going to get resistance. Because this really runs counter to that nonsense. And I do mean nonsense. But at the same time, that's because this is truly agile. This is putting information back in the hands of the developers so that they can iteratively improve.


Aaron Rackley: then on the other side of that, what successes have you had implementing this so

Jason C. McDonald: far? Every time I've, every time I've been able to put this onto a team, what I've heard over and over is where has this been all my life? And the reason for that is because it takes all of the subjectivity out of how do we score?[00:33:00]

Because that's really where a lot of the friction around estimation comes from. Everybody's got their own idea on how to do estimation. And really most of those ideas don't disagree that much. They disagree just enough to be like the, to go back to measurements again, to be, uh, to be like the, um, like, like the, like the U S survey foot and the, and the standard foot it's off by just, just, just enough to be a real pain.

And that creates a lot of confusion. And so this removes this whole question of what do we assign more weight to? And it allows the conversation to focus on what's actually important. And that has been a breath of fresh air for a lot of people, because it makes scoring actually productive and fun again. When you actually get [00:34:00] information, it's very satisfying.

Aaron Rackley: So I think one more question for this: in terms of small-scale sprint stuff, it seems to be perfect. But I know time's relative; when it comes to planning long-term projects, how does that work?

Jason C. McDonald: There's actually a separate triad of measurements in Quantified Tasks that relates to that long-term planning that I'd like to bring up.

So, the three planning metrics, impact, gravity, and priority, form a sort of funnel. It allows you to take your backlog and identify priorities, and those priorities ensure that what makes it into the sprint to be scored is what you need to be working on. [00:35:00] You start at the very top with impact.

So impact is the importance of the item, the work item, whether it's a task or a bug, to your overall project goals and more specifically to your users. And on the website, quantifiedtasks.org, I actually go into this whole impact planning process you can follow: identify your users, prioritize which users we care about more, you know, because you have certain users that matter more in your product than others.

It doesn't mean you don't care about the others, but you have priorities. So identifying that at the get-go helps, because then you can say, you know what, this is a blocker for this smaller group of users and it's a major inconvenience for this larger group of users, so this is fairly high priority. You can identify priorities as things come in.

You can identify those in collaboration with your client or your other stakeholders. [00:36:00] Once you have those priorities, then the things that matter the most bubble to the surface. Your impact-four, impact-five things bubble to the surface. From there, you kind of grab the stuff from the top of the impact list and you start planning your release around that.

Gravity relates to planning releases. This is the importance of the work item to the release and the goal of a release, all the way from one, which is it doesn't matter at all, leave it out, up to five. And it actually kind of maps to MoSCoW. How does it go? Must have, should have, could have, won't have.

So one is like won't have; two is, we would if we had extra time, but we're probably going to leave it off; all the way up to must: this is a blocker, we don't have a release without this. You identify that, and that frames your [00:37:00] release. And then from there you can, again, start pulling things into your sprint during your sprint planning. And priority is your now, next, later.

Priority five is emergency: our hair's on fire, fix this right now, the server room has just been hit by a meteor. Priority four is someone's currently working on it. Three is someone needs to pick this up as soon as their current work item is done. Two is probably a subsequent sprint.

One is probably a much later sprint. And so you continually rework these numbers sprint after sprint, because they continue to bubble up your priorities, and you're not losing this information as you move from sprint to sprint. Then, as you pull things in, this is where the estimation comes in, because as you're doing that sprint planning, you're scoring as you go.
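The impact, gravity, and priority funnel described here can be sketched as a rough filter chain. This is an illustration only; the class name, field names, and thresholds below are our own invention, not an official Quantified Tasks API, though the 1-to-5 scales follow the conversation.

```python
from dataclasses import dataclass

# Illustrative sketch of the impact -> gravity -> priority funnel.
# All three metrics run 1 (lowest) to 5 (highest), as described above.
@dataclass
class WorkItem:
    title: str
    impact: int        # importance to project goals and users
    gravity: int = 1   # importance to the release (1 = won't, 5 = must)
    priority: int = 1  # urgency (5 = emergency ... 1 = much later sprint)

def plan_release(backlog, min_impact=4):
    """Grab the high-impact items off the top of the backlog."""
    return [i for i in backlog if i.impact >= min_impact]

def plan_sprint(release_items, min_gravity=3):
    """Pull must/should-have items into the sprint, most urgent first."""
    candidates = [i for i in release_items if i.gravity >= min_gravity]
    return sorted(candidates, key=lambda i: i.priority, reverse=True)

backlog = [
    WorkItem("Fix login crash", impact=5, gravity=5, priority=4),
    WorkItem("Dark mode", impact=3, gravity=2, priority=1),
    WorkItem("Export to CSV", impact=4, gravity=4, priority=3),
]
sprint = plan_sprint(plan_release(backlog))
print([i.title for i in sprint])  # ['Fix login crash', 'Export to CSV']
```

The point of the funnel is visible in the output: low-impact, low-gravity work ("Dark mode") never reaches scoring, and what remains is ordered by priority so the conversation starts with the most urgent item.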

And so, you know, when your sprint is full, because everyone's got a full plate, they know what their capacity is. And so once everyone's [00:38:00] plate is sufficiently full, not too full, not too empty, you have a sprint and you run with it.

Aaron Rackley: So when we're doing the planning for a sprint, do we take each individual's energy points and then look at the work that they will pick up based on that? Or do you do the same kind of thing as with story points, where you say, usually our sprints are 200 energy points, so we take 200 energy points? Or is it,

Jason C. McDonald: You can do it either way.

You can do whichever way works better for your team. But I tend to recommend getting your team's capacity in terms of potential velocity from the individuals, because it does vary. Your average is going to fluctuate: Bob's on vacation this week; Laura is just getting over COVID, she's still not a hundred percent, but hey, she showed up to the meeting; and Michael [00:39:00] is losing his mind, he's been in meetings for the last two weeks and he wants to code a lot. And so your capacity looks different than normal, and you want to be able to take that into account. So considering the energy that each of your developers has is going to be beneficial.

And besides that, if you want your manager to do it, you should be doing it yourself. You know, if you want your managers to be thinking about developer energy, then that should be right at the front of your planning.
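A sprint-capacity calculation built from individual energy, as recommended above, might look like the following sketch. The baseline numbers and adjustment factors are invented for illustration (they echo the Bob/Laura/Michael examples from the conversation); Quantified Tasks does not prescribe specific values.

```python
# Sketch: sprint capacity as the sum of each person's available energy,
# rather than a single team-wide average. All numbers are illustrative.
baseline = {"Bob": 20, "Laura": 20, "Michael": 20}

adjustments = {
    "Bob": 0.0,      # on vacation this week
    "Laura": 0.5,    # just getting over COVID, not at 100%
    "Michael": 1.2,  # meetings for two weeks, eager to code
}

capacity = {name: round(baseline[name] * adjustments.get(name, 1.0))
            for name in baseline}
team_total = sum(capacity.values())

print(capacity)    # {'Bob': 0, 'Laura': 10, 'Michael': 24}
print(team_total)  # 34
```

A flat team average would have planned for 60 points here; summing the individuals shows the real number is closer to half that, which is exactly the fluctuation Jason warns about.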

Aaron Rackley: Yeah, that's interesting. I was just thinking as you were talking there: if you're focusing on individuals in your team and their energy, and you notice that, oh, in the last two sprints or cycles, John has dipped a little bit in his energy, maybe I'll go and have a look why. And then you might realize that he's been picking up very complex, high-friction tasks, right? It's not that his [00:40:00] energy's dropped, but it's more to do with the kind of work he's doing. And I think that's another interesting thing that story points, once again, just wouldn't give you.

Jason C. McDonald: Now, I do have to give one other warning about planning, though. And that is that we tend to leave some important things off. We include tasks for coding; we don't tend to include tasks for reviewing; we don't tend to include tasks for testing. Those need to be separate tickets, scored separately. Because I had a couple of sprints where the front-end lead was averaging what looked like about five energy points a sprint.

But that's because she was doing all of the code reviews, and more than 75 percent of her time was going into those code reviews, but she wasn't getting any velocity credit for it at [00:41:00] all. So you do need to factor that in, and it's helpful when you're figuring out energy. It also helps your testing and review, because the fact that we leave it off the board means it becomes an afterthought and an inconvenience: oh, but I have all these story points, these energy points, to get through, and I have to go read your code now? Waste of time.

And they're more likely to rubber-stamp it. Whereas if it's actually a ticket, it's like, okay, that's an eight-story-point job that he's doing and we need a code review. Oh, I can do the code review on that. Okay, that's going to be another six story points. It's on your board. You've actually planned for that.

That will also help improve the accuracy of your estimates, because you're factoring in not just building, but everything else that has to go along with it, which, again, is something managers tend to overlook. Why do they overlook it? Because we don't put it on the board.
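One way to picture the effect of scoring reviews and testing as their own tickets, per the warning above. The names, ticket kinds, and point values here are made up for illustration; the structure simply shows how a reviewer's otherwise invisible work shows up in velocity once it has a card.

```python
from collections import defaultdict

# Sketch: per-person velocity when review and testing work get their
# own scored tickets instead of being invisible. Values are illustrative.
tickets = [
    {"assignee": "Bob",   "kind": "code",   "points": 8},
    {"assignee": "Jane",  "kind": "review", "points": 6},  # reviewing Bob's work
    {"assignee": "Laura", "kind": "test",   "points": 5},
    {"assignee": "Jane",  "kind": "code",   "points": 3},
]

velocity = defaultdict(int)
for t in tickets:
    velocity[t["assignee"]] += t["points"]

# Without the review ticket on the board, Jane would show only 3 points
# of "velocity" despite most of her time going into code review.
print(dict(velocity))  # {'Bob': 8, 'Jane': 9, 'Laura': 5}
```

This mirrors the front-end lead story: drop the review row and Jane's visible contribution collapses, even though the work still happened.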

Aaron Rackley: [00:42:00] Actually, yeah, that's very interesting, because in teams that I've worked in in the past, when you're doing story point estimates with planning poker, as we say, the QA are obviously involved in that, and generally you'll come to a consensus around that table on what the story point representation of the ticket is, including

potentially testing and peer reviews. Whereas this actually allows you to, if you're separating it and then doing the individual estimating on each, it gives you a clear view on where you're going.

Jason C. McDonald: And it ensures the right people are working on it. Because if you factor testing and review into your estimate and you assign it all to Alice,

but Alice is not the one writing the tests, for some reason, maybe she's not the one doing the testing, or, well, of course, she's not doing her own code review. And you actually have Jane doing the code review. Well, Jane should have that on her own velocity chart. But it is amazing to me.

I've seen this for years: how [00:43:00] the code reviews will accidentally drop off people's radar, because they don't have a card. They don't realize it's a thing. They forget to check their to-do list on GitLab. Of course, we all forget to check the to-do list on GitLab, because where do we go to find work? Not the to-do list. We go to JIRA. That's where we go to find the next step. We go to our task tracker. So if the task isn't on the task tracker, it gets overlooked, and then: oh, shoot, I'm sorry. Yeah, it's Friday afternoon and I've got a meeting with the boss in 10 minutes. But okay, I'll give your code a quick glance.

Oh, shoot. It's 957 lines. Looks good to me. Stamp ship. Whoops.

Aaron Rackley: Awesome. No, I'm definitely energized by this whole concept, and I really want to try and use it. I'm going to have to figure out a way to get a team using it. Before I ask you to give us a little look at where you are online and the website and stuff, I'd like to [00:44:00] ask our guests to recommend a book for our audience to put on their shelf and have a read.

And it doesn't have to be a technical book. It can be anything you like. So do you have a book that you would recommend for our audience?

Jason C. McDonald: I do. I might be biased; it's my own. I am the author of Dead Simple Python, published by No Starch Press. This is if you have crossed paths with the Python language,

but maybe you're struggling to get the most out of it, especially if you're coming from another language. Most books out there for Python are written for people who don't have prior coding experience. And that's great, because Python's a great entry-level language. But if you know how to code, you don't want to suffer through another 200-page explanation of functions and variables.

You already know how those work. So this gives you a really deep, exhaustive tour of the entire language from the perspective of: you already know how to code, you're coming from [00:45:00] another language. But most importantly, it doesn't just teach you how to do things in the language. It's not just like, okay, you know how to use if statements,

so here are the if statements in Python. It's: what's the Pythonic way to do it? How do you get the most out of the language? Why do we do things a certain way in Python versus another language? That's really the focus. And it covers the entire core language. So I definitely recommend checking it out.

It's about 800 pages, which people laugh at when they hear dead simple, 800 pages. But that's because it's simple looking backwards. Nothing's simple looking in, but when you get to the other side of a concept, you look back and go, oh, it's so simple. That's what I mean by that. So you can find that anywhere.

Aaron Rackley: I'll have to pick that up, because I want more books that are aimed at: I already know how to code, I just want to know why the language does it that [00:46:00] way, or what pitfalls I'm going to hit in that language. Right. So I'll definitely take a look at that. And then, where can everyone find your wonderful self online?

Jason C. McDonald: Right. So if you want to learn more about Quantified Tasks, there is a website, and you can also subscribe to the newsletter, which I periodically put stuff out on. That is quantifiedtasks.org. Quantified tasks, plural, dot org. And then if you want to find me, I am ubiquitously known online as Codemouse92, so you can find out more about me,

find my other books, because I also write fiction, see various podcast appearances, conference talks I've given, all that good stuff, at codemouse92.com.

Aaron Rackley: Awesome, I'll make sure they're both in the show notes. But again, thank you for coming on. As I said, I just want to try and use this now, because [00:47:00] it's answered all of the problems that I've ever had with story points, even though I'm a fan of Scrum and story points. It's a weird edge. So yeah, I appreciate you coming on.

Jason C. McDonald: Excellent. Thank you so much. And hey, to you or anyone listening to this: if you use this on your team, whether you find that, hey, this works great, or we came up with this, or we changed it this way, or this isn't working, please, please, please do reach out.

There is contact information at quantifiedtasks.org. I want to gather as much information as I possibly can about this being used in the field, because this is agile: we improve things through collaboration, through iteration. So please share anything you learn. I'd be very appreciative of it.


Aaron Rackley: Will do. Thank you very much. Thank you.