
S2E03 - Why do we need to stop measuring developer productivity?


Inside this episode

In today’s episode, I’m joined by Tobias Mende, a leader, mentor and tech advisor, and together we will be answering the question: why do we need to stop measuring developer productivity?

Host: Aaron Rackley

Guest: Tobias Mende

Book Recommendations:

- Reinventing Organizations by Frederic Laloux
- Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim
- Leadership Is Language by L. David Marquet
- Turn the Ship Around! by L. David Marquet

Show Transcript

These transcripts were auto-generated by Descript. If you see any issues, please do reach out and we can rectify them.

Aaron Rackley: [00:00:00] Hey everyone, and welcome to the Tech Leadership Decoded podcast, where through conversations, we unravel the intricacies of leadership in the tech industry. My name is Aaron, and I'm a tech lead based in London, UK. And in today's episode, I'm joined by Tobias Mende, a leader, mentor, and tech advisor.

And together we'll be answering that question: why do we need to stop measuring developer productivity? I really hope you enjoy today's episode. And if you do, please take a moment to like this episode and leave a review on the platform you're currently listening on. It really helps us reach more people like you who are interested in tech leadership.

And with that, let's get into today's episode. Enjoy. Okay. Good morning, Tobias, and welcome to the podcast. Hope you're having a great week so far. For the audience: I read a, um, sorry, I read a LinkedIn blog post and it was all around developer productivity and how we measure it, or whether we shouldn't measure it, and things like that.

And so I immediately reached out and thankfully [00:01:00] you said you'd come on the podcast and talk about it. But before we get into that, because I'm super excited to talk about it, actually, um, would you just give us a little brief intro about you and, you know, your career and how this blog post came to be.

Tobias Mende: Yeah, sure. Thank you for inviting me to your podcast, Aaron. I'm very excited to be here. My name is Tobi and I've been working in software engineering for about 20 years now. And I spent over five years as a tech employee in various companies, in remote and hybrid teams. And lately in one developer experience team, which I built up from scratch.

Uh, then that brought me to the topic of developer experience and developer productivity, and thinking about how we can, uh, increase engineering excellence and happiness. And today I'm working as a coach and consultant for software companies and as their partner for those topics. [00:02:00]

Aaron Rackley: Awesome. Cool. Um, so I know that the, the blog post is split into two parts.

So what I thought we'd do is we'd start with part one, and you've got some good headlines, so I thought the headlines could just basically be the questions. First of all, at a high level, what do you think is wrong with measuring developer productivity?

Tobias Mende: So, um, actually that goes back already, um, 20 years, I think.

Um, when, um, Martin Fowler wrote that, uh, you cannot measure developer productivity because you, um, cannot measure output. And that's kind of the issue: because what is engineering output? Um, you might think, okay, it's code, uh, maybe, because engineers produce code, but then on the other hand, is that really valuable?

And, uh, [00:03:00] today we know code isn't an asset, right? Code is a liability. So producing a lot of code is not, uh, not a quality of a great engineer. And, um, that, that means we cannot measure output just by the amount of code, but we need to measure it by business value. So how do you measure business value then?

And that's something we can also not measure in the moment we are creating a feature, for example, because when we are creating a feature, we don't know yet: how long will this feature be in production? How many customers will it attract? What will customers pay for it? And also on the other side, how many bugs do we have because of it?

How many incidents, how much maintenance effort does this feature cost us? So the output is something we can maybe estimate, or probably more likely guess. And because we cannot measure the output, everything else to measure developer productivity is, in my opinion, a waste of time, uh, [00:04:00] because we are only measuring some, uh, meta or, um, proxy metrics that can

help us to understand, okay, how often do we deploy to production, and these kinds of things, which are valuable. But in the end, it's not about developer productivity. It's about, um, our flow, developer flow. And this is something that's more interesting to measure, but it doesn't give us any idea about productivity in the end.

Aaron Rackley: And you mentioned in your blog post, uh, the, the title of the section was the lie about sales productivity, which I thought was interesting. So do you want to explain that concept?

Tobias Mende: Yes. Uh, so the, the reason why people are looking to measure developer productivity, I think, is because they assume they can measure productivity in other areas, for example, sales.

And it sounds logical, right? Um, a deal closed, that's, that's productivity because then we have a [00:05:00] client and that's, that's the output of a sales team. But when we think about it more closely, then that isn't the end of the story. It's just the beginning. Once we have the client, the question is how long will this client stay?

How happy are they with our product? Will they recommend the product? Or will they demand a lot of other features that we still need to build in order to keep this client happy? And so, also, um, just having a lot of sales and attracting a lot of clients can be negatively affecting productivity and the business value, when those clients are not the right clients, when those clients only closed deals because they were promised some feature that doesn't exist yet, that engineering hasn't built yet. Then that creates pressure on the engineering department to build that feature.

And thus, this means that they need to prioritize that. This means that they take shortcuts, that they don't finish what they actually think should be built based on, um, product management. [00:06:00] And then a client can actually be something that you don't want to have. So, uh, just having, just closing clients and just closing deals with, uh, annual recurring revenue of X doesn't mean that this is also, uh, the value for the company.

There are other things involved, and this makes measuring sales productivity in the big picture, in my opinion, a lot more complicated than just these numbers.

Aaron Rackley: Awesome. And I think you give some other examples of other departments, like HR, um, just hiring the wrong people because they're just trying to fill quotas, as another example.

And then I think the best quote out of that whole section was the one you've got, Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. I think that's a very, very good one, and I liked that a lot. So you said we shouldn't measure it, but I guess, um, well, I'm assuming that we have to measure something, right?

So, [00:07:00] um, 'cause again, naively, I've come from a lot of places doing sprint Scrum agile, and that's how they're measuring it, right? You have this traditional kind of, um, you're measuring your story points or you're measuring your releases and things like that. So what do you think we should be doing instead? Or, yeah,

Tobias Mende: yes, uh, of course.

So one thing that I believe is that, um, measures such as story points or deployments made or so, uh, and also the DORA metrics, are lagging indicators, in a sense. So, uh, when those go down, then you can look into it and see, okay, why does this happen? Do we have a problem here? They are indicators for that, but they don't tell you much about the productivity, because maybe the team just changed how they work.

Uh, on the other hand, what I find quite valuable is to think about developer experience and how we can manage developer experience. So how [00:08:00] can we measure how enjoyable it is for engineers to work in the system? How much flow do they have? How much feedback do they get? And these kinds of things we can measure.

First of all, we can measure those using surveys, not like this hard data that we get from our systems. Like, okay, you have 10 commits per day and, uh, every engineer creates an average of a thousand lines of code per week or something. But more like, um, we can ask engineers, in the end, how do they feel about their productivity?

How do they feel about their excellence? What is blocking them? Where do they feel that they are wasting most of their time? And that gives us a picture of where people think there is an improvement opportunity. And when we improve those bottlenecks and we remove those bottlenecks, then that eventually also increases the productivity.

So without even measuring productivity, we can still improve the productivity by listening to [00:09:00] engineers.
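
The survey roll-up Tobias describes here can be sketched in a few lines: collect per-topic satisfaction ratings, average them, and surface the lowest-scoring topics as candidate bottlenecks to discuss with the team. This is a minimal illustration; the topic names, ratings, and 1-to-5 scale are made-up examples, not data from any real survey tool.

```python
from statistics import mean

# Hypothetical survey responses: each engineer rates their satisfaction
# with a topic from 1 (bad) to 5 (great). Topic names are illustrative.
responses = {
    "build speed":        [2, 1, 3, 2],
    "test flakiness":     [1, 2, 2, 1],
    "deployment process": [4, 3, 4, 5],
    "documentation":      [3, 2, 3, 3],
}

# Average each topic and list the lowest-scoring ones first --
# those are the likeliest bottlenecks to dig into with the team.
scores = sorted(
    ((topic, mean(vals)) for topic, vals in responses.items()),
    key=lambda pair: pair[1],
)

for topic, score in scores:
    print(f"{topic}: {score:.2f}")
```

The point of the sketch is that the ranking, not the absolute numbers, is what drives the conversation with the team.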

Aaron Rackley: Yeah, it makes sense. I think, um, one of the things I learned when I transitioned from an individual contributor into a leadership role was that you do start looking at, um, measuring differently. So as an individual contributor, I'm just looking at: have I got my task over the line?

Have I done it in the time that I said I'd do it? That kind of stuff. Whereas, like what you alluded to there, as a team leader you start looking at it more holistically, globally, and kind of: what are the areas of productivity, and where are we slowing down or speeding up, and in what areas? Because, like you say, every developer is an individual, and they all have their own quirks and individualism that makes them either faster in one area or slower in another area.

But the one thing that we all have in common is generally the tool chain that we're using, the product that we're working on; that's all very similar. So I think what you said there about [00:10:00] looking into, listening, sorry, to what developers are saying, you know, their, their pain points, is definitely an interesting thing.

And I've always found, generally, and this is from me personally, I've generally found that it normally is like a tooling issue that normally is the thing slowing down a lot of things for me. It's like, um, I'm not able to test properly because of this, I'm not able to deploy fast enough because of this.

And, um, I generally find it rare that a coder goes: I couldn't code fast enough because my hands didn't type quick enough. Or, like you say, it becomes a lack of documentation. So I think, yeah, um, it's definitely worth looking into this kind of stuff. So,

Tobias Mende: yeah, go on. You mentioned, you mentioned another important thing there, um, also, I think:

that it's not about individual performance in this case. We are so used to thinking about individual performance, historically, from, from management, but this is actually [00:11:00] not what software engineering is about. Software engineering is a team effort. And in the end, the team performance matters. So we cannot, like, when we measure by code output, for example, it might be that an engineer doesn't output any code because they are more in a navigator position in a pair programming or team programming session, because they are more experienced and distribute their knowledge to other team members.

Then their productivity in that metric, uh, is low, and measuring the productivity by such a metric would then lead these engineers to not do that anymore, but to code more themselves in isolation. And this is actually the opposite of what you want. So this is a good example of Goodhart's Law in effect, in that it, uh, turns people in the wrong direction.

Aaron Rackley: So in your, um, first blog post as well, you mentioned something called the DORA metrics and the SPACE framework. Do you want to explain a little bit about that?

Tobias Mende: Yeah, I think the, um, the DORA [00:12:00] metrics are pretty well known, um, uh, probably because, since the book Accelerate came out, they gained a lot of popularity.

They are deployment frequency, lead time for changes, change failure rate, and time to restore. And, um, they are, um, they are DevOps metrics. So DORA stands for DevOps Research and Assessment. So, uh, they are metrics of how well a team is doing in terms of, uh, continuous deployment or deployments in general, uh, how many defects they introduce, how quickly they can restore their services, and these kinds of things.

Uh, these are, well, they are labeled, I think, developer productivity metrics. And I think that's also as close as we get to developer productivity there. But I think, in the end, it's not developer productivity. It's just a proxy metric that can make sense for a team to measure, but I don't think it makes sense for others to measure them [00:13:00] globally for all the teams and then compare teams by those.

So they can give a team insights into their own productivity. So as a team, we can decide we would like to measure those metrics and then reflect on: do we like what we see, or do we think we can do better there? Um, so that's, that's DORA. The SPACE framework, um, is a framework that spans five dimensions, uh, and every letter stands for one.

So, um, the S, for example, is satisfaction and well-being. And the SPACE framework, um, was introduced by Nicole Forsgren and others, uh, a while ago to, um, show people what kinds of dimensions to think about when they think about developer productivity. Uh, and it's also relevant for developer experience, because what they said, and a lot of research is backing this, is that satisfaction and well-being

is one of the most important factors when we want to achieve developer productivity. And coincidentally, it's [00:14:00] also one of the most important factors when we want to achieve a high developer experience. It's not a surprise that people who feel, um, well in their job and, um, are satisfied with their work and have a purpose in their work and understand why they are doing what they are doing and are not blocked by too much bureaucracy and policies and, uh, other teams,

are just more productive and also happier in their work. So the SPACE framework, um, I'm not sure if you want to go into detail there, but, uh, yeah, it spans these five dimensions, and, um, that can be used, uh, and I like to use it, for example, when thinking about developer experience surveys. When I create them for clients, the SPACE framework gives me guidance not to go

into one of the dimensions too much, because, for example, the A in SPACE is activity, and, uh, many people tend to go a lot into activity [00:15:00] or into some performance metrics, but they don't go into satisfaction and well-being, or collaboration and communication, which would be the C in SPACE. So, uh, having the SPACE framework, uh, can help leaders to, um, ask better questions when they think about surveys.
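
For readers following along, three of the four DORA metrics named here (deployment frequency, change failure rate, and time to restore) can be computed from a simple deployment log along these lines. The record format and numbers are illustrative assumptions, not output from any real tool, and lead time for changes is omitted because it would also need commit timestamps.

```python
from datetime import date

# Hypothetical deployment log: (day, caused_failure, minutes_to_restore).
# All fields and values are made up for illustration.
deployments = [
    (date(2024, 5, 1), False, 0),
    (date(2024, 5, 2), True, 45),
    (date(2024, 5, 3), False, 0),
    (date(2024, 5, 6), False, 0),
    (date(2024, 5, 8), True, 30),
]

# Length of the observed window in days, inclusive of both endpoints.
window_days = (deployments[-1][0] - deployments[0][0]).days + 1

# Deployment frequency: deployments per day over the window.
frequency = len(deployments) / window_days

# Change failure rate: share of deployments that caused a failure.
failure_rate = sum(1 for _, failed, _ in deployments if failed) / len(deployments)

# Mean time to restore, averaged over the failed deployments only.
restore_minutes = [mins for _, failed, mins in deployments if failed]
mttr = sum(restore_minutes) / len(restore_minutes)

print(f"frequency: {frequency:.2f}/day, CFR: {failure_rate:.0%}, MTTR: {mttr:.1f} min")
```

As Tobias says, numbers like these are proxies a team can track for itself; the sketch only shows how mechanically simple they are, which is also why tools compute them so readily.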

Aaron Rackley: Okay. So there's a lot, a lot to, um, digest there. So we've got DORA, which I've got up because I'm not going to remember them, like you said, but you've got deployment frequency, lead time, change failure rate, time to restore, that kind of stuff. And then we have SPACE, which, tell me if I'm getting this wrong,

'cause I may not be looking it up correctly, but: satisfaction, performance, activity, communication, efficiency. So that's a lot of metrics. Can too many metrics be a burden?

Tobias Mende: Yeah. So the SPACE framework, um, those are not metrics but dimensions, and [00:16:00] you can measure every dimension, um, with multiple metrics or none at all. Uh, the S, for example, satisfaction and well-being, we can measure by, by asking questions in the survey, right?

Activity, there are a lot of things you could measure there. And there are metrics that might fall in there that, um, don't actually make sense. For example, I could measure how many commits are made, but maybe that's not a relevant metric for activity, um, that's good to measure. So I would really see the SPACE framework

as something to support a discussion around what's important for us to measure, so that we cover all of those five, um, dimensions and get a good insight into the developer productivity and experience in our teams. And, uh, then, for example, the answer can also be: we have a, um, a survey, uh, that covers most of those dimensions.

So we don't need a [00:17:00] lot of metrics to measure, because, um, of course, when we have a lot of metrics to measure, then, um, we also need to make sure, okay: how do we make sense of all of that? How do we put that all together, uh, when this metric goes up but another metric goes down? Is that good? Is that bad? Or does it depend on something else?

So, uh, like also in the past, I haven't really seen that metrics created a lot of value and clarity for most engineering teams, because teams very much know where they lose time, what frustrates them, what their bottleneck is. For example, do they have a lot of incidents, or do they have flaky tests that always interrupt them?

Or do they have too many meetings? The teams know. So the question is, if they already know, and we can figure that out with the teams together, then do we really need a metric to prove that once we remove those bottlenecks, it gets better? Or can we [00:18:00] just ask the team again how they now feel about their work?

And most of the time, I find that this gives us much more, uh, insight and a much, uh, clearer picture than looking at all those metrics, because they go up and down and there's, there's some noise in there. People go on vacation and the deployments go down. People come back and the deployments go up. All these kinds of things.

It's a lot of data and a lot of noise. And in the end we are thinking, okay, what does this metric tell us? Probably it's going fine. Oh no, maybe it's going up and down a bit too much. So maybe we have a problem here. Not sure yet. And then what are you doing then? Of course, you're asking the team, uh, how it feels.

You look into the retrospective and see what are the topics that are coming up, these kinds of things. So I personally wouldn't spend too much time on finding the right metrics to measure. I would straight away ask the teams. Okay.

Aaron Rackley: Yeah, no, that's super, super [00:19:00] interesting, because yeah, I'm a big believer in retros and having conversations with the team to figure out what's going on.

But obviously you don't have to wait for a predefined, um, time slot to do that. Um, what kinds of things have you found, um, developers are giving you feedback on, and which of them do you think are harder to solve than others?

Tobias Mende: So, um, of course a lot. Uh, some of those that are easier to fix are usually like: oh, this build takes too long.

Or these tests are too flaky. Um, like, the technical things are usually the ones that you can fix more easily, because then you dive into the build and find out where the issue is, why it is slow, what you can do about it, and then you, then you fix it. And that can have a huge boost in productivity.

Uh, and also, of course, satisfaction. Some things that are more difficult to change are [00:20:00] of course, when we come to organizational structures, for example, when you have, uh, different departments and you have the engineering teams in one, and then you have an operations team, because you are still not understanding how to do DevOps.

Uh, and then, uh, the, uh, engineering teams hand over their deployments to the operations teams in, in a meeting, uh, once a week or twice a week or every two weeks. Uh, just for the operations team to run those deployments and operate the system. And of course, that's frustrating for engineers because they cannot move to faster deployments.

They are always blocked by another team before they get feedback from production. And, uh, they need another meeting to hand over this deployment, this artifact, to another team. And this is, of course, something that's super frustrating, that's slowing teams down a lot, and has a huge impact. Uh, and changing that, of course, involves at least two teams, sometimes even different managers, and, uh, sometimes [00:21:00] even two areas of an organization in the worst case.

And you need to get everybody on board to understand why this is a problem and what we could do instead, for example, developing the operations team more into a platform team that provides self-service capabilities for doing deployments and doing the monitoring and so on. So that they provide the infrastructure and the support, um, to, to learn about those, but they don't do the actual deployments.

And then we can remove the bottleneck, right? Then we increase the flow of the engineering team. So, uh, in the end, this is also where team topologies come into play. The goal is that, um, an engineering team or product team can, can own a value stream or a slice of a value stream, from the customer idea to, uh, delivering the artifact to the customer, without any dependencies

on other teams where we need to wait for other teams, because this is, this is frustrating. But, uh, fixing that and [00:22:00] changing the team structures and how they interact with each other, that's a lot more difficult, because there we are not only dealing with code and CI pipelines, but we are dealing with humans, human relationships, team structures, sometimes also responsibilities, and, uh, like, how many teams belong to a manager.

So there's also personal interest involved. Okay. Uh, that makes it difficult.

Aaron Rackley: yeah, no, it makes total sense. Um, so we've identified that we have problems. We just identified those metrics that we should or shouldn't be using. Um, and we know that ultimately it comes down to shaping the environment where the engineers, uh, I think the phrase you might have used was something like, um, outstanding experience, um, and outstanding work.

So they have the perfect place to work and produce what they need to produce. Um, so you are an agile coach. Is that a technical agile coach? [00:23:00] Is that correct? Um,

Tobias Mende: I'm a little bit of everything. I'm, I'm, I'm a technical coach, uh, for agile software teams. Um, I'm also a consultant, um, around technical and engineering leadership, and, uh,

Aaron Rackley: So if I was to hire in a technical agile coach to help with this, what would be

the kind of things they would be looking at doing? Because I know that on your blog post, you link to a developer experience assessment, I think, that you do. So maybe you could talk about what that is.

Tobias Mende: Yes. So what I realized when talking to companies is that, of course, everybody's now measuring DORA metrics.

Unfortunately, our tools made this really, really easy. But what is far more difficult for people is to, uh, develop surveys to understand the developer experience of their teams and, um, to make sense of the data they get. So, um, [00:24:00] when, when you develop and run a survey, of course you have different challenges. First, understanding what kind of questions should I ask?

Because I just can't ask everything, otherwise nobody will answer the survey, because it takes two hours to answer all the questions. And then once you have the survey, you need to get it out to everybody, um, to the target audience, like all the engineers and maybe other people in the product area, to get feedback.

And then you have the question: is there a lot of engagement with the survey? Otherwise the data would not be relevant. And the developer experience assessment is, uh, my service that helps companies to do exactly this. So first of all, it starts with a briefing with one of the leaders of the company who wants to have that for their area, and we understand, okay, what

kinds of, um, topics could be interesting. But, um, I'm not only, um, listening to the leader; I also talk to four people in the organization to understand what they, what they see, like different perspectives on, [00:25:00] on the engineering organization from certain people. And then finally, with all that knowledge and my own expertise, of course, I will create a survey that is tailored to the organization.

And then, uh, execute it with them inside the organization and create a report of the results and my suggestions. So this also should help with, uh, selling developer experience improvement initiatives to non-technical folks, because something else that I noticed is that, uh, engineers, and sometimes also tech leads, are very good at understanding the technical issues and solving them.

But they are sometimes struggling with, uh, communicating them to non-technical people who, in the end, make the decision. So sometimes you have a non-technical manager who then thinks: yeah, but why should we do this? We have business priorities, we have this feature, and we assume this feature gives us revenue X.

So, um, then the question is how this developer experience initiative fits in. And quite often that's why those initiatives, uh, do not happen or are [00:26:00] deprioritized forever. Uh, because,

uh, engineers cannot make a valid business case out of those. And that's kind of where my assessment also helps: to show how does that impact the productivity? How much time do we waste? How much does it affect the frustration of engineers? And maybe, what are the most important things that we can address first? And of course, then, if there is desire from the client, I'm moving in with them into a long-term partnership to help their teams remove those bottlenecks.

Um, to train the people on what we found out is missing. So that's then the next step. But the first step is knowing what, what the issues are, and then I move in. Um, you also asked about the technical coach and what they should do. Is that right? Yeah. I would say it depends a bit on what kind of technical coach you are looking for.

So what I, what I do with clients when [00:27:00] I work as a technical coach with teams is that I'm collaborating with them a lot and pairing with them on their day-to-day work challenges, to also see where they struggle. Is it that they are unsure how to test a certain thing? Or is it that they again have some issues with CI/CD, for example, where I observe that builds take very long, or that they have a very complicated branching model? Or is it that there are some power dynamics within the team that influence their collaboration, so that, um, there are people not speaking up or just sitting there quietly in collaboration and also meetings?

So it's a mixture of technical coaching on, on the coding, on architecture, design, testing, automation, all these kinds of things. Uh, and on top of that there is also, of course, the agile software perspective. And to me, it means I'm asking a lot of questions, sometimes unpleasant questions, like: why do you estimate story points? Or, [00:28:00] um, why do you do dailies? Uh, which doesn't mean that I say nobody should ever do dailies, but it means: have you thought about why you are doing them and what the value for you is?

And, um, so, yeah, I'm observing them, kind of, um, and working with them, uh, as an external kind of technical lead, uh, looking into the factors that, uh, improve their technical excellence, but also, of course, looking into the factors that improve satisfaction and happiness, and with that, then also retention of engineers.

So these are kind of the topics that I would cover there. And, um, in general, uh, for a technical coach, I would, um, check with them if they have the experience that we need in the organization. So, if they know the tech stack, or at least, um, have seen a lot of tech stacks, so that they can, um, provide valuable insights and an outside perspective,

and, uh, ask, uh, [00:29:00] interesting and thought-provoking questions for the engineering team. And then, of course, um, yeah, it's also, uh: do I like this person? Uh, and, uh, do I think I can trust this person? Does this person fit in, from how they work, their, um, attitude? So this is very helpful. And of course also: have they worked on similar projects? For example, if I have a

technical agile coach who has always only worked on iOS apps with teams in the past, they might not be the right technical coach for a team that's working on backend infrastructure or a software-as-a-service product, because the challenges there are a lot different, of course.

Aaron Rackley: Yeah, no, I think, um, as a technical leader myself, it's definitely given me a lot to think about, because again,

companies, like you say, are obsessed with metric-driven reporting. And, um, I think you as a technical [00:30:00] leader can kind of slip into that behavior. And what you're saying to me today really makes total sense: how we, as technical leadership, forget the business as a whole for a moment, but as technical leadership, we should definitely be looking at how we can make our teams better.

Um, and I don't mean better in a negative sense, right? I mean just: everyone's happier, everyone's more productive, everyone's just working towards the same goal, right? So what I will 100 percent do is I will share the link to your unblocked.engineering website, because the blog posts on there, there's just so many of them, and they're all great for deep diving into all this information, and we wouldn't have enough time in a podcast to go through all of this stuff that you've got. But, um, on your website, I noticed that you obviously offer services. You offer,

potentially, workshops and talks and things like that. So I'm definitely going to try and sign [00:31:00] up to one of these workshops, come and learn something. Um, do you want to just, uh, tell everyone about the unblocked.engineering website a little bit more and what that is?

Tobias Mende: Yes, sure. Um, so unblocked.engineering is the business that I started, uh, basically at the beginning of this year. Um, before that, I was working as a software engineer and tech lead as a freelancer. But unblocked.engineering came from,

uh, my realization that there is a need for, um, coaching, mentoring, consulting, and training around, uh, technical excellence. Uh, and that involves the developer experience and developer productivity topic, which is very close to my heart, of course, but it involves everything else, from the day-to-day work of engineering teams, uh, up to software architecture, team structure, technical leadership.

And, um, I finally bundled all of this together in, uh, a package that's called the Excellence and Happiness Partnership. Uh, and this [00:32:00] package is kind of my offer for any engineering team, uh, in software companies between 15 to 70 people in product and engineering, that, uh, want a partner at their side to improve their technical excellence, their engineering excellence, and the happiness of the engineering team.

And that's kind of a long term engagement with me as a partner for all those topics and everything we talked about and of course, much more. Um, yeah, if that's interesting, you can look it up too.

Aaron Rackley: Yeah, no, I'm pretty sure there will definitely be people here listening going, I need to go and find that out.

Um, now, um, obviously we could go on about this for ages, and, um, I highly recommend everyone to go and read the blog post that obviously brought me to you, which is Why We Need to Stop Measuring Developer Productivity, and it's split into two parts, but I think you have a longer-form version of it on your Substack as well.

Um, [00:33:00] so I'll, I'll link to both of those, and then if they've got any questions, they can come and speak to you. So one thing I ask all my guests that come on to the channel, uh, is to recommend a book, and the book doesn't have to be technical; it can be your childhood favorite story, it could be anything. But some people give a couple, so feel free to do that as well.

But basically on my podcast website, I have a bookshelf and all the books go on there. So.

Tobias Mende: Okay. So, um, then the first book, uh, that I would like to recommend is, uh, Reinventing Organizations by Frederic Laloux. Uh, it's about organizational design and, um, uh, like, stages of organizational development. And for everybody who only knows classical hierarchical companies, that can be very eye-opening, to see what other companies are out there, how [00:34:00] they can work.

For example, without hierarchical management and leadership. And for me, it was just, um, very eye-opening to see, uh, how that can be possible. I worked in a company that partially, um, used some of those ideas as well, for example, a, um, decision-making process that was very, very great, because it involved everybody and allowed everybody to bring in their tensions and propose a change to the organization.

And so that's a really cool book, and it's really well written. I can highly recommend that. And, of course, then, um, because you already mentioned it: the Accelerate book is still, still really a great book to read, um, on, um, uh, DevOps, basically. But it's also interesting for everybody who is in software engineering and thinks: how can we develop better software faster?

So, um, it's cool. And it's backed by a lot of research. So that's another good book I can recommend. [00:35:00] Awesome. And I can, can go on if you like.

Aaron Rackley: I'll tell you what, give us one more, one more. Let's do it.

Tobias Mende: One more. Okay. Then I need to choose which one. Um, okay. Maybe, um, let's, uh, oh yeah, let's, uh, take this one, because it's also a leadership podcast.

So Leadership Is Language by, um, L. David Marquet is about how we can lead, and how what we say and what we don't say influences, uh, our way of leading teams. So it's really cool, and I really love his story. He is, um, he is, or he was, the captain of a US submarine, and, um, quite a hierarchical, um, organization, you would say. And he also wrote another book, an earlier one, Turn the Ship Around!, which is quite popular, where he moved from this classical command and [00:36:00] control kind of leadership to a more intent-based leadership approach.

So both of those two books, I can definitely recommend.

Aaron Rackley: Okay, perfect. I've written them down. They'll go on the bookshelf. Awesome. Now, um, before you go, um, could you give everyone a little rundown of where to find you online? Um, you know, socials, website, et cetera.

Tobias Mende: Sure. Um, uh, obviously, uh, unblocked.engineering is, uh, my website.

So there you can find me. Uh, and of course also on LinkedIn; it's Tobias Mende. Um, maybe we can also link it. Um, those are the most popular channels where you can get in touch with me kind of immediately. Awesome.

Aaron Rackley: Perfect. Well, I really appreciate you coming online to talk about this subject matter. It's something I'm really now obsessed with and I need to go away and read a lot more.

And I'm definitely going to add all those books to my, uh, to my Amazon [00:37:00] and get them ordered. Um, yeah, so really appreciate it. And, um, hope to speak to you again soon. Yeah.

Tobias Mende: Thank you for having me. It was a lot of fun and I really enjoyed being on this podcast.