The Learner Lab

Learning from Good and Bad Outcomes

March 31, 2020 Trevor Ragan, Alex Belser - The Learner Lab Season 2 Episode 2

Oftentimes we judge our processes and decisions by the quality of the outcome. The research calls this outcome bias, and the poker world calls it resulting. No matter what you call it, this can lead to a number of traps that hurt the learning process. The good news: once we understand how it works, we can find ways to avoid resulting and become better learners.

Full Show Notes

Quick Links
Outcome Bias Study
Thinking in Bets by Annie Duke
Casey Yontz’s Twitter
Alex Belser’s Website
Trevor Ragan

Speaker 1:

It's the middle of winter; a blizzard hits. Jackie, give us some blizzard sounds. Thank you. There are two people, Sarah and Doug. They both have a trip planned to a destination a couple of hours away. However, with this storm comes a no-unnecessary-travel

Speaker 2:

Warning, due to icy road conditions.

Speaker 1:

Basically the government is saying you shouldn't be driving right now. Both Sarah and Doug make the decision to go on their trip anyway. So they made the same choice to brave the elements and go on their trip. Sarah makes it to her destination safely. Doug is in a ditch halfway to his destination, waiting on AAA to come tow him out. Both of them

Speaker 2:

Chose to drive in these whiteout conditions. Right?

Speaker 1:

Right. But okay, let's put ourselves in Sarah's shoes. Sarah's hanging out with her friends. She feels like, yo, I'm so glad I made the choice to go. I made a good decision to come on this trip because I made it, right?

Speaker 2:

How do you think Doug's feeling? Doug's over here like, why did I decide to drive in this?

Speaker 1:

I'm an idiot. I should have listened to the warning. Now think about this, though: they made the same decision. Sarah thinks it was a great decision and Doug thinks it was a crappy decision, but it was the same one.

Speaker 2:

So this story is an example of what's called the outcome bias. What's happening here is that we end up judging our process based on the outcome it leads to. Right? We

Speaker 1:

Assume, like, oh, the quality of my decision is based on what happened. Right? Sarah, good decision; Doug, bad. But if you just zoom out, they made the exact same decision. Okay. So we can see how outcome bias is shaping how they're judging their decisions. But this is actually a bigger phenomenon that affects all of us in big and small ways. And when it comes to learning, this is, I think, an essential tool to be aware of. So that's what we're going to do.

Speaker 3:

Welcome to The Learner Lab podcast. I'm Trevor Ragan. And I'm Alex Belser. Each week we're going to explore a topic to help us become better learners. If you're interested in more, you can check out thelearnerlab.com for videos, articles, and more podcasts. Let's go.

Speaker 1:

It's February 1st, 2015, Super Bowl Sunday. The Seattle Seahawks are on the one-yard line, 27 seconds left, down by four. Russell Wilson begins his cadence for what will become one of the most infamous plays in Super Bowl history.

Speaker 4:

[inaudible]

Speaker 1:

During the game, the announcers weren't huge fans of this call.

Speaker 4:

I'm sorry, but I can't believe the call. I cannot believe the call. In the weeks that followed, it seems like everyone agreed on one thing. One of the dumbest calls, offensively, in Super Bowl history. That was the worst play call in Super Bowl history. The worst play call I have seen in the history of football. That was the most idiotic. If I live to be 200, I'll never see anything as dumb in my life. It was the worst play call in history.

Speaker 1:

I think, in a big way, this Seahawks Super Bowl and the Pete Carroll goal-line call is a huge example of outcome bias. We're literally using the outcome, the result, to judge his decision and process. So the funny thing is that the commentators during the game, and sort of the narrative the next day, said this was the worst decision in football history, in Super Bowl history, the worst play ever called. But the stats show otherwise. We found two great articles, one on FiveThirtyEight, where they break down how it was actually a really smart move. Like, it was better time management; it gives us more plays.

Speaker 2:

The reason that people were saying it was the worst decision was because the outcome was poor. Overweighting

Speaker 1:

The outcome. And a good sort of thought experiment to prove the point: if they score on that pass play, Pete Carroll is a legend, right? So sneaky. So smart. This is the greatest. So it's the same decision, but the way we judge it is totally based on the outcome.

Speaker 2:

This is also reflected in the scientific literature. There was an original study done in the 1980s by two researchers named Baron and Hershey. They wanted to know how knowing the outcome impacts our judgment of a process. So they came up with a bunch of medical scenarios where doctors were having to perform surgeries, and they would present people with these scenarios. But what changed from person to person is that some of the surgeries were successful and others were not; those ended with the patient dying. After receiving all that information, people rated the quality of the decision. What they found is that the positive-outcome group rated the quality of the decision-making higher, whereas for the people who got negative outcomes, it was seen as poor decision-making. But the decision was exactly the same. The only thing that was different was the outcome: one was poor and one was positive. Sarah

Speaker 1:

And Doug all over again. It's the Pete Carroll example. It's like, because it didn't work, it was a bad choice. If it would have worked, it would have been a great choice. Outcome bias. And this happens literally during sports, during a particular play, with the parents in the stands. I shoot a three and miss: oh, bad shot, you've got to pass the ball. But if I make it: yeah, Trevor, give it to him! It's like we're putting all the weight into the outcome. It's kind of a trap that we fall into in different ways. And if we're looking at learning and performance as a whole, there are a lot of traps we can fall into. One could be this overconfidence, and in the research they call it conflating luck with skill. So it's like, oh, I drove during the winter storm warning and made it; I'm a skilled winter driver. When the truth is, maybe so, but also you were lucky, right? And then you flip it: if on the next trip I end up in the ditch, it doesn't necessarily mean I'm not skilled. It's like, well, luck was at play. There are a lot of other things; it's messier than it's all luck or it's all skill. Right.

Speaker 2:

And this makes me think of how a lot of people believe that flying is more dangerous than driving. Sure. When in reality, flying is actually safer, statistically. Yeah, statistically speaking, it's safer. But when we're driving, we think that we have more control, and we do have control over our own car. What we don't have control over is everything else that's happening. Right? So we sort of conflate luck and skill in that scenario. That's a great example.

Speaker 1:

Professional poker player Annie Duke wrote a fantastic book called Thinking in Bets, and early in the book she talks about how in the poker world, when people fall into the trap of outcome bias, they actually call it resulting. For today's episode, we actually have a professional poker player,

Speaker 5:

My name's Casey Yontz, professional poker player, and I got 53rd place in the Main Event in 2012,

Speaker 1:

To talk to us about how resulting works in the poker world and some different strategies that we could use to avoid it.

Speaker 5:

This is a concept that, uh, almost epitomizes poker, the outcome bias, where if people win, they won because of pure skill, and if they lost, it was because of bad luck. And so you attribute the results: you know, if I'm winning, it's because I'm just a skillful player who made good decisions. And if I lost, well, I got unlucky and, you know, I didn't deserve it. So

Speaker 1:

In the poker world, this would be like, uh, I go to Vegas with my friends, I enter a Texas Hold'em tournament, and I win. Right. So I come home and it's like, oh, you're a great poker player. It's like, I am a great poker player. I should quit my job and go pro. Right. But I think that's a great example of, like, maybe some of it had to do with skill, but there was also a lot of luck at play.

Speaker 2:

You were certainly drawing the right cards. Exactly.

Speaker 1:

But if I overweight that outcome, that I won the tournament, therefore I'm amazing, this is that overconfidence, and it could send me down the wrong path. That's a big example. A small one could be, I fell into this trap in college. Um, I would depend on these all-night benders where I'd study for a test, and then I'd pass the test. And so I'm just like, oh, I can just rely on these benders. In some classes I wouldn't go to class, because I'm like, well, I can just stay up all night the night before the test and do well

Speaker 2:

Enough to pass. Right? Like, you've sort of assumed that because you were able to pass that test the first time, that's the process you should stick with the next time, and you don't have to go to class to get

Speaker 1:

It. I think, one, that's not sustainable, and it wasn't for me. Two, it's messing me up in my other classes. Three, I'm definitely not learning as much as I can from this approach. And so, with this overconfidence trap, what happens is that when we see the good outcome, we don't really dig into the process, like, was the process actually optimal? There are probably some things we did well, but there's also some luck at play. There might be some things we could improve, but if we overweight the outcome, it's like, oh, that worked, I don't need to dissect my process. Right. Think of all the growth opportunities I'm missing. Or I just totally depend on this non-optimal process, which is robbing me of learning in these other classes and of getting the most out of the class I'm in. Right. So that's kind of sacrificing these long-term things I should be focused on for short-term outcomes, and that's all a by-product of weighting the outcome too much. Another trap, I think we can call it rinse-and-repeat syndrome. It's like, okay, we won, I discovered the exact formula, change nothing. And this happens a lot with successful coaches. It's like, we won the state championship in 1998, therefore I run the same plays with the same drills, the same clipboard; don't change anything. So this rinse-and-repeat is totally assuming that everything in my process led to this outcome and I shouldn't change anything. This makes me more rigid and less open to experimentation and innovation. It happens in the corporate world

Speaker 2:

Right. Well, because this is sort of assuming that there's a set recipe for success. If no luck was involved, maybe, yeah. But we're not baking muffins

Speaker 1:

Here. Exactly. And we haven't talked about muffins yet, but that'll make sense in a second. Now, this isn't always about the positive outcomes; there are traps on the negative side, too. This is sort of the underconfidence trap.

Speaker 2:

This is when you do something, it doesn't end up well, and so you stop doing it.

Speaker 1:

It's like, I assume, oh, that is not the thing to do. It's like, I'm trying to interact more with people, or be more sociable at networking events. I go talk to someone one time and they don't respond well. It's like, screw that. Right.

Speaker 2:

And oftentimes, you know, conversations are never terrible; it's like, oh, that was just kind of awkward. Right. And we sort of assume that because it was awkward, my process was awkward, and I'm just, I'm an

Speaker 1:

Awkward person. Right. That was a bad decision; I shouldn't do it again. So that's the underconfidence. The most common trap of underconfidence is I just stop doing it. But another one that I have fallen into, and that you can see playing out in the world, is: bad outcome means everything we did in the process was bad. It's like, we lost the game, we're terrible, we need to relook at how we practice, the drills we run, the plays we run. But the truth of the matter is that luck is at play. Again, sorry to go back to poker: I can play a hand really well and win. I can play a hand really well and lose. I can play a hand like an idiot and win. And I can play a hand like an idiot and lose. It's not one-to-one; it's not good process equals good outcome, or bad process equals bad outcome. Right? Because there's luck involved. And that's

Speaker 2:

Sort of like the big takeaway for me is that we should be looking at the outcome as a source of information, but it's not the only information that exists.

Speaker 1:

Right. If the world were absent of luck and randomness, resulting would be a tremendous strategy. Right. So think of a game like chess. Yeah, there's not as much luck involved in chess, because there's no hidden information. Right. You know the rules of the game, there's an optimal strategy, and based on what my opponent does, there's kind of a right or wrong choice. A good model showing how chess doesn't involve that much luck: look, none of us listening could go beat a grandmaster at chess. We just can't. Right. But all of us listening could beat the best poker player in the world heads up over a couple of hands, right? Because there's luck involved. There's hidden information; there are cards we don't see. There could be a 2% chance of me winning, and I do, because of luck. That's not the case in chess. Now, Annie Duke does a great job of explaining this in her book, Thinking in Bets. She's like, life is more like poker than chess, because in life, just like poker, there's hidden information, there's uncertainty, there's luck. And when that's the case, we have to avoid falling into this trap of resulting, of putting too much weight into the outcome. Right. Um, a good way to think about it would be: the more luck involved, the less weight we should give the outcome; the less luck involved, the more that outcome is a gauge of the quality of

Speaker 2:

Our processes. And the reason behind this is that it's reflective of what's actually happening. So in chess, if you lose, that is pretty reflective of the decisions that you made. Right. But in poker, you can make optimal decisions and still lose, because there's luck at play. Exactly.
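The chess-versus-poker contrast can be sketched with a quick toy simulation (the win probabilities here are made up for illustration, not real statistics): give the stronger player a 60% chance of winning any single poker hand, and a 99.9% chance of winning a chess game, then see how often the underdog comes out ahead over a short session versus a long one.

```python
import random

random.seed(42)

def underdog_win_rate(p_strong, n_games, trials=10_000):
    """Fraction of simulated sessions in which the weaker player
    wins the majority of games (n_games is odd, so no ties)."""
    underdog_sessions = 0
    for _ in range(trials):
        strong_wins = sum(random.random() < p_strong for _ in range(n_games))
        if strong_wins < n_games - strong_wins:
            underdog_sessions += 1
    return underdog_sessions / trials

# Poker-like: lots of hidden information and luck; the better
# player only wins ~60% of individual hands.
print(underdog_win_rate(0.60, 3))    # underdog takes a short session surprisingly often
print(underdog_win_rate(0.60, 501))  # but almost never over a long one

# Chess-like: almost no luck; the stronger player wins ~99.9% of games.
print(underdog_win_rate(0.999, 3))   # effectively zero even in a short session
```

With luck in the game, a three-hand outcome says very little about skill, while over hundreds of hands the outcome becomes a much better gauge of the process, which is exactly the weighting idea discussed above.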

Speaker 1:

So we call this the muffin-to-lotto spectrum. Great name. Trademarked. We're making hoodies. So if I'm baking a muffin, I have a set recipe, and if I follow that recipe, I'm going to get a pretty decent muffin, right? Because there's really not much chance or luck at all: follow the plan, you get the muffin. Right. If I bake the muffin and it's a disaster, I can weight that outcome, and it is a pretty good reflection that there was something wrong in my process. So it's like, whoops, added double the salt. You have to

Speaker 2:

Be off on something. There's something wrong with your process.

Speaker 1:

So in that case, the process and the outcome are closely tied together, and we can weight that outcome more. Right. Okay. So that's one end of the spectrum: very little luck, put a lot of weight into the outcome. What's on the other side? On the other side we have the lottery. It's like, okay, I win the lottery. That's a great outcome. But I think all of us listening know that is pure

Speaker 2:

Luck, right? Like you chose some random numbers. I can't go

Speaker 1:

Create a seminar and write a book like, here's how to pick the winning lottery numbers. It's like, no, bro. You can't weight that outcome; it is not a reflection of a good process, because it was pure luck. Like, you're not going to come to me and be like, Trev, heard you won the lottery, how'd you pick your numbers? But a lot of us actually do this. We might not

Speaker 2:

Do that with the lottery; I think that example resonates with people. But we do this a lot with other things. We assume that whatever we did to get to a certain outcome, we have to keep doing, even in scenarios where there's a lot of luck involved.

Speaker 1:

So, to use this spectrum, I think it's actually a great tool. The more luck at play, the less weight we give the outcome and the less we can use it to judge our process, right? When the luck is minimized, that relationship is more one-to-one, and we can use the outcome to judge the process. Bad muffin, bad process; fix the process, better muffin. Whatever it is we're doing, we ask: okay, where are we on that spectrum? I would say most things in life that matter and that we care about are going to involve uncertainty, luck, and hidden information, right? So they're swayed more toward the lotto end of the spectrum. Which means we have to be careful about overweighting the

Speaker 2:

Outcome, right? Avoiding this outcome bias. And just to reiterate the value of this whole spectrum, the muffin-to-lotto spectrum: it helps us frame how much we should be weighting our outcomes relative to the process. Absolutely.

Speaker 1:

And this is true in sports and in life. I can create an awesome resume, really prepare for a job interview, put a lot of time and energy into it, and still not get the job. Right. Does that mean everything I did in my process was wrong? No. You could even

Speaker 2:

Have done a really great interview, right? Like you could have had a great resume and a great interview

Speaker 1:

And lost just because of luck. It could be there was just someone better, or the interviewer was in a bad mood, or I rubbed them the wrong way. There's a lot out of my hands. Right. There's luck at play. And so I can't go muffin syndrome on this, like, oh, I didn't get it, therefore the recipe was off. It could have been bad luck. Right.

Speaker 2:

It's also worth noting that we can't just assume that because we didn't get the job, it was bad luck, either. Right. Like, we can't just let our process off the hook because it

Speaker 1:

Was a bad outcome. That's sliding too far down the lottery end. It's like, didn't get the job? Bad luck. Right.

Speaker 2:

There may have been some issues with your interview or your resume. Right. I'm sure

Speaker 1:

There's stuff that could be improved. And then

Speaker 2:

Sort of the message is that we need to be thinking more about how much luck is involved in whatever

Speaker 1:

It is that we're doing. And that's hard; the spectrum helps us. It's not binary, all luck or no luck; it's somewhere in the middle. And that's why this spectrum is super valuable. Right? It's like, we lost the game. Does that mean everything we did to prepare was bad? No, there are probably some things we actually did well, but when we overweight the outcome, sometimes we don't see the growth or the things we did well. And then you flip it: it's like, oh, we won. But actually there were some cracks in our process; we just happened to get lucky and win the game. So again, we're just trying to be more rational, or objective, about it: okay, this happened, let's weigh it properly and start to, not diagnose, but dissect the process in a more objective, helpful way. And we're never going to know it all. We're never going to know exactly why we didn't get hired for the job, but we can be objective and look at some things that went well,

Speaker 2:

And some things we can fix. We can help ourselves become more aware that all of the information does not exist just in the outcome. And we can

Speaker 1:

Avoid falling down those two slippery sides of the spectrum: like, oh, it was all bad luck, or, everything I did was wrong. Right. Okay. One thing we've got to be clear on: we're not falling into this Twitter cliche of, it's all about the process, not the outcomes. That's not what we're saying. Honestly, the outcome is a great source of feedback and information. It's a measuring stick. We're just saying, don't overweight

Speaker 2:

It. Right. We just need to be aware of those two things. Absolutely.

Speaker 1:

Even when luck is at play, the outcome is useful information. It's like, hey, we lost the game; some of it was luck, but some of it could have been process-related. So we're not saying it's all the process, not the outcomes. It's like, no, weigh it properly. Be objective. Use it in a way that can inform the process, but remember that luck is at play. That's all we're saying. Hopefully we've done a good job of showing that, positive or negative, this can get in the way of learning opportunities: the overconfidence trap, the underconfidence trap, chasing short-term wins while sacrificing long-term growth, doing something once, it doesn't work, so I never do it again. This is all robbing me of reps and experiences and learning opportunities left and right. So now the question is, okay, now that we know the outcome bias is at play, and it's perhaps more powerful than we realize, how do we sidestep it? What do we do? Step one to avoiding outcome bias is to be aware of it. But I think we can do better than that. Right. What are some tools we can use to try to avoid falling into

Speaker 2:

These traps? Yeah. I think the first one, kind of piggybacking on that idea of being aware of it, is understanding how much luck is at play in whatever event or scenario we're taking part in. Right.

Speaker 1:

And it's going to be hard to calibrate, but just remember muffins and the lottery. We're operating in the middle of those two, most likely swayed more toward the lottery. Right. And that's a good gauge of, well, how much weight should we put into this outcome?

Speaker 2:

And then another strategy for this is to increase your sample size. Yeah. So oftentimes, like we're saying, we do something once and then we assume, oh, you know, that's the outcome it's always going to produce, or, I'm never going to do that again. That's your overconfidence or underconfidence. Another helpful way of thinking about this is through the term regression toward the mean: as you increase your sample size, you're going to get closer and closer to your true average. An example of this can be seen in the first few games of a baseball season, right? You'll have a lot of players who might be hitting like an .800 batting average, which is ridiculous, or maybe something like .100, right, not hitting well at all. But what's going to happen is, as you play more games, people are going to get closer and closer to their true average, and most people will be batting .200 to .300.
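The batting-average example can be sketched with a few lines of simulation (purely hypothetical numbers): give every hitter the identical true skill of .250, then compare the spread of observed averages after 20 at-bats versus 500.

```python
import random

random.seed(7)

TRUE_AVG = 0.250  # every simulated hitter has the same underlying skill

def observed_averages(n_at_bats, n_players=1000):
    """Batting average actually recorded for each player after n_at_bats."""
    return [
        sum(random.random() < TRUE_AVG for _ in range(n_at_bats)) / n_at_bats
        for _ in range(n_players)
    ]

early = observed_averages(20)    # a couple of weeks into the season
season = observed_averages(500)  # a full season's worth of at-bats

# Identical skill, but the small sample produces wild outliers:
# someone looks like a superstar, someone looks hopeless.
print(min(early), max(early))

# With a big sample, everyone drifts back toward their true .250.
print(min(season), max(season))
```

The spread after 20 at-bats is dramatically wider than after 500, even though no player is actually better than any other, which is why a couple of attempts at anything is a weak basis for judging your process.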

Speaker 1:

So you can't judge the quality of a player after the first two weeks. It's like, oh, batting .800, best ever. That's ridiculous. We need a bigger sample size. But we have to use the same logic when it comes to us. How are we judging the quality of our process based off one or two attempts? Okay, we approach the person at the networking event and maybe it doesn't go so well. Right. Okay, that was one rep. Before I judge myself and start shaming myself, go try it a little more, because maybe it was just the wrong person at the wrong time in the wrong place.

Speaker 2:

Yeah. And I think this also works for overconfidence, right? If you've done something just once or twice and you do it really well, you might be overinflating your confidence in it. But if you increase the sample size, you're going to bring that down, so it's a more accurate reflection of your actual abilities.

Speaker 1:

The takeaway is simple: get a few more data points. Increase the sample size, play more hands, go through the interview process a few more times before we completely throw our entire process out the door or assume that we discovered the secret sauce. So, tool number one: take into account luck, and understand that there's probably more luck involved than we realize. Tool number two: get a bigger sample size. One upside of that is getting a more accurate reflection of our process. And two, if you're thinking about learning, you're getting more reps, so you're going to be increasing the skill in the process.

Speaker 2:

A third strategy that we can use to sort of curb this outcome bias is to just seek out feedback from people on our process, on the

Speaker 1:

Process itself, the decisions you made, right? The question I was most excited to ask Casey was: yeah, we know what resulting is, we understand outcome bias, but what are some strategies we can use to avoid it? I think his advice is not only relevant for poker players; it's something we can all use, no matter what we do.

Speaker 5:

The biggest thing is keeping an open mind to being wrong. You have to be able to accept that, hey, maybe I was wrong. Maybe there's a different way to play that hand, or maybe I could have done something differently. I really think, you know, for me, I learn by talking and playing. So if you can get buddies and dissect hands, and just be blunt with yourself, be honest with yourself and say, yes, maybe I made a mistake. Don't hold true to, oh, well, I would've lost anyway. There's always something you can learn, and the same holds true even if you win. When you're winning, people tend to think, oh, I won, I played perfectly, I played well. You can fall into the same trouble there, where, yeah, you may have won the hand, but it wasn't a long-term winning play, and that will catch up with you eventually. So you need to be honest with yourself every time you play. Go through the big hands in your head, both the ones you won and the ones you lost, and think: did I play this the best I could? Did I get lucky? Did I get unlucky?

Speaker 1:

It's like, this is what I did; check my line of thinking here. Do you think I made the right choice or not? I mean, they're breaking down these hands like a sports team might watch film. Exactly.

Speaker 2:

So another way that we can elevate this feedback technique: we can hide the outcome, wherever possible, when we're trying to get feedback.

Speaker 1:

That's smart, because we know the person I'm getting feedback from has their own outcome bias.

Speaker 2:

Right, right. They're going to be focused on the outcome. So think back to that original study, where they gave people the decision-making process and then also gave them the outcome. You know: do you conduct surgery here, yes or no, and then did it lead to a good or bad outcome? They

Speaker 1:

Literally in the study, even in the groups that were told, don't overweight the outcome, just judge the decision, they still did it. So even when we're aware of it, we're still

Speaker 2:

Going to fall into that trap. So what this looks like is, if I'm going to try to get some feedback on my process from you, I should tell you: look, here's the scenario. Like in the medical scenario, if I'm choosing to operate, I need to tell you, here's all the information I had, this is what I saw, and this is my decision. Then I ask you for feedback. And only then do I tell you the outcome. It's

Speaker 1:

Like, uh, dude, I just went for this job interview, and, um, it went really well. I got the job. One of the interview questions was this, and this is what I said. What do you think about that? You're probably going to be like, good answer, because you got the job. But maybe it wasn't; it could have been better. Right. So hide the outcome. And I know that's not always easy to do, but if we want valuable feedback, it's definitely the smart approach.

Speaker 2:

Another strategy, one that a researcher from Harvard Business School, Francesca Gino, talks about, is this term counterfactual thinking. And this is just sort of, yeah,

Speaker 1:

Big words, but really it's just the idea of

Speaker 2:

Envisioning alternate realities. What are other outcomes that could have happened? So in the Pete Carroll scenario, right, we talk about what would have happened if they scored. If they scored, does that make it a good decision

Speaker 1:

And him a legend? Right. Great strategy and out-of-the-box thinking. Yeah, innovative. If we realize that, when

Speaker 2:

Our judgment of the decision-making process changes based on the outcome, then we're resulting.

Speaker 1:

We're resulting based on the outcome. Whereas

Speaker 2:

If we don't change our judgment of that decision based on the outcome, then it's probably a fair assessment.

Speaker 1:

We go back to the parent watching the basketball game, the good shot, bad shot. It shouldn't only be, did I make the shot or miss the shot? It's: was I open? Did I shoot with rhythm? Was it the right time and place in the shot clock, or whatever it may be? It's a good shot. It's like, we want you to shoot that. Right. That's a high-percentage shot at the right time and place. Good shot. And that doesn't change if it doesn't go in, if we're really approaching the situation avoiding the outcome bias. Or, on the flip side, I take a terrible shot early in the shot clock with a defender on me, and it just so happens to go in. Right. That doesn't mean you should do that. Exactly. It's like, dude, great that it went in, happy for you, but not a good decision. And if we do too much of that, in the long run we're going to get worse outcomes. So again, counterfactual thinking, is that what you call it? Exactly. The thought experiment is: if the outcome was different, would it change my judgment of the process? In most cases, yes, because we're operating with the outcome bias, and that's what we're trying to catch. The filter we use when we create an episode is, will this help us become better learners? I think the answer for this topic is absolutely yes. A couple of things to keep in mind: this is a skill, and like any skill, we get better through practice. We're never perfect with it; all of us are going to still fall into the trap of outcome bias, but now that we're aware of it, we can practice sidestepping it a little bit more. And honestly, I think this is a powerful one for leaders and learners. This could certainly help the way I approach the failures, the mistakes, and the good outcomes in life, and, as a leader, the way I talk about mistakes, failures, and good outcomes with the people around me. So it's one of those useful tools for the leader and the learner, and I think it's actually very simple and powerful.
Thank you so much for listening. We'll be back next week.