Only inexperienced developers* are unafraid of deploying right before leaving the office.
There’s an entire untapped universe of possible new ways that things can go horribly wrong.
*Experienced developers who hate their boss and their colleagues, too, technically.
Name two, please.
XML
How is that not easily reversible?
It’s not about how hard the problem is to reverse, it’s about respecting the team enough not to call them on Saturday.
Again: if the changes are small enough and you have automated checks in place, they should not require manual intervention.
Plus, what happens if a Thursday deploy has a bug that only manifests on Saturday?
You’ve used the magic word “should”. “Should” is famous last words. The trick to keeping developer talent is not to risk their weekend plans on “should”.
And yes, maybe I’m only risking our cloud ops person’s weekend plans. Same principle applies.
Every change that isn’t already an active disaster recovery can wait for Monday.
I honestly fail to see the difference between “don’t deploy on Friday if this can wait until Monday” and “don’t deploy on the evening if it can wait until the next morning”.
The idea of CD is that changes are small and cheap. No one is saying “it’s okay to push huge PRs with huge database migrations on a Friday”; what is being said is “if your team is used to shipping frequently and incrementally, it won’t matter when you ship and your risk will always be small.”
Both are top tier practices.
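As a rough sketch of what “automated checks” can look like in practice (the script name, the /healthz endpoint, and the release names below are illustrative assumptions, not something anyone in this thread described): the deploy step watches the new release’s health check and rolls itself back, so a bad push doesn’t have to turn into a weekend phone call.

    # Hedged sketch: a deploy gate that verifies the new release and rolls back
    # automatically. deploy.sh, the health endpoint, and the release names are
    # hypothetical placeholders.
    import subprocess
    import sys
    import time
    import urllib.request

    HEALTH_URL = "https://example.internal/healthz"  # hypothetical endpoint
    CHECK_ATTEMPTS = 10
    CHECK_INTERVAL_SECONDS = 30


    def healthy() -> bool:
        """Return True if the service answers its health check with HTTP 200."""
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
                return response.status == 200
        except OSError:
            return False


    def main() -> int:
        # Ship the small, incremental change.
        subprocess.run(["./deploy.sh", "release-1.2.3"], check=True)

        # Poll the health check for a few minutes after the deploy.
        for _ in range(CHECK_ATTEMPTS):
            if healthy():
                print("deploy looks healthy; no manual intervention needed")
                return 0
            time.sleep(CHECK_INTERVAL_SECONDS)

        # The check never passed: undo the change without paging anyone.
        print("health check failed, rolling back automatically", file=sys.stderr)
        subprocess.run(["./deploy.sh", "release-1.2.2"], check=True)
        return 1


    if __name__ == "__main__":
        sys.exit(main())

Of course, a gate like this only catches failures the check can see right away, which is exactly the “bug that only manifests on Saturday” objection above.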
Yep. That’s all great advice.
But I’m just a veteran saying that all the preparation in the world doesn’t compare with simply not inviting trouble right before the evening or the weekend.
Organizations that feel they desperately need to take that risk are doing it because they disrespect their team’s time.
It can be the smallest risk in the world, but it’s still a risk, and it’s a completely unnecessary one (outside of an active, in-progress disaster recovery).
Good question.
Since we’re doing a deep dive, I’ll share some additional context. I’m the manager of the developers. On my team, that means the call comes to me first.
I have had Thursday deploys that resulted in bugs discovered on Saturday. Here’s how the conversation on Saturday went:
“Thanks for letting me know. So we didn’t notice this on Friday?”
“No, it’s subtle.” Or “We noticed, but didn’t get around to letting you know until now.”
“Okay. I’ll let the team know to plan to roll back at 0900 on Monday; then we’ll start fixing any damage that happened on Friday and over the weekend.”
Can I work with you, please? That sounds heavenly.