A few years ago I visited a vendor that my company (at the time) was partnering with on a project. We were doing some truck testing to make sure their engine still operated well once it was integrated onto the back of our aircraft’s fuselage—though that’s not quite relevant to this story (and yes, I do plan to talk at some point about how, for all its janky appearances, truck testing is incredibly useful).
Walking through their facility, I found this sign, and I was struck by how relevant and timely it was.

In the weeks prior, my coworkers had been talking, and complaining, to me about this attitude among their project teams: analysis results were needed so urgently that there “wasn’t time” to slow down, do the double and triple check, and get things right. They felt forced to zip through setting up the simulation, running it, and processing the data.
And what happened with those results?
They ended up unusable because they were full of errors. Nothing you could fault the engineers for; just choices an intelligent human made that would have benefited from a once-over check before the analysis ran.
So a whole lot of that analysis had to be rerun. The results they needed right now ended up taking twice as long to get anyway. And that’s not just twice as much time: it’s computing resources tied up, unavailable for other work, for twice as long. Twice as much money spent on the hours needed to sort out the problems and fix the setups, if not more. And twice as much irritation for everyone involved.
I often champion moving fast, making choices that are “good enough” at the time. And that’s still true. But that’s on a macro level, when you need to make decisions that propagate through multiple disciplines and dependencies.
In some cases, you need to admit the limitations of the information you do have and make a choice. Or you have a lower-fidelity model, but you know you’ll have data to make a higher-fidelity one later, so you push ahead with the intent to go back and improve your inputs when you’re able.
But the actual calculations, the analysis that informs those macro decisions? Those deserve the extra time to make sure you’re asking the computer to do exactly what you want it to do. To look at your setup with a fresh pair of eyes and be sure your choices make sense. To rework a few lines of code to be a bit more precise, a bit faster to compute, and a bit more reusable.
In all cases, it should be a conscious, deliberate choice to continue forward with your given assumptions and knowledge gaps.
But if you’re doing something knowing full well you’re rushing, because your lead engineer is breathing down your neck, or you want it done before you leave for the day, or you promised results for the meeting in ten minutes:
Stop.
Take a pause.
Admit and apologize for the delay.
And take the time to do it right, the first time.