Hi Erin. This looks like a good investigation. I will try to look at your configuration more closely later, but here is an initial thought regarding the impact of process error (model misspecification) and measurement error.
Your original data came from a multitude of actual processes that are more complex than anything we can represent in an SS configuration. We try, but all models are approximations. When we do a parametric bootstrap, we take one possible configuration and parameter set, use it to create a population, and sample, completely at random, from that population. So I do not think it is surprising that the distribution of estimates from the bootstrap samples is narrower than the distribution from a profile on the original data.

Another consequence of the "all models are approximations" axiom is that the fit to the original data probably has some pattern in the residuals, sometimes called data conflicts. These patterns mean that some gradients remain unresolved in the final model fit because the model lacks the flexibility to resolve them. When we then fit the model to bootstrap data, there are no inherent data conflicts (all data are random), so the average parameter set across the bootstrap runs can differ from the original parameter set that is based on real data. That difference is itself informative about the existence of such data conflicts. I assume you have seen the papers by Hui-hua Lee, Kevin Piner, Mark Maunder, and me in this regard.
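To make the first point concrete, here is a minimal toy sketch (plain Python/NumPy, deliberately far simpler than a Stock Synthesis run, and not meant to represent your configuration). The true data-generating process has structure (AR(1) autocorrelation, phi = 0.7 chosen just for illustration) that the fitted iid-normal model cannot represent, so a parametric bootstrap from the fitted model understates the true sampling variability of the estimate:

```python
# Toy illustration (NOT the SS workflow): a parametric bootstrap can
# understate uncertainty when the fitted model is misspecified.
# Truth: AR(1) errors. Fitted model: iid normal. We bootstrap from the
# fitted (wrong) model and compare spreads of the estimated mean.
import numpy as np

rng = np.random.default_rng(1)
n, n_rep, phi = 100, 2000, 0.7  # phi: AR(1) autocorrelation in the truth

def ar1_sample(n, phi, rng):
    """Generate an AR(1) series with stationary marginal variance 1."""
    e = rng.normal(size=n) * np.sqrt(1 - phi**2)
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

# Sampling distribution of the mean under the TRUE (AR1) process
true_means = np.array([ar1_sample(n, phi, rng).mean() for _ in range(n_rep)])

# One observed dataset; fit the misspecified iid-normal model
y = ar1_sample(n, phi, rng)
mu_hat, sigma_hat = y.mean(), y.std(ddof=1)

# Parametric bootstrap FROM THE FITTED MODEL (iid normal, no conflicts)
boot_means = rng.normal(mu_hat, sigma_hat, size=(n_rep, n)).mean(axis=1)

print(f"SD of mean under true process:        {true_means.std():.3f}")
print(f"SD of mean from parametric bootstrap: {boot_means.std():.3f}")
```

With positive autocorrelation, the true SD of the mean is roughly sqrt((1+phi)/(1-phi)) times the iid value, so the bootstrap spread comes out clearly too narrow. The same mechanism, in far more complex form, is what I suspect is at work in your steepness comparison.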
I hope this helps. The difference in spread for the steepness parameter does seem greater than I would have expected, so I do hope we can find something to help resolve this difference, or at least understand it more fully.
Rick