Aloha Rick,
Meg, Felipe, and I are currently working on SS models for American
  Samoa bottomfishes. We've been experimenting with both Fmethod 2
  (Baranov) and Fmethod 3 (hybrid) and running R0 likelihood profiles, among other things.
We've noticed that the Catch component appears in the R0 likelihood
  profile when using F2, which makes sense since, under this SS setup,
  catches can be adjusted based on their CV (so we have expected and
  observed catches with an error distribution, allowing SS to calculate a likelihood).
However, we've also noticed the Catch component appearing
  in some R0 profiles when using F3. This led us to wonder
  how the Catch likelihood is calculated in SS under this approach.
  Isn't Catch fixed, so that there is no observed vs. expected Catch
  on which to calculate a likelihood? Also, we are not using a catch multiplier.
We looked at recent and old SS manuals but couldn't find a reference
  for how the Catch likelihood is calculated in SS. Is there a reference available?
Hopefully this makes sense.
--------------------------------------------------
Rick's reply:
Hi Marc et al.
This is sort of a perennial question, so I clearly need to explain it
  better in the output and manual.  Copying Chantel to help with that.
Catch logL is calculated the same way for methods 2 and 3, and by the
  same code.  The only difference is in how the F used to calculate the
  expected catch is obtained.
With method 2, F is a parameter (or a set of parameters across the
  fleets).  SS3 uses those parameters to go straight to calculating the
  expected catch for each fleet, then to the logL.
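To make that concrete, here is a minimal single-fleet sketch in Python
  of the Baranov expected catch and a lognormal-style catch logL. The
  one-pool setup, variable names, and values are our own illustration,
  not SS3 code.

  import numpy as np

  def baranov_catch(F, M, N, W):
      # Baranov catch equation: numbers caught times mean weight W,
      # given instantaneous fishing mortality F and natural mortality M.
      Z = F + M
      return W * N * (F / Z) * (1.0 - np.exp(-Z))

  def catch_logL(obs_catch, exp_catch, catch_se):
      # Lognormal-style contribution: squared difference between
      # log(observed) and log(expected) catch, scaled by the catch se.
      return 0.5 * ((np.log(obs_catch) - np.log(exp_catch)) / catch_se) ** 2

  # With method 2, F is an estimated parameter, so the expected catch
  # rarely equals the observed catch exactly and this term is non-zero.
  F_par = 0.25                                 # hypothetical F parameter
  exp_c = baranov_catch(F_par, M=0.2, N=1.0e6, W=1.2)
  print(catch_logL(obs_catch=2.4e5, exp_catch=exp_c, catch_se=0.05))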
Method 2 includes an option to do method 3 in early phases and then
  transition to method 2 to finish (which can be faster in high-F,
  many-fleet situations).
With method 3, SS3 does a Pope's approximation (method 1) to get each
  F in the right ballpark and then adjusts it through a few iterations
  to get closer to matching the observed catch.  It then finishes
  through the method 2 code to calculate the final iteration of expected
  catch and then the logL.  At high F and/or with many fleets it may not
  get to an exact match to the observed catch.
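For intuition, here is a toy single-fleet version of that
  tune-then-finish idea. The Pope's-style starting value and the simple
  multiplicative adjustment below are our own sketch of the general
  approach, not SS3's exact hybrid algorithm.

  import numpy as np

  def exp_catch(F, M, N, W):
      # Baranov expected catch, as in the earlier sketch.
      Z = F + M
      return W * N * (F / Z) * (1.0 - np.exp(-Z))

  def hybrid_F(obs_catch, M, N, W, n_iter=4):
      # Pope's-style start: treat the catch as removed at mid-year to get
      # a harvest rate, then convert it to a continuous F as a first guess.
      harvest = min(obs_catch / (W * N * np.exp(-M / 2.0)), 0.95)
      F = -np.log(1.0 - harvest)
      # A few multiplicative adjustments pull the expected catch toward
      # the observed catch; at high F (or with many fleets sharing the
      # same population) the match may still not be exact.
      for _ in range(n_iter):
          F *= obs_catch / exp_catch(F, M, N, W)
      return F

  F_hat = hybrid_F(obs_catch=2.4e5, M=0.2, N=1.0e6, W=1.2)
  # The tuned F then runs through the same expected-catch and logL code
  # that method 2 uses.
  print(F_hat, exp_catch(F_hat, M=0.2, N=1.0e6, W=1.2))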
Newish method 4 provides more flexibility in that some fleets can use
  hybrid and other fleets (particularly high-F fleets) can use F
  parameters starting at a fleet-specific phase.  Method 4 is the clear
  winner in my mind.
----------------------------------------------------
Marc et al.'s reply:
Aloha Rick,
If I understand correctly, method 2 focuses on estimating F as
  independent parameters first. I imagine these estimates are heavily
  informed by size structure data and CPUE trends (and the catch?). It
  then simply uses these F estimates to calculate expected Catch, as you
  said. On the other hand, method 3 is more focused on estimating the F
  values that match the observed catch. Is this roughly correct?
This would match our observation that catch is only minimally
  adjusted when using Method 3 vs. Method 2, for certain species.
For method 4, if we only have a single fleet, would there be any
  difference here? I thought the only advantage of this method is that
  you can assign different F methods to different fleets, correct?
------------------------------------------------------
Rick's reply:
Correct on your last point regarding method 4, but someday I might
  deprecate methods 2 and 3, which really are just more restrictive
  versions of method 4.  All use the same code.
Your first point regarding other data is not really an issue.  Method
  2 uses parameters and method 3 uses internal coefficients.  Both have
  one parameter/coefficient for each catch observation.  Further, method
  2 can start with coefficients in early phases while temporarily using
  method 3, then during BETWEEN_PHASES copy the coefficients into
  parameters and proceed.  I have run tests showing that doing method 2
  with hundreds of F parameters vs. method 3 with the same number of
  coefficients has nil impact on the estimated variance of other
  model-derived quantities, like ending-year spawning biomass.
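A schematic of that bookkeeping, with hypothetical names (this is only
  our sketch of the idea, not SS3 internals):

  # One F value per fleet-year catch observation, whether it is an
  # internal coefficient (method 3) or an estimated parameter (method 2).
  fleets = ["FLEET1"]
  years = range(1967, 2022)
  F_coefficients = {(f, y): 0.0 for f in fleets for y in years}

  # Early phases: tune F_coefficients with the hybrid loop while other
  # parameters move; at BETWEEN_PHASES, copy the tuned values into the
  # estimated parameter vector and let the gradient search refine them.
  F_parameters = dict(F_coefficients)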
We have found one situation in which method 2 vs. 3 matters:  when
  there is discard.  This has come up in SEFSC assessments.
The F (whether parameter or coefficient) produces a total catch, then
  the retention function splits it into retained and discarded catch,
  then the logL for catch compares the retained catch to the observed
  retained catch, and the logL for discard compares the estimated
  discard catch to the observed discard.
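Continuing the toy single-fleet sketch from earlier (again our own
  simplification; the constant retention fraction and the values are
  hypothetical):

  import numpy as np

  def catch_components(F, M, N, W, retention):
      # Total catch generated by F, split by the retention fraction.
      Z = F + M
      total = W * N * (F / Z) * (1.0 - np.exp(-Z))
      return retention * total, (1.0 - retention) * total

  def nll(obs, expected, se):
      # Lognormal-style negative log-likelihood term.
      return 0.5 * ((np.log(obs) - np.log(expected)) / se) ** 2

  retained, discarded = catch_components(F=0.25, M=0.2, N=1.0e6, W=1.2,
                                          retention=0.8)
  # Catch logL compares expected retained catch to observed retained
  # catch; discard logL compares expected discard to observed discard.
  print(nll(2.0e5, retained, se=0.05) + nll(6.0e4, discarded, se=0.30))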
So with method 3, the coefficient adjustment algorithm tunes F to
  match the retained catch, regardless of the catch se value.  The
  resultant F may not match the discard well and that could result in
  the model changing other parameters (recruitment, selectivity, etc.)
  in order to also fit the discard reasonably well.
However, with method 2 the F as parameter will initially not fit
  either catch or discard well in early phases, and the ADMB gradient
  algorithms will slowly adjust the F to get a better fit to both
  retained and discarded catch, with the catch se relative to the
  discard se influencing the relative fit.  In other words, if you have
  discard and use method 2, you should not expect an exact fit to the
  catches unless they have a very tight se.
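A small numerical illustration of that trade-off (a self-contained
  version of the sketch above; the observations, retention fraction, and
  se values are made up): a crude grid search over F stands in for the
  ADMB gradient search, and tightening the catch se pulls the best F
  toward the retained catch.

  import numpy as np

  def catch_components(F, M=0.2, N=1.0e6, W=1.2, retention=0.8):
      Z = F + M
      total = W * N * (F / Z) * (1.0 - np.exp(-Z))
      return retention * total, (1.0 - retention) * total

  def nll(obs, expected, se):
      return 0.5 * ((np.log(obs) - np.log(expected)) / se) ** 2

  def combined_nll(F, catch_se, discard_se, obs_ret=2.0e5, obs_dis=6.0e4):
      # With method 2, the search over F minimizes both terms at once, so
      # the relative se values decide which observation F is pulled toward.
      retained, discarded = catch_components(F)
      return nll(obs_ret, retained, catch_se) + nll(obs_dis, discarded, discard_se)

  F_grid = np.linspace(0.05, 0.60, 1000)
  for catch_se in (0.05, 0.50):
      best = F_grid[np.argmin([combined_nll(F, catch_se, 0.30) for F in F_grid])]
      # A tighter catch se pulls the compromise F toward the retained catch.
      print(catch_se, round(float(best), 3))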
The other difference between method 2 and method 3 is model speed. 
  It is a tortoise and hare situation.
Method 2 is the tortoise: F starts off far from the solution and
  gradually gets better during the model run, but each model iteration
  is quick because there are no fancy hybrid loops embedded in each
  ADMB iteration.
Method 3 is the hare and uses those several internal hybrid loops to
  get the F coefficients very close beginning with the first ADMB
  iteration, so each ADMB iteration is slightly slower.
In low-F situations, method 3 seems to get to the final solution
  faster, but in high-F situations the internal dynamics get very
  nonlinear as the model constantly adjusts the F coefficients to match
  the observed catch.  Consequently, each ADMB iteration tends to move
  the non-F parameters only slowly towards the solution, and method 3
  loses the race because it needs more ADMB iterations.
Hence method 4, which makes it easy to use hybrid in early phases to
  get good starting values for the F parameters, then polish the run
  with F as parameters.
--------------------------------------------------------
Marc et al.'s reply:
Thank you for those extra details; it is very interesting to hear that
  Fmethod 2 vs. 3 can generate a difference in that special edge case
  you described. More specifically, you mentioned "The resultant F may
  not match the discard well and that could result in the model changing
  other parameters (recruitment, selectivity, etc.) in order to also
  fit the discard reasonably well."
We think this can also happen in a situation where the Catch data
  have very high CVs and there is some data conflict with the length
  data. We are running very simple models (one fleet, catch data back to
  1967 with CVs between 20% and 50%, size data from 2006-2021, CPUE data
  from 2016-2021, no rec devs). We find that when there is a conflict
  between the length and catch data, the SS models with Fmethod 2 will
  sometimes aggressively adjust the Catch and F estimates, while models
  under Fmethod 3 won't adjust catch at all (it seems) but play with
  other parameters (e.g., selectivity, leading to worse size data fits).
I have put together the attached example comparing model outputs for
  4 scenarios: Fmethod 3 with Catch CVs from 10% to 50% (i.e., our
  original estimated CVs), Fmethod 2 with Catch CV = 10%, Fmethod 2 with
  Catch CV = 15%, and Fmethod 2 with CVs from 10% to 50%. You can see in
  the attached Word doc how Fmethod 3 doesn't adjust Catch much in the
  model despite the high CVs, while Fmethod 2 adjusts catch more and
  more aggressively with higher CVs. You can also see how the SS model
  under Fmethod 3 tries to resolve the conflict by playing with the
  selectivity parameters (since it's not adjusting the catch), as in your example above.
Note that this is a mild example of catch adjustment. We have worked
  on this model to reduce the catch vs. length conflict by adjusting the
  growth parameters. The catch adjustments were originally much greater
  (the catch estimates in the mid-1980s would be almost doubled).
Please let us know what you think and what you would advise us to
  stick with.
--------------------------------------------------------
Rick's reply:
I am sure you are correct about the case with high catch CV.  With
  Fmethod 2, catch is just data like any other data, and data with a
  high CV will not be matched exactly.  I have seen the same thing with
  the TMB model SAM.
--------------------------------------------------------
Marc et al.'s reply:
Ok, thanks Rick. I think this all makes sense now.
To recap:
1) Both Fmethods generate an expected catch, and that's where the
  likelihood component for "Catch" comes from. It's simply
  hard to see the difference between Exp. and Obs. catches when catch
  CVs are low or when Fmethod 3 is used.
2) For the vast majority of models, which have low-to-moderate Catch
  CVs, using either method generates the same F and Catch estimates (and
  it is recommended to use Fmethod 4 nowadays).
3) With very high Catch CVs and Fmethod 2, there can be large Catch
  adjustments if the catch conflicts with other data types, since the
  catch is treated like any other data in the model.
I hope I captured this correctly.
Again, thanks Rick, your help is very much appreciated!
-------------------------------------------------------
Rick's reply:
Perfect!  Now wishing that we had had this conversation on the VLAB
  forum for all to benefit from.
--------------------------------------------------------
End thread.