This page offers ready answers to common questions about the reduced-net work of the mid-1980s. Readers may have further questions, too:
Two reasons favor reduced-nets.
The reachability test grows more expensive with larger nets; run-time and state-space (marking) complexity explode with dimensionality. To verify more than toy problems, we may reduce the net first.
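A minimal sketch of why this matters: enumerating the reachability set of even a small net shows the marking count exploding with net size. The net encoding and the `fork_net` example below are hypothetical, chosen only to make the growth visible.

```python
from collections import deque

def reachable_markings(transitions, initial):
    """Breadth-first enumeration of a Petri net's reachability set.

    `transitions` maps a name to a (consume, produce) pair of dicts
    (place -> token count); markings are sorted tuples of (place, tokens).
    The visited set is exactly the state space a verifier must explore."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        marking = dict(queue.popleft())
        for consume, produce in transitions.values():
            # A transition is enabled when every input place has enough tokens.
            if all(marking.get(p, 0) >= n for p, n in consume.items()):
                m = dict(marking)
                for p, n in consume.items():
                    m[p] -= n
                for p, n in produce.items():
                    m[p] = m.get(p, 0) + n
                nxt = tuple(sorted(m.items()))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

def fork_net(n):
    """Hypothetical net of n independent components; each doubles the state space."""
    transitions = {f"t{i}": ({f"a{i}": 1}, {f"b{i}": 1}) for i in range(n)}
    initial = tuple(sorted((f"a{i}", 1) for i in range(n)))
    return transitions, initial

for n in (2, 4, 8):
    transitions, initial = fork_net(n)
    print(n, len(reachable_markings(transitions, initial)))  # 4, 16, 256 markings
```

Each added independent component doubles the marking count, which is the explosion that reducing the net first is meant to tame.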
Ready-reduced subnets, such as VD78 well-formed subnets, are easy to re-use. There is no memory load in remembering their internal-state intricacies; that job is already finished.
As an alternative to reductions, macros may offer convenience, too. Macros serve the net modeler, though, not the verifier. Before a verifier can verify a net, any macros in it need to be replaced/expanded with the equivalent subnet, as NN73 suggested. If that step is skipped, the net may be ill-structured and not verifiable.
Well-formed, or free-formed: the two approaches go well together, too.
VD78 works with well-formed subnets, which, when reduced (abstraction), are equivalent to ordinary primitives (Petri net transitions).
SARA reduces afterwards, with its SARA/UCLA strong-reduction algorithm: it finds reducible (primitive-equivalent) patterns in a net and replaces them with the primitive.
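A minimal sketch of one such pattern rewrite. The rule below (fusing two transitions joined by a private internal place) is a classic reduction rule chosen for illustration, not SARA's actual pattern library, and the net encoding is equally hypothetical.

```python
def fuse_series(transitions):
    """Apply one reduction step: fuse a transition pair joined by a place.

    `transitions` maps name -> (inputs, outputs), each a frozenset of places.
    A place is internal when exactly one transition produces it, exactly one
    consumes it, and the consumer reads nothing else; the pair then behaves
    like a single primitive transition.  Returns True if a fusion happened."""
    producers, consumers = {}, {}
    for name, (ins, outs) in transitions.items():
        for p in ins:
            consumers.setdefault(p, []).append(name)
        for p in outs:
            producers.setdefault(p, []).append(name)
    for p, made_by in producers.items():
        used_by = consumers.get(p, [])
        if len(made_by) == 1 and len(used_by) == 1 and made_by[0] != used_by[0]:
            t1, t2 = made_by[0], used_by[0]
            if transitions[t2][0] == frozenset({p}):
                ins1, outs1 = transitions.pop(t1)
                _, outs2 = transitions.pop(t2)
                # The fused transition hides place p inside the abstraction.
                transitions[f"{t1}+{t2}"] = (ins1, (outs1 - {p}) | outs2)
                return True
    return False

# Two transitions in series collapse into one primitive:
net = {"t1": (frozenset({"a"}), frozenset({"p"})),
       "t2": (frozenset({"p"}), frozenset({"b"}))}
while fuse_series(net):   # repeat until no pattern matches
    pass
print(net)  # {'t1+t2': (frozenset({'a'}), frozenset({'b'}))}
```

Running `fuse_series` in a loop until it returns False is the many-round reduction: each pass may expose new patterns that the previous pass created.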
Pe77 shows abstraction/refinement done manually, as a modeling example, much as the SARA/UCLA strong-reduction algorithm reduces a net. VD78, instead, lists a few restrictions for identifying (reducible) well-formed subnets.
Both strategies may work in many rounds. The algorithm may loop until no reducible patterns remain; the human may model by successive refinement of nets. Building a well-formed subnet as a (reduced) event (a primitive, a transition) yields a more informative net without any shifting or shuffling of the already-built net structure.
It resembles expanding/collapsing a folder (showing or hiding the subfolders and files in it) in the tree view of a GUI shell, e.g. Microsoft Windows Explorer.
The VD78 strategy is most profitable when the human knows what he/she wants, and how he/she would implement it, to improve performance: VD78 allows hand-crafted compression. SARA is best for novices, although if its patterns/heuristics suffice to optimize the given problem, it is an easy tool for any of us. The re-compression time of SARA is probably negligible, except for very large nets.
In any case, the VD78 strategy is readily applicable to SARA modules, too, when they fit the well-formed-subnet norms of VD78. The human may verify such subnets a piece at a time.
As a bit of extra research, we may even find new patterns/heuristics to implement in the automatic reduction algorithm by studying the VD78 rules. Many reduction patterns are also storable by the automatic reduction algorithm itself, without any human intervention, if it identifies the most-often-reduced net structures and is able to re-use them: either as new patterns to match against subnets, or at least as a heuristic to favor the search paths that have led to the most profitable reductions so far.
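One way such self-learning could look, as a hedged sketch: a driver that counts which rewrite rules succeed most often and tries those first on later nets. The `AdaptiveReducer` name and the rule interface (a callable returning True when it rewrote something) are assumptions for illustration, not any tool's actual design.

```python
from collections import Counter

class AdaptiveReducer:
    """Reduction driver that favors historically profitable rules (sketch).

    Each rule is a callable net -> bool, returning True when it rewrote
    something.  Success counts persist across nets, so rules that paid
    off before are tried first next time."""
    def __init__(self, rules):
        self.rules = list(rules)
        self.hits = Counter()

    def reduce(self, net):
        progress = True
        while progress:
            progress = False
            # Most successful rules first; fall back to the rest.
            for rule in sorted(self.rules, key=lambda r: -self.hits[r.__name__]):
                if rule(net):
                    self.hits[rule.__name__] += 1
                    progress = True
                    break
        return net

# Toy rules over a list standing in for a net:
def drop_pair(net):
    if len(net) >= 2:
        del net[:2]
        return True
    return False

def drop_one(net):
    if net:
        del net[:1]
        return True
    return False

reducer = AdaptiveReducer([drop_one, drop_pair])
reducer.reduce(list(range(6)))
print(reducer.hits)
```

The same `hits` table could equally seed new stored patterns: structures that were fused most often become candidates for direct matching.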
Note: Global vs. local optimization. It probably does not hurt to reduce subnets separately first, as long as the full net is reduced again after all the submacros in it are replaced. But it may hurt, too, if the reduction algorithm favors some reductions over others. In that case, the minor improvement in the subnet-reduction phase may prevent a major (global-scope) improvement. This is the case especially if the reduced subnets are mapped through some extra gate logic, i.e., information might get lost. This concern is about not-fully-reduced subnets (i.e., macros stored with such a minor reduction), not about (primitive-equivalent) reduced subnets.
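The order-sensitivity is easy to demonstrate with a toy string-rewriting analogy (not an actual net reduction): an eagerly applied minor rule can destroy the larger pattern a major rule would have matched.

```python
def reduce_greedy(text, rules):
    """Repeatedly apply the first matching rewrite rule, leftmost occurrence."""
    changed = True
    while changed:
        changed = False
        for pattern, replacement in rules:
            if pattern in text:
                text = text.replace(pattern, replacement, 1)
                changed = True
                break
    return text

# Favoring the minor rule ("ab" -> "x") first blocks the major one
# ("abc" -> "y"), leaving a larger result:
print(reduce_greedy("abc", [("ab", "x"), ("abc", "y")]))  # xc
print(reduce_greedy("abc", [("abc", "y"), ("ab", "x")]))  # y
```

The first ordering locks in a local gain and ends at two symbols; the second reaches the smaller global result. Subnet-local reduction followed by a full-net pass can fall into the same trap when the local pass consumes structure the global pass needed.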