copycat82 confuses a macro with a reduction. A macro is a design aid, and it is not verifiable unless its name and/or shape is replaced with its content.
A reduced subnet, by contrast, is representable with a primitive element of the employed net formalism (Petri nets, E-nets, etc.).
A reduction is a performance aid for the reachability test (it improves marking dimensionality/complexity), as well as a possible design tool. Macros are more free-form. The relative advantages of macros versus reduced subnets, for design, are discussed elsewhere at this site. This page concentrates on verifying, rather than designing. In a single sentence, though: reduced subnets, when they are easy to design with, are preferable over raw macros for design, too, because they reduce the need for memorizing the intricacies of each macro. Good macro-design rules of thumb may help, though. We discuss these elsewhere. (See NN73, and SARA, too.) SARA lets you design with macros (SARA SL), and, without designer intervention, automatically reduces the net for verifier performance.
Reductions need serious thought. Either theorems that restrict what counts as a valid reduction (as VD78 lists), or algorithms that reduce a net (as the UCLA/SARA strong-reduction algorithm does), may do. Ignorance will not do. It is chaotic.
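To make the performance point concrete, here is a minimal sketch of a marking-graph exploration. This is our own illustration, not SARA's algorithm nor anything from VD78; the function name, the dict-based net encoding, and the toy nets are all assumptions for illustration. A two-transition sequence and its single-transition reduction are explored side by side; the reduced net yields a smaller reachable marking set, which is exactly why a reduction is a performance aid for the reachability test:

```python
from collections import deque

def reachable_markings(transitions, initial):
    """Breadth-first exploration of a Petri net's marking graph.

    `transitions` maps a name to a (consume, produce) pair of dicts,
    each dict mapping place -> token count. Returns the set of
    reachable markings, each encoded as a sorted tuple of pairs.
    """
    start = tuple(sorted(initial.items()))
    seen, queue = {start}, deque([start])
    while queue:
        marking = dict(queue.popleft())
        for consume, produce in transitions.values():
            # A transition is enabled when every input place holds
            # enough tokens; firing moves the tokens.
            if all(marking.get(p, 0) >= n for p, n in consume.items()):
                nxt = dict(marking)
                for p, n in consume.items():
                    nxt[p] -= n
                for p, n in produce.items():
                    nxt[p] = nxt.get(p, 0) + n
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append(key)
    return seen

# A three-place sequence: p1 -t1-> p2 -t2-> p3 ...
full = reachable_markings(
    {"t1": ({"p1": 1}, {"p2": 1}), "t2": ({"p2": 1}, {"p3": 1})},
    {"p1": 1, "p2": 0, "p3": 0})

# ... and its reduction, fusing the sequence t1;t2 into one
# transition t12, with the intermediate place p2 gone.
reduced = reachable_markings(
    {"t12": ({"p1": 1}, {"p3": 1})},
    {"p1": 1, "p3": 0})

print(len(full), len(reduced))  # the reduced net has fewer markings
```

On this toy pair, the full net reaches three markings and the reduced net two; on real nets, fusing subnets this way is what shrinks the state space the verifier must enumerate.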
Let alone any such methodological, general-case studies ("in the name of founding a methodology"): in fact, copycat82 is immensely faulty even at the very examples it itself publishes. That is, design errors go uncaught, and are blindly published. Even a high-schooler would not be expected to design charts and macros that way.
Still worse is the attempt to verify those macros without any method to identify and expand them - as macros must be. In fact, copycat82 attempts to verify each of those macros separately. That would require the VD78 approach, i.e., properly reduced subnets, with its restrictions.
The macro hierarchy is verified piece by piece, imitative of (level one of) VD78, with a reachability test that loops over the separate modules, separately. That is fault-prone. There is no real verification there.
In other words, although the VD78 requirements would demand more rigidity than Macro E-nets, copycat82 loosens even what Macro E-nets already contained, all while it (implicitly) claims to be similar to VD78 - to do the reachability test with the submacros separately, instead of on a single net. (NN73 had suggested expanding all macros into a single net.) Next, copycat82 publishes no proofs to support that outrageous (implicit) claim, and we can readily tell that it is provably wrong, anyway.
You could, at first, think the macro hierarchy imitates SARA SL, but in fact, copycat82 erroneously assumes the unity of subnets, i.e., the independent verifiability of each of them, which needs the reducibility/unifiability of a subnet to stand as an ordinary transition. SARA SL itself is for building a hierarchy of macros, too, similar to Macro E-nets. A SARA SL node is not equivalent to a SARA control-node - unless special restrictions, such as those VD78 lists, or such as those the SARA/UCLA strong-reduction algorithm applies, reduce that macro to a single control-node.
A net formalism is employable to abstract a real-world system, represented with the net elements: events and conditions. This is the definition of meaningfulness to the verifier, and to those who know the rules of that net formalism. A reduced subnet redoes the abstraction after the model is ready. (On this page, Petri nets is the relevant net formalism, except where E-nets is specifically noted.)
copycat82's "component"s turn out to be only vague macros, in need of replacement for a verifier to make sense of them, as NN73 had suggested for macros. Replacing only the i/o macros would not suffice. Not to mention that those (trivial) i/o macros have their own grave faults.
copycat82 proposes that subnets remove the enabling token(s) immediately after the execution of the subnet starts (page 46). This means the input places are instantaneous, because a subnet is "started to be executed as soon as" it is enabled (page 60).
If there are no input macros at the entrance, or if the entrance is an "and" (defined on pages 45-46, implemented on page 129), tokens are lost altogether until the subnet/transition finishes. In the case of the other input macros, the input places still lose their tokens, but the internal place - similar to the input location just preceding an E-net transition - may hold the token until the subnet finishes.
From the other side, this also means that those transitions at the entrance must be "zero-wait."
That is a very confused behavior to state about the input places. Not to mention that a Petri net verifier would neither apply that "modeling assumption" of skipping a wait at the input places (instead generating a lot of spurious states, not meaningful to the model), nor make sense at all of "lost tokens" in those cases where there is no internal place to hold the token.
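The token-conservation point can be checked mechanically. Below is a small sketch - our own illustration, with assumed place names and an assumed two-step start/finish encoding of a subnet - contrasting the E-net-like reading, where an internal place holds the token until the subnet finishes, with the reading that copycat82's pages 46/60 suggest, where the enabling token is consumed at start and parked nowhere:

```python
def fire(marking, consume, produce):
    """Fire one transition: check enabling, then move the tokens."""
    assert all(marking.get(p, 0) >= n for p, n in consume.items()), "not enabled"
    nxt = dict(marking)
    for p, n in consume.items():
        nxt[p] -= n
    for p, n in produce.items():
        nxt[p] = nxt.get(p, 0) + n
    return nxt

m0 = {"in": 1, "busy": 0, "out": 0}

# Reading 1 (E-net-like): "start" parks the token in an internal
# place, and "finish" later moves it to the output. Every reachable
# marking keeps the same token count.
m1 = fire(m0, {"in": 1}, {"busy": 1})
m2 = fire(m1, {"busy": 1}, {"out": 1})
print(sum(m2.values()) == sum(m0.values()))  # True: token conserved

# Reading 2 (no internal place, e.g. the "and" entrance): "start"
# consumes the enabling token and produces nothing. The token count
# drops; the token is "lost altogether" until the subnet finishes,
# and the verifier sees a spurious, token-poorer marking.
m1b = fire(m0, {"in": 1}, {})
print(sum(m1b.values()) < sum(m0.values()))  # True: token vanished
```

Under the second reading, the intermediate marking is not meaningful to the model, which is exactly the verifier's problem stated above.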
In copycat82/83, there are a total of five input macros (a limited, predefined, trivial bunch), with logic-element-like names such as "and," "or," "xor," "another or," and "the 'another or' with priority," three of which have sticky activation. That is, they display a memory effect that disrupts the future functioning of the same "logic element" once a particular combination occurs. (The other two are, trivially, basic elements of Petri nets already, and one of them, the "another or," is another pointer to plagiarism, as its degeneracy implies.) As a result, let alone a "component"s internal functioning, the whole myth of well-isolatedness, of "component"ness, evaporates at the very gate of it. What else may follow?
There is worse. Once enabled, a "component" may release any number of tokens out. (See where it discusses Fout, and its "superscript" (repeated-release) operation.) In other words, it may continue token output forever. This violates the Petri net single-token-per-arc rule at the output side. How would the verifier/reader make sense of it? How would it cope with the enormous explosion in complexity, even if the verifier is hacked to accept such a "possibly continues forever" case? Keep in mind that, for the reachability test, copycat82 adopts the VD78 first-level-only strategy of neglecting the data and the procedures. In other words, the Da80 relief of reducing complexity while compressing nets, by employing the X-transition of E-nets, is not available.
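The complexity point can be illustrated, too. In this sketch - ours, and the self-loop encoding of "once enabled, keeps releasing" is our assumption about how copycat82's repeated-release Fout would have to look as a net - the release transition never disables itself, so a marking with one more output token is always reachable, and the reachable set is infinite:

```python
def fire(marking, consume, produce):
    """Fire one transition: check enabling, then move the tokens."""
    assert all(marking.get(p, 0) >= n for p, n in consume.items()), "not enabled"
    nxt = dict(marking)
    for p, n in consume.items():
        nxt[p] -= n
    for p, n in produce.items():
        nxt[p] = nxt.get(p, 0) + n
    return nxt

# "active" is consumed and re-produced (a self-loop), so the release
# transition stays enabled forever, emitting one more "out" token on
# each firing: every marking {active: 1, out: k} is reachable.
m = {"active": 1, "out": 0}
for _ in range(5):
    m = fire(m, {"active": 1}, {"active": 1, "out": 1})
print(m)  # {'active': 1, 'out': 5} -- and nothing stops it at 5
```

An exhaustive reachability test over such a net cannot terminate; the marking set has no bound to enumerate up to.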
Yet, in other words, copycat82, when plagiarizing, has again made a wrong selection. This is the usual pattern in copycat82. It cuts and pastes from elsewhere, and it always picks the unfitting alternative(s).
And even worse, there is the possible compounding of multiple-token-release-per-single-enabling with the vague (undiscussed, unpublished, although advertised) existence of ADTs. If the ADTs are allowed to transfer tokens, then, in case of exceptions, a token could appear anywhere. A verifier could never make sense of such an "any token may appear anywhere, even if its previous paths were never enabled" case.
The name "component," in copycat82, is also a bit different from what we would expect in a software-engineering context, and/or when compared to well-isolated layered-network architectures such as ISO OSI, because something buried within several layers may lead to a deadlock after another layer is placed on top of the existing layers. Forget about any well-isolation.
In other words, if copycat82 has any claim of OSI relevance in any positive sense, that is only another of its false claims. Instead, for example, Da80 had published about nets (Petri nets, E-nets) as employed for networked, layered architectures.