NN73 refers to an alternative in representation: a "transition procedure involving statistical terms" (p.724). For experimental studies that model real-world phenomena as events, such an idea would fit. An experimenter may run enough rounds (sample points), and then do statistics. e.g: A transition-procedure with a Poisson/Gaussian/etc. distribution, with the experimental results afterward processed through ANOVA, linear regression, etc. This is the interpreted-abstraction, for evaluating data (internal states) at run-time, within net structures.
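As a minimal sketch of such a statistical transition-procedure (the function names and the rate parameter are hypothetical, not from NN73), the firing delay of a transition may be drawn from an exponential distribution, i.e: a Poisson arrival process, and the experimenter then runs many rounds and does simple statistics on the samples:

```python
import random
import statistics

def transition_procedure(rng, rate=2.0):
    """Hypothetical transition-procedure: the firing delay is drawn
    from an exponential distribution (Poisson arrivals at `rate`)."""
    return rng.expovariate(rate)

# Run enough rounds (sample points), then do statistics on the results.
rng = random.Random(42)
samples = [transition_procedure(rng) for _ in range(10_000)]
print(f"mean delay  ~ {statistics.mean(samples):.3f}")   # expect ~ 1/rate = 0.5
print(f"stdev delay ~ {statistics.stdev(samples):.3f}")  # exponential: stdev ~ mean
```

Heavier processing (ANOVA, linear regression) would be applied to such collected samples in the same way, outside the net itself.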
Da80 studies interpreted-abstraction as a strategy to extend the verifier with procedures/data. This compresses the net with interpreted-abstraction, although it is still possible to verify without the data, because Da80 preserves the net-element functionalities. For example, an X-transition may have a lot of preferences implemented in it, but it must send out a single token, through one or the other output path. i.e: The amount of tokens does not fluctuate within an element, and tokens do not get lost in the middle of a net to appear elsewhere, either. Da80 is a good strategy that is easy to reconcile with the VD78 strategy, too. This is not doable with the extra-vague macros/"component"s of copycat82.
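The token-conservation point can be sketched as follows (a hypothetical rendering, not Da80's notation): however elaborate the internal resolution procedure is, the X-transition consumes exactly one token and emits exactly one token, on exactly one of its two output paths:

```python
def x_transition(input_place, out_a, out_b, resolve):
    """Hypothetical X-transition: consume one token, emit that one token
    on one of the two output paths. The resolution procedure `resolve`
    may implement arbitrary preferences, but the token count is fixed."""
    token = input_place.pop()                           # consume one token
    (out_a if resolve(token) else out_b).append(token)  # emit one token

# Usage: route tokens by an internal preference implemented in `resolve`.
inp, a, b = [{"thirst": 3}, {"thirst": 0}], [], []
total_before = len(inp) + len(a) + len(b)
while inp:
    x_transition(inp, a, b, resolve=lambda tok: tok["thirst"] > 0)
assert len(a) + len(b) == total_before  # no tokens lost, none created
print(a, b)
```

A verifier may therefore ignore the data inside `resolve` entirely and still check the flow properties of the net, which is the point of preserving the net-element functionalities.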
Da80 studies E-nets, and introduces the X-transition of E-nets to time Petri nets. It contains a lot of ideas. e.g: The equivalence of the X-transition, with its (E-net) procedures, to the two levels of an FSA architecture. e.g: Compressing the net with interpreted-abstraction. e.g: Context-tipped-records, similar to the variant-records of Pascal, or the unions of C. And a variety of data-communications-representation ideas.
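A Python analogue of such a context-tipped-record (the record names here are hypothetical) is a tagged union: the variant actually stored plays the role of the discriminant field of a Pascal variant-record, or of a type tag paired with a C union, and the interpreter dispatches on it:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Arrival:          # one variant: a token arriving at some time
    time: float

@dataclass
class Measurement:      # another variant: a sampled data value
    value: float
    unit: str

Record = Union[Arrival, Measurement]

def describe(rec: Record) -> str:
    # Dispatch on the variant stored, i.e: interpret the record
    # according to its context tip.
    if isinstance(rec, Arrival):
        return f"arrival at t={rec.time}"
    return f"measured {rec.value} {rec.unit}"

print(describe(Arrival(1.5)))            # -> arrival at t=1.5
print(describe(Measurement(37.2, "C")))  # -> measured 37.2 C
```

The same storage thus carries differently-shaped data depending on context, which is what compresses the representation.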
An E-net transition has two levels: the (visual) network-flow and the (internal) data. Watching the flow of transition activations (the token flow) on a computer monitor is the visual part. The internal-events (null-events) are the parts that are not visible, but we would like our model to reflect their side-effects. [Da80] The third level of expression is the full real-world activity, and it is not relevant to the modeling itself; otherwise, we would be capturing it, at least, as part of the internal-events modeling (that is, in the data). For a verifier/interpreter, the third level is useless, although it is the level the human who runs the verifier thinks of. For application purposes, it may be relevant, too. For example, the model that best fits the preferred criteria may be picked, and the associated event-schedule may be applied in the real world, through system-supplied function-calls to run computer-controlled gadgets, and/or to run software (e.g: send an e-mail).
For example, when the transition named "John is thirsty." leads to the transition named "John is drinking water." visually, what happens in the internal-events may be the subtracting of the price from John's treasury, and the adding of the quantity of liquid consumed to his daily-liquid-consumed records. Whether he liked its taste, or whether he was talking with anyone while buying the drink, must have been irrelevant. Instead, the modeling is interested in the data that will tell how to proceed after that. For example: "Is John still thirsty, after drinking a glass of water?" This may be a base for further action.
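The two levels of this example can be sketched as follows (the field names, prices, and quantities are hypothetical illustrations, not from Da80): the visible step is the transition firing, while the internal-events are the side-effects on John's data, plus the one datum that tells how to proceed afterward:

```python
def drink_water(state, price=1.0, quantity=0.25):
    """Hypothetical internal-events behind the visible step
    "John is thirsty." -> "John is drinking water."."""
    state["treasury"] -= price          # subtract the price from his treasury
    state["daily_liquid"] += quantity   # add to daily-liquid-consumed (litres)
    state["thirst"] -= 1                # one glass quenches one unit of thirst
    return state["thirst"] > 0          # "Is John still thirsty?"

john = {"treasury": 10.0, "daily_liquid": 0.0, "thirst": 2}
while drink_water(john):                # the answer is a base for further action
    pass
print(john)   # -> {'treasury': 8.0, 'daily_liquid': 0.5, 'thirst': 0}
```

Taste and conversation are, as stated above, simply absent from the state: only the data that drives further net activity is kept.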