ČASTI MODELU INFORMAČNÝCH A ZABEZPEČOVACÍCH SYSTÉMOV
PARTS OF MODEL OF INFORMATION AND SAFETY SYSTEMS



Introduction
When analysing and synthesising an information and safety system, two basic approaches exist:
● For the given functions, a structure of technical and program elements of the system is selected based on the designer's experience from previous applications. The selection depends on the technological level of the system components and on the total sum the customer is willing to pay. The control task is composed of "standard elements".
● The demanded functions of the system are thoroughly specified in connection with the controlled system, regardless of the future composition of HW and SW components. In the next step the function model is created, on the basis of which the control task is defined as a partial task of the primary process control. When creating the function model it is possible to use one of the reference models as a basis, provided that a usual range of services is demanded. For a special range of services, a procedure similar to the one used during the creation of the reference models has to be performed. The layering of functions is an effective tool: this procedure enables the synthesis to be decomposed into simple modules (layers of functions). Provided that the co-operation of the layers is solved, a procedure well known from object-oriented programming can be used in the synthesis. HW and SW components are then chosen for the specified functions; the result depends on the technological level of the used components and on how skilfully the model of functions has been created.
In many applications the first procedure is sufficient. Sometimes, however, a special range of services is demanded in complex and extensive information systems. An example is the safety systems used for the control of critical processes. The transportation process is a typical critical process: when a failure occurs in the control of such a system, it may lead to the loss of property, health or life. The failure that invokes a dangerous state can also be a product of the service system used. It is evident that the functions of such a system need to be specified by the second procedure. The groups of functions in the single layers have to be formed in such a way that stability, causality and safety are guaranteed. The result of the function specification of the whole system has to be the "safety case" for the concrete application.
The activity of an information and safety system can be divided into four basic kinds of services: obtaining, safekeeping, transmission and transformation of relevant, current and guaranteed information (Fig. 1). Each of the partial services is connected with a structure of HW and SW means performing the "technology" of the relevant operation. To enable the system to offer the required range of services, the service elements have to be formed into sequences whose execution is controlled. The characteristics of function execution are strongly dependent on the kind of control, too. Even when forming the sequence of partial services, the technological degree of the system elements has to be respected; for example, the efficiency of a computer network is influenced by the level of distribution of the DB system, the way and time interval of actualisation of the DB replicas, etc. The core of this work is to create a control procedure which begins with the decomposition of the system services into primitives and ends with the protocol specification and the choice of a suitable formalism for its construction. The formation of a model that covers the greatest possible number of functions and system states is considered to be the main problem of the analysis and synthesis of an information and safety system with a special range of services. In this paper we try to present a unifying approach to modelling such systems using some elements of information theory.

Selection of tools for modelling the control task
The selection of modelling techniques for the present technological degree of information and safety systems is based on empirical methods. This approach has covered almost all requirements arising in synthesis and analysis tasks. Examples include function models, data flow models, entity-relational models, protocol models, failure models and further modified models. These models can be managed by special software packages, and thus they are sufficiently effective even in practice. However, they all share a disadvantage which limits their application and effectiveness for information and safety systems of the latest technological degree. In the chain Information theory → System theory → Control theory → Circuit theory, all the mentioned models fail to include the conclusions of Information theory; they cover the conclusions of System theory, and some of them even the conclusions of Control theory. Modelling techniques for large and composite systems should therefore be enlarged at least by the information model. Information is, for all such systems, the primary "substrate" which the system manipulates. The information model therefore has the ambition to be the unifying one for the present special-purpose models.
Let us assume, as a first approximation, that a mechanism for dividing the risk between the safety system and the other traffic control subsystems exists. Then one can speak about the safety level of the safety system itself.
The safety system is a subset of the information system. Its task is to handle the information (acquisition, transformation, transmission and safekeeping) in a specific way, different from other similar activities in the information system. Questions of the structure and behaviour of the safety system can be divided according to its basic processes. A stationary structural function can be used to describe the system (structure and behaviour) without including the dynamics of its states. If the system contains independent and non-correlated elements, it is a monotone structural function. The description of the system is valid for a chosen pair of its states (the safe operating state and the dangerous failure state).
It is simple to pass from the structural function to the probability function, which describes the probability of a single system state at a given point of the time axis.
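The passage from the structural function to the probability function can be sketched numerically. The following is a minimal illustration, assuming a hypothetical 2-out-of-3 monotone system with independent elements; the element reliabilities are invented for the example:

```python
from itertools import product

def phi(x):
    # Monotone structure function of a hypothetical 2-out-of-3 system:
    # the system is in the safe operating state when at least two of
    # its three independent elements work (x_i = 1).
    return 1 if sum(x) >= 2 else 0

def system_reliability(p):
    # Pass from the structure function to the probability function:
    # P(system up) = sum over all element-state vectors x of
    # phi(x) * prod_i (p_i if x_i = 1 else 1 - p_i).
    total = 0.0
    for x in product((0, 1), repeat=len(p)):
        pr = 1.0
        for xi, pi in zip(x, p):
            pr *= pi if xi == 1 else 1.0 - pi
        total += phi(x) * pr
    return total

r = system_reliability([0.9, 0.9, 0.9])  # 3*0.9^2*0.1 + 0.9^3
print(round(r, 4))  # prints 0.972
```

The same enumeration works for any monotone structure function, although for large systems the 2^n sum would be replaced by decomposition or simulation.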
The most important model can be considered to be the one that describes the safety system in its ageing process. Such a model has to permit the selection of the probability distribution of the occurrence of the relevant random parameter (failure) and the calculation of the probability of appearance of a chosen state in the necessary time period.
The safety system is a part (subsystem) of the railway traffic control system. In the control of the traffic process three hierarchical levels can be distinguished: procedural, operational and managerial. The highest risk of hazard occurrence exists on the procedural level. This risk is shared by all parts of the traffic control system. A significant role is played by the safety system since it sets ("calculates") most of the commands given to change the conditions of the traffic process. Provided the command is correct, it results in an operational state. The state is considered faulty if for any reason the command given to change the condition is made incorrectly or misinterpreted. Such a condition can lead (but not in all cases) to an accident which damages property, health, lives and environment. The safety level must therefore be derived from the acceptable hazard rates for the traffic process. This level depends on the protected value and on the traffic process intensity.
For the tasks of analysis and synthesis of the system with a defined level of safety the procedures ensuring the behaviour of the system in all its predictable states must be defined.
These procedures are realised through the defence mechanisms of the safety system. They have to ensure the fulfilment of the demanded functions according to the pre-defined algorithm even in the case of failure. Precautions taken to ensure such system behaviour can be applied on the system level as well as on the level of functional units and system components.
On the system level, it is above all the choice of an appropriate system structure that is involved. Precautions taken on the level of functional units and components aim mainly at fault detection and negation of fault effects.

Fault Classification
The safety system takes part in the following operations of the control scheme shown in Fig. 2:
● obtaining the V, D, R, S quantities,
● analysing the R, Z′, V, D, S quantities,
● producing the C quantities,
● transmission of the required quantities between two places.
A fault may occur in all of these operations. Faults result in an incorrect production of the control C quantity (the command for a change of state) or in misinterpretation of the control quantity (transition to an unauthorised state in the area of S quantities). To define the safety level these faults must be classified and precautions that can guarantee acceptable occurrence (probability or rate) of unidentified or unattended faults must be taken.
All parts of the safety system that obtain the V, D, R, S quantities, analyse the R, Z′, V, D, S quantities, produce the C quantities and take part in the transmission of all the controlled-system quantities can be regarded (almost without exception) as a certain kind of finite automaton. Faults that can occur in the operation of the finite automaton may be classified into the following classes:
a) language faults,
b) faults in the transmission from the input to the output of the finite automaton,
c) format faults,
d) behaviour faults (faults in the causality of performing partial functions),
e) faults in providing services to the superior system (faults in service causality).
For a complete system description we need to create a model that, apart from the mentioned facts, enables us to incorporate the effects of the operational surroundings as well, thus combining two or more random processes which can have probability distributions of a different character.
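Such a combination of random processes with distributions of a different character can be sketched by simulation. The sketch below assumes a hypothetical two-element series system, one element failing by an exponential law (random failures) and the other by a Weibull law (ageing); all parameters are invented for the illustration:

```python
import random

random.seed(42)

def p_fail_within(t, n=50_000):
    # Monte Carlo estimate of the probability that a two-element series
    # system fails within time t, combining two random processes of a
    # different character: an exponential failure law and a Weibull
    # (ageing) failure law. Parameters are purely illustrative.
    fails = 0
    for _ in range(n):
        t_exp = random.expovariate(1 / 2000.0)      # mean life 2000 h
        t_wbl = random.weibullvariate(3000.0, 2.5)  # ageing element
        if min(t_exp, t_wbl) <= t:                  # series system
            fails += 1
    return fails / n

print(round(p_fail_within(1000.0), 3))
```

For this series structure the analytic value is 1 − exp(−t/2000) · exp(−(t/3000)^2.5), about 0.43 at t = 1000 h, which the simulation approximates; for structures without a closed form, the same simulation scheme still applies.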
Supposing that the object states can be described as random variables, the apparatus of information theory can be conveniently used to create the characteristics of the fault flow (during analysis) or to describe the modification of such a flow (during synthesis). The following parts of information theory are involved.

Object states as random variables
Let the object states be regarded as random variables X1, X2, …, Xn. Their characteristics are sufficiently described by the probability distribution function p(x1, x2, …, xn). The variables X1, X2, …, Xn can be identically distributed according to some type of probability distribution. They can be independent, conditionally dependent or statistically dependent. When the probability distribution of the random variables is known, the entropy of the object states can be estimated.
Entropy enables us to describe the object in the necessary form, e.g. during the creation of the code by which the comprehensive object state is described.
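The entropy of the object states is straightforward to compute once their distribution is known. A minimal sketch, assuming a hypothetical four-state object whose state probabilities are invented for the example:

```python
from math import log2

def entropy(p):
    # Shannon entropy H(X) = -sum_i p_i * log2(p_i) of a distribution
    # over object states; the 0 * log 0 terms are taken as 0.
    return -sum(pi * log2(pi) for pi in p if pi > 0)

# Hypothetical four-state object: safe, degraded, failed-safe, hazardous.
p_states = [0.90, 0.07, 0.025, 0.005]
H = entropy(p_states)
print(round(H, 3))
```

The value H gives the minimum average code length (in bits) needed to describe the comprehensive object state, which is the sense in which entropy guides code creation.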
Let us suppose that object states create a Markov chain. The data processing inequality can be used to show that clever manipulation with the data cannot improve the computation of state characteristics.
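This claim can be illustrated numerically. A minimal sketch, assuming binary object states observed through two noisy stages modelled as binary symmetric channels with an invented crossover probability of 0.1 each:

```python
from math import log2

def mutual_info(pxy):
    # I(X;Y) from a joint distribution given as a dict {(x, y): p}.
    px, py = {}, {}
    for (x, y), p in pxy.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

def bsc(p_in, eps):
    # Joint input/output distribution after passing a binary input
    # through a binary symmetric channel with crossover probability eps.
    joint = {}
    for x, px in p_in.items():
        joint[(x, x)] = joint.get((x, x), 0) + px * (1 - eps)
        joint[(x, 1 - x)] = joint.get((x, 1 - x), 0) + px * eps
    return joint

# Markov chain X -> Y -> Z: X uniform, each arrow a BSC(0.1).
p_x = {0: 0.5, 1: 0.5}
p_xy = bsc(p_x, 0.1)
# The cascade X -> Z is a BSC with crossover 0.1*0.9 + 0.9*0.1 = 0.18.
p_xz = bsc(p_x, 0.18)

assert mutual_info(p_xz) <= mutual_info(p_xy)  # data processing inequality
print(round(mutual_info(p_xy), 3), round(mutual_info(p_xz), 3))
```

However the second stage is chosen, I(X;Z) never exceeds I(X;Y): processing the observation cannot add information about the original state.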
Definition: Random variables X, Y, Z form a Markov chain in this order (written X → Y → Z) if the conditional distribution of Z depends only on Y and is conditionally independent of X. Specifically, X, Y and Z form a Markov chain X → Y → Z if the joint probability mass function can be written as

p(x, y, z) = p(x) p(y|x) p(z|y).    (1)

Some simple consequences follow from this:
● X → Y → Z if and only if X and Z are conditionally independent given Y. Conditional independence is implicit, because

p(x, z|y) = p(x, y, z) / p(y) = p(x|y) p(z|y).

This is the characterisation of Markov chains that can be extended to define Markov fields, which are n-dimensional random processes in which the interior and exterior are independent given the values on the boundary.
● X → Y → Z implies that Z → Y → X. Thus the condition is sometimes written X ↔ Y ↔ Z.
We can now prove an important and useful theorem demonstrating that no processing of Y, deterministic or random, can increase the information that Y carries about X.
Theorem (data processing inequality): If X → Y → Z, then I(X; Y) ≥ I(X; Z).
Proof: According to the chain rule, we can expand the mutual information in two different ways:

I(X; Y, Z) = I(X; Z) + I(X; Y|Z) = I(X; Y) + I(X; Z|Y).

Since X and Z are conditionally independent given Y, I(X; Z|Y) = 0. Since I(X; Y|Z) ≥ 0, we get I(X; Y) ≥ I(X; Z). Thus the dependence of X and Y is decreased (or remains unchanged) by the observation of a "downstream" random variable Z.
Note that it is also possible that I(X; Y|Z) ≥ I(X; Y) when X, Y and Z do not form a Markov chain. For example, let X and Y be independent fair binary random variables, and let Z = X + Y. Then I(X; Y) = 0, but

I(X; Y|Z) = H(X|Z) − H(X|Y, Z) = H(X|Z) = P(Z = 1) · H(X|Z = 1) = 0.5 bit.

Application of the error inequality to safety analysis
The application of the error inequality is crucial in the safety analysis of the safety system and its elements.
Estimation of a random variable X (a chosen object state) with small error probability is possible only if the conditional entropy H(X|Y) is small (Y is the observed state of X expressed through an intermediary, which can be, for example, a circuit output signal). The error inequality quantifies this idea.
We now extend the proof that was derived for zero-error codes to the case of codes with very small error probability. The new ingredient will be the error inequality, which gives a lower bound on the error probability in terms of the conditional entropy.
The index W is uniformly distributed on the set W = {1, 2, …, 2^(nR)}, and the sequence Y^n is probabilistically related to W. From Y^n we estimate the index W that was sent; let the estimate be Ŵ = g(Y^n). Let us define the error probability

P_e^(n) = Pr(Ŵ ≠ W).

Next, we define the error indicator E = 1 if Ŵ ≠ W and E = 0 if Ŵ = W. Then, using the chain rule for entropies to expand H(E, W | Y^n), we get

H(E, W | Y^n) = H(W | Y^n) + H(E | W, Y^n) = H(E | Y^n) + H(W | E, Y^n).

Now, since E is a function of W and g(Y^n), inevitably H(E | W, Y^n) = 0. Also H(E) ≤ 1, since E is a binary-valued random variable. The remaining term, H(W | E, Y^n), can be bounded as

H(W | E, Y^n) ≤ (1 − P_e^(n)) · 0 + P_e^(n) · nR,

since given E = 0 we have W = g(Y^n), and given E = 1 the conditional entropy can be upper-bounded by the logarithm of the number of possible messages, nR. Combining these results, we obtain the error inequality

H(W | Y^n) ≤ 1 + P_e^(n) nR.

Since for a fixed code X^n(W) is a function of W, H(X^n | Y^n) ≤ H(W | Y^n), and we arrive at the following lemma.
Lemma (error inequality): For a discrete memoryless channel with a codebook C and the input messages uniformly distributed, let P_e^(n) = Pr(W ≠ g(Y^n)). Then

H(X^n | Y^n) ≤ 1 + P_e^(n) nR.
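The error inequality can be checked numerically on a small example. The sketch below assumes a hypothetical observation channel with four equiprobable messages (nR = 2 bits) that delivers a wrong symbol with probability e, and the trivial decoder g(y) = y; all numbers are invented for the illustration:

```python
from math import log2

def cond_entropy(p_wy):
    # H(W|Y) = -sum p(w, y) * log2 p(w|y) from a joint distribution
    # given as a dict {(w, y): p}.
    py = {}
    for (w, y), p in p_wy.items():
        py[y] = py.get(y, 0) + p
    return -sum(p * log2(p / py[y]) for (w, y), p in p_wy.items() if p > 0)

M = 4                     # |W| = 2^(nR), i.e. nR = 2 bits
e = 0.1                   # channel delivers a wrong symbol with prob. e
p_wy = {}
for w in range(M):
    for y in range(M):
        p = (1 - e) if y == w else e / (M - 1)
        p_wy[(w, y)] = p / M          # W uniform

# With the decoder g(y) = y, the error probability is simply e.
P_e = e
lhs = cond_entropy(p_wy)
assert lhs <= 1 + P_e * log2(M)       # the error (Fano) inequality
print(round(lhs, 3), round(1 + P_e * log2(M), 3))
```

The remaining uncertainty about W after observing Y (about 0.63 bit here) stays below the bound 1 + P_e · nR, as the inequality requires.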
We will now use this lemma, together with the fact that the channel capacity per transmission is not increased if we use a discrete memoryless channel many times, i.e. I(X^n; Y^n) ≤ nC. Since W is uniformly distributed,

nR = H(W) = H(W | Y^n) + I(W; Y^n) ≤ 1 + P_e^(n) nR + nC.    (28)

Reviewers: P. Peniak, L. Skyva
We can rewrite (28) as

P_e^(n) ≥ 1 − C/R − 1/(nR).

This equation shows that if R > C, the error probability is bounded away from 0 for sufficiently large n. Hence we cannot achieve an arbitrarily low error probability at rates above capacity. The inequality is illustrated in Fig. 3. This result is the weak converse of the channel coding theorem. It is also possible to prove a strong converse, which states that for rates above capacity the error probability tends exponentially to 1. Hence, the capacity is a very sharp dividing point at which the behaviour of the error probability changes.
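The behaviour of the rewritten bound (28) can be illustrated numerically. The sketch assumes a binary symmetric channel with an invented crossover probability of 0.1 and a rate R = 0.75 chosen above its capacity:

```python
from math import log2

def bsc_capacity(eps):
    # Capacity of a binary symmetric channel: C = 1 - H(eps) bits/use.
    h = -eps * log2(eps) - (1 - eps) * log2(1 - eps)
    return 1 - h

def weak_converse_bound(R, C, n):
    # Rewriting (28): P_e >= 1 - C/R - 1/(nR) for rates above capacity.
    return 1 - C / R - 1 / (n * R)

C = bsc_capacity(0.1)          # about 0.531 bit per channel use
for n in (10, 100, 1000):
    print(n, round(weak_converse_bound(0.75, C, n), 3))
```

As n grows, the lower bound on the error probability rises towards the positive limit 1 − C/R, so no code of rate R > C can make the error probability arbitrarily small.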

Conclusion
In the conclusion, a procedure for describing safety in terms of the dangerous-error probability, i.e. the probability of occurrence of the random variable Xij, is formulated. P(Xij) denotes the probability of transition of the object from state i to state j, where state Xi belongs to the set of safe states and state Xj belongs to the set of hazardous states. For both sets of states a finer division can be used, which yields a tree-type structure mapping states to random variables.
● Safety is described by the structure of states. This structure can be used to describe the states of the whole object or, in hierarchical order, the states of single elements or object functions.
● For individual object types (the safety system, or the transport route as a whole), a type of dependency among the states (represented by random variables) is determined. As a first approximation it can be assumed that the random variables (object states) form a Markov chain.
● When manipulating the random variables, the consequences of the data processing inequality have to be respected.
● The degree of object safety depends on the probability of a wrong estimate of the random variable X based on knowing the variable Y. Using the error inequality, the requirements for the logical representation of safety functions can be estimated.
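The state-transition view above can be sketched as a small Markov chain. The three states and all transition probabilities P(Xij) below are invented for the illustration, with the hazardous state treated as absorbing:

```python
# States: 0 = safe, 1 = degraded (still safe), 2 = hazardous.
# Hypothetical one-step transition probabilities P(X_ij) of the object.
P = [
    [0.990, 0.009, 0.001],
    [0.100, 0.880, 0.020],
    [0.000, 0.000, 1.000],   # hazardous state treated as absorbing
]

def p_hazard(steps, start=0):
    # Probability that the object, started in state `start`, is in the
    # hazardous state after `steps` transitions of the Markov chain.
    dist = [0.0, 0.0, 0.0]
    dist[start] = 1.0
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return dist[2]

print(round(p_hazard(100), 4))
```

In an actual safety case the matrix P would be derived from failure rates and defence-mechanism coverage, and the acceptable hazard rate of the traffic process would impose an upper limit on p_hazard over the mission time.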