APPLICATION OF A PRIORI AND A POSTERIORI ESTIMATE ON RISK ASSESSMENT

The problem of setting the values of, and interconnections between, elements of models in the safety, protection and security field is the biggest obstacle in making crisis management decisions. This article presents a mathematical approach to modifying the expected values and interconnections that occur in models describing a protected system, in order to minimize errors caused by subjectivity. The presented procedures are described through examples of their potential use. The main idea is to focus on improving existing estimates for a better match with reality, rather than producing new estimates, since those would still be burdened by subjectivity-caused errors. Based on this premise, the article attempts to characterize the application of mathematical methods to minimizing subjectivity-caused errors in risk assessment models.


Introduction
In Safety and Security Engineering, as in many areas of modern science and technology, we can see a shift towards testing hypotheses on models rather than in the real environment. It is obvious how dangerous it could be to apply any new measure or process, for example in crisis management, without previous testing or estimation of the consequences. A very effective approach is to create a model of an object, area, state, etc., and to simulate different scenarios [1], [2]. This can be done by various methods for particular results. It must be taken into account that every model is only an approximate abstraction, and therefore inaccuracies can appear. These inaccuracies affect the adequacy of results obtained with such a model. In other words, input errors are often mistaken for errors of the model itself, even though the model in general may be good. Consequently, models are considered useless, or mere tools of theoretical science, and so they are not used to 100% of their potential. On the other hand, if decisions were made uncritically on the basis of false model results, the measures taken might be not only incomplete but harmful as well. Hence, it is important to keep in mind that even the best model gives outputs only as good as the data that come into it. It is thus more than understandable why safety and security engineers distrust such tools and avoid their use. In the following article we attempt to illustrate the most frequent errors and how to avoid them, and to present a mathematical approach to modifying the expected values and interconnections that occur in models. In summary, this article attempts to characterize the application of mathematical methods to minimizing subjectivity.

Modern modelling
In a model there are usually parameters that describe the modelled system. The particular parameter values are determined by two different types of methods, designed either to measure or to estimate the qualitative or quantitative characteristics of the given system. Both types can be combined in one assessment of the system, creating mixed-method research. The most commonly used methods are listed in Table 1:
• Qualitative methods - deal with understanding a given behaviour of a system or quantity through non-controlled observation; they are process oriented and characterized by a high level of subjectivity [3].
• Quantitative methods - search for facts or causalities; they are invasive, result oriented, measurement controlled and objective [3]. These methods are based on measurement and on the use of statistics, as in case studies.
Among these methods, experts' estimates are often neglected. The most common mistake is an insufficient number of expert estimates, resulting from an insufficient number of experts available to deal with the issue under consideration. Clearly, the biggest challenge is to provide enough experts; usually there are no more than 5 experts within reachable distance. Furthermore, all obtained expert estimates need to be further evaluated: it is necessary to remove the factors reducing the informative value of the estimates thus obtained. In practice, the work with experts' estimates and their evaluation is reduced to calculating the mean value or median without any deeper analysis or justification. Consequently, all irregularities are attributed to the "vagueness" of the social, psychological and economic sciences, although it has long been known that even such parameters can be measured.
A potential gap in the proper application of estimations made by experts is the lack of literature on this topic. For instance, there is hardly any monograph on the subject; in the Czech language there is only one [5]. Only a few articles can be found with a direct application of a particular method [6] to a specific problem, but the methods are used ad hoc, without taking into account the appropriateness of the chosen method. This can be another source of vague results. Moreover, every estimation made by any expert is ultimately burdened by subjectivity, and thus by error.

Understanding of estimations
Estimation usually involves data and their statistics. These are used either to approximate one single value, or a small range (interval) in which the value most likely lies. Likewise, a function can be found that describes the behaviour of an unknown parameter based on measurable parameters, which is the matter of this article's interest. Although the values of unknown parameters are assumed to be random, they mostly follow some distribution of the probability of occurrence of such values. This probability of occurrence is called the probability distribution, and it is described by parameters that are measurable [7], [8].
This approach can apparently be used in combination with estimations made by experts, if the experts' estimations x1, ..., xn are treated as measurable parameters: they then play the role of an experiment (a physical measurement), an approach commonly known from its application in physics or biology.
In the application of these methods, errors occur that have a major impact on the accuracy of the obtained values. The most frequent are:
• Measurement errors - errors of the physical measurement type. These are natural errors, and there is a question of optimization between the refinement of measurements and the necessary precision.
• Insufficient total number of experiments or measurement repetitions. The lack of repetition gives data that cannot be statistically analysed. It is well known that fewer than 50 repetitions are statistically insignificant; on the other hand, it is not economically effective to make 50 identical measurements or experiments.
• Replacing experiments with expert estimates without subsequent analysis of these estimates. In cases where even one experiment or measurement would be too expensive, or too demanding in time and material, it is typical to use estimates made by experts. This can be a very fast and effective way to obtain the required values, but the results can also be misleading if any post-estimate analysis of the obtained data is missing.

• Insufficient number of estimates made by experts. This case is very similar to an insufficient total number of experiments or measurement repetitions: despite the intention to produce good expert estimates, the result remains uncertain if there are not enough of them.
• Inherent subjectivity of experts, which can cause unequal estimation of values and is a big problem if it is not sufficiently reduced.
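To illustrate why the number of repetitions matters, the following minimal Python sketch (with hypothetical values) compares the theoretical standard error of the mean for 5 and for 50 repetitions of a measurement:

```python
def standard_error(sigma, n):
    """Theoretical standard error of the mean of n repetitions
    of a measurement with standard deviation sigma."""
    return sigma / n ** 0.5

se_5 = standard_error(2.0, 5)    # ~0.894: five repetitions
se_50 = standard_error(2.0, 50)  # ~0.283: fifty repetitions
```

The standard error shrinks only with the square root of the number of repetitions, which is why a handful of estimates carries a wide uncertainty.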
In modern Safety and Security Engineering, common practice is to replace measurements and experiments with estimations made by experts. A strong argument for this approach is that it is usually not acceptable to carry out any experiment that could potentially harm the health and lives of the population; such experiments may also carry a heavy financial burden, while expert estimates are, indeed, less time consuming [4].
An expert's estimation is a special type of approximation of uncertain effects, phenomena or events. Hence, the fundamental condition for forming experts' estimations is to have experts; the necessary condition for forming respectable estimations is to have enough of them. However, in normal practice the above conditions often cannot be met, and the principles of proper estimation are therefore violated. The following estimation methods are relevant here:
• Minimum mean squared error is the best known method; it starts with a prior estimation of the probability density function and can be tested against posterior data.
• Maximum a posteriori estimation is similar to the maximum likelihood method, enriched with optimization over the posterior distribution.
• The best linear unbiased estimator can be understood through linear regression, which is its most basic case.
The methods above are used in Safety and Security Research only as statistical data processing tools, if at all. How to use these methods in combination with estimations made by experts is shown in the following text.
The expert estimation will enter these methods as an a priori estimation, i.e. an estimation made before measuring. The methods differ according to what they approximate: the value of a particular parameter (e.g. the barrier break time), a probability density function (e.g. the density of a particular risk), the dependence of one parameter on others (e.g. the probability of threat manifestation), or a numerical characteristic of a complex system (e.g. values obtained in risk assessment). In the case where enough data are available (at least 50 estimations made by experts, or at least 50 measurements), linear regression can find an estimation of linear behaviour if it exists. This method has a greater predictive value than forecasting based on the arithmetic mean, the median and similar one-dimensional statistical characteristics. The observed data are an a priori estimation and the prediction is an a posteriori estimation, since new data will decide whether the constructed approximation (the linear regression) describes the behaviour well enough. For example, this can find application in predicting the future behaviour of aggressors at football matches, helping to design effective countermeasures, or in avoiding repeated theft in supermarkets.
For the improvement of already existing estimations, the minimum mean square error can serve very well. As stated above, the error is the difference between the estimated value and the real value. The observed data are values of a random variable, and therefore their difference from the estimated linear regression is also a random variable with a given mean value. To calculate the mean value of the differences between the estimation and the real data, note that the difference is the (perpendicular) distance between, for example, point A and the red line, as is clear from Figure 3. According to the Pythagorean theorem, the square of this distance equals the area of the largest square that fits between A and the red line. Working with the second power of the difference is, furthermore, a good way of penalizing errors: the mean value of this error penalization is a good measure of how well the estimation fits the data [10].
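As a minimal sketch of this penalization, the following Python snippet (with hypothetical data, and using the common simplification of vertical rather than perpendicular residuals) computes the mean squared error of a candidate line:

```python
def mean_squared_error(xs, ys, beta):
    """Mean of squared (vertical) residuals of the line y = beta * x."""
    residuals = [y - beta * x for x, y in zip(xs, ys)]
    return sum(r * r for r in residuals) / len(residuals)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]          # roughly y = 2x
mse_good = mean_squared_error(xs, ys, 2.0)   # small: line fits well
mse_bad = mean_squared_error(xs, ys, 3.0)    # large: line fits poorly
```

A smaller mean squared error signals a better fit of the estimation to the data, which is exactly the criterion the method minimizes.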
For example, let there be a threat whose probability of manifestation has a uniform prior distribution with mean value x̄. Two experts were asked to estimate the probability of the threat's manifestation. The first one said y1, the second y2. From previous experience, the first expert is inaccurate with an error ε1 with (unknown distribution) mean zero and variance σ1², and the second expert is inaccurate with an error ε2 with (unknown distribution) mean zero and variance σ2².

A detailed explanation of these methods is believed to be helpful in the creation of functional and accessible models in the future. The application of the corresponding mathematical techniques to the already existing custom of using experts' estimations instead of measuring parameters will lead to the objectification of conclusions made in all areas of Safety and Security Research.
Linear regression is a conventional technique for estimating a dependent parameter when there are observed independent variables and an assumption that the relation between the observed and estimated parameters is a linear dependence. In that case, the errors are assumed to be uncorrelated. This technique is successfully used in cases where a forecast of the next value is expected and enough data are available.
Linear regression can be written in a vector form:

y = x β + ε

where y is the dependent measured (observed) parameter, x is the independent parameter (e.g. time), β is the regression coefficient which defines the effect of the estimation, and ε is the error term.
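Assuming a single independent parameter and an intercept term, the coefficients of this vector form can be estimated with ordinary least squares; a minimal Python sketch on made-up data:

```python
def linear_regression(xs, ys):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx            # slope: the regression coefficient beta
    a = mean_y - b * mean_x  # intercept
    return a, b

# Observed data roughly following y = 1 + 2x
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.1, 6.9]
a, b = linear_regression(xs, ys)
```

The fitted slope and intercept then serve as the a posteriori estimation against which new observations can be checked.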
The left-hand side graph in Figure 2 shows the solution of the linear regression (red line) on data presented as dots. This linear regression is defined by the regression coefficient β, which determines the slope of the red line, and has some tolerance (error) ε, represented by the two blue lines in the right-hand side graph of Figure 2. The accuracy of the model depends on how much of the data falls between these two lines. Of course, for a good match of the linear regression with the real dependence, represented by the green line, it is fundamental to define ε small enough. Under these conditions, the linear regression can be found, for example, with the least squares method or by the minimum mean squared error described above.

In some situations it is convenient to use maximum likelihood estimation, which maximizes the agreement between the fixed set of data x1, ..., xn and the selected model. Its application in Safety and Security Research is promising, since it works very well for data with a normal (and uniform prior) distribution. Let it now be assumed that the observations are independent, with an unknown probability density function f(xi | θ) of a certain type of distribution. The task is to find the estimator θ̂ with the highest agreement with the real parameter θ. Fixing the observed data and letting the parameter vary in the joint density function f(x1, ..., xn | θ), the likelihood function is defined as

L(θ) = f(x1, ..., xn | θ) = f(x1 | θ) · ... · f(xn | θ),

or, in logarithmic form, ℓ(θ) = log L(θ), which is more practical.
The method produces the estimation θ̂ that maximizes the likelihood. For a uniform prior estimation it coincides with two related estimators: the maximum a posteriori probability estimate and the Bayesian estimator.
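A minimal Python sketch of the maximum likelihood idea, assuming a normal model with known unit variance and hypothetical data; a crude grid search recovers the sample mean as the maximizer of the log-likelihood:

```python
import math

def log_likelihood(mu, data, sigma=1.0):
    """Log-likelihood of independent observations under a
    normal model with mean mu and known sigma."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

data = [4.8, 5.1, 5.3, 4.9, 5.4]   # hypothetical observations
# Crude grid search for the maximum likelihood estimate of mu
grid = [i / 100 for i in range(400, 600)]
mle = max(grid, key=lambda mu: log_likelihood(mu, data))
```

For the normal model the maximizer is the sample mean (here 5.1), so the grid search merely illustrates the general recipe of maximizing agreement between data and model.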
The point estimation of the unobserved parameter θ can be done by the maximum a posteriori probability estimate, which estimates the mode (the most probable value) of the posterior distribution.
For a given prior distribution P(θ) of the parameter θ, and the probability P(x1, ..., xn) of the selected data among all the data available, the maximum a posteriori estimate of θ is given by Bayes' theorem:

θ̂ = arg maxθ f(x1, ..., xn | θ) P(θ) / P(x1, ..., xn) = arg maxθ f(x1, ..., xn | θ) P(θ),

since the denominator does not depend on θ. The maximum a posteriori probability estimate has the same starting assumptions as both cases above; for completeness they are shown in Table 2.
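A minimal sketch of the maximum a posteriori idea for a threat probability, assuming a Beta prior on a Bernoulli parameter (all numbers hypothetical); with a uniform prior the MAP estimate reduces to the plain relative frequency, as noted above:

```python
def map_estimate(k, n, alpha, beta):
    """Mode of the Beta(alpha + k, beta + n - k) posterior for a
    Bernoulli probability after k occurrences in n observations."""
    return (alpha + k - 1) / (alpha + beta + n - 2)

# Uniform prior Beta(1, 1): MAP coincides with the relative frequency
p_uniform = map_estimate(k=3, n=10, alpha=1, beta=1)
# Informative prior Beta(2, 5) pulls the estimate toward low probabilities
p_informed = map_estimate(k=3, n=10, alpha=2, beta=5)
```

The informative prior acts like extra pseudo-observations, which is how prior expert knowledge enters the estimate.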
How can the probability of threat manifestation be obtained from these experts' estimations? Both estimations have some error (difference) from the real probability x, therefore

y1 = x + ε1, y2 = x + ε2.

The mean values of the estimations are the same as the mean value of the given distribution, x̄. Thus, a linear combination of both estimations gives the linear minimum mean square error estimator

x̂ = (σ2² y1 + σ1² y2) / (σ1² + σ2²),

giving the reader good information about the behaviour of the predicted probability. A similar process can be carried out with estimations in general [10].

Sometimes, models used in Safety and Security Research may seem a non-transparent composition of many parameters. Work with such a model can lead to misunderstanding of the studied system, to cumulative mistakes, etc.
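This inverse-variance combination can be sketched in a few lines of Python (all input values hypothetical):

```python
def combine_estimates(y1, var1, y2, var2):
    """Inverse-variance weighted combination of two expert estimates
    (linear minimum mean square error form for uncorrelated,
    zero-mean errors)."""
    w1 = var2 / (var1 + var2)   # less weight to the noisier expert
    w2 = var1 / (var1 + var2)
    return w1 * y1 + w2 * y2

# Hypothetical estimates of a threat probability: expert 1 is noisier
x_hat = combine_estimates(y1=0.30, var1=0.04, y2=0.50, var2=0.01)
```

The combined estimate lands closer to the estimate of the more reliable expert, which is exactly the behaviour the estimator is designed to have.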
A number of times the model can be reduced to a simpler one that still sufficiently describes the considered system.

In order to adopt proper estimations made by experts, it is crucial to understand what estimations are, how they are made, and at least the basics of the mathematical notions involved, such as the probability density function, linear regression, the commonly used minimum squared error method, and the likelihood function.
Exact processing of the data used as inputs into Safety and Security models can enhance the actual results and their benefits in decision making. Safety and Security Engineers can use this article as a baseline for the proper qualification of estimations made by experts, and also for obtaining values and their proper analysis.
Assuming the existence of a prior distribution P(θ), the posterior distribution can be calculated from Bayes' theorem. Then, from the maximum likelihood estimate arg maxθ f(x | θ), the Bayesian estimator of θ can be derived [11].
For example, the estimation of an expert's error can be calculated from the maximum likelihood function. The unobserved parameter θ is this time the expert's error; the observations x can be some testing examples. At first, the expert has to estimate some values already known to the researcher. Then, from the maximum likelihood estimation arg maxθ f(x | θ), his error parameter can be calculated. This calculation can be updated after every new estimation made by the expert, giving information about the change of the expert's judgement. Taking the testing time as continuous, the curve in Figure 4 can be observed.
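A minimal sketch of this calibration step, assuming zero-mean errors and hypothetical test items with known answers; the sample variance of the expert's errors then plays the role of the estimated error parameter:

```python
def error_variance(true_values, estimates):
    """Sample variance of an expert's errors on calibration questions
    with known answers (assumes zero-mean errors)."""
    errors = [e - t for t, e in zip(true_values, estimates)]
    return sum(err ** 2 for err in errors) / len(errors)

true_values = [0.2, 0.5, 0.7, 0.4]   # answers known to the researcher
estimates = [0.3, 0.4, 0.8, 0.4]     # the expert's answers on test items
var = error_variance(true_values, estimates)
```

Recomputing this variance after each new test item tracks how the expert's judgement changes over time, as in the curve of Figure 4.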
The prior and posterior probability distributions are tools used in Bayesian probability and statistics. The prior probability distribution allows an assumption to be made about how the values of a parameter will be distributed before the parameter is measured. This guess can then be improved after every observed value, giving the posterior probability distribution. The best agreement between the assumption and reality is delivered by maximizing the likelihood function.

Conclusion
The distrust of Safety and Security Engineers towards models is understandable, and it is seen as the major cause of their neglect. The mathematical modelling in terms of suitable methods for