**Abstracts 2008**

**Identification of the Victims of a Mass Fatality Incident based on nuclear DNA evidence**

**David Cavallini, Fabio Corradi**

This paper focuses on the use of nuclear DNA Short Tandem Repeat traits for the identification of the victims of a Mass Fatality Incident. The goal of the analysis is to assess the identification probabilities for the recovered victims. Identification hypotheses are evaluated conditionally on the DNA evidence observed both on the recovered victims and on the relatives of the persons who went missing in the tragic event. After specifying a set of conditional independence assertions suitable for the problem, an inference strategy is provided, addressing some points needed to achieve computational efficiency. Alternative solutions to the problem are also illustrated for comparison purposes. Finally, the proposal is tested through the simulation of a Mass Fatality Incident and the results are compared with those of the alternative solutions considered.
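As a rough sketch of the kind of calculation such identification rests on (a single STR locus, invented allele frequencies, no mutation; not the paper's full multi-person model), a parentage likelihood ratio can be computed as:

```python
# Hypothetical population allele frequencies at one STR locus (assumed values).
freq = {"10": 0.25, "11": 0.35, "12": 0.40}

def transmission_prob(child_allele, parent_genotype):
    """P(parent transmits child_allele | parent's genotype)."""
    return sum(0.5 for a in parent_genotype if a == child_allele)

def lr_parentage(child, mother, father):
    """Likelihood ratio for H1: 'these are the child's parents' vs
    H0: 'the child is unrelated', at a single locus (no mutation)."""
    a, b = child
    if a == b:
        p_h1 = transmission_prob(a, mother) * transmission_prob(b, father)
    else:
        # heterozygous child: either allele may come from either parent
        p_h1 = (transmission_prob(a, mother) * transmission_prob(b, father)
                + transmission_prob(b, mother) * transmission_prob(a, father))
    # P(child genotype) for an unrelated individual, under Hardy-Weinberg
    p_h0 = (freq[a] ** 2) if a == b else 2 * freq[a] * freq[b]
    return p_h1 / p_h0

print(round(lr_parentage(("10", "11"), ("10", "12"), ("11", "11")), 3))  # → 2.857
```

A full analysis would multiply such ratios across loci and embed them in the joint evaluation of all victims and pedigrees.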

wp2008_02 (Economic Statistics)

**Innovative Competitiveness: A Latent Factor Approach**

**Margherita Velucchi, Alessandro Viviani**

Competitiveness is one of the most frequently cited concepts in economic studies, but its meaning and the way it can be measured are still a matter of lively debate. From a statistical point of view, competitiveness is a multidimensional and relative concept: it depends on the variables included in the analysis, on the level of disaggregation, and on the data sources. In this paper, we use a Factor Analysis approach to compare different competitiveness indices for European regions (NUTS 2). The latent variables approach allows us to identify the variables that can be used to define competitiveness and permits a simple and flexible interpretation of the most recent developments in the European economies. We devote particular attention to the role of innovation in creating a fertile context for competitiveness in international markets, and we focus on the skills of the human capital in each region. We find that our rankings are consistent with similar studies and that only some Italian regions benefit from the introduction of innovation proxies.
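As a minimal illustration of the latent-factor idea (the correlation matrix of regional indicators is invented, and this extracts only a dominant direction by power iteration, not the paper's full factor model):

```python
# Illustrative correlation matrix of three regional indicators.
R = [
    [1.0, 0.6, 0.5],
    [0.6, 1.0, 0.4],
    [0.5, 0.4, 1.0],
]

v = [1.0, 1.0, 1.0]
for _ in range(100):                       # power iteration
    w = [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = max(abs(x) for x in w)          # normalize by the largest component
    v = [x / norm for x in w]

# Direction of the dominant "competitiveness factor" (up to scaling).
print([round(x, 3) for x in v])
```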

**Comparison of Volatility Measures: a Risk Management Perspective**

**Christian T. Brownlees, Giampiero M. Gallo**

In this paper we address the issue of forecasting Value–at–Risk (VaR) using different volatility measures: realized volatility, bipower realized volatility, two scales realized volatility, the realized kernel, as well as the daily range. We propose a dynamic model with a flexible trend specification combined with a penalized maximum likelihood estimation strategy: the P-Spline Multiplicative Error Model. Exploiting ultra-high-frequency data (UHFD) volatility measures, VaR predictive ability improves considerably over a baseline GARCH model, but not over the daily range; there are relevant gains from modeling volatility trends and from using realized kernels, which are robust to dependent microstructure noise.

Published as *Journal of Financial Econometrics*, Volume 8, Issue 1, pp. 29-56, 2010.
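As an illustrative toy example of the class of models involved (the parameters and the short range series are invented, and this shows only the MEM(1,1) filtering recursion, not the paper's P-Spline trend or penalized estimation):

```python
# Toy MEM(1,1) filter: x_t = mu_t * eps_t with E[eps_t] = 1, and
# mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}.
omega, alpha, beta = 0.1, 0.2, 0.7      # illustrative parameter values
ranges = [1.0, 1.2, 0.8, 1.5]           # hypothetical daily volatility measures

mu = [omega / (1 - alpha - beta)]       # start at the unconditional mean
for x in ranges:
    mu.append(omega + alpha * x + beta * mu[-1])

print([round(m, 4) for m in mu])        # last entry is the one-step-ahead forecast
```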

wp2008_04 (Economic Statistics, Econometrics)

**Estimating the Parameters of a CES Production Function in a Regional Environmentally Extended CGE Model Framework: a RESAM Only Based GME Approach**

**Guido Ferrari, Anna Manca**

This paper deals with the problem of estimating the parameters of a Constant Elasticity of Substitution (CES) production function in the framework of a Computable General Equilibrium (CGE) model. Usually, after specifying the GE model, the computation, consisting of both calibration and parameter estimation, is carried out on the basis of a Social Accounting Matrix (SAM), in some cases supported by additional accounting information, and of information concerning production activity. Calibration is performed on the basis of the SAM and, where available, of the additional accounting information. Estimation of the parameters, namely the elasticities of substitution and the income and price elasticities of the functions used in both the production and consumption spheres, is performed by resorting to information on production, as provided by time-series or cross-section data on enterprises. A new approach to estimating these parameters is proposed here, based on the first type of macroeconomic information only, by resorting to the Generalized Maximum Entropy (GME) method, which is used to estimate the parameters of a CES production function on the basis of a Regional Environmentally Extended SAM (RESAM).

**Ranked set sampling allocation models for multiple skewed variables: an application to agricultural data**

**Chiara Bocci, Alessandra Petrucci, Emilia Rocco**

The mean of a balanced ranked set sample is more efficient than the mean of a simple random sample of equal size, and the precision of ranked set sampling may be further increased by using an unbalanced allocation when the population distribution is highly skewed. The aim of this paper is to use the data of the Italian Fifth Agricultural Census, carried out in 2000, and of the Italian Farm Structure Survey, carried out in 2003, to compare several possible allocation rules and to identify the most appropriate one when several skewed attributes of each sample unit are of interest. Our study shows that when an auxiliary variable correlated with the study variables is available and is used as the ranking variable, a multivariate extension of the univariate unequal allocation models suggested for skewed distributions by Kaur et al. (1997) may be a good choice.

Published as *Environmental and Ecological Statistics*, Volume 17, Issue 3, pp. 333-345, 2010.
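The mechanics of a balanced ranked set sample can be sketched as follows (the data are invented; the paper's unbalanced allocation rules are not shown). For set size m, one draws m sets of m units, ranks each set on the auxiliary variable, and keeps the i-th smallest unit from the i-th set:

```python
# Each unit is (auxiliary variable, study variable); values are illustrative.
sets = [
    [(5.0, 4.8), (2.0, 2.1), (9.0, 8.7)],
    [(3.0, 3.2), (7.0, 6.9), (1.0, 1.1)],
    [(6.0, 6.2), (4.0, 3.9), (8.0, 8.1)],
]

rss_sample = []
for i, s in enumerate(sets):
    ranked = sorted(s, key=lambda unit: unit[0])  # rank on the auxiliary variable
    rss_sample.append(ranked[i][1])               # keep the i-th order statistic

rss_mean = sum(rss_sample) / len(rss_sample)
print(rss_sample, round(rss_mean, 3))
```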

wp2008_06 (Demography, Economic Statistics)

**A conjecture on the evolution of household income**

**Giambattista Salinari, Gustavo De Santis**

We present a homogeneous Markovian model that mimics the evolution of household income. With only three parameters, the model generates a set of theoretical curves that closely fit actual income distributions, as observed in 19 advanced economies over the period 1967-2004. The fit is better, and theoretically more consistent, than that obtained with other models customarily used in the literature, such as log-linear or power-law models.
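The basic mechanism of a homogeneous Markov model of income classes can be sketched as follows (the three income classes and the transition probabilities are illustrative, not the authors' estimates):

```python
# Transition matrix over three income classes (low, mid, high); rows sum to 1.
P = [
    [0.70, 0.25, 0.05],
    [0.20, 0.60, 0.20],
    [0.05, 0.25, 0.70],
]

def step(dist, P):
    """One transition: new_j = sum_i dist_i * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

dist = [1.0, 0.0, 0.0]          # everyone starts in the low class
for _ in range(50):             # iterate toward the stationary distribution
    dist = step(dist, P)

print([round(p, 3) for p in dist])
```

Under homogeneity, the cross-sectional income distribution converges to the stationary distribution of the chain regardless of the starting point.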

**Individual and Contextual Correlates of Economic Difficulties in Old Age in Europe**

**Daniele Vignoli, Gustavo De Santis**

With data drawn from the second public release of the ‘Survey of Health, Aging and Retirement in Europe’ (SHARE), we scrutinize individual and contextual (regional) correlates of economic difficulties among older Europeans, aged 65 or more. A logistic multilevel regression model with a random intercept shows that the risk of being relatively poor varies considerably among the aged. Besides the individual-level covariates, which all act in the expected direction, the risk of being in economic difficulty is also markedly influenced by contextual variables: regions with faster economic development experience greater poverty alleviation.

Published as *Population Research and Policy Review*, Volume 29, Issue 4, pp. 381-501, 2010.
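The structure of a random-intercept logistic model can be illustrated as follows (all coefficients and region effects are invented, not the paper's estimates): the probability of economic difficulty for person i in region j is logistic(beta0 + beta1 * x_ij + u_j), where u_j is the region-specific intercept.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

beta0, beta1 = -1.5, 0.8                # illustrative fixed effects
for u_region in (-0.5, 0.0, 0.5):       # hypothetical region intercepts
    # x_ij = 1 stands for some individual risk factor being present
    p = logistic(beta0 + beta1 * 1.0 + u_region)
    print(round(p, 3))
```

The same individual profile thus yields different risks across regions, which is what the contextual (second-level) variance captures.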

wp2008_08 (Statistics, Statistics for experimental and technological research)

**Space prediction models: an application to agricultural data**

**Emanuela Dreassi, Alessandra Petrucci, Emilia Rocco**

Spatial prediction based on individual data has been widely used in several applications, and many advances have appeared in the literature. The aim of this paper is to discuss an application of a hierarchical Bayesian spatial prediction model at the unit level to an agricultural data set in which the geographical location of each unit is known and the response variable is a zero-inflated count. The results of our study show that, when a large amount of spatial heterogeneity is present in the data, prediction at the unit level may not be suitable.
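The zero-inflated count distribution at the core of such a model can be written down directly (parameters here are illustrative; the paper's hierarchical spatial structure is not shown): with probability pi a unit is a structural zero, otherwise the count is Poisson(lam).

```python
import math

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson pmf: P(0) = pi + (1-pi) e^-lam,
    P(k) = (1-pi) e^-lam lam^k / k! for k >= 1."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

# Illustrative parameters: 30% structural zeros, mean count 2 otherwise.
print(round(zip_pmf(0, 0.3, 2.0), 4), round(zip_pmf(2, 0.3, 2.0), 4))
```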

**A MEM-based Analysis of Volatility Spillovers in East Asian Financial Markets**

**Robert F. Engle, Giampiero M. Gallo, Margherita Velucchi**

Transmission mechanisms in financial markets reflect the degree of integration of capital markets and of real economies. As a matter of fact, volatility has components that may behave differently across quiet and turbulent periods, but appear to behave in similar ways from market to market. In this paper we suggest a Multiplicative Error Model (MEM) approach, which is suitable for directly modelling the conditional expectation of the market daily range, a good proxy for volatility. In the present context, the dynamics of the expected volatility of one market are extended to include interactions with the past daily ranges of other markets, thus building a potentially fully interdependent model. We analyze eight East Asian markets over the period 1995-2006, devoting particular attention to the treatment of the 1997-1998 turbulence. We show that for some markets there is no evidence of changes in the dynamic impacts during and outside the crisis, while for other markets the change is limited to a level shift: this suggests that the links may have been stable across subperiods.
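The interdependence term can be sketched as follows (two markets, invented coefficients and ranges; the full model is an eight-market system with crisis regime dummies): the expected range of market A picks up the lagged daily range of market B through a spillover coefficient gamma.

```python
# mu_A,t = omega + alpha * x_A,{t-1} + beta * mu_A,{t-1} + gamma * x_B,{t-1}
omega, alpha, beta, gamma = 0.05, 0.15, 0.7, 0.1   # illustrative values
x_A = [1.0, 1.3, 0.9]           # hypothetical daily ranges, market A
x_B = [1.1, 2.0, 1.2]           # hypothetical daily ranges, market B

mu = 1.0                        # assumed starting level for market A
path = []
for xa, xb in zip(x_A, x_B):
    mu = omega + alpha * xa + beta * mu + gamma * xb
    path.append(round(mu, 4))

print(path)   # market B's jump at t=2 lifts market A's expected range
```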

wp2008_10 (Statistics for experimental and technological research, Economic Statistics)

**Response surface methodology and conjoint analysis: the best preference through status-quo and optimization**

**Rossella Berni, Riccardo Rivello**

In this paper we study the conjoint analysis (CA) method together with response surface methodology (RSM); more precisely, we propose a suitably modified CA analyzed through the fundamental elements of RSM, namely statistical models and optimization theory. As regards the service/product, it can be revised considering the current situation (status quo) if it is already in production; alternatively, for a new service/product, the presented method considers only the baseline variables of the user/customer. In this spirit, changes to the standard CA are introduced in order to collect quantitative information about the current situation of the service/product and the baseline variables of the user/customer. RSM is then applied to achieve the optimal solution, which in this case corresponds to the best preference of the respondent.

Published as *Statistical Methods for the Evaluation of Educational Services and Quality of Products*, Physica-Verlag, pp. 18, 2009, ISBN 978-3-7908-23.
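The RSM optimization step can be sketched in the simplest one-factor case (the design points and preference scores are invented): fit a quadratic response surface through three equally spaced levels and take its stationary point as the predicted best preference level.

```python
# (factor level, mean stated preference) at coded levels -1, 0, +1; illustrative.
pts = [(-1.0, 3.0), (0.0, 5.0), (1.0, 4.0)]
(x0, y0), (x1, y1), (x2, y2) = pts

# For equally spaced points at -1, 0, 1, the quadratic y = a x^2 + b x + c has
# a = (y0 - 2 y1 + y2) / 2 and b = (y2 - y0) / 2 (finite differences).
a = (y0 - 2 * y1 + y2) / 2
b = (y2 - y0) / 2
x_star = -b / (2 * a)           # stationary point; a < 0 means it is a maximum

print(round(x_star, 3))
```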

**Marginal distributions of maximum-likelihood estimator when one or two components of the true parameter are on the boundary of the parameter space**

**Marco Barnabani**

When the true parameter lies on the boundary of the parameter space, it is difficult to investigate the asymptotic distribution of the maximum likelihood estimator. In some relatively simple cases it is a mixture of truncated normal distributions. In this paper we are concerned with the marginal distributions of the maximum likelihood estimator when one or two components of the true parameter are zero and may lie on the boundary of the parameter space. We find that these distributions are (mixtures of) normal or truncated normal distributions multiplied by "skew functions" which distort the symmetry of the normal; some of these are skew-normal.
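The simplest boundary case can be illustrated by simulation (this is a textbook one-parameter example, not the paper's two-component setting): for X_i ~ N(theta, 1) with the constraint theta >= 0 and true theta = 0 on the boundary, the MLE is max(0, sample mean), so asymptotically half its mass sits at exactly 0 and the rest follows a truncated normal.

```python
import random

random.seed(42)
n, reps = 25, 4000
mles = []
for _ in range(reps):
    xbar = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    mles.append(max(0.0, xbar))          # constrained MLE of theta >= 0

share_at_zero = sum(m == 0.0 for m in mles) / reps
print(round(share_at_zero, 2))           # close to 0.5
```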

**Robust Random Effect Models: a diagnostic approach based on the Forward Search**

**Bruno Bertaccini, Roberta Varriale**

This paper presents a forward robust procedure for the detection of atypical observations and the analysis of their effect on inference in random effect models. Given that observations can be outlying at different levels of the analysis, we focus on evaluating the effect of both first- and second-level outliers and, in particular, their effect on the higher-level variance, whose significance is evaluated with the Likelihood-Ratio Test (LRT). A cut-off point separating the outliers from the other observations is identified through a graphical analysis of the information collected at each step of the Forward Search; the Robust Forward LRT is the value of the classical LRT statistic at the cut-off point. Monte Carlo simulation studies show the clear superiority of our proposal: on the one hand, the probability of a type I error is lower with the proposed method and, on the other hand, the power of the proposed robust test is higher than with the classical approach.
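The Forward Search idea can be sketched in a deliberately minimal univariate form (invented data, distance to the subset mean in place of the model-based statistics monitored in the paper): start from the half of the sample closest to the median, add one observation at a time, always the one nearest the current subset mean, and monitor the entry distances; outliers enter last with a visible jump.

```python
data = [4.8, 5.1, 5.0, 4.9, 5.2, 5.05, 4.95, 12.0]   # 12.0 plays the outlier

median = sorted(data)[len(data) // 2]
subset = sorted(data, key=lambda x: abs(x - median))[: len(data) // 2]
remaining = [x for x in data if x not in subset]

entry_distances = []
while remaining:
    m = sum(subset) / len(subset)
    nxt = min(remaining, key=lambda x: abs(x - m))   # closest unit enters next
    entry_distances.append(round(abs(nxt - m), 3))
    subset.append(nxt)
    remaining.remove(nxt)

print(entry_distances)   # the outlier produces the final, much larger jump
```

The cut-off point in the paper corresponds to the step just before such a jump, and the robust test statistic is read off at that step.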

Last updated 10 January 2013.