
There were simulations in which elastic-net performed considerably better than ridge, and elastic-net also outperformed lasso in a subset of the simulations. Again, this is not an unexpected result since, as described above, the performance of lasso is compromised when the number of covariates in the true model is larger than the sample size, whereas elastic-net can select more than n covariates [6].
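To make lasso's n-covariate limitation concrete, here is a minimal sketch using scikit-learn; the sample size, feature counts, and penalty values are illustrative choices, not the settings used in our simulations.

```python
# Sketch: lasso's n-covariate limit vs elastic-net (illustrative values only).
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n, p, p_true = 50, 500, 100          # true model larger than the sample size
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:p_true] = 1.0                   # first 100 covariates are active
y = X @ beta + rng.standard_normal(n)

lasso = Lasso(alpha=0.1, max_iter=10_000).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=10_000).fit(X, y)

# Lasso can select at most n (= 50) covariates; elastic-net is not bound by n.
print("lasso nonzero:", np.count_nonzero(lasso.coef_))
print("enet  nonzero:", np.count_nonzero(enet.coef_))
```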

In the next subsections we investigate in detail the absolute and relative predictive performance of these three penalized regression methods.

In this section, we investigate the effect of the simulation parameters on the mean squared error (MSE) of the ridge model. Figure 3 presents the response distributions for each of the five simulation parameters across their respective ranges, represented by 10 equally spaced bins.

The red horizontal line represents the median of the response distribution. Table 1 also shows highly significant interactions for number of features vs correlation, sample size vs number of features, and sample size vs correlation. Figure 4 shows the respective interaction plots. The x-axis shows the parameter ranges spanned by each of the 10 bins.


The y-axis shows the absolute performance response. The dotted line is set at zero.
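For concreteness, the binning behind these response-distribution plots can be sketched as follows; this is a minimal illustration assuming pandas, with hypothetical column names and simulated placeholder results rather than our actual output.

```python
# Sketch: summarizing a response over 10 equally spaced parameter bins,
# as in Figure 3. Assumes a results table with one row per simulation;
# the column names ("sample_size", "mse") are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
results = pd.DataFrame({
    "sample_size": rng.uniform(50, 1000, size=5000),
    "mse": rng.gamma(2.0, 1.0, size=5000),
})

edges = np.linspace(results["sample_size"].min(),
                    results["sample_size"].max(), 11)  # 10 equal-width bins
results["bin"] = pd.cut(results["sample_size"], bins=edges, include_lowest=True)

# One response distribution per bin; the per-bin medians correspond to the
# red horizontal lines in Figure 3.
print(results.groupby("bin", observed=True)["mse"].median())
```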

The values of the interaction test statistics are shown at the top of the figures. As expected, Figure 3 shows improvement in predictive performance as the sample size increases, the number of features decreases, and the amount of correlation increases (panels a, b, and e, respectively). Furthermore, the interaction plots in Figure 4 show synergistic effects of these parameters, with larger improvements in performance achieved by combinations of larger sample sizes with smaller numbers of features and larger correlation values.

Note that the strong decrease in MSE as a function of the amount of correlation among the features is expected, since ridge-regression is highly effective in ameliorating multi-collinearity problems (the purpose for which it was originally developed). The lack of influence of the saturation parameter on the predictive performance is expected for ridge-regression, whereas the marginal influence of the signal-to-noise parameter is investigated in more detail in Text S1.
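The effectiveness of ridge under multicollinearity noted above can be seen in a toy example; the sketch below uses scikit-learn with illustrative values, not our simulation settings.

```python
# Sketch: ridge shrinkage stabilizes estimates under strong collinearity.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(2)
n = 100
z = rng.standard_normal(n)
# Two nearly identical predictors (correlation close to 1).
X = np.column_stack([z + 0.01 * rng.standard_normal(n) for _ in range(2)])
y = X.sum(axis=1) + rng.standard_normal(n)  # true coefficients are (1, 1)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

# Under near-collinearity the OLS coefficients are highly unstable;
# ridge pulls both toward the stable value near 1.
print("OLS:  ", ols.coef_)
print("Ridge:", ridge.coef_)
```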

The permutation tests (Table 2) confirm these results, and show that even the much weaker group differences for the signal-to-noise parameter are statistically significant. Highly significant interaction terms included: sample size vs number of features, number of features vs saturation, number of features vs correlation, sample size vs saturation, and sample size vs correlation.

Figure 6 shows the respective interaction plots. Figure 5 shows improvement in predictive performance as the sample size increases, the number of features decreases, the saturation decreases, and the amount of correlation increases (panels a, b, c, and e, respectively). Once again, the interaction plots in Figure 6 show synergistic effects of these parameters, with larger improvements in performance achieved by combinations of larger sample sizes with smaller numbers of features, smaller saturation, and larger correlation values.

For lasso too, we observe a considerable decrease in MSE as a function of the amount of correlation in the features (although not as strong as in ridge-regression), corroborating empirical observations that, although lasso can combat multi-collinearity problems, it is not as effective as ridge-regression. On the other hand, and contrary to ridge-regression, which is insensitive to the saturation parameter, we clearly observe an improvement in MSE as a function of decreasing saturation values for the lasso. The marginal influence of the signal-to-noise parameter is again investigated in more detail in Text S1.
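The permutation tests used throughout this section test the null hypothesis of equal response distributions across parameter groups. As a minimal illustration of the mechanics, the sketch below permutes group labels and recomputes a simple statistic; the statistic shown (the range of group medians) is our own illustrative choice, not necessarily the one used for the tables.

```python
# Sketch of a permutation test for the null hypothesis of equal group
# distributions.
import numpy as np

def permutation_test(values, groups, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.unique(groups)

    def stat(g):
        # Spread of the group medians (illustrative test statistic).
        medians = np.array([np.median(values[g == k]) for k in labels])
        return medians.max() - medians.min()

    observed = stat(groups)
    perms = np.array([stat(rng.permutation(groups)) for _ in range(n_perm)])
    return observed, (perms >= observed).mean()  # statistic, p-value

# Example: two groups with shifted locations.
rng = np.random.default_rng(3)
vals = np.concatenate([rng.normal(0, 1, 200), rng.normal(0.5, 1, 200)])
grps = np.repeat([0, 1], 200)
print(permutation_test(vals, grps))
```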

Table 3 confirms these results, and shows that even the much weaker group differences for the signal-to-noise parameter are statistically significant. Table 3 also shows highly significant interactions for sample size vs number of features, number of features vs saturation, number of features vs correlation, sample size vs saturation, and sample size vs correlation. Figure 8 shows the respective interaction plots. As expected, Figure 7 shows improvement in predictive performance as the sample size increases, the number of features decreases, the saturation decreases, and the amount of correlation increases (panels a, b, c, and e, respectively).


Furthermore, the interaction plots in Figure 8 show, once again, synergistic effects of these parameters, with larger improvements in performance achieved by combinations of larger sample sizes with smaller numbers of features, smaller saturation, and larger correlation values. For elastic-net, we observe a strong decrease in MSE as a function of the amount of correlation in the features (comparable to ridge-regression), corroborating empirical observations that elastic-net can be as efficient as ridge-regression in combating multi-collinearity problems.

Furthermore, and similarly to lasso, we clearly observe an improvement in MSE as a function of decreasing saturation values for the elastic-net. The marginal influence of the signal-to-noise parameter is investigated in detail in Text S1.

In order to compare the predictive performance of ridge-regression against lasso, we defined the response as the difference in mean squared errors, MSE(lasso) − MSE(ridge). Note that positive values of the response represent the simulations where ridge-regression outperforms lasso, and vice-versa. Figure 9 presents the response distributions for each of the five simulation parameters.
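A sketch of how such a relative response can be computed for a single simulated data set follows; the use of LassoCV/RidgeCV and a single held-out split is an illustrative choice, not necessarily the exact estimation protocol of our study.

```python
# Sketch: relative performance response for one simulation, defined here as
# MSE(lasso) - MSE(ridge) on held-out data, so positive values mean ridge wins.
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, p = 100, 300
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:30] = 1.0
y = X @ beta + rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mse_lasso = mean_squared_error(y_te, LassoCV(cv=5).fit(X_tr, y_tr).predict(X_te))
mse_ridge = mean_squared_error(y_te, RidgeCV().fit(X_tr, y_tr).predict(X_te))

response = mse_lasso - mse_ridge   # > 0: ridge wins; < 0: lasso wins
print(response)
```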

Permutation tests of the null hypothesis of equality of group distributions, presented in Table 4, confirm that the group differences are highly significant for the number of features, correlation, saturation, and sample size parameters (note the high values of the observed test statistics), but non-significant for signal-to-noise.



Table 4 also shows highly significant interactions for number of features vs saturation, sample size vs saturation, and sample size vs number of features. Figure 10 shows the respective interaction plots. The y-axis shows the relative performance response. Comparison of ridge-regression against lasso corroborates two well-known results, namely: (i) lasso outperforms ridge when the true model is sparse, whereas the converse holds true for saturated models (Figure 9c); and (ii) for highly correlated features, ridge tends to dominate lasso in terms of predictive performance (Figure 9e).

More interestingly, our simulations also detected a couple of less appreciated patterns. First, Figure 9a shows that the average advantage of ridge-regression over lasso tends to increase as the sample size gets larger. Nonetheless, the interaction plots in Figures 10b and 10c show that this advantage is larger for moderately to highly saturated models, whereas lasso tends to outperform ridge-regression when the sample size is large but the number of features and saturation are small.

Second, Figure 9b shows an interesting pattern for the number of features, where the advantage of ridge-regression over lasso tends to increase at first and then decreases as the number of covariates increases further. Figure 10a provides an explanation for this curious trend. Lasso is clearly better for small numbers of features if the saturation is also small, but ridge is better if the saturation is moderate to large.


The advantage of ridge decreases with the number of features for moderate or large saturation. With small saturation, increasing the number of features makes ridge more competitive, since at some point the number of covariates entering the true model exceeds the sample size, compromising the performance of lasso (which, as noted above, can select at most n covariates).

For the comparison of ridge-regression against elastic-net we defined the response as MSE(elastic-net) − MSE(ridge), so that positive values show the simulations where ridge-regression outperforms elastic-net, and vice-versa.

Figure 11 shows clear shifts in location and spread of the boxplots for saturation, number of features, sample size, and correlation, but a fairly constant distribution for the signal-to-noise parameter. Overall, we see that the predictive performances of ridge and elastic-net tend to be closer to each other than the performances of ridge and lasso (note the small spread of most of the boxplots, and the closeness of the boxplot medians to 0). The permutation tests (Table 5) detected highly significant differences in group distributions for saturation, number of features, sample size, and correlation, and marginally significant differences for signal-to-noise.

Highly significant interaction terms included: number of features vs saturation, sample size vs saturation, and sample size vs number of features. Figure 12 shows the respective interaction plots. Comparison of ridge-regression against elastic-net corroborates the well-known result that elastic-net tends to show much better performance than ridge-regression when the true model is sparse, while the two methods tend to be comparable for saturated models (Figure 11c).

Novel insights uncovered by our simulations include that: (i) ridge tends to outperform elastic-net when the sample size is small, but the reverse is true for larger sample sizes (Figure 11a); furthermore, the interaction plots in Figure 12 show that the better performance of elastic-net is accentuated when the sample size is large but the number of features and saturation are small; (ii) elastic-net tends to outperform ridge when the number of features is small, but both methods tend to become comparable, with ridge being slightly better, for larger numbers of features (Figure 11b).

This pattern is explained by a strong interaction between number of features and saturation (Figure 12a), which shows that elastic-net performs much better than ridge when the number of features is small and the true model is sparse; and (iii) elastic-net tends to perform slightly better than ridge when the covariates are highly correlated (Figure 11e).

For the comparison of lasso against elastic-net we defined the response as MSE(elastic-net) − MSE(lasso).

Hence, positive values of the response show the simulations where lasso outperforms elastic-net, and vice-versa. Figure 13 shows clear distribution differences for the number of features, correlation, sample size, and saturation parameters, but practically no differences for signal-to-noise. Table 6 corroborates these findings, showing a non-significant p-value for signal-to-noise, but highly significant results for all other parameters. The permutation tests also detected highly significant interactions for: number of features vs saturation, sample size vs number of features, number of features vs correlation, sample size vs saturation, and sample size vs correlation.

Comparison of lasso versus elastic-net also corroborates the well-established results that: (i) lasso and elastic-net show comparable performances when the true model is sparse, whereas elastic-net outperforms lasso when the true model is saturated; and (ii) elastic-net outperforms lasso when the covariates are highly correlated. Furthermore, our simulations generated a couple of new insights.

First, the advantage of elastic-net over lasso increases as the sample size gets larger (Figure 13a). Figures 14b, 14d, and 14e show that this advantage of elastic-net is more accentuated for smaller numbers of features (red curve in Figure 14b); for moderate to larger saturations (green and blue curves in Figure 14d); and for larger correlations (blue curve in Figure 14e). Together, these results explain the larger spread of the boxplots in Figure 13a as the sample size gets larger.

Second, Figure 13b shows that, relative to the number of features, the advantage of elastic-net tends to increase at first, but then starts to decrease as the number of features increases.


Figures 14a and 14c provide an explanation for this curious trend. Figure 14a shows that elastic-net is clearly better than lasso for smaller numbers of features if the saturation is moderate or large, but lasso becomes more competitive when the saturation is small, and the advantage of elastic-net decreases with the number of features. Figure 14c shows that, when the correlation is high, the advantage of elastic-net tends to increase rapidly at first, before it starts to decrease with increasing number of features.

In this paper we propose running simulation studies as designed experiments. We illustrate the application of DOSE in a large-scale simulation study comparing the relative performance of popular penalized regression methods as a function of sample size, number of features, model saturation, signal-to-noise ratio, and the strength of correlation between groups of covariates organized in a blocked structure.
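A compact sketch of a data generator driven by these five factors is shown below; the specific generative model (equicorrelated blocks via a shared latent factor, unit coefficients, Gaussian noise) is an assumption for illustration, not the exact specification used in our study.

```python
# Sketch of a simulator parameterized by the five design factors.
import numpy as np

def simulate(n, p, saturation, snr, rho, block_size=10, seed=0):
    """Simulate one data set; the generative model here is illustrative."""
    rng = np.random.default_rng(seed)
    n_blocks = -(-p // block_size)                 # ceiling division
    block_factors = rng.standard_normal((n, n_blocks))
    X = np.empty((n, p))
    for j in range(p):
        # Features in the same block share a latent factor => correlation rho.
        X[:, j] = (np.sqrt(rho) * block_factors[:, j // block_size]
                   + np.sqrt(1.0 - rho) * rng.standard_normal(n))
    k = max(1, int(saturation * p))                # saturation = fraction of active features
    beta = np.zeros(p)
    beta[:k] = 1.0
    signal = X @ beta
    noise_sd = signal.std() / np.sqrt(snr)         # enforce the requested SNR
    y = signal + rng.normal(0.0, noise_sd, size=n)
    return X, y

X, y = simulate(n=100, p=500, saturation=0.1, snr=2.0, rho=0.5)
```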

We restricted our simulations to the case where the number of features is larger than the number of samples, since this is the usual setting in the analysis of real genomic data. Our simulations corroborated all well-established results concerning the conditions under which ridge-regression, lasso, and elastic-net are expected to perform best, but also provided several novel insights, described in the Results section. In the present work we adopted MSE as the scoring metric since it is widely used in practice and is the metric adopted in the original papers proposing the lasso and elastic-net approaches.

Nonetheless, it is important to point out that the results presented in this paper could be metric-dependent, as alternative metrics might rank competing models differently. Interesting alternatives to the MSE metric include the concordance correlation coefficient [11], Pearson correlation, and mean absolute error. We point out, however, that an in-depth investigation of the robustness of our results with respect to alternative scoring metrics is beyond the scope of the present paper, and is left as an interesting future research project.
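As a pointer for such future work, the concordance correlation coefficient [11] is straightforward to compute; the sketch below follows Lin's standard formula.

```python
# Sketch: Lin's concordance correlation coefficient, an alternative
# scoring metric to MSE.
import numpy as np

def concordance_cc(y_true, y_pred):
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    # 2*cov / (var_t + var_p + squared mean difference)
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.9])
print(concordance_cc(y_true, y_pred))  # close to 1 for near-perfect agreement
```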

The incorporation of experimental design techniques in simulation studies can be useful in computational biology for three main reasons. For instance, suppose that a researcher is working with a pharmacogenomics data set, aiming to perform predictive modeling of drug response sensitivity. Suppose further that a comparison of ridge-regression and lasso model fits shows better predictive performance for ridge (as is often the case in pharmacogenomic data sets [12], [13]).

Next, the researcher needs to decide between ridge-regression and elastic-net. This point is important, since it is unrealistic to expect any given method to outperform its competitors across a large panel of data sets with diverse characteristics, such as different sample sizes, amounts of signal, correlation structures, etc.

A more realistic goal is to demonstrate the improved performance of a given method under specific conditions, i.e., in particular regions of the simulation parameter space.

