# 6052 Discussion 9

Here is what the other students in my group have posted so far.

NURS6052 Discussion Week 9

Initial Post: Group B Discussion

Sampling Distributions

**Inferential Statistics:** A means of drawing conclusions about a population from sample data. Inferential statistics are based on the **laws of probability** and provide a framework for making objective judgments about the reliability of sample estimates. Inferential statistics assume random samples drawn from a given population, an assumption that is frequently violated in practice. The validity of statistical calculations depends on the extent to which sample results resemble the data one would have obtained from randomly selected individuals in the population, and even when random sampling is used, the characteristics of the sample rarely match the population characteristics exactly.

**Sampling Error:** The tendency of statistics to vary from one sample to another.

**Sampling Distribution of the Mean:** A distribution basic to inferential statistics; it is theoretical in nature because researchers do not actually draw consecutive samples from a population and plot their means.

**Standard Error of the Mean (SEM):** The standard deviation of a sampling distribution of the mean; "error" denotes that the sampling distribution contains error as an estimate of the population mean. The smaller the SEM, the more accurate the estimate of the population mean. The SEM is *SD / √N*. Example: with an SD of 100.0 and a sample of 25 students, SEM = 100.0 / *√25* = 20.0.
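The arithmetic in the worked example above can be checked directly (a minimal sketch in Python; the numbers are the ones from the text):

```python
import math

def sem(sd: float, n: int) -> float:
    """Standard error of the mean: SD / sqrt(N)."""
    return sd / math.sqrt(n)

# Worked example from the text: SD = 100.0, N = 25 students
print(sem(100.0, 25))  # → 20.0
```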

Confidence Intervals

**Parameter Estimation:** An estimate of a mean, a proportion, or a mean difference between experimental and control groups.

**Point Estimation:** A single descriptive statistic calculated to estimate a population parameter. Point estimation gives no information about the margin of error, so no inferences about the accuracy of the parameter estimate can be made from it.

**Interval Estimation:** Indicates a range of values within which a parameter lies with a specified probability.

**Confidence Interval (CI):** A range of values constructed around a sample mean, together with the probability that the interval contains the population value; a CI expresses an estimate with a certain degree of confidence of being correct. CIs may be constructed around any computed statistic, including correlation coefficients, mean differences, and proportion differences, and are relevant to clinicians who must decide whether differences are real (Polit & Beck, 2017, p. 401).
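As a sketch of how a CI around a sample mean is built from the SEM (the sample mean, SD, and N below are hypothetical; 1.96 is the z value for a 95% CI under the normal approximation):

```python
import math

def confidence_interval(mean: float, sd: float, n: int, z: float = 1.96):
    """CI around a sample mean using the normal approximation:
    mean ± z * (SD / sqrt(N)). z = 1.96 gives a 95% CI."""
    sem = sd / math.sqrt(n)
    return (mean - z * sem, mean + z * sem)

# Hypothetical sample: mean = 500, SD = 100, N = 25 → SEM = 20
low, high = confidence_interval(500.0, 100.0, 25)
print(round(low, 1), round(high, 1))  # → 460.8 539.2
```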

Null Hypothesis

**Null Hypothesis:** A statement that there is no relationship between the given variables. Theoretical sampling distributions can be used to show that a null hypothesis has a high probability of being correct. *Statistical tests* are the means by which researchers attempt to reject a null hypothesis.

The null hypothesis is written *H0: µE = µC*, where *H0* signifies the null hypothesis, *µE* the experimental-group mean, and *µC* the control-group mean. The alternative hypothesis, *HA: µE ≠ µC*, signifies that the means are not the same.

Type I and Type II Errors

**Type I Error:** A false positive conclusion, in which a researcher incorrectly rejects a null hypothesis that is in fact true. A Type I error may allow an ineffective treatment to be implemented.

**Type II Error:** A false negative conclusion, in which a false null hypothesis is incorrectly accepted as true. When a researcher concludes that there is no difference between the experimental and control groups because all subjects were affected by some stimulus, there is a high probability of a Type II error (Bengston & Moga, 2007). A Type II error might prevent an effective treatment from being implemented.

**Power Analysis:** The method used to estimate the probability of a Type II error or the sample size requirements of a study. Power analysis involves the *desired significance level* (α), *power* (1 – β), *sample size* (*N*), and estimated *effect size* (ES).
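One common way the four components fit together is the normal-approximation formula for the sample size per group in a two-group mean comparison, n = 2·((z_α/2 + z_β)/d)². A minimal sketch (the default z values assume α = .05 two-tailed and power = .80; they are standard table values, not from the text):

```python
import math

def n_per_group(effect_size: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate sample size per group for comparing two means:
    n = 2 * ((z_alpha + z_beta) / d)^2, rounded up.
    Defaults correspond to alpha = .05 (two-tailed) and power = .80."""
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Medium effect (Cohen's d = 0.5) needs roughly 63 participants per group
print(n_per_group(0.5))  # → 63
```

Note how the pieces trade off: a smaller effect size or a stricter α inflates the required N, which is exactly why the effect size estimate matters.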

**Effect Size:** Conveys vital information about the magnitude of effects in a study. Effect size often supplements *p* values and CIs.

**Multiple Comparison Procedures (Post hoc Tests):** When the ANOVA null hypothesis is rejected, post hoc tests isolate which group means differ.

**Kruskal-Wallis Test:** A generalization of the Mann-Whitney *U* test that assigns ranks to the scores of the various groups; used when there are more than two groups and a nonparametric one-way test for independent samples is desired.

**Friedman Test:** A nonparametric test of differences among several related samples; an alternative to repeated-measures analysis of variance, used when the same parameter is measured under different conditions on the same subjects (Schoonjans, 2017).

**Cohen’s d:** An index of effect size used when summarizing mean-difference effects between groups.

References

Bengston, W. F., & Moga, M. (2007). Resonance, placebo effects, and Type II errors: Some implications from healing research for experimental methods. Retrieved from https://eds-a-ebscohost-com.ezp.waldenulibrary.org/eds/pdfviewer/pdfviewer?vid=2&[email protected]

Polit, D. F., & Beck, C. T. (2017). *Nursing research: Generating and assessing evidence for nursing practice* (10th ed.). Philadelphia: Wolters Kluwer.

Schoonjans, F. (2017, January 21). Friedman test. Retrieved from https://www.medcalc.org/manual/friedman_test.php

**RE: Group B Discussion**


Initial post: Chapter 17.

When studying a population, sometimes it is not feasible to use every person as a participant in the study, so researchers take a sample.

**Sample of a population** = an approximation of the actual data of interest, not the exact data. “Sampling error” denotes the fluctuation of sample statistics around the actual population mean. Several sample means can be plotted on a graph.
**Standard deviation** = accuracy. In a normal distribution, 68% of plotted values fall between −1 and +1 standard deviations of the mean. The smaller the error, the greater the accuracy. **Standard Error of the Mean (SEM)** = SD/√N
**Statistical inferences consist of two parts:** estimation of parameters and hypothesis testing.

Confidence intervals (CIs) are usually given at 95% or 99%, reflecting how certain the researcher is.

A 95% CI means there is 95% confidence that the population mean lies within the interval. CIs around risk indexes involve “binomial distributions,” or basically counts of positives versus negatives (yes’s versus no’s). This type of CI is used in health research, for example when estimating how many people may contract a certain disease: each patient is either positive or negative.

A **statistical hypothesis test** offers an unbiased criterion for deciding whether a hypothesis is supported by the data. Researchers make objective decisions about whether study results are likely to reflect chance sample differences or true population differences. **Null Hypothesis:** no connection between the variables. Researchers seek to reject the null hypothesis using a **statistical test**. The null hypothesis H₀: µE = µC (mean experimental = mean control) states that, in essence, the means are equal. In the alternative hypothesis (Hₐ), the means are not equal: Hₐ: µE ≠ µC. **Type I and Type II Errors:** how probable it is that the results are due to chance. In testing, there is always a degree of error in judging whether the results are true or false based on the sample data. A Type I error rejects a null hypothesis that is, in fact, true; a Type II error is a false negative conclusion. **Level of Significance:** attempts to reduce Type I errors by setting the probability of incorrectly rejecting a true null hypothesis. Alpha (α) is typically 0.05 or 0.01, meaning that out of 100 samples a true null hypothesis would be rejected five times (or once). **Critical regions:** indicate whether the null hypothesis is unlikely, based on the results. **Statistical tests** test the hypothesis and evaluate the believability of the findings; they include one-tailed and two-tailed, parametric and nonparametric, and between-subjects and within-subjects tests.

Testing Differences Between Two Group Means: Two classes of statistical tests: Parametric and nonparametric tests.

**Parametric tests:** Involve the estimation of a parameter, require measurements on an interval scale, and rest on several assumptions, such as that variables are normally distributed in the population. **Examples:** t-test, one-sample t-test, and paired t-test. The **t-test** is the most common and analyzes the difference between two means; it is used with two independent groups or when samples are dependent. The t-test formula uses group means, variability, and sample size.
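The pooled-variance t statistic described above can be computed from the group means, variability, and sample sizes directly (a minimal sketch; the two score lists are hypothetical):

```python
import math
from statistics import mean, variance

def t_independent(x, y):
    """Pooled-variance t statistic for two independent groups,
    with degrees of freedom n1 + n2 - 2."""
    n1, n2 = len(x), len(y)
    # Pooled variance weights each group's sample variance by its df
    sp2 = ((n1 - 1) * variance(x) + (n2 - 1) * variance(y)) / (n1 + n2 - 2)
    t = (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical scores for experimental and control groups
exp = [78, 84, 81, 90, 87]
ctl = [72, 75, 80, 70, 74]
t, df = t_independent(exp, ctl)
print(round(t, 2), df)  # → 3.62 8
```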

**Degrees of freedom** are also calculated; these refer to the number of observations free to vary about a parameter. Things to consider: the **Bonferroni correction**, an adjustment made to establish a more conservative alpha level when multiple statistical tests are run on the same data set, and **confidence intervals**, a range of values within which a population parameter is estimated to lie at a specified probability. **One-sample t-test:** compares the mean value of a single group to a hypothesized value. **Paired t-test:** used for dependent groups.

**Nonparametric tests:** Do not estimate parameters; data are nominal or ordinal, and normal distribution cannot be assumed. **Examples:** Mann-Whitney U test and Wilcoxon signed-rank test. The **Mann-Whitney U test** measures the difference between two independent groups based on ranked scores. The **Wilcoxon signed-rank test** compares two paired groups based on the relative ranking of values between the pairs.

Testing Mean Differences with Three or More Groups: **Analysis of variance** (ANOVA) is the parametric procedure for testing differences between means when there are three or more groups (Polit & Beck, 2017). A **one-way analysis of variance** (ANOVA) compares two or more independent groups or conditions to investigate differences between groups on a continuous variable. The statistic computed in ANOVA is the **F-ratio**, which compares the variance between the groups to the variance within the groups: the ratio of between-groups variability (numerator) to within-groups variability (denominator). The larger the F-ratio, the more certain we are that there is a difference between the groups. For example, Ghazavi et al. (2016) investigated the effect of a cognitive-behavioral stress management program on psychosomatic patients’ quality of life. Participants were assigned to control and experimental groups using a random allocation method (odd and even numbers). The intervention for the experimental group included eight sessions of a 90-minute weekly program; the control group received no intervention. Repeated-measures ANOVA showed that, in the experimental group, the mean QOL score across three stages (before, immediately after, and 1 month after the intervention) had a statistically significant increase, while in the control group it had a statistically significant decrease over the same stages. The F-ratio for the experimental group was greater than the F-ratio for the control group, and with this larger F-ratio the researchers concluded that the cognitive-behavioral stress management program had a notable effect on quality of life.
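The F-ratio construction above (between-groups variability over within-groups variability) can be sketched directly; the three small groups below are hypothetical:

```python
from statistics import mean

def f_ratio(*groups):
    """One-way ANOVA F ratio: between-groups mean square / within-groups mean square."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    k, n = len(groups), len(all_scores)
    # SSB: each group's mean deviation from the grand mean, weighted by group size
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # SSW: score deviations from their own group mean
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    msb = ssb / (k - 1)   # between-groups variance (numerator)
    msw = ssw / (n - k)   # within-groups variance (denominator)
    return msb / msw

# Three hypothetical groups with clearly separated means → large F
print(f_ratio([4, 5, 6], [7, 8, 9], [10, 11, 12]))  # → 27.0
```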

Testing Differences in Proportions: The **chi-square test** (χ²) is used to determine whether there is a significant relationship between two variables. It is calculated by comparing the observed data (values observed in the data) with the data we would expect to see (values found if there were no relationship between the variables) under a given hypothesis. **Fisher’s exact test** is used with small samples to examine whether the proportion of one variable differs depending on the value of the other variable. **McNemar’s test** compares two paired groups of nominal data for changes in proportions, such as with a dichotomous variable.
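The observed-versus-expected comparison behind the chi-square statistic can be sketched for a 2×2 table (the counts below are hypothetical):

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 table: sum of (O - E)^2 / E,
    with expected counts derived from the row and column totals."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total  # count expected under no relationship
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = groups, columns = outcome yes/no
print(chi_square_2x2([[30, 20], [20, 30]]))  # → 4.0
```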

Testing Correlations: **Pearson’s r** is a correlation coefficient that measures the relationship between two variables, giving information about both the strength and the direction of the relationship. **Spearman’s rho** (rs) measures the strength of association between paired data; the paired values can increase or decrease together or move in opposite directions. **Kendall’s tau** measures the ordinal relationship between two measured variables.
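Pearson’s r is the covariance of the two variables scaled by the product of their spreads; a minimal sketch with hypothetical, perfectly linear data:

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Pearson's r: sum of cross-deviations over the product of
    the root sums of squared deviations. Ranges from -1 to +1."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear hypothetical data → r = 1.0 (strength and direction both maximal)
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # → 1.0
```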

A power analysis is used to reduce the risk of Type II errors and strengthen statistical conclusion validity by estimating in advance how big a sample is needed. There are four components in a power analysis: (1) the significance criterion (α); other things being equal, the more stringent this is, the lower the power. (2) The sample size (N); the bigger the sample, the more power. (3) The effect size (ES), an estimate of how wrong the null hypothesis is, i.e., how strong the relationship between the independent and dependent variables is in the population. (4) Power (1 – β), the probability of rejecting a false null hypothesis.

**The effect size** is the magnitude of the relationship between the research variables. The greater the relationship between the variables, the smaller the sample needed to avoid Type II errors.

**Sample Size Estimates for Testing Differences Between Two Means:** ES is usually designated as Cohen’s d: d = (µ1 – µ2) / σ. The effect size is the difference between the population means divided by the population standard deviation.
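Cohen’s d as defined above can be estimated from sample data by using the pooled sample SD in place of the population σ (a minimal sketch; the two groups are hypothetical):

```python
import math
from statistics import mean, variance

def cohens_d(x, y):
    """Cohen's d estimated from samples: difference between group means
    divided by the pooled standard deviation."""
    n1, n2 = len(x), len(y)
    pooled_sd = math.sqrt(((n1 - 1) * variance(x) + (n2 - 1) * variance(y))
                          / (n1 + n2 - 2))
    return (mean(x) - mean(y)) / pooled_sd

# Hypothetical data: the means differ by exactly one pooled SD → d = 1.0
print(round(cohens_d([10, 12, 14], [8, 10, 12]), 2))  # → 1.0
```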

**Sample Size Estimates for Other Bivariate Tests:** Alternative routes to a power analysis include estimating eta-squared from the ANOVA results. Eta-squared is the sum of squares between groups (SSB) divided by the total sum of squares (SST) and directly gives an effect size. The terms small, medium, and large can be used when eta-squared cannot be estimated.

**Effect Size in Completed Studies:** Power analysis concepts can be applied after a study to determine the ES. This can be helpful for meta-analysis and may reveal statistically relevant findings hidden in large samples.

**Critiquing Inferential Statistical Analyses:** Important questions to ask when critiquing an analysis include the following: Does the report present the results of all statistical tests, and were enough tests provided? Did the researcher examine internal validity? Did the researcher use the right statistical test and provide a rationale for it? Were results presented clearly and concisely, and were tables used for large amounts of statistical information?

References

Ghazavi, Z., Rahimi, E., Yazdani, M., & Afshar, H. (2016). Effect of cognitive behavioral stress management program on psychosomatic patients’ quality of life. Iranian Journal of Nursing and Midwifery Research, 21(5), 510–515. http://doi.org/10.4103/1735-9066.193415

Polit, D. F., & Beck, C. T. (2017). *Nursing research: Generating and assessing evidence for nursing practice* (Laureate Education, Inc., 10th ed.). Philadelphia, PA: Lippincott Williams & Wilkins.
