Technology & Engineering
Single Sample T Test
A single sample t-test is a statistical method used to determine if the mean of a single sample is significantly different from a known or hypothesized population mean. It is commonly used in research and engineering to assess the significance of experimental results or to compare a sample mean to a known standard. The t-test provides a way to make inferences about population parameters based on sample data.
Written by Perlego with AI-assistance
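In practice, the whole test can be run in one call. The sketch below is a minimal, hypothetical example in Python using SciPy's ttest_1samp; the measurement values and the nominal specification are invented purely for illustration.

```python
# Minimal sketch of a single sample t-test (hypothetical data).
# Assumes SciPy is installed; scipy.stats.ttest_1samp performs the test in one call.
from scipy import stats

measurements = [49.8, 50.1, 50.3, 49.9, 50.4, 50.2, 49.7, 50.5]  # e.g. lengths in mm
nominal = 50.0                                                   # hypothesized population mean

result = stats.ttest_1samp(measurements, popmean=nominal)
print(f"t = {result.statistic:.3f}, two-tailed p = {result.pvalue:.3f}")
```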
5 Key excerpts on "Single Sample T Test"
Sensory Evaluation of Food
Statistical Methods and Procedures
- Michael O'Mahony (Author)
- 2017 (Publication Date)
- Routledge (Publisher)
t tests on the Key to Statistical Tables. It must just be borne in mind that "one sample" can mean more than one thing. Look up these three t tests on the Key to Statistical Tables to see their relationship to the other tests.
Who Is Student?
Student is the pen name of William Sealy Gosset (1876-1937), who worked for the Guinness Brewery in Dublin. He published details of his test in Biometrika in 1908 under a pen name because the brewery did not want their rivals to realize that they were using statistics.
t Distribution
The t distribution resembles the normal distribution except that it is flatter (more platykurtic) and its shape alters with the size of the sample (actually the number of degrees of freedom).
7.2 The One-Sample t Test: Computation
The one-sample t test is rarely used in sensory analysis and behavioral sciences, but it is included here for the sake of completion. In the one-sample t test we are testing whether a sample with a given mean, X̄, came from a population with a given mean μ. In other words, is the mean of the sample (X̄) significantly different from the mean of the population (μ)? t is calculated using the formula
t = (X̄ − μ) / S_X̄
where
X̄ = mean of the sample
μ = mean of the population
S_X̄ = estimate from the sample of the standard error of the mean.
As discussed in the section on z tests, the standard error is the standard deviation of the sampling distribution, a theoretical distribution of means of samples of size N, drawn from the population of mean μ. This estimate is obtained from the sample S and N using the formula S_X̄ = S / √N. Essentially, the calculation is one of seeing whether the difference between the means (X̄ − μ) is large compared to some measure of how the scores might vary by chance [i.e., how they are spread (S_X̄)]
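The formula above translates directly into a few lines of code. The following Python sketch is a hand computation with hypothetical sensory scores; the variable names mirror the symbols in the excerpt (X̄, μ, S, S_X̄).

```python
import math

scores = [7.2, 6.8, 7.5, 7.0, 6.9, 7.3, 7.1, 6.7]   # hypothetical sample of ratings
mu = 6.5                                             # population mean under test

n = len(scores)
x_bar = sum(scores) / n                                          # X̄, the sample mean
s = math.sqrt(sum((x - x_bar) ** 2 for x in scores) / (n - 1))   # S, the sample standard deviation
se = s / math.sqrt(n)                                            # S_X̄ = S / √N, standard error of the mean
t = (x_bar - mu) / se                                            # t = (X̄ − μ) / S_X̄

print(f"t = {t:.3f} on {n - 1} degrees of freedom")
```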
- K. Paul Nesselroade, Jr., Laurence G. Grimm (Authors)
- 2018 (Publication Date)
- Wiley (Publisher)
The critical value to which t_obt is compared is based on n − 1 degrees of freedom. The assumptions for the single-sample z and t tests are representativeness, independent observations, interval-scaled or ratio-scaled data, and population distributions that are normally distributed. These tests are robust to violations of normality as n increases. The standard error, sample mean, and t distribution can also be used to create a confidence interval for the actual value of an unknown population mean. Statistical significance reflects the degree of certainty that the null hypothesis is false, but it does not necessarily reflect the size of the difference between the null mean and the sample evidence. To measure effect size, Cohen's d can be calculated.
Using Microsoft® Excel and SPSS® to Run Single-Sample t Tests
Excel
General instructions for data entry into Excel can be found in Appendix C.
Data Entry
Enter all of the scores from the sample in one column of the spreadsheet. Label the column appropriately.
Data Analysis
1) Excel has built-in programs for several types of t tests; however, it does not have one for the single-sample t test. As a result, we will need to figure the components of Formula 8.3 ourselves.
2) Determine the null hypothesis (μ), and record it in an open cell (label it appropriately).
3) Determine the sample mean by using the built-in Excel function "AVERAGE." Record it in an open cell (label it appropriately).
4) Determine the estimate of the standard error (s_M) by first determining the sample standard deviation (s) using the built-in function (either STDEV or STDEV.S; both calculate the standard deviation of a sample, which is what we want) and our sample size (n). Once we have these two values, we can determine the estimate of the standard error (Formula 7.3: s_M = s / √n). Record this value in an open cell (label it appropriately).
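The quantities the Excel steps build cell by cell, plus the effect size and confidence interval mentioned above, can also be computed from summary values. The numbers below are hypothetical; SciPy is assumed to be available and is used only for the t critical value. The formula numbers (7.3 and 8.3) refer to the excerpt, not to this sketch.

```python
from scipy import stats  # assumed available; used only for the t critical value

# Hypothetical summary values, standing in for the labelled Excel cells
n, x_bar, s = 25, 52.4, 7.8   # sample size, sample mean, sample standard deviation
mu_null = 50.0                # null-hypothesis population mean (μ)

s_m = s / n ** 0.5                # estimated standard error (the excerpt's Formula 7.3)
t_obt = (x_bar - mu_null) / s_m   # observed t (the excerpt's Formula 8.3)
d = (x_bar - mu_null) / s         # Cohen's d: difference in standard-deviation units

# 95% confidence interval for the unknown population mean
t_crit = stats.t.ppf(0.975, df=n - 1)
ci_low, ci_high = x_bar - t_crit * s_m, x_bar + t_crit * s_m

print(f"t({n - 1}) = {t_obt:.2f}, d = {d:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```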
Introduction to Statistics in Human Performance
Using SPSS and R
- Dale P. Mood, James R. Morrow, Jr., Matthew B. McQueen (Authors)
- 2019 (Publication Date)
- Routledge (Publisher)
7 Two-Sample t-test
INTRODUCTION
The one-sample case we examined at the end of Chapter 6 contains many of the important concepts of inferential statistics, but it has rather limited application in human performance research and in science in general. As we saw, the primary use of the one-sample case is to compare the mean of a sample (X̄) to a hypothesized mean of a population (μ). We learned that we can compare the two means and either accept the hypothesis that they are not different, except for sampling error (this is called the null hypothesis and symbolized as H0), or reject this hypothesis. We also saw that we can even attach various levels of confidence to our decision.
We learned two ways to reach our decision. First, we learned how to construct a confidence interval around the sample mean and then to check to see whether the hypothesized population mean is located within this interval. If it is, we decide the null hypothesis is true and conclude that the difference between the two means is simply the result of sampling error. If the hypothesized population mean is not located in the interval, we reject the null hypothesis (that the means are not different) and accept the alternative hypothesis (symbolized as H1) that they are, in fact, different. Recall that the width of the confidence interval is a function of the level of confidence that is desired, the size of the sample, and the variability of the values in the population (represented by σ).
The second way to reach the same conclusion regarding the equality of the means, called hypothesis testing, is to determine where the sample mean is located on a hypothetical sampling distribution. This sampling distribution would, theoretically, be constructed by taking many, many different samples of the same size (N) from the population for which we know μ and σ, calculating the mean of each sample, and plotting them in a frequency distribution. The result would be a normal curve with a mean of μ and a standard deviation (here called the standard error of the mean, σ_X̄) equal to σ / √N. It is important to understand that the standard error of the mean is actually the standard deviation of the sampling distribution of the means that were calculated across many, many samples and then plotted. Although we never actually construct this sampling distribution, we do use our knowledge of the normal curve and the value of σ / √N to locate where our sample mean would reside in the hypothetical sampling distribution. We can create a 95% confidence interval by multiplying the value of σ_X̄
- David Howell (Author)
- 2020 (Publication Date)
- Cengage Learning EMEA (Publisher)
We will see in a moment that we often do not …
12.3 Testing a Sample Mean when σ Is Unknown (The One-Sample t Test)
The previous example was chosen deliberately from among a fairly limited number of situations in which the population standard deviation (σ) is known. In the general case, we rarely know the value of σ and usually will have to estimate it by way of the sample standard deviation (s). When we replace σ with s in the formula, however, the nature of the test changes. We can no longer declare the answer to be a z score and evaluate it with reference to tables of z. Instead we denote the answer as t and evaluate it with respect to tables of t, which are somewhat different. The reasoning behind the switch from z to t is not particularly complicated. The basic problem that requires this change to t is related to the sampling distribution of the sample variance. It's time to grit your teeth and look at a tiny amount of theory because (1) it will help you understand what you are doing and (2) it's good for your soul.
The Sampling Distribution of s²
Because the t test uses s² as an estimate of σ², it is important that we first look at the sampling distribution of s². We want to get some idea of what kinds of sample variances we can expect when we draw a sample, especially with a small sample size.
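The behaviour of s² that motivates this switch is easy to see by simulation. The short Python sketch below uses hypothetical population parameters and only the standard library: it draws many small samples, computes s² for each, and shows that although the mean of s² sits near σ², its distribution is positively skewed.

```python
import random
import statistics

# Hypothetical normal population: μ = 100, σ = 15 (so σ² = 225)
random.seed(1)
mu, sigma, n, reps = 100, 15, 5, 10_000

# Sampling distribution of s²: sample variance of many samples of size n
sample_variances = [
    statistics.variance([random.gauss(mu, sigma) for _ in range(n)])
    for _ in range(reps)
]

# The mean of s² is close to σ² (s² is unbiased), but the median falls below it,
# reflecting the positive skew of s² in small samples.
print(f"mean of s²   ≈ {statistics.mean(sample_variances):.1f}  (σ² = {sigma ** 2})")
print(f"median of s² ≈ {statistics.median(sample_variances):.1f}")
```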
- Aviva Petrie, Paul Watson (Authors)
- 2013 (Publication Date)
- Wiley-Blackwell (Publisher)
Section 11.3) may be used to analyse such data. The distinction between independent and related observations is retained in the analysis even when there are several groups.
7.3 One-sample t-test
7.3.1 Introduction
Occasionally, we may be interested in investigating whether the mean of a single group of observations takes a specific value. For example, the pigs in a particular pen on a farm are showing what appears to be a low daily live weight gain compared with the usual growth rate for this farm. We perform a test to assess whether the mean live weight gain of the pigs in this pen contradicts the hypothesis that they are growing at the expected rate for pigs on this farm.
7.3.2 Assumption
The one-sample t-test assumes that the sample data are from a Normally distributed population of values and are representative of that population (ideally being chosen by random selection). As we said earlier (see Section 7.2), the test is hardly affected if the data deviate from Normality except in extreme cases where the data are visibly non-Normal. Then we may be able to Normalize the data by an appropriate transformation (see Section 13.2.1), typically a logarithmic transformation, in which case the test statistic is calculated using the transformed data values. Naturally, we need to convert the confidence limits obtained by using the transformed data back to the original scale of measurement. Alternatively, we can use an appropriate non-parametric test such as the sign test (see Section 12.3), the Kolmogorov–Smirnov test or the runs test. We refer you to Siegel and Castellan (1988) for details.
7.3.3 Approach
We present the approach in general terms and illustrate it using the pig example in Section 7.3.4.
1. Specify the null hypothesis, H0, that the true population mean of the variable of interest is equal to a defined value, μ0. Generally, the alternative hypothesis is that the mean is not equal to the specified value and this leads to a two-tailed
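A test along the lines of the pig example can be sketched in Python as follows. The daily live weight gains and the farm's expected value are invented for illustration (the book's actual data are not reproduced here), and the two-tailed alternative matches the approach described above; SciPy is assumed to be available.

```python
from scipy import stats  # assumed available

# Hypothetical daily live weight gains (kg/day) for the pigs in the pen
gains = [0.58, 0.61, 0.55, 0.60, 0.57, 0.63, 0.59, 0.56]
mu_0 = 0.65  # H0: the true mean gain equals the expected rate for this farm

t_stat, p_value = stats.ttest_1samp(gains, popmean=mu_0)  # two-tailed by default
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
```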
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.




