Mathematics
Two Quantitative Variables
Two quantitative variables refer to a pair of numerical data sets that can be analyzed together to identify any relationships or patterns between them. This analysis often involves techniques such as scatter plots, correlation, and regression to understand how changes in one variable may be associated with changes in the other. Understanding the relationship between two quantitative variables is essential in statistical analysis.
Written by Perlego with AI-assistance
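The three techniques mentioned above can be tried out in a few lines of code. The sketch below is purely illustrative and is not taken from any of the excerpts that follow: it generates invented data, draws a scatter plot, computes the Pearson correlation, and fits a least-squares regression line, assuming Python with NumPy and Matplotlib is available.

```python
# Illustrative sketch (invented data): scatter plot, Pearson correlation,
# and a least-squares regression line for two quantitative variables.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)             # first quantitative variable
y = 2.0 * x + rng.normal(0, 2.0, size=50)   # second variable, linearly related plus noise

r = np.corrcoef(x, y)[0, 1]                 # Pearson correlation coefficient
slope, intercept = np.polyfit(x, y, deg=1)  # simple linear regression fit

plt.scatter(x, y, label="observations")
plt.plot(x, slope * x + intercept, color="red",
         label=f"fit: y = {slope:.2f}x + {intercept:.2f} (r = {r:.2f})")
plt.xlabel("variable x")
plt.ylabel("variable y")
plt.legend()
plt.show()
```

The correlation coefficient summarizes the strength and direction of the relationship, while the fitted line describes how y changes, on average, per unit change in x.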
8 Key excerpts on "Two Quantitative Variables"
- eBook - ePub
- Michel Jambu(Author)
- 1991(Publication Date)
- Academic Press(Publisher)
Chapter 4: 2-D Statistical Data Analysis
1 Introduction
In practice, many users stop their statistical investigations after having studied the variables independently from each other. However, they have used only 1-D analysis, and usually cannot put forward any explanation of causality for their data. For example, a questionnaire with two questions can be analyzed using two frequency distributions. However, studying each frequency distribution individually cannot provide any relation between the two questions. Another example is given by the study of Two Quantitative Variables, for which as many statistical characteristics or graphics as required can be built (cf. Chapter 3). They cannot help, however, to explain the relation between the two variables. The only way to approach the explanation of how one variable is related to another is to build a relation between the two variables. That is the objective of 2-D statistical data analysis, where two variables are analyzed according to the following points of view:
1. To express and highlight the relationship between two variables, in order to show the statistical dependence between them.
2. When possible, to sum up the relations by a law of variation or a statistical dependence, and to characterize them by a numerical coefficient independent of the units of measure of the variables.
These studies vary according to the type of variables involved (quantitative, categorical, chronological, logical, etc.), and are presented in what follows.
2 2-D Analysis of Two Categorical Variables
2.1 Contingency Data Sets
The way to express a relation between two categorical variables is to compute a contingency data set as follows. Let two categorical variables be denoted by V1 and V2: V1 has h forms denoted by A1, A2, …, Ah; V2 has k forms denoted by B1, …, Bk. For each couple of forms (Ai, Bj), we compute the number of observations, denoted by nij, that possess both the form Ai and the form Bj.
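As a concrete illustration of the construction described above, the minimal sketch below counts nij for two invented categorical variables; the values A1, A2, B1, B2 are placeholders, not data from the book.

```python
# Minimal sketch of a contingency data set: for two categorical variables V1 and
# V2, count n_ij, the number of observations showing form Ai of V1 together with
# form Bj of V2. The observations below are invented placeholders.
from collections import Counter

v1 = ["A1", "A2", "A1", "A2", "A1", "A1"]  # forms of V1 observed on six individuals
v2 = ["B1", "B1", "B2", "B2", "B1", "B1"]  # forms of V2 observed on the same individuals

n = Counter(zip(v1, v2))                   # n[(Ai, Bj)] = number of joint occurrences

for a in sorted(set(v1)):
    row = [n[(a, b)] for b in sorted(set(v2))]
    print(a, row)                          # one row of the contingency table
```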
- Richa Tiwari(Author)
- 2023(Publication Date)
- Society Publishing(Publisher)
• Depends on Question Types: Bias in the results depends on the type of questions included to collect quantitative data. The researcher's knowledge of the questions and the objective of the research are exceedingly important while collecting quantitative data in the data analytics process.
2.5. QUALITATIVE AND QUANTITATIVE VARIABLES
A variable is a quality or characteristic that varies, as the name implies. If the quality did not vary, it would be called a constant. Why particular characteristics vary is the main topic of concern, and the socio-behavioral sciences are interested in explaining it. The most basic type of variable is the dichotomous one, in which the quality is either present or absent. Being female is a dichotomous variable, for example, in which the units or cases are classified as being either female or not female. Similarly, another dichotomous variable is being divorced, in which cases are categorized as being either divorced or not divorced. The two categories making up a dichotomous variable can be represented or coded by any two numbers, such as 1 and 2 or 23 and 71. For example, being female may be coded as 1 and not being female as 2. Variables can be categorized into two types. The first is the qualitative, categorical, nominal, or frequency variable. Marital status is an example of a qualitative variable, which might consist of the following five categories: (i) never married; (ii) married; (iii) separated; (iv) divorced; and (v) widowed. These five categories can be represented or coded with any set of five numbers, such as 1, 2, 3, 4, and 5 or 32, 12, 15, 25, and 31. These numbers are used simply to refer to different categories. The number or frequency of cases falling within each of the categories can only be counted; hence, this variable is also called a frequency variable.
- eBook - PDF
- Trudy A. Watt, Robin H. McCleery, Tom Hart(Authors)
- 2007(Publication Date)
- Chapman and Hall/CRC(Publisher)
8 Relating One Variable to Another
Statistics have shown that mortality increases perceptibly in the military during wartime. —Robert Boynton
In the previous three chapters we have been concerned with the relationship between a single continuous variable and one or more categorical variables. Thus, in Chapter 5 we asked whether the number of spiders in a quadrat (continuous) differed depending on whether we sowed wildflower seed (categorical). In this chapter we consider what to do when we have Two Quantitative Variables that we think might be related to one another.
8.1 Correlation
The simplest question we could ask about two continuous variables is whether they vary in a related way, i.e., is there a correlation between them? For example, the concentration (ppm) of two chemicals in the blood might be measured from a random sample of 14 patients suffering to various extents from a particular disease. If a consequence of the disease is that both chemicals are affected, we should expect patients with high values of one to have high values of the other and vice versa. Table 8.1 shows the concentrations of chemical A and of chemical B in the blood of 14 such patients. The data are shown as a scatter plot in Figure 8.1a. For comparison, Figure 8.1b shows the same data but with the column for B scrambled into a random order. In the graph of the "real" relationship, you can see that generally low concentrations of A tend to be associated with low concentrations of B, giving a "bottom left to top right" look to the graph. If we break up the relationship between each patient's A and B concentrations by randomizing column B, then the pattern disappears (Figure 8.1b). How do we characterize this relationship? Bearing in mind that what we are claiming is that relatively large concentrations of A are associated with
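The contrast between Figure 8.1a and the scrambled version in Figure 8.1b can be reproduced in spirit with simulated numbers, since Table 8.1 itself is not reproduced here. The sketch below, with invented concentrations, computes the correlation for the real pairing and again after shuffling one column.

```python
# Sketch of the comparison in the excerpt: correlated measurements versus the
# same values with one column shuffled. The 14 patients' actual data (Table 8.1)
# are not available here, so correlated values are simulated instead.
import numpy as np

rng = np.random.default_rng(1)
chem_a = rng.normal(50, 10, size=14)
chem_b = chem_a * 0.8 + rng.normal(0, 4, size=14)    # related to chem_a by construction

r_real = np.corrcoef(chem_a, chem_b)[0, 1]

chem_b_shuffled = rng.permutation(chem_b)            # break the pairing, as in Figure 8.1b
r_shuffled = np.corrcoef(chem_a, chem_b_shuffled)[0, 1]

print(f"r with real pairing:  {r_real:.2f}")         # typically strong and positive
print(f"r after shuffling B:  {r_shuffled:.2f}")     # typically near zero
```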
- eBook - PDF
Quantitative Research Methods for Social Work
Making Social Work Count
- Barbra Teater, John Devaney, Donald Forrester(Authors)
- 2017(Publication Date)
- Red Globe Press(Publisher)
59% of respondents had IB holders on their caseload, with 41% of respondents having no IB holders on their caseload. Before we present the remainder of the findings from this study, we will review a sample of bivariate statistical tests in order to explore their purpose, meaning and how to interpret the results. Multivariate tests will be explored in Chapter 10.
Statistical tests: the difference between bivariate analysis and multivariate analysis
Bivariate analysis is simply looking at the relationship between two variables to examine how they are related. For example, what is the relationship between hours spent studying research methods and the results on a research methods test? Are these two variables (hours spent studying; results of research methods test) related? Does a student's test result increase/improve the more time s/he spends studying? Multivariate analysis involves looking at the relationship among more than two variables to see how they are related or influence one another. For example, considering time spent on studying, hours accessing additional learning resources, and number of homework assignments completed over the course of the semester, what predicts higher scores on a research methods test? Do any of the variables (hours spent studying; hours accessing additional learning resources; number of homework assignments completed) predict a higher test result? Bivariate and multivariate statistical tests can help us in answering these questions and help us to make sense of data and the relationship (and even influential power) between variables. We will review a sample of bivariate analyses in this chapter.
Correlation and causation
Bivariate analysis explores the extent to which two variables are related. In bivariate analysis, if two variables are related (or associated) then they are said to be correlated – that is, as one variable changes the other variable changes.
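To make the bivariate/multivariate distinction concrete, the sketch below uses invented values for the variables named in the excerpt (hours spent studying, hours accessing resources, homework completed, test score); none of the numbers come from the study being discussed.

```python
# Sketch contrasting bivariate and multivariate analysis with invented data.
import numpy as np

rng = np.random.default_rng(2)
n = 40
hours_study = rng.uniform(0, 20, n)
hours_resources = rng.uniform(0, 10, n)
homework_done = rng.integers(0, 12, n)
score = (40 + 2.0 * hours_study + 1.0 * hours_resources
         + 0.5 * homework_done + rng.normal(0, 5, n))

# Bivariate: how does one variable relate to the test score?
r = np.corrcoef(hours_study, score)[0, 1]
print(f"correlation(hours studied, score) = {r:.2f}")

# Multivariate: all three predictors considered at once via least squares.
X = np.column_stack([np.ones(n), hours_study, hours_resources, homework_done])
coefs, *_ = np.linalg.lstsq(X, score, rcond=None)
print("intercept and slopes:", np.round(coefs, 2))
```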
- James E. De Muth(Author)
- 2014(Publication Date)
- Chapman and Hall/CRC(Publisher)
13 Correlation
Both correlation and regression analysis are concerned with continuous variables. Correlation does not require an independent (or predictor) variable, which, as we will see in the next chapter, is a requirement for the regression model. With correlation, two or more variables may be compared to determine if there is a relationship and to measure the strength of that relationship. Correlation describes the degree to which two or more variables show interrelationships within a given population. The correlation may be either positive or negative. Correlation results do not explain why the relation occurs, only that such a relationship exists. Unlike linear regression (Chapter 14), covariance and correlation do not define a line, but indicate how close the data are to falling on a straight line. If all the data points are aligned in a straight diagonal, the correlation coefficient would equal +1.0 or –1.0.
Graphic Representation of Two Continuous Variables
Graphs offer an excellent way of showing relationships between continuous variables on interval or ratio scales. The easiest way to visualize this relationship graphically is by using a bivariate scatter plot. Correlation usually involves only dependent or response variables. If one or more variables are under the researcher's control (for example, varying concentrations of a solution or specific speeds for a particular instrument) then the linear regression model would be more appropriate. Traditionally, with either correlation or regression, if an independent variable exists it is plotted on the horizontal x-axis of the graph, or the abscissa. The second or dependent variable is plotted on the vertical y-axis, or the ordinate (Figure 13.1). In the correlation model, both variables are evaluated with equal import, vary at random (both referred to as dependent variables), are assumed to be from a normally distributed population, and may be assigned to either axis.
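The coefficient discussed above can be computed directly from the covariance and the two standard deviations. The sketch below is a from-first-principles illustration with invented numbers, including the perfectly aligned cases where r equals +1.0 or –1.0; the helper function pearson_r is ours, not from the book.

```python
# Covariance and the Pearson correlation coefficient from first principles,
# with invented data. Points on a straight line give r = +1.0 or -1.0.
import numpy as np

def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
    return cov / (x.std() * y.std())                 # correlation is unit-free

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(pearson_r(x, 2 * x + 1))                   # exactly 1.0: perfect positive line
print(pearson_r(x, -3 * x + 10))                 # exactly -1.0: perfect negative line
print(pearson_r(x, [2.1, 1.9, 3.5, 3.2, 4.8]))   # between -1 and 1 for scattered data
```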
- eBook - PDF
Statistics with JMP
Graphs, Descriptive Statistics and Probability
- Peter Goos, David Meintrup(Authors)
- 2015(Publication Date)
- Wiley(Publisher)
It is obvious that it is also not very useful to perform arithmetic operations with ordinal variables.
2.1.2 Quantitative variables
A variable that is measured on a quantitative scale can be expressed as a fixed number of measurement units. Examples are length, area, volume, weight, duration, number of bits per unit of time, price, income, waiting time, number of ordered goods, and so on. For quantitative variables, almost all arithmetic operations make sense. This is due to the fact that the difference between two levels of a quantitative variable can be expressed as a number of units, in contrast to differences between two levels of an ordinal variable. Within the class of quantitative variables, a distinction is made between variables that are measured on an interval scale and variables measured on a ratio scale.
2.1.2.1 Interval scale
An interval scale has no natural zero point, that is, no natural lower limit. For variables measured on an interval scale, calculating ratios is not meaningful. Well-known examples of interval variables are the time read on a clock or the temperature expressed in degrees Celsius or Fahrenheit. The difference between 2 o'clock and 4 o'clock is the same as the difference between 21:00 and 23:00, but it's not like 4 o'clock is twice as late as 2 o'clock. This is due to the fact that time read on a clock has no absolute zero. The same applies to the temperature measured in degrees Celsius: 20 °C is not four times as hot as 5 °C.
2.1.2.2 Ratio scale
A ratio scale does have an absolute zero. Therefore, for variables measured on a ratio scale, ratios can be calculated. A length of 6 cm is twice as much as a length of 3 cm, as the length scale has an absolute zero point. Analogously, an order of six products is twice as large as an order of three products. The temperature measured in Kelvin does have an absolute minimum, so that temperature is sometimes measured on a ratio scale.
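A tiny sketch of the interval-versus-ratio point above: the naive ratio of two Celsius readings is 4, but after shifting to the Kelvin scale, which has an absolute zero, the ratio is only about 1.05. The helper name celsius_to_kelvin is just an illustrative choice.

```python
# Ratios of Celsius temperatures are not meaningful because the zero point is
# arbitrary; ratios of Kelvin temperatures (absolute zero) are meaningful.
def celsius_to_kelvin(c):
    return c + 273.15

warm, cool = 20.0, 5.0
print(warm / cool)                                        # 4.0, but physically meaningless
print(celsius_to_kelvin(warm) / celsius_to_kelvin(cool))  # ~1.05: 20 °C is not 4x as hot as 5 °C
```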
- eBook - PDF
Statistics Using R
An Integrative Approach
- Sharon Lawner Weinberg, Daphna Harel, Sarah Knapp Abramowitz(Authors)
- 2020(Publication Date)
- Cambridge University Press(Publisher)
CHAPTER FIVE EXPLORING RELATIONSHIPS BETWEEN TWO VARIABLES
Up to this point, we have been examining data univariately: that is, one variable at a time. We have examined the location, spread, and shape of several variables in the NELS dataset, such as socioeconomic status, mathematics achievement, expected income at age 30, and self-concept. Interesting questions often arise, however, that involve the relationship between two variables. For example, using the NELS dataset we may be interested in knowing if self-concept relates to socio-economic status; if gender relates to science achievement in twelfth grade; if sex relates to nursery school attendance; or if math achievement in twelfth grade relates to geographical region of residence. When we ask whether one variable relates to another, we are really asking about the shape, direction, and strength of the relationship between the two variables. We also find it useful to distinguish among the nature of the variables themselves: that is, whether the two variables in question are both measured, at least, at the interval level, or are both dichotomous, or are a combination of the two. For example, when we ask about the relationship between socio-economic status and self-concept, we are asking about the relationship between two at least interval-leveled variables. When we ask whether sex relates to nursery school attendance, we are asking about the relationship between two dichotomous variables. And when we ask whether sex relates to twelfth grade science achievement, we are asking about the relationship between one dichotomous variable and one at least interval-leveled variable.
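One of the combinations the excerpt distinguishes, a dichotomous variable paired with an at least interval-leveled one, can be quantified with the point-biserial correlation, which is simply the Pearson r computed after coding the dichotomy as 0/1. The sketch below uses invented values, not the NELS dataset.

```python
# Point-biserial correlation: Pearson r between a 0/1-coded dichotomous
# variable and an interval-level variable. The data are invented.
import numpy as np

rng = np.random.default_rng(3)
group = rng.integers(0, 2, size=60)                        # dichotomous variable coded 0/1
achievement = 50 + 5 * group + rng.normal(0, 8, size=60)   # interval-level variable

r_pb = np.corrcoef(group, achievement)[0, 1]               # point-biserial correlation
print(f"point-biserial r = {r_pb:.2f}")
```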
- eBook - PDF
Philosophy of Anthropology and Sociology
A Volume in the Handbook of the Philosophy of Science Series
- Dov M. Gabbay, Paul Thagard, John Woods(Authors)
- 2011(Publication Date)
- North Holland(Publisher)
To begin with, there are technical issues that stand in the way of any straightforward connection. Probabilistic theories are typically constructed around the assumption that variables are dichotomous or at least measurable only on a so-called nominal scale — that is, that they take just two or at most a discrete set of values. By contrast, regression equations include relations among variables that are measurable on ratio or interval scales — that is, variables that take continuous values, at least within some range, like height or IQ. Although one can include dichotomous variables among the regressors in a linear regression equation, the dependent variable cannot be dichotomous, for the obvious reason that the resulting relationship will no longer be even approximately linear. While there are ways of representing relationships involving dichotomous dependent variables, models in which the dependent variable is interpretable as a probability of some outcome occurring, and techniques for estimating the parameters that characterize such relationships, these represent cases with very special features — the more general and typical case of functional relationships among quantitative variables does not fit naturally into such a framework. For example, if one regresses plant height on amount of water and obtains a non-zero regression coefficient, it is unclear how to interpret this in terms of the idea that amount of water raises the probability of plant height, and even more unclear what the motivation is for trying to provide such an interpretation, given that, if it is causally correct, the regression equation gives us the functional form and linear coefficient relating these quantities and is thus far more precise and informative than any information provided by the "raises the probability of" locution.
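The modelling contrast in this passage can be sketched as follows: a continuous outcome such as plant height can be fit with an ordinary linear regression on amount of water, whereas a dichotomous outcome is more naturally modelled through a probability, as in logistic regression. The data, parameter values, and the bare-bones gradient-ascent fitting loop below are all illustrative, not taken from the chapter.

```python
# Continuous outcome -> linear regression; dichotomous outcome -> model the
# probability (logistic regression). All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
water = rng.uniform(0, 10, size=200)

# Continuous outcome: ordinary least-squares line for height versus water.
height = 5 + 3 * water + rng.normal(0, 2, size=200)
slope, intercept = np.polyfit(water, height, deg=1)
print(f"linear fit: height ≈ {intercept:.2f} + {slope:.2f} * water")

# Dichotomous outcome (e.g. plant survived yes/no): fit a logistic model by
# gradient ascent on the Bernoulli log-likelihood (a bare-bones estimator).
survived = (rng.uniform(size=200) < 1 / (1 + np.exp(-(water - 5)))).astype(float)
b0, b1 = 0.0, 0.0
for _ in range(20000):
    p = 1 / (1 + np.exp(-(b0 + b1 * water)))
    b0 += 0.01 * np.mean(survived - p)
    b1 += 0.01 * np.mean((survived - p) * water)
print(f"logistic fit: log-odds(survived) ≈ {b0:.2f} + {b1:.2f} * water")
```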
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.